Is AI a Solution Looking for a Problem?

I accept that specific machine-learning systems have value in medical X-ray and sample analysis, and in quality-control checks, where the field of application is controlled and monitored.

I do not currently see any real use for large language models. The results they produce are based on a limited amount of, possibly deliberately selected, data. There is no validation. Further training of the models, and especially the removal of incorrect data (withdrawn papers and the like), seems difficult to impossible.

We are told that AI will replace many current jobs. I have to wonder: if a job can be replaced with AI, is it really a valid job? And if it is a valid job, will the person really be made redundant, or just moved to a position checking and updating the AI model?

If you need a specific AI/machine-learning system, as in the opening paragraph, how much support is needed to update and validate the model on an ongoing basis? At what point is it better just to keep the humans?

Another interesting point, the same as with ‘self-driving’ cars, is: who is responsible?

If I have a design calculation to make, I can pick up my Engineer's Handbook (which I can reference), select the formulas, make the calculations showing my working, and then sign it off. This is my responsibility.
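As a purely illustrative example of the kind of hand-checkable, sign-off-able calculation described above, here is a standard handbook formula (tip deflection of an end-loaded cantilever, delta = F·L³ / 3EI) worked through in a few lines. All the input values are made up; the point is that every step is visible and can be checked against the handbook by a reviewer:

```python
# Illustrative sign-off-able calculation: tip deflection of a cantilever
# beam under an end load, using the standard handbook formula
# delta = F * L**3 / (3 * E * I). All input values are invented.

F = 1000.0   # end load, newtons
L = 2.0      # beam length, metres
E = 200e9    # Young's modulus (steel), pascals
I = 8.0e-6   # second moment of area, metres^4

# Working shown step by step, so a reviewer can follow it:
numerator = F * L**3       # 1000 * 8.0 = 8.0e3
denominator = 3 * E * I    # 3 * 200e9 * 8.0e-6 = 4.8e6
deflection = numerator / denominator

print(f"Tip deflection: {deflection * 1000:.2f} mm")  # ~1.67 mm
```

An LLM will happily produce a number for the same question, but without the traceable working there is nothing to sign off against.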

I can ask the same question of some AI package and receive an answer. If I use this answer, who carries the responsibility when an accident occurs? Is it me, for being foolish enough to use an AI package, or the AI package provider?

Parents
  • I have found a couple of uses for the LLM type of AI available on-line:-

    1. Asking a general question.  Google is becoming little more than an advertising engine these days.  You type in a bunch of keywords and get back a page of adverts.  But with something like Microsoft Copilot, you can ask it a question and get back an actual answer.  It will probably be right, or at least close.  You can ask follow-up questions to drill down further.
    2. Writing little software scripts.  I have got Copilot to write me command shell scripts and even scripts for IBM DOORS*.  They are usually pretty close.  If I'm precise enough with my question, it may even be right first time.

    I haven't yet used an LLM for "real" software coding.  But for odd jobs, it's a lot quicker than reading on-line manuals.

    *It's a tool for requirements recording and checking traceability.
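    To give a flavour of the "odd jobs" mentioned above, here is a made-up example of the sort of small throwaway script one might ask an LLM for (count the most common words in some text); nothing here is from the original post, it just illustrates the scale of task involved:

```python
# Hypothetical odd-job script of the kind one might ask an LLM to write:
# report the most common words in a piece of text.
from collections import Counter
import re

def top_words(text, n=5):
    """Return the n most common lowercase words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

sample = "the cat sat on the mat and the cat slept"
for word, count in top_words(sample, n=3):
    print(f"{count}  {word}")
```

    For tasks of this size, a precise one-sentence prompt usually gets something close to this on the first attempt.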

  • To back up Simon's thoughts, quite a few of my software colleagues (in fact 100% of those who have commented!) have also said they find it really useful for coding.

    I come across AI and LLMs more where other colleagues have used them to help write documents. Personally, I'm yet to be convinced there: not that the writing standard is any worse than a poor human author's, but so far what I have seen has been a long way below a good one's. It is much closer than it was only a very few years ago, though, so I suppose watch this space.

    Here's a thought, going back to my time as an electronic circuit designer: very few of us were very good at documenting our designs in such a way that 10 years (or 10 weeks!) later someone else could understand them. You have to be really diligent in documenting every single aspect of the design that may be obvious to you as the designer but not obvious at all to someone coming to it afresh. I can imagine an AI engine that, if it could analyse electronic circuits and had an LLM, could document a design much better than most designers can.

    Whether an AI engine could design a circuit better or worse than a human could, I suspect, depends on many factors. Probably mostly better? I suspect what it would really do is make us much better at being precise about our input requirements.

Children
  • I can imagine an AI engine that, if it could analyse electronic circuits and had an LLM, could document a design much better than most designers can.

    My thought was that the AI could be better at prompting for the issues that do or don't need (or have) 'documentation' of the sort "here's what I was thinking" / "I hadn't thought of that", which would help in getting all the design factors down onto the page.

    The other aspect (one I struggle with) is trying to guess the level of 'stupidity' of the future reader: I either over-explain or under-explain. Plus, the story in hindsight can be quite different once the design has been finished and proven.

    It can be worse when the initial 'design' is being proposed as no one thinks it can be done, or at least, not 'that' way! I've had a few of those - some still outstanding!

  • is trying to guess the level of 'stupidity' of the future reader.

    Including oneself! I spent 8 years responsible for the design of a product at one manufacturer, and then 23 years responsible for the design of various products at a different manufacturer. In both cases it was scary how quickly I found I'd forgotten the rationale for my own design decisions when the time came to modify or update them, and how I'd written down the stuff that was easy to deduce anyway and left out the really important bits! So yes, I agree: if only I'd had an engine to prompt me as to what should be written down.

    Also, AI debugging of human-created designs would be interesting: for example, in theory AI could be much better at FMEA, because it wouldn't get bored and hopefully wouldn't take things for granted in the way that humans tend to. That was prompted by the thought that AI documentation of a design could well identify that the design wasn't actually doing what the human intended.

    However, AI analysis of AI-generated designs would be challenging to protect against errors arising from shared (mis)understanding. The same is true of human designs, but it is potentially worse with AI.
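    To make the FMEA point concrete: the tedious part an AI wouldn't get bored of is the systematic enumeration and scoring of failure modes. A minimal sketch of the standard risk-priority-number bookkeeping (RPN = severity × occurrence × detection), with invented failure modes and scores, might look like:

```python
# Minimal FMEA bookkeeping sketch. The failure modes and 1-10 scores
# below are invented for illustration; in a real FMEA they come from
# design review.

def rpn(severity, occurrence, detection):
    """Standard FMEA risk priority number."""
    return severity * occurrence * detection

# (failure mode, severity, occurrence, detection)
failure_modes = [
    ("connector corrosion", 6, 4, 3),
    ("capacitor drift",     4, 6, 7),
    ("solder joint crack",  8, 3, 5),
]

# Rank worst-first so review effort goes where the risk is highest.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for mode, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):4d}  {mode}")
```

    The mechanical part is trivial; the value a machine might add is in not skipping rows, which is exactly where human reviewers flag and get bored.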