Yesterday evening I went to a talk given by the Sussex branch of the British Computer Society. The title of the talk was 'Calculemus (Let us calculate): What world is AI giving us?' and, although on the subject of AI, the speaker was a philosopher. So the topic was really about how we should view developments in AI and the 'products' that will result. After presenting a view on what we mean by artificial intelligence (is an electronic calculator intelligent because it can do sums that a human would find difficult?), the talk moved on to comparisons between the learning process of a child and that of a machine. Comparisons with other inventions, such as plastic, were then explored, with the argument that we should place some form of constraint on the products being developed by corporations whose main objective is to make money. The talk also raised considerable concern about the predicted development of Artificial General Intelligence, generally thought of as being equivalent to human intelligence.

Perhaps philosophers are being led down a particular route by the use of the word 'intelligence'. As soon as we hear that word we try to relate the thing in question, in this case a computer program, to human intelligence, and philosophers seem to be asking questions so big that they are impossible to answer. Perhaps philosophers like it that way, so that debate on the topic can last forever without ever coming to a resolution. Any question that depends upon an understanding of human intelligence seems doomed to failure.

Don't get me wrong: we should be concerned about the application of AI. But when engineers are faced with a big problem to solve, they usually break it down into smaller problems that are more easily solved, bearing in mind that those solutions need to be integrated so as to solve the big problem.

AI technology is currently being applied to tools that help solve a variety of problems. The most commonly cited example is the analysis of X-ray images to help identify cancerous growths, but speech recognition, fraud detection and manufacturing defect detection are other areas where AI techniques can improve the tools that we use. Generally these tools involve some form of pattern identification, where the tool is trained on examples classified by humans, as sketched below.
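To make that concrete, here is a minimal sketch of the kind of supervised pattern classification these tools rely on, written in Python with the scikit-learn library. The dataset and model choice are illustrative assumptions on my part, not anything described in the talk:

```python
# A minimal sketch of supervised pattern classification:
# a model is trained on examples that humans have already labelled,
# then used to classify examples it has not seen before.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Human-labelled examples: measurements from cell samples,
# each marked benign or malignant by pathologists.
X, y = load_breast_cancer(return_X_y=True)

# Hold back a fifth of the labelled examples to test the tool on.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train the model on the human-classified examples.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# The trained tool now classifies examples it has never seen.
predictions = model.predict(X_test)
print(f"Accuracy on held-back examples: {accuracy_score(y_test, predictions):.2f}")
```

The same label-train-predict pattern underlies the X-ray, fraud and defect examples above; only the data and the model change, and any bias in the human labels is learned right along with the patterns.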

Our history is not short of 'machines' that have changed the world and taken over activities previously carried out by humans. The printing press made the written word available to everyone, but probably put a number of scribes out of business. Machines that stamp out 'widgets' put manual workers out of a job, but freed them for more skilled and rewarding occupations. The World Wide Web probably worried publishers for a while, but made information available to anyone with an internet connection, even if it has now been taken over by big business. The point is that each of these innovations involved some different form of 'intelligence', a form that is not related to human intelligence at all.

So when it comes to tools based on AI technology, tools that perform a specific, well-bounded task, it should be within our capability to formulate a set of questions that need to be answered by the developer. We've all heard of bias in training data and the need to be able to trace the source of output presented as fact. But trying to relate these concerns to human intelligence seems a futile gesture.

When we start thinking about so-called Artificial General Intelligence, that might require us to step up a gear. My knowledge of the subject is poor, but I've not seen anything that describes how a computer program can make deductions, draw inferences or invent without specific algorithms being written by humans. If something more advanced than a tool for solving a specific problem is developed then, again, it would seem unwise to compare it to human intelligence; better to think of it as another form of intelligence, break it down into its component parts and question/regulate each part before trying to answer the big questions.

It is probably time for me to stop rambling and pass it over to you to add your thoughts and comments.

Cover photo by Rod Long on Unsplash

  • Thank you, David, for sharing 'Thoughts Around the Ethics of Artificial Intelligence'. Very interesting topic.

    The topic of the ethics of AI is indeed a worrying issue and should be given considerable attention. This is because advanced AI algorithms have the ability to generate decisions, even when trained on massive data in a short time (a human cannot). In addition, the data being used for training might itself be biased in its interpretation.

  • An interesting post and topic, and one which I think will be around for a while! Perhaps AI is like any new technology and generates lots of questions and scepticism until its worth can be proved or humans become more used to it and its capabilities.

  • Thanks for your article, which raises interesting points. I recently hosted an AI Day at Birmingham City University, where we had a diverse range of speakers providing insight into how AI is being applied in various sectors (automotive, defence, education) and the challenges around deployment and management. None of the speakers were suggesting we should stop or even curtail development in AI; however, there is some significant and rapid catching up to be done, for example in how we mark essays in the education sector, and how we determine whether military hardware/software is "safe" to deploy. It seems as though this will involve human gatekeepers to monitor decisions, at least in the short term. In the longer term, I expect the solutions to be derived from AI learning. It is an interesting philosophical discussion!

  • Not sufficiently knowledgeable on AI technology (or should that be technologies?) or their applications - the known/obvious applications, never mind the subtle (deliberately or otherwise) uses and applications - to contribute meaningfully, I'm afraid. You raise interesting points...