
You’ve probably seen the recent media reports about the dangers of Artificial Intelligence (AI). And when senior AI experts such as Geoffrey Hinton speak out, it adds credibility to the concerns. Some of us grew up watching the Terminator movies, and although those movies present an extreme scenario, the concerns get into our subconscious.

Currently, experts appear to agree that we don’t have Artificial General Intelligence (AGI); we only have narrow AI. AGI is an AI that surpasses human capabilities at any task, whereas narrow AI surpasses human capabilities only at a specific, narrow task. So where are the concerns coming from?

AI and machine learning systems use standard software programming, and either learn from examples or do what a human has programmed them to do. Until recently, this software was written entirely by humans, but generative AI programs have now been developed which can write software themselves. Generative AI programs use machine learning and learn from very large amounts of data. While it is possible for generative AI programs to write other generative AI programs, for now the original software will have been written by a human.

Why am I telling you this?

Narrow AI systems learn what humans tell them to learn, and do what humans have programmed them to do. This does not mean that the AI will not do unforeseen things, since humans make mistakes. We’ve all spent time debugging and testing code, getting to a point where we are confident that it works as planned, only for it to do something different from what we expected due to a programming error. When providing examples for machine learning, humans also have to provide guidance telling the system what to learn. This is usually in the form of labelled examples, or a mathematical formula (a loss function) which tells the system whether it is getting things right. Both labelled examples and mathematical formulae may contain errors, which can lead to undesired behaviour in the trained system.
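To make that concrete, here is a minimal sketch in Python; the data, labels, and classifier are invented purely for illustration. A simple nearest-neighbour classifier faithfully reproduces whatever labels it is given, so a single human labelling mistake directly changes the trained behaviour:

```python
# Toy illustration (hypothetical data): one mislabelled training example
# leads to undesired behaviour. A 1-nearest-neighbour classifier simply
# copies the label of the closest training example, so it reproduces
# labelling errors faithfully.

def predict(train, x):
    """Return the label of the training example nearest to x."""
    nearest_value, nearest_label = min(train, key=lambda pair: abs(pair[0] - x))
    return nearest_label

# Intended rule: numbers below 0 are "negative", the rest "non-negative".
train = [(-3, "negative"), (-1, "negative"), (1, "non-negative"), (3, "non-negative")]
print(predict(train, -0.5))  # "negative" -- works as planned

# Now introduce a single human labelling mistake at -1.
mislabelled = [(-3, "negative"), (-1, "non-negative"), (1, "non-negative"), (3, "non-negative")]
print(predict(mislabelled, -0.5))  # "non-negative" -- undesired behaviour
```

The system did exactly what it was trained to do in both cases; the fault lies in the human-supplied labels, not in the learning algorithm.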

Thus AI and machine learning systems are not the issue; humans are! In addition to humans being fallible, there is the potential that humans will instruct the AI to do bad things. As a metaphor, consider an axe: it can be used to chop firewood, or it can be used to hurt someone. Is the axe good or bad? The answer is neither; it is the human using the axe that is good or bad. Similarly, AI is not good or bad; it is what humans tell it to do that is. As Geoffrey Hinton said in his interview with the New York Times, “It is hard to see how you can prevent the bad actors from using it for bad things.”

This is the first in a series of posts from the IET AI Technical Network committee, examining AI and the concerns around it. Look out for next month’s blog in which Dr Ivan Ling will be continuing this conversation.

In the meantime, we’d love to hear your thoughts on whether AI is good or bad, so please comment!

Photo by Possessed Photography on Unsplash
