With AI starting to hit natural limitations imposed by the large language model approach, do we need to revisit some of the earlier AI methodologies? AI's family tree continues to grow.

“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” So said the late Emerson M Pugh, physics professor at Carnegie Mellon University in Pennsylvania. In the years since his death in 1981, researchers have striven to prove Pugh wrong and, in the current AI wave, are spending more than ever on the effort.

The endgame is, supposedly, artificial general intelligence (AGI) – machines as smart as we are – or even the ‘singularity’, when machines surpass human intelligence and become self-advancing. More realistically, researchers want better, more trustworthy AI that can finally deliver killer applications and huge revenues.

Today’s poster children are foundation or, more commonly, large language models (LLMs). They marry the transformer architecture first proposed...