Neuroscience is being used to build human-like agentic AIs – and you don’t need copyrighted data to do so.

Ask ChatGPT to come up with party ideas, write a plan or analyse data and it will return an answer in seconds. Ask it follow-up questions and it will use memory and context to update its response. Ask it to use its initiative, however, and the AI suddenly becomes more cautious. It will not commit to making a decision because it has been designed to sit firmly on the fence – always acting as a servant, never as its own entity.

This is partly deliberate – a way to head off accusations of bias and the like – but it is also an inherent technical limitation of the type of large language model (LLM) that has soared to prominence in recent months. In fact, it is one of several limitations that, coupled with growing controversy over LLMs’ use of copyrighted data, are fuelling the next generation of artificial intelligence, known as agentic AI.

As its name suggests, agentic AI refers to models that have...