The controversy around Google engineer Blake Lemoine’s belief that the company’s LaMDA AI has become sentient is gradually shifting the conversation away from distant, unhelpful Skynet scenarios toward a more pressing question: how do we interact with these systems as they become ever better at mimicking human discourse? That trend is also, once more, highlighting issues around regulation.

What makes Lemoine’s claims especially striking is that they come from a Google engineer with seven years of experience at the company.

Much is being made of Lemoine’s personal interest in spirituality; he styles himself as a priest. This may have made him more susceptible to reading sentience into LaMDA’s responses. But he does appear to have a solid understanding of how AI and pattern matching work: he helped develop “a fairness algorithm for removing bias from machine-learning systems” before joining Google’s Responsible AI team.

This raises an obvious question: if a chatbot like...