A new study shows that people embroiled in political discussions on social media find it difficult to identify AI bots, increasing the risk of spreading misinformation.
Social media platforms are increasingly used for political discourse. However, with the rise of AI bots, it is becoming harder to tell whether the user behind an account is human.
AI bots are automated accounts programmed to interact in a convincingly human-like manner. Researchers at the University of Notre Dame in Indiana, US, used AI bots based on large language models (LLMs) – which enable them to understand language and generate text – to engage with humans in political discussions on the social networking platform Mastodon.
These AI bots were customised with different personas that included realistic, varied personal profiles and perspectives on global politics. They were directed to offer commentary and to link global events to personal experiences. Each persona’s design was based...