For AI systems to learn, they must first be trained on information that is often labelled by humans. However, most users never see how that data is labelled, raising doubts about the accuracy and bias of the labels.

Showing users that the visual data fed into a system was labelled correctly made people trust the AI more, and could pave the way for scientists to better measure the connection between labelling credibility, AI performance and trust, the Penn State University team said.

In a study, the researchers found that high-quality labelling of images led people to perceive the training data as credible and to trust the AI system more. However, when the system showed other signs of being biased, some aspects of their trust declined while others remained high.

“When we talk about trusting AI systems, we are talking about trusting the performance of AI and the AI’s ability to reflect reality and truth,” said Sundar, who also...