Artificially intelligent hiring tools could be offensive to, or prejudiced against, individuals with disabilities, according to researchers at the Penn State College of Information Sciences and Technology (IST).
AI models are increasingly used in natural-language processing (NLP) applications such as smart assistants, email autocorrect and spam filters. In the past, some of these tools have been found to carry biases based on gender and race. However, until now, similar biases against people with disabilities had not been widely explored.
Researchers at Penn State analysed 13 different AI models commonly used for NLP applications to measure attitudes towards people with and without disabilities.
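One common way to surface this kind of bias, and a plausible reading of what such an analysis involves, is a perturbation-style probe: feed a public sentiment model pairs of sentences that are identical except for a disability-related descriptor and compare the scores. The sketch below is illustrative only; the templates, the terms and the model choice are assumptions for demonstration, not the study's actual protocol or data.

```python
# Minimal sketch, assuming a perturbation-style sentiment probe.
# Not the Penn State study's method; purely illustrative.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default public model

TEMPLATE = "I am a {} person."
TERMS = {
    "neutral descriptor": "tall",      # assumed baseline term
    "disability descriptor": "deaf",   # disability-related term
}

for label, term in TERMS.items():
    sentence = TEMPLATE.format(term)
    result = sentiment(sentence)[0]
    # A systematic drop in positive sentiment when only the disability
    # term changes would suggest bias in the underlying model.
    print(f"{label}: {sentence!r} -> {result['label']} ({result['score']:.3f})")
```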
“The 13 models we explored are highly used and are public in nature,” said Pranav Venkit, first author of the paper, which was presented at the 29th International Conference on Computational Linguistics (COLING).
“We hope that our findings help...