OpenAI has formed a Preparedness team that will assess, test, and evaluate artificial intelligence (AI) models to address their potential dangers.

Among the risks the company aims to mitigate are the technology's capacity to pose “chemical, biological, and radiological threats” and to facilitate “autonomous replication”. The team will also evaluate a model's ability to persuade and deceive humans, for example in phishing attacks or in generating malicious code.

The team will be led by Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning.

“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI said in a blog post making the announcement. “But they also pose increasingly severe risks.”

Coinciding with the team's launch, OpenAI has also issued a call to the community for risk-study ideas...