After years of non-binding agreements and ethical debates, the Biden administration has taken a concrete step toward increasing regulatory oversight of AI technologies.
The US President has announced a new executive order that aims to drive the development of “safe, secure and trustworthy” AI. The order lays out a series of safety assessments for AI systems, introduces new consumer protections, and requires that AI respect equity and civil rights.
The order will require companies developing foundation models that “pose a serious risk to national security, national economic security or national public health and safety” to notify the government of their activities and share the results of all red-team safety tests they conduct.
OpenAI’s GPT and Meta’s Llama 2 would be among the models affected by this requirement, although a senior Biden administration official told reporters in a briefing that the guidelines...