OpenAI Releases Guidelines for Proactive AI Risk Assessment

On Monday, OpenAI, the company behind ChatGPT, released its latest guidelines for assessing the “catastrophic risks” that artificial intelligence could pose as model development continues.

This announcement follows a recent shakeup in which CEO Sam Altman was fired by the board only to be rehired a few days later in response to worker and investor protests.

According to reports in the US media, board members had criticized Altman for favoring the rapid expansion of OpenAI while apparently sidelining concerns about the risks linked to its technology.

OpenAI acknowledged in its newly published “Preparedness Framework” on Monday that the scientific study of catastrophic risks from AI has so far been insufficient. The framework is intended to fill that gap, with a monitoring and evaluation team established in October to analyze “frontier models” whose capabilities exceed those of today’s most advanced AI software.

The team will assess each new model and assign a risk level, ranging from “low” to “critical,” across four major categories. According to the framework, only models with a “medium” or lower score will be eligible for deployment.

  • The first category assesses cybersecurity, analyzing the model’s ability to carry out large-scale cyberattacks.
  • The second category assesses the model’s potential to contribute to the creation of dangerous agents, whether chemical compounds, biological organisms (such as viruses), or nuclear weapons.
  • The third category investigates the model’s persuasive power, assessing its influence on human behavior.
  • The fourth category assesses the model’s potential for autonomy, specifically whether it can escape the control of its designers and programmers.
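To make the gating rule concrete, the following is a minimal, purely illustrative sketch of the logic described above, in which a model is eligible for deployment only if every category is scored “medium” or lower. The category names, level ordering, and function here are assumptions made for illustration, not details of OpenAI’s framework.

    # Purely illustrative sketch of the gating rule described in the article;
    # the names, ordering, and threshold are assumptions, not OpenAI's code.
    from enum import IntEnum

    class RiskLevel(IntEnum):
        LOW = 0
        MEDIUM = 1
        HIGH = 2
        CRITICAL = 3

    # The four categories discussed in the article (names are hypothetical).
    CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "autonomy")

    def eligible_for_deployment(scores):
        # A model clears the bar only if every category is "medium" or lower.
        return all(scores[c] <= RiskLevel.MEDIUM for c in CATEGORIES)

    # Example: a "high" rating in any single category blocks deployment.
    scores = {
        "cybersecurity": RiskLevel.LOW,
        "cbrn": RiskLevel.MEDIUM,
        "persuasion": RiskLevel.MEDIUM,
        "autonomy": RiskLevel.HIGH,
    }
    print(eligible_for_deployment(scores))  # prints: False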
