
According to Cointelegraph, OpenAI has announced the creation of a “Preparedness Framework,” a measure aimed at protecting against “catastrophic risks” associated with advanced AI systems. The framework involves creating a dedicated team to assess potential risks.


In a blog post on December 18, OpenAI revealed that its newly formed “Preparedness Team” would serve as a bridge connecting safety and policy teams across the organization. The team's responsibilities include providing a system of checks and balances to guard against potential dangers posed by increasingly sophisticated AI models.



OpenAI notes that it will only launch its technology if the safety team deems it safe. Under the new framework, the Preparedness Team will review safety reports before forwarding them to company executives and the OpenAI board.


Although executives are tasked with making the final decisions, the board now has the authority to reverse safety decisions if necessary. The announcement follows OpenAI's recent organizational upheaval, including the firing and reinstatement of Sam Altman as the organization's CEO. The new board, introduced after Altman's return, includes Bret Taylor as chairman, along with Larry Summers and Adam D'Angelo. OpenAI launched ChatGPT to the public in November 2022, sparking both excitement about AI's possibilities and concern about its potential dangers.


To address these concerns, leading AI developers including OpenAI, Microsoft, Google, and Anthropic founded the Frontier Model Forum in July, a body that oversees the self-regulation of responsible AI development. Additionally, in October, US President Joe Biden issued an executive order establishing new AI safety standards for the development and deployment of high-level models, an effort involving leading AI developers such as OpenAI to ensure the safe and transparent development of AI models.
