Times Catalog

OpenAI is launching an ‘independent’ safety board that can stop its model releases

Debra Massey
Last updated: September 17, 2024 11:43 am

The company’s Safety and Security Committee will become an ‘independent Board oversight committee.’

OpenAI has announced that its internal Safety and Security Committee will become an independent Board oversight committee with the authority to oversee, and if necessary delay, the release of its AI models over safety concerns. The change was announced in a company blog post following a 90-day review of OpenAI’s safety processes and security safeguards.

The newly structured committee, chaired by Carnegie Mellon machine learning professor Zico Kolter, also includes Quora CEO Adam D’Angelo, retired General Paul Nakasone (former Director of the National Security Agency), and Nicole Seligman (former President of Sony Entertainment). The committee will be briefed by OpenAI’s leadership on safety evaluations for major model releases and, together with the full board of directors, will hold the authority to delay a model’s release if it believes further safety measures are needed to mitigate risks. OpenAI’s broader board will also receive regular briefings on ongoing safety and security matters to ensure continued oversight.

However, while the stated goal is an independent safety board, it is fair to ask how autonomous the committee truly is: all of its current members also serve on OpenAI’s full board of directors, which raises questions about conflicts of interest and the limits of that independence. Sam Altman, OpenAI’s CEO and a former member of the committee, has stepped down from it, possibly to reduce the perceived overlap. OpenAI has not yet commented on how this independence is structured.

By forming this safety board, OpenAI appears to be taking an approach similar to Meta’s Oversight Board, which reviews and rules on the company’s major content moderation decisions. Unlike OpenAI’s committee, however, the Oversight Board’s members sit outside Meta’s board of directors, a structure intended to keep its decisions independent of the company.

The 90-day review, which ultimately led to the formation of this oversight committee, also revealed opportunities for deeper industry collaboration to advance AI security standards. OpenAI plans to expand efforts to share and explain its safety work and will seek more opportunities for independent testing and validation of its systems. This commitment to transparency and collaboration could foster greater trust among stakeholders, partners, and the public as OpenAI continues to develop increasingly powerful AI models.

In sum, OpenAI’s independent safety board signals a critical step forward in addressing the inherent risks posed by advanced AI technologies. By granting this committee the authority to delay model releases, OpenAI is reinforcing its commitment to ensuring that safety takes precedence over speed in AI innovation. The move could set a precedent for other AI companies to follow, highlighting the importance of responsible governance in the rapidly evolving field of artificial intelligence.

© 2025 Times Catalog