
OpenAI forms safety committee ahead of its next model release
By Karah Rucker (Anchor), Brock Koller (Senior Producer), Ian Kennedy (Video Editor)
Just weeks after OpenAI dissolved a team focused on AI safety, the company established a new committee aimed at enhancing safety and security. The company also announced on Tuesday, May 28, that it has begun training its next AI model.
In a blog post, OpenAI said the new committee will be led by CEO Sam Altman, Chair Bret Taylor, Adam D’Angelo, and Nicole Seligman.

One of the first things the Safety and Security Committee will do is evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. After that, the group will share recommendations with OpenAI’s full board, which will then decide how to move forward with those recommendations.
The committee's formation follows the departures earlier in May of OpenAI safety executive Jan Leike and company co-founder Ilya Sutskever, both of whom served on the company's Superalignment team, a group dedicated to anticipating and mitigating potential risks posed by advanced AI.
In a thread on X about his departure, Leike criticized the company, saying he “reached a breaking point” and “over the past years, safety culture and processes have taken a backseat to shiny products.”
OpenAI said the Safety and Security Committee will also work with the company’s technical and policy experts and other cybersecurity officials.
Also in Tuesday’s blog post, OpenAI confirmed it has started training its next large language model, the successor to its current GPT-4. The new model is expected to be unveiled later this year.
“While we are proud to build and release models that are industry-leading on both capabilities and safety,” OpenAI’s post said, “we welcome a robust debate at this important moment.”
[KARAH RUCKER]
OPEN A-I IS STARTING A NEW COMMITTEE – AIMED AT ENHANCING SAFETY AND SECURITY.
IN A BLOG POST TUESDAY… OPEN A-I SAYING THE NEW COMMITTEE WILL BE LED BY C-E-O SAM ALTMAN AND OTHER COMPANY EXECUTIVES.
AMONG THE COMMITTEE’S TOP PRIORITIES: EVALUATING AND FURTHER DEVELOPING OPEN A-I’S PROCESSES AND SAFEGUARDS.
THAT’LL TAKE PLACE OVER THE NEXT 90 DAYS.
AFTER THAT, IT’LL SHARE RECOMMENDATIONS WITH OPEN A-I’S FULL BOARD… WHICH WILL THEN DECIDE HOW TO MOVE FORWARD.
THIS COMES JUST WEEKS AFTER OPEN A-I GOT RID OF ITS “SUPERALIGNMENT” TEAM FOCUSED ON A-I SAFETY AND FOLLOWS THE EXITS EARLIER THIS MONTH OF OPEN A-I SAFETY EXECUTIVE JAN (YAHN) LEIKE (LIE-kuh) AND COMPANY CO-FOUNDER ILYA (ILL-EE-YAH) SUTSKEVER (SOOT-SKEH-VER).
IN A THREAD ON X ABOUT HIS DEPARTURE — LEIKE (LIE-kuh) CRITICIZED THE COMPANY, SAYING, “OVER THE PAST YEARS, SAFETY CULTURE AND PROCESSES HAVE TAKEN A BACKSEAT TO SHINY PRODUCTS.”
ALSO IN ITS BLOG POST — OPEN A-I CONFIRMED IT HAS STARTED TRAINING ITS NEXT LARGE LANGUAGE MODEL, WHICH WILL BE THE SUCCESSOR TO ITS CURRENT “G-P-T 4” TECHNOLOGY — THE MODEL THAT DRIVES THE COMPANY’S CHATBOT, CHAT-G-P-T.
YOU CAN FIND OUR LATEST STORIES ON AI BY DOWNLOADING THE STRAIGHT ARROW NEWS APP TO YOUR MOBILE DEVICE.