OpenAI announces new independent board oversight committee focused on safety

OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024. (Jason Redmond | AFP | Getty Images)

OpenAI on Monday said its Safety and Security Committee, which the company introduced in May as it dealt with controversy over security processes, will become an independent board oversight committee.

The group will be chaired by Zico Kolter, director of the machine learning department at Carnegie Mellon University’s school of computer science. Other members include Adam D’Angelo, an OpenAI board member and co-founder of Quora, former NSA chief and board member Paul Nakasone, and Nicole Seligman, former executive vice president at Sony.

The committee will oversee “the safety and security processes guiding OpenAI’s model deployment and development,” the company said. It recently wrapped up its 90-day review evaluating OpenAI’s processes and safeguards and then made recommendations to the board. OpenAI is releasing the group’s findings as a public blog post.

OpenAI, the Microsoft-backed startup behind ChatGPT and SearchGPT, is currently pursuing a funding round that would value the company at more than $150 billion, according to sources familiar with the situation who asked not to be named because details of the round haven’t been made public. Thrive Capital is leading the round and plans to invest $1 billion, and Tiger Global is planning to join as well. Microsoft, Nvidia and Apple are reportedly also in talks to invest.

The committee’s five key recommendations include establishing independent governance for safety and security, enhancing security measures, being transparent about OpenAI’s work, collaborating with external organizations, and unifying the company’s safety frameworks.

Last week, OpenAI released o1, a preview version of its new AI model focused on reasoning and “solving hard problems.” The company said the committee “reviewed the safety and security criteria that OpenAI used to assess OpenAI o1’s fitness for launch,” as well as safety evaluation results.

The committee will, “along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.”

While OpenAI has been in hyper-growth mode since late 2022, when it launched ChatGPT, it’s been simultaneously riddled with controversy and high-level employee departures, with some current and former employees concerned that the company is growing too quickly to operate safely.

In July, Democratic senators sent a letter to OpenAI CEO Sam Altman regarding “questions about how OpenAI is addressing emerging safety concerns.” The prior month, a group of current and former OpenAI employees published an open letter describing concerns about a lack of oversight and an absence of whistleblower protections for those who wish to speak up.

And in May, a former OpenAI board member, speaking about Altman’s temporary ouster the previous November, said Altman had given the board “inaccurate information about the small number of formal safety processes that the company did have in place” on multiple occasions.

That month, OpenAI decided to disband its team focused on the long-term risks of AI just a year after announcing the group. The team’s leaders, Ilya Sutskever and Jan Leike, announced their departures from OpenAI in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

