OpenAI last week removed Aleksander Madry, one of its top safety executives, from his role and reassigned him to a job focused on AI reasoning, sources familiar with the situation confirmed to CNBC.
Madry was head of OpenAI’s preparedness team, which was “tasked with tracking, evaluating, forecasting, and helping protect against catastrophic risks related to frontier AI models,” according to his bio.
Madry will still work on core AI safety in his new role, OpenAI told CNBC.
Madry is also director of MIT’s Center for Deployable Machine Learning and a faculty co-lead of the MIT AI Policy Forum, roles from which he is currently on leave, according to the university’s website.
The decision to reassign Madry came less than a week before a group of Democratic senators sent a letter to OpenAI CEO Sam Altman concerning “questions about how OpenAI is addressing emerging safety concerns.”
The letter, sent Monday and viewed by CNBC, also stated, “We seek additional information from OpenAI about the steps that the company is taking to meet its public commitments on safety, how the company is internally evaluating its progress on those commitments, and on the company’s identification and mitigation of cybersecurity threats.”
OpenAI did not immediately respond to a request for comment.
The lawmakers requested that OpenAI respond to a series of specific questions about its safety practices and financial commitments by August 13.
It’s all part of a summer of mounting safety concerns and controversies surrounding OpenAI, which along with Google, Microsoft, Meta and other companies is at the helm of a generative AI arms race — a market that is predicted to top $1 trillion in revenue within a decade — as companies in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors.
Earlier this month, Microsoft gave up its observer seat on OpenAI’s board, stating in a letter viewed by CNBC that it can now step aside because it’s satisfied with the construction of the startup’s board, which has been revamped in the eight months since an uprising that led to the brief ouster of CEO Sam Altman and threatened Microsoft’s massive investment in OpenAI.
But last month, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry’s rapid advancement despite a lack of oversight and an absence of whistleblower protections for those who wish to speak up.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees wrote at the time.
Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies’ conduct.
FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”
The current and former employees wrote in the June letter that AI companies have “substantial non-public information” about what their technology can do, the extent of the safety measures they’ve put in place and the risk levels that technology has for different types of harm.
“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
In May, OpenAI decided to disband its team focused on the long-term risks of AI just one year after it announced the group, a person familiar with the situation confirmed to CNBC at the time.
The person, who spoke on condition of anonymity, said some of the team’s members were being reassigned to other teams within the company.
The team was disbanded after its leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the startup in May. Leike wrote in a post on X that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”
Altman said on X at the time that he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement on X attributed to himself and Altman, asserting that the company has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X at the time. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI must become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote at the time. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”
The Information first reported about Madry’s reassignment.