‘Dangerous proposition’: Top scientists warn of out-of-control AI


Yoshua Bengio (L) and Max Tegmark (R) discuss the development of artificial general intelligence during a live podcast recording of CNBC’s “Beyond The Valley” in Davos, Switzerland in January 2025.

CNBC

Artificial general intelligence built like “agents” could prove dangerous as its creators might lose control of the system, two of the world’s most prominent AI scientists told CNBC.

In the latest episode of CNBC’s “Beyond The Valley” podcast released on Tuesday, Max Tegmark, a professor at the Massachusetts Institute of Technology and the President of the Future of Life Institute, and Yoshua Bengio, dubbed one of the “godfathers of AI” and a professor at the Université de Montréal, spoke about their concerns about artificial general intelligence, or AGI. The term broadly refers to AI systems that are smarter than humans.

Their fears stem from the world’s biggest firms now talking about “AI agents” or “agentic AI,” which companies claim will allow AI chatbots to act as assistants or agents and help with work and everyday life. Industry estimates vary on when AGI will come into existence.

With that concept comes the idea that AI systems could have some “agency” and thoughts of their own, according to Bengio.

“Researchers in AI have been inspired by human intelligence to build machine intelligence, and, in humans, there’s a mix of both the ability to understand the world like pure intelligence and the agentic behavior, meaning … to use your knowledge to achieve goals,” Bengio told CNBC’s “Beyond The Valley.”

“Right now, this is how we’re building AGI: we are trying to make them agents that understand a lot about the world, and then can act accordingly. But this is actually a very dangerous proposition.”

Bengio added that pursuing this approach would be like “creating a new species or a new intelligent entity on this planet” and “not knowing if they’re going to behave in ways that agree with our needs.”

“So instead, we can consider, what are the scenarios in which things go badly and they all rely on agency? In other words, it is because the AI has its own goals that we could be in trouble.”

The idea of self-preservation could also kick in as AI gets even smarter, Bengio said.

“Do we want to be in competition with entities that are smarter than us? It’s not a very reassuring gamble, right? So we have to understand how self-preservation can emerge as a goal in AI.”

AI tools the key

For MIT’s Tegmark, the key lies in so-called “tool AI”: systems created for a specific, narrowly defined purpose, which don’t need to be agents.

Tegmark said a tool AI could be a system that tells you how to cure cancer, or something that possesses “some agency” like a self-driving car “where you can prove or get some really high, really reliable guarantees that you’re still going to be able to control it.”

“I think, on an optimistic note here, we can have almost everything that we’re excited about with AI … if we simply insist on having some basic safety standards before people can sell powerful AI systems,” Tegmark said.

“They have to demonstrate that we can keep them under control. Then the industry will innovate rapidly to figure out how to do that better.”

Tegmark’s Future of Life Institute in 2023 called for a pause on the development of AI systems that can compete with human-level intelligence. While that has not happened, Tegmark said people are now talking about the topic, and it is time to take action and figure out how to put guardrails in place to control AGI.

“So at least now a lot of people are talking the talk. We have to see if we can get them to walk the walk,” Tegmark told CNBC’s “Beyond The Valley.”

“It’s clearly insane for us humans to build something way smarter than us before we figured out how to control it.”

There are several views on when AGI will arrive, partly driven by varying definitions.

OpenAI CEO Sam Altman has said his company knows how to build AGI and that it will arrive sooner than people think, though he downplayed the impact of the technology.

“My guess is we will hit AGI sooner than most people in the world think and it will matter much less,” Altman said in December.


