The limits of intelligence — Why AI advancement could be slowing down

Generative artificial intelligence has developed so quickly in the past two years that massive breakthroughs seemed more a question of when rather than if. But in recent weeks, Silicon Valley has become increasingly concerned that advancements are slowing.  

One early indication is the shrinking improvement between successive models released by the biggest players in the space. The Information reports OpenAI is facing a significantly smaller boost in quality for its next model, GPT-5, while Anthropic has delayed the release of its most powerful model, Opus, according to wording that was removed from its website. Even at tech giant Google, Bloomberg reports that an upcoming version of Gemini is not living up to internal expectations.

“Remember, ChatGPT came out at the end of 2022, so now it’s been close to two years,” said Dan Niles, founder of Niles Investment Management. “You had initially a huge ramp up in terms of what all these new models can do, and what’s happening now is you really trained all these models and so the performance increases are kind of leveling off.”

If progress is plateauing, it would call into question a core assumption that Silicon Valley has treated as religion: scaling laws. The idea holds that adding more computing power and more training data will keep producing better models, with no ceiling in sight. But recent developments suggest the laws may be more theory than law.

The key problem could be that AI companies are running out of data for training models, hitting what experts call the “data wall.” Instead, they’re turning to synthetic data, or AI-generated data. But that’s a band-aid solution, according to Scale AI founder Alexandr Wang.  

“AI is an industry which is garbage in, garbage out,” Wang said. “So if you feed into these models a lot of AI gobbledygook, then the models are just going to spit out more AI gobbledygook.”

But some leaders in the industry are pushing back on the idea that the rate of improvement is hitting a wall.  

“Foundation model pre-training scaling is intact and it’s continuing,” Nvidia CEO Jensen Huang said on the chipmaker’s latest earnings call. “As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale.”

OpenAI CEO Sam Altman posted on X simply, “there is no wall.” 

OpenAI and Anthropic didn’t respond to requests for comment. Google says it’s pleased with its progress on Gemini and has seen meaningful performance gains in capabilities like reasoning and coding. 

If AI acceleration is tapped out, the next phase of the race is the search for use cases: consumer applications that can be built on top of existing technology without the need for further model improvements. The development and deployment of AI agents, for example, is widely expected to be a game-changer.

“I think we’re going to live in a world where there are going to be hundreds of millions, billions of AI agents, eventually probably more AI agents than there are people in the world,” Meta CEO Mark Zuckerberg said in a recent podcast interview.  

