OpenAI’s Sora 2 is putting safety and censorship to the test with stunningly real videos


Fresh off a $6.6 billion share sale that made OpenAI the world’s most valuable private company, the startup’s TikTok-style video app, powered by its new artificial intelligence model, Sora 2, is going viral.

Despite a gated release that requires an invite code, the video creation tool has already shot to the number three spot on Apple’s App Store and sparked a wave of deepfakes, including a viral clip of CEO Sam Altman shoplifting GPUs.

Internally, the rollout has reignited a long-running debate inside OpenAI about how to balance safety with creative freedom.

A person familiar with internal strategy at the company said leadership views strict guardrails as essential, but also worries about stifling creativity or being perceived as censoring too much.

That tension remains unresolved.

OpenAI’s culture has long favored speed, often shipping new tools ahead of rivals and letting the public adapt in real time.

One former employee, who asked not to be named to discuss internal matters, told CNBC that during their tenure, OpenAI leadership had a pattern of prioritizing fast launches. That strategy was on full display after China’s DeepSeek released a powerful model at the end of last year that was cheaper and faster to build than anything out of Silicon Valley.

OpenAI responded within weeks, debuting two new models in what was widely viewed as a defensive move to preserve its lead.

But OpenAI has a key advantage: its growing institutional muscle.

Once a scrappy research lab in San Francisco’s Mission District, the company has since become more structured, enabling it to spin up cross-functional teams more quickly and accelerate the development and deployment cycles for products like Sora.

OpenAI said Sora includes multiple layers of safeguards meant to prevent unsafe content from being generated, using prompt filtering and output moderation across video frames and audio transcripts. It bans explicit content, terrorist propaganda, and material promoting self-harm. The app also uses watermarks and bans likeness impersonation.
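A rough sketch of how such layered checks can be structured is below. The function names, keyword lists, and placeholder model call are illustrative assumptions, not OpenAI’s actual implementation; the point is simply that one filter runs on the text prompt before generation and a second pass scans the finished frames and transcript.

```python
# Illustrative sketch of a two-layer moderation pipeline.
# All names, keyword lists, and the placeholder "model call" are assumptions for
# demonstration only -- this is not OpenAI's actual implementation.
from dataclasses import dataclass

BLOCKED_TERMS = {"explicit", "terror", "self-harm"}  # stand-in keyword list


@dataclass
class Generation:
    frames: list        # stand-in for decoded video frames
    transcript: str     # audio transcript of the generated clip


def prompt_is_allowed(prompt: str) -> bool:
    """Layer 1: screen the text prompt before any video is generated."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)


def output_is_allowed(gen: Generation) -> bool:
    """Layer 2: re-check the generated frames and audio transcript."""
    frames_ok = all("unsafe" not in str(frame) for frame in gen.frames)  # placeholder frame check
    transcript_ok = not any(term in gen.transcript.lower() for term in BLOCKED_TERMS)
    return frames_ok and transcript_ok


def generate(prompt: str):
    if not prompt_is_allowed(prompt):
        return None                                    # blocked before any compute is spent
    gen = Generation(frames=["frame_0", "frame_1"],    # placeholder for the real model call
                     transcript="narration for the generated clip")
    if not output_is_allowed(gen):
        return None                                    # blocked after generation
    gen.frames = [f + " [watermarked]" for f in gen.frames]  # stand-in for watermarking
    return gen


if __name__ == "__main__":
    print(generate("a cat playing piano") is not None)  # True: passes both layers
    print(generate("an explicit scene") is not None)    # False: caught by the prompt filter
```

In a production system the keyword checks would be learned classifiers, but the split between blocking before generation and blocking after it mirrors the prompt filtering and output moderation the company describes.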

But some users have already found ways to skirt those protections.

Sora 2, the AI model powering OpenAI’s app, is a sharp improvement over the first version. The new system generates longer, more coherent clips that look strikingly real.

Multiple viral videos feature Altman after he granted permission for his likeness to be used on the platform, while others depict popular cartoon characters like Pikachu and SpongeBob SquarePants in unsettling roles.

The content has fueled criticism that OpenAI is once again moving faster than its own guardrails. Its use of copyrighted material — unless rights holders opt out — is consistent with the company’s current policy, though that approach is being challenged in court.

Altman has brushed off concerns, saying in a post on X that Sora is as much about transparency — showing the public what the technology can do — as it is about building commercial momentum to fund OpenAI’s broader ambitions around artificial general intelligence.

The launch comes amid intensifying competition. Meta rolled out Vibes last week, a new short-form AI video feed inside its Meta AI app. Google has Veo 3, while ByteDance and Alibaba have also debuted rival systems.

OpenAI, meanwhile, just committed to fresh spending of $850 billion, deepening its push into infrastructure and next-gen models.


Experts say the push into video isn’t just about drawing more users into the ecosystem with another sticky consumer app.

Professor Hao Li, a leading expert in video synthesis, told CNBC that most AI systems today are still trained on linguistic data like books and internet text. But to move toward general intelligence, he said, models need to learn from visual and audio information, much like a baby discovers the world through sight.

“We use AI to generate content to then train another model to perform better,” he said.

Li added that his lab already uses AI-generated video to enhance model performance, feeding synthetic data back into the system.

It’s part of a broader trend among researchers who see video generation as a way to simulate reality and help models reason more like humans.
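As a loose illustration of that feedback loop, the toy example below has one "generator" produce synthetic samples that are mixed back into the training data for a second model. The random generator and single-parameter model are hypothetical stand-ins, not Li’s actual pipeline.

```python
# Toy sketch of a synthetic-data feedback loop: one model generates samples, and those
# samples are mixed back into the training data for another model.
# The random "generator" and single-parameter "model" are hypothetical stand-ins.
import random

def generator(n: int) -> list:
    """Stand-in for a generative model that produces synthetic samples."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def train(param: float, data: list, lr: float = 0.05) -> float:
    """Toy 'training' step: nudge a single parameter toward each data point."""
    for x in data:
        param += lr * (x - param)
    return param

real_data = [0.2, -0.1, 0.05, 0.3]        # small real dataset
model_param = train(0.0, real_data)       # first pass: train on real data only

for round_num in range(3):                # feedback loop: augment with synthetic samples
    synthetic = generator(50)
    model_param = train(model_param, real_data + synthetic)
    print(f"round {round_num}: parameter = {model_param:.3f}")
```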

Former OpenAI executive Zack Kass, whose forthcoming book “The Next Renaissance: AI and the Expansion of Human Potential” explores the societal implications of artificial intelligence, echoed that view.

On the broader question of how model makers should approach deployment, Kass argued that the trade-offs of releasing powerful technology early are worth it.

“There are two alternatives to building in the open: Not building at all, or building privately. And those alternatives, to me, are worse,” he told CNBC. “If we have a groundbreaking technology, I think people should know about it and use it so that we can all update to it.”
