Meta Platforms CEO Mark Zuckerberg arrives outside court to take the stand at trial in a key test case accusing Meta and Google’s YouTube of harming kids’ mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026.
Mike Blake | Reuters
For the last three decades, internet giants have been able to avoid legal exposure for content on their platforms, thanks to a law that differentiates the companies from online publishers. But those safeguards appear to be weakening.
Meta and Google, which dominate the U.S. digital ad market, find themselves as defendants in a host of lawsuits that collectively serve to undermine the long-held notion that they have legal protection for what surfaces on their sites, apps and services. Companies like TikTok and Snap are in the same predicament.
The unifying aspect of the recent cases is that they’re crafted to circumvent Section 230 of the Communications Decency Act, which Congress passed in 1996 and President Bill Clinton signed into law. Established in the early days of the internet, the law protects websites from being sued over content posted by their users, and allows them to act as moderators without being held liable for what stays up.
Last week, a jury in New Mexico found Meta liable in a case involving child safety, while jurors in Los Angeles held the Facebook parent and Google’s YouTube negligent in a personal injury trial. Days after those verdicts were revealed, victims of the notorious sex offender Jeffrey Epstein filed a class action lawsuit against Google and the Trump administration over allegations related to the wrongful disclosure of personal information.
In that complaint, the plaintiffs argue that Google’s AI Mode, which serves up AI-powered summaries and links, is “not a neutral search index,” a clear effort to make the case that Google isn’t just a platform sitting between users and the information they seek.
“The plaintiffs’ bar is winning the war against section 230 through systematic, relentless litigation that is causing there to be divots and chinks in its protection,” said Eric Goldman, a law professor at Santa Clara University School of Law, in an interview.

The stakes are massive as the technology sector exits the era of traditional online search and social networking and enters a world defined by artificial intelligence, where models designed by the owners of the largest platforms are serving up conversational chats, pictures and videos that can range from controversial to potentially illegal. The financial penalties to date have been minimal — less than $400 million in damages between the two verdicts last week — but the cases establish a troubling precedent for tech giants that are betting their future on AI.
“For so long, tech companies have used Section 230 as an excuse to avoid taking meaningful action to protect users, but especially kids from egregious harms, harassment and abuse, frauds and scams,” Sen. Brian Schatz (D-Hawaii) said in March during a U.S. Senate Commerce Committee hearing tied to the 30th anniversary of Section 230. “It’s not that they don’t know what’s happening or even why it’s happening. It’s that to do something about it would be to hurt their bottom line. And so long as federal law provides a shield, why even bother?”
Meta declined to comment for this story. Google didn’t respond to a request for comment. Both companies said they plan to appeal last week’s verdicts.
‘Complicated questions’
Politicians on both sides of the aisle have proposed all sorts of reforms to Section 230 over the years, and company executives have faced public grilling in congressional hearings over the alleged harms caused by their platforms.
President Donald Trump, during his first term in office, supported greater restrictions on social media companies for what he viewed as their bias against him. And Joe Biden, when he was a presidential hopeful in 2020, told The New York Times editorial board that Section 230 “should be revoked” for tech platforms including Facebook, which he said was “propagating falsehoods they know to be false.”
Nadine Farid Johnson, policy director of the Knight First Amendment Institute at Columbia University, said of those legislative efforts that "none of those things have fully come to fruition, in part because they are such complicated questions."
But while the issue has stagnated in Washington, D.C., plaintiff attorneys are finding other routes toward holding big tech companies accountable.
Meta Platforms CEO Mark Zuckerberg testifies before Los Angeles Superior Court Judge Carolyn Kuhl at a trial in a key test case accusing Meta and Google’s YouTube of harming kids’ mental health through addictive platforms, in Los Angeles, California, U.S., Feb. 18, 2026 in a courtroom sketch.
Mona Edwards | Reuters
Last week's verdict against Meta and YouTube marked the first time a jury found social media platforms liable for what plaintiff attorneys alleged was the intentional engineering of addiction in minors. The case went after how the platforms were designed, not just what content they carried.
Plaintiffs argued that the combination of features like autoplay, recommendation algorithms, notifications and certain filters acted like “digital casinos,” leading to serious mental health problems for a young girl who claimed she couldn’t stop using the apps.
The class action suit against Google, filed last week by a plaintiff with the pseudonym Jane Doe, alleged that the company’s AI Mode created its own summaries and links, exposing Epstein victims’ personal identifying information (PII), including names, phone numbers and email addresses.
Kevin Osborne, the plaintiff’s attorney in the case, told CNBC in an interview that the suit was filed after Google declined a request to take down the victims’ contact information from AI Mode. Osborne said the case has to move quickly because of how fast the information is spreading.
“We filed when we filed because we needed to act as soon as possible to get this stuff taken down,” said Osborne, a partner at Erickson Kramer Osborne in San Francisco. “People are getting calls from total strangers and death threats. It’s a nightmare.”
Osborne added that the timing was “serendipitous” given Meta’s court defeats last week, though he noted the cases overlap in that they all involve efforts by the plaintiffs to skirt Section 230. Osborne said that in his case, “this is AI mode coming up with its own content and that’s something that’s not been explored very thoroughly by the courts.”
Matthew Bergman, one of the lawyers representing the plaintiffs in the Los Angeles case, testified before a Senate committee in March and said the tech industry has relied on overly broad interpretations of Section 230 in order “to evade all possible legal accountability simply because third-party content is found somewhere in the causal chain of their misconduct.”
Bergman said he looked closely at a 2021 ruling in an appeals court involving allegations about the role a Snapchat feature played in a fatal car crash. The court reversed an earlier decision to dismiss the case under Section 230, citing the plaintiff’s allegations that Snap’s negligent design incentivized young people to drive recklessly.
“I charted a very narrow legal theory that might legally permit certain cases brought by parents to proceed despite Section 230,” Bergman told lawmakers.
The evidence presented in Los Angeles bolstered the plaintiff’s arguments that Meta and YouTube executives knew of their products’ design harms and failed to adequately address them. At a press briefing about the case on Monday, Bergman said “the best way to prove our case is through their own documents.”
In the Google AI Mode suit, the plaintiff also pointed to design flaws related to the public display of personal information.
“Google is intentionally furnishing that PII in a way designed, or at least substantially certain, to fuel harassment and fear,” the suit says.
Osborne expanded on that idea.
“Google didn’t just provide our client’s email address,” he said. “They created a link, so when you’re reading the content, looking at AI mode, all you’ve got to do is click a button and you’ve generated an email directly to the [Epstein] survivor.”

It’s not the first time Google has been sued for how its AI interacted with users, an issue that’s also created legal challenges for ChatGPT creator OpenAI.
Earlier in March, the father of Jonathan Gavalas filed a lawsuit against Google, accusing the Gemini chatbot of convincing his son to carry out a series of missions, including staging a “catastrophic accident.” The younger Gavalas then died by suicide at the instruction of Gemini, the lawsuit alleges.
And in January, Google settled with families who sued the company and Character.AI, alleging their technology caused harm to minors, including suicides. Last year OpenAI was sued by a family who blamed ChatGPT for their teenage son’s death by suicide.
Supreme Court?
Legal experts said appeals in the latest cases could find their way to the Supreme Court, which would then determine whether the companies are protected by law against such claims.
David Greene, senior counsel at the Electronic Frontier Foundation, called the verdicts “very preliminary decisions,” and said there remains a lack of consensus over whether certain product features are protected by Section 230, or even the First Amendment.
“Just labeling something as a design feature means nothing,” Greene said. “If it’s speech, it’s speech and it gets both First Amendment protection and potentially Section 230 protection as well.”
Johnson of Columbia said she’s pushing Congress to enact a more measured approach that could let tech companies obtain Section 230 protections as long as they meet certain conditions related to data privacy, platform transparency and other prerequisites.
“These questions are only becoming more and more challenging, as the platforms continue to expand their use of generative artificial intelligence, as they are kind of upping their algorithm game,” Johnson said. “Our concern is that this becomes a game of essentially whack-a-mole with every new iteration, with every new piece of technological progress that affects the platforms and the people engaging on the platforms.”
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
