Election deepfakes could undermine institutional credibility, Moody’s warns

With election season underway and artificial intelligence evolving rapidly, AI manipulation in political advertising is becoming an issue of greater concern to the market and economy. A new report from Moody’s on Wednesday warns that generative AI and deepfakes are among the election integrity issues that could present a risk to U.S. institutional credibility.

“The election is likely to be closely contested, increasing concerns that AI deepfakes could be deployed to mislead voters, exacerbate division and sow discord,” wrote Moody’s assistant vice president and analyst Gregory Sobel and senior vice president William Foster. “If successful, agents of disinformation could sway voters, impact the outcome of elections, and ultimately influence policymaking, which would undermine the credibility of U.S. institutions.” 

The government has been stepping up its efforts to combat deepfakes. On May 22, Federal Communications Commission Chairwoman Jessica Rosenworcel proposed a new rule that would require political TV, video and radio ads to disclose whether they used AI-generated content. The FCC has been concerned about AI use in this election cycle's ads, with Rosenworcel pointing to potential issues with deepfakes and other manipulated content.

Social media has been outside the sphere of the FCC's regulations, but the Federal Election Commission is also considering widespread AI disclosure rules that would extend to all platforms. In a letter to Rosenworcel, the FEC encouraged the FCC to delay its decision until after the elections, arguing that because the FCC's changes would not be mandatory across digital political ads, they could mislead voters into assuming that online ads without disclosures contained no AI-generated content even when they did.

While the FCC's proposal might not cover social media outright, it opens the door for other bodies to regulate ads in the digital world as the U.S. government seeks to establish itself as a strong regulator of AI content. And, perhaps, those rules could extend to even more types of advertising.

“This would be a groundbreaking ruling that could change disclosures and advertisements on traditional media for years to come around political campaigns,” said Dan Ives, Wedbush Securities managing director and senior equity analyst. “The worry is you cannot put the genie back in the bottle, and there are many unintended consequences with this ruling.” 

Some social media platforms have already adopted AI disclosure policies ahead of regulations. Meta, for example, requires an AI disclosure for all of its advertising and is banning all new political ads in the week leading up to the November elections. Google requires disclosures on political ads containing modified content that "inauthentically depicts real or realistic-looking people or events," but doesn't require AI disclosures on all political ads.

The social media companies have good reason to be seen as proactive on the issue as brands worry about being aligned with the spread of misinformation at a pivotal moment for the nation. Google and Facebook are expected to take in 47% of the projected $306.94 billion spent on U.S. digital advertising in 2024. “This is a third rail issue for major brands focused on advertising during a very divisive election cycle ahead and AI misinformation running wild. It’s a very complex time for advertising online,” Ives said. 

Despite self-policing, AI-manipulated content still makes it onto platforms without labels because of the sheer volume of content posted every day. Whether it's AI-generated spam messaging or large batches of AI imagery, it's hard to catch everything.

“The lack of industry standards and rapid evolution of the technology make this effort challenging,” said Tony Adams, Secureworks Counter Threat Unit senior threat researcher. “Fortunately, these platforms have reported successes in policing the most harmful content on their sites through technical controls, ironically powered by AI.”

It's easier than ever to create manipulated content. In May, Moody's warned that deepfakes were "already weaponized" by governments and non-governmental entities as propaganda, to stoke social unrest and, in the worst cases, to commit terrorism.

“Until recently, creating a convincing deepfake required significant technical knowledge of specialized algorithms, computing resources, and time,” Moody’s Ratings assistant vice president Abhi Srivastava wrote. “With the advent of readily accessible, affordable Gen AI tools, generating a sophisticated deep fake can be done in minutes. This ease of access, coupled with the limitations of social media’s existing safeguards against the propagation of manipulated content, creates a fertile environment for the widespread misuse of deep fakes.”

Deepfake audio has already been deployed this election cycle, in a robocall targeting a presidential primary race in New Hampshire.

One potential silver lining, according to Moody's, is the decentralized nature of the U.S. election system, which, alongside existing cybersecurity policies and general awareness of looming cyberthreats, provides some protection. States and local governments are enacting further measures to block deepfakes and unlabeled AI content, but free speech concerns and worries about hindering technological advances have slowed the process in some state legislatures.

As of February, state legislatures were introducing 50 pieces of AI-related legislation per week, according to Moody's, many of them focused on deepfakes. Thirteen states have laws on election interference and deepfakes, eight of which were enacted since January.

Moody’s noted that the U.S. is vulnerable to cyber risks, ranking 10th out of 192 countries in the United Nations E-Government Development Index.

A perception among the public that deepfakes can influence political outcomes, even without concrete examples, is enough to "undermine public confidence in the electoral process and the credibility of government institutions, which is a credit risk," according to Moody's. The more a population worries about separating fact from fiction, the greater the risk the public becomes disengaged and distrustful of the government. "Such trends would be credit negative, potentially leading to increased political and social risks, and compromising the effectiveness of government institutions," Moody's wrote.

“The response by law enforcement and the FCC may discourage other domestic actors from using AI to deceive voters,” Secureworks’ Adams said. “But there’s no question at all that foreign actors will continue, as they’ve been doing for years, to meddle in American politics by exploiting generative AI tools and systems. To voters, the message is to keep calm, stay alert, and vote.” 


