
New York is 3,000 miles away from the tech hub of Silicon Valley, but in recent weeks, the state has inserted itself into the center of a fierce debate around artificial intelligence regulation.
A bipartisan super PAC called “Leading the Future” announced last week that it will target Alex Bores, a Democratic congressional candidate who has openly championed AI safety legislation in New York by promoting the Responsible AI Safety and Education (RAISE) Act. The bill would require large AI companies to publish safety and risk protocols and disclose serious safety incidents.
“They don’t want there to be any regulation whatsoever,” Bores told CNBC’s “Squawk Box” on Monday. “What they’re saying is the fact that you dared step up and push back on us at all means we need to bury you with millions and millions of dollars.”
Leading the Future (LTF) launched in August with more than $100 million in funding and aims to elevate “candidates who support a bold, forward-looking approach to AI,” according to a release. The group largely echoes the Trump administration’s view that federal AI law should preempt regulations implemented by individual states, an effort aimed chiefly at big blue states like California and New York.
The super PAC is backed by high-profile names in tech, including OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, venture firm Andreessen Horowitz and AI startup Perplexity.
“LTF and its affiliated organizations will oppose policies that stifle innovation, enable China to gain global AI superiority, or make it harder to bring AI’s benefits into the world, and those who support that agenda,” the group said in the release.
Bores has served as a New York State Assembly member since 2023, and previously worked at several tech companies, including Palantir. He launched his congressional campaign for New York’s 12th district in October after sitting Democratic Rep. Jerry Nadler announced he would not run for reelection.
As an assemblyman, Bores co-sponsored the RAISE Act.
“I’m very bullish on the power of AI, I take the tech companies seriously for what they think this could do in the future,” Bores said on Monday. “But the same pathways that will allow it to potentially cure diseases [will] allow it to, say, build a bio weapon. And so you just want to be managing the risk of that potential.”
Assembly member Alex Bores speaks during a press conference on the Climate Change Superfund Act at Pier 17 on May 26, 2023 in New York City.
Michael M. Santiago | Getty Images
The RAISE Act passed the New York State Assembly and Senate in June. Democratic Gov. Kathy Hochul has until the start of the 2026 session to decide whether to sign it into law.
On Nov. 17, LTF’s leaders, Zac Moffatt and Josh Vlasto, announced they plan to spend millions of dollars to try to sink Bores’ congressional bid. In a statement, they accused Bores of pushing “ideological and politically motivated legislation” that would “handcuff” the U.S. and its ability to lead in AI.
The bill is “a clear example of the patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership,” Moffatt and Vlasto told CNBC in a statement.
Moffatt has more than two decades of experience in digital and political strategy, while Vlasto previously served as press secretary to Sen. Chuck Schumer (D-NY) and chief of staff to former New York Gov. Andrew Cuomo.
Politico was first to report LTF’s effort to target Bores.
Bores has capitalized on LTF’s announcement as a fundraising opportunity. In a post on X, he urged voters to donate to his campaign if they “don’t want Trump mega-donors writing all tech policy.”
“I am someone with a master’s in computer science, two patents, and nearly a decade working in tech,” Bores told CNBC in a statement last week. “If they are scared of people who understand their business regulating their business, they are telling on themselves.”
What is the RAISE Act?
The RAISE Act applies to any large AI company, like Google, Meta or OpenAI, that has spent more than $100 million on computational resources to train advanced models.
It would require these companies to write, publish and follow safety and security protocols, and to update them as necessary. Violators could be subject to penalties of up to $30 million.
The companies would also have to implement safeguards to prevent their models from contributing to “critical harm,” such as assisting in the creation of chemical weapons or carrying out large-scale, automated criminal activity. The bill defines “critical harm” as the death or serious injury of 100 or more people, or at least $1 billion in damages.
Under the RAISE Act, large AI companies would not be able to release models that would create “unreasonable risk of critical harm.” Bores said the bill’s opponents have pushed back fiercely on that part of the legislation.
“That’s designed to basically avoid the problem we had with the tobacco companies, where they knew that cigarettes caused cancer but denied it publicly and continued to release their products,” he said.
The RAISE Act would also require AI companies to disclose notable safety incidents. If a model is stolen by a malicious actor, for instance, its developer would have to disclose that incident within 72 hours of learning about it.
“We just saw two weeks ago, Anthropic talk about how China used their model to do a cyber attack on U.S. government institutions and our chemical manufacturing plants,” Bores said. “Shockingly, they didn’t have to disclose that. I think that should be law and be required for every major AI developer.”
Anthropic, an AI startup valued at around $350 billion after recent investments, published a blog post earlier this month detailing what it called “the first documented case of a large-scale cyberattack executed without substantial human intervention.” Anthropic said it believes the threat actor was a Chinese state-sponsored group.
Bores told Tech Brew that he drafted the initial version of the bill in August 2024 and sent it to “all of the major developers” for feedback. He put together a second draft in December and solicited another round of redlines.
The RAISE Act was published in March, and amended in May and June.
“I worked really closely with a lot of people in industry to get the details right,” Bores told Tech Brew.
U.S. President Donald Trump arrives on the South Lawn of the White House on November 22, 2025 in Washington, DC.
John McDonnell | Getty Images
LTF’s decision to target Bores over the RAISE Act is emblematic of a broader debate around whether AI should be regulated at the state or federal level in the U.S.
Some lawmakers and tech executives have argued that a “patchwork” of state AI policies will hinder innovation and put the U.S. at risk of falling behind adversaries like China. But others, including Bores, have said that the federal government moves too slowly to keep up with the rapid pace of AI development.
“What’s being debated right now is, should we stop the states from making any progress before the feds have solved the problem? Or should we actually work together to have the federal government solve the problem?” Bores said.
Aside from New York, states including California, Colorado and Illinois have their own AI laws that are either already in effect or will take effect early next year.
Last week, President Donald Trump advocated for a federal AI standard in a post on his social media site Truth Social.
“Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World, but overregulation by the States is threatening to undermine this Major Growth ‘Engine,'” Trump wrote. “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race.”
The White House also began drafting an executive order that would target state AI laws by launching legal challenges and withholding federal funding, CNBC reported on Thursday. But a day later, the Trump administration put a hold on that effort, according to a report from Reuters.
The White House didn’t provide a comment for this story.
Earlier this year, a proposed amendment to Trump’s “One Big Beautiful Bill Act” would have imposed a 10-year moratorium on state-level AI laws. That provision was ultimately stripped from the legislation, but the Trump administration has recently revived the effort.
The White House is working to see if a moratorium on certain state AI laws could be included in one of the major must-pass bills that Congress is pursuing.
“What we’re seeing in AI is natural, states are stepping up and moving quickly,” Bores said. “We should eventually have a federal AI standard. I strongly agree with that.”
