EU kicks off landmark AI law enforcement as first batch of restrictions enter into force

The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.

Jaque Silva | Nurphoto | Getty Images

The European Union formally kicked off enforcement of its landmark artificial intelligence law Sunday, paving the way for tough restrictions and potential large fines for violations.

The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024.

Sunday marked the first compliance deadline: prohibitions on certain artificial intelligence systems, along with requirements to ensure sufficient AI literacy among staff, are now in effect.

That means companies must now comply with the restrictions and can face penalties if they fail to do so.

The AI Act bans certain applications of AI that it deems to pose an “unacceptable risk” to citizens.

Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, and “manipulative” AI tools.

Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenues — whichever amount is higher — for breaches of the EU AI Act.

The size of the penalties will depend on the infringement and size of the company fined.

That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law. Companies face fines of up to 20 million euros or 4% of annual global turnover for GDPR breaches.

‘Not perfect’ but ‘very much needed’

It’s worth stressing that the AI Act still isn’t in full force — this is just the first step in a series of many upcoming developments.

Tasos Stampelos, head of EU public policy and government relations at Mozilla, told CNBC previously that while it’s “not perfect,” the EU’s AI Act is “very much needed.”

“It’s quite important to recognize that the AI Act is predominantly a product safety legislation,” Stampelos said in a CNBC-moderated panel in November.

“With product safety rules, the moment you have it in place, it’s not a done deal. There are a lot of things coming and following after the adoption of an act,” he said.

“Right now, compliance will depend on how standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, that will actually stipulate what compliance looks like,” Stampelos added.

In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.

The second draft contained exemptions for providers of certain open-source AI models, while also requiring developers of “systemic” GPAI models to undergo rigorous risk assessments.

Setting the global standard?

Several technology executives and investors are unhappy with some of the more burdensome aspects of the AI Act and worry it might strangle innovation.

In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he’s “really concerned” about Europe’s focus on regulating AI.

“Our ambition seems to be limited to being good regulators,” Constantijn said. “It’s good to have guardrails. We want to bring clarity to the market, predictability and all that. But it’s very hard to do that in such a fast-moving space.”

Still, some think that having clear rules for AI could give Europe a leadership advantage.

“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.

“The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation; they’re defining what good looks like,” he added.
