EU kicks off landmark AI law enforcement as first batch of restrictions enter into force

The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.


The European Union formally kicked off enforcement of its landmark artificial intelligence law Sunday, paving the way for tough restrictions and potential large fines for violations.

The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024.

On Sunday, the compliance deadline for prohibitions on certain artificial intelligence systems, and for requirements to ensure sufficient AI literacy among staff, officially passed.

That means companies must now comply with the restrictions and can face penalties if they fail to do so.

The AI Act bans certain applications of AI that it deems to pose an “unacceptable risk” to citizens.

Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, and “manipulative” AI tools.

Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenues — whichever amount is higher — for breaches of the EU AI Act.

The size of the penalties will depend on the infringement and size of the company fined.

That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law. Companies face fines of up to 20 million euros or 4% of annual global turnover for GDPR breaches.

‘Not perfect’ but ‘very much needed’

It’s worth stressing that the AI Act still isn’t in full force — Sunday’s deadline is only the first in a series of staggered compliance milestones.

Tasos Stampelos, head of EU public policy and government relations at Mozilla, told CNBC previously that while it’s “not perfect,” the EU’s AI Act is “very much needed.”

“It’s quite important to recognize that the AI Act is predominantly a product safety legislation,” Stampelos said in a CNBC-moderated panel in November.

“With product safety rules, the moment you have it in place, it’s not a done deal. There are a lot of things coming and following after the adoption of an act,” he said.

“Right now, compliance will depend on how standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, that will actually stipulate what compliance looks like,” Stampelos added.

In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.

The second draft contained exemptions for providers of certain open-source AI models while including the requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.

Setting the global standard?

Several technology executives and investors are unhappy with some of the more burdensome aspects of the AI Act and worry it might strangle innovation.

In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he’s “really concerned” about Europe’s focus on regulating AI.

“Our ambition seems to be limited to being good regulators,” Constantijn said. “It’s good to have guardrails. We want to bring clarity to the market, predictability and all that. But it’s very hard to do that in such a fast-moving space.”

Still, some think that having clear rules for AI could give Europe a leadership advantage.

“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.

“The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation; they’re defining what good looks like,” he added.
