EU kicks off landmark AI law enforcement as first batch of restrictions enter into force

The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.


The European Union formally kicked off enforcement of its landmark artificial intelligence law Sunday, paving the way for tough restrictions and potentially large fines for violations.

The EU AI Act, a first-of-its-kind regulatory framework for the technology, formally entered into force in August 2024.

On Sunday, the deadline lapsed for prohibitions on certain artificial intelligence systems and for requirements to ensure a sufficient level of AI literacy among staff.

That means companies must now comply with the restrictions and can face penalties if they fail to do so.

The AI Act bans certain applications of AI which it deems as posing “unacceptable risk” to citizens.

Those include social scoring systems, real-time facial recognition and other forms of biometric identification that categorize people by race, sex life, sexual orientation and other attributes, and “manipulative” AI tools.

Companies face fines of as much as 35 million euros ($35.8 million) or 7% of their global annual revenues — whichever amount is higher — for breaches of the EU AI Act.

The size of the penalties will depend on the infringement and size of the company fined.

That’s higher than the fines possible under the GDPR, Europe’s strict digital privacy law. Companies face fines of up to 20 million euros or 4% of annual global turnover for GDPR breaches.

‘Not perfect’ but ‘very much needed’

It’s worth stressing that the AI Act still isn’t in full force — Sunday’s deadline is just the first in a series of compliance milestones still to come.

Tasos Stampelos, head of EU public policy and government relations at Mozilla, told CNBC previously that while it’s “not perfect,” the EU’s AI Act is “very much needed.”

“It’s quite important to recognize that the AI Act is predominantly a product safety legislation,” Stampelos said in a CNBC-moderated panel in November.

“With product safety rules, the moment you have it in place, it’s not a done deal. There are a lot of things coming and following after the adoption of an act,” he said.

“Right now, compliance will depend on how standards, guidelines, secondary legislation or derivative instruments that follow the AI Act, that will actually stipulate what compliance looks like,” Stampelos added.

In December, the EU AI Office, a newly created body regulating the use of models in accordance with the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.

The second draft contained exemptions for providers of certain open-source AI models while including the requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.

Setting the global standard?

Several technology executives and investors are unhappy with some of the more burdensome aspects of the AI Act and worry it might strangle innovation.

In June 2024, Prince Constantijn of the Netherlands told CNBC in an interview that he’s “really concerned” about Europe’s focus on regulating AI.

“Our ambition seems to be limited to being good regulators,” Constantijn said. “It’s good to have guardrails. We want to bring clarity to the market, predictability and all that. But it’s very hard to do that in such a fast-moving space.”

Still, some think that having clear rules for AI could give Europe a leadership advantage.

“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” Diyan Bogdanov, director of engineering intelligence and growth at Bulgarian fintech firm Payhawk, said via email.

“The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation; they’re defining what good looks like,” he added.


