Salesforce’s UK chief urges government not to regulate all AI companies in the same way

Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, speaking during the company’s annual Dreamforce conference in San Francisco, California, on Sept. 17, 2024.

David Paul Morris | Bloomberg | Getty Images

LONDON — The UK chief executive of Salesforce wants the Labour government to regulate artificial intelligence — but says it’s important that policymakers don’t tar all technology companies developing AI systems with the same brush.

Speaking to CNBC in London, Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, said the American enterprise software giant takes all legislation “seriously.” However, she added that any British proposals aimed at regulating AI should be “proportional and tailored.”

Bahrololoumi noted that there’s a difference between companies developing consumer-facing AI tools — like OpenAI — and firms like Salesforce that build enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.

“What we look for is targeted, proportional, and tailored legislation,” Bahrololoumi told CNBC on Wednesday.

“There’s definitely a difference between those organizations that are operating with consumer facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, [but] we’re a B2B organization,” she said.

A spokesperson for the UK’s Department of Science, Innovation and Technology (DSIT) said that planned AI rules would be “highly targeted to the handful of companies developing the most powerful AI models,” rather than applying “blanket rules on the use of AI.”

That indicates the rules might not apply to companies like Salesforce, which don’t develop their own foundation models in the way OpenAI does.

“We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy,” the DSIT spokesperson added.

Data security

Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents” — essentially, autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.

For example, one feature called “zero retention” means no customer data can ever be stored outside of Salesforce. As a result, generative AI prompts and outputs aren’t stored in Salesforce’s large language models — the programs that form the bedrock of today’s genAI chatbots, like ChatGPT.

With consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta’s AI assistant, it’s unclear what data is being used to train them or where that data gets stored, according to Bahrololoumi.

“To train these models you need so much data,” she told CNBC. “And so, with something like ChatGPT and these consumer models, you don’t know what it’s using.”

Even Microsoft’s Copilot, which is marketed to enterprise customers, comes with heightened risks, Bahrololoumi said, citing a Gartner report that called out the tech giant’s AI personal assistant over the security risks it poses to organizations.

OpenAI and Microsoft were not immediately available for comment when contacted by CNBC.

AI concerns ‘apply at all levels’

Bola Rotibi, chief of enterprise research at analyst firm CCS Insight, told CNBC that, while enterprise-focused AI suppliers are “more cognizant of enterprise-level requirements” around security and data privacy, it would be wrong to assume regulations won’t scrutinize both consumer- and business-facing firms.

“All the concerns around things like consent, privacy, transparency, data sovereignty apply at all levels no matter if it is consumer or enterprise as such details are governed by regulations such as GDPR,” Rotibi told CNBC via email. GDPR, or the General Data Protection Regulation, became law in the UK in 2018.

However, Rotibi said that regulators may feel “more confident” in AI compliance measures adopted by enterprise application providers like Salesforce, “because they understand what it means to deliver enterprise-level solutions and management support.”

“A more nuanced review process is likely for the AI services from widely deployed enterprise solution providers like Salesforce,” she added.

Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new “agentic” AI technology by partners and customers.

Her remarks come after U.K. Prime Minister Keir Starmer’s Labour government refrained from introducing an AI bill in the King’s Speech, which is written by the government to outline its priorities for the coming months. The government said at the time that it plans to establish “appropriate legislation” for AI, without offering further details.


