The City of Love was not all bonhomie this week as world leaders descended on Paris for a sometimes-contentious summit on the future of international AI development.
Vice President JD Vance set the tone for the Trump administration’s new approach to AI safety with a speech that laid out a vision of American dominance of the technology and criticized European digital regulations. The US and the UK also declined to sign a non-binding pledge calling for more “inclusive and sustainable” AI development backed by more than 60 countries.
The summit also comes as seemingly ultra-efficient generative AI models from DeepSeek and other Chinese companies have intensified a global arms race with the US around the technology.
Safety off: Vance purposefully sought to draw a contrast with the first of these annual summits, held in England in 2023, when 28 governments—including the US and the UK—signed an agreement that noted risks of “serious, even catastrophic, harm” from AI models.
“I’m not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I’m here to talk about AI opportunity,” Vance said. “The AI future is not going to be won by hand-wringing about safety. It will be won by building.”
Vance warned governments off of “tightening the screws” on American companies with AI regulation and criticized the EU’s Digital Services Act and General Data Protection Regulation (GDPR) as overly onerous. “America cannot and will not accept [over-regulation], and we think it’s a terrible mistake,” Vance said.
But the feeling wasn’t confined to the American delegation; even the gathering’s name shifted from the “AI Safety Summit” to the “AI Action Summit.” European Commission President Ursula von der Leyen vowed to “cut red tape,” and French President Emmanuel Macron also called for a lighter regulatory touch, according to a New York Times dispatch.
Those comments come as Europe is still in the process of implementing parts of its sweeping AI Act, which it passed into law last year.
New tools: But talk of safety wasn’t completely absent from the conversation this week. A high-profile group of backers including Google, OpenAI, Roblox, and Discord unveiled a new nonprofit aimed at creating tools to improve child safety online.
Robust Open Online Safety Tools (ROOST) will offer open-source technology infrastructure to detect and report child sexual abuse material (CSAM) online, a growing problem in the age of generative AI-powered deepfakes.
Elephant in the room: DeepSeek, the Chinese AI lab that emerged as a dark-horse competitor in the global AI arms race with its purportedly shoestring-budget models, also cast a shadow over the conference. The upset has brought more attention to open-source AI and smaller labs.
Linda Griffin, Mozilla’s VP of global policy, told us in an email from Paris that shifting thinking around AI risks has been a positive development for open-source AI, though the regulatory landscape remains fragmented.
“The Paris AI Action Summit marked a turning point in the global AI conversation. Just a year ago, open-source AI was framed as a risk,” Griffin said. “Now, world leaders are recognizing that openness is essential to AI safety, competition, and public trust.”
What’s missing: While some tech leaders cheered the focus on AI innovation over safety, Anthropic CEO Dario Amodei said in a statement that the summit was “a missed opportunity” for an international discussion around safety.
“International conversations on AI must more fully address the technology’s growing security risks,” Amodei said. “Advanced AI presents significant global security dangers, ranging from misuse of AI systems by non-state actors…to the autonomous risks of powerful AI systems.”