At a global gathering formerly known as the AI Safety Summit last week, Vice President J.D. Vance declared that “the AI future is not going to be won by hand-wringing about safety.”
It was a somewhat predictable sentiment from an administration that was already widely expected to put no-holds-barred innovation ahead of regulatory concerns. But a similar attitude seems to be fashionable right now even outside Washington, whether because of the Trump administration’s cues or not.
News items from the past week or so point to a fast-evolving AI safety space.
- The UK government renamed its AI Safety Institute last week, dropping the “Safety” in favor of “Security.” While the announcement insisted that the work wouldn’t change, Politico noted several language shifts on the institute’s website that matched themes in Vance’s speech.
- The Trump administration is set to gut the US’s own AI Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), Axios first reported this week. Firings will also target staff at Chips for America, the initiative to ramp up domestic semiconductor production.
- Last week, the EU said it would back off from certain proposed tech regulations, including liability rules that would make it easier for consumers to sue over AI harms. The EU’s digital policy chief, Henna Virkkunen, told the Financial Times the move was an effort to cut red tape and boost competition rather than bowing to Vance’s warning days before about “onerous” tech regulation.
“The policy landscape has undergone a dramatic transformation, particularly in the US,” Manoj Saxena, founder and chairman of the Responsible AI Institute, told Tech Brew in an email. “We’re seeing a clear move away from regulatory oversight.”
The nonprofit Responsible AI Institute itself announced this week that these shifts have pushed it to back away from policy advocacy and focus on building tools that help its corporate members manage risk in the absence of regulation.
“We’re seeing a concerning trend where fear of missing out—or FOMO—is driving rapid AI adoption without proper safeguards,” Saxena said. “The risks here are substantial: Uncontrolled AI deployment can lead to severe reputational damage if systems make high-profile mistakes or exhibit biased behavior.”
Caroline Shleifer, founder and CEO of regulatory management platform RegASK, told us businesses should expect that “AI governance will remain in flux for the foreseeable future.”
“That uncertainty increases risk, especially for industries like life sciences and consumer goods,” Shleifer said in an email.
Chinese challenger: All this rethinking comes as the Chinese lab DeepSeek has injected an unexpected rivalry into the global race around AI. The upstart’s purportedly hyper-efficient model, around which safety concerns also abound, has brought more focus on competition with China among world leaders.
Some AI safety advocacy groups we contacted framed the need for AI safety in these terms, or in other ways amenable to the Trump administration's stated goals.
Varun Krovi, executive director at the Center for AI Safety’s Action Fund, said in a statement that chip export controls and federal support for domestic chip production would boost AI safety.
“[These] are concrete steps the US can take to ensure AI safety and security that align with the administration’s commitment to innovation,” Krovi said. “They are two sides of the same coin.”
AI Policy Institute executive director Daniel Colson also mentioned a “strategic advantage over China” along with “preventing catastrophic risks.”
“As transformative AI systems advance rapidly, we hope the administration’s forthcoming AI Action Plan will include robust measures to prevent the most severe potential harms while promoting responsible innovation,” Colson said in a statement.