Trump jumps from 'anything goes' to 'strict regulation' AI policy

Reddit r/ArtificialInteligence News

Summary

The article discusses President Trump's shift from an 'anything goes' AI policy to considering strict regulation, including pre-deployment government reviews for high-risk frontier AI models, citing cybersecurity and national security concerns.

On second thought, Trump's troopers decided they want to call the shots on AI after all. Will tomorrow's American "legal" AIs require Republican approval? Stay tuned.


# Trump jumps from 'anything goes' to 'strict regulation' AI policy

Source: [https://www.theregister.com/ai-and-ml/2026/05/08/trump-jumps-from-anything-goes-to-strict-regulation-ai-policy/5234687](https://www.theregister.com/ai-and-ml/2026/05/08/trump-jumps-from-anything-goes-to-strict-regulation-ai-policy/5234687)

OPINION: When President Donald Trump returned to power, he cast himself as the anti-Biden on AI. First, [he tore up Biden's Executive Order 14110](https://www.theregister.com/on-prem/2025/01/21/trump-wastes-no-time-quashing-biden-ai-ev-executive-orders/926971), which had demanded "safe, secure, and trustworthy" AI. He then replaced it with his own "Removing Barriers to American Leadership in Artificial Intelligence" directive, ordering agencies to rescind or dilute rules seen as obstacles to innovation. In short, American AI vendors could do anything they wanted.

That was then. This is now. While Trump has yet to issue a new AI Executive Order, we know his crew is forming an AI working group of tech execs and government officials to bring oversight to AI. Specifically, they're considering requiring all new "high-risk" AI frontier models to undergo a formal government review before they can be used. That's going to go over well.

What we do know is that National Economic Council Director Kevin Hassett has said: "We're studying possibly an executive order to give a clear roadmap to everybody about how this is gonna go, and how future AIs that also potentially create vulnerabilities should go through a process so that they're released into the wild after they've been proven safe – [just like an FDA drug.](https://www.foxbusiness.com/video/6394769903112)" Considering that people who ignore evidence now regulate healthcare in the United States, that doesn't fill me with much confidence.

Indeed, we now know the FDA [blocked the publication of studies showing that COVID-19 and shingles vaccines were safe](https://www.theguardian.com/us-news/2026/may/05/covid-shingles-vaccines-studies-fda). Are these the kinds of people we want calling the shots on AI?

Be that as it may, the Trump yes-men are framing this shift as a response to escalating cybersecurity and national-security risks rather than as a broader embrace of EU-style AI regulation. Yes, they're looking at [Anthropic's Mythos](https://www.theregister.com/security/2026/04/08/anthropic-mythos-model-can-find-and-exploit-0-days/5224393) and its potential use by hackers. At the same time, they emphasize that they want to avoid "onerous" controls on everyday AI applications. Frontier models that could supercharge cyberwarfare, bio-threats, or other strategic dangers are another matter.

That's quite a change from last summer, when Trump babbled: "We have to [grow that \[AI\] baby](https://rollcall.com/factbase/trump/transcript/donald-trump-speech-artificial-intelligence-ai-executive-orders-july-23-2025/) and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules." Now he seems to think rules would be a good thing.

Darrell West, a senior fellow at the Center for Technology Innovation at the Brookings Institution, has suggested that [Trump is returning to Biden's policy](https://www.marketwatch.com/story/heres-how-far-the-trump-administrations-startling-turn-on-ai-regulation-might-go-15ce46be). Just don't tell him that; he'll have a fit.

While Trump and company are still contemplating exactly how they want to rule – sorry, regulate – AI, the Department of Commerce's Center for AI Standards and Innovation (CAISI) announced new agreements with Google DeepMind, Microsoft, and xAI. According to these new policy statements, [CAISI will conduct pre-deployment evaluations](https://www.nist.gov/news-events/news/2026/05/caisi-signs-agreements-regarding-frontier-ai-national-security-testing) and targeted research to better assess frontier AI capabilities and advance the state of AI security. CAISI director Chris Fall said: "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications."

How to do this? Who will do this? What will it look like? Good questions! Too bad we don't have any answers yet.

You may have noticed that Anthropic was not invited to this cozy policy get-together. Funny, that, since most observers think Mythos was the model that broke the "do anything you want" AI camel's back in Trump's White House. That's because the months-long [feud between the administration and Anthropic is still simmering](https://www.theregister.com/on-prem/2026/02/27/trump-orders-feds-to-drop-woke-anthropic-after-ai-spat/4696877). Trump's team moved to block federal agencies from using the company's tools, and Anthropic is now challenging that policy in court. Recently, however, Trump's tone has softened. [Trump told CNBC that Anthropic was "shaping up."](https://www.cnbc.com/2026/04/21/trump-anthropic-department-defense-deal.html) If he can't get peace with Iran, maybe peace with Anthropic will please him. On the other hand, we also know that the [Trumpies are considering forbidding companies from "interfering" with the government's use of AI models](https://www.politico.com/news/2026/05/05/white-house-mulls-tight-new-controls-on-advanced-ai-00907468). You hear that, Anthropic? You will toe the line!

Meanwhile, Gregory Falco, a Cornell assistant professor of mechanical and aerospace engineering, pointed out the obvious: "The federal government [does not currently have the in-house technical expertise](https://news.cornell.edu/media-relations/tip-sheets/oversight-ai-cannot-simply-mean-political-review-models), infrastructure, or day-to-day insight needed to directly evaluate these systems on its own." Expertise is something Trump's cast of characters sorely lacks across any and all subjects. "At the same time," Falco continued, "a purely voluntary model of self-governance is not enough." After all, foxes are notorious guardians of chicken houses.

What I think will happen is that AI vendors who play ball with Trump will end up "governing" AI alongside some Trump loyalists. It's going to be ugly. Some regulation is needed, but these are not the people who will do a good job of it. I won't be surprised if one of Trump's goals is not so much to make AI safer as to ensure that the answers AI gives are the ones he and his regime want people to see.

Today, for example, when I asked a variety of chatbots who lost the 2020 election, they all agreed Trump had lost. Funnily enough, when the Senate Judiciary Committee asked numerous Trump nominees for federal judgeships the same question, [they universally refused to say he lost](https://demandjustice.org/judicialreport/).

For better or worse, most Americans don't pay attention to legal news. What they do, however, is ask AI chatbots for answers. Foolish of them, considering how inaccurate they can be, but there it is. If Trump's allowed to call the shots, I've little doubt that the approved bots will follow in the footsteps of his obedient judges and give the answers he wants and not the truth. ®

Similar Articles

Frontier AI regulation: Managing emerging risks to public safety

OpenAI Blog

OpenAI proposes a regulatory framework for 'frontier AI' models that pose potential public safety risks, advocating for standard-setting processes, registration/reporting requirements, and compliance mechanisms including pre-deployment risk assessments and post-deployment monitoring.

Major U.S. AI Labs Now Subject to Pre-Release Government Security Reviews

Reddit r/ArtificialInteligence

The U.S. government has established voluntary pre-release security review agreements with every major domestic AI lab, marking a significant step in federal oversight of frontier model development. The policy aims to proactively assess national security risks before powerful AI systems are publicly deployed.

Industrial policy for the Intelligence Age

OpenAI Blog

OpenAI releases a slate of people-first policy ideas for the Intelligence Age, proposing frameworks to expand opportunity and ensure advanced AI benefits everyone, accompanied by fellowship programs and a Washington DC workshop.