While artificial intelligence — at least of the narrow variety — had been slowly and quietly making its way into American society since early in the last decade, it was not until OpenAI publicly released ChatGPT in November 2022 that the U.S. federal and state governments began to take explicit notice. With Donald Trump entering office on January 20, the executive branch's likely AI policy agenda for 2025 is worth examining.
“When I’m re-elected,” candidate Trump said at a rally in Cedar Rapids, Iowa, in December 2023, “I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one.”
This promise was also included in the 2024 Republican Party platform under “Build the Greatest Economy in History.” Other Republican legislators have equated the work of the National Institute of Standards and Technology’s (NIST) AI Safety Institute (AISI) with censorship of conservative speech and with attempts to steer AI development according to liberal notions of social harm — disinformation and bias — rather than focusing on physical, safety-related harms to American consumers.
In a letter to Jason Matheny, RAND Corp. president and CEO, regarding RAND’s role in drafting the Biden administration’s AI executive order, Sen. Ted Cruz, R-Texas, criticized NIST’s “woke AI ‘safety’ standards” as a “plan to control speech” based on “amorphous” social harms.
A coalition of 60 companies, including major AI companies such as Amazon, Google, IBM, and Microsoft; nonprofits, including the Information Technology Industry Council and Americans for Responsible Innovation; and universities, including Carnegie Mellon University and Drexel University, requested in a letter to congressional leaders that Congress enact legislation codifying the AISI before the end of 2024. However, this legislation was not passed.
Among the letter’s signers are OpenAI and Anthropic. Both companies have signed agreements with the AISI to collaborate on AI research, testing, and evaluation.
During an AI summit in Seoul, South Korea, in May 2024, international leaders agreed to form a network of AI safety institutes spanning Australia, Canada, the European Union, France, Germany, Italy, Japan, Singapore, South Korea, the United Kingdom, and the United States.
Trump has identified two members of his administration who will have AI policy responsibilities. The first is David Sacks, a Silicon Valley venture capitalist and early PayPal executive who, in 2024, launched Glue, an AI-powered work chat app designed to streamline workplace communications. Trump appointed Sacks as a special government employee, a status that allows him to serve up to 130 days a year without divesting or publicly disclosing his assets. Sacks will help guide public policy on AI and cryptocurrency.
The second is Sriram Krishnan, a former Andreessen Horowitz general partner, who will serve as senior policy adviser for artificial intelligence in the White House Office of Science and Technology Policy.
Sacks, who has ties to Silicon Valley figures Elon Musk and Peter Thiel, has been a vocal critic on his podcast of what he describes as “Big Tech censorship” and excessive regulation. His approach to AI centers on competition — a free-market ideology that aligns with Trump’s broader deregulatory agenda and its emphasis on reinvigorating U.S. dominance in emerging technologies. Policies under “AI and crypto czar” Sacks are likely to favor innovation over regulation, emphasizing open-source AI development and reducing barriers for AI-focused startups.
Likewise, Krishnan has stressed “the importance of redefining relationships between platforms and AI models,” arguing that the current model undermines innovation and fairness in the data ecosystem. He has advocated for decentralization in technology, describing it as a mechanism to empower users and break away from control by dominant platforms.
Sacks’ and Krishnan’s policy approaches align with Trump’s. Yet, at least initially, the administration will need to replace Biden’s executive order with one of its own that lays out the new administration’s approach to future AI development. That order could maintain the AISI within NIST while refocusing the agency’s administrative charge on physical safety standards — rather than social issues — in the implementation of AI technologies.
There is a case for a permanent, refocused AISI — legislatively enacted by a Republican-controlled 119th Congress — that could nurture industry competition, address tangible safety issues, and support American competitiveness in the global economy.