#5 Federal AI Policy Is Evolving Under the Trump Administration. What about Guardrails?

Major changes to US AI policy began early in 2025. Former President Biden’s 2023 executive order on AI was rescinded days after President Donald Trump took office and replaced by Executive Order (E.O.) 14179, Removing Barriers to American Leadership in Artificial Intelligence. Biden’s policies sought to reduce risks from emerging AI capabilities and required developers to share AI safety-test results with the US government before public release, a mandate grounded in the Defense Production Act. Trump, by contrast, holds that AI superiority is vital to national security and that American AI capabilities must be unleashed. He issued his own AI executive order and appendices, arguing that economic and geopolitical forces demand a more aggressive US AI agenda. The Trump framework for AI innovation is evolving into a key leadership strategy and a major change agent for US security and economic policy. Under neither the 2023 nor the 2025 executive order have US lawmakers passed substantial regulatory guardrails.
In April 2025, OMB Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, was released, replacing OMB memos from the Biden Administration. The new memorandum outlines guidance to promote human flourishing, economic competitiveness, and national security while improving public services and maintaining strong safeguards for civil rights, civil liberties, and privacy. The May 2025 tax and policy bill passed by the House includes a ban on new state-led AI regulations and would block some preexisting state AI regulations and oversight.
Blocking a patchwork of new state AI regulations may be worthwhile, but how will protections be crafted? AI governance today is fragmented and largely unregulated, with federal, state, military, industry, academic, research, healthcare, and patients’-rights entities all seeking to harness AI’s potential while struggling with siloed regulations and governance. AI evolves so quickly that regulation typically lags the newest innovation, and catching up is a relentless task. Government pivots slowly and is not well positioned to lead innovation in AI regulation.
The precision needed to strike a balance between supportive and contentious regulation will be a tightrope walk across the Grand Canyon. Identifying capable leaders in both the private and government sectors will be an essential part of the path forward. On January 21, 2025, Trump publicly announced a joint venture between OpenAI, Oracle, and SoftBank. The project is slated to receive $500 billion in development investment over the next few years to build out massive AI data-processing infrastructure. Trump sees the project as advancing American leadership and security, creating thousands of jobs, and generating major economic benefits worldwide.
What are your thoughts on these recent AI policy changes? Will academic and healthcare research flourish, be unchanged, or be hurt? How should guardrails be set, and are they even possible under forward-leaning, aggressive, market-oriented approaches? Do we have a choice? Will academia lead with standardized patient-data guardrails for AI data collection and analysis? Email us your feedback!
Author

— by Karen Lindsley, DNP, RN, CDE, CCRC, Manager, Georgia CTSA Coordinating Center and Regulatory Knowledge & Support program, 6/2025
Editor's Response
— by Eunji Emily Kim, MS, MA, School of Public Policy, Georgia Institute of Technology
While the tone and emphasis of AI policy have varied between the Biden and Trump administrations, it is important to note that there is no fundamental divergence in their overarching objective: both administrations acknowledge the necessity of balancing AI innovation with responsible oversight. President Biden’s Executive Order 14110 (October 30, 2023) emphasized the development of safe, secure, and trustworthy AI systems, incorporating safeguards such as pre-deployment risk assessments and federal oversight for high-impact models. Meanwhile, President Trump’s Executive Order 14179 (January 23, 2025) underscores the need to remove regulatory barriers to accelerate U.S. leadership in AI. However, even Trump’s order concedes the necessity of engaging with private and public sectors to ensure American values and national security are preserved in this fast-moving domain. In this way, both administrations reflect a dual commitment to technological advancement and governance, differing more in rhetoric than in policy substance.
The differences lie more in the framing and areas of focus rather than in the core principles. For instance, Biden’s approach leaned heavily on invoking the Defense Production Act to mandate information-sharing, framing AI as a domain requiring public accountability. Trump’s framing, in contrast, positions AI advancement as central to economic growth and national security, calling for deregulation as a way to "unleash" potential. Yet both Executive Orders recognize that neither innovation nor regulation can be neglected in building a globally competitive and ethically sound AI ecosystem. These shifts in language and priority are reflected in the official titles and policy sections of EO 14110 and EO 14179, which reinforce the evolving but fundamentally consistent balancing act.
Finally, Executive Orders function more as agenda-setting tools than as mechanisms for detailed regulation. They serve to signal priorities to federal agencies and shape the national discourse, but do not, in themselves, constitute legally binding regulatory regimes. This is why Congressional action remains crucial. As AI technologies mature and begin to affect broader sectors—from healthcare and labor to defense—there is growing momentum for legislative engagement. Indeed, as of early 2025, Congress has introduced multiple bipartisan proposals for AI regulation, including transparency standards for high-risk systems and requirements for independent audits. These developments suggest that while the executive branch sets the tone, the real regulatory infrastructure will likely emerge through legislative processes responding to technological urgency and public demand.
In sum, while there are notable changes in emphasis and language between the Biden and Trump administrations' AI Executive Orders, the core objective—balancing innovation with oversight—remains consistent. These differences, though important, are primarily declarative at this stage. It remains to be seen how these shifts will translate into substantial policy implementation. For a clearer view of how each administration has framed its AI agenda, see the comparative table of Executive Orders.
Main AI Executive Orders by Administration

| | First Trump Administration (2017–2021) | Biden Administration (2021–2025) | Second Trump Administration (2025–present) |
| --- | --- | --- | --- |
| Executive Order Title | Executive Order 13859: "Maintaining American Leadership in Artificial Intelligence" | Executive Order 14110: "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" | Executive Order 14179: "Removing Barriers to American Leadership in Artificial Intelligence" |
| Date Issued | February 11, 2019 | October 30, 2023 | January 23, 2025 |
| Policy Focus | Emphasized sustaining and enhancing U.S. leadership in AI through coordinated federal efforts and investment in AI research and development. | Focused on establishing safeguards for AI development, promoting ethical guidelines, and ensuring AI technologies are safe, secure, and trustworthy. | Aims to eliminate regulations deemed to hinder AI innovation, promoting rapid development to maintain global AI dominance. |
| Regulatory Approach | Encouraged innovation with a light-touch regulatory framework, promoting public trust without imposing heavy regulations. | Implemented stringent guidelines for AI development, including requirements for safety testing, transparency, and accountability. | Advocates for deregulation to accelerate AI advancements, reducing federal oversight and emphasizing free-market innovation. |
| National Security Considerations | Recognized AI as vital to national security, promoting initiatives to maintain technological superiority. | Addressed AI-related national security risks by promoting responsible AI use and preventing potential threats. | Prioritizes AI superiority as essential to national security, emphasizing rapid development and deployment to maintain a strategic edge. |
| Impact on Industry | Centered on enabling innovation, reducing regulatory barriers, and fostering public-private collaboration. | Sought to balance innovation with ethical considerations, potentially imposing constraints on rapid AI development to ensure safety and public trust. | Encourages swift AI innovation by removing perceived regulatory barriers, aiming to unleash the full potential of American AI capabilities. |
| International Stance | Emphasized engagement with international allies and organizations to shape the global AI landscape in alignment with U.S. values. | Advocated for global cooperation in establishing AI standards and regulations to ensure ethical use worldwide. | Focuses on strengthening domestic AI capabilities to outperform international competitors, particularly addressing economic and geopolitical challenges. |
Continue the conversation! Please email us your comments to post on this blog. Enter the blog post # in your email Subject.