Legal Update Article

Trump’s AI EO: Reducing Regulatory Fragmentation Not Employer Responsibility

Takeaways

  • The AI EO attempts to establish a unified national policy for artificial intelligence, directing federal agencies to challenge state AI laws that conflict with federal objectives. However, it does not change existing antidiscrimination statutes governing employment decisions.
  • Employer liability for AI-assisted employment decisions remains anchored in long-standing civil rights laws, which apply regardless of whether decisions are made by humans or algorithms.
  • Employers should consider evaluating AI-influenced decisions under traditional discrimination frameworks, maintain documentation supporting job-relatedness, and use adaptable governance processes.


The White House’s Dec. 11, 2025, “Ensuring a National Policy Framework for Artificial Intelligence” executive order (EO) establishes federal policy to coordinate a national approach to artificial intelligence and reduce regulatory fragmentation.

The EO directs federal agencies to assess and, where appropriate, challenge state artificial intelligence (AI) laws the Administration views as inconsistent with federal objectives, while signaling the potential use of federal authority to advance a unified national framework. It does not alter the antidiscrimination statutes that have long governed employment decisions. Those statutes remain the central legal framework governing employer exposure.

Two distinct bodies of law govern this legal landscape for employers:

  • AI-specific statutes that regulate how automated tools are built and deployed; and
  • Long-standing civil rights laws that regulate the legality of employment decisions themselves.

The EO speaks to the first category; employment liability generally arises under the second. For employers, the task is to manage compliance obligations under both frameworks at once.

What the EO Reaches — and What It Doesn’t

The EO pushes a unified national approach to AI and directs federal agencies to identify state AI statutes that may conflict with federal priorities. It creates a Department of Justice task force to pursue those efforts and instructs the Secretary of Commerce to catalogue state requirements the Administration views as burdensome, reflecting the Administration’s stated interest in using available federal tools to influence state AI policy.

None of these directives reach the core discrimination laws, however. Those laws apply regardless of the technology used, and their coverage does not turn on whether a tool is human-driven or algorithmic. Title VII of the Civil Rights Act, the Americans with Disabilities Act, the Age Discrimination in Employment Act, Section 1981, and analogous state statutes continue to govern employment practices because their mandates attach to the nature of the decision, not the mechanism by which the decision is made.

Civil Rights Law Still Anchors All AI-Related Employment Risk

For employers, the legally relevant question is not simply whether a tool uses AI, but whether the practice produces unlawful discrimination. Existing law already provides that framework.

Federal antidiscrimination statutes are unchanged. They apply to any discriminatory employment practice. The introduction of AI has not changed the underlying legal doctrines that apply to employment decisions.

The Uniform Guidelines continue to guide the analysis where applicable. The Uniform Guidelines on Employee Selection Procedures, codified in federal regulations, remain the principal federal framework for evaluating selection tools. When an automated tool functions as a selection procedure, the Uniform Guidelines supply familiar principles: job-relatedness, business necessity, and the evaluation of disparities. Regulators continue to reference these concepts when assessing tools that influence employment decisions.

State civil rights laws remain operative. Even as states adopt AI-specific regulatory approaches, the independent antidiscrimination obligations imposed by state law persist. These laws apply to employment decisions even when decisions are informed by automated tools.

The result is straightforward: changes in AI-specific regulation do not eliminate the baseline standards that determine employment liability.

AI-Specific Statutes and Antidiscrimination Laws Operate on Different Tracks

Two bodies of law govern employers’ use of automated tools.

AI-specific statutes, such as Colorado’s AI Act or emerging California rules, regulate how AI systems are built, deployed, or disclosed. These laws create governance obligations and are the focus of the Administration’s preemption efforts.

Civil rights statutes, by contrast, regulate the legality of employment decisions themselves. They apply regardless of technology and stand entirely outside the EO’s preemption objectives.

Courts Are Already Applying Traditional Principles to AI Tools

Courts are increasingly evaluating automated hiring and screening tools, and they are evaluating them under familiar civil rights principles. Plaintiffs assert familiar theories (disparate impact, disparate treatment, and evolving vendor-agency claims), and courts have permitted several cases to proceed under existing law. The EO does not alter this trajectory.

Why Reviewing AI-Assisted Decisions Still Matters

Courts and regulators continue to evaluate employment practices by examining their real-world effects, including those shaped by automated tools. Employers should therefore consider analyzing how AI-assisted decisions operate in practice, identifying meaningful patterns, and assessing whether those patterns reflect disparities that may warrant further legal evaluation or support for the tool’s job-relatedness.

Where appropriate, employers should conduct these assessments under attorney-client privilege so that findings are reviewed and maintained in a protected manner.

These expectations endure regardless of how AI-specific statutes evolve or how broader legal debates unfold.

Practical Guidance for Employers

  • Evaluate AI-influenced decisions under traditional discrimination frameworks. Identify where automated tools affect hiring, promotion, or other decisions, and apply federal and state civil-rights standards to those steps.
     
  • Maintain documentation supporting job-relatedness. Keep clear records of criteria and business rationale and evaluate validation evidence when tools influence employment outcomes.
     
  • Track preemption efforts without relying on them. Even if some state AI statutes create additional frameworks, employers should not expect any reduction in exposure under civil rights laws.
     
  • Use adaptable governance processes. Build repeatable review structures that can adjust as tools, regulations, and business needs evolve.

The Bottom Line

The EO may reshape certain AI-governance rules, but it does not alter the laws that most directly affect employers. Title VII and analogous state statutes continue to govern employment decisions, regardless of how those decisions are made.

Preemption debates may shift compliance burdens for developers or vendors, but they do not change the core liability framework. Employers should therefore ground their AI governance in long-standing antidiscrimination law — the framework that will continue to guide compliance and legal obligations.

* * *

Jackson Lewis attorneys are closely tracking the evolving federal and state landscape on AI and how courts are applying traditional civil rights principles to automated tools. If your organization is evaluating AI-assisted employment decisions or updating related governance processes, our team can help assess practical risks and compliance considerations. Please reach out to your Jackson Lewis attorney with any questions.

© Jackson Lewis P.C. This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Jackson Lewis and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome. 

Focused on employment and labor law since 1958, Jackson Lewis P.C.’s 1,100+ attorneys located in major cities nationwide consistently identify and respond to new ways workplace law intersects business. We help employers develop proactive strategies, strong policies and business-oriented solutions to cultivate high-functioning workforces that are engaged and stable, and share our clients’ goals to emphasize belonging and respect for the contributions of every employee. For more information, visit https://www.jacksonlewis.com.