Texas is stepping into the artificial intelligence (AI) regulatory arena with the recent passage of the Texas Responsible Artificial Intelligence Governance Act, or “TRAIGA.” Signed into law by Governor Greg Abbott on June 22, 2025, and going into effect January 1, 2026, TRAIGA marks Texas as the second U.S. state, after Colorado, to enact a comprehensive AI law. Unlike the more prescriptive Colorado and EU AI regulations, TRAIGA is designed to balance innovation with public safety while maintaining Texas’s reputation as a business-friendly state.
Whether your company is located in Texas, serves Texas residents, or simply markets or advertises in the state, this law could apply to you. The scope is sweeping, and the implications are significant. If your company is a contractor to any Texas governmental body, it may also have to comply with the same notice and consent obligations that apply to Texas governmental bodies and agencies (healthcare providers and institutions of higher education are excepted).
Under TRAIGA, an “artificial intelligence system” means “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
Who Must Comply?
TRAIGA applies to any person or entity that:
- Develops or deploys an AI system in Texas;
- Offers products or services used by Texas residents; or
- Markets, advertises, or conducts business in the state.
In other words, geographical borders won’t shield businesses from this law. If your AI system touches Texas in any meaningful way, compliance is required.
Key Restrictions and Prohibited Uses
TRAIGA avoids categorizing AI systems by “risk level” as seen in the Colorado and EU models. Instead, it focuses on specific prohibited uses of AI. These include:
- Behavioral Manipulation: AI systems may not be developed or deployed with the intent to incite self-harm, violence, or criminal behavior.
- Social/Behavioral Scoring: Government use of AI for social scoring or behavior-based classifications is banned.
- Constitutional Violations: AI systems may not be designed with the sole intent to infringe upon U.S. constitutional rights.
- Unlawful Discrimination: Only intentional discrimination against protected classes (e.g., race, sex, religion, etc.) is prohibited; disparate impact alone does not prove a violation has occurred.
- Harmful Content Creation: Developing or using AI to produce child pornography, unlawful deepfakes, or sexually explicit material depicting impersonated minors is expressly forbidden.
Importantly, the law hinges on intent, not outcomes, making it more favorable for developers who act in good faith and implement safeguards.
No Private Lawsuits, But Enforcement Has Teeth
Unlike some privacy laws, TRAIGA does not allow private individuals to sue businesses. Enforcement rests solely with the Texas Attorney General, who may seek civil penalties, injunctive relief, and attorneys’ fees. Fines can reach up to $200,000 per violation, with daily penalties for continuing offenses.
However, there is a significant safe harbor provision: businesses may avoid liability if they demonstrate substantial compliance with a recognized AI risk management framework, such as the NIST AI Risk Management Framework (AI RMF, 2023) or the NIST Generative AI Profile (NIST AI 600-1, 2024), or if they discover violations through their own internal audits or adversarial testing.
Regulatory Sandbox and AI Council
TRAIGA provides for an AI Regulatory Sandbox, allowing companies to test AI systems under relaxed regulatory oversight for up to 36 months. This initiative is intended to foster innovation while still maintaining consumer protections.
Additionally, a Texas Artificial Intelligence Council will oversee the ethical use of AI in the state, providing guidance, evaluating risks, and recommending legislative updates. Businesses should monitor this body for future regulatory developments and best practices.
Biometrics and Privacy Law Revisions
TRAIGA amends the Texas Data Privacy and Security Act, which took effect in July 2024, to cover the use of AI, and it clarifies the rules governing biometric data. It exempts AI systems used for security, fraud prevention, or law enforcement from the prior-consent requirements of the Texas Capture or Use of Biometric Identifier Act (CUBI). Commercial uses of biometric data, however, must still comply with CUBI's consent and data-retention standards, especially where the data is used to uniquely identify individuals.
Steps for Businesses to Take Now
With roughly a five-month runway before TRAIGA takes effect, businesses have time to prepare but should act soon. Recommended steps include:
- Establish an AI Governance Team to oversee system development, deployment, and compliance.
- Adopt an AI Risk Management Policy, preferably aligned with the 2023 NIST AI RMF, to position your organization to qualify for the safe harbor.
- Inventory all AI Systems/Tools in use or under development across your organization.
- Audit Current AI Tools to ensure no intentional discrimination or manipulation risks exist.
- Document AI System Modifications to distinguish between AI developers and deployers.
- Consider Joining the Sandbox Program for experimental systems.
- Stay Informed by following guidance from the Texas AI Council.
Key Takeaways
TRAIGA may be a turning point for AI governance in the U.S., offering a model that supports innovation while addressing specific risks. Businesses with AI operations that touch Texas should assess their exposure and begin building compliance frameworks today. As the regulatory environment continues to evolve, early preparation will be key to mitigating risk and maintaining public trust. It also enables businesses to harmonize their AI governance programs across the requirements of Texas, Utah, and Colorado, prospective U.S. legislative and regulatory efforts, the EU AI Act, and emerging regulations in Canada, Brazil, China, and elsewhere. After all, every business developing an AI system will want the ability to market that system globally.
Legal Guidance Recommended
Given the scope of TRAIGA and the significant penalties for noncompliance, businesses are encouraged to consult with experienced legal counsel when developing or auditing their AI governance programs. Counsel can assist in aligning your policies with accepted risk management frameworks, interpreting the law’s safe harbor provisions, and navigating regulatory developments from the Texas AI Council. If your organization is building, deploying, or relying on AI systems in any capacity, early legal involvement is essential to ensure compliance and reduce exposure.
About the Authors
Alan Thiemann serves as a Principal at Conley Rose, where he leads the firm’s Privacy, Data Security, and Testing Practice. With five decades of experience spanning government service, trade associations, and private practice, Alan advises for-profit and non-profit clients on U.S. and international laws governing artificial intelligence, privacy, cybersecurity, and regulatory compliance. He also serves as outside general counsel to several national and international organizations in the psychometric testing, retail technology, and standards development sectors.
Dan Stanger, Of Counsel at Conley Rose, brings his IP expertise to privacy and AI governance compliance, especially for clients who are patenting AI products or internal systems that use AI.