Colorado AI Laws and Regulation (2026)

Colorado made history on May 17, 2024, when Governor Jared Polis signed Senate Bill 24-205 into law, creating the first comprehensive AI consumer protection statute in the United States. The Colorado Artificial Intelligence Act (CAIA) imposes enforceable governance obligations on private-sector organizations that develop or deploy high-risk AI systems. Beyond the comprehensive AI Act, Colorado has also enacted laws addressing election deepfakes and sexually explicit AI-generated content.
This guide provides a detailed analysis of every major Colorado AI law, including the full requirements of the AI Act, its enforcement timeline, and pending amendments. This article is for informational purposes only. Consult an attorney for advice specific to your situation.
The Colorado AI Act (SB 24-205): Overview
The Colorado Artificial Intelligence Act is built around a central concept: preventing algorithmic discrimination. The law defines algorithmic discrimination as any condition where an AI system results in unlawful differential treatment based on protected characteristics such as age, race, color, ethnicity, sex, sexual orientation, gender identity, disability, religion, veteran status, or national origin.
The law focuses specifically on "high-risk artificial intelligence systems," which it defines as AI systems that make, or are a substantial factor in making, "consequential decisions."
What Is a Consequential Decision?
A consequential decision under the CAIA is one that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, services in eight categories:
| Category | Examples |
|---|---|
| Education | Enrollment decisions, scholarship eligibility, academic opportunity access |
| Employment | Hiring, promotion, termination, compensation decisions |
| Financial/Lending | Loan approvals, credit decisions, interest rate determinations |
| Government Services | Eligibility for public benefits, service access |
| Healthcare | Patient triage, treatment recommendations, coverage decisions |
| Housing | Rental applications, mortgage approvals, housing eligibility |
| Insurance | Policy approvals, premium calculations, claims decisions |
| Legal Services | Case evaluation, resource allocation, service eligibility |
In practical terms, any AI system that screens job applicants, approves or denies loans, sets insurance premiums, determines housing eligibility, triages patients, or makes similar decisions falls under the law's requirements.
Developer Duties Under the AI Act
The CAIA places a duty of reasonable care on developers of high-risk AI systems. Under the statute, a developer is a person or business that develops, or intentionally and substantially modifies, an AI system.
Documentation Requirements
Developers must provide deployers with several categories of documentation. First, they must provide a general statement describing the reasonably foreseeable harmful uses of the system and any known risks of algorithmic discrimination. Second, they must supply detailed documentation covering the training data used, system limitations, intended purposes, and the methods used to test for algorithmic discrimination.
Third, developers must make available any documentation that deployers need to conduct their own impact assessments. This creates a chain of accountability where developers cannot simply sell an AI tool and walk away from responsibility.
Ongoing Obligations
Developer duties are not one-time requirements. If a developer discovers that a high-risk AI system it has developed has caused or materially contributed to algorithmic discrimination, it must notify the Colorado Attorney General and all known deployers within 90 days of discovering the issue.
Deployer Duties Under the AI Act
Deployers are the businesses and organizations that use high-risk AI systems. Their obligations under the CAIA are extensive.
Risk Management Policy
Deployers must implement a risk management policy that incorporates principles, processes, and personnel for identifying and mitigating discrimination risks. This policy must be more than a paper exercise. It must be actively maintained and followed as part of the organization's operations.
Impact Assessment Requirements
Deployers must complete an impact assessment for each high-risk AI system. These assessments must be conducted at least annually and within 90 days of any intentional and substantial modification to the system. Each impact assessment must include:
- The purpose, intended use cases, deployment context, and benefits of the system
- An analysis of known or reasonably foreseeable risks of algorithmic discrimination
- A description of the mitigation steps taken to address those risks
- A description of the categories of data processed as inputs and outputs
- An overview of the data categories used to customize the system
- Metrics and transparency measures in place
- A description of post-deployment monitoring and user safeguards
Impact assessments must be retained for at least three years following the final deployment of the system and must be provided to the Attorney General upon request within 90 days.
Consumer Notice Requirements
When a high-risk AI system makes or will be a substantial factor in making a consequential decision about a consumer, the deployer must provide notice to that consumer. The notice requirements are specific and detailed.
Deployers must inform consumers that an AI system is being used. They must provide a description of the system and how it factors into the decision. They must supply contact information for the deployer. All notices must be provided in plain language, in all languages the deployer typically uses, and in formats accessible to consumers with disabilities.
Consumer Rights
Beyond notification, deployers must provide consumers with an opportunity to correct any incorrect personal information that the AI system processed. Consumers must also have an opportunity to appeal an adverse consequential decision. These rights ensure that consumers are not simply subject to automated decisions without recourse.
Annual Review
Deployers must annually review each deployed high-risk AI system to ensure it is not causing algorithmic discrimination. This ongoing monitoring requirement means compliance is not a one-time effort but an ongoing operational commitment.
Public Transparency
Deployers must publish a readily available statement on their website describing the types of high-risk AI systems they deploy and how they manage risks of algorithmic discrimination.
Small Business Exemptions
A deployer with fewer than 50 full-time employees may qualify for reduced requirements if three conditions are met: the deployer does not use its own data to train or substantially customize the high-risk AI system, the deployer limits use to purposes previously disclosed by the developer, and the deployer provides consumers access to the developer's impact assessment.

NIST Safe Harbor Provision
One of the most significant features of the Colorado AI Act is its safe harbor provision. Compliance with nationally or internationally recognized AI risk management frameworks creates a rebuttable presumption of reasonable care.
How the Safe Harbor Works
If a deployer can demonstrate that it has implemented a risk management program aligned with the NIST AI Risk Management Framework (AI RMF) or ISO/IEC 42001, it receives a rebuttable presumption that it exercised reasonable care. This means the burden shifts to the Attorney General to prove that the deployer's compliance was insufficient.
The CAIA specifically cites NIST's AI RMF as a benchmark but allows other frameworks designated by the Colorado Attorney General. This flexible approach means that as international standards evolve, the law can accommodate new frameworks without legislative amendments.
Practical Significance
The safe harbor provision gives businesses a clear compliance roadmap. Rather than guessing what "reasonable care" means, companies can follow established frameworks and gain legal protection. This design encourages adoption of best practices rather than minimum compliance.
Enforcement and Penalties
Attorney General Authority
The Colorado Attorney General has exclusive authority to enforce the AI Act. The Attorney General can also promulgate rules in six areas, including documentation requirements, notices and disclosures, impact assessment requirements, and risk management policies.
Violations of the CAIA are treated as unfair trade practices under the Colorado Consumer Protection Act. While the statute does not specify precise dollar amounts for penalties, unfair trade practices violations in Colorado can result in civil penalties, injunctive relief, and other remedies available under the Consumer Protection Act.
No Private Right of Action
Importantly, the CAIA does not create a private right of action. Individual consumers cannot sue companies directly for violations of the AI Act. All enforcement must go through the Attorney General's office. This design choice limits the potential for frivolous lawsuits while ensuring that legitimate violations can still be addressed.
Affirmative Defenses
Developers and deployers have an affirmative defense if they discover a violation through feedback, testing, or internal reviews and take prompt corrective action. This "discover and cure" provision incentivizes companies to actively monitor their AI systems and fix problems when they find them.
Enforcement Timeline
The CAIA has undergone significant timeline changes since its enactment.
| Date | Event |
|---|---|
| May 17, 2024 | Governor Polis signs SB 24-205 into law |
| August 28, 2025 | Governor signs SB 25B-004, delaying enforcement |
| June 30, 2026 | CAIA obligations take effect |
| June 30, 2027 | Mandatory cure period ends |
The Delay
The original effective date was February 1, 2026. However, Governor Polis called a special legislative session in August 2025 to address concerns about the law. Lawmakers were unable to reach a compromise on substantive amendments, so they instead passed SB 25B-004, which delayed the effective date by five months to June 30, 2026.
Both Governor Polis and Colorado Attorney General Phil Weiser have urged the legislature to amend the CAIA, cautioning that the statute in its current form may impose burdensome and premature obligations on businesses. The Governor's AI Policy Working Group is developing recommendations for legislative action.
Proposed Amendments (SB 25-318)
During the 2025 regular session, SB 25-318 was introduced to modify the CAIA's requirements. The bill would have adjusted the risk management program requirements, modified impact assessment obligations, and refined consumer notification provisions. However, SB 25-318 did not pass during the regular session. The CAIA's final form remains uncertain as further amendments may be proposed before the June 2026 effective date.
Election Deepfake Law (HB 24-1147)
Colorado enacted the Candidate Election Deepfake Disclosures Act (HB 24-1147) on May 24, 2024. The law took effect on July 1, 2024.
Disclosure Requirements
The law requires clear disclaimers on communications that have been generated or substantially altered by AI and that falsely depict what a candidate or elected official has said or done. The required disclosure statement reads: "This [image/audio/video/multimedia] has been edited and depicts speech or conduct that falsely appears to be authentic or truthful."
The law specifies formatting and placement standards for visual, audio, and multimedia communications to ensure disclosures are actually visible and understandable.
Penalties
Communications that fail to include proper disclaimers are subject to civil penalties. A hearing officer may impose a penalty of at least $100 for each violation involving unpaid advertising. For paid advertising, the penalty is at least 10% of the amount paid or spent to advertise the communication containing the undisclosed deepfake.
Private Right of Action
Unlike the AI Act, the deepfake disclosure law does create a private right of action. A candidate who is the subject of a communication with an undisclosed or improperly disclosed deepfake may bring a civil action for injunctive or other equitable relief, damages, or both.

Intimate Deepfake Protections (SB 25-288)
Governor Polis signed SB 25-288 into law on June 2, 2025. This law expands Colorado's existing nonconsensual pornography ("revenge porn") laws to cover AI-generated and altered imagery.
What the Law Covers
SB 25-288 prohibits creating or distributing sexually explicit digital images generated or altered using AI without the depicted person's consent. The law protects both adult and minor victims.
Penalties
| Violation | Penalty |
|---|---|
| Creating sexually explicit deepfakes without consent | Up to 18 months in jail |
| Civil damages for victims | Up to $150,000 |
| Sharing or threatening to share deepfake of a minor | Additional criminal penalties |
The law creates a private right of action, allowing victims to sue individuals who disclosed intimate deepfake images. This gives victims a direct legal remedy without relying on law enforcement or the Attorney General's office.
Context
Colorado became the 38th state to pass a law specifically penalizing sexually explicit deepfakes when SB 25-288 was enacted. The law closed a gap in Colorado's existing revenge porn statutes, which had not previously covered AI-generated material.

Federal AI Policy and Colorado
Colorado's comprehensive AI Act makes it a prime target for the federal government's efforts to establish AI policy uniformity.
Executive Order 14365
On December 11, 2025, President Trump issued Executive Order 14365, directing the Department of Justice to establish an AI Litigation Task Force to challenge state AI laws. The order specifically targets state laws that the administration believes may obstruct national AI policy.
Impact on the Colorado AI Act
The Colorado AI Act could face federal scrutiny under several theories. The AI Litigation Task Force could challenge the CAIA as an unconstitutional regulation of interstate commerce, arguing that it burdens AI companies operating nationally. The executive order also ties certain federal funding to state compliance with federal AI policy goals.
However, the executive order has significant limitations. Federal preemption typically requires congressional action, not executive orders. The order also carves out child safety protections and state government AI procurement from potential preemption. These carve-outs could protect Colorado's deepfake laws.
On March 20, 2026, the Trump Administration released its "National Policy Framework for Artificial Intelligence," calling on Congress to pass federal AI legislation. If Congress acts, the Colorado AI Act could face preemption challenges. For now, the CAIA remains in effect and enforceable under its current timeline.
Governor Polis's Position
Governor Polis has expressed concerns about the burden the CAIA may place on businesses, which aligns with some of the federal administration's concerns about state-level AI regulation. However, Colorado has shown no indication of repealing the law. Instead, the focus has been on amending it to make compliance more practical while preserving consumer protections.
Summary of Colorado AI Laws
| Law | Year | Subject | Status |
|---|---|---|---|
| SB 24-205 | 2024 | Comprehensive AI consumer protection | Effective June 30, 2026 |
| HB 24-1147 | 2024 | Election deepfake disclosures | In effect (July 1, 2024) |
| SB 25B-004 | 2025 | AI Act enforcement delay | Signed Aug. 28, 2025 |
| SB 25-288 | 2025 | Intimate deepfake protections | Signed June 2, 2025 |
| SB 25-318 | 2025 | AI Act amendments | Did not pass |
More Colorado Laws
- Colorado Recording Laws
- [Colorado Data Privacy Laws](/us-laws/data-privacy-laws/colorado-data-privacy-laws)
- Colorado Surveillance Camera Laws
- Colorado Background Check Laws
- Colorado Whistleblower Laws
Sources and References
- SB 24-205 Consumer Protections for Artificial Intelligence (leg.colorado.gov)
- SB 24-205 Signed Text (leg.colorado.gov)
- HB 24-1147 Candidate Election Deepfake Disclosures (leg.colorado.gov)
- Colorado Secretary of State Deepfakes Press Release (coloradosos.gov)
- NAAG Deep Dive into Colorado AI Act (naag.org)
- FPF Policy Brief: The Colorado AI Act (leg.colorado.gov)
- CDT FAQ on Colorado AI Act (SB 24-205) (cdt.org)
- Executive Order on AI National Policy Framework (whitehouse.gov)
- Colorado SB 25-288 Intimate Deepfake Protections (leg.colorado.gov)
- NIST AI Risk Management Framework (nist.gov)