Connecticut AI Laws and Regulation (2026)

Connecticut has emerged as one of the more active states in debating artificial intelligence regulation, even as its most ambitious proposals have stalled short of becoming law. The state has taken a layered approach, enacting targeted measures on government AI use, data privacy, and deepfake crimes while continuing to pursue broader private-sector regulation.
This guide covers enacted Connecticut AI laws, pending legislation in the 2026 session, and how federal policy may affect the state's regulatory landscape. If you have specific questions about how these laws apply to your situation, consult an attorney.
Enacted AI Laws in Connecticut
Connecticut has passed several laws that directly address or significantly impact artificial intelligence. While the state has not yet enacted a single comprehensive AI statute covering the private sector, these laws collectively create important obligations for businesses, government agencies, and individuals.
Public Act 23-16: Government AI Accountability (SB 1103)
Connecticut took its first major step into AI regulation with Public Act 23-16, signed by Governor Ned Lamont on June 7, 2023. This law established a framework for how state agencies may use artificial intelligence and automated decision-making tools.
The law requires state agencies to conduct impact assessments before deploying AI systems. Agencies cannot use AI systems that have been shown to result in unlawful discrimination or disparate impact against individuals or groups based on protected characteristics.
Key provisions of Public Act 23-16 include:
- Impact assessment requirements for any AI system used in government decision-making
- Anti-discrimination safeguards preventing deployment of AI tools shown to produce disparate impacts
- Inventory mandate requiring state agencies to catalog their use of AI and automated systems
- AI Bill of Rights working group tasked with developing ethical guidelines and best practices for AI governance
The law also created a permanent working group appointed by legislative leaders and the governor. The group includes representatives from AI development companies, academics with expertise in technology and public policy, and members of the Connecticut Academy of Science and Engineering. Its mandate is to develop an AI bill of rights, recommend best practices for ethical AI use in state government, and advise on potential private-sector regulation.
Source: Connecticut General Assembly, Public Act 23-16
Public Act 25-113: AI Data Privacy Disclosures (SB 1295)
Governor Lamont signed Public Act 25-113 on June 25, 2025, amending the Connecticut Data Privacy Act (CTDPA) to add mandatory disclosure requirements related to artificial intelligence. The amendments take effect on July 1, 2026.
Under this law, businesses subject to the CTDPA must update their consumer-facing privacy notices to include a clear and conspicuous statement disclosing whether they collect, use, or sell personal data for the purpose of training AI systems. Controllers must also specifically disclose whether they process personal data to train large language models.
The amendments also significantly broadened the CTDPA's applicability thresholds for entities doing business in Connecticut:
| Applicability Trigger | Previous Threshold | New Threshold |
|---|---|---|
| Personal data processing | 100,000+ consumers | 35,000+ consumers |
| Sensitive data processing | Covered only if volume thresholds were met | Covered regardless of volume |
| Personal data sales | 25,000+ consumers, with 25%+ of gross revenue from data sales | Covered regardless of volume |
Additional AI-related provisions in Public Act 25-113 include:
- Impact assessments for controllers engaging in automated profiling that produces significant effects on consumers, effective August 1, 2026
- Opt-out rights allowing consumers to prevent automated systems from using their personal data for significant decisions in housing, insurance, healthcare, education, criminal justice, and employment
- Minor protections with categorical prohibitions on processing minors' data for targeted advertising or sale
Source: Connecticut General Assembly, Public Act 25-113
Attorney General's AI Guidance Memorandum
Connecticut Attorney General William Tong released a memorandum clarifying that existing state laws, including anti-discrimination statutes, consumer protection laws, and data privacy requirements, apply to artificial intelligence in the same way they apply to traditional business practices. The memorandum emphasizes that businesses cannot use AI as a shield against liability for discrimination in employment, housing, insurance, or lending based on protected characteristics.
This guidance is significant because it signals that Connecticut will enforce existing legal frameworks against AI-driven harm, even in the absence of AI-specific legislation.
Source: Connecticut Attorney General's Office
Deepfake Laws in Connecticut
Public Act 25-168: Synthetic Intimate Images
Connecticut criminalized the dissemination of AI-generated intimate images through Public Act 25-168, effective October 1, 2025. The law creates the crime of "unlawful dissemination of an intimate synthetically created image."
To be convicted under this law, the prosecution must prove that the image was intimate and synthetically created or altered, that it was shared or threatened to be shared without the depicted person's consent, and that the person acted intentionally.
Penalties for Deepfake Intimate Images in Connecticut:
| Offense Level | Classification | Penalty or Consequence |
|---|---|---|
| Basic offense (limited distribution) | Class A Misdemeanor | Up to 1 year imprisonment, up to $2,000 fine |
| Aggravated offense (wider distribution or intent to harm) | Class D Felony | Up to 5 years imprisonment, up to $5,000 fine |
| Domestic violence context | Additional protections | Protective orders, next-day arraignment |
When the offense involves family or household members or dating partners, it can be classified as a family violence crime. This triggers additional consequences including next-day arraignment, involvement of Family Relations, and potential protective orders.
Source: Connecticut Criminal Law Blog, PA 25-168 Analysis

Election-Related Deepfakes
Connecticut considered legislation requiring disclosure of AI-generated content in political communications as part of SB 2 in 2025. While SB 2 did not pass, its provisions would have mandated disclosure of deceptive AI-generated political communications, focusing on paid campaign advertisements intended to influence election results.
The 2026 legislative session's SB 5 includes provisions related to synthetic digital content detection requirements, which could address election deepfakes if enacted.
AI in Employment and Hiring
Connecticut does not yet have a standalone law specifically regulating AI in employment decisions, but several measures address this area.
CTDPA Automated Decision Protections
Under the amended CTDPA (Public Act 25-113), consumers have the right to opt out of automated decision-making systems that produce significant effects on them, including in the employment context. This means job applicants and employees in Connecticut can object to AI-driven decisions about hiring, promotions, or termination.
Controllers that use automated profiling to make decisions producing significant effects on consumers must conduct impact assessments under the law's provisions taking effect in August 2026.
SB 2's Employment Provisions (Did Not Pass)
The 2025 version of SB 2 would have created comprehensive obligations for employers using high-risk AI systems in employment decisions. Under the bill, if an AI system contributed to an adverse employment decision, the affected individual would have had:
- The right to an explanation of how the AI system reached its conclusion
- The right to know what personal data was used in the decision
- The right to correct personal data used in the decision
- The right to appeal the decision for human review
These provisions did not become law because SB 2 stalled in the House.
Pending SB 5 Employment Provisions (2026)
The 2026 session's SB 5 includes anti-discrimination protections and disclosure requirements for companies using AI automated systems in employment decisions. Specifically, employers would need to inform job applicants when AI tools such as resume screeners or interview analysis software are used in the hiring process and give applicants the right to appeal if they suspect discrimination.

Insurance Industry AI Regulation
Connecticut has been proactive in regulating AI use in the insurance sector. The Connecticut Insurance Department requires all domestic insurers to complete an annual data certification confirming their use of Big Data and AI complies with applicable anti-discrimination laws.
This requirement predates much of the broader AI legislation, reflecting the insurance industry's early adoption of algorithmic underwriting and claims processing tools. Insurers must demonstrate that their AI systems do not produce discriminatory outcomes based on protected characteristics.
Pending AI Legislation: 2026 Session
The Connecticut General Assembly's 2026 session, running from February 4 through May 6, 2026, features several significant AI bills.
Senate Bill 5: Online Safety Act
SB 5 is the most comprehensive AI bill under consideration. At 97 pages, it addresses a wide range of AI-related topics and would create substantial new regulatory infrastructure.
Key provisions of SB 5 include:
- AI Policy Office: Establishes a new state office to oversee AI research and recommend policies
- AI Chatbot safety requirements: Requires AI chatbot operators to detect suicidal ideation and self-harm indicators, with protocols for providing mental health resources
- Employment AI transparency: Mandates disclosure when AI is used in hiring decisions, with anti-discrimination protections and appeal rights
- Synthetic content detection: Requires subscription-based AI providers and frontier developers to ensure AI-generated content is detectable as synthetic
- AI Academy and workforce training: Expands the Connecticut AI Academy for workforce development
- AI Learning Laboratory Program: Creates educational programs around AI technology
- Technology Advisory Board: Establishes a new advisory body for AI policy guidance
- Catastrophic risk definitions: Sets standards for identifying and managing high-risk AI capabilities
Source: WSHU, CT Lawmakers Consider AI Regulation

Senate Bill 86: AI for Economic Development
SB 86 focuses on using AI to advance economic development in Connecticut. The bill would establish an AI regulatory sandbox program, allowing companies to test AI innovations in a controlled regulatory environment.
Senate Bill 417: AI Small Business Program
SB 417 would require the Department of Economic and Community Development to develop a plan for an AI small business program, aimed at helping smaller companies adopt and benefit from AI technologies.
Legislative History: The SB 2 Saga
Understanding Connecticut's current AI legislative landscape requires knowing the history of SB 2, which has been the state's most ambitious AI regulation attempt.
In 2024, the Connecticut Senate passed the first version of SB 2, which would have been comparable in scope to the EU AI Act. Governor Lamont opposed the measure, and House Speaker Matt Ritter declined to bring it to the House floor.
In 2025, a revised SB 2 passed the Senate on a 32-4 bipartisan vote, with all 25 Democrats and seven of 11 Republicans supporting it. The revised version included amendments that watered down initial requirements for impact assessments and algorithmic discrimination mitigation. Despite these concessions, Lamont again threatened a veto, and the bill was never called in the House.
The core disagreement has been between pro-regulation Senate Democrats who want comprehensive AI accountability and Governor Lamont's administration, which has expressed concern that strict AI regulations could damage Connecticut's technology sector.
Source: CT Mirror, Will CT Pass AI Legislation?
Federal AI Policy and Connecticut
Executive Order 14365: State Preemption Concerns
In December 2025, President Trump signed an executive order titled "Eliminating State Law Obstruction of National Artificial Intelligence Policy," which aims to establish a federal framework for AI regulation and discourage state-level AI laws.
The order directs federal agencies to identify state laws that may conflict with federal AI policy and contemplates using Department of Justice litigation, administrative reinterpretation of existing laws, and conditional federal funding to challenge state regulations.
Connecticut's response has been firm. Attorney General William Tong has stated that "Attorneys general are united in staunch opposition to any effort to restrain states' abilities to pass commonsense AI regulations to fill the vacuum left by federal inaction."
However, legal experts note important limitations on the executive order's practical impact:
- Federal preemption typically requires congressional action, not executive orders alone
- The order provides guidance for federal agencies but does not independently displace state laws
- State AI laws remain in force unless courts enjoin them or Congress passes preemptive legislation
- Connecticut lawmakers have signaled they will continue pursuing AI regulation regardless
The tension between federal preemption efforts and state regulatory ambitions is likely to shape Connecticut's AI policy debates throughout 2026 and beyond.
Source: White House, Executive Order on AI Policy Framework
Key Dates and Timeline
| Date | Event |
|---|---|
| June 7, 2023 | Governor Lamont signs Public Act 23-16 (government AI regulation) |
| July 1, 2023 | Public Act 23-16 Sections 1-3 take effect |
| October 1, 2023 | Public Act 23-16 Section 4 takes effect |
| April 2024 | Senate passes first version of SB 2; does not advance in House |
| May 15, 2025 | Senate passes revised SB 2 on 32-4 vote; stalls in House again |
| June 25, 2025 | Governor Lamont signs Public Act 25-113 (CTDPA AI amendments) |
| October 1, 2025 | Public Act 25-168 takes effect (deepfake intimate images) |
| December 2025 | Federal Executive Order 14365 on AI state preemption issued |
| February 4, 2026 | 2026 legislative session opens with SB 5, SB 86, SB 417 |
| July 1, 2026 | Public Act 25-113 AI disclosure requirements take effect |
| August 1, 2026 | Impact assessment requirements for automated profiling take effect |
Sources and References
- Connecticut General Assembly, Public Act 23-16 (SB 1103) - Government AI Regulation (cga.ct.gov)
- Connecticut General Assembly, Public Act 25-113 (SB 1295) - CTDPA AI Amendments (cga.ct.gov)
- Connecticut Attorney General, Report on the CTDPA (portal.ct.gov)
- White House, Executive Order on AI State Preemption (whitehouse.gov)
- WSHU, CT Lawmakers Consider AI Regulation Bills (wshu.org)
- CT Mirror, Will CT Pass AI Legislation This Year? (ctmirror.org)
- CT Mirror, What Are the New AI Laws in Connecticut? (ctmirror.org)
- Future of Privacy Forum, Connecticut SB 2 Analysis (fpf.org)
- Connecticut General Assembly, SB 2 Bill Analysis (cga.ct.gov)