Kentucky AI Laws and Regulation (2026)

Kentucky has emerged as a national leader in state-level AI governance. With the passage of Senate Bill 4 in 2025, Kentucky created one of the most structured frameworks for regulating AI use in government, complete with a dedicated governance committee, risk-based classification system, transparency requirements, and election integrity provisions. Combined with the Kentucky Consumer Data Protection Act's AI profiling rules and aggressive early enforcement against an AI chatbot company, Kentucky offers a model that other states are watching closely.
This guide covers Kentucky's enacted AI laws, the SB 4 governance framework in detail, deepfake protections, the KCDPA's AI-related provisions, pending legislation, and how federal AI policy intersects with Kentucky law.
This article is for informational purposes only and does not constitute legal advice. Consult a licensed Kentucky attorney for guidance on specific situations.
Kentucky SB 4: The AI Governance Framework
Senate Bill 4, signed by Governor Andy Beshear on March 24, 2025, is Kentucky's landmark AI legislation. The bill passed with overwhelming bipartisan support, with votes of 30-3 in the Senate and 86-10 in the House. The law took effect immediately upon the governor's signature.
SB 4 was the direct result of recommendations from Kentucky's Artificial Intelligence Task Force, established under HCR 38 in 2024 and co-chaired by Representative Josh Bray and Senator Amanda Mays Bledsoe.
The AI Governance Committee
Under KRS 42.731, the Commonwealth Office of Technology (COT) must create an Artificial Intelligence Governance Committee. The committee has the following duties.
Develop policy standards. The committee must create policy standards and guiding principles to mitigate risks and protect the data and privacy of Kentucky citizens and businesses. These standards must adhere to the latest version of ISO/IEC 42001, the International Organization for Standardization's standard for AI management systems.
Establish technology standards. The committee must set protocols and requirements for the use of generative AI and high-risk AI systems across state government.
Ensure transparency. All AI systems used by government must be documented and their use disclosed to the public.
Maintain a centralized registry. The committee must maintain a current inventory of all generative AI systems and high-risk AI systems used by state government agencies.
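SB 4 does not prescribe a data format for this inventory, but the duty can be illustrated with a small sketch. Everything below is hypothetical: the field names and registration rule are our own reading of the statute, not COT guidance.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in a hypothetical agency AI inventory (illustrative fields only)."""
    system_name: str
    agency: str
    is_generative: bool        # generative AI systems must be inventoried under SB 4
    is_high_risk: bool         # high-risk AI systems must also be inventoried
    public_disclosure_url: str

def must_register(entry: AISystemEntry) -> bool:
    """SB 4's registry duty covers generative and high-risk systems."""
    return entry.is_generative or entry.is_high_risk

# Example: a generative chatbot on an agency website belongs in the registry
entry = AISystemEntry(
    system_name="Benefits FAQ Chatbot",
    agency="Cabinet for Health and Family Services",
    is_generative=True,
    is_high_risk=False,
    public_disclosure_url="https://example.ky.gov/ai-disclosures",
)
print(must_register(entry))  # True
```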
Defining High-Risk AI Systems
SB 4 introduces a risk-based classification approach. A "high-risk artificial intelligence system" is defined as any AI system that is a "substantial factor" in the decision-making process or is specifically intended to autonomously make, or be a substantial factor in making, a "consequential decision."
A "consequential decision" is defined as any decision that has a material legal or similarly significant effect on the provision or denial of services, cost, or terms to any citizen or business.
The definition explicitly excludes several categories of AI systems.
- Systems performing narrow procedural tasks
- Systems that improve the result of a completed human activity
- Systems that detect decision-making patterns or deviations from previous patterns but are not meant to replace or influence human assessment without human review
- Systems that perform preparatory tasks in assessments relevant to consequential decisions
This exclusion framework is designed to avoid over-regulating routine AI applications while maintaining oversight of systems that materially affect people's rights and access to services.
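The statutory test can be paraphrased in code. The sketch below is an illustrative reading of the SB 4 definitions, not a compliance tool; the predicate names are our own shorthand for the statutory categories.

```python
def is_high_risk(
    substantial_factor_in_consequential_decision: bool,
    autonomously_makes_consequential_decision: bool = False,
    *,
    narrow_procedural_task: bool = False,
    improves_completed_human_activity: bool = False,
    pattern_detection_with_human_review: bool = False,
    preparatory_task_only: bool = False,
) -> bool:
    """Illustrative paraphrase of SB 4's high-risk AI test (not legal advice)."""
    # Any statutory exclusion takes the system out of the high-risk category
    excluded = (
        narrow_procedural_task
        or improves_completed_human_activity
        or pattern_detection_with_human_review
        or preparatory_task_only
    )
    if excluded:
        return False
    # Otherwise, high-risk if it autonomously makes, or is a substantial
    # factor in making, a consequential decision
    return (
        substantial_factor_in_consequential_decision
        or autonomously_makes_consequential_decision
    )

# A benefits-eligibility scorer that drives the final decision: high-risk
print(is_high_risk(True, True))  # True
# A spellchecker cleaning up an already-written denial letter: excluded
print(is_high_risk(True, improves_completed_human_activity=True))  # False
```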

Transparency and Disclosure Requirements
SB 4 requires government bodies to disclose the use of generative AI in two key situations.
Decisions affecting citizens. When a governmental body uses generative AI in rendering decisions related to citizens, it must disclose that AI was involved in the process.
Public output. When AI is used to produce public-facing output, the use of AI must be disclosed.
Citizens who receive a consequential decision involving AI have the right to appeal that decision, ensuring human review of AI-driven government actions.
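A minimal sketch of how an agency system might pair the decision with the required disclosure and appeal path. This is hypothetical scaffolding; SB 4 mandates the disclosure and the right to appeal, but the wording and structure here are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionNotice:
    """Hypothetical notice accompanying an AI-assisted government decision."""
    decision: str
    ai_was_used: bool
    appeal_instructions: Optional[str] = None

def render_notice(n: DecisionNotice) -> str:
    lines = [n.decision]
    if n.ai_was_used:
        # SB 4-style disclosure: tell the citizen AI was involved...
        lines.append("Notice: generative AI was used in rendering this decision.")
        # ...and point them to the human-review appeal path
        lines.append(n.appeal_instructions or "You may appeal this decision.")
    return "\n".join(lines)

notice = DecisionNotice(
    decision="Your permit application was denied.",
    ai_was_used=True,
    appeal_instructions="To request human review, contact the agency within 30 days.",
)
print(render_notice(notice))
```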
Annual Reporting
Each state cabinet must submit an annual report to COT by December 1 identifying potential beneficial uses of AI within their operations. This creates a forward-looking planning process that encourages innovation while maintaining accountability.
Enforcement and Consequences
SB 4 establishes differentiated consequences for violations.
Agency-level accountability. State agencies deploying AI without required COT approval may face administrative consequences through standard government accountability mechanisms.
Employee accountability. Individual employees who violate AI policies may face personnel actions.
No private right of action. There is no private right of action for general AI governance violations. Enforcement occurs through government oversight and internal accountability structures.
Election Integrity Provisions
SB 4 includes significant provisions addressing AI-generated content in political campaigns and elections.
Disclosure Requirements
The law bans the unreported release of AI-generated content that fraudulently depicts individuals in political contexts. Anyone using AI-generated content in political messaging must provide clear disclosure that the content was created using artificial intelligence.
Legal Remedies for Targeted Candidates
SB 4 creates legal remedies for candidates targeted by deceptive AI-generated media. Individuals depicted in fraudulent AI-generated political content can pursue civil litigation against those who released the synthetic media without proper disclosure.
This makes Kentucky one of 28 states as of September 2025 that have enacted laws specifically addressing deepfakes in political communications.
Kentucky Consumer Data Protection Act and AI
The Kentucky Consumer Data Protection Act (KCDPA) took effect on January 1, 2026, adding important protections that directly affect AI systems operating in Kentucky.
AI Profiling Rights
The KCDPA gives consumers the right to opt out of profiling that produces legal or similarly significant effects. Under the law, "profiling" means any form of automated processing performed on personal data to evaluate, analyze, or predict personal aspects related to an identified or identifiable person, including their economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.
This definition captures many AI applications that analyze consumer data to make predictions or automated decisions.
Data Protection Impact Assessments
The KCDPA requires businesses to conduct and document data protection impact assessments for processing activities that present reasonably foreseeable risks, including profiling that may result in the following.
- Unfair or deceptive treatment of consumers
- Disparate impact on protected groups
- Significant financial, physical, or reputational injury
This requirement means businesses deploying AI systems that profile consumers in Kentucky must proactively assess and document the risks those systems create.
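A screening step of that kind might be sketched as a simple checklist keyed to the three risk categories listed above. The question wording and the any-risk-triggers rule are illustrative assumptions, not a substitute for a legal assessment.

```python
# Hypothetical DPIA screening checklist mirroring the KCDPA risk categories
# listed above (illustrative only).
DPIA_RISK_QUESTIONS = {
    "unfair_or_deceptive_treatment":
        "Could the profiling treat consumers unfairly or deceptively?",
    "disparate_impact":
        "Could the profiling have a disparate impact on protected groups?",
    "significant_injury":
        "Could the profiling cause significant financial, physical, "
        "or reputational injury?",
}

def dpia_required(answers: dict) -> bool:
    """A DPIA is indicated if any listed risk is reasonably foreseeable."""
    return any(answers.get(key, False) for key in DPIA_RISK_QUESTIONS)

print(dpia_required({"disparate_impact": True}))  # True
print(dpia_required({}))                          # False
```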

First Enforcement Action: Character.AI Lawsuit
On January 8, 2026, just seven days after the KCDPA took effect, Attorney General Russell Coleman filed Kentucky's first enforcement action under the new law. The lawsuit targeted Character Technologies and its product, Character.AI.
The complaint filed in Franklin Circuit Court alleges that Character.AI preyed on children by allowing them to engage in harmful conversations with AI chatbots, including sexually explicit content, promotion of self-harm, and encouragement of substance use.
Key allegations include the following.
KCDPA violations. The company failed to obtain verifiable parental consent before collecting and processing children's personal data, as required by the KCDPA in accordance with the Children's Online Privacy Protection Act (COPPA).
Consumer protection violations. The complaint also alleges unfair, false, misleading, and deceptive acts under the Kentucky Consumer Protection Act, seeking $2,000 per violation.
Relief sought. The AG seeks injunctive relief under the KCDPA and monetary penalties under consumer protection law.
This early enforcement action signaled Kentucky's willingness to use its new privacy law aggressively against AI companies that fail to protect vulnerable users.
Kentucky's Deepfake and CSAM Laws
Kentucky has enacted laws addressing AI-generated exploitative content, particularly involving minors.
KRS Chapter 531: Pornography and Exploitation of Minors
Kentucky Revised Statutes Chapter 531 addresses pornography and the sexual exploitation of minors. Significant updates to these statutes took effect July 15, 2024, including provisions that cover AI-generated and computer-synthesized material.
Kentucky is one of 46 states that have enacted laws criminalizing AI-generated or computer-edited child sexual abuse material. Under KRS Chapter 531, the following activities involving AI-generated CSAM are criminal offenses.
- Use of a minor in a sexual performance (KRS 531.310)
- Promoting a sexual performance by a minor (KRS 531.320)
- Possession or viewing of material portraying a sexual performance by a minor (KRS 531.335)
- Distribution of such material (KRS 531.340)
These provisions apply regardless of whether the material was created using real images or was entirely AI-generated.
Federal Protections: The TAKE IT DOWN Act
The federal TAKE IT DOWN Act, signed by President Trump, provides additional protection for Kentucky residents by making it a federal crime to publish or threaten to share non-consensual intimate images, including AI-generated deepfakes. Social media platforms must remove such content within 48 hours of notification by a victim.

The 2024 AI Task Force and Its Legacy
Kentucky's current AI regulatory framework grew directly out of the Artificial Intelligence Task Force established by House Concurrent Resolution 38 in 2024. Co-chaired by Representative Josh Bray and Senator Amanda Mays Bledsoe, the task force issued 11 recommendations in November 2024 that shaped SB 4 and the state's broader AI policy direction.
Key recommendations included the following.
- Establish policy standards for AI use by state government, including a framework for ethical decision-making, approval processes, required disclosures, and data privacy protections
- Urge the federal government to take immediate action on AI regulation
- Promote and protect the integrity of Kentucky elections through responsible AI use
- Establish a state AI governance framework focused on data privacy, ethical standards, transparency, and accountability
- Promote AI education and workforce development through integration of AI into educational curricula
Senator Bledsoe has continued to lead the task force into its second phase, focusing on implementation of SB 4 and identifying areas where additional legislation may be needed.
Pending Legislation in the 2026 Session
The 2026 Kentucky General Assembly session has addressed several AI-adjacent issues.
HB 227: Social Media Age Verification and Algorithmic Protections for Minors
House Bill 227 would require social media companies to verify user age and prohibit the use of addictive algorithms targeting minors. The bill passed the full House 96-0 on March 9, 2026, and is now before the Senate.
While not exclusively an AI bill, HB 227 addresses AI-driven recommendation algorithms that social media platforms use to target content at younger users.
HB 567: Public Records and AI-Generated Requests
House Bill 567 would allow public agencies to require photo identification from individuals requesting public records. The bill responds to concerns that state agencies have been overwhelmed by AI-generated records requests, which consume staff time and resources.
AI Data Center Legislation
Kentucky has emerged as a target for AI data center development, prompting significant legislative activity. In 2025, the legislature passed a law exempting data centers from sales and use taxes on computer equipment for 50 years. The 2026 session has focused on establishing location requirements, protecting utility ratepayers from cost increases, and creating frameworks for nuclear power expansion to serve data centers.
How Federal AI Policy Affects Kentucky
Federal AI policy creates additional requirements for Kentucky residents and businesses beyond state law.
FTC Enforcement
The Federal Trade Commission actively enforces against deceptive AI practices nationwide, including in Kentucky. The FTC has warned companies against making unfounded claims about AI capabilities, using AI to generate fake reviews, and deploying AI in ways that cause substantial consumer harm.
Sector-Specific Federal Rules
Kentucky businesses in regulated industries face additional AI requirements from federal agencies. Financial institutions must ensure AI credit-scoring tools comply with fair lending laws. Healthcare providers must meet FDA requirements for AI diagnostic tools. Employers must follow EEOC guidance on algorithmic fairness.
NIST AI Risk Management Framework
The NIST AI Risk Management Framework provides voluntary guidelines for responsible AI deployment. Notably, Kentucky's SB 4 references the ISO/IEC 42001 standard, aligning state government AI governance with international best practices.
How Existing Kentucky Law Applies to AI
Beyond SB 4 and the KCDPA, several existing Kentucky laws apply to AI systems.
Consumer Protection
The Kentucky Consumer Protection Act (KRS 367.110-367.360) prohibits unfair, false, misleading, or deceptive acts and practices. AI-driven business practices that mislead consumers fall under this act, as demonstrated by Attorney General Coleman's Character.AI lawsuit, which included consumer protection claims alongside KCDPA claims.
Employment Discrimination
The Kentucky Civil Rights Act (KRS Chapter 344) prohibits employment discrimination based on race, color, religion, national origin, sex, age, disability, and other protected characteristics. Employers using AI tools for hiring, promotion, or termination must ensure those tools do not produce discriminatory outcomes.
The Kentucky Commission on Human Rights enforces these protections. Kentucky does not have a dedicated AI hiring law requiring bias audits or algorithmic impact assessments.
Penalties Summary
| Law | Violation | Classification | Penalty |
|---|---|---|---|
| SB 4 (Government AI) | Deploying AI without COT approval | Administrative | Agency/personnel consequences |
| SB 4 (Elections) | Unreported AI-generated political content | Civil | Legal remedies for targeted individuals |
| KCDPA | Processing children's data without consent | Civil (AG enforcement) | Injunctive relief; no private right of action |
| KRS 367 (Consumer Protection) | Deceptive AI practices | Civil | Up to $2,000 per violation |
| KRS 531 (CSAM) | AI-generated child exploitation material | Felony | Significant prison time, sex offender registry |
Looking Ahead: Kentucky's AI Regulatory Future
Kentucky has established itself as a leading state in AI governance, and the trajectory suggests continued activity. Key trends to watch include the following.
SB 4 implementation. The AI Governance Committee's development of detailed policy standards and the centralized AI registry will shape how all Kentucky state agencies deploy AI systems going forward.
KCDPA enforcement expansion. The Character.AI lawsuit signals aggressive enforcement against AI companies that violate Kentucky's data privacy law, particularly those that impact children.
Data center regulation. Kentucky's push to attract AI data centers while protecting ratepayers and communities creates a unique intersection of economic development and technology policy.
Private sector AI regulation. SB 4 currently focuses on government AI use. The AI Task Force's second phase may recommend extending transparency or disclosure requirements to private sector AI deployments.
Healthcare AI. Federal developments in AI medical device regulation may prompt Kentucky to establish state-level guidelines, particularly given the state's rural healthcare challenges.
Sources and References
- Kentucky SB 4 - AI Governance Framework (apps.legislature.ky.gov)
- KRS 42.731 - AI Governance Committee duties (apps.legislature.ky.gov)
- SB 4 full bill text (PDF) (apps.legislature.ky.gov)
- Kentucky Consumer Data Protection Act - AG office (ag.ky.gov)
- AG Coleman sues Character.AI (kentucky.gov)
- Character.AI complaint (PDF) (ag.ky.gov)
- KRS Chapter 531 - Pornography (apps.legislature.ky.gov)
- HCR 38 - AI Task Force establishment (apps.legislature.ky.gov)
- SB 4 reaches final passage - KY Senate GOP (kysenaterepublicans.com)
- AI Task Force second phase (kysenaterepublicans.com)
- AI Task Force findings and recommendations (linknky.com)
- Kentucky AI data center legislation (lpm.org)
- Kentucky 2026 session legislative update (wkms.org)
- NIST AI Risk Management Framework (nist.gov)
- FTC guidance on AI claims (ftc.gov)
- TAKE IT DOWN Act (congress.gov)
- WHAS11 - Kentucky AI law signed (whas11.com)