New York AI Laws and Regulation (2026)

Overview of New York AI Laws
New York has established itself as one of the most aggressive states in regulating artificial intelligence, with a multi-layered framework spanning city, state, and executive action. From New York City's pioneering Local Law 144 governing AI in hiring to the state-level RAISE Act regulating frontier AI models, New York's approach combines targeted sector-specific laws with broader safety and transparency requirements.
Governor Kathy Hochul has positioned AI regulation as a signature issue, signing multiple AI-related bills into law during the 2025 legislative session and including AI safety provisions in the FY 2026 state budget. The state's approach balances innovation promotion through the $90 million Empire AI consortium with consumer protection through laws addressing deepfakes, AI companions, and automated decision-making.
New York's regulatory landscape spans several distinct areas: employment (Local Law 144), frontier AI safety (RAISE Act), deepfakes and synthetic media (multiple enacted laws), AI companions (budget provisions), and advertising transparency (synthetic performer law). Additional comprehensive proposals remain under consideration.
This article covers all enacted and pending New York AI legislation, enforcement developments, and the interplay between state and federal AI policy. This information is current as of March 2026, but you should consult a licensed attorney for advice specific to your situation.

NYC Local Law 144: AI in Hiring
New York City's Local Law 144 of 2021 was one of the first laws in the United States to regulate the use of AI in employment decisions. The law took effect on July 5, 2023, and is enforced by the NYC Department of Consumer and Worker Protection (DCWP).
What the Law Covers
Local Law 144 regulates "automated employment decision tools" (AEDTs), defined as any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, used to substantially assist or replace discretionary decision-making for employment decisions. This covers AI tools used in hiring, promotion, and termination decisions.
Bias Audit Requirements
Employers and employment agencies using AEDTs must obtain an independent bias audit of the tool no more than one year prior to its use. The bias audit must test the AEDT for disparate impact on candidates or employees based on protected categories, including sex, ethnicity, and race.
The audit must be conducted by an independent auditor who has no involvement in developing or deploying the AEDT. Results must assess selection rates and impact ratios across demographic categories.
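To make the audit metrics concrete, the core computation can be sketched in a few lines. This is a hypothetical illustration, not real audit data: the function names and category counts are invented, and the impact-ratio formula follows the DCWP rules' published definition (a category's selection rate divided by the selection rate of the most-selected category).

```python
# Hypothetical sketch of the selection-rate and impact-ratio metrics
# used in Local Law 144 bias audits. Category labels and counts are
# illustrative only.

def selection_rates(selected, applied):
    """Selection rate per category: number selected / number who applied."""
    return {cat: selected[cat] / applied[cat] for cat in applied}

def impact_ratios(selected, applied):
    """Impact ratio per category: its selection rate divided by the
    selection rate of the most-selected category."""
    rates = selection_rates(selected, applied)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Invented example: 60 of 200 selected in one category, 30 of 150 in another.
applied = {"category_a": 200, "category_b": 150}
selected = {"category_a": 60, "category_b": 30}

print(impact_ratios(selected, applied))
# category_a: 0.30 / 0.30 = 1.0; category_b: 0.20 / 0.30 ≈ 0.667
```

An impact ratio well below 1.0 for a category is the kind of disparity a bias audit is designed to surface.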
Public Posting Requirements
Organizations using AEDTs must make the following information publicly available on the employment section of their website in a clear and conspicuous manner: the date of the most recent bias audit, a summary of the results of the bias audit, and the distribution date of the AEDT.
Candidate Notification Requirements
Employers must notify job candidates that an AEDT will be used in the hiring process at least 10 business days before the tool is applied. The notice must describe how the AEDT will be used and what data will be collected and analyzed. Candidates must also be informed of their right to request an alternative selection process or accommodation.
Penalties
DCWP can impose civil penalties of $500 to $1,500 per violation per day. Each day a violation continues constitutes a separate violation. Failing to conduct a bias audit and failing to provide proper notice are considered separate violations, meaning an employer could face multiple daily penalties simultaneously.
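The "per violation per day" mechanics compound quickly. The following is a rough worst-case sketch under one plausible reading of the penalty schedule (first day at $500, each later day at up to $1,500, per violation type); the 30-day, two-violation scenario is invented for illustration.

```python
# Illustrative worst-case accrual of Local Law 144 civil penalties,
# assuming the first day of a violation is $500 and each subsequent
# day is up to $1,500. The scenario is hypothetical.

FIRST_DAY = 500
SUBSEQUENT_DAY_MAX = 1500

def max_exposure(days, concurrent_violations):
    """Worst-case exposure: each violation type accrues daily."""
    per_type = FIRST_DAY + SUBSEQUENT_DAY_MAX * (days - 1)
    return per_type * concurrent_violations

# Hypothetical: no bias audit AND no candidate notice, for 30 days.
print(max_exposure(days=30, concurrent_violations=2))  # 88000
```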
Enforcement Challenges
A December 2025 audit by the New York State Comptroller found that DCWP's enforcement of Local Law 144 has been "ineffective." The audit identified at least 17 instances of potential non-compliance among 32 companies reviewed, compared to DCWP's identification of just a single compliance issue. DCWP received only two AEDT complaints during the two-year audit period (July 2023 through June 2025).
The Comptroller recommended that DCWP improve its complaint-handling processes, conduct proactive compliance reviews, and perform additional educational outreach to employers. State Comptroller Thomas DiNapoli stated that "New Yorkers deserve a transparent hiring process when artificial intelligence is used to vet their job applications."
The RAISE Act: Frontier AI Model Regulation
Governor Hochul signed the Responsible AI Safety and Education Act (RAISE Act, S6953B/A6453B) on December 19, 2025, establishing New York as the second U.S. state after California to enact comprehensive legislation regulating frontier AI models.
Scope and Applicability
The RAISE Act applies to "large developers" with $500 million or more in annual revenue who develop or operate "frontier models" in New York. A frontier model is defined as an AI system trained using 10^26 or more floating-point operations (FLOPs) with compute costs of $100 million or more.
This high threshold means the law primarily targets the largest AI companies developing the most powerful models, including companies like OpenAI, Google, Anthropic, and Meta.
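The thresholds above can be expressed as a simple applicability check. This is a hypothetical sketch: the numeric thresholds come from the article's description of the law, but the function and field names are invented, and the statute's actual tests are more nuanced than a pair of comparisons.

```python
# Hypothetical applicability sketch for the RAISE Act's "large developer"
# and "frontier model" thresholds as described above. Not legal advice;
# the statutory definitions contain qualifications not modeled here.

ANNUAL_REVENUE_THRESHOLD = 500_000_000   # $500 million in annual revenue
FLOPS_THRESHOLD = 10**26                 # training compute in FLOPs
COMPUTE_COST_THRESHOLD = 100_000_000     # $100 million in compute cost

def is_frontier_model(training_flops, compute_cost_usd):
    return (training_flops >= FLOPS_THRESHOLD
            and compute_cost_usd >= COMPUTE_COST_THRESHOLD)

def raise_act_applies(annual_revenue_usd, training_flops, compute_cost_usd):
    return (annual_revenue_usd >= ANNUAL_REVENUE_THRESHOLD
            and is_frontier_model(training_flops, compute_cost_usd))

print(raise_act_applies(2_000_000_000, 3 * 10**26, 250_000_000))  # True
print(raise_act_applies(2_000_000_000, 10**24, 5_000_000))        # False
```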

Safety and Security Protocol Requirements
Large developers must develop, maintain, and publicly disclose a comprehensive safety and security protocol that includes reasonable administrative, technical, and physical cybersecurity protections. The protocol must address mitigation strategies for "critical harms," defined as outcomes resulting in the death or serious injury of 100 or more people or at least $1 billion in damage.
Required elements include internal testing and risk-assessment procedures, red-teaming capabilities, pre-deployment and post-deployment safety evaluations, and strategies for addressing known risks that present unacceptable dangers.
Incident Reporting
Developers must report safety incidents to the state within 72 hours of determining that an incident has occurred. Reports must include details about the nature of the incident, the model involved, and the remediation steps being taken.
Oversight Office
The Act creates a new oversight office within the Department of Financial Services (DFS) responsible for monitoring compliance, issuing annual reports on the state of frontier AI safety, and exercising rule-making authority. Developers must submit disclosure statements to DFS and pay assessment fees.
Penalties
| Violation Type | First Offense | Subsequent Offenses |
|---|---|---|
| Failure to submit required reporting | Up to $1 million | Up to $3 million |
| Making false statements | Up to $1 million | Up to $3 million |
The Attorney General can bring civil actions against developers for failures to comply. Prior versions of the bill had penalties of $10 million and $30 million, which were negotiated down during the legislative process.
Independent Audits
The RAISE Act may require annual independent audits of frontier AI developers. The DFS oversight office has authority to establish audit requirements and standards through its rule-making power.
Effective Date
The RAISE Act takes effect on January 1, 2027, following chapter amendments enacted in January 2026 that refined the law's thresholds and penalties. These amendments aligned the final version more closely with California's Transparency in Frontier AI Act (TFAIA).
Deepfake Laws
New York has enacted multiple laws addressing AI-generated deepfakes across different contexts: nonconsensual intimate imagery, political communications, and child exploitation.
Nonconsensual Intimate Imagery: S1042A
Governor Hochul signed S1042A into law, expanding New York's existing prohibition on nonconsensual distribution of intimate images to cover AI-generated "deepfake" content. The law adds images "created or altered by digitization" to the definition of unlawful dissemination or publication of an intimate image.
Violations carry penalties of up to one year in jail and a $1,000 fine. Prosecutors must prove the defendant intended to harm the emotional, financial, or physical welfare of the depicted person. Victims have the right to pursue civil legal action against perpetrators.
AI-Generated Child Sexual Abuse Material
As part of the FY 2026 budget, Governor Hochul modernized New York's penal law to treat AI-generated child sexual abuse material (CSAM) as child pornography. This change applies to real images manipulated to become sexually explicit using AI, closing a legal gap that previously allowed AI-generated CSAM to escape prosecution under existing child pornography statutes.
Election Deepfakes
In 2024, New York amended its election law to address AI-generated content in political communications. The updated definition of "materially deceptive media" now includes any image, video, audio, or text that was created with AI, did not actually occur or was significantly altered, and is indistinguishable from a real person.
Creators of deceptive political content must include disclosures. For video, the disclosure must appear for the entire duration of the communication, in the same language as the communication. For audio-only content, the disclosure must be read at the beginning, at the end, and at intervals of no more than two minutes for content longer than two minutes.
Candidates whose voice or likeness is used in a deepfake political communication without appropriate disclaimers may seek injunctive relief through an expedited court process, along with court costs and attorney's fees.
Stop Deepfakes Act (Proposed)
The "Stop Deepfakes Act" was introduced in the New York State Senate in March 2025. The proposal would require synthetic content creation providers to attach metadata to AI-generated content, social media platforms to preserve that metadata, and state agencies to attach such data to the extent practicable.
AI Companion Safety Law
New York established first-in-the-nation safeguards for AI companion systems as part of the FY 2026 budget. The law took effect on November 5, 2025.
What Are AI Companions?
AI companions are chatbot systems designed to simulate human relationships with users, functioning as AI friends or romantic partners. These systems remember personal details, adapt their personality to user preferences, and are designed to maximize user engagement.
Safety Requirements
AI companion operators must implement several mandatory safety protocols.

Operators must detect and implement a safety protocol when a user discusses suicidal ideation or self-harm. The protocol must include referral to a crisis center, such as the 988 Suicide and Crisis Lifeline.
Operators must clearly and regularly notify users that they are interacting with AI, not a human. This includes conspicuous notifications at the start of each session and at periodic intervals of every three hours of continued use.
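The disclosure cadence described above lends itself to a simple timing check. The interface below is entirely invented for illustration; the law sets the cadence (session start, then every three hours of continued use), not any particular implementation.

```python
# Hypothetical sketch of the AI-companion disclosure cadence: notify
# the user at session start, then again once three hours of continued
# use have elapsed since the last notification. Interface is invented.

THREE_HOURS = 3 * 60 * 60  # seconds

def disclosure_due(session_elapsed_s, last_disclosure_elapsed_s):
    """True at session start, or once three hours have passed since
    the last "you are talking to an AI" notification."""
    if session_elapsed_s == 0:
        return True
    return session_elapsed_s - last_disclosure_elapsed_s >= THREE_HOURS

print(disclosure_due(0, 0))            # True  (session start)
print(disclosure_due(2 * 3600, 0))     # False (only 2 hours elapsed)
print(disclosure_due(3 * 3600, 0))     # True  (3 hours elapsed)
```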
Companies must implement safety features that interrupt sustained periods of continuous use.
Enforcement
The New York Attorney General is responsible for enforcement. Fines collected from non-compliant companies are directed toward funding suicide prevention programs. Governor Hochul sent letters to AI companion companies in November 2025 notifying them that the safeguard requirements were in effect.
Synthetic Performer Advertising Law (SB 8420A)
On December 11, 2025, Governor Hochul signed SB 8420A/A8887B into law, regulating the use of AI-generated "synthetic performers" in advertising. The law amends New York General Business Law Section 396-b and takes effect on June 9, 2026.
Definition
A "synthetic performer" is a digital asset created with generative AI that looks like a human performing but does not represent any identifiable natural person. This distinguishes it from deepfakes of real people.
Disclosure Requirements
Any advertisement that includes a synthetic performer must include a "conspicuous disclosure" of the use of a synthetic performer. The disclosure must be prominent, unavoidable, and noticeable, not buried in fine print.
Penalties
| Violation | Fine Amount |
|---|---|
| First violation | $1,000 |
| Subsequent violations | $5,000 each |
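The fine schedule in the table is straightforwardly cumulative. A minimal sketch, using the amounts stated above (the multi-violation scenario is hypothetical):

```python
# Illustrative total under SB 8420A's fine schedule: $1,000 for the
# first violation, $5,000 for each subsequent violation.

def total_fine(violations):
    if violations <= 0:
        return 0
    return 1000 + 5000 * (violations - 1)

print(total_fine(1))  # 1000
print(total_fine(4))  # 16000
```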
Exemptions
The law exempts advertisements and promotional materials for expressive works including motion pictures, television shows, streaming content, documentaries, and video games, provided the use of the synthetic performer in the ad is consistent with its use in the expressive work. Advertisements using AI solely for language translation are also exempt.
Empire AI Consortium
Governor Hochul has invested significantly in AI research and development through the Empire AI consortium. Originally launched as part of the FY 2025 budget, the consortium received a $90 million expansion in the FY 2026 budget.

The consortium brings together New York's leading research institutions to advance AI for the public good. Members include Columbia University, Cornell University, New York University, and Rensselaer Polytechnic Institute, with new additions including the University of Rochester, Rochester Institute of Technology, and the Icahn School of Medicine at Mount Sinai.
The $90 million in state capital funding is matched by $50 million in private funding from new members and $25 million in SUNY operating funds over the next decade. The investment aims to substantially increase computing power, expand access for SUNY researchers, and support AI applications in healthcare, climate science, and public services.
AI in Employment: State-Level Proposals
Beyond NYC's Local Law 144, several state-level bills have been proposed to regulate AI in employment across all of New York.
New York AI Act (S01169A)
This bill focuses on addressing algorithmic discrimination by regulating and restricting the use of certain AI systems, including in employment contexts. The bill would extend Local Law 144's approach statewide, requiring bias audits, transparency, and consumer rights for all New Yorkers.
New York AI Consumer Protection Act (A007683)
This bill would amend the general business law to prevent the use of AI algorithms to discriminate against protected classes, including in employment. If passed, the act would go into effect on July 1, 2026.
Employer AI Monitoring Bills (S7623A and A9315)
These bills would require employers to conduct impact assessments when using AI tools and provide written notice to employees. The proposals would specifically limit employers' use of employee data collected through AI monitoring systems and restrict the consequences that can flow from AI-based employee surveillance.
Healthcare AI Regulation
Insurance Utilization Review
New York has introduced legislation (A1456) requiring notice when health insurers use AI-based algorithms in the utilization review process. The bill addresses growing concerns about AI-driven insurance claim denials and would require insurers to disclose their use of AI in coverage determinations.
Mental Health AI Protections
Proposed legislation would create protections around the use of AI in professional mental healthcare settings. Under the proposals, AI could not be used to supplant professional judgment in interpreting client interactions, providing therapeutic strategies, offering emotional support, directly collaborating with clients, or providing behavioral feedback.
Federal AI Policy and New York
Executive Order 14365
On December 11, 2025, President Trump signed Executive Order 14365, establishing federal policy to create a "minimally burdensome national policy framework for AI" and challenge state laws that exceed that framework. The order creates a DOJ AI Litigation Task Force and directs the FTC to identify preempted state laws.
New York's Defiant Response
Governor Hochul signed the RAISE Act eight days after the federal executive order, in what was widely seen as a direct statement that New York would not be deterred from AI regulation. The Governor joined California and Colorado governors in issuing statements that the executive order would not stop them from passing or enforcing their AI laws.
Legal Landscape
The executive order itself cannot overturn existing state law without congressional action or court rulings. The RAISE Act's carve-outs and its focus on safety and transparency may make it more difficult for the federal government to challenge, particularly given the executive order's exemptions for child safety and certain state regulatory functions.
However, the potential for federal legal challenges creates uncertainty for businesses operating under New York's AI regulatory framework. Companies should monitor developments from the DOJ AI Litigation Task Force and any FTC policy statements on federal preemption.
Looking Ahead: New York's AI Regulatory Future
New York's AI regulatory trajectory points toward continued expansion. Several factors will shape the state's approach.
The RAISE Act's January 1, 2027 effective date provides a compliance runway for frontier AI developers. The DFS oversight office's rule-making authority will further define requirements during 2026.
State-level employment AI bills could extend Local Law 144's bias audit approach beyond New York City to the entire state. The Comptroller's critical audit of DCWP's enforcement signals likely increased scrutiny and potential strengthening of existing requirements.
New York's combined approach of targeted laws (deepfakes, AI companions, synthetic performers) alongside broader frameworks (RAISE Act) may serve as a model for other states seeking to regulate AI comprehensively while maintaining sector-specific protections.
The federal-state tension over AI regulation will remain a central dynamic. New York's willingness to sign the RAISE Act days after the federal preemption executive order suggests the state will continue to push forward regardless of federal opposition, potentially setting up significant legal battles over AI regulation authority.
Sources and References
- Automated Employment Decision Tools (AEDT) - NYC DCWP (nyc.gov)
- Enforcement of Local Law 144 - NY State Comptroller Audit (osc.ny.gov)
- Governor Hochul Signs Nation-Leading RAISE Act Legislation (governor.ny.gov)
- RAISE Act - DFS Press Release (dfs.ny.gov)
- Landmark AI Safety Bill Signed Into Law - NY Senate (nysenate.gov)
- Hinchey Bill to Ban Non-Consensual Deepfake Images Signed into Law (nysenate.gov)
- Governor Hochul Signs Empire AI Consortium Expansion and AI Protections (FY2026 Budget) (governor.ny.gov)
- Governor Hochul Notifies AI Companion Companies of Safety Requirements (governor.ny.gov)
- Governor Hochul Signs Synthetic Performer Transparency Law (governor.ny.gov)
- Automated Employment Decision Tools Rules - NYC Rules (cityofnewyork.us)
- NY Law Amended to Restrict AI Deceptive Practices in Elections (gtlaw.com)
- SB 8420A - Synthetic Performer Disclosure (nysenate.gov)
- Ensuring a National Policy Framework for AI (EO 14365) (whitehouse.gov)
- New York Enacts AI Transparency Law Amid Federal Preemption Debate (skadden.com)