Massachusetts AI Laws and Regulation (2026)

Massachusetts is emerging as one of the most active states in AI policy, combining executive action, attorney general enforcement guidance, and an ambitious legislative agenda. While the state does not yet have a single comprehensive AI regulatory law on the books, it has taken significant steps through executive orders, targeted legislation on deepfakes and healthcare AI, and aggressive attorney general guidance that applies existing consumer protection and civil rights laws to artificial intelligence systems.
This guide covers every enacted law, executive action, and pending bill that affects how AI is developed, deployed, and used in Massachusetts as of March 2026.
Massachusetts AI Executive Actions
Executive Order 629: AI Strategic Task Force (February 2024)
On February 14, 2024, Governor Maura Healey signed Executive Order 629, establishing the Artificial Intelligence Strategic Task Force. The Task Force was charged with studying AI and generative AI technology and its effects on state government, private businesses, higher education institutions, and Massachusetts residents.
The Task Force brought together 25 members, including legislators, industry leaders, and academic experts. Its mandate included conducting stakeholder outreach, advising the Governor and executive branch on AI implementation, and identifying ways to encourage leading industries to adopt AI technology responsibly.
The Task Force released its 2024 Report to the Governor, providing recommendations on how the Commonwealth should approach AI governance, economic development, and workforce preparation.
The FutureTech Act (July 2024)
Governor Healey filed the FutureTech Act in January 2024, and it was signed into law in July 2024. The legislation authorized $1.23 billion in bonding to modernize state IT systems from 2025 to 2029, with $25 million dedicated specifically to AI projects that improve government operations and digital services for residents.
The Act represents the state's commitment to using AI as a tool for better governance, funding projects in cybersecurity, user experience improvements, and emerging technology applications.
Applied AI Hub Initiative
Building on the FutureTech Act, Governor Healey has proposed $100 million in economic development legislation to create an Applied AI Hub in Massachusetts. The funding would support a capital grant program to drive AI adoption for solving public policy challenges and to maintain the state's leadership position in technology sectors including life sciences, healthcare, financial services, advanced manufacturing, robotics, and education.

Attorney General AI Advisory (April 2024)
On April 16, 2024, Massachusetts Attorney General Andrea Joy Campbell issued a formal advisory on AI, clarifying how existing state laws apply to artificial intelligence systems. This advisory is one of the most comprehensive state-level AG guidance documents on AI in the country and carries significant enforcement weight.
Consumer Protection Under Chapter 93A
The advisory confirms that Massachusetts' Consumer Protection Act (Chapter 93A) applies fully to AI systems. Under this framework, it is unfair or deceptive to:
- Falsely advertise the quality, value, or usability of AI systems
- Supply an AI system that is defective, unusable, or impractical for the purpose advertised
- Misrepresent the reliability, performance, safety, or condition of an AI system
- Fail to disclose material limitations or risks of an AI system
Businesses that violate Chapter 93A through their use of AI face civil penalties, injunctive relief, and potential class action lawsuits.
Anti-Discrimination Requirements
The advisory makes clear that the Commonwealth's anti-discrimination laws prohibit developers, suppliers, and users of AI from deploying systems that discriminate based on legally protected characteristics. This applies whether discrimination is intentional or results from biased training data or algorithmic design.
This means that an employer using an AI hiring tool that disproportionately screens out candidates based on race, gender, age, or disability could face liability under existing Massachusetts civil rights law, even if the employer did not intend to discriminate.
Data Security Obligations
AI systems must comply with Massachusetts' Standards for the Protection of Personal Information (201 CMR 17.00). Developers, suppliers, and users of AI must take steps to safeguard personal information processed by their systems and must comply with breach notification requirements when personal data is compromised.

Deepfake and Synthetic Media Laws
H.4744: Criminalizing Non-Consensual Intimate Images Including Deepfakes (2024)
In 2024, Governor Healey signed H.4744, "An Act to Prevent Abuse and Exploitation," making Massachusetts one of the last states to criminalize the non-consensual distribution of intimate images. Critically, the law explicitly covers AI-generated deepfakes.
The law defines "digitization" as "the creation or alteration of visual material, including, but not limited to through the use of computer-generated images in a manner that would falsely appear to a reasonable person to be an authentic representation of the person depicted." This broad definition captures AI-generated deepfake pornography.
Penalties for Non-Consensual Intimate Image Distribution
| Offense | Fine | Imprisonment |
|---|---|---|
| First offense | Up to $10,000 | Up to 2.5 years in House of Correction |
| Second offense | Up to $15,000 | Up to 2.5 years in House of Correction or up to 10 years in state prison |
| Involving a minor | Enhanced penalties | Additional charges under child exploitation statutes |
The law requires proof that the person distributing the images either intended to cause harm, harass, intimidate, coerce, or threaten the victim, or recklessly disregarded the risk of such harm.
For minors, the law allows diversion to educational programs about the legal and personal consequences of sexting, rather than requiring criminal punishment.
H.5100: Political Deepfake Disclosure (2024)
Massachusetts also enacted H.5100 in November 2024, introducing rules for political synthetic media. Those provisions expired on February 1, 2025, however, so they were never in effect for a statewide election.
The Massachusetts House has since voted to approve legislation requiring political advertisements using AI-generated synthetic media to include a clear disclosure stating "contains content generated by AI," with the disclosure appearing at both the beginning and end of any audio or video political ad. This legislation remains pending in the 194th session.
Healthcare AI Restrictions
SB 2632: AI in Mental Health and Utilization Review (2025)
On October 16, 2025, the Massachusetts Senate passed SB 2632, setting clear boundaries on AI's role in healthcare. This legislation addresses two major areas.
Mental and Behavioral Health Provisions
Under SB 2632, AI cannot be used to make independent therapeutic decisions in a mental or behavioral health setting. All treatment plans and patient interactions involving AI must be reviewed by a licensed professional. The bill also requires transparency: patients must be informed when AI is used in their care and must provide explicit consent before AI tools are applied to their treatment.
Insurance Utilization Review Restrictions
The legislation restricts how insurance carriers use AI in utilization review and administrative functions. Specifically, it prohibits AI from replacing human decision-making or being used in ways that could result in discrimination against insured individuals. Carriers or utilization review organizations using AI tools for claims adjudication must provide disclosures and ensure that determinations of medical necessity are made by licensed professionals.
S.46: AI in Healthcare Decision-Making
Senate Bill 46 in the 194th session proposes additional legislation relative to the use of artificial intelligence and other software tools in healthcare decision-making. Filed by Senators Moore and Eldridge, this bill would expand beyond SB 2632's mental health focus to cover broader healthcare applications of AI.

AI in Employment and Algorithmic Discrimination
SD 3007: Non-Discrimination in Algorithmic Systems
Introduced on June 26, 2025, and referred to the Committee on Advanced Information Technology on September 4, 2025, SD 3007 has the potential to become one of the most comprehensive AI employment discrimination laws in the country.
SD 3007 explicitly targets AI tools used across a wide variety of employment contexts and would provide individuals with a clear legal path to hold employers accountable for discriminatory impacts of automated decision-making tools, regardless of whether the employer was aware of the discrimination. Key provisions include:
- Requiring employers to disclose when AI tools are used in hiring, promotion, or termination decisions
- Establishing a disparate impact framework specifically for algorithmic decision systems
- Creating a private right of action for individuals harmed by discriminatory AI
- Mandating impact assessments before deploying AI in high-stakes employment decisions
The F.A.I.R. Act (S.35): AI in the Workplace
On September 11, 2025, the Joint Committee on Advanced Information Technology hosted a hearing on the F.A.I.R. Act (S.35), An Act Fostering Artificial Intelligence Responsibility. This bill would:
- Regulate AI deployment in the workplace
- Guard against worker surveillance through AI tools
- Protect human autonomy and expertise in workplace decisions
- Establish transparency requirements for employers using AI systems
The Massachusetts AFL-CIO has been a strong advocate for the F.A.I.R. Act, emphasizing the need to protect workers from unchecked AI deployment.
Pending AI Bills in the 194th Session (2025-2026)
Massachusetts has one of the most active AI legislative agendas in the country. Here are the key pending bills.
Consumer Protection and Accountability
H.94 (HD 396): Artificial Intelligence Accountability and Consumer Protection Act. This bill would regulate high-risk AI systems and impose obligations on developers and deployers, including requirements for maintaining risk management programs and conducting impact assessments. It follows a framework similar to Colorado's landmark AI Act.
S.264: AI Chatbot Consumer Protection. Filed by Senator Montigny, this bill would protect consumers interacting with artificial intelligence chatbots, likely requiring disclosure when users are interacting with AI rather than a human.
Safety and Harm Prevention
H.524: AI Self-Harm Penalties. This bill would impose penalties on entities whose AI models suggest harming oneself or another person. It reflects growing concerns about AI chatbots providing harmful content.
S.760: Kids Chatbot Safety. Introduced in late December 2025, this bill would prohibit chatbot operators from offering products to minors unless the chatbot is incapable of encouraging self-harm, suicidal ideation, violence, drug or alcohol consumption, or disordered eating.
AI Personhood and Housing
H.469: AI Non-Sentience Declaration. This bill would declare artificial intelligence systems nonsentient and prohibit them from obtaining legal personhood, addressing philosophical and legal questions about AI status.
H.389: AI in Rental Housing. This bill would restrict the use of artificial intelligence to affect rental housing pricing and availability, targeting algorithmic pricing tools used by landlords and property management companies.
General AI Regulation
H.81: AI Disclosure Requirements. Filed by Representative Howitt of Seekonk, this bill addresses artificial intelligence disclosure obligations.
H.77: General AI Regulation. Filed by Representative Farley-Bouvier of Pittsfield, this bill proposes comprehensive regulation of artificial intelligence use.
S.37: AI Economic Development and Safety. Filed by Senator Finegold, this bill aims to promote economic development with emerging AI models while addressing safety concerns.
Massachusetts Data Privacy Act and AI
The Massachusetts Senate unanimously passed the Massachusetts Data Privacy Act (MDPA) on September 25, 2025. Originally introduced as SB 2608 and refiled as SB 2619, the MDPA proposes sweeping reforms that would directly affect AI systems.
The MDPA would ban the sale of sensitive data, including biometric, health, and geolocation information that AI systems frequently rely on. It would enhance data privacy rights for minors and impose strict limits on data collection practices. If enacted, the MDPA would create one of the strongest data privacy frameworks in the country and would significantly restrict the types of personal data available for AI training and deployment.
Federal AI Policy Impact on Massachusetts
Massachusetts is affected by several federal AI developments. The federal TAKE IT DOWN Act, signed into law, prohibits the nonconsensual publication of intimate images including AI-generated deepfakes, supplementing Massachusetts' H.4744.
Representative Jake Auchincloss (D-MA) introduced the Deepfake Liability Act in December 2025, a bipartisan bill that would change how federal law treats websites and apps hosting nonconsensual AI-generated sexual content. This reflects Massachusetts legislators' active role in shaping federal AI policy.
The state's Attorney General advisory explicitly noted that federal frameworks do not preempt state consumer protection and anti-discrimination enforcement. Massachusetts retains full authority to enforce its own laws against AI-related harms, even as federal standards evolve.
More Massachusetts Laws
Looking for information on other Massachusetts laws? Visit our AI Laws by State hub to compare Massachusetts with other states. You can also explore related topics:
- Massachusetts Data Privacy Laws for biometric privacy protections
- Massachusetts Surveillance Camera Laws for monitoring and recording rules
- Massachusetts Background Check Laws for employment screening regulations
- Massachusetts Recording Laws for wiretap and consent rules
This article is for informational purposes only and does not constitute legal advice. AI laws and regulations are evolving rapidly, and enforcement interpretations change over time. Consult a licensed attorney in Massachusetts for advice about your specific situation. Last reviewed: March 2026.
Sources and References
- Governor Healey Signs Executive Order Establishing AI Strategic Task Force (mass.gov)
- Governor Healey Signs FutureTech Act to Modernize IT Across State Government (mass.gov)
- AG Campbell Issues Advisory on How State Laws Apply to Artificial Intelligence (mass.gov)
- Attorney General Advisory on AI (Full Document) (mass.gov)
- Massachusetts Law About Artificial Intelligence (mass.gov)
- Governor Healey Signs Bill Banning Revenge Porn, Expanding Protections Against Abuse and Exploitation (mass.gov)
- Massachusetts Legislature Passes Bill to Prevent Abuse and Exploitation (malegislature.gov)
- SB 2632 - AI in Mental Health and Utilization Review (malegislature.gov)
- Massachusetts AI Strategic Task Force 2024 Report (mass.gov)
- Joint Committee on Advanced IT Hearing on F.A.I.R. Act (massaflcio.org)
- Rep. Auchincloss Introduces Deepfake Liability Act (auchincloss.house.gov)
- Artificial Intelligence at the Commonwealth (mass.gov)