District of Columbia AI Laws and Regulation (2026)

The District of Columbia occupies a unique position in the national AI regulatory landscape. As the seat of the federal government, DC is both shaped by and responsive to federal AI policy while simultaneously developing its own local governance frameworks. Although the District has not enacted comprehensive AI legislation, it has taken significant executive action through Mayor Bowser's AI values framework and has repeatedly considered algorithmic discrimination legislation.
This guide covers DC's executive AI governance framework, pending legislation, deepfake protections, and how the District's unique federal relationship affects AI regulation. If you have specific questions about how these laws apply to your situation, consult an attorney.
Mayor's Order 2024-028: DC's AI Values Framework
On February 8, 2024, Mayor Muriel Bowser signed Mayor's Order 2024-028, titled "Articulating DC's Artificial Intelligence Values and Establishing Artificial Intelligence Strategic Benchmarks." This order established the nation's first comprehensive responsible AI framework for a major city government.
Six Core AI Values
The Order requires all DC government agencies to align their AI deployments with six core values:
- Clear Benefit to the People: AI tools must demonstrably improve services or outcomes for DC residents
- Safety and Equity: AI systems must operate safely and not produce inequitable outcomes across communities
- Accountability: Clear lines of responsibility must exist for AI-driven decisions and outcomes
- Transparency: The use of AI must be disclosed and understandable to affected residents
- Sustainability: AI deployments must consider long-term resource and environmental impacts
- Privacy and Cybersecurity: AI systems must protect personal data and maintain cybersecurity standards
Before deploying any AI tool, agencies must verify alignment with these values, assess potential impacts, consider mitigation controls, and document their review process.
Source: Mayor Bowser's Office, AI Values Order
AI Taskforce
The Order created the DC AI Taskforce, led by the Office of the Chief Technology Officer (OCTO). The Taskforce comprises five District government employees with expertise across AI and technology law, AI enablement, data science, and software development.
The Taskforce's responsibilities include:
- Examining the District's current internal AI governance posture
- Supporting agency leadership in meeting prescribed AI Strategic Benchmarks
- Providing the Mayor with advice and recommendations on responsible AI use
- Coordinating agency efforts across government
The AI Taskforce is set to sunset on December 31, 2026, unless extended.
Agency AI Strategic Plans
The Order established a phased rollout for agency-specific AI strategic plans:
| Cohort | Deadline | Requirement |
|---|---|---|
| First cohort (selected by City Administrator) | October 1, 2024 | Submit agency-specific AI strategic plan |
| Second cohort | October 1, 2025 | Submit agency-specific AI strategic plan |
| Final cohort | October 1, 2026 | Submit agency-specific AI strategic plan |
AI Values Alignment Advisory Group (AIVA)
The Order also established the AIVA, which brings community and stakeholder perspectives into the District's AI governance process. This public engagement mechanism ensures that residents and advocacy organizations have a voice in how the government deploys AI technologies.
Source: DC OCTO, AI Values and Strategic Plan

Mandatory AI Training for Government Workforce
DC became the first major U.S. city to require responsible AI training for its entire government workforce. Mayor Bowser announced the mandate, which applies to all DC government employees and contractors.
Training Details
The training program, delivered in partnership with InnovateUS, consists of two self-paced online courses that take less than two hours to complete. All employees and contractors must complete the training within 90 days of notification.
The curriculum covers:
- Safe and effective use of generative AI tools
- When AI may appropriately support government work
- When human judgment, oversight, and review are required
- Employee and contractor accountability for AI use
- The "humans in the loop" approach to AI decision-making
The training reinforces that employees and contractors remain accountable for the appropriate use of AI and that human oversight is required for consequential decisions.
Source: DC OCTO, Mandatory AI Training Announcement
DC's AI/ML Governance Policy
The Office of the Chief Technology Officer (OCTO) published an AI/ML Governance Policy that establishes guidelines for responsible AI use within District government operations.
Key Policy Requirements
The policy applies to three categories of government personnel: users, developers, and administrators. Its core requirements include:
- Approved tools only: Personnel may only use AI and ML platforms approved by OCTO or their agency's IT division
- Data protection: Strict guidelines govern the use of non-enterprise or free AI platforms, with emphasis on preventing unauthorized data exposure
- Security compliance: All AI tools must meet the District's cybersecurity standards
- Regulatory compliance: AI deployments must comply with applicable laws and regulations
The policy specifically addresses the risks posed by free or non-enterprise AI tools, where government data could inadvertently be exposed to third-party systems without adequate security protections.
Source: DC OCTO, AI/ML Governance Policy
Current Government AI Projects
The District has deployed several AI applications across government agencies:
- CORA (Case Operations Resource Assistant): An AI-powered chatbot supporting the Child and Family Services Agency's Comprehensive Child Welfare Information System (CCWIS)
- DC Compass: An AI chatbot helping residents navigate the city's open data resources
- Rent Registry AI: Uses artificial intelligence to automate document review processes for the rent registry system
The Bowser Administration also announced a first-of-its-kind AI pilot program developed in partnership with the MIT Governance Lab and Stanford Digital Economy Lab, further establishing DC as a leader in responsible government AI adoption.
The Stop Discrimination by Algorithms Act (Pending)
The most significant AI-related bill in DC's legislative history is the Stop Discrimination by Algorithms Act, which has been introduced in multiple Council sessions but has not yet been enacted.
Legislative History
The bill was first introduced in 2021 by then-Attorney General Karl Racine. It was reintroduced as B25-0114 in February 2023 during the 25th Council session. Despite public hearings and significant advocacy support, the bill did not advance to a vote during either session.
Proposed Requirements
If enacted, the Act would impose three main obligations:
1. Prohibition on Discriminatory Algorithms
Companies and organizations would be prohibited from using algorithms that produce biased or discriminatory results in decisions about education, employment, housing, public accommodations, and services including credit, healthcare, and insurance.
2. Annual Audit Requirements
Companies would be required to perform annual audits to ensure their algorithmic processing does not discriminate directly or produce disparate impacts on protected groups. These audits would need to be conducted by third parties and include reporting requirements.
3. Transparency Obligations
Companies would be required to inform consumers about what personal information they collect and how that information is used in algorithmic decision-making.
Protected Characteristics
The Act references the DC Human Rights Act, which protects 23 characteristics including race, sex, gender identity, disability, religion, age, sexual orientation, and national origin. This is one of the broadest sets of protected characteristics in any proposed algorithmic discrimination law.
Penalties
Violations would carry civil penalties of up to $10,000 for each individual violation.
Current Status
The bill has not been reintroduced in the current 26th Council session (2025-2026) as of this writing. However, the growing national momentum around AI regulation and the continued advocacy from organizations like the Electronic Privacy Information Center (EPIC) and the Lawyers' Committee for Civil Rights Under Law suggest that some form of algorithmic accountability legislation could resurface.
Source: DC Office of the Attorney General

Deepfake and Synthetic Media Protections
The District of Columbia has not enacted its own deepfake statute. However, DC residents are protected by federal legislation and existing local laws.
Federal TAKE IT DOWN Act
As a federal district, DC residents are directly covered by the TAKE IT DOWN Act, signed by President Trump on May 19, 2025. This federal law:
- Criminalizes knowingly publishing or threatening to publish non-consensual intimate images, including AI-generated deepfakes
- Requires platforms to remove such content within 48 hours of notice from victims
- Carries penalties of up to 2 years imprisonment (3 years when minors are involved)
- Requires platforms to establish notice-and-removal processes by May 19, 2026
Existing DC Protections
The District of Columbia has laws criminalizing and creating civil liability for distributing sexually explicit images without the depicted person's consent. While these laws predate the AI deepfake era, they provide a foundation for addressing non-consensual synthetic intimate images.
Election Deepfakes
DC does not have a specific law addressing deepfakes in political campaigns or elections. Given the District's unique status and the concentration of political activity within its borders, this gap is notable. Federal campaign regulations and general fraud statutes provide some coverage, but DC lacks the targeted election deepfake provisions that many states have enacted.
AI in the DC Courts
The DC Courts have taken a thoughtful approach to artificial intelligence, establishing clear boundaries between appropriate and prohibited AI uses in the judicial system.
AI Strategic Planning Roadmap
In June 2025, the DC Courts' AI Task Force released an AI Strategic Planning Roadmap that provides a structured plan for:
- Identifying beneficial uses of AI in court operations
- Evaluating whether proposed AI applications should be approved
- Training court staff on AI capabilities and limitations
- Setting rules governing AI use in the judicial system
Internal AI Use Policy
In July 2025, the DC Courts shared an Internal AI Use Policy with all staff. The policy explains how AI should be used, sets rules for safe and fair operation, and clarifies staff responsibilities when using AI tools.
Critical Safeguard: No AI in Judicial Decisions
The most significant aspect of the DC Courts' AI policy is a firm prohibition: judges do not use AI to write opinions or make decisions. This safeguard preserves the integrity of judicial reasoning and ensures that human judgment remains central to the administration of justice.
Planned AI Applications
The Courts are exploring AI tools for operational efficiency, including:
- Website chatbots to help the public navigate court services
- Administrative task automation
- Basic legal research assistance for staff
All AI tools undergo careful review with a focus on transparency, ethics, and accountability before deployment.
Source: DC Courts, AI and the DC Courts

Federal AI Policy and the District
Executive Order 14365: Unique Impact on DC
The December 2025 executive order on AI state preemption has a distinctive impact on the District of Columbia. Unlike states, which have independent legislative authority, DC's laws are subject to Congressional review under the Home Rule Act. This means that federal efforts to preempt state AI laws could have an even more direct effect on DC's ability to enact AI regulation.
However, the practical impact remains limited. Executive orders cannot independently preempt local laws without congressional action. DC's executive-branch AI governance framework (Mayor's Order 2024-028) is an internal government policy rather than legislation regulating the private sector, making it less susceptible to federal preemption arguments.
Federal Agency Presence
The concentration of federal agencies in DC creates a unique AI regulatory environment. Many federal agencies are simultaneously developing their own AI governance frameworks under various executive orders and congressional mandates. This means DC residents interact with AI systems governed by both local and federal rules, creating a layered regulatory landscape.
AI in Employment in the District
While DC has not enacted AI-specific employment legislation, the District's existing legal framework provides significant protections against AI-driven discrimination in hiring and employment.
DC Human Rights Act
The DC Human Rights Act prohibits discrimination based on 23 protected characteristics in employment, housing, and public accommodations. These protections apply regardless of whether decisions are made by humans or AI systems. Employers using AI tools for hiring, promotions, or other employment decisions must ensure their systems do not produce discriminatory outcomes.
Proposed Algorithmic Audit Requirements
The Stop Discrimination by Algorithms Act, if enacted, would require employers to conduct annual third-party audits of AI systems used in employment decisions. Although this legislation has not passed, it signals the direction of DC's policy thinking on AI in the workplace.
Federal Employment Protections
DC employers are also subject to federal anti-discrimination laws (Title VII, ADA, ADEA) that apply to AI-driven employment decisions.
Key Dates and Timeline
| Date | Event |
|---|---|
| 2021 | AG Racine introduces Stop Discrimination by Algorithms Act |
| February 2023 | Stop Discrimination by Algorithms Act reintroduced (B25-0114) |
| February 8, 2024 | Mayor Bowser signs Order 2024-028 (AI Values Framework) |
| October 1, 2024 | First cohort of agencies submit AI strategic plans |
| May 19, 2025 | Federal TAKE IT DOWN Act signed (covers DC residents) |
| June 2025 | DC Courts release AI Strategic Planning Roadmap |
| July 2025 | DC Courts release Internal AI Use Policy |
| October 1, 2025 | Second cohort of agencies submit AI strategic plans |
| December 2025 | Federal Executive Order 14365 on AI preemption issued |
| 2025-2026 | DC mandates responsible AI training for all government workers |
| October 1, 2026 | Final cohort of agencies submit AI strategic plans |
| December 31, 2026 | AI Taskforce sunset date (unless extended) |
More District of Columbia Laws
Explore other District of Columbia legal guides on Recording Law:
- [District of Columbia Data Privacy Laws](/us-laws/data-privacy-laws/district-of-columbia-data-privacy-laws)
- District of Columbia Biometric Privacy Laws
- District of Columbia Data Breach Notification Laws
- District of Columbia Surveillance Camera Laws
- District of Columbia Background Check Laws
- District of Columbia Medical Records Retention Laws
Sources and References
- Mayor Bowser Signs Order Defining DC's AI Values and AI Strategic Plan (mayor.dc.gov)
- DC OCTO - AI/ML Governance Policy (octo.dc.gov)
- DC OCTO - Mandatory Responsible AI Training for Government Workforce (octo.dc.gov)
- DC AI Values and Strategic Plan (techplan.dc.gov)
- DC Courts - Artificial Intelligence and the DC Courts (dccourts.gov)
- DC Courts - AI Strategic Planning Roadmap (dccourts.gov)
- DC Office of the Attorney General - Stop Discrimination by Algorithms Act (oag.dc.gov)
- Congress.gov - TAKE IT DOWN Act Analysis (congress.gov)
- DC OCTO - AI Taskforce Wins Inaugural AI 50 Award (octo.dc.gov)