AI Governance in Government: Mandates and Accountability

Guide for public sector compliance officers navigating OMB M-24-10, executive orders, EU AI Act public authority rules, and algorithmic accountability.

When the Government Deploys AI, the Stakes Are Constitutional

A state unemployment agency deploys a fraud detection algorithm. It flags 40,000 claims as fraudulent and automatically suspends benefits. Months later, an audit reveals a false positive rate above 90%. Tens of thousands of people lost income they were legally entitled to — during a recession — because an algorithm made a determination that no human reviewed.
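To make that arithmetic concrete, here is a sketch using the round numbers from the scenario above. These are illustrative figures, not audit data:

```python
# Hypothetical figures from the scenario above, for illustration only.
flagged_claims = 40_000      # claims flagged as fraudulent and auto-suspended
false_positive_rate = 0.90   # share of flagged claims later found legitimate

wrongly_suspended = int(flagged_claims * false_positive_rate)
print(f"People wrongly cut off from benefits: at least {wrongly_suspended:,}")
# People wrongly cut off from benefits: at least 36,000
```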

This is not hypothetical. Variants of this scenario have played out across multiple states and federal programs. When government deploys AI, the consequences land on people who often have no alternative provider, no competitive option, no ability to switch. A private company's biased algorithm might cost someone a credit card. A government's biased algorithm can cost someone their housing, their liberty, or their child custody.

That asymmetry is why public sector AI governance demands a fundamentally different approach than private sector compliance.

The Federal AI Governance Landscape: A Moving Target

The federal AI governance framework has undergone significant upheaval. In March 2024, OMB issued Memorandum M-24-10 — "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence" — which established the most comprehensive federal AI governance requirements to date. M-24-10 required every federal agency to designate a Chief AI Officer, inventory all AI use cases, adopt minimum risk management practices for rights-impacting and safety-impacting AI, publish AI use case inventories publicly, and follow specific procurement and monitoring guidance.

M-24-10 defined "rights-impacting AI" as systems whose outputs serve as a principal basis for decisions with legal, material, or similarly significant effects on individuals. "Safety-impacting AI" covered applications creating or exacerbating physical risks. Across federal agencies, 206 such use cases received compliance extensions, most commonly for independent evaluations and AI impact assessments.
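A minimal sketch of how an agency might encode those two categories when triaging its inventory. The schema and field names are illustrative assumptions, not M-24-10's own taxonomy:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an agency AI use case inventory (illustrative schema)."""
    name: str
    affects_legal_or_material_rights: bool  # e.g., benefits eligibility, enforcement
    creates_physical_risk: bool             # e.g., infrastructure or vehicle control

def m24_10_category(uc: AIUseCase) -> str:
    # The two categories are not mutually exclusive; report both when they apply.
    labels = []
    if uc.affects_legal_or_material_rights:
        labels.append("rights-impacting")
    if uc.creates_physical_risk:
        labels.append("safety-impacting")
    return ", ".join(labels) or "neither"

print(m24_10_category(AIUseCase("UI fraud screening", True, False)))
# rights-impacting
```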

Then the administration changed. Biden's Executive Order 14110 was rescinded on January 20, 2025, and Executive Order 14179, signed January 23, 2025, directed the OMB Director to revise M-24-10 and M-24-18 within 60 days. The December 2025 Executive Order on "Ensuring a National Policy Framework for Artificial Intelligence" further shifted the federal posture toward a "minimally burdensome" approach, focusing on preempting state-level AI regulations rather than expanding federal requirements.

For federal agency compliance officers, this creates genuine uncertainty. The structural requirements of M-24-10 — Chief AI Officers, use case inventories, risk categorization — remain operationally embedded in agencies that spent 2024 building them. Whether revised guidance will preserve, modify, or dismantle these structures is an open question. Our recommendation: maintain your governance infrastructure. The political winds shift. The operational risks of ungoverned AI in government do not.

Algorithmic Accountability in Benefits and Public Services

The highest-stakes AI applications in government are the ones that determine who receives public benefits, who gets investigated, and who faces enforcement action. These systems carry constitutional weight. Due process, equal protection, and the right to a reasoned explanation for adverse government action aren't just policy preferences — they're legal requirements.

AI systems used to evaluate eligibility for public assistance, healthcare services, housing, and social benefits are explicitly listed as high-risk under Annex III of the EU AI Act. For US agencies, the constitutional framework creates analogous obligations even without the AI Act's formal classification.

The core governance requirements for benefits AI are:

- Notice: individuals must know when AI influences decisions about their benefits.
- Explanation: agencies must be able to articulate the basis for an adverse determination in terms a person can understand and contest.
- Appeal: meaningful review by a human decision-maker must be available.
- Audit: ongoing monitoring for accuracy, bias, and drift must be systematic, not episodic.
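One way to operationalize all four requirements is to make them mandatory fields on every adverse determination, so a decision cannot issue without notice, explanation, appeal, and audit data attached. A sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AdverseDetermination:
    """A determination record that cannot be issued without all four elements."""
    claimant_id: str
    ai_system: str              # notice: which AI system influenced the decision
    plain_language_basis: str   # explanation: a reason the person can contest
    appeal_deadline: date       # appeal: deadline to request human review
    human_reviewer: str         # appeal: a named reviewer, not "the system"
    audit_log_ref: str          # audit: pointer to inputs, model version, outputs

    def __post_init__(self):
        # Refuse to create a record that is missing any required element.
        for name in ("ai_system", "plain_language_basis",
                     "human_reviewer", "audit_log_ref"):
            if not getattr(self, name):
                raise ValueError(f"cannot issue determination without {name}")
```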

More than half of US states have now enacted at least one AI or algorithmic accountability law, with common priorities including transparency, child protection, and reducing algorithmic bias. Colorado's AI Act — the first comprehensive state AI law — requires deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination, mandating impact assessments, transparency disclosures, and documentation of AI decision-making processes.

For federal agencies, even without active enforcement of M-24-10's most prescriptive requirements, the Government Accountability Office continues to assess agency AI governance practices. The GAO's 2024 report found significant gaps in AI risk management across federal agencies, and congressional oversight committees have shown bipartisan interest in algorithmic accountability.

Law Enforcement and Criminal Justice AI

Predictive policing, facial recognition, risk assessment in sentencing and bail — AI in criminal justice raises the most acute governance concerns in the public sector. Research consistently demonstrates that AI models in policing can reflect and amplify biases in historical data, and the complexity and opacity of these tools challenge accountability and procedural fairness.

Several jurisdictions have enacted specific restrictions. Some cities have banned government use of facial recognition technology. Others require algorithmic impact assessments before deploying predictive policing tools. The OECD AI Principles — adopted by over 40 countries — specifically call for AI systems to function in a robust, secure, and safe way throughout their lifecycle, with particular emphasis on applications affecting fundamental rights.

Governance for law enforcement AI must include:

- Pre-deployment impact assessments that address racial and socioeconomic bias.
- Ongoing performance monitoring with disaggregated metrics across demographic groups (a minimal sketch follows this list).
- Clear protocols for human override and escalation.
- Public transparency about what AI tools are deployed and how they influence decisions.
- Community engagement before deployment, not after controversy.
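Disaggregated monitoring is the most mechanical of these items. A minimal sketch, using synthetic review outcomes, that computes false positive rates per demographic group:

```python
from collections import defaultdict

# Each record: (group, flagged_by_model, confirmed_violation_after_human_review)
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, flagged, confirmed in records:
    if not confirmed:                  # true label: no violation occurred
        stats[group]["negatives"] += 1
        if flagged:                    # flagged anyway: a false positive
            stats[group]["fp"] += 1

for group, s in sorted(stats.items()):
    fpr = s["fp"] / s["negatives"] if s["negatives"] else float("nan")
    print(f"{group}: false positive rate = {fpr:.0%}")
# group_a: false positive rate = 50%
# group_b: false positive rate = 100%
```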

The Policing Project at NYU Law released an AI Governance Framework in late 2025 that provides specific guidance on procurement, deployment, and oversight of AI in policing contexts. It's worth reviewing as a structural model even for non-law-enforcement government AI governance.

EU AI Act: Obligations for Public Authorities

The EU AI Act imposes heightened obligations on AI systems deployed by public authorities. Beyond the Annex III high-risk designations for benefits eligibility, emergency services triage, and law enforcement, the Act's transparency requirements are particularly demanding for government deployers.

Public authorities using high-risk AI systems must register those systems in an EU-wide database before deployment. They must conduct fundamental rights impact assessments. They must ensure meaningful human oversight by personnel with appropriate authority and competence. And they must provide affected individuals with explanations of AI-assisted decisions.

For government agencies that serve EU residents or collaborate with EU institutions, these obligations can apply regardless of where the agency is located: the Act reaches deployers outside the EU when a system's output is used within the Union. A US federal agency operating a program whose AI-assisted decisions affect people in the EU, whether in immigration, trade compliance, or international law enforcement cooperation, needs to assess whether the EU AI Act's public authority provisions apply.

The compliance timeline matters: high-risk AI system obligations take full effect August 2, 2026. Government agencies should be conducting gap assessments now.
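A gap assessment can begin as a simple checklist keyed to the obligations above. The item names and completion states below are illustrative:

```python
from datetime import date

DEADLINE = date(2026, 8, 2)  # high-risk obligations take full effect

checklist = {
    "registered_in_eu_database": False,            # registration before deployment
    "fundamental_rights_impact_assessment": False,
    "human_oversight_assigned": True,              # named personnel with authority
    "explanation_process_for_individuals": False,
}

gaps = [item for item, done in checklist.items() if not done]
days_left = (DEADLINE - date.today()).days
print(f"{len(gaps)} open gaps, {days_left} days to deadline: {gaps}")
```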

Procurement: Governing AI You Don't Build

Most government AI isn't built in-house. It's procured from vendors. This creates a governance challenge that the private sector also faces, but with higher stakes: the government is deploying a vendor's algorithm to make decisions about people's rights, and the government bears the constitutional accountability.

OMB Memorandum M-24-18 (September 2024) established procurement requirements: vendors must document AI capabilities, limitations, and risks; contracts must include monitoring and human oversight provisions; and agencies must preserve the ability to audit and validate AI systems.

Even with revision of these memoranda, the procurement principles reflect sound governance. Your AI vendor contracts should include performance benchmarks with consequences for degradation, audit rights including bias testing, data governance provisions, explainability requirements, and incident response obligations for AI failures.
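Those contract principles can be enforced as a procurement gate: no award while required clauses are missing. A sketch with hypothetical clause names:

```python
REQUIRED_CLAUSES = {
    "performance_benchmarks",  # with consequences for degradation
    "audit_rights",            # including independent bias testing
    "data_governance",         # training data provenance and use limits
    "explainability",          # documentation sufficient to explain outputs
    "incident_response",       # vendor obligations when the AI fails
}

def procurement_gate(contract_clauses: set[str]) -> list[str]:
    """Return missing clauses; an empty list means the contract may proceed."""
    return sorted(REQUIRED_CLAUSES - contract_clauses)

missing = procurement_gate({"audit_rights", "data_governance"})
if missing:
    print("Blocked; missing clauses:", missing)
else:
    print("Clear to award")
```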

Building Public Sector AI Governance Infrastructure

Government AI governance must be anchored in public accountability. That's what distinguishes it from private sector governance, and it should shape every structural decision.

Start with your AI system inventory. Map every AI system to its use case, the population it affects, the decisions it influences, and the rights at stake. Categorize each system using the NIST AI RMF GOVERN, MAP, MEASURE, and MANAGE functions. The framework is voluntary and sector-agnostic, but it was developed by NIST under a congressional mandate and provides a structured approach that aligns with both existing federal guidance and international standards.
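Here is a sketch of what one inventory entry might look like, mapping a system to its use case, affected population, decisions, and rights, and tracking status against the four RMF functions. The schema is an assumption, not an official OMB or NIST format; the JSON export supports the publication step discussed next:

```python
import json
from dataclasses import dataclass, asdict, field

RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class InventoryEntry:
    """Illustrative inventory schema; not an official OMB or NIST format."""
    system: str
    use_case: str
    affected_population: str
    decisions_influenced: str
    rights_at_stake: list[str]
    rmf_status: dict[str, str] = field(
        default_factory=lambda: {f: "not started" for f in RMF_FUNCTIONS}
    )

entry = InventoryEntry(
    system="Eligibility screener v2",
    use_case="Benefits application triage",
    affected_population="Public assistance applicants",
    decisions_influenced="Priority review vs. standard queue",
    rights_at_stake=["due process", "equal protection"],
)
entry.rmf_status["GOVERN"] = "complete"

# Export for publication alongside the agency's public use case inventory.
print(json.dumps(asdict(entry), indent=2))
```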

Publish your AI use case inventory. Regardless of the current status of M-24-10, transparency about what AI tools your agency uses is a governance best practice. Public trust in government AI depends on the public knowing it exists.

Establish an AI governance board with representation from program offices, legal counsel, privacy and civil liberties, IT, and — critically — external stakeholders including community representatives and civil rights organizations. Government AI governance cannot be an internal exercise. The people affected by these systems deserve a voice in how they're governed.

The political environment for government AI governance will continue to shift. The operational imperative will not. Ungoverned AI in government creates legal liability, erodes public trust, and — when it fails — harms the people government exists to serve.

Explore how Starkguard supports public sector AI governance.

Starkguard Team

AI Governance Experts

Tags:
ai-governance
government
public-sector
federal-ai
compliance
