AI responsibility in governmental settings
The Office of Management and Budget (OMB) has outlined a series of steps to implement responsible AI in government services, as part of the 2025 AI Action Plan. These initiatives aim to strike a balance between accelerating innovation and upholding key governance practices.
Regulatory and Procedural Updates
Outdated rules and guidelines that hinder AI development and adoption are being identified, revised, or repealed. Procurement guidelines are being adjusted so that federal contracts are awarded only to developers of large language models (LLMs) whose models are objective and free from ideological bias.
Interagency Coordination
The Chief Artificial Intelligence Officer Council (CAIOC) has been formalized as the central platform for AI coordination across federal agencies. This council facilitates collaboration with other executive councils to drive strategic AI adoption.
Talent and Capability Sharing
A talent-exchange program has been established to rapidly detail federal AI experts to agencies requiring specialized skills. Additionally, an Advanced Technology Transfer and Capability Sharing Program has been launched to expedite the transfer of AI capabilities among agencies.
Creating an AI Procurement Toolbox
A unified AI procurement toolbox, managed by the General Services Administration (GSA), is being developed. The toolbox will provide uniform, customizable, and compliant AI model selection across federal agencies, ensuring visibility into AI use and compliance with privacy, data governance, and transparency requirements.
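The plan describes the toolbox at a policy level rather than a technical one. As a rough sketch only, a shared catalog entry might pair model metadata with compliance attributes so agencies can filter for options that meet privacy and transparency requirements. Every name below (ProcurementCatalogEntry, privacy_review_passed, and so on) is a hypothetical illustration, not the GSA's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a procurement-catalog entry; the real GSA toolbox
# schema is not public, so every field name here is an assumption.
@dataclass
class ProcurementCatalogEntry:
    model_name: str                  # e.g., a vendor LLM offering
    vendor: str
    intended_use: str                # the agency use case the model is approved for
    privacy_review_passed: bool      # privacy / PII impact review completed
    data_governance_notes: str       # provenance and retention of training and operational data
    transparency_docs: list[str] = field(default_factory=list)  # model cards, eval reports

    def is_compliant(self) -> bool:
        """Selectable only if the privacy review passed and at least one
        transparency document is on file."""
        return self.privacy_review_passed and bool(self.transparency_docs)


# Example: an agency filters the shared catalog down to compliant entries.
catalog = [
    ProcurementCatalogEntry(
        model_name="ExampleLLM-1",
        vendor="ExampleVendor",
        intended_use="contact-center triage",
        privacy_review_passed=True,
        data_governance_notes="operational data retained 30 days; no PII in training",
        transparency_docs=["model_card.pdf"],
    ),
]
selectable = [entry for entry in catalog if entry.is_compliant()]
```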
Training and Access for Federal Employees
Federal agencies are required to provide employees with access to frontier AI tools and training to improve AI adoption and use.
Risk Management and Impact Assessments
Agencies must conduct AI impact assessments that include pre-deployment testing, incorporation of public feedback, and opportunities to appeal adverse decisions. These assessments critically evaluate whether training and operational data are fit for purpose and estimate potential cost savings.
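The directives do not prescribe a data format for these assessments. As a minimal sketch, an agency could encode the go/no-go logic along the lines below; the keys (pre_deployment_tests, appeal_channel, and so on) are entirely hypothetical, not an official OMB template.

```python
# Minimal sketch of a pre-deployment gate, assuming an agency records its
# assessment as a plain dictionary; keys and structure are illustrative.
def passes_impact_assessment(assessment: dict) -> bool:
    """True only if every pre-deployment test passed, the data was judged
    fit for purpose, and an appeal channel for adverse decisions exists."""
    tests_ok = all(assessment.get("pre_deployment_tests", {}).values())
    data_ok = assessment.get("data_fit_for_purpose", False)
    appealable = bool(assessment.get("appeal_channel"))
    return tests_ok and data_ok and appealable


example = {
    "system_name": "benefits-eligibility-assistant",
    "pre_deployment_tests": {"accuracy_eval": True, "bias_eval": True},
    "data_fit_for_purpose": True,
    "appeal_channel": "human caseworker review",
}
print(passes_impact_assessment(example))  # True
```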
Funding and Regulatory Alignment
The OMB coordinates with federal agencies holding AI-related discretionary funding programs to consider state AI regulatory climates when making funding decisions, effectively tethering federal funds to state AI regulatory frameworks.
While these steps aim to accelerate AI adoption responsibly, some recent OMB directives have removed explicit requirements to refrain from using AI if risks outweigh benefits and omitted specific mandates to measure and mitigate bias.
Balancing AI and Human Interactions
Balancing the best of both AI and human interactions can uplevel citizen experiences. For instance, AI can take over manual aggregation, analysis, and triage, delivering the right information to the right people at the right time and place.
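As a toy illustration of AI-assisted triage, the sketch below routes citizen inquiries to hypothetical teams; a simple keyword rule stands in for whatever classification model an agency would actually deploy, and anything the model cannot place falls back to a human queue.

```python
# Toy triage sketch: a keyword rule stands in for a real model;
# routing targets are hypothetical.
ROUTES = {
    "passport": "travel-documents team",
    "refund": "payments team",
    "appeal": "adjudication team",
}

def triage(inquiry: str) -> str:
    """Route an inquiry to a team; anything unmatched goes to a human queue."""
    text = inquiry.lower()
    for keyword, team in ROUTES.items():
        if keyword in text:
            return team
    return "general human review queue"

print(triage("Where is my refund for the overpaid fee?"))  # payments team
```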
Promoting Transparency, Accountability, and Security
Adopting responsible AI means promoting transparency, accountability, and security, ensuring alignment with key ethical AI principles. Advanced agencies are using citizen and employee feedback to assess the impact of AI and evaluate whether AI-powered processes are more effective and efficient.
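One lightweight way to make such an evaluation concrete is to compare a process metric and a satisfaction rate before and after an AI-assisted change. The sketch below is illustrative only; the feedback samples and field names are invented.

```python
from statistics import mean

# Hedged sketch: comparing average resolution time and satisfaction before and
# after an AI-assisted process, using made-up feedback samples.
before = [{"resolution_minutes": 42, "satisfied": False},
          {"resolution_minutes": 35, "satisfied": True}]
after = [{"resolution_minutes": 18, "satisfied": True},
         {"resolution_minutes": 22, "satisfied": True}]

def summarize(samples):
    return {
        "avg_resolution_minutes": mean(s["resolution_minutes"] for s in samples),
        "satisfaction_rate": mean(1.0 if s["satisfied"] else 0.0 for s in samples),
    }

print("before:", summarize(before))
print("after:", summarize(after))
```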
Clear AI Use Cases
Agencies should have a clear AI use case in mind, such as a goal to achieve, a problem to solve, a citizen need that can be better met, or a way to optimize employees' workloads.
Case Studies
The Social Security Administration (SSA) used AI to analyze website and contact center insights and was able to add a "return to my saved application" button, reducing complaints and calls. The IRS rolled out a smart callback option during the 2022 tax filing season to help up to 95% of callers avoid long hold times, aligning with consumer expectations for shorter wait times.
Protecting Personal Data
Protections should be put in place to prevent AI from delivering unintended results. Vendors should have measures in place to process data in ways that protect personally identifiable information.
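For illustration only, the sketch below scrubs two obvious categories of personally identifiable information from text before it reaches a vendor model; a production deployment would rely on vetted redaction tooling with far broader coverage than these two patterns.

```python
import re

# Minimal illustration of scrubbing obvious PII (SSNs, email addresses) from
# text before it is sent to an AI vendor; real systems need broader coverage.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact_pii(text: str) -> str:
    text = SSN_RE.sub("[REDACTED-SSN]", text)
    text = EMAIL_RE.sub("[REDACTED-EMAIL]", text)
    return text

print(redact_pii("Contact me at jane@example.gov, SSN 123-45-6789."))
# Contact me at [REDACTED-EMAIL], SSN [REDACTED-SSN].
```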
The Government Service Delivery Improvement Act, the Integrated Digital Experience Act, OMB directives M-25-21 and M-25-22, and Executive Order 14179 have combined to further elevate the notion of responsible AI in the public sector.
- The federal workforce will be significantly reshaped by the integration of artificial intelligence, as the Office of Management and Budget's 2025 AI Action Plan outlines a unified AI procurement toolbox and talent-exchange programs to share AI expertise across agencies.
- To balance innovation with sound governance, the federal government is focusing on the responsible adoption of AI in government services: ensuring that AI systems are free from ideological bias, promoting transparency, accountability, and security, and ultimately aiming to uplevel citizen experiences.