The AI Adoption Gap: Preparing the US Government for Advanced AI
Advanced AI could unlock an era of enlightened and competent government action. But without smart, active investment, we’ll squander that opportunity and barrel blindly into danger.
Executive summary
The US federal government is falling behind the private sector on AI adoption. As AI improves, a growing adoption gap could leave the government unable to respond effectively to AI-driven existential challenges and could threaten the legitimacy of its democratic institutions.
A dual imperative
→ Government adoption of AI can’t wait. Making steady progress is critical to:
Boost the government’s capacity to effectively respond to AI-driven existential challenges
Help democratic oversight keep up with the technological power of other groups
Defuse the risk of rushed AI adoption in a crisis
→ But hasty AI adoption could backfire. Without care, integration of AI could:
Be exploited, subverting independent government action
Lead to unsafe deployment of AI systems
Accelerate arms races or compress safety research timelines
Summary of the recommendations
1. Work with the US federal government to help it effectively adopt AI
We should:
Invest in win-win measures that both facilitate adoption and reduce the risks involved, e.g.:
Build technical expertise within government (invest in AI and technical talent, ensure NIST is well resourced)
Streamline procurement processes for AI products and related tech (like cloud services)
Modernize the government’s digital infrastructure and data management practices
Prioritize high-leverage interventions that offer strong adoption benefits at minor security cost, or strong security benefits at minor adoption cost, e.g.:
On the security side: investing in cybersecurity, pre-deployment testing of AI in high-stakes areas, and advancing research on mitigating the risks of advanced AI
On the adoption side: helping key agencies adopt AI and ensuring that advanced AI tools will be usable in government settings
2. Develop contingency plans and build capacity outside the US federal government
Current trends suggest slow government AI adoption. This makes it important to prepare for two risky scenarios:
State capacity collapse: the US federal government becomes largely ineffective, or is left extremely low-capacity and vulnerable
We should build backstops for this scenario, e.g. by developing private or non-US alternatives for key government functions, or working directly with AI companies on safety and voluntary governance
Rushed, late-stage government AI adoption: after a crisis or sudden shift in priorities, the US federal government rapidly ramps up integration of advanced AI systems
We should try to create a safety net for this scenario, e.g. by preparing “emergency teams” of AI experts who can be seconded into the government, or by identifying key pitfalls and recommending (ideally lightweight) guardrails for avoiding them
These scenarios seem worryingly probable, could render much current risk-mitigation work irrelevant, and will receive little advance attention by default. More preparation is warranted.