Role of Community Computing Resources for Public Use
The UK Government has announced a significant investment of £900 million in a new UK AI Research Resource (AIRR) to provide world-class compute for UK-based researchers. This move aims to address the compute divide that has left smaller firms and academic centres struggling to participate in AI research.
However, implementing public compute policies that support a diverse, public-interest model of AI development faces several challenges: regulatory hurdles, the opacity of AI models, a pace of AI innovation that outstrips policy development, funding constraints, security risks from unmanaged AI tools, and bureaucratic inertia in government agencies.
One key challenge is strict and sometimes outdated regulation that does not easily accommodate new generative AI use cases, making adoption difficult for public agencies. Government entities often face slow acquisition processes for AI technology, owing to lengthy credentialing such as FedRAMP approval and layered bureaucratic procedures.
Another major challenge is the "black box" nature of many AI systems, which frustrates transparency and trust, particularly for defence and space agencies. Public agencies also worry about biased or hallucinated AI outputs, which complicate reliable AI deployment.
Security challenges arise from the widespread use of unmanaged AI tools that may access sensitive data or systems without proper identity and access governance, creating compliance risks. Securing AI use therefore requires extending identity and access management to AI agents and clearly defining procedures for their provisioning, auditing, and revocation.
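The provisioning, auditing, and revocation lifecycle described above can be made concrete with a minimal sketch. Everything here is hypothetical: the registry class, agent identifiers, and scope names are illustrative inventions, not any agency's actual IAM system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentCredential:
    """Hypothetical credential issued to an AI agent rather than a human user."""
    agent_id: str
    scopes: frozenset
    active: bool = True

class AgentAccessRegistry:
    """Illustrative registry extending IAM-style controls to AI agents:
    every provisioning and revocation event is written to an audit log."""

    def __init__(self):
        self._credentials = {}
        self.audit_log = []  # (timestamp, action, agent_id) tuples

    def _record(self, action, agent_id):
        self.audit_log.append((datetime.now(timezone.utc), action, agent_id))

    def provision(self, agent_id, scopes):
        """Grant an agent a fixed set of access scopes."""
        self._credentials[agent_id] = AgentCredential(agent_id, frozenset(scopes))
        self._record("provision", agent_id)

    def is_authorised(self, agent_id, scope):
        """An agent may act only within scopes it was explicitly granted."""
        cred = self._credentials.get(agent_id)
        return bool(cred and cred.active and scope in cred.scopes)

    def revoke(self, agent_id):
        """Deactivate an agent's credential without erasing its audit trail."""
        if agent_id in self._credentials:
            self._credentials[agent_id].active = False
            self._record("revoke", agent_id)
```

The design choice worth noting is that revocation deactivates rather than deletes the credential, so the audit trail needed for compliance review survives the agent's removal.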
Addressing these challenges will involve creating adaptive, transparent regulatory frameworks; extending access governance to AI tools; fostering open-source models to increase transparency and interoperability; involving local governments and community feedback in policy design; and developing comprehensive governance and auditing mechanisms to manage AI risks.
Encouraging open-source and open-weight AI models improves transparency and interoperability, aligning with public-interest goals, although it introduces intellectual property management complexities. The more nimble policy adaptations of local and state governments, including task forces and community-informed regulations, demonstrate effective approaches to managing AI responsibly at the levels of government closest to citizens.
The AIRR, to be hosted by the University of Bristol, should prioritise a mix of public interest projects, AI safety research, and commercially viable projects. The allocation of public compute through the AIRR could promote safe, sustainable, and socially beneficial AI activities. AIRR could also impose conditions on users of public compute, such as safety obligations, contributions to a public digital commons, commitments to reduce compute usage, and governance and ownership obligations.
Investing in high-quality, accessible compute infrastructure is vital if the UK is to cultivate a vibrant and diverse AI ecosystem. Realising this goal will mean grappling with several challenges, including defining what 'public benefit' looks like and how public investment in compute resources can help deliver it.
- If the UK is to cultivate a diverse AI ecosystem, the AIRR must prioritise public interest projects, AI safety research, and commercially viable projects, while also imposing conditions such as safety obligations, contributions to a public digital commons, and governance and ownership obligations.
- The challenges in implementing public compute policies for AI research, including regulatory hurdles, lack of transparency of AI models, funding constraints, security risks, and bureaucratic inertia, can be addressed by creating adaptive and transparent regulatory frameworks, extending access governance to AI tools, fostering open-source models, and developing comprehensive governance and auditing mechanisms.