
Earlier this year, the AI Opportunities Action Plan was published, setting out how effective AI adoption across the UK can boost economic growth, create jobs for the future and improve people's everyday lives.
To help ramp up AI adoption within government and the wider public sector, we’ve done some user research to understand what colleagues need to use AI safely, confidently and effectively. We explored their current behaviour, attitudes, fears and activities.
We used what we’ve learnt from this user research to inform the Generative AI Framework, the new AI Playbook for the UK Government and new learning resources on AI, and to identify whether there are skills and roles missing from the Capability Framework that we need to add to support the use of AI.
What we did
We ran two surveys and several rounds of interviews across the public sector over a six-month period spanning 2023 and 2024. These involved over 150 participants from more than 20 government departments. Running the survey twice helped us understand how things were changing.
To get a deeper understanding we interviewed colleagues in a variety of roles and departments including people already working on or leading AI and machine learning projects.
When developing our findings we brought together our research data with industry analysis and conversations with subject matter experts from different parts of government.
What we learned
People want to understand what AI is, what it can and cannot achieve, how it can be applied in government and how they can gain the skills and knowledge necessary to use AI safely, managing the associated risks.
People have misconceptions about what AI is and what it might achieve
Understanding of AI technology in government is evolving. Over the six-month period between our two user research studies, civil servants’ awareness of AI increased and people became more familiar with tools like chatbots. However, the way they conceptualise AI has been shaped by the prominence of generative AI tools, with some participants associating AI primarily with generative AI.
The misconception that AI means large language models and chatbots can make managing stakeholders’ expectations complicated. Generative AI tools are not the solution to every problem and present issues of accuracy, potential bias, and, in some cases, sustainability. It is important that people implementing AI services demystify AI for their stakeholders, making sure that they understand the potential and limitations of different AI technologies when considering if AI is the right tool for the job.
A partial understanding of AI can lead someone to think that AI can be successfully applied to most use cases, or, conversely, to worry about risks and block the use of AI tools. Both attitudes may hold back civil servants who want the opportunity to experiment safely, with guardrails in place, and learn what AI can do for them.
While AI is not new, there’s still work to be done to help people get a realistic understanding of AI, its capabilities, applications and limitations. User research suggests that, in addition to training and guidance, sometimes a single example or case study can help demystify AI and change people’s attitude. During interviews several people mentioned a specific presentation, conversation, or case study that got them excited about AI and prompted them to learn more.
The emergence of AI provides government with an opportunity to rethink its approach to governing services
Survey respondents showed a good understanding of potential risks of AI. Their concerns broadly mirrored conversations about AI outside of government. Respondents were keen to understand how they should deal with things like privacy, bias, ethics, concerns around plagiarism, security, and potential for misuse.
Civil servants are aware that there are additional complications and responsibilities when using AI in government, due to the nature of the data used and the reach and impact of government services.
Departments are implementing guidelines and the Government Digital Service (GDS) regularly reviews its guidance on AI. However, teams working on live AI services acknowledge that they need to be kept regularly informed of new guidance and good practice for building, testing and monitoring AI solutions. Currently, to make sure that AI services operate as expected, teams rely on existing frameworks and governance processes for assessing and monitoring data products.
Across government, we are still learning how AI models affect service governance processes. With many teams exploring ways to validate, monitor and measure AI services, there's an opportunity to consolidate cross-government lessons learned into guidance that would be useful for anyone working with AI.
How might we enable more public servants to make the most of AI?
AI changes the way we interact with technology, and the way we build digital products. This change can be scary: it might change how frontline staff deliver their services, what digital and data roles do or how they collaborate.
A big part of change management is providing relevant, real examples, and opportunities to experiment with AI:
- leaders want to be inspired and want to understand what the real capabilities of AI are
- civil servants working on digital and data need opportunities to upskill, including ways to build AI solutions and learning with and from others
- civil servants working outside the digital and data space need clear, consistent guidance to be able to use AI in a safe and responsible way
In our research we uncovered a need for relevant examples and opportunities to share knowledge. Evolving capability, changes in attitude and more mature AI services are making this possible. Showcasing government use cases can help build awareness and inform good practice.
Participants also expressed a need to connect with others doing innovative work with AI, to be inspired, avoid duplicating work and learn from each other. Networks such as the AI Community of Practice play a key role in this process, offering opportunities to share knowledge and make connections.
What’s next
There’s work going on to address what we’ve learnt in our user research.
GDS has just published the AI Playbook for UK Government. The Playbook offers an introduction to AI technologies for government, an exploration of their capabilities and limitations, and guidance on how to buy, implement and use AI safely and responsibly. It also includes sections on governance, quality assurance of AI products and how to build AI teams. The Playbook will be reviewed and updated regularly.
If you are interested in gaining AI skills, you can also take free e-learning courses on Civil Service Learning and access off-the-shelf training on Government Campus via the learning frameworks. These courses range from short introductions to AI topics to detailed online courses on technical aspects of AI technologies and products.
To understand how other departments are using AI and connect with the relevant teams, you can attend the monthly events organised by the AI Community of Practice. You can also review the records published under the Algorithmic Transparency Recording Standard (ATRS), which lists algorithmic tools used in decision-making processes by public sector bodies.