In this episode of Nevermind the Pain Points, James Prowting-Lord and Rick Stock, two experts in digital transformation and public sector consultancy, delve into the transformative potential of generative AI in the public sector.
From tackling risk aversion to addressing data maturity, they examine how this cutting-edge technology could redefine public service delivery while highlighting the importance of responsible adoption.
Listen here or read on for an edited transcript.
James Prowting-Lord: Hello and welcome. My name is James Prowting-Lord. I've been a management consultant at Clarasys for 10 years now, eight years of which have been in the public sector. This leads us to today's topic of GenAI in the public sector: challenges and potential applications. I'm joined today by Rick Stock. Rick, would you like to introduce yourself?
Rick Stock: Hi, James. Good to see you again. I am Rick Stock. I have worked in the private and public sector for the last 20 years, focusing on digital transformation and the delivery of digital programmes and projects, with a particular emphasis in the last 10 or 12 years on the public sector. And particularly recently I've been looking at the possibilities of AI, how that is likely to develop, and how it could potentially transform life in the public sector.
James Prowting-Lord: Fantastic. Right. We can jump straight in then.
GenAI in the public sector - risk aversion vs. innovation: striking the balance
James Prowting-Lord: I think it's probably fair to say with any innovative technology such as GenAI, a challenge potentially in the public sector is there's a lower appetite for risk than you might potentially see in the private sector.
I think there are a few things really to explore there. One, validating that that is the case. And I suppose two, the impacts on this particular technology. What is it that you've seen from recent talks you've been attending?
The state of AI adoption
Rick Stock: I think my experience recently has been that a lot of government organisations and a lot of private sector organisations are in different places. A lot of them are proceeding quite well and perhaps have pilot programmes already in place, particularly organisations which are more mature in terms of data science and AI. And they are now starting to really question whether they need more robust governance around AI.
Others are very much in the foothills trying to understand the possibilities. And then others, again, are running very fast. And I think there's a lot of FOMO. People are worried about missing out and trying to understand where their colleagues in the public and private sectors are.
James Prowting-Lord: I totally agree. And I think what I'm seeing in conversations is: is that FOMO, that fear of missing out, going to outweigh this sort of risk aversion? And I think it's also about people understanding there are different levels of risk with a technology such as this, because there's huge potential, but there's not really much risk in using it for personal productivity and internal processes, providing the right governance is put around it.
Using GenAI for very public-facing things or things that drive decision-making in, say, a court environment, for example, that's fairly terrifying. You need to make sure it's pretty robust. Still, there's a bunch of stuff where you don't really have to wait for the legal landscape to mature around it to take full advantage of it, which also creates a base level of understanding of the technology, which I think you're just going to need throughout government departments going forward.
Rick Stock: Yeah, and I think to an extent, some of these questions aren't unique to AI. They are as applicable for making decisions with data and with algorithmic techniques as they are with AI.
Transparency and governance: Building public trust in AI
Rick Stock: I think that a really good question to ask is: how much knowledge do we currently have about how much of our interactions with government are already powered or underpinned by data and algorithms? On transparency, there have been various initiatives in the past to provide standards or greater transparency around the use of data, but they haven't been widely taken up.
And I think there's more effort that's needed there. And I think the rise of AI really just brings that into focus for a lot of these questions. All the answers may already be there. But we probably need to revisit those in particular for AI, given the immense pressure that government departments are now under to make use of this new and very exciting technology.
I mean, I don't want to downplay the potential of AI, but I think it's very much the case that we need to proceed with caution while acting responsibly.
James Prowting-Lord: Yeah, for sure. And I think there's a real point there around the public getting comfortable with, and accepting, a component of GenAI in a public service, in the same way that, when using it in internal teams to speed up reporting and communications, there also needs to be an acceptance in that team of flaws that GenAI may produce, but an understanding that they're counterbalanced by the efficiency. So, for example, if it's generating a report and saving someone two days of time, the readers of that report also have to acknowledge that it has been produced in a certain way.
And I think it's the same for the public: the sort of asterisk of "this was done by GenAI". Is that transparency, to your point?
Rick Stock: Yeah, and I think at the moment, as I said, transparency requires more work and isn't as good as it could be. I think, though, there is a danger that with a generic approach, rather like the approach we have with accepting cookies or accepting that your data is going to be processed, users become blind to it and don't really understand the actual implications. How many times do we just click accept cookies and move on? Would we have a similar approach with AI? Is that dangerous?
I think from a regulatory point of view, obviously the UK has still got a little way to go. In terms of the leadership of AI regulation, we're being driven by the EU. For the private sector in particular, a lot of companies dealing with customers in the European Union are having to face up to that quite rapidly in the short term, because they will potentially come under the oversight of the EU AI Act. But in the UK, I think there's still quite a lot of work to do to say, okay, how is that going to be mandated for both the public sector and the private sector?
Overcoming data maturity challenges in the public sector
James Prowting-Lord: No, for sure. And I guess another common public sector challenge is the maturity of some of the data that they have to draw upon for AI, both in terms of structure and in the way departments are often fragmented: the different central government departments often run relatively siloed. There is collaboration between them, and I know everyone involved is very purpose-driven towards improving society, but ultimately a lot of the data is stored in different places and not always in the best condition, and you want to avoid a junk-in, junk-out situation if you're using GenAI.
Rick Stock: Yeah, I think the maturity of data handling in the UK, not just in the public sector but in the private sector too, is massively variable. And I think there are plenty of places we could point to which have dirty data, issues with legacy systems, and problems with the interoperability of data.
And I think it's quite interesting that AI can potentially help with a lot of those problems, in terms of being able to join some of that data up, perhaps working with and interpreting that data to make it more usable. But to move beyond some of the more basic implementations of GenAI, the productivity tools like Copilot and Google Gemini that we see being quite widely adopted, the problems with data maturity may well be a brake on the adoption of more advanced AI techniques, as they've proved to be up to this point for things like machine learning. So I think there's a lot of work to be done there, and some opportunities as well.
James Prowting-Lord: For sure. And I think what I'm anticipating is that as this is realised, data strategy, data governance and data maturity will start to creep up the strategic agenda of various departments. They haven't always been high up in the past; they're an underlying enabler of a lot of what technology departments and government want to do, but perhaps aren't top of the priorities. I can see that really changing as GenAI's importance to achieving success becomes more apparent.
Rick Stock: Which is terrifying if you think about it, isn't it? That it is only now creeping up, if that is the case. So I think the government as a whole has to get its head around data. I think it has to do more.
I think there is definitely more progress being made, particularly in the larger government departments, around how they use data, and their capability is growing. But the way that it's growing is often siloed from their wider digital initiatives and from other operational initiatives. So I think it's still nascent in many government departments, and that's a gap that has to be bridged.
James Prowting-Lord: I think it's also between government departments. There are certain departments that are quite mature within themselves, but actually a lot of public sector problems require collaboration between multiple government departments and would need data inputs from all of them. From my time working at the Ministry of Justice, I saw a lot of these problems. Ideally you need the Department for Work and Pensions, the Ministry of Justice, the Home Office and the Department for Education all feeding into reducing re-offending initiatives, for example, but at the moment the infrastructure is not really there to enable them to do that. And I hope that maybe GenAI will wake people up to that, once you start looking at that data and the power of combining and sharing it in a more efficient way.
Rick Stock: Yeah. I'm not sure if you've read Platform Land, the book that's just recently come out from Richard Pope. He's one of the co-founders of GDS, and he paints a really compelling vision of how joining up data from those kinds of disparate sources you're talking about can really empower the next generation of digital services in government. I think there's fantastic potential there, but the reality on the ground is a little way off.
And that is due to a number of factors which all of those working in the public sector are aware of: the conflicting priorities that make it hard to fix some of those issues, problems with legacy systems, problems with skills around data, problems with investment and coordination, and really understanding the potential of data throughout a department.
So I think there are lots of challenges there. Can AI help us with some of those, in terms of helping us clean some of that data and helping us with some of the interoperability? Yes. But will the current state of that data hold back some of the potential of AI, in terms of the grander ambition of what it might do? Yeah, I think it probably will.
James Prowting-Lord: Yeah. And then, to revert to the previous point, there are some quicker wins, like low-hanging fruit, that won't be impacted by that. But the data work needs to happen in parallel so that you can truly maximise it, I suppose.
Leveraging GenAI for internal efficiency and collaboration
Rick Stock: Yeah, definitely. I've just come from a government department which is running a really good pilot around the use of some of the office productivity tools, and it works really well. It isn't revolutionary per se, but does it save time? Does it pay back an investment? Yeah, it probably does. For certain groups of people performing the tasks that things like Microsoft Copilot can help with, there are some tools now which have really quite a good capability for taking the drudge out of some of the admin, and they work at a level which pays back.
James Prowting-Lord: For sure. I think it's worth noting, when you say a level that pays back, there's also, and we kind of touched on it earlier, the acknowledgement that it may not be perfect, but it's no less imperfect than a human doing it when they've got a million other things to do. Maybe they end up writing up meeting minutes two days later, when everyone's forgotten what was talked about; the immediacy makes it so much more valuable. And I guess that moves us onto the point you referenced when talking about that book, which I'll definitely have to give a read, of data and technical skills. I think GenAI has an interesting impact in that space within government, because on the one hand we've explored some issues with the data landscape in government, but on the other hand, we actually ran a pilot where we were using GenAI to generate SQL code for non-technical users to be able to access quite complex structured databases. So, strangely, that complex data landscape suddenly becomes more accessible to non-technical users, which is a really exciting use case. There are two sides to the coin: AI can be the solution as well as throwing up a few issues.
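As a rough illustration of the kind of text-to-SQL pilot described here, such a setup typically pairs a prompt built from the database schema with a guardrail that only lets read-only queries through to the database. The schema, prompt wording and `is_safe_select` check below are illustrative assumptions, not the actual pilot:

```python
import re

# Hypothetical schema summary passed to the model so it knows the tables.
SCHEMA = """
offenders(id, name, release_date)
sentences(offender_id, offence, length_months)
"""

def build_prompt(question: str, schema: str) -> str:
    # Turn a plain-English question into an instruction for the model.
    return (
        "Given this database schema:\n" + schema +
        "\nWrite a single read-only SQL query answering: " + question
    )

def is_safe_select(sql: str) -> bool:
    # Guardrail: accept only a single SELECT statement, never data changes.
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject multi-statement payloads
        return False
    if not re.match(r"(?is)^\s*select\b", stripped):
        return False
    # Reject any write/DDL keywords anywhere in the statement.
    return re.search(r"(?i)\b(insert|update|delete|drop|alter|grant)\b",
                     stripped) is None
```

In practice the model's output would also run against a read-only database replica under a restricted account, so the guardrail is defence in depth rather than the only control.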
Rick Stock: Yeah, for sure. And I think exactly how many of those use cases are going to be suitable for AI, we don't know yet. I think though, what we do know is that there are characteristics to those kinds of solutions, which are, you know, becoming clearer.
So for example, with AI solutions there are some things we know they aren't suitable for, and we talked about that: where you really need absolute certainty in a decision, you're probably going to be looking at how you include a human in that loop. If an AI is making a recommendation rather than making a decision, that's a different kind of scenario. But I am optimistic that if we can get an understanding of the characteristics of AI in the right places, then there will be use cases which come out that can really provide a significant amount of value.
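The recommendation-versus-decision distinction described here is often implemented as a human-in-the-loop review queue: the model may suggest an outcome, but nothing is actioned without sign-off. A minimal sketch, with hypothetical names and a threshold chosen purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    case_id: str
    suggestion: str
    confidence: float  # model's self-reported confidence, 0..1

@dataclass
class ReviewQueue:
    # A threshold above 1.0 means *every* recommendation waits for a human,
    # which is the safe default for high-stakes public sector decisions.
    threshold: float = 1.1
    pending: list = field(default_factory=list)
    decided: list = field(default_factory=list)

    def submit(self, rec: Recommendation) -> str:
        if rec.confidence >= self.threshold:
            self.decided.append((rec, "auto-approved"))
            return "auto-approved"
        self.pending.append(rec)
        return "awaiting human review"

    def human_decide(self, case_id: str, approve: bool) -> None:
        # A named official records the final decision; the AI only suggested.
        rec = next(r for r in self.pending if r.case_id == case_id)
        self.pending.remove(rec)
        self.decided.append((rec, "approved" if approve else "rejected"))
```

The design choice worth noting is that approval is a separate, auditable step with its own record, so accountability stays with the human reviewer rather than the model.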
Shadow IT and the importance of official AI tools
James Prowting-Lord: For sure. And I think there's also the issue that if we don't, people will do it anyway with shadow IT. I know you pointed me towards an interesting article from Canada, where they surveyed their civil servants and 20 percent of them are already using ChatGPT for personal productivity and to aid various processes. But that's not a regulated tool. So if you don't provide people with the right tools, the right governance and the right guidance, then shadow IT creeps in, which introduces a bunch of security concerns and means you're not really getting the best impact of the technology either.
Rick Stock: Yeah. I think that was a fascinating study, and the Canadian government was quite fast to react once it realised the scale of the issue: it was publishing guidelines for its public servants quite soon after on how they should be using those tools. But I think it's a reality that tools like ChatGPT are on phones, are on people's laptops, and are easily accessible via a browser. And they're being used very widely, in an immense number of situations, and often in an unregulated way. So I think most government departments will already have provided guidance to their staff, as I'm sure have many other organisations throughout the public sector, as to when to use ChatGPT and when not, what kind of things to use it for, and to highlight some of the risks. But I think the bit we don't know is really those wider impacts of having those kinds of tools available, both within government departments and among their users. If we're having to deal with shadow IT use of AI, official AI, and then our users using AI as well, it's a little harder to predict how that's going to play out in the real world.
James Prowting-Lord: For sure. And I think there's not that much difference from other technology, where you have to keep pace with what people see in a consumer setting so that you're not missing out. Take instant messaging: a lot of government departments weren't really using it a huge amount pre-Covid, but that meant a lot of people were having conversations on personal phones, on WhatsApp and things like that. I know there's encryption there, but it's not really where you want it to be happening. Now everyone's very bedded in on Teams and Slack. So if you don't provide the technology, people will find ways around it, you won't get the benefit, and there are potential risks.
Rick Stock: Yeah, absolutely. I mean, we were talking earlier about your own use of AI in your professional life. And I think what a lot of professionals are finding is that there are tools that can aid you in your day-to-day job which, whether they're provided by your employer or not, are going to get used. What a lot of people are also finding, though, is that those tools can't do the whole job. At this point they are providing a boost, but they need careful oversight: you need to make sure that those tools, and the people who are using them, really understand their limitations. And I think that's going to be a focus for a lot of organisations: really understanding how AI is being used outside the official line, moderating that, and then bringing their staff capability up to understand the best way to use those tools, if at all.
James Prowting-Lord: Yeah, for sure. Really interesting talking about that with you, Rick. If only there was an innovative technology that could have transcribed what we've spoken about and summarised it for us; that would be incredibly valuable. I've been trying to keep up as we go along on where we've landed on some of those points.
I think it's fair to say that there are potentially riskier situations in the public sector for applying GenAI technology, but we don't think that needs to slow down people's journey. There's a perception that in the private sector you can perhaps run and break a few things along the way, but in the public sector there are enough safe uses. Those will help grow confidence and enable people to identify the more challenging, and ultimately more beneficial, use cases of this technology.
Rick Stock: Yeah, I think it's really interesting. I went to an event the other night where there was a speaker, Roger Spitz, who's a really interesting speaker on disruption. He had grouped a lot of disruptive technologies, and he had placed AI, along with climate change, in the category of inevitable.
The inevitable rise of AI: Preparing public sector strategies
It's unlikely that we're going to roll back AI; it is happening. Given the amount of venture capital money and investment that's been thrown at it, it's not going to stop. So I think it's really incumbent upon public sector organisations to work out their approach to it in a safe way. But I think it won't be possible to avoid answering the questions that AI brings up.
James Prowting-Lord: For sure. Well, thanks very much Rick, and look forward to talking to you again in the near future.
Rick Stock: Thank you very much, James.
Show notes
Guest bio:
Rick Stock is the managing director of RS Digital, a London-based digital consultancy serving both private and public sectors. With a strong background in digital transformation, he has recently been focused on the implications of integrating GenAI into public sector services. Rick emphasises the importance of equipping public and civil service leaders with the necessary skills to embrace emerging technologies, advocating for enhanced digital competency among senior officials to effectively harness GenAI's potential.
Follow James on LinkedIn here.
Follow Rick on LinkedIn here.
Contact us at podcast@clarasys.com
You might also like
LISTEN: The impact of Generative AI at work on people and creativity - PODCAST