AI in action: Practical lessons on adoption and opportunity from Sage

Written by Sarah Rigby | February 05 2026

Get ready for a deep dive into the real-world challenges and opportunities of AI adoption in business. In this episode of the Clarasys podcast, Principal Consultant and AI Lead Sarah Rigby sits down with Mahbub Gani, Principal Data Scientist at Sage, to explore how one of the world’s leading software companies is harnessing AI to drive business value, transform customer experience, and stay ahead in a rapidly evolving landscape.

From Sage’s journey with classical machine learning to the latest advances in generative and agentic AI, Mahbub shares practical lessons on cutting through the hype, designing for scale, and building a culture that embraces change. Discover how Sage balances experimentation with robust delivery, manages the surprises that come with innovation, and ensures high adoption rates across teams. You’ll also hear candid advice for organisations at the start of their AI journey, including why slowing down and thoughtful strategy matter more than ever.

Whether you’re curious about the business impact of AI, looking for tips on driving adoption, or want to understand the future of agentic AI, this episode is packed with actionable insights and honest reflections from the front lines of enterprise AI.

Listen here or read on for an edited transcript.

 

Sarah Rigby: Hello, and welcome to another Clarasys podcast episode. I'm Sarah Rigby, a Principal Consultant here at Clarasys, AI Lead and customer experience expert. I help organisations navigate the complexities of AI, turning its potential into tangible business value.

It's really lovely to have our guest on this podcast today. I will let him introduce himself and the role he's currently in.

Mahbub Gani: Thank you so much, Sarah, for this amazing opportunity to talk about AI and its opportunities and challenges within corporate organisations.

So I'll just introduce myself. I'm Mahbub. I joined Sage as a Principal Data Scientist. It'll be my fourth anniversary, actually, toward the end of next month. So it's been four amazing, exhilarating years. Can't believe that it's gone so fast.

Prior to Sage, I was at a small startup called Biblio. I'm still in touch and very good friends with my former colleagues. It was a tiny startup; we were building out a recommendation platform for digital publishers. So we kind of did our best to weather the COVID storm.

Before Biblio, I was a senior data scientist with Pearson Academic Publishers, and prior to that, I did a year-long stint as a contractor with Barclays. That's basically where I launched my career into what was then data analytics and later data science, and that followed 11 years as an academic, as a lecturer in engineering mathematics at King's College London. So it was just down the road from here, based on the Strand.

And then, in terms of my pre-history, I worked at an engineering consultancy that had contracts predominantly with the MOD and GCHQ. That followed my PhD at Imperial in mathematical control, which followed my undergraduate years in mathematics and engineering at Cambridge. So that's a whistle-stop tour of my biography over the last 20-odd dizzying years.

Sarah Rigby: And I guess it's been quite a journey for you to get to where you are at Sage today. It'd be great if you could tell us a little bit about how Sage is currently using AI.

Sage’s AI journey: From classical machine learning to Generative AI

Mahbub Gani: Sure thing. So I think the way that Sage is adopting and deploying AI right now to deliver business value can be roughly divided into two, and I suspect that's the case for many organisations. First, there's what I would call classical ML or AI, which is basically AI as it was understood prior to the explosion of generative AI and language models onto the scene.

And so we have been building out AI services for our world-leading products for more than five or six years now, and that's driving, amongst other things, automation of the bookkeeping process within accountancy. A lot of the work that I do is actually in that space; I can get into a bit more detail about that as the conversation develops.

But the other area, which has been more recent, is the development of applications of generative AI, predominantly large language models, for various use cases. Some of that has gone into production, but a lot of it is still in the development stage, understandably, because it's still a very new field.

There are a lot of challenges that we are figuring out, but those are roughly the kind of two areas of application of AI within Sage.

Sarah Rigby: Amazing. Thanks for sharing. And for some of your more mature applications, thinking back to when you first started exploring AI, how did you cut through the initial hype and identify where to actually begin? And what was your very first concrete step?

Cutting through the AI hype: Focusing on business value

Mahbub Gani: I think for us it was crucial that we remained very much focused on delivering business value in a meaningful way and didn't get swayed by the hype, or too distracted by all the promises and sci-fi stories that are regularly being generated on the various social media platforms; mentioning no names, we're all familiar with them.

So first, we recognised that this was a massive distraction and there was a lot of noise, but at the same time, we had the deep expertise. We have a very talented team from many different backgrounds, but the one thing they all bring is deep knowledge and experience in data science, in ML, in AI. They're mature in their experience and past that honeymoon period with AI, so they're not so swayed by the hype. They're keen to cut through the noise and deliver real value, because we all know that judiciously applied and sensibly deployed AI can deliver meaningful value.

So we focused on the business value. We worked in partnership with our products. And we basically cut to the chase, wanted to understand from our product partners what their pain points were, what the opportunities were, and plug into their roadmap, into their strategies much earlier, you know, sooner rather than later.

Otherwise, we went through the standard process: having identified the business problem, we would build out a POC, get the results validated, and then iterate. Then we'd try to move to production as quickly as possible, while at the same time not rushing it, to make sure that the AI systems we're developing and deploying are robust and thoroughly tested and evaluated.

The other thing I'll mention is that we work very closely with our product managers. In our AI organisation, every two or three MLEs or data scientists will have a product manager. So every one of our projects is assigned a product manager who has expertise from the product side, but also knowledge of AI.

So they can act as a perfect interface between us and the rest of the business. They will often be involved in requirements capture, and they'll also get involved in evaluation and business development. So we have this multi-skilled, talented team involved in every project, which helps us remain honest and very closely connected to the business of the product.

Sarah Rigby: Amazing. Thank you for sharing, and it's great to hear that. It sounds like business value is the thing that unites your skillsets, and you're all driving in that same direction. You mentioned the transition to production and starting to scale, and from speaking to different organisations, that can often be the real challenge and difficulty. So I just wanted to understand: how did you successfully do that within Sage?

Scaling AI: Designing for growth and managing surprises

Mahbub Gani: That's a really important and a great question, Sarah. For us, when we design our AI systems, we think about scale right from the outset. It isn't a can that we just kick down the road, hoping it'll never catch up with us.

We realised from the experience we've acquired across different projects and organisations that you're going to hit the problem of scale. That's an inevitability. So you might as well start by thinking about it and designing for scale, recognising, of course, that you also don't want to slow down the process of experimentation.

So what we typically do is launch experiments and get hold of data any which way we can, so we can kick off the discovery phase. We don't delay the experimentation. Our data scientists are hungry to work with the algorithms and the data, and to start extracting and mining information and delivering value as soon as they can.

But at the same time, we'll launch a parallel activity and give some attention to the issue of scale. We'll start doing back-of-the-envelope calculations on what the expected scale issues are going to be in the future, and we'll interact with the engineers and the data scientists to get them thinking about that problem early on, so it's not something that catches us by surprise when it's already too late. We'll liaise with the products, and if we think scale is going to be a problem, we typically find different approaches to cut it down in size so it becomes manageable.

Of course, the reality is that no matter what precautions you take, no matter how much you design in anticipation, you become the victim of your own success. You start delivering value, and then people say, okay, great, you're delivering this value. This is great. It's saving revenue and so on.

By the way, there's another million customers waiting in the wings; can you please integrate with them? And of course, we don't want to turn such great opportunities away, so we end up, one way or another, making provision for that type of scale. The issue with that, of course, is that you accumulate tech debt, so you do end up having to pay the price. And so there's always that trade-off. On the one hand, you try to control that valve, making sure the pipeline is steady and smooth and you're not overwhelmed. But at the same time, you have to be aware of your competition; you don't want to turn good business away. So it's always a bit of an art, managing those trade-offs and making sure the pipeline is moving along. For us, I think it's a combination of trying to design upfront as much as possible, using our wealth of experience.

We let that experience guide our anticipation of problems we expect to arise in the future, working very closely with the product to try and forecast what the growth and scale will be, while remaining open to the possibility of surprises. So we try to bake in some redundancy so we can cope with that type of thing.

Sarah Rigby: It's great to hear that the thinking, the planning, and the prep happen way before you even need them. On some of those surprises, as you call them: what do you think have been the most unexpected, and what have you learned from them?

Mahbub Gani: Yeah. Yeah, that's the question, isn't it?

So, without giving too much away, I can speak in general terms, and I suspect many organisations encounter this issue. There we are: we have our roadmap all nicely planned out, we've made provision for growth in scale, and then along comes this amazing opportunity to integrate with a product that wasn't part of the plan.

And it's one of those situations where you realise, okay, if we turn this away, then we're missing out on all this additional business and opportunity for growth. At the same time, if we accept this opportunity at our door, and this pressure to integrate, then it's going to have to be all hands on deck.

We're probably going to lose some goodwill, right? With our data scientists, with our engineers, who don't want to be distracted and disturbed and taken away from their standard plan. So we make that judgment call, and we have made that judgment call, and sometimes it's worked out. On other occasions we've realised, okay, maybe that wasn't such a great call; let's learn and move forward. And there's a lot of repair work, of course, that we have to do. That's why making sure you have strong relationships is so vital to all of this. You have a certain credit of forgiveness, right? You don't want to exhaust it. You have to be very careful about dipping into that fund, and exercise restraint before you do so, because once you've exhausted it, then it becomes really hard.

Sarah Rigby: Yeah, absolutely. And I think with the acceptance of a new technology, in the grand scheme of the industry, there are going to be failures and learnings, but it's about how you adapt and actually apply those learnings going forward.

One of the things you mentioned there was strong relationships, and it might tie into my next question around managing the change of a new AI technology. I guess with any transformation, change and adoption can be one of the challenges. So just wanted to understand how you have achieved such high adoption rates and how you make people excited and accepting of AI when it can be quite a scary thing to people, or can change the way that they're working?

Driving AI adoption: Building trust and embracing change

Mahbub Gani: Absolutely. I think we're living through unprecedented times, which can very quickly become a cliche, but I really believe the pace of change we're all witnessing is unlike any technology change, at least that I can remember, over the last few decades. And that causes an understandable combination of excitement and apprehension, of anxiety, in equal measure.

So you're absolutely right to wonder how we strike that balance. You want to make the most of all these opportunities that have arisen, but at the same time, you don't want to alienate and scare off the very people you have to rely upon to actually realise your dreams and your goals, and to implement all of these amazing technologies and put them into production. It has been something which, at least for me, has been at the forefront of my concerns in terms of working with my colleagues, and also managing upwards and sideways and so on.

I think the way we try our best to strike that balance is, first of all, to give full recognition and acknowledgement to our talented team, who are at the forefront, who are having to manage this deluge of information, this information overload, while at the same time focusing on the task at hand. Practically, what that means is we do the 20% rule. We try to implement that in our own way, making sure everybody has some recognised time to be able to explore, to play.

I think it's really, really important that we cultivate a kind of ludic spirit, so that people have a safe space to play with these tools without too much pressure, and can take their time growing their skillset and learning new skills. So we give them that reassurance and put them at the forefront.

At the same time, we connect them up with the business so they partner up, and both sides build a rapport. They understand each other's worlds, each other's domains, so there's a kind of respect and acknowledgement of what both sides bring to the whole issue. Then what we hope is that we cultivate this space where there's playful interaction, but also energy and a lot of goodwill being exchanged while solving a problem and delivering business value.

It's tricky to implement and to realise, of course, but all in all, I think at Sage we've done a pretty good job of it. We still have a long way to go to get it completely right, but I feel that we're moving in the right direction. And I think the most important thing about this journey we're all undertaking is being humble and open to the future, and recognising that there are a lot of unknowns, a lot of movement, a lot of change. It's good to be excited, but it's also important to be restrained, to question, and to give some time to winnowing the real insights and the real value from the noise, and there's a lot of that out there: the technologies, tools, and approaches which have been thoroughly tested and stand a good chance of surviving the various bubbles we've seen, and the many more that are in our path.

The role of culture: Aligning vision, values and leadership

Sarah Rigby: And just focusing on something you said there about having the time and being able to challenge and having time to play as well, how much of an organisation's culture do you think impacts the ability to accept and effectively leverage AI?

Mahbub Gani: Massively. It's hugely important. I think it's make or break.

Of course, you can expect a culture that grows organically from the bottom up. The engineers, the data scientists, the product managers, through their everyday activity, are going to contribute to the emergence of a culture, a way of working, an ethic. Some of it is going to be prescriptive and by design, but often it isn't; it comes just through the characters that are involved. And it's very much to do with the hiring strategy: have you assembled the right people? That's always going to be a bit of a risk. I can say that I'm extremely fortunate in Sage AI to be working with an amazing team. At the same time, it has to be driven from the leadership downwards.

I've experienced the best and worst of that in my career. There have been cases where, though there was the opportunity for an amazing ethic and way of working within my team, I found that it just jarred. It didn't mesh well with the culture that came from the top.

They're shooting past each other, not talking to each other; there isn't a proper match between the cultures. They're not aligned. The culture of the leadership is moving in one direction. For example, the focus might be on rapid delivery: as soon as there's a shiny new toy, you have to see it through.

You have to adopt it and see it through to production. Whereas the culture among the data scientists and engineers might be one of taking their time, being more methodical. It may be risk-averse, for instance. And I'm not judging either approach; I'm just stating it as a fact that if you have these two cultures colliding, it isn't a healthy mix.

At the same time, if they're completely aligned, you produce basically an echo chamber, right? You end up moving in a certain way, everybody patting each other on the back, thinking, great, we have an amazing culture. But actually, it's only amazing because you're all aligned; it might be the wrong culture for that particular area of the market. In fact, the market might punish you as a result. It's all well and good internally to think that you're doing very well, but it may not be the right culture, the correct approach, for the particular business that you're in.

So I think you're absolutely right to suggest that culture needs to be given attention. It has to be put foremost, right at the front. And it cannot be contrived; that's the nature of culture. You cannot hire in a culture consultant and expect the problem to be solved. Although they're important, and they can definitely instigate the process and encourage and steer it in the right direction, it has to be self-generated. It has to emerge naturally among the people. It's very much a human process, and that really has to be recognised.

Sarah Rigby: And I guess people have to believe in the vision, the values, and the purpose of the organisation to buy in and want to drive that culture. Something you've mentioned on a number of occasions is business value, how it drives what you do and helps people stay focused. Could you bring that to life for us? And for you, I guess, AI is so embedded in the product that you sell to your customers. How has it impacted the customer experience for your end clients?

AI’s impact on customer experience: Automating accountancy and unlocking value

Mahbub Gani: So one of our flagship services is an ecosystem of AI models and the supporting engineering infrastructure and technology platform for automating various accountancy processes.

We all know accountancy: it's a very established, highly regulated process that's been evolving for several hundred years, and a necessary one that all businesses have to adopt and conform to. Frankly, if you don't get it right, then you could end up in prison or paying fines.

So it's something that people are incredibly sensitive to and, completely understandably, risk-averse about. It's a very cautious community; trust is uppermost. Any new technology that's introduced, that promises to change the world of accountancy, has to acquire trust amongst accountants, amongst bookkeepers, amongst businesses.

Sage have been around since the late eighties, so they're very well aware of the needs, the desires, and the importance of trust within the community they serve. We seized upon an opportunity to automate that process and to reduce a lot of the irritating but, granted, extremely important work and processes.

And we realised that AI was sufficiently mature, that there were a wealth of algorithms that could actually solve this problem. It was a case of selecting the right one and then building the engineering system around it so that it could deliver trusted AI services and solutions for our accountancy products.

So the programme I got involved with is one of the biggest and most successful AI projects within Sage; it basically put AI on the map for Sage. It automates the process of bookkeeping and, more generally, accountancy services. I'm really pleased to say it's now been integrated with dozens of products, and it's directly impacting revenue.

We've had amazing stories from our customers, who are really happy and pleased with the progress. There are testimonials everywhere about how much it's reduced the manual labour on their part, and how they've been freed up to do the more valuable, exciting, interesting business-related activities.

Sarah Rigby: So it sounds like, from a Sage point of view, it's driven product innovation and obviously helped you acquire new customers and retain existing ones. From your end clients' point of view, how would they tangibly measure whether it's been a success for them?

Mahbub Gani: So I think there are some tangible metrics for them. One is the extent to which this particular AI automation service has actually reduced the amount of manual, tedious work that they have to do. And there's definitely been an impact there: we have reports from many customers who have experienced that benefit, and they're singing our praises now, which is really great to see.

But also, we're getting feedback from our customers that it has unlocked other possibilities they may not have realised were available to them; cashflow management is one. They come back to us and say, oh, perhaps you can do this for us, and that then launches another project.

The other thing to mention, and we haven't really touched upon it yet, is of course the generative AI side of things. About a couple of years back, we launched our Copilot brand, and that's been evolving; we've been through several iterations, and aspects of it have now matured. There are many other dimensions of it we're still unlocking and exploring, but we're beginning to see signs of the value being delivered. It is, of course, still a nascent technology. It still needs to be tested, and not just by our organisation but by many others, but the early signs are that it is delivering value.

Specifically, that's in the form of a chat partner: an AI partner that supports the work and processes of our accountants, our bookkeepers, and the users of our software. They basically have an assistant that is there to provide a helpful hand but isn't too intrusive. We've deployed that in many of our products now, and it's turning out to be successful in many ways, but there's lots of work to be done to continue improving that particular feature, or set of features.

Sarah Rigby: It's great to hear practical examples and the value it's delivering. A few years ago, generative AI had a lot of hype around it: was it going to deliver value? Obviously, we're now starting to see that across organisations. A question for you: is agentic AI now the new gen AI that people are worried about missing out on? From your point of view, do you see value there? Should businesses be focusing their attention on how they can leverage it in their organisations?

Agentic AI: Hype or real value?

Mahbub Gani: Excellent question. So I have a peculiar perspective on this, quite idiosyncratic, based on my reading and experience with agents and LLMs. I'll throw it out there; I'm happy to be challenged on it, but it's a perspective.

For me, I make a conceptual distinction between agents and generative AI, specifically large language models. I like to view agents more as a software paradigm and framework. It's analogous to the distributed computing frameworks that emerged, say, 15 to 20 years ago, when big data arrived on the scene and it was kind of the wild west, right?

The community realised that they had way more data than they had the technology to work with in a robust, reliable, and trusted manner. There was no framework to start with. People were doing their best with old technologies: with SQL databases and whatever was available in terms of distributed, decentralised computing back then.

But the frameworks available back then just weren't ready. They weren't designed for the scale of data that had become available. And what made progress possible was, of course, the hardware, which always moves faster than the software that has to catch up; hardware advances then unlock possibilities for software.

So I think something analogous is happening within AI, with generative AI specifically. A few years back, when LLMs were introduced, there was remarkable success, as many experienced with ChatGPT when it launched. There was a rapid, unprecedented surge of interest in gen AI when the tool itself was democratised. Non-experts were engaging with it, and that drove organisations, intrigued by this technology, to ask how it could unlock business value. So very quickly, companies, organisations, and teams were trialling it, and then we started seeing startups and the usual phenomena of bubbles and so forth.

So I think as all that matured, the practitioners, engineers, and scientists realised that we need a more robust way of managing these models, and to give attention to the interoperability between them. Agentic AI is really the culmination of all of that investigation, experience, and troubleshooting.

And for me, it is first and foremost a software framework, a paradigm, ideally set up so that an LLM is the beating heart of an agent. What it offers is a set of tools and processes, a scaffold and an infrastructure around the LLM, so that the LLM can be used in a more robust, seamless, and reliable manner.

So for me, this is really what agents are offering as a possibility. It allows systems to be developed with LLMs at their core in a much cleaner, more elegant, more robust fashion than would have been possible without that framework. Of course, there are competing frameworks, so now we have another layer: the interoperability between these frameworks, so that you can seamlessly move from one to the other.
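The "scaffold around an LLM" idea described here can be sketched in a few lines. To be clear, everything in this sketch is an illustrative assumption rather than Sage's implementation: the tool names, the stub that stands in for the model, and the loop structure are all invented for the example. The point is that the tool registry, the step loop, and the running history are ordinary, testable software; only the decision step needs a real model behind it.

```python
from typing import Callable

# Tool registry: plain functions the agent is allowed to call.
# Names and behaviours are hypothetical, for illustration only.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_balance": lambda account: f"balance({account}) = 1240.50",
    "final_answer": lambda text: text,
}

def stub_llm(history: list[str]) -> tuple[str, str]:
    """Stand-in for a hosted model: pick the next (tool, argument) from context."""
    if not any(h.startswith("lookup_balance") for h in history):
        return "lookup_balance", "ACME-001"
    return "final_answer", "ACME-001 has a balance of 1240.50"

def run_agent(task: str, max_steps: int = 5) -> str:
    """The scaffold: loop, dispatch tool calls, feed results back as context."""
    history = [f"task: {task}"]
    for _ in range(max_steps):
        tool, arg = stub_llm(history)       # decision step (model)
        result = TOOLS[tool](arg)           # execution step (plain software)
        history.append(f"{tool} -> {result}")
        if tool == "final_answer":
            return result
    return "gave up: step budget exhausted"

print(run_agent("What is the balance of ACME-001?"))
# prints "ACME-001 has a balance of 1240.50"
```

Swapping the stub for a real LLM call leaves the rest of the scaffold, the part that makes the system robust, testable, and bounded, unchanged, which is one way to read the "agents as a software paradigm" view above.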

So, if you like, the agents are the software and the LLMs are the hardware. And in the same way that hardware follows its own trajectory, Moore's Law and so on, there's an equivalent scaling law for LLMs, which is being studied; there's lots of literature available that examines this.

Sarah Rigby: It's a very interesting view. Listening to you, I was reflecting on some people I've spoken to recently who have quite contrasting views. So, if you want to start using agents, and you might not be very mature in AI in other ways, do you need to invest a lot upfront to get your data and your processes running in the right way? Or can you almost bypass that and get an agent up and running quickly to deliver value? I'm interested in your view on whether there is a right way.

Mahbub Gani: I mean, the short answer is I don't think there is a right way or a wrong way. I think that we are going through this process of collective learning and discovery and experimentation.

There are going to be winners and there are going to be losers, and I think it'll take a few years before we're able to pass judgment on all of that. It's the reliability, the trust in the scaling law associated with LLMs, that's the key issue. Everything stands or falls on that. If you take it on faith, a rational faith, then I think it's a rational choice; I get it. But the flip side is those organisations, individuals, and teams who are not so optimistic, who are not prepared to put their faith in LLMs. Despite the emerging frameworks and so forth, they recognise that yes, it's possible to put your faith in it and go ahead and build all these applications, or, if you're a startup, to win all this VC funding on those kinds of promises, but that basically there's a catastrophe waiting for you at the end of the road. And I think that is also, from their perspective, a rational choice: to be more cautious.

But the issue, of course, is: in the end, which one is the market going to favour? Is the market bullish or bearish when it comes to AI and its technology? Right now, we're observing a bullish market that favours those who are prepared to put their faith in this. And there's something to be said for that.

If you look at the way technology typically evolves, and this is, again, a certain faith in the evolution and development of technology, if you throw enough faith and money at something, then sooner or later, somewhere, it's going to unlock a new possibility. The problem with that view, of course, is: at what cost?

Statistically, yes, absolutely, collectively you might win. But there are so many losers and so many aspects of life that suffer as a result. So given the costs we're talking about, given the stakes at play, personally, I would counsel more thought and reflection, collectively, across the board.

Sarah Rigby: Yeah. Interesting. And I think your risk appetite for AI versus, say, other digital transformation projects is quite different, and you've got to accept what comes with that as well.

Moving on to the final part. You've given some great examples of how Sage is a trailblazer in the AI space; it's helped you as a business, and it's helped your customers. Reflecting on your experience and success in AI, what is the most honest piece of advice you'd offer to another company that might be at the start of their AI journey? It's overwhelming, because there's a lot of noise, and, as you said earlier, there's a fear of missing out. What do they do? Where do they start?

Advice for AI beginners: Slowing down and building thoughtful strategies

Mahbub Gani: The best piece of advice, if I'm in a position to give it, is slow down, be more thoughtful, apply more reflection. I know that's hard given the pressures of the market, and my fear is that a lot of AI strategy, a lot of AI roadmaps, are driven mainly by FOMO.

There's this huge anxiety, like we've never seen before, about missing out on the next shiniest toy, and all these promises that Company X is making about what AI has already achieved for them when the data hasn't been collected. A lot of these are just hypotheses. So my first recommendation would be: take your time.

I think we need to exercise some care and thought, and this is going back to your earlier question about culture. I think that those organisations that adopt this as a meaningful culture, you know, from the leadership and all the way downwards and across the organisation, those are the organisations that are going to acquire and earn respect in the long run.

But it's tricky. It is very hard, very challenging, given the pressures. I'd really call for that kind of enlightened, thoughtful, strategic thinking on the part of those organisations that are at the forefront. I would argue it's especially incumbent on those that have some distance between them and everybody else: they're already leaders, the ones leading the charge in terms of AI. It's upon them, more than anybody else, to be a lot more responsible. But I completely understand how tempting it is to be swayed by all the promises.

I don't want to suggest it's a case of scrambling to shut Pandora's box; it's more recognising that we've opened it up now. But perhaps put on some shades, right? Don't be too dazzled.

Sarah Rigby: I like that message about slowing down, because it's different to what people are saying in the market.

Well, thank you so much for your time today. It's been really insightful, and thank you for sharing all your experience.

Mahbub Gani: My pleasure.

Thank you for joining us for another episode of Nevermind the Pain Points. If you enjoyed this episode, please subscribe on your favourite podcasting app or site. We would love your feedback, so please leave a review or drop us an email at podcast@clarasys.com

Show notes

Learn more about Sage on their website here.