
AI: A cultural shift in disguise

Uncover how organisational culture, including risk appetite and hierarchy, drives effective AI adoption, enabling collaboration, experimentation, and sustainable change. 


A two-part series on what really drives adoption

The conversation around AI is evolving. What started as a fascination with capabilities, from drafting to summarising and ideating, is now turning into a more grounded question: how do we adopt this responsibly, sustainably, and effectively?

Whilst there is bubbling excitement about what it can do, teams are rarely in the room when AI tools are being shaped. Among C-suite respondents, seven in ten believe that AI applications are being created in silos. It is no wonder that a majority of them (68%) also think AI adoption is causing significant internal rifts and divisions.

AI challenges organisational norms around decision-making, innovation, trust, and control. That is why we need to look at culture as much as we do technology. 

In this two-part series, I explore two key lenses for understanding how AI change progresses in organisations:

  • Part 1: Culture as the catalyst - how risk appetite, hierarchy, and psychological safety influence adoption
  • Part 2: Adoption in stealth: Shadow AI - how unofficial, underground use of AI reveals more about your organisation’s readiness than you might expect

Culture is the catalyst for AI adoption 

We often talk about AI adoption as if it’s a question of tooling, training, or regulation. But in practice, the biggest determinant of success lies in something less tangible: culture.

Whether it’s a cautious enterprise taking its first steps or a team of trailblazers pushing ahead with unapproved AI experiments, the real story starts with the organisation’s cultural DNA and its approach to risk, structure, and trust.

Risk appetite and the slowing effect of fear

Some organisations are defined by caution and driven by concerns over data privacy, reputation, or compliance. In these settings, AI adoption often gets stuck in committee. Experimentation feels too risky, so projects stall. Employees are told to “wait for further guidance.” This creates a state of inactivity, where expanded use is always “coming soon”.

By contrast, cultures with a healthy risk appetite create room for exploration. They know that not every use case will stick, and they accept that. These organisations often start small: testing AI on meeting notes, summarising documents, or generating internal content. Wins are shared openly, and learnings (particularly failures) are seen as progress.

Hierarchy decides who gets to play

In hierarchical organisations, AI adoption can become overly centralised. Decisions flow from the top, and permission is tightly controlled. For example, an executive team might approve a single AI tool, say, for summarising client reports, and restrict its use to a specific department, like legal or compliance. This may offer control, but it often comes at the cost of speed and ownership. Frontline teams might spot powerful use cases (like automating customer responses or simplifying data insights) but feel unable to act without clearance. Teams wait for consent, so experimentation becomes exclusive and narrow rather than open and expansive.

Organisations with a flatter hierarchy tend to treat AI as a shared opportunity. They encourage cross-functional collaboration and local ownership, empowering employees to test tools relevant to their workflows. Spotify, for instance, has moved toward a “Human-AI enterprise” model that equips cross-functional squads with embedded AI engineers and product owners to enable decentralised experimentation. IT or digital teams set guardrails, like approved tools and data standards, but experimentation comes from the ground up. HR might trial AI for policy drafting, while marketing builds prompt libraries for campaigns. This decentralised model isn’t chaotic; rather, it is responsible freedom within clear boundaries.

The quiet blocker: AI shame

Even in progressive cultures, there’s a silent barrier to adoption: shame.

Some employees worry they “should” already understand AI or fear they will look foolish for asking basic questions. Some feel uncomfortable admitting they’re using AI at all, especially if there’s no formal policy in place.

But there’s a deeper tension: a growing perception that using AI is somehow cheating.

People hesitate to share their AI-generated work because they fear being judged, as if they have taken a shortcut. The unspoken implication is that using AI is lazy and demands little cognitive effort from the individual.

This mindset undermines confidence and keeps experimentation hidden. It reinforces the idea that “real” work has to be done the hard way, even when AI could improve speed, quality, or creativity, and more often than not demands the same level of outcome-based thinking.

Leaders can break this cycle by normalising usage and learning. Celebrate small experiments. Share real (not polished) stories of what’s working and what’s not. Make it clear that it is okay to use these tools to accelerate everyday tasks, and throw cold water on the taboo.

Culture eats AI strategy for breakfast

There’s no one-size-fits-all playbook for AI adoption. But every successful journey has one thing in common: a culture that invites curiosity, enables safe experimentation, and builds trust from the ground up.

Before your organisation asks what to adopt, ask who gets to explore, how they’re supported, and why it matters.

Find out how we can help your business thrive