Why Causal AI Is Not a Technology Upgrade, but a Leadership Test

- by Samir Sharma, Expert in Artificial Intelligence

Hanging out with John Thompson, Bill Schmarzo, and Mark Stouse makes my head hurt, a lot! These three push me to go beyond what I currently know and think, which is difficult for any human being, so I really have to dig deep to keep pace with them. My mantra when pushed to the limits is “figure it out”, and the best way for me to figure this out was to go back to something I loved studying: psychology. Here I begin.

So, I started to think a lot about why many data programmes fail. One thing I kept coming back to was that many organisations do not struggle with data, analytics, or artificial intelligence because of technology. They struggle because they lack clarity of purpose. This is an inconvenient truth in an era obsessed with platforms, architectures, and algorithms. It is far more comfortable to believe that progress depends on better tools than on harder conversations. Yet history and psychology suggest otherwise.

Now, purpose in this context isn’t that fluffy, huggy thing sitting in the corner, offering love every now and then when you need a hug. It’s the stuff that actually drives where the organisation is heading and why it exists.

I had to go back to psychology 101 over the weekend, and to Alfred Adler, one of the founding figures of modern psychology, who argued that human behaviour is not primarily driven by the past, but by purpose. People do not act because of what happened to them; they act in pursuit of goals, meaning, and contribution. Developed nearly a century ago, this idea maps strikingly well onto how organisations behave today, particularly in their use of data and AI.

Most data programmes are framed as rational, technical endeavours. In reality, they are deeply teleological. They reflect what an organisation believes it is trying to achieve, whether that belief is explicit or not. When purpose is unclear, misaligned, or politically diluted, even the most sophisticated data capabilities struggle to deliver meaningful impact.

Teleological: focused on a future goal, such as “not wanting to be judged” or “avoiding potential embarrassment,” which serves the current fear.

Teleology, in practical terms, and for this article, is the difference between analysing what happened and deliberately shaping what should happen next.

We have been using Business Intelligence for decades now and it mainly helps organisations understand what has happened. Machine Learning helps them estimate what is likely to happen next. Artificial Intelligence increasingly recommends or automates decisions at scale. Each of these capabilities is valuable and, in many contexts, essential. However, they share a common limitation: they optimise patterns without understanding why the organisation exists, what outcome truly matters, or which interventions genuinely change reality.

This is the key, and when I speak with Mark Stouse, I often end up with a headache and need to go find a dark room for a few hours. Not only is this a good thing, it makes me ponder and helps me digest what we have discussed.

With that pondering, it always comes back to the fact that many organisations become exceptionally good at explaining their past and predicting their future, while remaining surprisingly ineffective at changing either.

This is not a failure of algorithms or platforms. It is a failure of intent.

Now I’m not a deep expert in Causal AI, but, through learning and discussing it, I think it represents a fundamental shift in how organisations think about using data to drive decision-making. Unlike predictive approaches, it does not assume the future is fixed or inevitable. It does not simply ask, “What is likely to happen?” Instead, it asks a more demanding and more uncomfortable question: “If we intervene here, what will actually change and at what cost?”
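The shift from “what is likely to happen?” to “if we intervene, what changes?” is easiest to see on toy data. Below is a minimal sketch, not from the article: a hypothetical marketing example where a hidden driver (customer intent) makes the naive predictive comparison overstate the effect of an action, while a simple back-door adjustment recovers something close to the true uplift. All variable names and numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder: customer intent drives both a marketing touch (x) and purchase (y).
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.2 + 0.6 * z)             # the touch lands mostly on high-intent customers
y = rng.binomial(1, 0.1 + 0.1 * x + 0.5 * z)   # the true uplift of the touch is only +0.10

# Predictive view: naive difference in observed outcomes (what a dashboard would show).
naive = y[x == 1].mean() - y[x == 0].mean()

# Causal view: compare like with like inside each intent stratum, then reweight
# by how common each stratum is (a back-door adjustment).
adjusted = sum(
    (y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)

print(f"naive estimate:    {naive:.2f}")    # inflated by confounding, roughly 0.40
print(f"adjusted estimate: {adjusted:.2f}") # close to the true +0.10
```

The arithmetic is not the point. The point is that the adjusted estimate only exists because someone declared, explicitly, which intervention and which outcome matter, which is exactly the clarity the rest of this article argues leadership must supply.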

This is not a technical question. It is an executive one, perhaps even an existential one!

Causal AI only works when an organisation is explicit about its goals and honest about its constraints. It forces clarity where ambiguity has previously been tolerated, and exposes trade-offs that are often left unspoken. It brings decision ownership to the surface in a way dashboards and forecasts rarely do, which is something I discussed with Patrik Eriksson over the weekend on one of his posts.

For many boards and executive teams, this is where discomfort begins.

Pursuing Causal AI requires forms of clarity that cannot be outsourced, and it first requires real goals. Not broad aspirations such as “growth”, “efficiency”, or “customer centricity”, but explicit choices about what the organisation is optimising for, and what it is prepared to sacrifice. This is something I talk about in my book, as it’s been a big bugbear of mine for decades! Without it, causal models cannot distinguish between meaningful outcomes and convenient proxies.

Second, it requires decision accountability. Causal models make visible which levers matter and who controls them. This turns analytics into a governance issue (I don’t mean data governance). When ownership of decisions is unclear or diffused across committees, Causal AI does not quietly fail; it exposes the organisational reality.

Third, it requires organisational maturity. Adler believed that healthy individuals and systems move from superiority-seeking to contribution. Many organisations remain stuck in the former: chasing performance metrics, benchmarks, and competitive optics. Causal AI forces a transition toward contribution, toward understanding how actions create real-world effects and accepting responsibility for them. It replaces prediction with accountability for outcomes.

Let me come back to this, because I need to break it down in simple terms to understand it:

  • Business Intelligence explains the past.

  • Machine Learning predicts the future.

  • Artificial Intelligence automates decisions.

  • Causal AI, by contrast, demands leadership.

If an organisation cannot clearly articulate why it exists, what it is trying to change in the world, and who owns the consequences of intervention, Causal AI will appear overly complex, risky, or premature. In reality, it is none of these things. It is simply honest.

Much to my dismay, and in many organisations, honesty, not technology, is the most difficult transformation of all.

_______________________________

Samir Sharma is a Data and AI executive with over 25 years’ experience working with boards and leadership teams to turn data ambition into measurable outcomes. He is the Founder and CEO of datazuum, and author of “The Strategy Canvas: A Field Guide for Data & AI — Closing the Strategy–Execution Gap.” Available on Amazon.

If your data, analytics, or AI initiatives are struggling to deliver tangible outcomes, Samir is running a limited number of executive working sessions this quarter. These sessions focus on purpose, decision ownership, and impact. Get in touch if this resonates.