The AI Hangover of 2025: What Looked Brilliant, What Hurt, and What Comes Next

By the end of 2025, it became impossible to ignore a quiet but decisive shift in the data and AI landscape. The conversation changed tone. Where there had once been breathless optimism and grand claims of transformation, there was now something closer to realism, and, in many cases, reckoning.
Yesterday, I hosted the final Sharma Smackdown of the year, and was joined by Bill Schmarzo (The Dean of Big Data), John Thompson (The Great and Powerful – self-proclaimed, I might add), and Mark Stouse (The Maestro of Decision Intelligence) for what was less a trends discussion and more a forensic examination of a year that promised much and delivered unevenly. What emerged was not cynicism, but clarity. The industry has not failed. It is finally growing up.
This article captures the most important insights from that discussion, not as sound bites, but as a set of hard-earned lessons that should shape how leaders approach data and AI in 2026.
Generative AI: A Powerful Tool That Was Asked to Be Something Else
The defining theme of 2025 was not the rise of Generative AI, but the recalibration of expectations around it. For much of the past two years, GenAI was positioned as a decision-making engine — a technology that could replace or outperform traditional analytics and human judgement in complex, high-stakes environments.
That belief did not survive contact with reality.
Generative AI excels at pattern recognition in language. It is probabilistic, not causal. It predicts what is likely to come next based on past data, rather than explaining why something happens or what will occur under different conditions. When organizations attempted to deploy GenAI in regulated, risk-sensitive domains such as finance, compliance, and operations, the limitations became clear.
As Bill Schmarzo observed during the Smackdown, decisions based on averages inevitably produce average outcomes. In environments where the cost of error is asymmetric, those outcomes are not merely suboptimal; they are dangerous. False confidence, delivered fluently, proved far more problematic than uncertainty acknowledged honestly.
Bill Schmarzo: “When you make decisions based on average, at best you’re going to get average results… for really critical important decisions in regulated industry or high impact areas… you can’t use correlation-based decision making.”
This does not mean GenAI failed. On the contrary, it delivered extraordinary value as a contextual and enabling technology. It democratised access to information, accelerated awareness of AI across the workforce, and reduced friction in knowledge discovery. What failed was the assumption that it could, or should, sit at the core of enterprise decision-making.
The lesson from 2025 is not to abandon GenAI, but to reposition it. It is an ensemble player — powerful in support, weak in isolation. Decision intelligence still requires causal reasoning, predictive modelling, and human accountability.
Autonomous Agents: A Vision That Arrived Before Its Foundations
Few ideas captured the imagination in late 2024 as strongly as autonomous agents. The narrative was compelling: digital workers operating independently, executing tasks, coordinating actions, and driving efficiency at scale.
By the end of 2025, that narrative had softened considerably.
As John Thompson noted, agents did not fail because the concept was flawed, but because the ecosystem was unprepared. What emerged instead was a proliferation of loosely defined “agents”, many of them little more than rules-based automation with a new label. The promised leap to sophisticated, production-grade autonomy largely failed to materialize.
The industry spent much of 2025 building what it should have built first: interaction frameworks, governance models, APIs, and control mechanisms. These are not glamorous, but they are essential. Without them, autonomy becomes risk, and experimentation becomes liability.
John Thompson: “Any of us that have built systems realise you usually build a system three times: once to see if it works, once to get it right, and once to scale it… [with Gen AI] this whole idea that you’re one and done, that’s not happening.”
In hindsight, the disappointment around agents reflects a familiar pattern. We consistently underestimate the effort required to operationalize intelligence and overestimate the readiness of organizations to absorb it. Agents may yet play a meaningful role, but their impact will depend on whether 2026 becomes a year of disciplined implementation rather than renewed speculation.
The End of Experiments and the Rise of Fiduciary Accountability
Perhaps the most consequential insight from the Smackdown came from Mark Stouse, who framed the year ahead not as a technological transition, but a governance one.
The era of AI as a corporate science experiment is coming to an end.
Regulators, auditors, and investors are increasingly focused not on whether organizations are using AI, but on how those systems influence decisions that affect shareholder value. The concept of a “fiduciary defense of AI” is no longer theoretical. Leaders will be expected to explain not only outcomes, but mechanisms. Transparency, traceability, and intent are becoming mandatory.
This has profound implications for strategy.
Mark Stouse: “All companies are going to have to be able to defend their use of AI at any and all levels of their operations, not just the reporting. And that’s a big deal.”
In 2026, success will favor organizations that reverse the prevailing order of operations. Instead of starting with tools and searching for use cases, they will begin with business outcomes, decision ownership, and measurable impact. Only then will technology selection make sense.
It also places renewed importance on data foundations. CRM, ERP, and governed data assets, often dismissed as unexciting, are reasserting themselves as strategic differentiators. Without them, AI initiatives become fragile, difficult to defend, and impossible to scale responsibly.
What 2025 Really Taught Us
The story of 2025 is not one of failure, but of correction. The industry chased novelty, learned its limits, and began the process of reorientation. The organizations that struggled most were not those that moved too slowly, but those that moved without purpose.
The next phase of data and AI maturity will not be defined by ambition alone. It will be defined by clarity, restraint, and accountability. Leaders will need the courage to say no to initiatives that sound impressive but lack substance, and the discipline to invest in capabilities that compound over time rather than impress in the short term.
Samir Sharma: “Go into 2026 with clarity, courage and a healthy skepticism of shiny things. If someone tells you AI will revolutionize everything overnight, you have my permission to walk away very slowly.”
As we enter 2026, the question facing executives is no longer what AI can do. It is what decisions their business cannot afford to get wrong, and how intelligence, in its many forms, can support those decisions responsibly.
That is the real lesson of the Sharma Smackdown. The hype did not disappear; it simply stopped being enough.
–
Samir Sharma is CEO of datazuum, host of The Data Strategy Show, and author of The Strategy Canvas: A Field Guide for Data & AI, Closing the Strategy–Execution Gap. He works with leaders to turn data and AI strategies into measurable business outcomes.