Are Firms Ready for the Cost of AI Failures?
I attended Big Data London yesterday and felt the AI buzz in the air!
I started to think about the history of the data industry as I meandered through the conference floor, steering myself away from the hype!
With that said, the history kept running through my mind, as this feels like something of a “history repeating itself” moment. So here is my attempt at a historical timeline of the hype, and often the disillusionment, of each era:
1950s-1960s: Early Data Processing and AI Concepts
Hype: The birth of computers and early AI concepts like neural networks.
Disillusionment: Limitations of early hardware and software, leading to skepticism about AI’s potential.
1980s-1990s: Expert and Knowledge-Based Systems
Hype: The popularity of expert systems and knowledge-based systems. (I do remember studying these at university.)
Disillusionment: Limitations in handling real-world complexity and lack of scalability.
2000s: Data Warehousing and Business Intelligence
Hype: The rise of data warehousing and business intelligence tools.
Disillusionment: Data silos, integration challenges, poor data quality, and weak governance. Nothing seems to have changed!
2010s: The Rise of Big Data and “Data-Driven”
Hype: The emergence of big data technologies like Hadoop.
Disillusionment: The complexity of managing big data infrastructure and the difficulty of extracting insights. Companies over-invested in infrastructure without clear use cases or business alignment; Gartner warned of an 80% failure rate due to poor governance and a lack of actionable insights.
2010s-2020s: AI and Machine Learning
Hype: Breakthroughs in deep learning and machine learning, leading to applications like image recognition, natural language processing, and recommendation systems.
Disillusionment: Concerns about bias, explainability, and ethical implications of AI.
2020s: AI and Analytics Convergence
Hype: Increased focus on AI-driven analytics, predictive modelling, and automation.
Disillusionment: Challenges in scaling AI models, ensuring data privacy, and addressing regulatory requirements.
Now we stand on the precipice of GenAI and all the hype that comes with it! Yes, there will be uses for it, but not to the extent the industry wants you to believe!
AI offers immense potential for value creation, innovation, and efficiency, but it also carries real risk. As companies increasingly invest in AI, it’s crucial to ask:
Are they prepared for the cost of failure?
AI projects often fail due to poor data quality, unclear use cases, misalignment with business objectives, and a lack of robust governance.
Sound familiar in a historical context?
These failures can cost millions and lead to reputational damage, loss of trust, and missed opportunities. Yet many firms still lack the right operating model to integrate AI effectively and sustainably. Don’t get me started on operating models!
Leaders need to ensure their data and AI strategy is integrated and tied to measurable business outcomes.
It’s not enough to experiment with AI; companies must operationalise it with a clear purpose and a framework that accounts for both value and risk.
The cost of AI failure is real.
But with the right strategy, structure, and mindset, those risks can be mitigated to turn AI into a true driver of long-term transformation and value.
Is your organisation ready to truly transform, or will it just continue to believe the hype?