What Has GenAI Taught Us? 3 Years Later & What Comes Next

Solutions Review’s Executive Editor Tim King answers the question “What has GenAI taught us?” 36 months later, and looks at what comes next.

Three years into the generative AI era, the conversation has shifted in a subtle but important way. The early phase was dominated by shock and speculation—job apocalypse headlines, AGI timelines, and breathless predictions about overnight transformation. This panel, closing out Insight Jam LIVE!, reflected a more grounded reality.

The technology has advanced faster than most expected, yet its impact has unfolded more unevenly, more humanly, and more messily than the early narratives suggested. What we’ve learned is not just about what GenAI can do, but about what we still struggle to do with it.

What We Got Wrong, and What We Underestimated

One of the clearest points of agreement was that the widely predicted “overnight job apocalypse” never arrived. Entire professions were not wiped out in a single stroke. Teachers were not replaced by AI tutors. Analysts were not instantly automated out of relevance. Instead, GenAI arrived as an accelerant—powerful, disruptive, but deeply dependent on human context, governance, and readiness.

At the same time, the panel made clear that this doesn’t mean the disruption isn’t real. The labor impact appears slower and more structural than sensational headlines predicted, but the direction is unmistakable. AI became a crutch before it became a craft. Organizations rushed to adopt tools before redesigning workflows, data foundations, and expectations. Workers, meanwhile, adopted AI faster than institutions adapted to it, creating a widening gap between individual capability and organizational operating models.

What many underestimated most was how fast the models themselves would improve. Capabilities that once felt five or ten years away—high-quality code generation, reasoning over complex tasks, near-real-time iteration—arrived far sooner. Yet, paradoxically, these advances didn’t translate cleanly into productivity gains everywhere. Not because the models failed, but because organizations didn’t yet know how to activate them without simply paving old cow paths.

The Real Bottleneck Is Workflows, Not Intelligence

A recurring insight was that most existing workflows were designed around human limitations, not machine capabilities. Simply layering AI on top of those workflows misses the point. The opportunity is not to redesign humans to fit machines, but to rethink work itself—end to end—around what AI makes possible.

This explains why enterprise adoption has been uneven. While consumer use exploded almost overnight, enterprise trust and deployment lagged. AI flourished in narrow domains like code generation and customer support, where value was immediately legible. Beyond that, organizations hesitated—not because AI lacked power, but because trust, governance, and accountability frameworks lagged behind the technology.

The lesson after 36 months is clear: AI is not primarily a technical challenge. It is an organizational, cultural, and leadership challenge.

Education, Assessment & the Collapse of “What You Know”

Few areas felt this tension more acutely than education. Teachers saw quickly that traditional assessments—essays, homework, even exams—were no longer reliable measures of learning. But rather than retreating, many educators adapted faster than expected. They recognized that AI doesn’t eliminate thinking; it exposes whether thinking is happening at all.

A powerful shift emerged: moving away from assessing what learners know toward assessing what they can do. Learning as performance, not recall. Process over product. Prompting over prose. Iteration over final drafts. In some cases, teachers stopped grading essays altogether and instead graded how students interacted with AI—how they refined prompts, questioned outputs, and demonstrated judgment.

This shift mirrors what employers are now facing. When the cost of producing “acceptable” work approaches zero, taste, judgment, communication, and problem framing become the real differentiators. AI didn’t eliminate rigor; it moved rigor upstream.

The Taste Problem and the Rise of “Good Enough”

One of the most unsettling themes was the erosion of standards. When AI can generate large volumes of passable work instantly, the temptation is to accept 60% solutions and move on. But that creates a dangerous equilibrium: output increases while quality quietly declines.

The panel was clear that AI should raise the bar, not lower it. If drafting is effortless, the expectation should be excellence—not speed for its own sake. Yet this is not a technical issue. It’s a cultural one. Organizations must decide what they reward: velocity alone, or thoughtful, high-quality outcomes.

Without that recalibration, AI risks flooding systems with “slop”—content that is syntactically correct but intellectually thin. And once AI-generated work starts talking primarily to other AI systems, humans are left managing noise instead of meaning.

The Deeper Questions GenAI Is Forcing Us to Confront

Beyond work and education, the panel moved into more uncomfortable territory. GenAI is not just changing productivity; it’s forcing society to ask what is uniquely human.

When machines can write prose, compose music, pass exams, and simulate empathy, where does meaning reside? Is humanity defined by outputs—or by presence, relationships, and inner experience? Several panelists pointed to a troubling trend: a growing discomfort with struggle, silence, and long-form thinking. Frustration is increasingly treated as a bug, not a feature of learning or growth.

In this sense, AI acts as a mirror. It reflects back our impatience, our over-optimization, and our tendency to conflate digital productivity with human worth. It also exposes how much of intelligence has always lived between people—in conversation, collaboration, and shared context—rather than inside isolated minds.

Governance, Power & the Concentration Problem

On regulation, realism prevailed. The panel largely agreed that sweeping guardrails are unlikely to meaningfully slow progress, especially in a competitive geopolitical environment. Open-source models already exist. Compute is widely distributed. The genie is out of the bottle.

That said, there was strong concern about governance at the organizational level. Data sprawl, weak risk management, and casual misuse of AI create legal, ethical, and reputational exposure that many companies are ignoring. The more profound worry, however, was long-term: the concentration of intelligence and compute in the hands of a few entities.

As AI capability centralizes, the question becomes how societies hedge against excessive power concentration. The uneasy answer was not technological, but human: collaboration, collective intelligence, and the kinds of problem-solving that emerge only in groups. AI may outperform any individual, but it still struggles to replicate the emergent intelligence of diverse humans working together.

What Comes Next

Thirty-six months in, the panel’s outlook was neither utopian nor dystopian. It was sober. GenAI is not replacing humanity—but it is pressuring us to decide what we value, what we reward, and what we’re willing to defend.

The future likely belongs to those who:

  • Treat learning as performance, not memorization

  • Redesign work around AI capabilities instead of human constraints

  • Raise standards rather than settling for “good enough”

  • Invest in human judgment, collaboration, and taste

  • Pair intelligence with wisdom, not just scale

If there was one unifying takeaway, it was this: AI will not defeat humans who work together, but it will overwhelm those who remain isolated, complacent, or unclear about what matters. Thirty-six months later, the technology is no longer the biggest unknown; we are.

