
The honeymoon phase of the AI revolution is officially over. Boards and leadership teams are no longer satisfied with flashy proofs-of-concept or agents that “experiment” in safe, isolated playgrounds. We have entered the era of the reckoning: Can your organization actually run AI reliably, responsibly, and repeatedly inside real business processes?
For those of us who lived through the .com bubble at firms like easyJet or saw the mobile revolution unfold from the front lines at Nokia, this pattern is strikingly familiar. Every technological cycle begins with a burst of erratic innovation, followed by a plateau of “Enshittification” as companies prioritize monetization over value, and finally—if we are lucky—settles into a “Quiet Transformation” where the technology becomes invisible infrastructure. We are currently teetering between the second and third stages.
The Shift from Cleverness to Control
In the early days of any cycle, we celebrate the “clever” hack. But as Tamarah Usher notes in her recent analysis of 2026 trends, the conversation has shifted. Raw intelligence, as proxied by ever-growing LLM parameter counts, is now a secondary concern. The market is converging. The real differentiator for a CIO or a CPO today isn’t having the smartest model; it is having the most predictable one.
Why? Because the “brittleness paradox” is real. The more efficient we make our automated agents, the more catastrophic a single failure becomes if the foundation isn’t clean. We are seeing a move toward what I call “Agentic Governance”—where the risk is no longer that the AI fails, but that it succeeds in ways we cannot audit or explain. Businesses that treat AI as a mere feature are finding themselves structurally fragile. Those treating it as infrastructure—with the same rigour as a payment gateway or a cloud server—are the ones building resilience.
The Trap of the “Rot Economy”
We must be wary of “Enshittification.” As platforms seek to extract more value from their ecosystems, we are seeing the rise of “data tolls.” Salesforce, for instance, has faced scrutiny over increasing connector fees for apps tapping into its data. This is a classic symptom of the “Rot Economy”—when the technology stops improving the user experience and starts cannibalizing it for short-term gain.
For innovation professionals, the challenge is to protect the user experience from this corporate bureaucracy. In my experience at EDF Energy, making digital the lead customer channel wasn’t just about the tech; it was about ensuring the technology served a purpose—making the customer’s life easier, not just the company’s balance sheet look better. When AI is used to gatekeep support or obfuscate pricing, it isn’t innovation; it’s a debt that will eventually be called in by the market.
Moving from Pilot to Operating Model
How do we avoid these traps? It starts by rethinking the Product Model. We are seeing a major shift in how AI is procured and managed. The “Agentic Enterprise License Agreement” (AELA) is becoming a standard, as noted by Constellation Research. Vendors are increasingly moving toward flat-fee, “all-you-can-eat” models for AI agents to provide the predictability that CFOs crave.
But a predictable bill isn’t the same as a predictable product. To truly operationalize AI, leaders must focus on three core pillars:
- Accountability Boundaries: Who owns the output of an autonomous agent when it makes a healthcare or finance decision? Forrester predicts that 30% of consumers will soon use GenAI for high-stakes decisions, which forces a rethink of liability and human-in-the-loop design.
- Structural Dependency Management: If your core workflow depends on a third-party API, you are not a product company; you are a tenant. We must build with enough abstraction to switch models when the “rot” sets in.
- Operational Auditing: Governance is no longer a PDF policy; it’s an architectural requirement. Logging, environmental separation, and real-time intervention capabilities are the new prerequisites for deployment.
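The second and third pillars can be combined in practice. As a minimal sketch only (the class names, the stub provider, and the log fields are hypothetical illustrations, not any vendor's actual API), the idea is a thin abstraction boundary that keeps the model vendor swappable, with a structured audit record emitted on every call:

```python
import json
import logging
import time
import uuid
from abc import ABC, abstractmethod

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

class ModelProvider(ABC):
    """Abstraction boundary: workflows depend on this interface,
    not on any one vendor's SDK, so the model can be swapped."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubProvider(ModelProvider):
    """Hypothetical stand-in for a real vendor integration."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class AuditedAgent:
    """Routes every call through the provider-agnostic interface
    and writes a structured, machine-readable audit record."""
    def __init__(self, provider: ModelProvider):
        self.provider = provider

    def run(self, prompt: str) -> str:
        call_id = str(uuid.uuid4())
        started = time.time()
        output = self.provider.complete(prompt)
        # Governance as architecture: log who/what/when for every call.
        audit_log.info(json.dumps({
            "call_id": call_id,
            "provider": type(self.provider).__name__,
            "latency_s": round(time.time() - started, 3),
            "prompt_chars": len(prompt),
            "output_chars": len(output),
        }))
        return output

agent = AuditedAgent(StubProvider())
print(agent.run("summarise the Q3 risk report"))
```

The design choice worth noting: because callers only ever see `ModelProvider`, replacing one vendor with another when the “rot” sets in is a one-line change at the composition root, and the audit trail survives the swap unchanged.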
The Long View: Beyond the Hype
If we look back at the Luddites or the early days of the combustion engine, the pattern holds: the initial fear and excitement eventually give way to the “boring” reality of standardisation. AI is currently in its “noisy” phase, similar to the .com boom of 1999. The winners won’t be the ones with the most agents, but the ones who integrate those agents into a human-centric operating model.
Trust in AI is currently fragmented, and it is earned through action. For CEOs and CIOs, the task is to stop “trying” AI and start designing it into the very fabric of the organization. True innovation doesn’t start with a model; it starts with empathy for the user. If your AI strategy doesn’t make your user’s life demonstrably better, it’s just expensive noise.
Instead of asking “What can AI do for us?”, we should be asking: “How does our operating model need to change to make AI safe for our customers?” The answer to that question will define the leaders of the next decade. Resilience is built on foundations, not features.