
What if the biggest disruption on the horizon isn’t a new model or breakthrough, but the way people react to intelligence that’s everywhere? That’s the provocative thesis in Barry O’Reilly’s Six Counterintuitive Trends for 2026, and it should make every product leader sit up. Barry argues — convincingly — that the battle for advantage won’t be about who has the best models, but who designs organisations to make better human decisions in an AI‑saturated world. Here are practical implications for product and technology leaders, drawn from those trends and grounded in examples you can act on today.
Redefine leadership around judgement, not control
As routine tasks are automated, the locus of value shifts to judgement: recognising when to trust AI, when to override it, and how to distribute decision authority. This isn’t managerial hand‑waving — it’s a redesign problem.
What to do:
- Create clear decision protocols that separate routine execution (let AI handle) from judgement calls (human in the loop); a sketch of such a protocol follows this list.
- Train leaders in meta‑decision skills: probabilistic thinking, scenario framing and post‑decision review.
- Measure decisions, not just outputs — evaluate the quality of choices over the last quarter, not only throughput.
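To make that first point concrete, here is a minimal sketch of what an explicit decision protocol could look like as code. The categories, confidence floor and example decisions are all hypothetical; the point is that the rule deciding between "AI executes" and "human decides" is written down and reviewable, rather than left to habit.

```python
from dataclasses import dataclass
from enum import Enum

class DecisionKind(Enum):
    ROUTINE = "routine"        # AI executes, humans spot-check
    JUDGEMENT = "judgement"    # AI recommends, a named human decides

@dataclass
class DecisionProtocol:
    name: str                  # hypothetical example: "approve small refund"
    kind: DecisionKind
    owner: str                 # the accountable human for judgement calls
    confidence_floor: float    # below this, always escalate

def route(protocol: DecisionProtocol, ai_confidence: float) -> str:
    """Decide whether the AI acts alone or a human is pulled in."""
    if protocol.kind is DecisionKind.ROUTINE and ai_confidence >= protocol.confidence_floor:
        return "ai_executes"
    return f"escalate_to:{protocol.owner}"

# Illustrative protocols: refunds are routine, pricing changes are judgement calls.
refunds = DecisionProtocol("small refund", DecisionKind.ROUTINE, "support-lead", 0.9)
pricing = DecisionProtocol("price change", DecisionKind.JUDGEMENT, "pm-pricing", 1.0)

print(route(refunds, 0.95))  # ai_executes
print(route(pricing, 0.99))  # escalate_to:pm-pricing
```

Even a toy version like this forces the useful questions: who owns the call, and at what confidence does the machine stop acting alone?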
Real‑world example: tools like GitHub Copilot have changed developer workflows from typing code to reviewing and guiding generated suggestions. Teams that treat Copilot as an assistant — and invest in developer judgement to detect subtle security or design flaws — get faster without sacrificing quality. Those that treat it as a shortcut quickly accumulate tech debt.
Prioritise learning speed over rigid plans
Barry’s second trend is a reminder: when the environment changes fast, a slow, tightly specified plan is a liability. AI increases both the rate at which you can test ideas and the number of plausible directions you could take.
What to do:
- Replace calendar-driven roadmaps with outcome-focused experiments. Align teams on hypotheses and time-boxed learning cycles.
- Protect budget and runway for rapid exploration — fund experiments, not artefacts.
- Celebrate useful failure: ensure post‑mortems are blameless and focused on what was learned, not who failed.
Example: Amazon’s long history of experimentation (including its famous A/B culture) shows how organisations that embed measurement and learning at the team level can pivot quickly when customer behaviour shifts. In an AI‑rich world, the same discipline matters, but at greater speed and with more complex feedback loops.
Design attention as a scarce organisational resource
Ubiquitous AI promises to surface more insights, alerts and nudges than humans can handle. The real challenge becomes attention management: what deserves focus, and what should be filtered.
What to do:
- Define explicit attention contracts: who gets notified about what, and under which conditions (one possible shape is sketched after this list).
- Invest in synthesis roles or tooling to turn AI outputs into concise decision options, not raw data dumps.
- Establish ‘quiet zones’ for deep work where agents cannot interrupt humans unless specific escalation criteria are met.
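As an illustration, an attention contract can be captured as plain data that agents must consult before interrupting anyone. The fields below (audience, topics, quiet hours, override severity) are assumptions for the example, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AttentionContract:
    """Who may be interrupted, about what, and under which conditions."""
    audience: str        # e.g. "on-call engineer"
    topics: set[str]     # signals this audience has agreed to receive
    quiet_hours: range   # deep-work window, e.g. range(9, 12)
    quiet_override: str  # only this severity breaks the quiet zone

def may_interrupt(c: AttentionContract, topic: str, severity: str, hour: int) -> bool:
    """Agents check the contract before pinging a human."""
    if topic not in c.topics:
        return False                      # not in the contract: filter it out
    if hour in c.quiet_hours:
        return severity == c.quiet_override  # quiet zone: escalation criteria only
    return True

# Hypothetical contract: mornings are quiet unless a sev1 fires.
oncall = AttentionContract("on-call engineer", {"latency", "errors"}, range(9, 12), "sev1")
print(may_interrupt(oncall, "errors", "sev2", 10))  # False: queued for later
print(may_interrupt(oncall, "errors", "sev1", 10))  # True: escalation criteria met
```

The value is less in the code than in the conversation it forces: every notification either fits the contract or gets filtered.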
From my time in large digital organisations, the teams that won were those that disciplined inputs. More signals without better filters equals slower decisions.
Guard against brittle efficiency
Efficiency that removes slack and diversity of thought can erode resilience. Barry warns against the “brittleness paradox”: optimising for short‑term speed can kill long‑term adaptability.
What to do:
- Protect deliberate inefficiency: maintain small exploratory teams that are judged on learning, not short‑term KPIs.
- Use redundancy strategically: multiple models or heuristics can prevent catastrophic failure when a single AI pipeline encounters edge cases (see the fallback sketch after this list).
- Rotate people through discovery roles to keep fresh perspectives across product lines.
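Here is a minimal sketch of that redundancy pattern, assuming a hypothetical primary model, backup model and conservative heuristic: if a layer fails or is unsure, the next one answers, and the last resort routes to a human.

```python
# Strategic redundancy as a fallback chain. All three "models" below
# are hypothetical stand-ins for real scoring services or heuristics.

def primary_model(x: str) -> tuple[str, float]:
    raise TimeoutError("upstream model unavailable")  # simulate an outage

def backup_model(x: str) -> tuple[str, float]:
    return ("approve", 0.55)                          # unsure on this edge case

def heuristic(x: str) -> tuple[str, float]:
    return ("route_to_human", 1.0)                    # safe default, always available

def decide(x: str, confidence_floor: float = 0.7) -> str:
    for scorer in (primary_model, backup_model, heuristic):
        try:
            answer, confidence = scorer(x)
        except Exception:
            continue                                  # this layer failed: try the next
        if confidence >= confidence_floor:
            return answer
    return "route_to_human"                           # nothing was confident enough

print(decide("unusual refund request"))  # route_to_human
```

A single optimised pipeline would have simply failed here; the deliberately redundant one degrades to a safe answer instead.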
Rework talent and team design for symbiosis
Hiring for “AI skills” is not enough. The premium shifts to people who can collaborate with systems: product managers who can curate model behaviour, engineers who understand model limitations, designers who craft humane fallbacks.
What to do:
- Build truly cross‑functional product trios that include an AI competence: product, design and ML engineering.
- Invest in human‑centred AI skills across the organisation — not just in a central ML team.
- Design career paths that reward judgement and system stewardship, not only feature delivery.
Consider how language apps such as Duolingo blend pedagogy, product and models — success comes from integrating those disciplines, not isolating them.
Ethics and accessibility are competitive advantages
With intelligence embedded everywhere, missteps are visible and costly. Ethical design and accessibility are not compliance chores; they are signals of trustworthiness.
What to do:
- Operationalise ethics: make explicit trade‑off frameworks for privacy, fairness and transparency.
- Design for diverse users from day one — accessibility expands markets and reduces risk.
- Make explainability part of your user experience, not an afterthought.
Barry O’Reilly’s piece is a welcome corrective: this is not a tooling problem alone. The work ahead is organisational — rewiring incentives, skills and habits so humans make better choices in partnership with machines. From my experience in large, transformation‑heavy companies, those who treat AI as a change to how decisions are made — rather than just what is automated — will lead.
Where to start tomorrow
Run a single “judgement audit”: map the top 10 decisions in a product area, identify where AI touches them today, and create a one‑page protocol for each (who decides, which signals matter, and how to escalate). You’ll quickly see where attention, training and design are needed.
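If it helps to picture the one-pager, here is one possible shape for a single protocol record; the field names and example content are purely illustrative.

```python
# One entry in a hypothetical judgement audit, captured as plain data.
protocol = {
    "decision": "kill or continue an experiment",
    "who_decides": "product lead",
    "ai_touchpoints": "auto-generated experiment readouts",
    "signals_that_matter": ["effect size", "guardrail metrics", "sample size"],
    "escalation": "to the group PM if any guardrail metric regresses",
}
# ...nine more, one per top decision in the product area
```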
Credit: This article builds on the ideas in Barry O’Reilly’s Six Counterintuitive Trends for 2026. Read it for the full set of trends and the reasoning that inspired these recommendations.
The hard part isn’t buying better models. It’s building teams and systems that make better human judgements. If you’re a product leader, start by asking different questions — the ones Barry raises — and make your organisation’s judgement the metric that matters.