
Malthus was wrong. He predicted that population growth would outpace food production, leading to inevitable societal collapse. Technology proved him spectacularly wrong — agricultural productivity grew faster than he imagined possible, and the catastrophe never came.
But there is something worth salvaging from his framework — and I use it carefully, because Malthusian has become a loaded term. It has been borrowed to justify arguments about population control, resource nationalism, and worse. That baggage is real, and none of it is what I am reaching for here.
The part I am borrowing is narrower and more mechanical: unconstrained growth in one dimension creates bottlenecks somewhere else in the system. That logic applies to businesses today.
AI is proving it.
The Abundance AI Creates
AI has made a set of things effectively unlimited: content, code, analysis, proposals, first drafts, summaries, options. The constraint that governed knowledge work for decades — the time it took a skilled person to produce something — has largely collapsed.
This is genuinely valuable. It is also creating a new and underappreciated problem.
When production is no longer the bottleneck, everything upstream of production becomes more important. The constraint does not disappear. It moves.
Where the Bottleneck Has Gone
The constraint has moved from generation to discernment.
When anyone can produce a strategy document, the scarcity is no longer in the writing. It is in knowing which strategy document to trust. When AI can generate fifty options in the time it once took to produce five, the bottleneck is no longer ideation. It is the judgement to know which three are worth pursuing. When analysis is abundant, the constraint is synthesis — connecting findings across sources, filtering noise from signal, landing on something that is actually actionable.
This is not theoretical. The pattern is showing up consistently across functions.
Engineering teams that adopted AI coding tools saw their output of code increase dramatically. So did their review backlog. The bottleneck moved from writing code to evaluating it — and the skills required to review AI-generated code at speed are not the same as the skills required to write it.
Marketing teams that adopted AI content generation found themselves drowning in drafts. The bottleneck moved from writing to editing, quality judgement, and the harder question of whether any of it was worth publishing at all.
Executive teams with access to AI-generated analysis found themselves in longer, harder strategic debates — not shorter ones. The bottleneck moved from producing scenarios to evaluating them, and the abundance of plausible options made choosing between them more difficult, not less.
In each case, the tool that was supposed to accelerate things did — and then revealed the constraint that had been hiding behind the previous one.
What Organisations Are Getting Wrong
Most AI investment is going into generation: tools that produce more, faster. That is the right starting point. It is not where the leverage will be in two years.
The Malthusian trap for AI is this: organisations that invest in generation without investing equally in discernment will find themselves in a specific kind of trouble. More output, no better decisions. Faster production, slower progress. A backlog of AI-generated work that consumes as much human attention as the manual work it replaced — just at a different stage of the pipeline.
The organisations that get ahead of this are building three things alongside their generation capabilities.
Curation infrastructure. Systems and norms that filter AI output before it consumes human attention at scale. Not everything AI produces should reach a human reviewer. The first question should be: how do we build the layer between generation and consumption that surfaces what matters?
Judgement as a developed skill. The ability to evaluate AI-generated work is not the same as the ability to produce it. Most organisations are training their people to prompt. Fewer are training them to critically assess output — to know when a plausible-sounding analysis is actually wrong, when generated code will break at scale, when a well-structured strategy document contains a fundamental error in reasoning.
Decision architecture for abundant options. When options were scarce, the decision-making process was straightforward: evaluate the two or three things on the table. When AI generates twenty, the process breaks. The organisations thinking clearly about this are redesigning how decisions get made — not just how options get generated.
The Durable Part of Malthus
Malthus was wrong about the outcome. He was right about the dynamic.
Abundance in one dimension reliably creates scarcity in another. The organisations that succeed with AI in the next five years will not be the ones that generated the most. They will be the ones that built the discernment to match their generation — that invested in judgement as deliberately as they invested in tools.
The ones that did not will find themselves with something that looks remarkably like the problem they started with. Overwhelmed. Slower than they expected. Unable to turn output into decisions.
More, it turns out, is only better when you can tell the difference.