
The pressure to deploy AI is real. Boards want to see it. Investors ask about it. Competitors appear to be doing it. So teams rush to find processes to automate, run a proof of concept, declare success in a demo — and then hit a wall when they try to scale.
The wall is almost always the same thing. Not a technology problem. Not a budget problem. The process was never properly defined in the first place. The rules live in people’s heads, accumulated over years of quiet adaptation. Edge cases are handled by institutional memory. Different people do it differently and nobody noticed, because when humans do it, the gaps fill themselves.
You cannot automate what you have not codified. Most organisations are rushing to try.
The Automation Trap
This is not a sequencing accident. It is a cultural one. Automation is brilliant at executing a defined process at scale. It is useless at figuring out what that process should be; that is a human job, and it has to happen first. But the urgency to show AI progress too often means skipping the documentation step, and that impatience is now compounding into something more serious.
The irony is that the organisations most eager to automate are often the ones with the least codified processes. Speed was always the priority. Writing things down felt like a tax on delivery. Now the bill is coming due.
What Codification Actually Means
Codifying a process is not writing a wiki page that nobody reads. It means making every decision point explicit: what triggers this process, what data it needs, what the rules are for every fork in the road, what good output looks like, and what happens when something goes wrong.
Most processes, when you examine them closely, contain hidden complexity that experienced people navigate unconsciously. A customer escalation handler knows when to escalate to a senior rep even when the ticket doesn’t technically meet the criteria. A finance analyst knows which month-end numbers are usually wrong and need a second look. A content editor knows which topics require legal sign-off even when the brief doesn’t mention it.
This tacit knowledge is invisible until you try to automate the process — at which point it becomes a production incident.
Codification forces that knowledge into the open. It is genuinely difficult work. It requires time with the people who actually do the job, patience with ambiguity, and the discipline to write things down even when they feel obvious. But it pays off whether you automate or not, because it makes the process teachable, auditable, and improvable.
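To make this concrete, the escalation example above can be written as explicit code. Everything here is hypothetical and invented for illustration, including the `Ticket` shape, the field names, and the thresholds; the point is the shape of the artifact codification produces, with the official criterion and the formerly tacit judgement calls sitting side by side, readable and reviewable:

```python
from dataclasses import dataclass

# Hypothetical ticket shape; fields and values are illustrative only.
@dataclass
class Ticket:
    severity: int          # 1 (low) .. 5 (critical)
    customer_tier: str     # e.g. "standard", "enterprise"
    reopened_count: int    # how many times this ticket was reopened
    mentions_outage: bool  # keyword flag set upstream

def should_escalate(t: Ticket) -> bool:
    """Explicit escalation rule.

    The first clause is the documented criterion. The rest encode
    tacit judgement calls surfaced from experienced reps: repeated
    reopens on key accounts and outage language escalate even when
    the severity score alone would not.
    """
    if t.severity >= 4:
        return True  # the official, written-down rule
    if t.customer_tier == "enterprise" and t.reopened_count >= 2:
        return True  # tacit: churn risk on key accounts
    if t.mentions_outage:
        return True  # tacit: outages always get a senior look
    return False
```

Once the rule exists in this form, it can be argued about, audited, and taught, whether or not it is ever handed to an automation.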
The Right Order of Operations
The sequence I have seen work consistently is this:
First, map what actually happens. Not what the process documentation says. Not what the manager thinks happens. What the people doing the work actually do, step by step, including the exceptions, the workarounds, and the informal checks. Shadow sessions, not surveys.
Second, identify the decision logic. For each decision point in the process, write down the rule. If the rule is “it depends,” that is not a rule; it is a prompt to keep asking. Keep going until you have rules you could hand to a new hire and expect consistent results.
Third, test the codified process manually. Run it by hand for a few cycles before touching automation. This is where you find the gaps, the missing rules, the edge cases your map missed. Fix the process, not the technology.
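Testing the codified process by hand can be as simple as replaying it against historical cases and flagging where it disagrees with what people actually did. A minimal sketch, with invented data and a deliberately incomplete rule, to show the shape of the exercise:

```python
# Replay harness: run the codified rule over historical cases and
# collect the ones where it disagrees with the recorded human decision.
# The data and rule below are illustrative, not from any real system.

def replay(rule, cases):
    """Return the cases where the codified rule and the human disagreed."""
    return [c for c in cases if rule(c["inputs"]) != c["human_decision"]]

cases = [
    {"inputs": {"severity": 5}, "human_decision": True},
    {"inputs": {"severity": 2}, "human_decision": True},   # a tacit exception
    {"inputs": {"severity": 1}, "human_decision": False},
]

# First draft of the rule: only the documented criterion.
rule = lambda inputs: inputs["severity"] >= 4

for gap in replay(rule, cases):
    # Each disagreement is a missing rule, not a tooling bug:
    # interview whoever handled the case and codify why they deviated.
    print("rule missed:", gap)
```

Here the harness surfaces the severity-2 escalation the draft rule missed, which is exactly the gap the third step exists to find, and the fix belongs in the process definition, not in the code.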
Fourth, automate the stable parts. Once the process is documented, tested, and producing consistent results manually, you have something worth automating. The AI is now executing a known, validated process — not guessing at an undocumented one.
What This Changes About AI Strategy
Treating codification as a prerequisite rather than an afterthought changes how you prioritise your AI roadmap.
Processes that are already well-documented and consistently executed are your fastest wins — the automation layer drops in cleanly. Processes that are messy, exception-heavy, or person-dependent are not ready, regardless of how much pressure there is to automate them. The honest answer is: not yet.
This also changes the role of the people doing the process. The most valuable contribution they can make to an AI project is not testing the output — it is helping to surface the tacit knowledge that makes the process work. That is a skilled, important job. It should be treated as one.
The organisations getting the most out of AI are not the ones with the most ambitious automation roadmaps. They are the ones that did the unglamorous work of understanding their own processes first.
Codify, then automate. In that order. Every time.