
Making feedback count: process, not magic
When was the last time feedback on a school assignment changed what a pupil did next? For most teachers the answer is: too infrequently. Feedback remains one of the highest-potential levers in education — cheap, targeted and evidence-backed — yet it is often inconsistent, late or vague. Recent work by Daisy Christodoulou and others shows how large language models (LLMs) can be used to turn feedback from a hit-or-miss art into a scalable, standardised process that measurably improves learning outcomes.
Why teacher feedback matters — and why it rarely reaches its potential
Feedback is an intervention with outsized returns. When done well, it helps learners correct misconceptions, practise the right skills and develop metacognition. But real-world constraints — class sizes, time pressures, variability in teacher training — mean feedback is often inconsistent. That inconsistency is the problem: high-impact feedback is precise, actionable and timely; anything less is unlikely to change behaviour.
For product leaders and technology heads thinking about education, this is an attractive product problem. It’s repeatable, measurable and amenable to process design. The goal is not to replace teachers but to multiply their reach and consistency.
What Daisy Christodoulou’s work shows about LLMs and feedback
Daisy Christodoulou (director of education at No More Marking) has been exploring how AI can support assessment and feedback. Her recent pieces and experiments — documented via No More Marking and her Substack — investigate how LLMs can generate diagnostic comments, suggest next-step practice and integrate with comparative judgement models used to evaluate student writing.
The practical insight is simple and important: LLMs are now good enough to be useful as a teacher’s assistant when constrained by clear rubrics and structured templates. Where they perform best is in repeating a well-defined pattern — for example, diagnosing a common misconception and producing a short, actionable next step — rather than offering vague praise or broad summaries.
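As a concrete sketch of what "constrained by clear rubrics and structured templates" can look like in code — the template wording, field names and function are my own illustrative assumptions, not a published No More Marking prompt:

```python
# Illustrative sketch: the LLM fills a fixed, teacher-authored structure
# rather than inventing its own. Template wording is hypothetical.

FEEDBACK_TEMPLATE = """You are a teaching assistant. Using ONLY the rubric below,
diagnose the single most likely misconception in the pupil's answer and
suggest one short, actionable next step.

Rubric:
{rubric}

Pupil answer:
{answer}

Respond in exactly this format:
Misconception: <one sentence>
Next step: <one concrete practice task>"""


def build_feedback_prompt(rubric: str, answer: str) -> str:
    """Slot the rubric and answer into the fixed template."""
    return FEEDBACK_TEMPLATE.format(rubric=rubric.strip(), answer=answer.strip())


# Example: a hypothetical Year 7 English rubric item.
prompt = build_feedback_prompt(
    "Award marks for citing evidence from the text.",
    "The character is sad because the weather is bad.",
)
```

The point of the fixed response format is that the model's output becomes checkable: a downstream parser can reject anything that doesn't match the two expected lines before a teacher ever sees it.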
From research to product: standardise, measure, iterate
Turning feedback into a product requires three things:
- Standardisation: Define what effective feedback looks like in a context-specific rubric or template.
- Measurement: Capture whether learners act on feedback — not just whether it was given.
- Iteration: Continuously refine prompts, templates and teacher workflows using real data.
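The "measurement" step above can be sketched as a small event log — a hypothetical structure of my own, assuming each feedback item records whether the pupil subsequently acted on it (for example by resubmitting or completing the suggested practice):

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class FeedbackEvent:
    """One piece of feedback given to a pupil (hypothetical schema)."""
    pupil_id: str
    acted_on: bool  # did the pupil act on it (resubmit, practise, revise)?


def engagement_rate(events: Iterable[FeedbackEvent]) -> float:
    """Share of feedback items that changed pupil behaviour -
    the number to optimise, rather than the count of items marked."""
    log: List[FeedbackEvent] = list(events)
    if not log:
        return 0.0
    return sum(e.acted_on for e in log) / len(log)


# Example: two items given, one acted on.
rate = engagement_rate([
    FeedbackEvent("pupil-1", acted_on=True),
    FeedbackEvent("pupil-2", acted_on=False),
])
# rate == 0.5
```

Instrumenting this one number per class, per week, is enough to start the iteration loop: change a prompt or template, watch whether the engagement rate moves.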
These are not theoretical prescriptions. The University of Surrey's FEATS (Feedback Engagement and Tracking at Surrey) is an example of a research-backed system that treats feedback as a trackable process. It focuses on how students engage with feedback and how that engagement predicts outcomes. More detail is available on the University of Surrey's FEATS pages.
During my time working at RM in educational technology I collaborated with Dr Naomi Winstone and her team on efforts to take FEATS-derived ideas into a commercial product. That experience reinforced a product truth: teachers will adopt tools that make their practice easier and demonstrably better for pupils — provided those tools match classroom rhythms and respect teacher agency.
Design principles for AI-enabled feedback products
For CTOs, CPOs and product teams building in this space, a few practical principles help reduce risk and raise adoption:
- Make the rubric first: Train models to operate inside clearly defined, pedagogically-sound templates. LLMs are best used to fill structure, not invent it.
- Keep teachers in the loop: Design for co-piloting. AI should draft, teachers approve and personalise.
- Measure the right things: Track student engagement with feedback and subsequent performance, not just “items marked”.
- Build guardrails: Monitor for hallucinations, bias and privacy leaks. Keep a human review loop for high-stakes assessments.
- Localise for curriculum: Feedback that works in one system or country won’t necessarily translate. Make curriculum alignment part of the product.
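The "keep teachers in the loop" and "build guardrails" principles above can be combined into a single routing rule. A minimal sketch, assuming a hypothetical two-queue workflow and illustrative flag terms of my own:

```python
def route_draft(draft: str, high_stakes: bool,
                flagged_terms: tuple = ("guaranteed", "diagnosis")) -> str:
    """Route an AI-drafted comment (hypothetical workflow sketch).

    High-stakes work, and any draft containing flagged language, goes to
    a mandatory human-review queue; everything else still requires
    teacher approval before the pupil sees it - the AI never publishes
    directly.
    """
    text = draft.lower()
    if high_stakes or any(term in text for term in flagged_terms):
        return "human_review"
    return "teacher_approval"


# Examples: routine drafts go to the teacher; risky ones are escalated.
route_draft("Try re-reading paragraph two for evidence.", high_stakes=False)
# -> "teacher_approval"
route_draft("This answer is guaranteed full marks.", high_stakes=False)
# -> "human_review"
```

The design choice worth noting is that there is no path where the model's output reaches a pupil unreviewed: the guardrail only decides *which* human looks first.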
Ethics, trust and the teacher’s role
There’s a moral and practical dimension here. Teachers are more than feedback machines — they’re mentors, assessors and role models. AI should not be framed as a cost-cutting substitute. The right approach is augmentation: reduce teachers’ administrative load, help them give more precise next steps and let them spend time on the human work that matters most.
Trust is earned through transparency. Product teams must be explicit about what the AI can and cannot do, present provenance for suggestions and allow easy correction. These are not optional niceties; they determine whether a tool will be used consistently in classrooms.
References
This piece was informed by thoughts on practical AI use from Common Cognition: How to Use AI Without Becoming Stupid — CommonCog, and by Daisy Christodoulou’s work on AI and assessment: No More Marking and Daisy’s Substack.
Where product leaders should start today
If you lead a product organisation or an edtech team, start by reframing feedback as a process you can instrument. Pick a concrete use case — written answers in Year 7 English, short maths problems in Key Stage assessments — and run a small pilot that pairs LLM-generated drafts with teacher review. Use the FEATS ideas on engagement tracking to measure whether feedback produced by the combined teacher+AI workflow actually changes pupil behaviour.
Teacher feedback is one of the most under‑utilised, highest‑potential interventions in education. It can multiply the outcomes of studying — but only when treated as a process that can be standardised, measured and optimised. AI does not replace the craft; it scales it, surfaces patterns and frees teachers to do the uniquely human work that drives learning.
Actionable next step: identify one feedback flow in your product that is routine, rule-based and repeatable. Define a rubric, prototype an LLM prompt to fill it, and measure whether pupils act differently on the AI-augmented feedback. Iterate quickly — and keep teachers at the centre.