Your ERP Knows the Structure. It Doesn't Know the Planner.
Why the most critical layer of supply chain knowledge has never been captured — and what changes now.
Two Conversations That Should Be One
Pieter van Schalkwyk recently published a compelling piece — “Your World Model Knows the Physics. It Doesn’t Know the Plant” — arguing that industrial AI needs a 3-layer world model: physical dynamics, operational/KPI dynamics, and socio-technical/organisational dynamics. His thesis: the physics layer is necessary but insufficient. Without the operational and organisational layers, AI systems produce recommendations that are technically correct and practically useless.
His argument resonates because supply chain planning has the exact same problem — and has had it for thirty years.
Replace “physics” with “structure” and “plant” with “planner,” and you have the supply chain version:
Your ERP knows the structure. It doesn’t know the planner.
Every planning system ever built — MRP, APS, IBP, the latest AI-native platforms — captures structural knowledge: BOMs, lead times, safety stocks, reorder points, capacity constraints. This is van Schalkwyk’s Layer 1. It’s necessary. It’s not remotely sufficient.
Adapted from van Schalkwyk & Green, Your World Model Knows the Physics. It Doesn't Know the Plant (March 2026).
The Layer That Was Never Built
Knut Alicke, McKinsey Partner Emeritus and professor at KIT Karlsruhe, identified this missing layer in his essay “The Planner Was the System”:
“Planning systems have never had a complete knowledge model of the supply chain they were supposed to plan. The planner was the missing ontological layer — semantic intelligence turning data into understanding.”
Alicke calls this the experiential ontology — the behavioral knowledge about how supply chains actually work in practice. Not what the BOM says. Not what the ERP stores. What experienced planners know:
- That Supplier X always delivers late in Q4 because their shift pattern changes
- That the forecast model systematically over-predicts for Product Category Y after promotions
- That DC North always needs extra buffer during peak because of a loading dock bottleneck that doesn’t appear in any capacity model
- That when Customer Z places a rush order, three downstream disruptions follow within 48 hours
This is van Schalkwyk’s Layer 3 — the socio-technical and organisational dynamics that determine whether a technically optimal plan actually works in practice. As he puts it:
“Persistent operator overrides are data. They indicate unmodelled constraints that no physics model captures.”
The Invisible Crisis
Here is the part that should alarm every supply chain leader:
No planning system has ever even recorded whether a planner changed a number, let alone why.
Think about that. A planner logs into SAP, overrides the safety stock from 500 to 750 for a specific material at a specific plant. The system stores 750. The old value — 500 — is gone. There is no record of:
- What the system recommended
- What the planner changed it to
- Why they changed it
- What happened as a result
- Whether the change improved outcomes
This happens thousands of times per week across every large manufacturing and distribution operation. Every override represents a planner applying experiential knowledge — making a judgment call based on behavioral patterns they’ve accumulated over years or decades. And every one of those decisions vanishes into the new number.
When that planner retires, their knowledge doesn't transfer. And the demographic cliff is real: the Bureau of Labor Statistics projects 26,400 logistics job openings annually through 2034, with experienced planners leaving faster than organisations can replace them. Their knowledge doesn't exist in any system. It was never captured. It was never even observed.
Alicke’s diagnosis is precise: for thirty years, the planner has been the system’s missing semantic layer, silently compensating for what the technology couldn’t model. We didn’t notice because the system appeared to work. It worked because they made it work.
From Override to Knowledge
Van Schalkwyk’s framework gives us a vocabulary for what needs to change. His 3-layer stack maps directly to supply chain planning:
| Van Schalkwyk | Supply Chain Equivalent |
|---|---|
| Layer 1: Physical Dynamics | Structural knowledge — BOMs, routings, lead times, capacity. What every ERP captures. |
| Layer 2: Operational/KPI Dynamics | Planning logic — MRP netting, safety stock calculations, ATP allocation. What APS systems add. |
| Layer 3: Socio-Technical/Organisational Dynamics | Experiential knowledge — why planners override, what patterns they’ve learned, what the system doesn’t model. What no system has ever captured. |
The breakthrough is not building a documentation system for planner knowledge. Interview-based knowledge capture has been tried. It doesn’t scale, it doesn’t stay current, and it doesn’t connect to the systems that need it.
The breakthrough is capturing experiential knowledge as a byproduct of normal work.
When an AI agent makes a decision and a planner overrides it, you now have:
- What the agent recommended (the system’s best answer)
- What the planner chose instead (the human’s judgment)
- The full context — inventory positions, demand signals, supplier status, upstream decisions
- The outcome — what actually happened
Do this systematically across every override, every coaching signal, every directive, and you build what Alicke describes: a structured knowledge layer that captures the behavioral reality of your supply chain. Not as documentation. As training data.
Genuine vs. Compensating Knowledge
Alicke draws a critical distinction that van Schalkwyk’s framework also implies. Not all experiential knowledge is equal:
GENUINE knowledge reflects real behavioral patterns that remain true regardless of data quality. “This supplier’s lead time follows a lognormal distribution with a heavy right tail in winter.” This is van Schalkwyk’s Layer 3 — organisational and environmental dynamics that first-principles models cannot capture.
COMPENSATING knowledge is a workaround for known system deficiencies. “The forecast model is terrible for this category, so I always add 20%.” This looks like experiential knowledge but is actually a symptom of a Layer 2 failure. When you fix the forecast model, the compensating override should disappear.
The distinction matters because genuine knowledge should be learned by agents permanently — it represents the reality of your supply chain. Compensating knowledge should trigger system improvement — fix the root cause, and the workaround becomes unnecessary.
Causal AI can make this distinction automatically by tracking override outcomes over time. Overrides that consistently improve outcomes in the presence of specific contextual conditions are genuine. Overrides that correlate with known data quality issues are compensating.
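One way that outcome tracking could drive the distinction, sketched as a naive heuristic. The field names and thresholds are invented for illustration; a real causal model would control for confounders rather than rely on raw rates.

```python
def classify_override_pattern(events):
    """Naive split of a recurring override pattern into genuine vs. compensating.

    events: list of dicts with keys
      improved (bool)           - did the override beat the system recommendation?
      data_quality_issue (bool) - was a known data/model defect flagged at the time?
    Thresholds are illustrative, not tuned.
    """
    if not events:
        return "inconclusive"
    improve_rate = sum(e["improved"] for e in events) / len(events)
    dq_rate = sum(e["data_quality_issue"] for e in events) / len(events)
    if dq_rate >= 0.5:
        # Overrides that track known defects are workarounds: fix the root cause.
        return "compensating"
    if improve_rate >= 0.7:
        # Consistent improvement without data issues suggests real behavioral knowledge.
        return "genuine"
    return "inconclusive"


# Supplier-X winter lead-time overrides: mostly improve outcomes, no data defects
supplier_pattern = [{"improved": True, "data_quality_issue": False}] * 8 + \
                   [{"improved": False, "data_quality_issue": False}] * 2
# "+20% on this category" overrides: coincide with a known forecast-model defect
forecast_pattern = [{"improved": True, "data_quality_issue": True}] * 6 + \
                   [{"improved": False, "data_quality_issue": True}] * 4
```

Here `supplier_pattern` would classify as genuine and `forecast_pattern` as compensating, which matches the intended triage: learn the first permanently, route the second to system improvement.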
Progressive Autonomy
Van Schalkwyk describes a progression from Shadow Mode to Collaborative Autonomous — each phase generating the calibration data the next phase requires. This maps precisely to the agentic inversion that Jordi Visser describes:
- Copilot Mode: AI recommends, human decides. Every human decision is recorded. This is the training signal.
- Supervised Autonomous: AI decides within guardrails, human inspects. Guardrails tighten as confidence grows. Override patterns reveal where human judgment adds value.
- Fully Autonomous: AI decides within expanded guardrails. Humans focus on governance, exception inspection, and strategic decisions that require creativity and judgment.
The critical insight is that overrides during the copilot phase are not friction — they are the most valuable data the system will ever receive. Each override is a senior planner encoding decades of experiential knowledge into a format that agents can learn from. Van Schalkwyk says it perfectly:
“Each override teaches the system about constraints it had not captured.”
The Window Is Closing
The planners who carry this knowledge are retiring. The knowledge exists only in their heads. No system has ever captured it — not because the technology wasn’t available, but because no system was designed to observe it.
The combination of cognitive AI agents and structured experiential knowledge capture changes this. For the first time, we have systems that can:
- Observe planner behavior in context — not through interviews, but through normal work
- Pattern-match across thousands of overrides to detect recurring behavioral knowledge
- Classify whether knowledge is genuine or compensating
- Operationalize that knowledge by feeding it directly into agent training via reinforcement learning
The result is not a knowledge management system. It’s a system that gets measurably smarter every time a planner exercises judgment — and retains that judgment permanently, even after the planner leaves.
Van Schalkwyk is right: you need all three layers for a deployable system. The physics (or structure) is necessary but not sufficient. The operational layer (or planning logic) adds value but misses the human element. The socio-technical layer (or experiential knowledge) is what makes the difference between a system that produces recommendations and a system that makes trustworthy decisions.
For thirty years, the planner was the system. It’s time to capture what they know — before it walks out the door.
Trevor Miles is the founder of Azirella Ltd and creator of Autonomy, a Decision Intelligence platform for supply chain planning. He previously held leadership roles at Kinaxis and i2 Technologies.
References:
- Pieter van Schalkwyk & Gavin Green, “Your World Model Knows the Physics. It Doesn’t Know the Plant” (March 2026), LinkedIn / XMPro
- Knut Alicke, “The Planner Was the System” (2026), Substack
- Jordi Visser, “The Agentic Inversion” (February 2026)
- Bureau of Labor Statistics, Occupational Outlook: Logisticians 2024-2034
- Dataiku, Supply Chain AI Trends 2026
- The Aging Supply Chains: Demographics Reshaping Global Logistics
See Autonomy in action
Walk through how Autonomy models, executes, monitors, and governs supply chain decisions with autonomous AI agents.