Technical · March 2026

Your ERP Knows the Structure. It Doesn't Know the Planner.

Why the most critical layer of supply chain knowledge has never been captured — and what changes now.

Two Conversations That Should Be One

Pieter van Schalkwyk recently published a compelling piece — “Your World Model Knows the Physics. It Doesn’t Know the Plant” — arguing that industrial AI needs a 3-layer world model: physical dynamics, operational/KPI dynamics, and socio-technical/organisational dynamics. His thesis: the physics layer is necessary but insufficient. Without the operational and organisational layers, AI systems produce recommendations that are technically correct and practically useless.

His argument resonates because supply chain planning has the exact same problem — and has had it for thirty years.

Replace “physics” with “structure” and “plant” with “planner,” and you have the supply chain version:

Your ERP knows the structure. It doesn’t know the planner.

Every planning system ever built — MRP, APS, IBP, the latest AI-native platforms — captures structural knowledge: BOMs, lead times, safety stocks, reorder points, capacity constraints. This is van Schalkwyk’s Layer 1. It’s necessary. It’s not remotely sufficient.

[Figure: Van Schalkwyk's 3-layer stack, mapped to supply chain planning. Same structure, same gap: for thirty years the top layer was a person, not a system. Layer 1, Physical/World Dynamics ("What happens next physically?"), maps to structural knowledge: BOMs, routings, lead times, capacity constraints, bills of distribution — what every ERP captures. Layer 2, Operational/KPI Dynamics ("What happens to throughput, grade, recovery, energy?"), maps to planning logic: MRP netting, safety stock calculations, ATP allocation, capacity smoothing — what APS systems add on top of the ERP. Layer 3, Socio-Technical/Organisational Dynamics ("Even if KPIs improve, will this cause unacceptable human, safety, or business side-effects?"), maps to experiential knowledge: why planners override, behavioral patterns, judgment under uncertainty — never captured by any system. Each layer is necessary; without the top, technically optimal plans fail in practice.]

Adapted from van Schalkwyk & Green, Your World Model Knows the Physics. It Doesn't Know the Plant (March 2026).

The Layer That Was Never Built

Knut Alicke, McKinsey Partner Emeritus and professor at KIT Karlsruhe, identified this missing layer in his essay “The Planner Was the System”:

“Planning systems have never had a complete knowledge model of the supply chain they were supposed to plan. The planner was the missing ontological layer — semantic intelligence turning data into understanding.”

Alicke calls this the experiential ontology — the behavioral knowledge about how supply chains actually work in practice. Not what the BOM says. Not what the ERP stores. What experienced planners know:

  • That Supplier X always delivers late in Q4 because their shift pattern changes
  • That the forecast model systematically over-predicts for Product Category Y after promotions
  • That DC North always needs extra buffer during peak because of a loading dock bottleneck that doesn’t appear in any capacity model
  • That when Customer Z places a rush order, three downstream disruptions follow within 48 hours

This is van Schalkwyk’s Layer 3 — the socio-technical and organisational dynamics that determine whether a technically optimal plan actually works in practice. As he puts it:

“Persistent operator overrides are data. They indicate unmodelled constraints that no physics model captures.”

The Invisible Crisis

Here is the part that should alarm every supply chain leader:

No planning system has ever even recorded whether a planner changed a number, let alone why.

[Figure: The vanishing override. Same planner, same override; the substrate either remembers what just happened, or it doesn't. In SAP today, a planner overrides the safety stock for material 4711 at plant 1000 from the system value of 500 to 750, and everything else is discarded: no record of the old value, of who changed it, of why, of what happened next, or of whether it helped. The knowledge vanishes into the new number; when the planner retires, none of it exists in any system. What Autonomy captures instead (Override #4,217, Inventory Buffer agent, SS 4711/1000): the agent recommended 500, the planner chose 750, the reason ("Supplier shifts to reduced capacity in Q4"), the context (demand spike plus seasonal pattern), the outcome (fill rate held at 98%), the classification (GENUINE), all fed into the RL training pipeline. The override is the data; knowledge persists past the planner.]

Think about that. A planner logs into SAP, overrides the safety stock from 500 to 750 for a specific material at a specific plant. The system stores 750. The old value — 500 — is gone. There is no record of:

  • What the system recommended
  • What the planner changed it to
  • Why they changed it
  • What happened as a result
  • Whether the change improved outcomes

This happens thousands of times per week across every large manufacturing and distribution operation. Every override represents a planner applying experiential knowledge — making a judgment call based on behavioral patterns they’ve accumulated over years or decades. And every one of those decisions vanishes into the new number.
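
To make the contrast concrete, here is a minimal sketch of what capturing such an event could look like. This is a hypothetical schema for illustration only; it is not SAP's data model or Autonomy's actual implementation, and every field name is an assumption:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One planner override, captured with the context the ERP discards.
    Hypothetical schema for illustration only."""
    material: str                 # e.g. "4711"
    plant: str                    # e.g. "1000"
    parameter: str                # e.g. "safety_stock"
    system_value: float           # what the system recommended (500)
    planner_value: float          # what the planner chose instead (750)
    planner_id: str               # who exercised the judgment
    reason: str                   # free-text rationale, captured at decision time
    context: dict = field(default_factory=dict)   # demand signals, supplier status, ...
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    outcome: dict | None = None   # filled in later, e.g. {"fill_rate": 0.98}

# The ERP stores only 750. A record like this preserves everything else:
record = OverrideRecord(
    material="4711", plant="1000", parameter="safety_stock",
    system_value=500, planner_value=750, planner_id="planner_042",
    reason="Supplier shifts to reduced capacity in Q4",
    context={"demand_signal": "spike", "season": "Q4"},
)
```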

When that planner retires, their knowledge doesn't transfer. And the demographic cliff is real: the Bureau of Labor Statistics projects 26,400 logistics job openings annually through 2034, and experienced planners are leaving faster than organisations can replace them. Their knowledge doesn't exist in any system. It was never captured. It was never even observed.

Alicke’s diagnosis is precise: for thirty years, the planner has been the system’s missing semantic layer, silently compensating for what the technology couldn’t model. We didn’t notice because the system appeared to work. It worked because they made it work.

From Override to Knowledge

Van Schalkwyk’s framework gives us a vocabulary for what needs to change. His 3-layer stack maps directly to supply chain planning:

| Van Schalkwyk | Supply Chain Equivalent |
| --- | --- |
| Layer 1: Physical Dynamics | Structural knowledge — BOMs, routings, lead times, capacity. What every ERP captures. |
| Layer 2: Operational/KPI Dynamics | Planning logic — MRP netting, safety stock calculations, ATP allocation. What APS systems add. |
| Layer 3: Socio-Technical/Organisational Dynamics | Experiential knowledge — why planners override, what patterns they've learned, what the system doesn't model. What no system has ever captured. |

The breakthrough is not building a documentation system for planner knowledge. Interview-based knowledge capture has been tried. It doesn’t scale, it doesn’t stay current, and it doesn’t connect to the systems that need it.

The breakthrough is capturing experiential knowledge as a byproduct of normal work.

[Figure: Override → knowledge → agent training. A pipeline, not a knowledge base; each cycle, the substrate gets measurably smarter. Capture: the override is recorded with full context (recommendation, choice, reasoning, state, outcome). Pattern detection: three or more similar overrides become a candidate entity — recurrent behaviour across SKUs, lanes, or time of year. Classify: causal AI labels the pattern GENUINE or COMPENSATING from outcome history under matched conditions. RL training: state augmentation, reward shaping, conformal calibration, and simulator modifiers in the digital twin. Override → pattern → label → training signal, captured as a byproduct of normal work; knowledge persists permanently.]

When an AI agent makes a decision and a planner overrides it, you now have:

  • What the agent recommended (the system’s best answer)
  • What the planner chose instead (the human’s judgment)
  • The full context — inventory positions, demand signals, supplier status, upstream decisions
  • The outcome — what actually happened

Do this systematically across every override, every coaching signal, every directive, and you build what Alicke describes: a structured knowledge layer that captures the behavioral reality of your supply chain. Not as documentation. As training data.
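
As a rough sketch of what "systematically" could mean in code: given records shaped like the schema sketched earlier, the pattern-detection step reduces to grouping similar overrides and promoting the recurrent ones. The three-or-more threshold mirrors the pipeline figure; the function itself is illustrative, not a real API:

```python
from collections import defaultdict

def detect_candidate_patterns(records, min_occurrences=3):
    """Group overrides by (material, plant, parameter, direction).

    Three or more similar overrides promote the group to a candidate
    knowledge entity, per the pipeline figure. Illustrative only.
    """
    groups = defaultdict(list)
    for r in records:
        direction = "up" if r.planner_value > r.system_value else "down"
        groups[(r.material, r.plant, r.parameter, direction)].append(r)
    # Keep only the recurrent patterns worth classifying.
    return {key: recs for key, recs in groups.items() if len(recs) >= min_occurrences}
```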

GENUINE vs. COMPENSATING

Alicke draws a critical distinction that van Schalkwyk’s framework also implies. Not all experiential knowledge is equal:

[Figure: GENUINE vs. COMPENSATING. Two kinds of override; the substrate routes them differently. GENUINE: real behavioural patterns that remain true regardless of data quality (example: "Supplier X's lead time follows a lognormal distribution with a heavy right tail in winter"); feeds into RL training, the agent learns it permanently — van Schalkwyk's Layer 3, Alicke's experiential ontology, the reality of your supply chain. COMPENSATING: workarounds for system or data deficiencies (example: "Forecast model is bad for this category, so I always add 20%"); triggers system improvement and is auto-superseded when the root cause is fixed — a Layer 2 failure, Alicke's compensating knowledge, a symptom, not a signal.]

GENUINE knowledge reflects real behavioral patterns that remain true regardless of data quality. “This supplier’s lead time follows a lognormal distribution with a heavy right tail in winter.” This is van Schalkwyk’s Layer 3 — organisational and environmental dynamics that first-principles models cannot capture.

COMPENSATING knowledge is a workaround for known system deficiencies. “The forecast model is terrible for this category, so I always add 20%.” This looks like experiential knowledge but is actually a symptom of a Layer 2 failure. When you fix the forecast model, the compensating override should disappear.

The distinction matters because genuine knowledge should be learned by agents permanently — it represents the reality of your supply chain. Compensating knowledge should trigger system improvement — fix the root cause, and the workaround becomes unnecessary.
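
The pipeline figure lists "simulator modifiers in the digital twin" as one destination for genuine knowledge. A hedged sketch of what that could mean for the lead-time example above, with the sampler and all parameters assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_lead_time_days(base_days: float, month: int) -> float:
    """Sample a supplier lead time from a lognormal distribution.

    Genuine knowledge ("heavy right tail in winter") becomes a wider
    sigma in winter months, so the digital twin reproduces the tail
    risk the planner was protecting against. Parameters illustrative.
    """
    sigma = 0.45 if month in (12, 1, 2) else 0.15   # heavier tail in winter
    # Mean of the underlying normal chosen so the median equals base_days.
    return float(rng.lognormal(mean=np.log(base_days), sigma=sigma))
```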

Causal AI can make this distinction automatically by tracking override outcomes over time. Overrides that consistently improve outcomes in the presence of specific contextual conditions are genuine. Overrides that correlate with known data quality issues are compensating.
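
A full causal analysis is beyond the scope of a blog post, but even a toy outcome-tracking rule shows the shape of the routing decision. The heuristic below stands in for the causal method; the thresholds, field names, and data structures are assumptions:

```python
def classify_pattern(override_group, known_data_issues: set) -> str:
    """Toy heuristic for the GENUINE vs. COMPENSATING split.

    A real system would use causal inference on outcome history under
    matched conditions; this just illustrates the two routing paths.
    """
    key = (override_group[0].material, override_group[0].plant)
    # Overrides that line up with a known data-quality defect are symptoms.
    if key in known_data_issues:
        return "COMPENSATING"   # route: fix the root cause, supersede the override
    # Overrides that consistently improved outcomes look like real patterns.
    outcomes = [r.outcome for r in override_group if r.outcome]
    improved = [o for o in outcomes if o.get("fill_rate", 0) >= o.get("target", 0.95)]
    if outcomes and len(improved) / len(outcomes) >= 0.8:
        return "GENUINE"        # route: feed into RL training
    return "UNRESOLVED"         # not enough evidence yet; keep observing
```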

Progressive Autonomy

Van Schalkwyk describes a progression from Shadow Mode to Collaborative Autonomous — each phase generating the calibration data the next phase requires. This maps precisely to the agentic inversion that Jordi Visser describes:

  1. Copilot Mode: AI recommends, human decides. Every human decision is recorded. This is the training signal.
  2. Supervised Autonomous: AI decides within guardrails, human inspects. Guardrails tighten as confidence grows. Override patterns reveal where human judgment adds value.
  3. Fully Autonomous: AI decides within expanded guardrails. Humans focus on governance, exception inspection, and strategic decisions that require creativity and judgment.
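
A minimal sketch of how that progression could be expressed as policy. The mode names follow the list above; the guardrail logic, thresholds, and callback protocol are invented for illustration:

```python
from enum import Enum

class Mode(Enum):
    COPILOT = 1                 # AI recommends, human decides
    SUPERVISED_AUTONOMOUS = 2   # AI decides within guardrails, human inspects
    FULLY_AUTONOMOUS = 3        # AI decides; humans govern and handle exceptions

def decide(mode: Mode, agent_value: float, guardrail_pct: float,
           baseline: float, human_decide) -> float:
    """Route one decision according to the current autonomy mode.

    Illustrative policy only: not a description of any shipping product.
    """
    if mode is Mode.COPILOT:
        # Every human decision is recorded — this is the training signal.
        return human_decide(recommendation=agent_value)
    within_guardrail = abs(agent_value - baseline) <= guardrail_pct * baseline
    if mode is Mode.SUPERVISED_AUTONOMOUS and not within_guardrail:
        return human_decide(recommendation=agent_value)  # escalate the exception
    return agent_value  # act autonomously; log for inspection either way
```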

The critical insight is that overrides during the copilot phase are not friction — they are the most valuable data the system will ever receive. Each override is a senior planner encoding decades of experiential knowledge into a format that agents can learn from. Van Schalkwyk says it perfectly:

“Each override teaches the system about constraints it had not captured.”

The Window Is Closing

The planners who carry this knowledge are retiring. The knowledge exists only in their heads. No system has ever captured it — not because the technology wasn’t available, but because no system was designed to observe it.

The combination of cognitive AI agents and structured experiential knowledge capture changes this. For the first time, we have systems that can:

  1. Observe planner behavior in context — not through interviews, but through normal work
  2. Pattern-match across thousands of overrides to detect recurring behavioral knowledge
  3. Classify whether knowledge is genuine or compensating
  4. Operationalize that knowledge by feeding it directly into agent training via reinforcement learning

The result is not a knowledge management system. It’s a system that gets measurably smarter every time a planner exercises judgment — and retains that judgment permanently, even after the planner leaves.

Van Schalkwyk is right: you need all three layers for a deployable system. The physics (or structure) is necessary but not sufficient. The operational layer (or planning logic) adds value but misses the human dimension. The socio-technical layer (or experiential knowledge) is what separates a system that produces recommendations from a system that makes trustworthy decisions.

For thirty years, the planner was the system. It’s time to capture what they know — before it walks out the door.


Trevor Miles is the founder of Azirella Ltd and creator of Autonomy, a Decision Intelligence platform for supply chain planning. He previously held leadership roles at Kinaxis and i2 Technologies.

