Institutional memory
Experiential Knowledge
For thirty years, experienced planners have been the missing semantic layer in supply chain planning. They interpret exceptions, understand supplier behavior, and make causal connections across domains, yet none of it is captured in any system. When they retire, that knowledge leaves with them. Autonomy's Experiential Knowledge layer captures it systematically, turning every AIIO override into a training signal for the shared world model.
"Planning systems have never had a complete knowledge model of the supply chain they were supposed to plan. The planner was the missing ontological layer, semantic intelligence turning data into understanding."
The Knowledge Gap No System Has Closed
Every planning system, from first-generation MRP to modern probabilistic engines, captures structural knowledge: BOMs, lead times, safety stocks, reorder points. But none has ever captured behavioral knowledge: why a planner overrides the suggested PO timing every Q4 for a specific supplier, why forecast bias spikes for a product category after promotions, or why a particular DC always needs extra buffer during peak season.
This behavioral layer is what Knut Alicke calls the experiential ontology. Experienced planners carry it in their heads. They apply it instinctively, overriding system recommendations, adjusting safety stocks, and rerouting shipments, often correctly. And when they leave, it leaves with them.
Autonomy captures this knowledge as a byproduct of normal work. Every time a planner Overrides an agent's decision in AIIO, the system records the full context: what the agent recommended, what the planner chose instead, why, and, critically, what happened next.
From Override to Knowledge Entity
The Experiential Knowledge pipeline transforms individual overrides into structured, reusable knowledge through four stages:
1. Every override is recorded with its full context: the agent's recommendation, the planner's decision, the reasoning, and the conditions that triggered it.
2. A daily job scans override history and groups it by planner, decision type, and context. When three or more overrides share the same pattern, a candidate knowledge entity is created.
3. Each entity is classified as GENUINE (the planner knows something the system doesn't) or COMPENSATING (a workaround for a system deficiency). Causal AI auto-classifies based on outcome evidence.
4. Validated knowledge feeds directly into agent training through four channels: state augmentation, reward shaping, conformal prediction calibration, and simulation modifiers.
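The pattern-detection stage can be sketched in a few lines. This is a minimal illustration, not the product's implementation; the record fields (`planner`, `decision_type`, `context`) and the threshold of three matching overrides are taken from the description above, while the field names themselves are assumptions.

```python
from collections import defaultdict

MIN_PATTERN_SIZE = 3  # three or more matching overrides form a candidate entity

def detect_candidates(overrides):
    """Group override history by (planner, decision type, context) and
    emit a candidate knowledge entity for each group of 3+ overrides."""
    groups = defaultdict(list)
    for o in overrides:
        key = (o["planner"], o["decision_type"], o["context"])
        groups[key].append(o)
    return [
        {"planner": p, "decision_type": d, "context": c,
         "evidence_count": len(g), "status": "CANDIDATE"}
        for (p, d, c), g in groups.items()
        if len(g) >= MIN_PATTERN_SIZE
    ]

# Hypothetical override history: Anna repeatedly adjusts PO timing
# for the same supplier context; Ben's single override is ignored.
history = [
    {"planner": "anna", "decision_type": "po_timing", "context": "supplier_x_q4"},
    {"planner": "anna", "decision_type": "po_timing", "context": "supplier_x_q4"},
    {"planner": "anna", "decision_type": "po_timing", "context": "supplier_x_q4"},
    {"planner": "ben",  "decision_type": "safety_stock", "context": "dc_peak"},
]
candidates = detect_candidates(history)
```

Only Anna's repeated pattern crosses the threshold and becomes a candidate entity; Ben's one-off override stays in the raw history until more evidence accumulates.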
GENUINE vs. COMPENSATING Knowledge
Not all planner knowledge is equal. Alicke draws a critical distinction between knowledge that reflects real behavioral patterns and knowledge that compensates for system deficiencies. Autonomy's causal AI makes this distinction automatically.
GENUINE: Real behavioral patterns that remain true regardless of data quality. "Supplier X always delivers 3 days late in Q4 because their shift pattern changes." These patterns feed into agent reward shaping; the agent learns to anticipate what the planner already knew. Auto-classified when causal evidence shows the overrides were consistently BENEFICIAL.

COMPENSATING: Workarounds for known system or data deficiencies. "The forecast model is bad for product X, so I always add 20%." These are documented but excluded from reward shaping. When the root cause is fixed, compensating entities are automatically superseded. Auto-classified when causal evidence shows the overrides were DETRIMENTAL despite planner intent.
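As a rough sketch of the classification rule described above: given outcome evidence for each override (did it outperform the agent's original recommendation, per some causal estimate), an entity is labeled GENUINE when the overrides were consistently beneficial and COMPENSATING when they were consistently detrimental. The boolean-evidence representation and the 70% threshold are assumptions for illustration.

```python
def classify_entity(outcomes, threshold=0.7):
    """Classify a candidate entity from outcome evidence.

    outcomes: list of booleans, True if the override outperformed the
    agent's original recommendation (hypothetical causal estimate).
    """
    if not outcomes:
        return "UNCLASSIFIED"
    beneficial_rate = sum(outcomes) / len(outcomes)
    if beneficial_rate >= threshold:
        return "GENUINE"        # the planner knew something the system didn't
    if beneficial_rate <= 1 - threshold:
        return "COMPENSATING"   # detrimental despite planner intent
    return "UNCLASSIFIED"       # ambiguous; keep gathering evidence
```

Ambiguous entities stay unclassified rather than being forced into either bucket, so they keep accumulating evidence.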
Four Channels into Agent Training
Validated Experiential Knowledge doesn't sit in a static library; it feeds directly into the agent training pipeline through four channels, each serving a different purpose.
| Channel | Effect | Knowledge Type |
|---|---|---|
| State Augmentation | Appends conditional features to agent state vectors so the agent can "see" conditions it couldn't before | Both |
| Reward Shaping | Grants a bonus for decisions aligned with validated knowledge, accelerating learning in known patterns | GENUINE only |
| Conformal Prediction | Widens confidence intervals when conditions are active, keeping more human oversight until the agent masters the pattern | Both |
| Simulation Modifiers | Applies multipliers to stochastic distributions during Monte Carlo training so scenarios reflect reality | Both |
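Two of these channels can be sketched concretely. Below, state augmentation appends one binary flag per knowledge entity to the agent's state vector, and reward shaping adds a small bonus when an action matches validated GENUINE knowledge for the active context. The entity fields, context keys, and bonus magnitude are illustrative assumptions, not the product's actual schema.

```python
# A hypothetical validated knowledge entity (GENUINE, so it qualifies
# for reward shaping as well as state augmentation).
EK = [
    {"type": "GENUINE", "context": "supplier_x_q4",
     "preferred_action": "order_3_days_early"},
]

def augment_state(state, ek_entities, context):
    """State augmentation: append one binary feature per knowledge
    entity, flagging whether its trigger condition is currently active."""
    return state + [1.0 if e["context"] == context else 0.0 for e in ek_entities]

def shape_reward(base_reward, action, ek_entities, context, bonus=0.1):
    """Reward shaping (GENUINE only): add a small bonus when the agent's
    action matches validated planner knowledge for the active context."""
    for e in ek_entities:
        if (e["type"] == "GENUINE" and e["context"] == context
                and e["preferred_action"] == action):
            return base_reward + bonus
    return base_reward
```

The augmented feature lets the policy condition on the pattern directly, while the shaped reward steers exploration toward what the planner already knew, so the agent converges faster in those conditions.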
The Learning Flywheel
Experiential Knowledge creates a self-reinforcing cycle. Each stage accelerates the next until the agent handles the pattern autonomously, and the knowledge becomes documentation rather than a crutch.
- Experiential Knowledge widens conformal intervals, prompting more human Inspect activity in flagged conditions
- Overrides in flagged conditions confirm the pattern; evidence grows and confidence increases
- State augmentation lets the agent see the condition, and agent accuracy improves in training
- Conformal intervals narrow as agent confidence increases, so fewer Informs are needed
- The agent handles the condition autonomously; EK becomes institutional memory, not an active override
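The widening and narrowing that bookend the flywheel can be expressed as a single calibration function: intervals are inflated while an EK condition is active and the agent's confidence is low, and they decay back to the base width as confidence grows. The linear decay and the 2x penalty factor are illustrative assumptions, not the system's actual calibration rule.

```python
def conformal_width(base_width, ek_active, agent_confidence, penalty=2.0):
    """Sketch of EK-driven conformal calibration.

    While an EK condition is active, the interval width is inflated by
    up to `penalty`x; the inflation decays linearly as the agent's
    confidence (in [0, 1]) grows, closing the flywheel loop.
    """
    if not ek_active:
        return base_width
    return base_width * (1.0 + (penalty - 1.0) * (1.0 - agent_confidence))
```

Wider intervals route more decisions to human Inspect early on; as the agent masters the pattern, the intervals collapse to the base width and oversight recedes.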
"We know more than we can tell." Autonomy makes the tacit explicit, not by asking planners to document what they know, but by learning from what they do.
Knowledge Lifecycle
Experiential Knowledge isn't static. Supply chains change: suppliers shift lead times, demand patterns evolve, and new products are introduced. The EK system manages knowledge through a full lifecycle:
- Stagnation detection: Active entities not validated in N days are flagged as STALE and queued for planner review
- Contradiction resolution: When different planners encode conflicting patterns for the same context, both are flagged for resolution
- Sole-source retirement: When a planner leaves, entities where they're the sole source are flagged, preserving knowledge before it walks out the door
- Supersession: When statistical signals capture what was previously experiential, COMPENSATING entities are automatically superseded
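Two of these lifecycle checks, staleness and sole-source risk, reduce to simple predicates over an entity's metadata. A minimal sketch, assuming hypothetical fields (`last_validated`, `source_planners`) and a 90-day review window in place of the unspecified N:

```python
from datetime import date

STALE_AFTER_DAYS = 90  # stand-in for the unspecified N-day review window

def lifecycle_flags(entity, active_planners, today):
    """Return lifecycle flags for one knowledge entity."""
    flags = []
    # Stagnation detection: not validated within the review window.
    if (today - entity["last_validated"]).days > STALE_AFTER_DAYS:
        flags.append("STALE")
    # Sole-source retirement: the only contributing planner has left.
    sources = set(entity["source_planners"])
    if len(sources) == 1 and not sources <= set(active_planners):
        flags.append("SOLE_SOURCE_AT_RISK")
    return flags
```

Flagged entities would then be queued for planner review rather than silently retired, so the knowledge is preserved before it walks out the door.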
See Experiential Knowledge in action
Watch how overrides become training signal across the Decision Stream.