Learning theory
Type: index · Status: current
How systems learn, verify, and improve. These notes define learning mechanisms, verification gradients, and memory architecture that KB design draws on but that aren't KB-specific — they apply to any system that adapts through durable substrates, including but not limited to inspectable artifacts.
The collection is organized around deploy-time learning as the unifying framework. Accumulation — adding knowledge to the store — is the most basic learning operation, with reach as its key property: facts sit at the low end, theories at the high end. Two orthogonal mechanisms (constraining and distillation) transform accumulated knowledge. A third operation (discovery) produces the high-reach theories that are accumulation's most valuable items.
Foundations
- agentic-systems-interpret-underspecified-instructions — two distinct properties (semantic underspecification and execution indeterminism); the spec-to-program projection model, semantic boundaries, and the constrain/relax cycle
- learning-is-not-only-about-generality — accumulation is the most basic learning operation, with reach as its key property (facts at the low end, theories at the high end); capacity decomposes into generality vs a reliability/speed/cost compound; Simon's definition grounds the decomposition
- continuous-learning-requires-durability-not-weight-updates — the live disagreement is whether durable non-weight adaptation counts as learning at all; this note makes the affirmative case and turns artifact-side adaptation from metaphor into learning proper
- llm-learning-phases-fall-between-human-learning-modes — LLM phases (pre-training, in-context, deploy-time) occupy intermediate positions on the evolution-to-reaction spectrum rather than mapping 1:1 to human learning modes; warns against literal human-LLM learning analogies
- in-context-learning-presupposes-context-engineering — in-context learning depends on deploy-time learning to select and organize the right knowledge; Amodei's "no continual learning needed" claim relocates the learning to the system layer rather than eliminating it
Deploy-time Learning
The organizing framework: deployed systems adapt through symbolic artifacts — durable, inspectable, and verifiable — filling the gap between training and in-context learning.
- deploy-time-learning-the-missing-middle — three timescales of system adaptation; the verifiability gradient from prompt tweaks to deterministic code; concrete before-and-after examples of constraining at different grades
- learning-substrates-backends-and-artifact-forms — separates substrate class from backend and artifact form; explains why repo files, DB rows, and memory-service objects can all host the same broad learning substrate
- deploy-time-learning-is-agile-for-human-ai-systems — deploy-time learning and agile share the same core innovation (co-evolving prose and code); agile assumes code wins eventually, deploy-time learning treats the hybrid as the end state
- changing-requirements-conflate-genuine-change-with-disambiguation-failure — reframes agile: "changing requirements" hide late-surfacing interpretation errors in underspecified specs; short iterations bound interpretation-error propagation, not just change-response latency
- specification strategy should follow where understanding lives — names the lifecycle choice across spec-first, bidirectional, and behavior-extracted approaches; the right strategy depends on whether understanding is present before work, discovered during execution, or only visible after repeated runs
- evaluation automation is phase-gated by comprehension — concretizes the lifecycle for eval loops: comprehension and specification must precede optimization, or automation amplifies the wrong objective
- constraining-and-distillation-both-trade-generality-for-reliability-speed-and-cost — both mechanisms sacrifice generality for compound gains in reliability, speed, and cost; they differ in the operation (constraining vs extracting) and how much compound they yield
- bitter-lesson-boundary — determines when constraining is permanent (spec IS the problem) vs when relaxing is needed (spec approximates the problem); composition failure is the tell that specs are theories, not definitions
Constraining
Constraining the interpretation space — from partial narrowing (conventions) to full commitment (deterministic code). The primary mechanism for hardening deployed systems.
- constraining — definition and spectrum: storing an output, writing a convention, adding structured sections, extracting deterministic code; codification is the far end where the medium itself changes from natural language to executable code
- storing-llm-outputs-is-constraining — the simplest instance: keeping a specific LLM output resolves underspecification to one interpretation; develops the generator/verifier pattern and verbatim risk
- constraining-during-deployment-is-continuous-learning — AI labs' continuous learning is achievable through constraining with versioned artifacts, which beats weight updates on inspectability and rollback
- spec-mining-as-codification — codification's operational mechanism: observe behavior, extract deterministic rules, grow the calculator surface monotonically
- operational-signals-that-a-component-is-a-relaxing-candidate — five testable signals (paraphrase brittleness, isolation-vs-integration gap, process constraints, unspecifiable failures, distribution sensitivity) for detecting when to reverse codification
- error-messages-that-teach-are-a-constraining-technique — the dual-function property: effective enforcement artifacts simultaneously constrain and inform, because in agent systems the error channel is an instruction channel
- enforcement-without-structured-recovery-is-incomplete — the enforcement gradient covers detection and blocking but not recovery; maps ABC's corrective → fallback → escalation onto each enforcement layer, with oracle strength determining viable recovery strategies
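The constraining gradient the notes above describe can be sketched in a few lines. Everything here is illustrative (the slug task, the `STORED` cache, and all function names are hypothetical, not from any listed note), but it puts three grades side by side: a stored output, a generator gated by a deterministic verifier, and the rule itself extracted as code.

```python
import re

def llm_generate_slug(title: str) -> str:
    """Stand-in for a model call that proposes a slug.
    (Hypothetical generator; in a real system this is nondeterministic.)"""
    return title.lower().replace(" ", "-")

# Grade 1 — storing an output: keep one accepted answer, resolving the
# underspecified instruction ("make a slug") to a single interpretation.
STORED = {"Learning theory": "learning-theory"}

# Grade 2 — generator/verifier: the generator stays free, but a cheap
# deterministic check gates what it may return.
def is_valid_slug(slug: str) -> bool:
    return re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", slug) is not None

# Grade 3 — codification: the rule itself extracted as deterministic code,
# so the generator is bypassed for inputs the rule covers.
def codified_slug(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def slug_for(title: str) -> str:
    if title in STORED:                     # fully constrained: stored output
        return STORED[title]
    candidate = llm_generate_slug(title)    # otherwise generate...
    if is_valid_slug(candidate):            # ...and verify deterministically
        return candidate
    return codified_slug(title)             # fall back to the codified rule
```

The verifier is the dual-function artifact the error-message note points at: the same check that blocks a bad output is the place a teaching message would go.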
Distillation
Targeted extraction from a larger body of reasoning into a focused artifact shaped by use case, context budget, or agent. Orthogonal to constraining — you can distil without constraining (extract a skill, still underspecified) or constrain without distilling (store an output, no extraction from reasoning).
- distillation — definition: the rhetorical mode shifts to match the target (argumentative → procedural, exploratory → assertive); the dominant mechanism in knowledge work because it creates new artifacts from existing reasoning
Information & Bounded Observers
- information-value-is-observer-relative — deterministic transformations add zero classical information but can make structure accessible to bounded observers; names the gap that distillation and discovery each describe operationally
- epiplexity-eli5 — ELI5 explanation of epiplexity through encrypted messages, shuffled textbooks, CSPRNGs, and chess notation; contrasts surprise, shortest description, and observer-relative usable structure
- minimum-viable-vocabulary-is-the-set-of-names-that-maximally-reduces-extraction-cost-for-a-bounded-observer — reframes "minimum viable ontology" as the vocabulary that maximally reduces extraction cost for a bounded observer entering a domain; synthesizes information-value, discovery, and distillation
- first-principles-reasoning-selects-for-explanatory-reach-over-adaptive-fit — Deutsch's adaptive-vs-explanatory distinction: explanatory knowledge transfers because it captures why, not just what works; grounds the KB's first-principles filter as selecting for reach
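The observer-relative point admits a concrete demonstration: sorting is a deterministic transformation, so it adds zero classical information, yet it collapses a bounded observer's extraction cost from a linear scan to a binary search. A minimal sketch (the store of random "facts" is invented for illustration):

```python
import random

random.seed(0)
facts = random.sample(range(10**6), 50_000)   # an unorganized store

# A deterministic transformation: the sorted list is fully computable from
# the original, so classically it carries no new information.
organized = sorted(facts)

def lookup_cost_linear(xs, target):
    """Comparisons a bounded observer pays scanning an unordered store."""
    for i, x in enumerate(xs):
        if x == target:
            return i + 1
    return len(xs)

def lookup_cost_binary(xs, target):
    """Comparisons against the sorted store: O(log n) instead of O(n)."""
    lo, hi, cost = 0, len(xs), 0
    while lo < hi:
        cost += 1
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return cost

target = facts[40_000]
# Same content, radically different extraction cost for a bounded observer:
print(lookup_cost_linear(facts, target), "vs", lookup_cost_binary(organized, target))
```

This is the gap the distillation and discovery notes each describe operationally: the transformation pays an extraction cost once so that every later bounded observer does not.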
Discovery
A third operation, distinct from both constraining and distillation: positing a new general concept and simultaneously recognizing existing particulars as instances of it. Discovery produces theories — the highest-reach items accumulation can store.
- discovery-is-seeing-the-particular-as-an-instance-of-the-general — the dual structure of discovery (posit the general, recognize the particular); three depths from shared feature through shared structure to generative model; the hard problem is recognition, not linking
Synthesis
- a good agentic KB maximizes contextual competence through discoverable, composable, trustworthy knowledge — accumulation as the basic operation plus three transformation operations (constraining, distillation, discovery) mapped to three knowledge properties (trustworthy, discoverable, composable) serving contextual competence under bounded context; reach as the quality dimension of what's accumulated
- agent context is constrained by soft degradation not hard token limits — the binding constraint is the soft degradation curve (dilution, compositional collapse), not the hard token limit; programmatic constructability is the genuine differentiator
- soft-bound traditions as sources for context engineering strategies — catalog of twelve traditions with transfer assessment: what's already working, what's plausible, and what blocks transfer (optimization target mismatch, feedback absence, different failure modes)
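Taking the soft degradation curve, rather than the hard limit, as the binding constraint suggests budgeting context well below the token ceiling and constructing it programmatically. A hypothetical sketch (the `Note` fields, budget numbers, and greedy relevance-per-token policy are all assumptions for illustration, not something a listed note prescribes):

```python
from dataclasses import dataclass

@dataclass
class Note:
    slug: str
    tokens: int       # measured size
    relevance: float  # score from some upstream retrieval step (assumed)

def assemble_context(notes, hard_limit=8000, soft_fraction=0.5):
    """Greedy packing by relevance per token. The budget sits well below
    the hard token limit, treating dilution, not overflow, as the
    binding constraint."""
    budget = int(hard_limit * soft_fraction)
    chosen, used = [], 0
    for note in sorted(notes, key=lambda n: n.relevance / n.tokens, reverse=True):
        if used + note.tokens <= budget:
            chosen.append(note.slug)
            used += note.tokens
    return chosen, used

notes = [
    Note("constraining", 900, 0.9),
    Note("distillation", 700, 0.8),
    Note("discovery", 1200, 0.5),
    Note("tangential-survey", 3000, 0.6),
]
print(assemble_context(notes))
```

The large low-density note is dropped even though it would fit under the hard limit; that exclusion, not truncation, is what a soft-degradation policy buys.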
Oracle & Verification
Moved to LLM interpretation errors — oracle theory, error correction, reliability dimensions, and the augmentation/automation boundary now live in the dedicated error-theory area. Key notes:
- error-correction-works-above-chance-oracles-with-decorrelated-checks — the core theory of error correction via decorrelated weak oracles
- oracle-strength-spectrum — the gradient from hard to no oracle that determines engineering priorities
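The error-correction claim is quantitative and easy to check by simulation. A minimal sketch, assuming fully independent checks (the idealization the "decorrelated" qualifier points at): each check is a weak oracle correct 65% of the time, and a majority vote over fifteen of them lands well above any single check.

```python
import random

random.seed(0)

def simulate(n_checks: int, p_correct: float, trials: int = 20_000) -> float:
    """Fraction of trials where a majority vote of independent checks,
    each correct with probability p_correct > 0.5, reaches the right verdict."""
    wins = 0
    for _ in range(trials):
        votes = sum(random.random() < p_correct for _ in range(n_checks))
        wins += votes > n_checks // 2   # odd n_checks avoids ties
    return wins / trials

# One weak oracle vs a panel of decorrelated weak oracles:
single = simulate(1, 0.65)
panel = simulate(15, 0.65)
print(f"single: {single:.2f}  panel of 15: {panel:.2f}")
```

Correlated checks break this: if all fifteen share a blind spot, the panel is no better than one of them, which is why decorrelation rather than count is the load-bearing condition.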
Memory & Architecture
- three-space-agent-memory-maps-to-tulving-taxonomy — agent memory split into knowledge, self, and operational spaces mirrors Tulving's semantic/episodic/procedural distinction
- flat-memory-predicts-specific-cross-contamination-failures-that-are-empirically-testable — the three-space claim is testable: flat memory predicts specific cross-contamination failures
- inspectable-substrate-not-supervision-defeats-the-blackbox-problem — codification counters the blackbox problem not by requiring human review but by choosing a substrate (repo artifacts) that any agent can inspect, diff, test, and verify
- A-MEM: Agentic Memory for LLM Agents — academic paper: Zettelkasten-inspired agent memory with automated link generation; flat single-space design provides a test case for whether three-space separation matters at QA-benchmark scale
- memory-management-policy-is-learnable-but-oracle-dependent — AgeMem's RL-trained memory policy demonstrates low-reach accumulation (facts) and distillation (STM); confirms memory policy is vision-feature-like per the bitter lesson boundary, but requires a task-completion oracle the KB cannot yet provide
- Multi-Agent Memory from a Computer Architecture Perspective — computer-architecture analogy for multi-agent memory: shared/distributed paradigms, three-layer hierarchy, and consistency protocols as the critical unsolved problem
- Graphiti — temporally-aware knowledge graph with bi-temporal edge invalidation; strongest temporal model in the surveyed memory systems and strongest counterexample to files-first architecture
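The three-space claim is, at bottom, an API claim: writes must name their space, or contamination cannot even be expressed, let alone tested. A toy sketch of the separation (names are illustrative and not drawn from any of the surveyed systems):

```python
from dataclasses import dataclass, field

@dataclass
class ThreeSpaceMemory:
    """Sketch of the three-space split mapped onto Tulving's taxonomy."""
    knowledge: dict = field(default_factory=dict)    # semantic: durable facts and theories
    self_model: dict = field(default_factory=dict)   # episodic: what this agent did and prefers
    operational: dict = field(default_factory=dict)  # procedural: how to run tasks

    def write(self, space: str, key: str, value: str) -> None:
        # Separation is enforced at the API boundary: a write must name its
        # space, which is what makes cross-contamination detectable at all.
        getattr(self, space)[key] = value

mem = ThreeSpaceMemory()
mem.write("knowledge", "reach", "how far a theory applies beyond its origin")
mem.write("operational", "review-loop", "generate, verify, revise")
assert "reach" not in mem.operational   # a flat store cannot express this check
```

A flat single-space design (A-MEM above) collapses the three dicts into one, which is exactly the condition under which the predicted cross-contamination failures become unobservable.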
Applications
- unified-calling-conventions-enable-bidirectional-refactoring — when agents and tools share a calling convention, constraining and codification become local operations; llm-do as primary evidence
- programming-practices-apply-to-prompting — typing, testing, progressive compilation, and version control transfer from programming to LLM prompting, with probabilistic execution making some practices harder
- ad-hoc-prompts-extend-the-system-without-schema-changes — the counterpoint: sometimes staying at the prompt level is the right choice; ad hoc instructions absorb new requirements faster than schema changes
- legal-drafting-solves-the-same-problem-as-context-engineering — law as an independent source discipline for the underspecified instructions problem: precedent and codification are constraining; legal techniques are native to the underspecified medium
- Ephemeral computation prevents accumulation — ephemeral vs persistent artifacts as the inverse of codification; discarding generated artifacts trades accumulation for simplicity
- Ephemerality is safe where embedded operational knowledge has low reach — synthesizes Kirsch's four barriers with the reach concept: the ephemeral/malleable boundary sits where embedded operational knowledge crosses from low reach (adaptive, safe to discard) to high reach (explanatory, must accumulate)
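The unified-calling-convention point can be shown in a few lines: when prompt-backed and codified steps honor the same signature, codifying and relaxing are each a one-line swap in a registry rather than a refactor of callers. A hypothetical sketch (the registry, step names, and slug task are invented for illustration, not taken from llm-do):

```python
from typing import Callable, Dict

# One calling convention: every step takes a dict payload, returns a dict.
Step = Callable[[Dict[str, str]], Dict[str, str]]

def llm_step(payload: Dict[str, str]) -> Dict[str, str]:
    """Stand-in for a prompt-backed step (would call a model in reality)."""
    return {"slug": payload["title"].lower().replace(" ", "-")}

def coded_step(payload: Dict[str, str]) -> Dict[str, str]:
    """The same contract, extracted as deterministic code (codified)."""
    return {"slug": payload["title"].lower().replace(" ", "-")}

REGISTRY: Dict[str, Step] = {"slugify": llm_step}

# Constraining is a local operation: swap the implementation, callers unchanged.
REGISTRY["slugify"] = coded_step   # codify
REGISTRY["slugify"] = llm_step     # ...or relax, equally locally

print(REGISTRY["slugify"]({"title": "Learning theory"}))
```

Without the shared convention, each swap would propagate signature changes into every caller, which is what makes bidirectional refactoring expensive in conventional tool designs.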
Reference material
- Context Engineering for AI Agents in OSS — empirical study of AGENTS.md/CLAUDE.md evolution in 466 OSS projects; commit-level analysis shows constraining maturation trajectory confirming continuous learning through versioned artifacts
- On the "Induction Bias" in Sequence Models — 190k-run empirical study showing transformers need orders-of-magnitude more data than RNNs for state tracking; architectural induction bias determines data efficiency and weight sharing, grounding the computational bounds dimension of learning capacity
Related Tags
- llm-interpretation-errors — oracle theory, error correction, and reliability dimensions migrated here; the error-theory area applies verification concepts specifically to LLM interpretation failures
- tags — applies learning theory to KB architecture and evaluation; methodology-enforcement-is-constraining bridges both areas
- document-system — the type ladder (text→note→structured-claim) instantiates the constraining gradient for documents
Other tagged notes
- Apparent success is an unreliable health signal in framework-owned tool loops — When framework-owned tool loops recover from broken tools via agent workarounds, final success stops being a reliable signal that the underlying scripts and workflows are healthy
- Automated synthesis is missing good oracles — Generating synthesis candidates (cross-note connections, novel combinations) is easy — LLMs do it readily. The hard part is evaluating whether a candidate is genuine insight or noise.
- Brainstorming: how reach informs KB design — Brainstorming on Deutsch's "reach" concept applied to KB notes — reach is a maintenance risk signal (not a retrieval signal) because high-reach revisions break downstream reasoning silently
- Codification and relaxing navigate the bitter lesson boundary — Since you can't identify which side of the bitter lesson boundary you're on until scale tests it, practical systems must codify and relax — with spec mining avoiding the vision-feature failure mode
- Evolving understanding needs re-distillation, not composition — When understanding evolves, reconciling fragments into a coherent picture can exceed effective context; a pre-distilled narrative keeps the whole picture within feasible bounds
- Reverse-compression (inflation) is the failure mode where LLM output expands without adding information — LLMs can inflate a compact seed into verbose prose that carries no more extractable structure — the test for whether a KB resists this is whether notes accumulate epiplexity across the network, not just token count
- Selector-loaded review gates could let review-revise learn from accepted edits — Brainstorm on learning reusable review gates from accepted note edits: mine candidate gates from before/after diffs, store them atomically, and load a bounded subset into future reviews
- Short composable notes maximize combinatorial discovery — The library's purpose is to produce notes that can be co-loaded for combinatorial discovery — short atomic notes are a consequence of this goal; longer synthesized artifacts belong in workshops or distilled instructions
- Silent disambiguation is the semantic analogue of tool fallback — When an agent silently resolves unacknowledged material ambiguity in a spec, final success hides that the contract failed to determine the path — an extension of the tool-fallback observability problem
- Systematic prompt variation serves verification and diagnosis, not explanatory-reach testing — Controlled prompt variation either decorrelates checks or measures brittleness under fixed task semantics; Deutsch's variation test instead changes the explanation to test mechanism and reach
- The fundamental split in agent memory is not storage format but who decides what to remember — Comparative analysis of eleven agent memory systems across six architectural dimensions (storage unit, agency model, link structure, temporal model, curation operations, extraction schema); the agency question proves the most consequential design choice, and no surveyed system combines high agency, high throughput, and high curation quality
- Trace-derived learning techniques in related systems — Sixteen code-inspected systems compared on trace ingestion pattern, promotion target (symbolic artifacts vs weights), artifact structure spectrum, and maintenance paths