Learning theory
Type: index · Status: current
How systems learn, verify, and improve. These notes define learning mechanisms, verification gradients, and memory architecture that KB design draws on but that aren't KB-specific — they apply to any system that adapts through inspectable artifacts.
The collection is organized around deploy-time learning as the unifying framework. Accumulation — adding knowledge to the store — is the most basic learning operation, with reach as its key property: facts sit at the low end, theories at the high end. Two orthogonal mechanisms (constraining and distillation) transform accumulated knowledge. A third operation (discovery) produces the high-reach theories that are accumulation's most valuable items.
Foundations
- agentic-systems-interpret-underspecified-instructions — two distinct properties (semantic underspecification and execution indeterminism); the spec-to-program projection model, semantic boundaries, and the constrain/relax cycle
- learning-is-not-only-about-generality — accumulation is the most basic learning operation, with reach as its key property (facts at the low end, theories at the high end); capacity decomposes into generality vs a reliability/speed/cost compound; Simon's definition grounds the decomposition
- llm-learning-phases-fall-between-human-learning-modes — LLM phases (pre-training, in-context, deploy-time) occupy intermediate positions on the evolution-to-reaction spectrum rather than mapping 1:1 to human learning modes; warns against literal human-LLM learning analogies
- in-context-learning-presupposes-context-engineering — in-context learning depends on deploy-time learning to select and organize the right knowledge; Amodei's "no continual learning needed" claim relocates the learning to the system layer rather than eliminating it
Deploy-time Learning
The organizing framework: deployed systems adapt through repo artifacts — durable, inspectable, and verifiable — filling the gap between training and in-context learning.
- deploy-time-learning-the-missing-middle — three timescales of system adaptation; the verifiability gradient from prompt tweaks to deterministic code; concrete before-and-after examples of constraining at different grades
- deploy-time-learning-is-agile-for-human-ai-systems — deploy-time learning and agile share the same core innovation (co-evolving prose and code); agile assumes code wins eventually, deploy-time learning treats the hybrid as the end state
- changing-requirements-conflate-genuine-change-with-disambiguation-failure — reframes agile: "changing requirements" hide late-surfacing interpretation errors in underspecified specs; short iterations bound interpretation-error propagation, not just change-response latency
- constraining-and-distillation-both-trade-generality-for-reliability-speed-and-cost — both mechanisms sacrifice generality for compound gains in reliability, speed, and cost; they differ in the operation (narrowing vs extracting) and how much compound they yield
- bitter-lesson-boundary — determines when constraining is permanent (spec IS the problem) vs when relaxing is needed (spec approximates the problem); composition failure is the tell that specs are theories, not definitions
Constraining
Constraining the interpretation space — from partial narrowing (conventions) to full commitment (deterministic code). The primary mechanism for hardening deployed systems.
- constraining — definition and spectrum: storing an output, writing a convention, adding structured sections, extracting deterministic code; codification is the far end where the medium itself changes from natural language to executable code
- storing-llm-outputs-is-constraining — the simplest instance: keeping a specific LLM output resolves underspecification to one interpretation; develops the generator/verifier pattern and verbatim risk
- constraining-during-deployment-is-continuous-learning — AI labs' continuous learning is achievable through constraining with versioned artifacts, which beats weight updates on inspectability and rollback
- spec-mining-as-codification — codification's operational mechanism: observe behavior, extract deterministic rules, grow the calculator surface monotonically
- operational-signals-that-a-component-is-a-relaxing-candidate — six testable signals: five early-detection (paraphrase brittleness, isolation-vs-integration gap, process constraints, unspecifiable failure modes, distribution sensitivity) plus composition failure as late-stage confirmation, for detecting when to reverse codification
- error-messages-that-teach-are-a-constraining-technique — the dual-function property: effective enforcement artifacts simultaneously constrain and inform, because in agent systems the error channel is an instruction channel
- enforcement-without-structured-recovery-is-incomplete — the enforcement gradient covers detection and blocking but not recovery; maps ABC's corrective → fallback → escalation onto each enforcement layer, with oracle strength determining viable recovery strategies
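The far end of this gradient can be made concrete. Below is a minimal sketch, with invented names and a hypothetical slug convention (nothing here is from the notes themselves): a deterministic check whose error message both blocks the wrong output and teaches the fix, treating the error channel as an instruction channel, plus a corrective-recovery helper of the kind the enforcement-gradient note says is usually missing.

```python
# Illustrative sketch (names and the slug rule are hypothetical): a
# deterministic check at the hard end of the constraining gradient.
import re

# The convention, committed to code: lowercase words joined by single hyphens.
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def check_note_slug(slug: str) -> None:
    """Detection + blocking: reject invalid slugs with a message that
    states the convention and the concrete fix (constrain AND inform)."""
    if not SLUG_RE.fullmatch(slug):
        raise ValueError(
            f"Invalid note slug {slug!r}: slugs must be lowercase words "
            "joined by single hyphens (e.g. 'storing-llm-outputs-is-constraining'). "
            "Fix: lowercase the title, replace spaces with hyphens, drop punctuation."
        )

def slugify(title: str) -> str:
    """Corrective recovery: derive a valid slug rather than only rejecting."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    check_note_slug(slug)  # the verifier still runs on the corrected output
    return slug
```

`check_note_slug` alone covers only the detection-and-blocking columns; pairing it with `slugify` adds the corrective layer before falling back to escalation (surfacing the raised error to a human or stronger oracle).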
Distillation
Targeted extraction from a larger body of reasoning into a focused artifact shaped by use case, context budget, or agent. Orthogonal to constraining — you can distil without constraining (extract a skill, still underspecified) or constrain without distilling (store an output, no extraction from reasoning).
- distillation — definition: the rhetorical mode shifts to match the target (argumentative → procedural, exploratory → assertive); the dominant mechanism in knowledge work because it creates new artifacts from existing reasoning
Information & Bounded Observers
- information-value-is-observer-relative-because-extraction-requires-computation — deterministic transformations add zero classical information but can make structure accessible to bounded observers; names the gap that distillation and discovery each describe operationally
- minimum-viable-vocabulary-is-the-set-of-names-that-maximally-reduces-extraction-cost-for-a-bounded-observer — reframes "minimum viable ontology" as the vocabulary that maximally reduces extraction cost for a bounded observer entering a domain; synthesizes information-value, discovery, and distillation
- first-principles-reasoning-selects-for-explanatory-reach-over-adaptive-fit — Deutsch's adaptive-vs-explanatory distinction: explanatory knowledge transfers because it captures why, not just what works; grounds the KB's first-principles filter as selecting for reach
Discovery
A third operation, distinct from both constraining and distillation: positing a new general concept and simultaneously recognizing existing particulars as instances of it. Discovery produces theories — the highest-reach items accumulation can store.
- discovery-is-seeing-the-particular-as-an-instance-of-the-general — the dual structure of discovery (posit the general, recognize the particular); three depths from shared feature through shared structure to generative model; the hard problem is recognition, not linking
Synthesis
- a good agentic KB maximizes contextual competence through discoverable, composable, trustworthy knowledge — accumulation as the basic operation plus three transformation operations (constraining, distillation, discovery) mapped to three knowledge properties (trustworthy, discoverable, composable) serving contextual competence under bounded context; reach as the quality dimension of what's accumulated
Oracle & Verification
Moved to LLM interpretation errors — oracle theory, error correction, reliability dimensions, and the augmentation/automation boundary now live in the dedicated error-theory area. Key notes:
- error-correction-works-above-chance-oracles-with-decorrelated-checks — the core theory of error correction via decorrelated weak oracles
- oracle-strength-spectrum — the gradient from hard to no oracle that determines engineering priorities
Memory & Architecture
- three-space-agent-memory-maps-to-tulving-taxonomy — agent memory split into knowledge, self, and operational spaces mirrors Tulving's semantic/episodic/procedural distinction
- three-space-memory-separation-predicts-measurable-failure-modes — the three-space claim is testable: flat memory predicts specific cross-contamination failures
- inspectable-substrate-not-supervision-defeats-the-blackbox-problem — codification counters the blackbox problem not by requiring human review but by choosing a substrate (repo artifacts) that any agent can inspect, diff, test, and verify
- A-MEM: Agentic Memory for LLM Agents — academic paper: Zettelkasten-inspired agent memory with automated link generation; flat single-space design provides a test case for whether three-space separation matters at QA-benchmark scale
- memory-management-policy-is-learnable-but-oracle-dependent — AgeMem's RL-trained memory policy demonstrates low-reach accumulation (facts) and distillation (short-term memory); confirms memory policy is vision-feature-like per the bitter lesson boundary, but requires a task-completion oracle the KB cannot yet provide
- Graphiti — temporally-aware knowledge graph with bi-temporal edge invalidation; strongest temporal model in the surveyed memory systems and strongest counterexample to files-first architecture
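The three-space split can be sketched in a few lines. This is an invented minimal API (not the actual memory system's interface), showing the structural claim: knowledge, self, and operational stores are kept separate, and task retrieval never draws on raw episodic records, which is exactly the cross-contamination path a flat store would permit.

```python
# Minimal sketch (API invented for illustration) of three-space agent memory
# mirroring Tulving's semantic/episodic/procedural distinction.
from dataclasses import dataclass, field

@dataclass
class ThreeSpaceMemory:
    knowledge: dict[str, str] = field(default_factory=dict)    # semantic: durable claims
    self_log: list[str] = field(default_factory=list)          # episodic: what happened
    operational: dict[str, str] = field(default_factory=dict)  # procedural: how to act

    def remember_fact(self, key: str, claim: str) -> None:
        self.knowledge[key] = claim

    def remember_episode(self, event: str) -> None:
        self.self_log.append(event)

    def remember_procedure(self, task: str, steps: str) -> None:
        self.operational[task] = steps

    def recall_for_task(self, task: str) -> list[str]:
        """Retrieval draws on knowledge and procedures, never raw episodes —
        a flat store could not enforce this boundary."""
        hits = [v for k, v in self.knowledge.items() if task in k]
        if task in self.operational:
            hits.append(self.operational[task])
        return hits
```

The testable prediction from the separation claim is that collapsing these three dicts into one store produces retrieval results polluted by episodic entries; the typed stores make that failure mode structurally impossible rather than merely discouraged.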
Applications
- unified-calling-conventions-enable-bidirectional-refactoring — when agents and tools share a calling convention, constraining and codification become local operations; llm-do as primary evidence
- programming-practices-apply-to-prompting — typing, testing, progressive compilation, and version control transfer from programming to LLM prompting, with probabilistic execution making some practices harder
- ad-hoc-prompts-extend-the-system-without-schema-changes — the counterpoint: sometimes staying at the prompt level is the right choice; ad hoc instructions absorb new requirements faster than schema changes
- legal-drafting-solves-the-same-problem-as-context-engineering — law as an independent source discipline for the underspecified instructions problem: precedent and codification are constraining; legal techniques are native to the underspecified medium
- Ephemeral computation prevents accumulation — ephemeral vs persistent artifacts as inverse of codification; discarding generated artifacts trades accumulation for simplicity
Reference material
- Context Engineering for AI Agents in OSS — empirical study of AGENTS.md/CLAUDE.md evolution in 466 OSS projects; commit-level analysis shows constraining maturation trajectory confirming continuous learning through versioned artifacts
- On the "Induction Bias" in Sequence Models — 190k-run empirical study showing transformers need orders-of-magnitude more data than RNNs for state tracking; architectural induction bias determines data efficiency and weight sharing, grounding the computational bounds dimension of learning capacity
Related Tags
- llm-interpretation-errors — oracle theory, error correction, and reliability dimensions migrated here; the error-theory area applies verification concepts specifically to LLM interpretation failures
- kb-design — applies learning theory to KB architecture and evaluation; methodology-enforcement-is-constraining bridges both areas
- document-system — the type ladder (text→note→structured-claim) instantiates the constraining gradient for documents
All notes
- A good agentic KB maximizes contextual competence through discoverable, composable, trustworthy knowledge — Theory of why commonplace's arrangements work — three properties (discoverable, composable, trustworthy) serve contextual competence under bounded context; accumulation is the basic learning operation (reach distinguishes facts from theories); constraining, distillation, and discovery transform accumulated knowledge; Deutsch's reach criterion distinguishes knowledge that transfers from knowledge that merely fits
- Ad hoc prompts extend the system without schema changes — When a new requirement doesn't fit existing types or skills, writing an ad hoc instructions note absorbs it without any schema change — the collections problem is a concrete example
- Agentic systems interpret underspecified instructions — LLM-based systems have two distinct properties — semantic underspecification of natural language specs (the deeper difference from traditional programming) and execution indeterminism (present in all practical systems) — the spec-to-program projection model captures the first, which indeterminism tends to obscure
- Changing requirements conflate genuine change with disambiguation failure — Agile's 'changing requirements' hide two distinct phenomena — genuine change (world moved) and late discovery that downstream specs committed to a wrong interpretation of an underspecified upstream spec — short iterations limit interpretation-error propagation, not just change-response latency
- Codification — Definition — codification is constraining that crosses a medium boundary from natural language to a symbolic medium (code), where the consumer changes (LLM → interpreter) and verification becomes exact — the far end of the constraining spectrum
- Codification and relaxing navigate the bitter lesson boundary — Since you can't identify which side of the bitter lesson boundary you're on until scale tests it, practical systems must codify and relax — with spec mining avoiding the vision-feature failure mode
- Constraining — Definition — constraining narrows the space of valid interpretations an underspecified spec admits, from partial narrowing (conventions, structured sections) to full commitment (stored outputs, deterministic code) — one of two co-equal learning mechanisms alongside distillation
- Constraining and distillation both trade generality for reliability, speed, and cost — Both learning mechanisms — constraining (narrowing) and distillation (extracting) — sacrifice generality for compound gains in reliability, speed, and cost; they differ in the operation and how much compound they yield
- Constraining during deployment is continuous learning — Continuous learning — adapting deployed systems to new data and tasks — is what constraining with versioned artifacts already achieves per Simon's definition; fine-tuning and prompt optimization target the same behavioral changes through different mechanisms
- Deploy-time learning is agile for human-AI systems — Argues deploy-time learning and agile share the same core innovation — co-evolving prose and code — but deploy-time learning extends it by treating some prose as permanently load-bearing
- Deploy-time learning: The Missing Middle — Deploy-time learning fills the gap between training and in-context learning — repo artifacts provide durable, inspectable adaptation through three mechanisms (constraining, codification, distillation) with a verifiability gradient from prompt tweaks to deterministic code
- Discovery is seeing the particular as an instance of the general — Proposes that discovery has a dual structure — positing a new general concept while recognizing existing particulars as instances of it — and that similarity-based connections vary by abstraction depth (shared feature → shared structure → generative model), not link kind. Scoped to similarity connections; contrastive and causal links are a different axis.
- Distillation — Definition — distillation is compressing knowledge for a specific task under a context budget — the operation that context engineering machinery exists to perform; one of two co-equal learning mechanisms alongside constraining
- Enforcement without structured recovery is incomplete — The enforcement gradient covers detection and blocking but has no recovery column — recovery strategies (corrective → fallback → escalation) are the missing layer, and oracle strength determines which are viable at each level
- Ephemeral computation prevents accumulation — Ephemeral computation — discarding generated artifacts after use — trades accumulation for simplicity, making it the inverse of codification
- Error messages that teach are a constraining technique — The most effective constraining artifacts simultaneously constrain (block wrong output) and inform (teach the fix) — because in agent systems the error channel is an instruction channel; fills the gap between the constraining gradient's layers and the context they deliver
- First-principles reasoning selects for explanatory reach over adaptive fit — Deutsch's adaptive-vs-explanatory distinction — explanatory knowledge has "reach" (transfers to new contexts) because it captures why, not just what works; grounds the KB's first-principles filter as selecting for reach over fit
- In-context learning presupposes context engineering — In-context learning only works when the right knowledge reaches the context window — the selection machinery that ensures this is itself learned and refined over deployment
- Information value is observer-relative because extraction requires computation — Classical information measures miss accessibility — transforms that preserve or reduce Shannon entropy can make structure visible to bounded observers. Connects distillation and discovery as instances of the same computational-bounds gap.
- Inspectable substrate, not supervision, defeats the blackbox problem — Chollet frames agentic coding as ML producing blackbox codebases — codification counters this not by requiring human review but by choosing a substrate (repo artifacts) that any agent can inspect, diff, test, and verify
- Learning is not only about generality — Per Simon, any capacity change is learning; accumulation is the most basic learning operation and reach is its key property — facts (low reach) vs theories (high reach); capacity also decomposes into generality vs a reliability/speed/cost compound
- Legal drafting solves the same problem as context engineering — Law has centuries of methodology for writing natural language specifications interpreted by a judgment-exercising processor — the same problem as context engineering for LLMs. Legal techniques (defined terms, structural conventions, precedent) are constraining techniques native to the underspecified medium; law mostly lacks codification because statutes remain natural language.
- LLM learning phases fall between human learning modes rather than mapping onto them — Pre-training acquires both structural priors (evolution's role in humans) and world knowledge in one pass — making it and in-context learning intermediate on the evolution-to-reaction spectrum
- Memory management policy is learnable but oracle-dependent — AgeMem learns on two substrates — facts accumulated in memory (low-reach) and policy learned in weights (when to accumulate, distil, curate) — confirming memory policy is vision-feature-like; but the learning depends on task-completion oracles, which is exactly the evaluation gap that makes automating KB learning hard
- Methodology enforcement is constraining — Instructions, skills, hooks, and scripts form a constraining gradient for methodology — from underspecified and indeterministic (LLM interprets and may not follow) to fully deterministic (code always runs), with hooks occupying a middle ground of deterministic triggers with indeterministic responses
- Minimum viable vocabulary is the set of names that maximally reduces extraction cost for a bounded observer — Reframes "minimum viable ontology" as an optimization problem — the vocabulary that, once acquired, maximally reduces a bounded observer's extraction cost for a domain; grounds the pedagogical intuition of "conceptual thresholds" in the KB's information-theoretic framework
- Operational signals that a component is a relaxing candidate — Six operational signals — five early-detection (paraphrase brittleness, isolation-vs-integration gap, process constraints, unspecifiable failure modes, distribution sensitivity) plus composition failure as late-stage confirmation — for shifting confidence about whether a component encodes theory or specification.
- Programming practices apply to prompting — Programming practices — typing, testing, progressive compilation, version control — apply to LLM prompting and knowledge systems, with semantic underspecification and execution indeterminism making some practices harder in distinct ways
- Spec mining is codification's operational mechanism — Operationalizes codification by extracting deterministic verifiers from observed stochastic behavior — the mechanism that converts blurry-zone components into calculators
- Storing LLM outputs is constraining — Choosing to keep a specific LLM output resolves semantic underspecification to one interpretation and freezes it against execution indeterminism — the same constraining move the parent note describes for code, applied to artifacts
- The bitter lesson has a boundary — Arithmetic vs vision features illustrate when exact solutions survive scaling and when they don't
- The fundamental split in agent memory is not storage format but who decides what to remember — Comparative analysis of eleven agent memory systems across six architectural dimensions — storage unit, agency model, link structure, temporal model, curation operations, and extraction schema — revealing that the agency question (who decides what to remember) is the most consequential design choice and that no system combines high agency, high throughput, and high curation quality.
- Three-space agent memory maps to Tulving's taxonomy — Agent memory split into knowledge, self, and operational spaces mirrors Tulving's semantic/episodic/procedural distinction
- Three-space memory separation predicts measurable failure modes — The three-space memory claim is testable because flat memory predicts specific cross-contamination failures
- Unified calling conventions enable bidirectional refactoring between neural and symbolic — When agents and tools share a calling convention, components can move between neural and symbolic without changing call sites — llm-do demonstrates this with name-based dispatch over a hybrid VM
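The last entry's mechanism can be sketched concretely. This is a hedged illustration with an invented registry (not llm-do's actual API, and the "neural" component is a deterministic stub so the sketch runs): call sites invoke components by name, and the registry decides whether that name resolves to symbolic code or a prompt-backed function, so moving a component across the neural/symbolic boundary is a registry change, not a call-site change.

```python
# Hedged sketch of a unified calling convention (registry and names are
# hypothetical): one calling convention shared by neural and symbolic parts.
from typing import Callable

REGISTRY: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Bind an implementation to a name; call sites only ever see the name."""
    def deco(fn: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[name] = fn
        return fn
    return deco

def call(name: str, payload: str) -> str:
    """The single calling convention: name-based dispatch, any substrate."""
    return REGISTRY[name](payload)

# Symbolic component: deterministic, exactly verifiable.
@register("normalize-title")
def normalize_title(text: str) -> str:
    return " ".join(text.split()).lower()

# 'Neural' stand-in: in a real system this would prompt an LLM; stubbed here.
# Codifying it later means re-registering the name with deterministic code.
@register("summarize")
def summarize_stub(text: str) -> str:
    return text.split(".")[0] + "."
```

Because every caller goes through `call("summarize", ...)`, swapping the stub for extracted deterministic code (constraining) or back to a prompt (relaxing) is a one-line registry edit, which is what makes refactoring across the boundary bidirectional and local.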