First-principles reasoning selects for explanatory reach over adaptive fit
Type: note · Status: seedling
David Deutsch distinguishes two kinds of knowledge that mainstream usage conflates:
Adaptive information — structures that help a system cope with the world. A genome encodes successful adaptations. A neural network's weights encode useful patterns. An animal's instincts encode strategies that work. These are useful, but they don't explain why they work, can't be deliberately varied, and don't transfer beyond their training distribution.
Explanatory knowledge — explanations of why the world works a certain way. They can be deliberately varied and criticized, and they transfer to new contexts because they capture deeper structure rather than successful habit. A gene "knows" how to build an eye but contains no theory of optics. Newton's optics is explanatory — it reaches contexts no eye ever encountered.
The distinguishing property is reach: explanatory knowledge applies beyond its original context because the explanation captures structure that isn't context-dependent.
Why this matters for the KB
The KB's first-principles methodology is, in Deutsch's terms, a filter that selects for explanatory reach over adaptive fit. When a note derives a design pattern from constraints (finite context, no scoping mechanism, text-in/text-out), the derivation is explanatory — it says why the pattern works, which means it predicts where the pattern will fail (change the constraint, change the conclusion). When a note records "X works in practice," that's adaptive — useful but brittle to context change.
The computational-model area exemplifies reach. PL concepts (scoping, partial evaluation, scheduling) were developed for compilers, but they reach into KB design because they capture structure that isn't programming-specific — they describe what happens when bounded processors compose text under constraints. "LLM context is composed without scoping" doesn't just analogize to dynamic scoping — it identifies the same mechanism producing the same pathologies, and predicts the same remedies (lexically scoped sub-frames).
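The scoping mechanism above can be made concrete. A minimal Python sketch (an illustration added here, not part of the note's sources; the `env` dictionary and function names are hypothetical) contrasting the two resolution rules: dynamic scoping resolves a name in whatever environment is active at use time, the way an LLM reads whatever its shared context currently says, while a lexical closure captures its definition-time binding and is insulated from later ambient edits.

```python
# Dynamic scoping: names resolve against the ambient environment at
# call time -- analogous to an LLM reading the shared context window.
env = {"tone": "formal"}

def reply_dynamic():
    # Reads whatever binding is current when called.
    return f"tone={env['tone']}"

def make_reply_lexical(tone):
    # Lexical scoping: the closure captures the binding that existed
    # at definition time -- a scoped sub-frame.
    return lambda: f"tone={tone}"

reply_lexical = make_reply_lexical(env["tone"])

# An unrelated "upstream" edit to the shared environment.
env["tone"] = "sarcastic"

print(reply_dynamic())  # polluted by the ambient change
print(reply_lexical())  # unaffected: insulated by its own scope
```

The first call now reports `tone=sarcastic` and the second still reports `tone=formal` — the same pathology and the same remedy the note attributes to unscoped context composition versus lexically scoped sub-frames.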
The negative test
Deutsch's distinction provides a quality check orthogonal to the KB's type system. A well-formed note can pass every structural check (good title, description, links, area) while being merely adaptive — recording a pattern without explaining the mechanism. The test:
- Can you vary the explanation? If you changed one premise, could you predict what changes in the conclusion? If yes, the note captures causal structure. If no, it may be recording correlation.
- Does it reach? Would this insight apply in a domain you haven't considered? If yes, the mechanism is deeper than the specific case. If no, the note may be context-fitted.
- Can it be criticized? Is there a specific way the explanation could be wrong, not just incomplete? The "falsifier blocks" practice operationalizes this.
These map to the three depths in discovery: shared feature (adaptive), shared structure (partially explanatory), generative model (fully explanatory with reach).
The programming fast-pass as a reach bet
The design methodology gives programming patterns a "fast pass" — adopting them without complete first-principles derivation. Deutsch's framework explains why this bet is reasonable: programming patterns have explanatory reach — they capture structure that isn't programming-specific. The bet is that the reach is real, not just surface analogy. Convergent evolution (Thalo independently building a compiler for knowledge management) is evidence that it is.
Open Questions
- Where in the KB are notes that are well-formed but merely adaptive? Those are candidates for deepening.
- The discovery note's hierarchy (feature -> structure -> generative model) parallels Deutsch's hierarchy (adaptive -> partially explanatory -> fully explanatory). Are these the same axis?
- Should "has explanatory reach" become a trait or quality signal, or is it better as an informal check during writing?
Relevant Notes:
- design methodology — borrow widely, filter by first principles — grounds: first-principles filtering IS selecting for explanatory reach; this note explains why that filter works
- discovery is seeing the particular as an instance of the general — parallels: the generative model depth maps to explanatory knowledge with reach
- mechanistic constraints make Popperian KB recommendations actionable — extends: Deutsch and Popper are allied — explanatory knowledge is the kind criticism can test; falsifier blocks operationalize one of the three tests
- computational-model — exemplifies: PL concepts reaching into KB design is explanatory reach in action
- information value is observer-relative because extraction requires computation — complements: reach means the explanation makes structure accessible to observers in multiple contexts, not just the original one
- a good agentic KB maximizes contextual competence — extends: places reach as the quality criterion within a full theory connecting learning operations to knowledge properties
Distilled into:
- review-explanatory-reach — the three-part negative test (vary / reach / criticize)
Topics: