First-principles reasoning selects for explanatory reach over adaptive fit

Type: note · Status: seedling

David Deutsch distinguishes two kinds of knowledge that mainstream usage conflates:

Adaptive information — structures that help a system cope with the world. A genome encodes successful adaptations. A neural network's weights encode useful patterns. An animal's instincts encode strategies that work. These are useful, but they don't explain why they work, can't be deliberately varied, and don't transfer beyond the conditions that produced them (a network's training distribution, a genome's ancestral environment).

Explanatory knowledge — says why the world works a certain way, can be deliberately varied and criticized, and supports transfer to new contexts because it captures deeper structure rather than successful habit. A gene "knows" how to build an eye but contains no theory of optics. Newton's optics is explanatory — it reaches contexts no eye ever encountered.

The distinguishing property is reach: explanatory knowledge applies beyond its original context because the explanation captures structure that isn't context-dependent.

Why this matters for the KB

The KB's first-principles methodology is, in Deutsch's terms, a filter that selects for explanatory reach over adaptive fit. When a note derives a design pattern from constraints (finite context, no scoping mechanism, text-in/text-out), the derivation is explanatory — it says why the pattern works, which means it predicts where the pattern will fail (change the constraint, change the conclusion). When a note records "X works in practice," that's adaptive — useful but brittle to context change.

The computational-model area exemplifies reach. PL concepts (scoping, partial evaluation, scheduling) were developed for compilers, but they reach into KB design because they capture structure that isn't programming-specific: they describe what happens when bounded processors compose text under constraints. "LLM context is composed without scoping" doesn't just analogize to dynamic scoping; it identifies the same mechanism producing the same pathologies, and predicts the same remedies (lexically scoped sub-frames).
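The shared mechanism can be made concrete. A minimal sketch (illustrative only, not from the note; the names `dyn_lookup`, `render`, and the `tone` binding are hypothetical): under dynamic scoping a name resolves against whatever the caller most recently bound, much as an LLM resolves an instruction against whatever happens to sit earlier in the context window. A lexical closure fixes the binding at definition time, which is the "sub-frame" remedy.

```python
# Dynamic scoping: a global stack of bindings that callers push onto.
env = []  # list of (name, value) pairs, most recent last

def dyn_lookup(name):
    # Resolve against the most recent binding anywhere on the stack.
    for n, v in reversed(env):
        if n == name:
            return v
    raise NameError(name)

def render():
    # The callee has no binding of its own for "tone";
    # its behavior depends entirely on ambient context.
    return f"tone={dyn_lookup('tone')}"

def caller_a():
    env.append(("tone", "formal"))
    try:
        return render()   # "tone=formal"
    finally:
        env.pop()

def caller_b():
    env.append(("tone", "casual"))
    try:
        return render()   # "tone=casual" -- same callee, different result
    finally:
        env.pop()

# Lexical scoping: close over the binding at definition time,
# so behavior is fixed regardless of who calls it.
def make_render(tone):
    def render():
        return f"tone={tone}"
    return render

fixed = make_render("formal")  # "tone=formal" under any caller
```

The pathology the note predicts is the `render`/`caller_b` case: identical instructions produce different behavior depending on what the surrounding context happens to bind.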

The negative test

Deutsch's distinction provides a quality check orthogonal to the KB's type system. A well-formed note can pass every structural check (good title, description, links, area) while being merely adaptive — recording a pattern without explaining the mechanism. The test:

  1. Can you vary the explanation? If you changed one premise, could you predict what changes in the conclusion? If yes, the note captures causal structure. If no, it may be recording correlation.
  2. Does it reach? Would this insight apply in a domain you haven't considered? If yes, the mechanism is deeper than the specific case. If no, the note may be context-fitted.
  3. Can it be criticized? Is there a specific way the explanation could be wrong, not just incomplete? The "falsifier blocks" practice operationalizes this.

These map to the three depths in discovery: shared feature (adaptive), shared structure (partially explanatory), generative model (fully explanatory with reach).

The programming fast-pass as a reach bet

The design methodology gives programming patterns a "fast pass" — adopting them without complete first-principles derivation. Deutsch's framework explains why this bet is reasonable: programming patterns have explanatory reach — they capture structure that isn't programming-specific. The bet is that the reach is real, not just surface analogy. Convergent evolution (Thalo independently building a compiler for knowledge management) is evidence that it is.

Open questions

  • Where in the KB are notes that are well-formed but merely adaptive? Those are candidates for deepening.
  • The discovery note's hierarchy (feature -> structure -> generative model) parallels Deutsch's hierarchy (adaptive -> partially explanatory -> fully explanatory). Are these the same axis?
  • Should "has explanatory reach" become a trait or quality signal, or is it better as an informal check during writing?

Relevant Notes:

Distilled into:

Topics: