Computational model

Type: kb/types/index.md · Status: current

What kind of "programs" LLM instructions are, and what programming-language concepts — scoping, homoiconicity, partial evaluation, typing — illuminate their behavior. Where learning-theory covers how systems learn and tags covers how knowledge bases are operated, this index covers the computational properties of the medium itself and the scheduling architecture that follows from context scarcity.

Foundations

Scheduling & Orchestration

Design space & decomposition

Scheduler implementation

Session history & handoff

Tool loop & hidden scheduling

Observability & error masking

Instruction properties

Error correction & reliability

These notes are dual-tagged with llm-interpretation-errors, which supplies the broader error-theory context; they appear here because their claims concern the scheduling architecture.

Tensions

  • The homoiconic medium enables extensibility (ad hoc prompts, unified calling conventions) but requires explicit scoping disciplines (lexical frames, tier separation) precisely because there are no structural boundaries. The constraining gradient from instructions to scripts is one response — codifying imposes the structure the medium lacks.
Related indexes

  • llm-interpretation-errors — error correction theory, oracle hardening, and reliability dimensions; explains why the scheduling architecture works
  • learning-theory — how systems learn through constraining, codification, and distillation; the computational model explains what kind of programs these mechanisms operate on
  • tags — practical architecture that applies these computational properties; frontloading and indirection cost are PL concepts applied to KB instructions

Agent Notes

  • 2026-03-10: The Scheduling & Orchestration cluster plus the Multi-Agent Aggregation note form the core of a paper outline presenting the scheduling model for an academic audience. The error-correction conjecture is now captured as scheduler-llm-separation-exploits-an-error-correction-asymmetry. The framework spectrum (Section 5) is not yet a standalone KB note.

Other tagged notes

  • "Agent" is a useful technical convention, not a definition - A lightweight technical convention — an agent is a tool loop (prompt, capability surface, stop condition) — sidestepping the definitional debate in favor of a unit that organizes code
  • Access burden and transformation burden are independent query dimensions - Queries have two independent difficulty axes — finding inputs (access) and producing the answer (transformation) — conflating them misroutes symbolic transformations through semantic processing
  • Agent memory is a crosscutting concern, not a separable niche - Memory decomposes into storage (solved), retrieval/activation (context engineering), and learning (learning theory) — treating it as a standalone category hides that the hard problems are at the intersections
  • Always-loaded context mechanisms in agent harnesses - Survey of always-loaded context mechanisms across agent harnesses — system prompt files, capability descriptions, memory, and configuration injection — cataloguing what each carries, how write policies differ, and where the gaps are
  • Any symbolic program with bounded calls is a select/call program - Any program whose symbolic execution between bounded LLM calls can be reified as explicit state can be mechanically converted into the select/call loop with the same call sequence
  • Context engineering - Definition — context engineering is the discipline of designing systems around bounded-context constraints; its operational core is routing, loading, scoping, maintenance, and observability for each bounded call
  • LLM debugging starts with retry-versus-rewrite triage - The two-phenomena model makes the first LLM debugging question diagnostic — is the failure a bad execution of a good interpretation (retry) or a consistent choice of a bad interpretation (rewrite the spec)? — because the fixes differ and do not substitute
  • LLM↔code boundaries are natural checkpoints - At each LLM↔code transition both semantic underspecification and execution indeterminism collapse simultaneously, making these boundaries natural places to anchor debugging, testing, and refactoring
  • Pointer design tradeoffs in progressive disclosure - Design tradeoffs for progressive disclosure pointers — context-specificity vs precomputation cost vs reliability; fixed pointers (descriptions, abstracts) trade specificity for reliability and cheap reads, query-time pointers (re-rankers) trade cost for specificity, crafted pointers (link phrases) achieve highest density but depend on authoring discipline
  • Progressive constraining commits only after patterns stabilize - Constraining via LLM code generation freezes a single projection of the spec in one shot, but progressive constraining observes behavior across many runs and commits only the interpretations that consistently emerge
  • Topology, isolation, and verification form a causal chain for reliable agent scaling - Decomposition, scoping, and verification may form a strict dependency chain (topology → isolation → verification) rather than independent design choices — tests the simpler account that decomposition alone implies the other two
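The "agent is a tool loop" convention above can be sketched concretely. This is a minimal illustration, not any harness's real API: the `fake_llm` stub, the `read_file` tool, and the step budget are all assumptions standing in for a real model client and capability surface.

```python
def fake_llm(prompt, history):
    # Stand-in for a real model call: requests one tool, then finishes
    # once a tool result is present in the history.
    if any(kind == "tool_result" for kind, _ in history):
        return {"type": "final", "text": "done"}
    return {"type": "tool_call", "name": "read_file", "args": {"path": "notes.md"}}

TOOLS = {"read_file": lambda path: f"<contents of {path}>"}  # capability surface

def run_agent(prompt, llm=fake_llm, max_steps=10):
    """An agent as the triple (prompt, capability surface, stop condition)."""
    history = []
    for _ in range(max_steps):              # stop condition: step budget
        action = llm(prompt, history)
        if action["type"] == "final":       # stop condition: model signals completion
            return action["text"]
        result = TOOLS[action["name"]](**action["args"])
        history.append(("tool_result", result))
    raise RuntimeError("step budget exhausted")

print(run_agent("summarize notes.md"))  # → done
```

The point of the convention is visible in the signature: everything that varies between "agents" lives in the prompt, the tool table, and the stop conditions, so the loop itself is a reusable unit of code.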
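The select/call reduction above can be sketched as a driver loop. The state shape (`pending`/`results`) and the stubbed `llm()` are illustrative assumptions; the invariant being demonstrated is that the reified program issues the same call sequence as the original.

```python
def llm(prompt):
    # Stand-in for the single bounded LLM call site.
    return f"answer({prompt})"

def select(state):
    """Pure symbolic step: inspect explicit state, emit the next call or halt."""
    if state["pending"]:
        return ("call", state["pending"][0])
    return ("halt", None)

def run(state):
    trace = []
    while True:
        op, prompt = select(state)              # symbolic selection
        if op == "halt":
            return state["results"], trace
        trace.append(prompt)                    # same call sequence as the original program
        state["results"].append(llm(prompt))    # the one bounded call
        state["pending"] = state["pending"][1:]

results, trace = run({"pending": ["summarize A", "compare A,B"], "results": []})
print(trace)  # → ['summarize A', 'compare A,B']
```

All symbolic work happens inside `select`, which only reads and writes explicit state, so any program meeting the note's condition can be rewritten into this shape mechanically.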
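The retry-versus-rewrite triage above lends itself to a sampling sketch: run the same spec several times and examine the spread of outputs. The sample size, the consensus threshold, and the stubbed model are assumptions, not calibrated values.

```python
from collections import Counter

def triage(run_once, spec, expected, n=7, consensus=0.8):
    """Classify a failure as 'ok', 'rewrite' (stable bad interpretation),
    or 'retry' (scattered executions)."""
    outputs = [run_once(spec) for _ in range(n)]
    top, count = Counter(outputs).most_common(1)[0]
    if top == expected:
        return "ok"
    if count / n >= consensus:
        return "rewrite"   # consistent choice of a bad interpretation: fix the spec
    return "retry"         # noisy executions of (possibly) a good interpretation

# A model stuck on one wrong reading of the spec calls for a rewrite:
print(triage(lambda s: "B", "ambiguous spec", expected="A"))  # → rewrite
```

The two outcomes route to different fixes, which is the note's point: retrying a stable misinterpretation wastes calls, and rewriting a spec the model merely executed noisily destroys a working interpretation.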