Computational model
Type: index · Status: current
Programming language concepts applied to LLM instructions and agent architectures. Where learning-theory covers how systems learn and improve, and kb-design covers how knowledge bases are built and operated, this area addresses the computational properties of the medium itself — what kind of "programs" LLM instructions are, and which PL concepts illuminate their behavior.
Foundations
- agentic-systems-interpret-underspecified-instructions — the core framing: underspecified semantics and execution indeterminism as the two properties that distinguish LLM instructions from traditional programs; also foundational to learning-theory
- context-efficiency-is-the-central-design-concern-in-agent-systems — the foundational argument for why context is the scarce resource; context cost has two dimensions (volume and complexity); connects all the PL-inspired mechanisms to this dual pressure
- llm-context-is-a-homoiconic-medium — instructions and data share the same representation (natural language tokens), enabling extensibility but removing structural guardrails; precedents in Lisp, Emacs, Smalltalk
- llm-context-is-composed-without-scoping — context is flat concatenation with no scoping, producing dynamic scoping's pathologies; sub-agents are the one mechanism for isolation, using lexically scoped frames
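The scoping contrast in the last two notes can be sketched in a few lines. This is a toy model, assuming a context is just an ordered list of text blocks; all names are illustrative, not from any real agent framework.

```python
# Toy model: a context is an ordered list of text blocks.

def flat_context(*blocks: str) -> str:
    """Flat concatenation: every block is 'in scope' for everything
    that follows it -- the behaviour of dynamic scoping."""
    return "\n\n".join(blocks)

def subagent_frame(task: str, *selected_blocks: str) -> str:
    """The one isolation mechanism: a sub-agent call whose context
    contains only what the caller explicitly passes -- a lexically
    scoped frame rather than the whole ambient context."""
    return flat_context(*selected_blocks, task)

# In flat composition, the style guide leaks into the bug-fix task:
main = flat_context(
    "STYLE GUIDE: answer in formal French.",
    "TASK: fix the failing unit test.",
)

# The sub-agent sees only what was passed explicitly:
isolated = subagent_frame("TASK: fix the failing unit test.")

assert "STYLE GUIDE" in main
assert "STYLE GUIDE" not in isolated
```

The leak in `main` is the pathology the composed-without-scoping note names; `subagent_frame` is the lexical-frame response.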
Scheduling & Orchestration
- symbolic-scheduling-over-bounded-llm-calls-is-the-right-model-for-agent-orchestration — the clean model: an unbounded symbolic scheduler manages exact state and issues bounded LLM calls for semantic judgment
- decomposition-rules-for-bounded-context-scheduling — preliminary practical rules for scheduling bounded LLM calls: separate selection from joint reasoning, choose representations not subsets, save reusable intermediates in scheduler state
- llm-mediated-schedulers-are-a-degraded-variant-of-the-clean-model — when the scheduler lives in an LLM conversation it degrades; three recovery strategies restore the clean separation to increasing degrees
- the-frontloading-loop-is-an-iterative-optimisation-over-bounded-context — extending frontloading from a single step to an iterative loop reveals a sequential optimisation problem over a fixed-capacity sub-agent window
- solve-low-degree-of-freedom-subproblems-first-to-avoid-blocking-better-designs — sequencing heuristic: commit least-flexible decisions first so high-flexibility choices cannot block scarce valid placements
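The clean model in the first bullet can be sketched as ordinary code. This is a minimal illustration, assuming the scheduler is a plain loop holding exact state while each LLM call is bounded and stateless; `llm_judge` is a deterministic stub standing in for a real model call.

```python
# A stand-in for a bounded LLM call: fixed-size input, one semantic
# judgment out. Stubbed deterministically so the sketch is runnable.
def llm_judge(question: str, doc: str) -> str:
    return "relevant" if "scheduler" in doc else "irrelevant"

def schedule(documents: list[str]) -> dict:
    # Exact state lives in the symbolic scheduler, not in any LLM
    # conversation: counters, queues, accumulated intermediates.
    state = {"seen": 0, "kept": []}
    for doc in documents:
        state["seen"] += 1
        # Each call is bounded: one document plus a fixed question,
        # never the whole history.
        verdict = llm_judge("Is this about schedulers?", doc)
        if verdict == "relevant":
            # Reusable intermediates are saved in scheduler state
            # rather than re-derived inside later LLM calls.
            state["kept"].append(doc)
    return state

result = schedule(["notes on the scheduler loop", "recipe for bread"])
assert result == {"seen": 2, "kept": ["notes on the scheduler loop"]}
```

The division of labour is the point: the loop, the counters, and the kept list are exact and unbounded; only the semantic judgment crosses into the model, and only with a bounded payload.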
Instruction Properties
- writing-styles-are-strategies-for-managing-underspecification — the five empirically observed context-file writing styles correspond to different strategies for narrowing the agent's interpretation space
- programming-practices-apply-to-prompting — typing, testing, and version control transfer to prompting, but under modified cost models
- unified-calling-conventions-enable-bidirectional-refactoring — calling conventions that let components move between neural and symbolic implementations
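The last bullet's idea of bidirectional refactoring can be shown with a single shared signature. A hedged sketch, assuming both implementations accept and return plain text so callers never know which side of the neural/symbolic boundary they are calling; every name here is illustrative.

```python
# Two implementations of one component behind the same convention
# (str -> str), so it can move between symbolic and neural forms
# without touching any caller.

def classify_symbolic(text: str) -> str:
    """Symbolic implementation: a deterministic rule."""
    return "question" if text.rstrip().endswith("?") else "statement"

def classify_neural(text: str) -> str:
    """Neural implementation: in a real system this would delegate
    to a bounded LLM call. Stubbed here to keep the sketch
    self-contained and runnable."""
    return classify_symbolic(text)  # placeholder for the model call

# Callers depend only on the convention, so either can be swapped in:
classify = classify_symbolic
assert classify("Is the context flat?") == "question"
classify = classify_neural
assert classify("The context is flat.") == "statement"
```

Because the convention is identical in both directions, a component that stabilises can be crystallised into the symbolic form, and a symbolic component that proves too rigid can be softened back, with no caller changes.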
Related notes in other areas
- frontloading-spares-execution-context (kb-design) — partial evaluation applied to LLM instructions; the mechanism behind indirection elimination and build-time generation
- indirection-is-costly-in-llm-instructions (kb-design) — the cost model for indirection differs fundamentally between code and LLM instructions
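The partial-evaluation framing in the first of these notes reduces to a small mechanical step. A sketch under stated assumptions: instruction templates carry named references that can all be resolved at build time, so the runtime context holds concrete text and the agent never spends context chasing the indirection. The template syntax and names are illustrative.

```python
# Build-time bindings that would otherwise be looked up at runtime.
GLOSSARY = {"KB": "knowledge base"}

def frontload(template: str, bindings: dict) -> str:
    """Partial evaluation of an instruction: inline every {name}
    reference now, at build time, so the runtime instruction is
    fully concrete."""
    return template.format(**bindings)

runtime_instruction = frontload(
    "Update the {KB} index after each new note.", GLOSSARY)

assert runtime_instruction == (
    "Update the knowledge base index after each new note.")
```

The cost asymmetry from the second note is why this pays: in code an unresolved reference is nearly free, while in an LLM instruction each unresolved reference is a runtime lookup charged against execution context.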
Tensions
- The homoiconic medium enables extensibility (ad hoc prompts, unified calling conventions) but requires explicit scoping disciplines (lexical frames, tier separation) precisely because there are no structural boundaries. The stabilisation gradient from instructions to scripts is one response — crystallising imposes the structure the medium lacks.
Related Areas
- learning-theory — how systems learn through stabilisation, crystallisation, distillation; the computational model explains what kind of programs these mechanisms operate on
- kb-design — practical architecture that applies these computational properties; frontloading and indirection cost are PL concepts applied to KB instructions