Computational model
Type: index · Status: current
Programming language concepts applied to LLM instructions and agent architectures. Where learning-theory covers how systems learn and improve, and kb-design covers how knowledge bases are built and operated, this area covers the computational properties of the medium itself — what kind of "programs" LLM instructions are, and what PL concepts illuminate their behavior.
Foundations
- agentic-systems-interpret-underspecified-instructions — the core framing: underspecified semantics and execution indeterminism as the two properties that distinguish LLM instructions from traditional programs; also foundational to learning-theory
- context-efficiency-is-the-central-design-concern-in-agent-systems — the foundational argument for why context is the scarce resource; context cost has two dimensions (volume and complexity); connects all the PL-inspired mechanisms to this dual pressure
- bounded-context-orchestration-model — formalises agent orchestration as a symbolic scheduler driving bounded LLM calls through a select/call/absorb loop; the computational model that follows from context scarcity
- llm-context-is-a-homoiconic-medium — instructions and data share the same representation (natural language tokens), enabling extensibility but removing structural guardrails; precedents in Lisp, Emacs, Smalltalk
- llm-context-is-composed-without-scoping — context is flat concatenation with no scoping, producing dynamic scoping's pathologies; sub-agents are the one mechanism for isolation, using lexically scoped frames
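The select/call/absorb loop named above can be sketched in a few lines. This is a minimal illustration of the bounded-context orchestration model, not a real API; every name (`orchestrate`, `llm_call`, `select`, `absorb`, `done`) is hypothetical.

```python
def orchestrate(task, state, llm_call, select, absorb, done):
    """Symbolic scheduler driving bounded LLM calls.

    task     -- the overall goal (opaque to the scheduler's control flow)
    state    -- symbolic scheduler state, persisted across calls
    llm_call -- one bounded LLM invocation: prompt -> output
    select   -- chooses what enters the next bounded context
    absorb   -- folds the call's output back into scheduler state
    done     -- termination predicate over scheduler state
    """
    while not done(state):
        prompt = select(task, state)   # select: choose a representation of the task
        output = llm_call(prompt)      # call: bounded context, no hidden carry-over
        state = absorb(state, output)  # absorb: keep reusable intermediates in state
    return state
```

The point of the sketch is the division of labour: control flow, termination, and state live on the symbolic side; only the bounded `llm_call` is neural.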
Scheduling & Orchestration
- decomposition-rules-for-bounded-context-scheduling — preliminary practical rules for scheduling bounded LLM calls: separate selection from joint reasoning, choose representations not subsets, save reusable intermediates in scheduler state
- llm-mediated-schedulers-are-a-degraded-variant-of-the-clean-model — when the scheduler lives in an LLM conversation it degrades; three recovery strategies restore the clean separation to increasing degrees
- rlm-achieves-the-clean-scheduler-model-but-opts-out-of-accumulation — RLM instantiates the symbolic-scheduler model by having the LLM write the scheduler as code; achieves clean separation but discards the scheduler after each run
- solve-low-degree-of-freedom-subproblems-first-to-avoid-blocking-better-designs — sequencing heuristic: commit least-flexible decisions first so high-flexibility choices cannot block scarce valid placements
- conversation-vs-prompt-refinement-in-agent-to-agent-coordination — tradeoff analysis of conversation, prompt refinement, and context cloning for sub-agent coordination; each shifts costs differently between caller and callee depending on architecture
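Two of the decomposition rules, separating selection from joint reasoning and saving reusable intermediates in scheduler state, can be shown in a toy sketch. All function names here are illustrative assumptions, not part of any note's actual interface.

```python
def schedule(items, select_call, reason_call, state):
    """Two-phase decomposition over bounded LLM calls (toy sketch)."""
    # Rule: selection is its own bounded call, not folded into joint reasoning.
    if "selected" not in state:
        state["selected"] = select_call(items)   # narrow, cheap context
    # Rule: the selection result is a reusable intermediate; later reasoning
    # calls read it from symbolic state instead of re-deriving it in-context.
    return reason_call(state["selected"]), state
```

A second invocation with the same `state` skips the selection call entirely, which is the point: the intermediate lives in the scheduler, not in any LLM context.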
Instruction Properties
- writing-styles-are-strategies-for-managing-underspecification — the five empirically observed context-file writing styles correspond to different strategies for narrowing the agent's interpretation space
- programming-practices-apply-to-prompting — typing, testing, and version control transfer to prompting with modified cost models
- unified-calling-conventions-enable-bidirectional-refactoring — calling conventions that let components move between neural and symbolic implementations
Related notes in other areas
- frontloading-spares-execution-context (kb-design) — partial evaluation applied to LLM instructions; the mechanism behind indirection elimination and build-time generation
- indirection-is-costly-in-llm-instructions (kb-design) — the cost model for indirection differs fundamentally between code and LLM instructions
Error Correction & Reliability
These notes are dual-tagged with llm-interpretation-errors, which provides the broader error-theory context; they appear here because their claims concern the scheduling architecture.
- scheduler-llm-separation-exploits-an-error-correction-asymmetry — conjectures that the scheduling model works because symbolic operations are error-correctable through redundancy while LLM bookkeeping compounds errors silently
- specification-level-separation-recovers-scoping-before-it-recovers-error-correction — identifies an intermediate regime where OpenProse-like DSLs recover frame isolation without yet gaining hard-oracle bookkeeping
- synthesis-is-not-error-correction (llm-interpretation-errors) — merging agent outputs propagates errors; voting discards minorities and corrects them; the aggregation operation must match the decomposition structure
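The synthesis-vs-voting claim has a small mechanical core worth seeing directly: majority voting over independent outputs discards minority errors, while naive synthesis carries every output, errors included, into the merged result. A toy illustration under those assumptions (the function names are invented):

```python
from collections import Counter

def vote(outputs):
    """Majority vote: minority (erroneous) answers are discarded."""
    return Counter(outputs).most_common(1)[0][0]

def synthesise(outputs):
    """Naive merge: every distinct output survives aggregation."""
    return sorted(set(outputs))

answers = ["42", "42", "41"]        # one agent made an error
corrected = vote(answers)           # "42": the error is outvoted
merged = synthesise(answers)        # ["41", "42"]: the error propagates
```

Voting only corrects when the decomposition makes outputs comparable answers to the same question; that is the sense in which the aggregation operation must match the decomposition structure.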
Tensions
- The homoiconic medium enables extensibility (ad hoc prompts, unified calling conventions) but requires explicit scoping disciplines (lexical frames, tier separation) precisely because there are no structural boundaries. The constraining gradient from instructions to scripts is one response — codifying imposes the structure the medium lacks.
Related Tags
- llm-interpretation-errors — error correction theory, oracle hardening, and reliability dimensions; explains why the scheduling architecture works
- learning-theory — how systems learn through constraining, codification, distillation; the computational model explains what kind of programs these mechanisms operate on
- kb-design — practical architecture that applies these computational properties; frontloading and indirection cost are PL concepts applied to KB instructions
Agent Notes
- 2026-03-10: the Scheduling & Orchestration cluster plus the Multi-Agent Aggregation note form the core of a paper outline presenting the scheduling model for an academic audience. The error-correction conjecture is now captured as scheduler-llm-separation-exploits-an-error-correction-asymmetry. The framework spectrum (Section 5) is not yet a standalone KB note.
All notes
- Agentic systems interpret underspecified instructions — LLM-based systems have two distinct properties — semantic underspecification of natural language specs (the deeper difference from traditional programming) and execution indeterminism (present in all practical systems) — the spec-to-program projection model captures the first, which indeterminism tends to obscure
- Bounded-context orchestration model — Formalises agent orchestration as a symbolic scheduler driving bounded LLM calls through a select/call/absorb loop — analyses what makes selection hard and why the model supports local comparative results even when global optimisation is intractable
- Context efficiency is the central design concern in agent systems — Context — not compute, memory, or storage — is the scarce resource in agent systems; context cost has two dimensions (volume and complexity) that require different architectural responses, making context efficiency the central design concern analogous to algorithmic complexity in traditional systems
- Context engineering — Definition — context engineering is the architecture and machinery for getting the right knowledge into a bounded context at the right time — routing, loading, scoping, and maintenance; distillation is its main operation but not the only one
- Conversation vs prompt refinement in agent-to-agent coordination — Analyses the tradeoff between conversational Q&A, prompt refinement, and context forking for sub-agent coordination — each shifts costs differently between caller and callee, and the right choice depends on architecture and how much intermediate work the sub-agent has done
- Decomposition rules for bounded-context scheduling — Practical rules for symbolic scheduling over bounded LLM calls — separate selection from joint reasoning, choose representations not just subsets, save reusable intermediates in scheduler state
- LLM context is a homoiconic medium — LLM context windows are homoiconic — instructions and data share the same representation (natural language tokens), so there is no structural boundary between program and content, producing both the extensibility benefits and the scoping hazards of Lisp, Emacs, and Smalltalk
- LLM context is composed without scoping — LLM context is flat concatenation — no scoping, everything global, producing dynamic scoping's pathologies (spooky action at a distance, name collision, inability to reason locally) but without even a stack; sub-agents are the one mechanism that provides isolation through lexically scoped frames
- LLM-mediated schedulers are a degraded variant of the clean model — When the agent scheduler lives inside an LLM conversation it becomes bounded and degrades; three recovery strategies — compaction, externalisation, factoring into code — restore the clean separation to increasing degrees
- Programming practices apply to prompting — Programming practices — typing, testing, progressive compilation, version control — apply to LLM prompting and knowledge systems, with semantic underspecification and execution indeterminism making some practices harder in distinct ways
- RLM achieves the clean scheduler model but opts out of accumulation — RLM achieves the clean symbolic-scheduler model by having the LLM write the scheduler as code, but its ephemeral design opts out of deploy-time learning — the tension between architectural elegance and accumulation is a genuine open question
- Scheduler-LLM separation exploits an error-correction asymmetry — Bookkeeping and semantic operations have different error profiles across all three phenomena (underspecification, indeterminism, bias) — symbolic substrates eliminate all three for bookkeeping; mixing forces bookkeeping onto the expensive semantic-correction substrate
- Specification-level separation recovers scoping before it recovers error correction — OpenProse-like DSLs expose control flow and discretion boundaries while leaving scheduling and validation on the LLM substrate, creating an intermediate regime between flat prompting and symbolic scheduling
- Solve low-degree-of-freedom subproblems first to avoid blocking better designs — Sequencing heuristic: commit the least-flexible decisions first so that high-flexibility choices cannot block scarce valid placements
- Unified calling conventions enable bidirectional refactoring between neural and symbolic — When agents and tools share a calling convention, components can move between neural and symbolic without changing call sites — llm-do demonstrates this with name-based dispatch over a hybrid VM
- Writing styles are strategies for managing underspecification — The five empirically observed context-file writing styles (descriptive, prescriptive, prohibitive, explanatory, conditional) are not stylistic variation — they correspond to different strategies for narrowing the interpretation space agents face, trading off constraint against generalisability