Tool loop

Type: index · Status: current · Tags: computational-model, context-engineering, tool-loop

Many LLM applications share a common operational core: construct a task frame, give the model tools, and loop until it stops.

state = initial_task_frame()                  # system prompt, task, initial context

while not done(state):                        # stop when the model signals completion
    turn = llm_call(state, tools=tools)
    if turn.type == "tool_request":
        result = execute_tool(turn.request)
        state = absorb(state, turn.request, result)   # fold the result back into context
    else:
        state = absorb(state, turn.output)

Frameworks own this loop because the mechanics are repetitive protocol work — parsing tool requests, dispatching to handlers, serializing results, feeding them back, handling streaming and retries. Abstracting that away is good engineering, just as abstracting HTTP parsing is.
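The protocol work in question can be sketched concretely. The following is a minimal, illustrative version of the dispatch step, assuming a handler registry keyed by tool name; the tool names and result shapes are hypothetical, and real frameworks layer streaming, retries, and richer error envelopes on top.

```python
import json

# Hypothetical handler registry: the dispatch table a framework maintains
# so the loop body itself stays generic.
HANDLERS = {
    "read_file": lambda args: {"content": f"<contents of {args['path']}>"},
    "add": lambda args: {"sum": args["a"] + args["b"]},
}

def execute_tool(request):
    """Parse a tool request, dispatch to its handler, serialize the result."""
    handler = HANDLERS.get(request["name"])
    if handler is None:
        return json.dumps({"error": f"unknown tool: {request['name']}"})
    try:
        result = handler(request["arguments"])
        return json.dumps({"result": result})
    except Exception as exc:
        # Serialize failures back to the model instead of crashing the loop.
        return json.dumps({"error": str(exc)})
```

For example, `execute_tool({"name": "add", "arguments": {"a": 2, "b": 3}})` yields a JSON string the framework can feed straight back into the next model call.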

Many useful interventions can stay hidden inside this loop without changing its structure: logging, approvals, budget checks, checkpoints, deterministic transforms on tool results. A stateful singleton runtime behind the tool boundary can go further, holding recursion state and branch records. The power recovered this way is genuine, but the question is not whether the loop can absorb bookkeeping. It is who gets to decide what the next step can do.
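Such interventions can be sketched as a wrapper around the tool boundary. This is an assumed shape, not any particular framework's API: logging and a budget check are layered onto `execute_tool`, and the loop above never changes.

```python
def with_interventions(execute_tool, budget, log):
    """Wrap a tool executor with logging and a call-budget check.

    Illustrative only: real runtimes would add approvals, checkpoints,
    and deterministic transforms at the same seam.
    """
    calls = {"count": 0}

    def wrapped(request):
        if calls["count"] >= budget:
            # Refuse further calls; the loop sees an ordinary tool error.
            return {"error": "tool budget exhausted"}
        calls["count"] += 1
        log.append(("tool_call", request["name"]))
        result = execute_tool(request)
        log.append(("tool_result", request["name"]))
        return result

    return wrapped
```

The loop keeps calling what it thinks is `execute_tool`; the interventions live entirely behind that boundary.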

Forcing cases

Three cases where a single framework-owned loop becomes insufficient:

Resolution

The first and third cases call for sub-agents — fresh tool loops with their own prompt, capability surface, and stop condition. The second calls for something more: symbolic composition of agents — code-controlled iteration, filtering, and aggregation over multiple agent invocations. Sub-agents are the atomic unit; symbolic orchestration is what the application does with them. "Exposing the loop" means the framework supports both: spawning child loops and composing them in application code.
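A minimal sketch of both halves, under stated assumptions: `run_agent` is a fresh tool loop with its own prompt, tool surface, and stop condition, and `review_files` is symbolic composition written in ordinary application code. `llm_call` and the turn shapes are stand-ins, not a real model API.

```python
def run_agent(prompt, tools, llm_call, max_turns=10):
    """A sub-agent: a fresh tool loop with its own prompt and stop condition."""
    state = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):            # this loop's own turn budget
        turn = llm_call(state, tools)
        if turn["type"] == "tool_request":
            result = tools[turn["name"]](turn["arguments"])
            state.append({"role": "tool", "content": result})
        else:
            return turn["output"]         # the sub-agent decided it is done
    return None                           # budget exhausted without an answer

def review_files(paths, tools, llm_call):
    """Symbolic composition: code, not the model, controls iteration,
    filtering, and aggregation over child loops."""
    findings = [run_agent(f"Review {p}", tools, llm_call) for p in paths]
    return [f for f in findings if f is not None]
```

The point of the sketch is the division of labor: each `run_agent` call is an atomic unit with its own capability surface, while the list comprehension and filter are deterministic orchestration the application owns.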

Downstream consequences

