When the Net Reads Its Own Results — Adding an Agent Analyst to the QA Pipeline

In Part 1, we built a self-healing QA net: eight command tokens fan out through an executor, results collect in a place, and a recycle loop resets everything for the next run. The architecture diagram had two dimmed elements — an agent transition and a report place, marked “future.” That future arrived in the same conversation. Here is Part 2.

What happens when a Petri net does not just produce data, but also understands it? That is the difference between a command transition and an agent transition. A command runs a shell script. An agent reads tokens, reasons about them, and writes structured conclusions back into the net. The QA pipeline was collecting raw results — exit codes, stdout dumps, timing data. But nobody was reading them. The agent changes that.

[Diagram: The Complete QA Pipeline — Now with Agent Analysis. p-cmd-queue (8 tokens) → t-execute-checks (CMD) → p-raw-results (8 results) and p-cmd-done, with t-recycle (PASS) returning done tokens to the queue. New agent phase: t-analyze-results (AGNT) consumes ALL tokens from p-raw-results, reads p-spec (1 token, read-only), and auto-emits a structured JSON report to p-qa-report. Legend: command/pass transitions, agent (LLM) transitions, places, recycle arcs, read-only presets.]

Compare this with the Part 1 diagram. The dimmed elements are now active: t-analyze-results is an agent transition (green AGNT badge), and p-qa-report receives a structured JSON report. The agent reads all 8 result tokens from p-raw-results and the QA spec from p-spec, then writes its analysis to p-qa-report. Two iterations. One structured report.


How Agent Transitions Work

A command transition runs a fixed bash command. A pass transition copies data from one place to another. An agent transition is fundamentally different: it runs an LLM reasoning loop that reads bound tokens, calls tools, and decides what to create.

Here is the key difference in the inscription:

{
  "id": "t-analyze-results",
  "kind": "agent",
  "presets": {
    "results": {
      "placeId": "p-raw-results",
      "arcql": "FROM $",
      "take": "ALL",
      "consume": true
    },
    "spec": {
      "placeId": "p-spec",
      "arcql": "FROM $",
      "take": "FIRST",
      "consume": false
    }
  },
  "postsets": {
    "report": {
      "placeId": "p-qa-report"
    }
  },
  "action": {
    "type": "agent",
    "nl": "You are a QA analyst. Each result token has batchResults with results[].output.exitCode and stdout. A check PASSES only if exitCode==0. Produce a structured JSON report.",
    "modelId": "autonomous-agent"
  },
  "mode": "SINGLE"
}

Three things make this different from a command inscription:

  • kind: "agent" — routes execution to the master’s agent loop instead of the executor’s bash handler
  • action.nl — a natural language instruction that tells the agent what to analyze and how to structure the output
  • action.modelId — selects the model that runs the agent loop; that loop has tool access, allowing the agent to create tokens via the AgenticOS API

The agent does not just parrot the data. It reads nested JSON structures, extracts exit codes from batchResults[0].results[0].output.exitCode, applies PASS/FAIL logic, and produces a clean report. The NL instruction is the contract between the net designer and the LLM.
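The pass/fail rule the agent applies can be sketched in TypeScript. The field path batchResults[0].results[0].output.exitCode comes from the text above; the type and helper names are illustrative assumptions, not part of the AgenticOS API.

```typescript
// Sketch of the PASS/FAIL rule from the NL instruction: a check passes
// only when exitCode === 0. Type and helper names are assumptions.
interface ResultToken {
  batchResults: {
    results: { output: { exitCode: number; stdout: string } }[];
  }[];
}

function checkStatus(token: ResultToken): "PASS" | "FAIL" {
  const output = token.batchResults[0]?.results[0]?.output;
  return output !== undefined && output.exitCode === 0 ? "PASS" : "FAIL";
}
```

A malformed token (missing batchResults or an empty results array) falls through to FAIL rather than throwing, which matches the "passes only if" framing of the instruction.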


The Execution Flow

Agent transitions run on agentic-net-master, not on the executor. The master manages the full LLM loop: system prompt, tool calls, iteration tracking, and quality gates. Here is the sequence:

Agent Transition Execution Flow:

1. fireOnce: POST /api/transitions/t-analyze-results/fireOnce
2. Bind presets: ArcQL FROM $ on p-raw-results binds 8 result tokens plus 1 spec token.
3. Reserve: CAS lock on the result tokens (the spec has consume=false, so it is skipped).
4. Dispatch: AgentActionHandler runs on the master; the agent LLM loop (AgentSessionService) takes over. Iteration 1: THINK + QUERY_TOKENS — read all 8 results plus the spec, plan the analysis. Iteration 2: CREATE_TOKEN + DONE — write the structured QA report to p-qa-report.
5. Emit & consume: the report token is emitted to p-qa-report; the 8 result tokens are consumed from p-raw-results.
6. Done: p-qa-report holds 1 structured report; p-raw-results holds 0 (consumed).

The entire sequence — from fireOnce to report token — completed in two agent iterations. Iteration 1 read the bound tokens and planned the analysis. Iteration 2 created the report token and called DONE. No retries, no errors, no debugging required for the agent loop itself.
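The entry point of that sequence can be sketched as a small client call. Only the endpoint path (/api/transitions/<id>/fireOnce) appears in the flow above; the base URL, helper names, and error handling are illustrative assumptions.

```typescript
// Hypothetical client for the fireOnce step. The endpoint path is taken
// from the execution flow; everything else is an illustrative assumption.
function fireOnceUrl(base: string, transitionId: string): string {
  return `${base}/api/transitions/${encodeURIComponent(transitionId)}/fireOnce`;
}

async function fireOnce(base: string, transitionId: string): Promise<void> {
  const res = await fetch(fireOnceUrl(base, transitionId), { method: "POST" });
  if (!res.ok) throw new Error(`fireOnce failed: HTTP ${res.status}`);
}
```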

This is a striking contrast with Part 1, where the command execution path needed six bug fixes. Agent transitions benefit from running on the master, where the LLM loop, tool calling, and token management are already battle-tested through the GUI’s AI assistant.


The Report

Here is the actual QA report token that the agent produced — stored in p-qa-report, queryable through the tree API, and ready for downstream consumption:

QA Report Token — Produced by Agent Analysis

overallStatus: PASS · component: agentic-net-chat · generatedAt: 2026-02-20

| Check        | Status | Exit | Output Snippet                                       |
|--------------|--------|------|------------------------------------------------------|
| Build        | PASS   | 0    | tsup: ESM dist/bin/agenticos-chat.js 164.95 KB       |
| Typecheck    | PASS   | 0    | tsc --noEmit: no errors                              |
| Dependencies | PASS   | 0    | @anthropic-ai/sdk, grammy, @agenticos/cli all resolved |
| Bundle Size  | PASS   | 0    | 165K executable, shebang present                     |
| Shebang      | PASS   | 0    | #!/usr/bin/env node, executable flag set             |
| Structure    | PASS   | 0    | All required source modules present                  |
| Splitter     | PASS   | 0    | Splitter module exists in bundle                     |
| CLI Deps     | PASS   | 0    | @agenticos/cli cross-package dependency resolved     |

8/8 checks passed — all severity levels clear
Token stored at: /root/workspace/places/p-qa-report/qa-report-20260220T080000-a1b2c3d4

This is not a log file. It is a token — a first-class citizen of the Petri net, queryable through ArcQL, consumable by downstream transitions, and visible through any AgenticOS channel: the GUI, the CLI, or the Telegram bot.

Ask the agent “what was the last QA report?” and it can read this token directly from p-qa-report. Ask “did the build pass?” and it can extract the answer from the structured JSON. The net does not just produce reports — it produces knowledge that other parts of the system can use.
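Reading answers out of the report token amounts to plain JSON navigation. A minimal sketch, assuming a report shape that mirrors the table above; the type and helper names are illustrative, not part of any AgenticOS API.

```typescript
// Sketch: answering "did the build pass?" from a stored QA report token.
// Field names mirror the report shown above; types are assumptions.
interface QaReport {
  overallStatus: "PASS" | "FAIL";
  checks: { name: string; status: "PASS" | "FAIL" }[];
}

function didCheckPass(report: QaReport, name: string): boolean {
  return report.checks.some((c) => c.name === name && c.status === "PASS");
}
```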


The NL Instruction as a Contract

The first attempt at the NL instruction was generic:

"nl": "Analyze the QA check results. Determine PASS/FAIL.
       Produce a structured QA report."

The agent produced a report, but it was shallow. Everything was marked PASS because the agent did not know where to find exit codes in the nested JSON structure. The data was there — buried inside batchResults[0].results[0].output.exitCode — but the agent guessed at the structure instead of navigating it precisely.

The second attempt was specific:

"nl": "You are a QA analyst. Each result token has batchResults
       with results[].output.exitCode and results[].output.stdout.
       CRITICAL: A check PASSES only if exitCode==0 AND success==true.
       Extract the exitCode from batchResults[0].results[0].output.
       Produce a structured JSON report with id, name, status,
       exitCode, stdoutSnippet, and severity for each check."

The difference was immediate. The agent extracted the correct fields, applied the right logic, and produced a detailed report with stdout snippets for each check.

The NL instruction is a contract between the net designer and the LLM. Too vague, and the agent fills in gaps with assumptions. Too rigid, and you might as well write a MAP transition. The sweet spot tells the agent where the data is and what the output should look like, while leaving the reasoning to the model.
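One way to keep that contract honest is to validate the agent's output against the fields the instruction names (id, name, status, exitCode, stdoutSnippet, severity). A minimal sketch, assuming a downstream consumer wants a runtime type guard; only the field names come from the instruction, the guard itself is an illustrative addition.

```typescript
// Runtime guard for the report-entry shape named in the NL instruction.
// Field names come from the instruction; the guard is illustrative.
interface CheckEntry {
  id: string;
  name: string;
  status: "PASS" | "FAIL";
  exitCode: number;
  stdoutSnippet: string;
  severity: string;
}

function isCheckEntry(x: unknown): x is CheckEntry {
  if (typeof x !== "object" || x === null) return false;
  const e = x as Record<string, unknown>;
  return (
    typeof e.id === "string" &&
    typeof e.name === "string" &&
    (e.status === "PASS" || e.status === "FAIL") &&
    typeof e.exitCode === "number" &&
    typeof e.stdoutSnippet === "string" &&
    typeof e.severity === "string"
  );
}
```

A downstream transition can reject or re-fire the analysis when an entry fails the guard, turning a vague prompt-engineering problem into an ordinary validation step.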


Command vs. Agent: Two Execution Models in One Net

The QA validation net now uses both execution models in a single pipeline. This is the first time the two have been combined in production:

|             | t-execute-checks (Command)        | t-analyze-results (Agent)      |
|-------------|-----------------------------------|--------------------------------|
| Runs on     | agentic-net-executor (:8084)      | agentic-net-master (:8082)     |
| Mode        | FOREACH — one token at a time     | SINGLE — all tokens at once    |
| Input       | 1 command token                   | 8 result tokens + 1 spec       |
| Processing  | bash -c command, capture stdout   | LLM reasoning, tool calls      |
| Output      | Raw exit code + stdout            | Structured JSON report         |
| Iterations  | 1 (deterministic)                 | 2 (LLM decides when done)      |
| Bugs found  | 6 (Part 1)                        | 0 (worked first try)           |

The command transition required six bug fixes because it exercises the executor’s entire pipeline: token format unwrapping, emit rule resolution, catch-all semantics, error payload merging, process management, and exit code handling. The agent transition required zero fixes because it runs through the master’s LLM loop — the same code path that powers every AI assistant session in AgenticOS. That code has been debugged through hundreds of conversations.


The Complete Cycle

With all transitions working, the complete QA cycle is a three-phase pipeline:

[Diagram: Three-Phase QA Pipeline. Phase 1: Execute — t-execute-checks (CMD/FOREACH), 8 commands fire independently on the executor (:8084) → 8 results. Phase 2: Analyze — t-analyze-results (AGNT/SINGLE), the LLM reads all results and writes the report on the master (:8082) → 1 report. Phase 3: Recycle — t-recycle (PASS/FOREACH), done tokens move back to the queue on the master (:8082), ready for the next run. End state: p-cmd-queue: 8 tokens, p-qa-report: 1 report, p-spec: 1 (preserved).]

Phase 1 (Execute): The executor runs 8 bash commands, one per token. Results flow to p-raw-results. Original commands park in p-cmd-done. Phase 2 (Analyze): The agent reads all results, reasons about pass/fail logic, and writes a structured report to p-qa-report. Phase 3 (Recycle): Done tokens move back to the queue. The spec is never consumed. The net is ready for another run.

After all three phases, the system state is clean: 8 command tokens in the queue, 1 report in the report place, and the spec untouched. Fire Phase 1 again next week, and you get a fresh report without reconfiguring anything.
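The bookkeeping of that cycle can be checked with a toy simulation. The token counts (8 commands, 1 spec, 1 report) come from the article; the state shape and function are illustrative, not how the net actually executes.

```typescript
// Toy simulation of the three-phase token flow, tracking only counts.
// The state shape and helper are assumptions for illustration.
interface NetState {
  cmdQueue: number;
  cmdDone: number;
  rawResults: number;
  qaReports: number;
  spec: number;
}

function runCycle(s: NetState): NetState {
  // Phase 1: execute — each queued command emits a result and parks in done.
  const afterExecute = { ...s, cmdQueue: 0, cmdDone: s.cmdQueue, rawResults: s.cmdQueue };
  // Phase 2: analyze — consume all results, emit one report; spec is read-only.
  const afterAnalyze = { ...afterExecute, rawResults: 0, qaReports: afterExecute.qaReports + 1 };
  // Phase 3: recycle — done tokens return to the queue.
  return { ...afterAnalyze, cmdQueue: afterAnalyze.cmdDone, cmdDone: 0 };
}
```

Running the cycle from the clean state returns to the same clean state plus one report, which is exactly the invariant the recycle pattern is meant to guarantee.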


Why This Matters

Adding an agent transition to the QA pipeline created something that neither a shell script nor a CI system can do alone:

1. The net interprets its own output. Raw results are useful for debugging. Structured reports are useful for decisions. The agent transition bridges that gap inside the Petri net itself — not in a separate reporting tool or dashboard.

2. Two execution models coexist. Deterministic command transitions handle the mechanical work (running bash scripts). An agent transition handles the cognitive work (analyzing results, applying logic, structuring conclusions). Same net, same token flow, different capabilities at different stages.

3. The NL instruction is the only configuration. No code was written for the analysis logic. No template was designed for the report format. The natural language instruction told the agent what to extract and how to structure it. Change the instruction, and the report changes — without touching any code or inscriptions.

4. Reports are tokens, not files. The QA report lives in p-qa-report as a first-class token. It can trigger downstream transitions — a notification, a Slack message, a dashboard update. Or another agent can read it later: “Has the build passed consistently for the last five runs?” The answer is in the token history.

This is what Agentic-Nets do differently: they close the loop between doing and understanding. The executor does. The agent understands. And both live in the same Petri net, connected by the same token flow.


What Comes Next

The QA pipeline is complete but not yet automated. Each phase requires a manual fireOnce call. The next step is a timed transition — a transition that fires on a schedule, triggering Phase 1 every morning. Combined with the recycle pattern, this turns the QA net into a fully autonomous quality monitor that runs, analyzes, reports, and resets without human intervention.

The other direction is multi-component QA. The same architecture — command tokens, dual-emit, agent analysis — works for any component in the AgenticOS ecosystem. Add command tokens for agentic-net-cli, agentic-net-gui, or agentic-net-master. They all flow through the same net, and the agent produces a unified cross-component report.
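Extending the net to more components could be as simple as stamping out more command tokens. A hedged sketch: the component names come from the article, but the token payload shape and the command template below are entirely hypothetical.

```typescript
// Hypothetical sketch: generating command tokens for extra components.
// The payload shape and the command template are invented for
// illustration; only the component names come from the article.
interface CommandToken {
  component: string;
  cmd: string;
}

function makeQaTokens(components: string[]): CommandToken[] {
  return components.map((component) => ({
    component,
    cmd: `npm run qa --workspace=${component}`, // hypothetical QA script
  }));
}
```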

Two articles, one net, zero code changes. That is the power of tokens over configuration files.
