The Pipeline Orchestrator — When an Agentic-Net Runs Your Entire CI Pipeline
What if your CI pipeline wasn’t a YAML file, but an autonomous agent that reads a plain-English request, figures out which modules need attention, reviews them, fixes the issues, runs the tests — and reports back with a structured summary? That’s the Pipeline Orchestrator: 4 Agentic-Nets, 23 transitions, and zero human intervention between “review security for gateway and vault” and “ALL_PASS.”
The Problem: CI Pipelines That Can’t Think
Traditional CI/CD is a marvel of engineering — until you need it to make a decision. A GitHub Actions workflow can build, test, and deploy. But ask it to review code for security issues, fix what it finds, and test the fixes? That’s three separate tools, three separate configurations, and a human stitching them together.
The real problem isn’t automation — it’s orchestration of intelligent work. You want to say “review security across gateway and vault” and have the system figure out the rest: which modules to touch, what to look for, how to fix issues, and how to verify the fixes. That requires an orchestrator that can reason, not just execute.
The Architecture: Four Nets, One Pipeline
The Pipeline Orchestrator is built from four Agentic-Nets that compose into a single automated pipeline. Each net handles one concern. An autonomous agent at the top orchestrates them all using FIRE_ONCE — the same mechanism a human operator would use, but executed by an AI that can poll for results, handle errors, and aggregate outcomes.
The four nets live in a single model (agentic-nets-reviewer) and share places across net boundaries. The orchestrator agent in Net 4 controls the entire flow by firing transitions in Nets 1, 2, and 3 — one module at a time, handling errors gracefully, and aggregating results into a final summary.
Net 4: The Orchestrator — An Agent That Drives a Pipeline
The Pipeline Orchestrator net has just 4 places and 2 transitions, but it’s the brain of the entire system. Here’s how it works:
Step 1: Natural Language In
You drop a token into p-user-request with a single property — a plain-English description of what you want:
{
"description": "review security across gateway and vault modules, thorough depth"
}
No structured JSON schema. No module lists. No configuration. Just English.
Step 2: t-collect-spec — Parsing Intent
The first agent transition (t-collect-spec, role rw, max 10 iterations) reads that token and parses it into a structured spec with flat-string properties:
{
"modules": "gateway,vault",
"focus": "security",
"depth": "thorough",
"pipelineMode": "full",
"originalRequest": "review security across gateway and vault modules, thorough depth",
"moduleCount": "2"
}
The agent understands synonyms (“check auth” → security focus), defaults (“all modules” when none specified), and pipeline modes (“just review” vs. “full pipeline”). It consumes the request token and creates the spec in p-spec-ready. Done in under 20 seconds.
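The agent's interpretation can be approximated with a keyword sketch. This is illustrative only: the real t-collect-spec uses an LLM agent to parse intent, and `KNOWN_MODULES`, `parse_request`, and the keyword rules below are hypothetical stand-ins that just show the shape of the flat-string spec:

```python
import re

# Hypothetical sketch of t-collect-spec's intent parsing.
# The real system delegates this to an LLM agent; simple keyword
# matching here only illustrates the flat-string spec it produces.
KNOWN_MODULES = ["gateway", "vault"]  # subset, for illustration

def parse_request(description: str) -> dict:
    text = description.lower()
    # Default to all known modules when none are named
    modules = [m for m in KNOWN_MODULES if m in text] or KNOWN_MODULES
    # Synonym handling: "auth" maps to the security focus
    focus = "security" if re.search(r"security|auth", text) else "general"
    depth = "thorough" if "thorough" in text else "standard"
    if "just review" in text or "review only" in text:
        mode = "review-only"
    elif "fix" in text and "test" not in text:
        mode = "review-fix"
    else:
        mode = "full"
    return {
        "modules": ",".join(modules),
        "focus": focus,
        "depth": depth,
        "pipelineMode": mode,
        "originalRequest": description,
        "moduleCount": str(len(modules)),
    }
```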
Step 3: t-orchestrate — Running the Full Pipeline
This is where it gets interesting. t-orchestrate is an agent transition with role rwxh (read, write, execute, HTTP) and 100 iterations. It reads the spec and then, for each module, drives the entire review → fix → test pipeline:
For each module, the orchestrator creates an input token, fires the map transition (synchronous), fires the command transition (async — returns {queued: true}), then polls the output place until the executor finishes. It handles timeouts, errors, and partial failures gracefully — logging each module’s outcome and continuing to the next.
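A minimal sketch of that fire-then-poll loop, with `fire` and `poll` as hypothetical callables standing in for the FIRE_ONCE API and an output-place query (the transition IDs and place names here are assumptions, not the model's actual identifiers):

```python
import time

def drive_module(module, fire, poll, timeout_s=1800, interval_s=10):
    """Drive one module through the review phase.

    `fire` and `poll` are injected stand-ins for the FIRE_ONCE API
    and an output-place query; real transition/place IDs may differ.
    """
    fire("t-map-" + module)             # synchronous: builds the input token
    result = fire("t-cmd-" + module)    # async: the executor is queued
    assert result.get("queued") is True
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        token = poll("p-review-output", module)
        if token is not None:           # executor finished for this module
            return {"module": module, "status": token.get("status", "unknown")}
        time.sleep(interval_s)
    return {"module": module, "status": "timeout"}
```

Injecting `fire` and `poll` keeps the loop testable without a running AgenticOS instance.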
Three Pipeline Modes
The orchestrator supports three modes, controlled entirely by the natural language request:
| Mode | What Runs | Use Case |
|---|---|---|
| review-only | Net 1 only (review) | Quick assessment, no code changes |
| review-fix | Net 1 + Net 2 (review → fix) | Auto-fix without running tests |
| full | Net 1 + Net 2 + Net 3 (review → fix → test) | Full CI-like pipeline with verification |
Say “just review vault” and you get review-only for one module. Say “review and fix everything” and you get review-fix across all 8 modules. The spec parser agent handles the interpretation; the orchestrator agent handles the execution.
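The mode-to-stages mapping from the table is effectively a lookup. A sketch (the orchestrator agent applies this implicitly rather than via a literal table):

```python
# Which pipeline stages each mode runs, mirroring the table above.
STAGES = {
    "review-only": ["review"],
    "review-fix": ["review", "fix"],
    "full": ["review", "fix", "test"],
}

def stages_for(mode: str) -> list:
    return STAGES[mode]
```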
Real Numbers: What a Full Run Looks Like
Here’s what happens when you drop “review security across gateway and vault modules, thorough depth” into the pipeline:
- t-collect-spec completes in ~10-20 seconds, parsing intent into a structured spec with 2 modules
- Gateway review takes ~3-5 minutes as Claude Code reads the codebase and produces findings
- Gateway fix takes ~5-30 minutes depending on how many issues were found
- Gateway test takes ~3-5 minutes to verify the fixes
- Vault follows the same sequence
- Total wall time: ~30-80 minutes for a full security review + fix + test of two modules
The final summary token in p-pipeline-summary tells you exactly what happened:
{
"totalModules": "2",
"passCount": "2",
"failCount": "0",
"overallStatus": "ALL_PASS",
"pipelineMode": "full",
"focus": "security",
"modules": "gateway,vault",
"moduleResults": "gateway:pass,vault:pass"
}
Per-module log tokens track each phase individually, so you can see exactly where things succeeded or failed.
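Aggregating per-module outcomes into that summary shape is simple to sketch. Field names mirror the sample token above; `summarize` itself and the `PARTIAL_FAIL`/`ALL_FAIL` labels are illustrative assumptions, not the orchestrator's actual inscription:

```python
def summarize(module_results: dict) -> dict:
    """Fold per-module outcomes into a flat-string summary token."""
    passed = [m for m, s in module_results.items() if s == "pass"]
    failed = [m for m, s in module_results.items() if s != "pass"]
    if not failed:
        status = "ALL_PASS"
    elif passed:
        status = "PARTIAL_FAIL"
    else:
        status = "ALL_FAIL"
    return {
        "totalModules": str(len(module_results)),
        "passCount": str(len(passed)),
        "failCount": str(len(failed)),
        "overallStatus": status,
        "moduleResults": ",".join(f"{m}:{s}" for m, s in module_results.items()),
    }
```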
Why This Matters: Agents as Pipeline Stages
The Pipeline Orchestrator demonstrates something that YAML-based CI systems fundamentally cannot do: an agent that reasons about the pipeline while running it.
The orchestrator doesn’t just execute a fixed sequence. It:
- Skips phases based on the pipeline mode (review-only skips fix and test)
- Handles failures gracefully — if the fix for gateway fails, it logs the failure and moves on to vault
- Manages its own iteration budget — at iteration 90 of 100, it creates a partial summary and exits cleanly
- Polls asynchronous results by querying output places, adapting wait times to different phase durations
- Aggregates results into a structured summary that downstream systems can consume
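The budget-management behavior in particular can be sketched as a guard in the module loop. The 100-iteration budget and the exit-at-90 threshold come from the description above; `run_module` and the `skipped:budget` marker are hypothetical names for illustration:

```python
def orchestrate(modules, run_module, budget=100, reserve=10):
    """Run modules sequentially under an iteration budget.

    `run_module` is a stand-in that returns (status, iterations_used).
    When remaining budget drops below `reserve`, unprocessed modules
    are marked skipped so a partial summary can still be emitted.
    """
    used = 0
    results = {}
    for m in modules:
        if used >= budget - reserve:        # e.g. iteration 90 of 100
            results[m] = "skipped:budget"
            continue
        status, iterations = run_module(m)
        used += iterations
        results[m] = status
    return results, used
```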
This is the difference between automation and orchestration. A CI pipeline automates a sequence. The Pipeline Orchestrator orchestrates intelligent work — adapting, recovering, and reporting along the way.
The Composition Pattern: Nets That Control Nets
What makes this architecture powerful isn’t just the orchestrator — it’s the composition. Each of the four nets is independently deployable, testable, and reusable:
- Net 1 (Module Reviewer) has 8 parallel lanes — one per module. Each lane is a map → command pair. You can test a single module’s review independently.
- Net 2 (Module Fixer) takes any review output and generates fixes. It doesn’t know or care which module produced the review.
- Net 3 (Module Tester) takes any fix result and runs tests. Same decoupling.
- Net 4 (Orchestrator) composes them through shared places. The nets communicate via tokens in p-review-output, p-fix-result, and p-test-result.
This is the Petri net philosophy at work: the topology IS the architecture. Adding a new pipeline stage means adding a new net and having the orchestrator fire its transitions. No code changes. No YAML. Just places, transitions, and arcs.
Try It Yourself
With AgenticOS services running, the entire pipeline is three curl commands:
# 1. Drop a request
bash test-pipeline-orchestrator.sh "review security across gateway and vault"
# 2. Or do it manually — create token, fire spec collector, fire orchestrator
curl -X POST "http://localhost:8082/api/transitions/t-collect-spec/fireOnce" \
-H "Content-Type: application/json" -d '{"modelId":"agentic-nets-reviewer"}'
curl -X POST "http://localhost:8082/api/transitions/t-orchestrate/fireOnce" \
-H "Content-Type: application/json" -d '{"modelId":"agentic-nets-reviewer"}'
# 3. Watch results appear
watch -n 10 'curl -sf "http://localhost:8082/api/arcql/query/agentic-nets-reviewer" \
-H "Content-Type: application/json" \
-d "{\"placeId\":\"p-pipeline-log\",\"host\":\"agentic-nets-reviewer@localhost:8080\",\"query\":\"FROM $\"}" \
| python3 -m json.tool'
Go get coffee. Come back to a summary token that tells you exactly what happened across every module, every phase, every result.
Where This Leads
The Pipeline Orchestrator is a proof point for a bigger idea: CI/CD as an Agentic-Net. Today it reviews, fixes, and tests code modules. Tomorrow:
- Parallel module processing — fire all 8 module reviews simultaneously instead of sequentially
- Self-healing loops — if tests fail, route back to the fixer with the failure context
- Cross-module analysis — a synthesis agent that reads all review outputs and identifies systemic patterns
- Deployment integration — on ALL_PASS, automatically trigger a staging deployment via command transition
- Knowledge accumulation — review findings feed into a knowledge place that makes future reviews smarter
Each of these is just another net, another set of places and transitions. The Petri net model doesn’t limit what you can compose — it formalizes it.
This article describes a system built with AgenticOS — 4 Agentic-Nets, 23 transitions, 30 places, all driven by two agent transitions that turn a plain-English request into a fully automated code review pipeline. The nets, inscriptions, and deployment scripts are part of the AgenticOS core repository.
Related: When the Net Writes Your Code | Shared Places | The Reflexive Brain