When the Net Writes Your Code — Building an Autonomous Developer with Agentic-Nets


What if you could describe a feature in plain English, walk away, and come back to find the code written, the build verified, and the changes staged for review? That is what the AgenticOS Developer Net does: a six-zone Petri net pipeline that uses Claude Code as its hands and agent transitions as its brain — analyzing your codebase, planning changes, implementing them, testing the result, and learning from every run.


The Problem: AI Coding Without Structure

AI coding assistants are powerful. You can ask Claude Code to change a file, fix a bug, or add a feature. But the moment you need a repeatable pipeline — analyze the codebase, plan changes across multiple files, implement them, verify the build, review the result, and learn from mistakes — you are back to manual orchestration. Copy-paste prompts. Check outputs. Retry when things break. Feed context from one step to the next by hand.

CI/CD solves the build-test-deploy pipeline. But it does not think. It cannot analyze which files need to change, plan an approach, or decide whether to retry or escalate. And AI coding tools do not have a pipeline. They respond to a single prompt, one shot at a time. The gap between “AI can write code” and “AI can reliably deliver features” is a pipeline gap.


The Idea: A Petri Net That Codes

The AgenticOS Developer Net (agenticos-developer) is a 12-transition Petri net that turns a feature request into staged code. It combines three execution models in a single pipeline: map transitions for data transformation, command transitions for shell execution (Claude Code + builds), and agent transitions for planning and review. Tokens flow through six zones, each handling one phase of the development lifecycle.

[Diagram: AgenticOS Developer Net — Six-Zone Autonomous Coding Pipeline. 12 transitions, 18 places, 3 execution models (map, command, agent). Zone A: Input (p-task-input, project config, retry context, architecture context). Zone B: Analysis & Planning (fmt-analyze map, analyze command with 10m timeout, plan agent). Zone C: Implementation (fmt-impl map, implement command with 2h timeout). Zone D: Testing (fmt-test map, run-tests command with 10m timeout). Zone E: Review (review and route agents). Zone F: Finalization (fmt-commit, commit, learn). Routing: APPROVE, RETRY (max 3x, looping back to input), or ESCALATE to human review.]

The pipeline starts with a task token — a plain JSON description of what to build. It ends with staged git changes, a learning summary, and accumulated architecture knowledge. Every intermediate result is a token in the Petri net: queryable, inspectable, and available for downstream transitions.
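As a concrete illustration, a task token might look roughly like this. The field names here are illustrative, not the actual AgenticOS token schema:

```json
{
  "place": "p-task-input",
  "data": {
    "description": "Move the Builder tab to position 2 in the GUI sidebar",
    "project": "agenticos",
    "retryCount": 0
  }
}
```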


Three Execution Models in One Pipeline

The developer net uses all three execution lanes of AgenticOS. Each handles what it does best:

|             | Map (4 transitions)                          | Command (4 transitions)                   | Agent (4 transitions)           |
|-------------|----------------------------------------------|-------------------------------------------|---------------------------------|
| Runs on     | Master (:8082)                               | Executor (:8084)                          | Master (:8082)                  |
| Purpose     | Transform tokens into command tokens         | Shell execution: Claude Code, builds, git | Multi-step LLM reasoning        |
| Determinism | Deterministic templates                      | Deterministic (shell output varies)       | Non-deterministic (LLM decides) |
| Cost        | Zero (no LLM)                                | Claude Code API cost                      | LLM API cost                    |
| Examples    | fmt-analyze, fmt-impl, fmt-test, fmt-commit  | analyze, implement, run-tests, commit     | plan, review, route, learn      |

Map transitions are the glue. They read tokens from one zone and transform them into command tokens or agent prompts for the next zone. No LLM cost, no network calls — pure template interpolation using ${input.data.field} syntax.
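A map transition definition might look something like the sketch below. This is a simplified illustration of the pattern, not the verbatim AgenticOS transition schema — the field names and the exact prompt are assumptions:

```json
{
  "id": "fmt-analyze",
  "type": "map",
  "inputPlace": "p-task-input",
  "outputPlace": "p-analyze-cmd",
  "template": {
    "command": "claude -p 'Identify which files implement: ${input.data.description}'",
    "workingDir": "${input.data.project}"
  }
}
```

The point is the economics: this step runs on the master, costs nothing, and its only job is to turn the task token into a well-formed command token for the executor.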

Command transitions are the hands. They execute shell commands on the executor: Claude Code for codebase analysis and implementation, mvnw or ng build for testing, git add for staging. Each command runs in a subprocess with configurable timeouts — 10 minutes for analysis, 2 hours for implementation, 1 minute for git.

Agent transitions are the brain. They run LLM reasoning loops on the master: planning implementation approaches, reviewing test results, routing decisions (approve/retry/escalate), and extracting lessons from completed tasks.


Claude Code as a Command Transition

The key innovation is running Claude Code — a full AI coding agent — inside a Petri net command transition. This turns an interactive CLI tool into a pipeline stage that reads tokens, executes autonomously, and produces tokens.

The Command Pattern

Claude Code runs in non-interactive mode via claude -p. The command is assembled by a map transition from the plan token, then dispatched to the executor. Three flags are essential:

export PATH=$HOME/.local/bin:$PATH && \
PROMPT_FILE=$(mktemp) && \
cat > "$PROMPT_FILE" <<'DEVNET_PROMPT_EOF'
Implement the following plan in the AgenticOS codebase:
1. Move the Builder tab to position 2 in editor-layout.component.ts
2. Mirror the change in toolbar.component.ts
3. Update swipe arrays in mobile-panel.service.ts
After changes, verify with: npx ng build
DEVNET_PROMPT_EOF
claude --dangerously-skip-permissions --no-session-persistence \
  -p "$(cat "$PROMPT_FILE")" < /dev/null 2>/dev/null
rm -f "$PROMPT_FILE"
  • --dangerously-skip-permissions — allows file writes in non-interactive mode
  • --no-session-persistence — prevents session state accumulation across pipeline runs
  • < /dev/null — prevents stdin blocking (critical for executor subprocess)
  • heredoc quoting — prevents shell interpretation of single quotes in the prompt

The Working Directory Trick

Every command transition specifies a workingDir. Set it to the monorepo root, and Claude Code automatically reads the repository’s CLAUDE.md — a comprehensive architecture document covering all services, patterns, APIs, and conventions. The coding agent gets full context for free, without injecting thousands of tokens into the prompt.

{
  "action": {
    "type": "command",
    "inputPlace": "input",
    "dispatch": [{"executor": "bash", "channel": "default"}],
    "await": "ALL",
    "timeoutMs": 7200000
  }
}

A 2-hour timeout. Enough for Claude Code to analyze dozens of files, make multi-file changes, and verify the build. The executor manages the subprocess lifecycle — timeout, exit code capture, stdout/stderr routing — while the Petri net manages the pipeline flow.


The Retry Loop: Learning From Failure

The developer net does not assume success. After implementation and testing, the review agent evaluates test results and makes a three-way decision:

[Diagram: Review & Retry — the self-correcting loop. The test result (exit code + stdout) flows to t-review (agent), which produces a verdict token; t-route (agent) then routes it: APPROVE to p-commit-ready, RETRY back to p-task-input with accumulated context (retryCount 0 → 1 → 2 → 3), or ESCALATE to p-human-review after 3 retries. Retry context accumulates across attempts — attempt 1: "Permission denied — missing --dangerously-skip-permissions"; attempt 2: "Build failed — --watch=false not supported". Each retry carries the full history of what went wrong.]

The review agent does not just check exit codes. It reads the test output, compares it against the implementation plan, and produces a structured verdict. If it says RETRY, the route agent writes the failure context to p-retry-context and sends the task back to p-task-input. The planning agent reads this context on the next iteration and avoids repeating mistakes.

After three retries, the route agent escalates to p-human-review — a place where a developer can inspect the accumulated context and decide what to do next. The net does not retry forever. It knows when to ask for help.
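To make the three-way decision concrete, the verdict token a review agent produces might take a shape like this. These fields are illustrative, chosen to match the behavior described above rather than copied from the actual schema:

```json
{
  "verdict": "RETRY",
  "reason": "Build failed: ng test --watch=false is not supported, use npx ng build instead",
  "retryCount": 1,
  "history": [
    "Attempt 1: Permission denied, missing --dangerously-skip-permissions"
  ]
}
```

On RETRY, this token lands in p-retry-context; on the next pass the planning agent reads the history array and writes a plan that avoids both recorded failures.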


Real Numbers: First Successful Run

We deployed the net and seeded a real task: “Move the Builder tab to be the second tab after Properties in the GUI sidebar, toolbar, and mobile swipe navigation.” Three files needed changes across the Angular frontend.

Here is what happened:

  • t-analyze (Claude Code, 45 seconds): Identified 3 files — editor-layout.component.ts, toolbar.component.ts, mobile-panel.service.ts. Found the exact arrays that define tab order.
  • t-plan (agent, 2 iterations): Produced a detailed implementation plan with a Claude Code prompt specifying the exact array changes needed and the build verification command.
  • t-implement (Claude Code, 100 seconds): Modified all 3 files — moved {id: 'chat', label: 'Builder'} from position 5 to position 2 in sidebarTabs, tabOptions, and both swipe arrays.
  • t-run-tests (npx ng build, 30 seconds): Angular build passed. Exit code 0.
  • t-review (agent, 2 iterations): Verdict: APPROVE. All files changed, build green, implementation matches plan.
  • t-commit (git add -A && git status, 1 second): Three files staged for review.
  • t-learn (agent, 3 iterations): Extracted 4 lessons and 2 architecture insights from the run.

Total wall-clock time from task token to staged commit: under 4 minutes. Total human intervention: zero. The net analyzed the codebase, planned the changes, implemented them across three files, verified the build, approved the result, staged the changes, and learned from the experience — all autonomously.


What We Learned Building It

Getting the pipeline to work end-to-end required solving five problems that no documentation warned us about. Each one became a lesson that the net now carries in its retry context.

| Problem                 | Symptom                                         | Fix                                                           |
|-------------------------|-------------------------------------------------|---------------------------------------------------------------|
| Claude Code permissions | Exit code 0, but no files changed               | Add --dangerously-skip-permissions for non-interactive mode   |
| Shell quoting           | Prompt with single quotes broke bash            | Heredoc pattern: write prompt to temp file via <<'EOF'        |
| Command emit rules      | Errors went to error-log but not to test-result | Catch-all emit: {"to":"result","from":"@result"} with no when |
| Conditional routing     | Pass transition's when conditions never matched | Converted t-route from pass to agent with CREATE_TOKEN routing |
| ArcQL syntax            | FROM $ WHERE $.description LIMIT 1 parse error  | Property existence requires comparison: $.description!=""     |

Each fix was a pattern crystallization moment. The heredoc quoting pattern, the catch-all emit rule, the agent-based routing — these are now reusable building blocks for any future Petri net that needs to run CLI tools or make conditional routing decisions.
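The catch-all emit rule, for instance, sits last in a command transition's emit list so that every outcome produces a token somewhere. The sketch below illustrates the pattern; the conditional rule's when syntax is an assumption, only the catch-all form is taken from the fixes above:

```json
{
  "emit": [
    {"to": "error-log", "from": "@error", "when": "exitCode != 0"},
    {"to": "result", "from": "@result"}
  ]
}
```

Without the final rule, a failing command emits only to error-log, the downstream place stays empty, and the pipeline silently stalls — exactly the symptom in the table.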


The Learning Loop

The finalization zone does not just stage code. It runs a learning agent that reads the completed task — the original request, the analysis, the plan, the implementation result, and the test outcome — and crystallizes reusable knowledge.

From the first run, the learning agent extracted:

{
  "lessons": [
    "Angular tab order is defined in 3 files: editor-layout, toolbar, mobile-panel",
    "Desktop and mobile maintain separate but parallel tab arrays",
    "npx ng build is more reliable than ng test --watch=false for CI",
    "Swipe navigation uses ordered arrays — position matters"
  ],
  "architectureInsights": [
    "GUI sidebar uses {id, label} objects in ordered arrays",
    "Mobile swipe navigation mirrors desktop tab order"
  ]
}

These tokens flow to p-lessons-learned and p-architecture-context. The next task that involves the GUI sidebar will have this context available — the planning agent will know exactly which files to target and which build command to use, without rediscovering it.

This is the crystallization effect in action. The first run discovers patterns. The second run uses them. By the third run, the net executes with surgical precision — zero exploration, zero wasted iterations, zero retries.


Why a Petri Net and Not a Script

You could build this pipeline as a bash script. Or a GitHub Action. Or a LangChain agent. Why a Petri net?

1. Every intermediate result is a token. When Claude Code finishes its analysis, the result sits in p-analysis-result as a queryable token. You can inspect it from the GUI, the CLI, or the Telegram bot. You can fire the next transition manually, or let the net continue autonomously. Try doing that with a subprocess in a bash script.

2. Retry is a topology, not an if-statement. The retry loop is an arc from t-route back to p-task-input. Adding a retry limit is a counter in the review agent’s prompt. Adding a different retry path for test failures vs. build failures is another postset and another CREATE_TOKEN call. No code changes — just net topology.

3. The pipeline survives restarts. Tokens are persisted by agentic-net-node’s event sourcing. If the executor crashes during implementation, the command token is still in p-implement-cmd when the system comes back up. The transition will re-fire. No checkpoint files, no state databases — the Petri net is the state.

4. Observability is built in. Every token has a timestamp, a parent place, and a provenance trail. The GUI shows tokens flowing through places in real time. Grafana dashboards track transition fire rates, durations, and failure counts. The pipeline is not a black box — it is a living system you can watch, query, and debug.


What Comes Next

The current developer net is a fixed pipeline — every task flows through the same six zones. The next evolution is a meta-net: a master net that reads the task, classifies it (bug fix, feature, refactor, documentation), and forges a custom sub-net tailored to the task type.

  • Simple bug fix: Skip planning, go straight from analysis to implementation to test
  • Multi-service feature: Parallel analysis per service, then coordinated implementation plan
  • Documentation task: Implementation only, skip tests, direct to commit
  • Performance issue: Extended analysis with profiling commands, iterative optimization loop

This follows the existing MCN (Meta-Composite Net) pattern — where one Agentic-Net generates and deploys other Agentic-Nets at runtime. The fixed pipeline is the stable foundation. The meta-net is the intelligence layer that adapts the pipeline to each task.


This article was drafted with the help of Claude, reflecting on a Petri net that itself uses Claude Code to write code. The net deployed, the task ran, the Builder tab moved. Three files, zero human intervention, one net that learns.

Related: When the Net Reads Its Own Results · The Reflexive Brain · From Human-in-the-Loop to Living Automation
