From AI-in-the-Loop to Self-Evolving Agentic Nets

Most AI systems get more expensive the longer they run. Agentic-Nets do the opposite. Here’s how—and why it matters.


The Hidden Economics of AI Automation

There’s a pattern emerging in enterprise AI adoption that nobody talks about openly: the cost curve goes up, not down.

When you deploy an AI agent to handle customer support, process documents, or manage workflows, something counterintuitive happens:

  • Day 1: The AI handles simple cases brilliantly.
  • Week 4: You’ve scaled to thousands of requests. LLM costs are significant.
  • Month 3: Even routine, repetitive tasks still require full LLM inference.
  • Year 1: You’re paying the same per-request cost for work that stopped being novel months ago.

The fundamental problem: AI systems don’t distinguish between genuinely novel work and work that has already been solved. Every request, no matter how routine, gets the full reasoning treatment.

This is where Agentic-Nets take a different path.


The Core Idea: Self-Improving Automation

Agentic-Nets are built around a simple observation: whenever an AI agent solves a problem repeatedly, the solution stops being creative and becomes procedural. At that moment, you don’t need AI anymore—you need structure.

The key insight:

Most AI systems burn tokens on certainty. Agentic-Nets reserve AI for genuine uncertainty.

This isn’t about eliminating AI. It’s about using AI where it actually matters—at the edges, where novelty exists—while letting deterministic structure handle the routine.

[Figure: The Cost Curve, Traditional AI vs Agentic-Nets. Traditional AI: every request is full LLM inference, and routine work costs as much as novel work, so costs scale with usage volume. Agentic-Nets: patterns crystallize into deterministic structure and AI is reserved for genuine uncertainty, so costs drop over time.]

The Foundation: Event-Sourced Auditability

Before diving into the intelligence layer, it’s worth understanding what makes Agentic-Nets different at the infrastructure level: everything is an event.

Every token movement, every transition execution, every structural change is captured as an immutable event. This isn’t just logging—it’s the actual data model:

Event #1: CreateToken {placeId: "inbox", data: {orderId: "ORD-123"}}
Event #2: TransitionFired {transitionId: "validate-order", consumed: [...], produced: [...]}
Event #3: CreateTransition {id: "t-weather-api", kind: "http"}
Event #4: TokenConsumed {tokenId: "...", reason: "processed"}

Why this matters for self-evolution:

  1. Time travel debugging: Reconstruct system state at any point in history
  2. Full provenance: Every result traces back to its causal chain
  3. Safe experimentation: Structural changes are reversible
  4. Regulatory compliance: Audit trails are built-in, not bolted-on

When an agent proposes a new transition, that proposal is an event. When a human approves it, that’s an event. When the transition fires successfully, that’s events too. The entire evolution of the system is captured, inspectable, and—critically—auditable.
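As a rough illustration of this data model (a minimal sketch; the class and event names here are hypothetical, not the AgenticNetOS API), an append-only log plus a replay function is enough to get time-travel state reconstruction:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    seq: int       # monotonically increasing sequence number
    kind: str      # e.g. "CreateToken", "TokenConsumed"
    payload: dict

@dataclass
class EventLog:
    events: list = field(default_factory=list)

    def append(self, kind: str, payload: dict) -> Event:
        ev = Event(seq=len(self.events) + 1, kind=kind, payload=payload)
        self.events.append(ev)  # immutable log: events are never edited or deleted
        return ev

    def replay(self, upto=None) -> dict:
        """Reconstruct the place -> tokens state as of event number `upto`."""
        state = {}
        for ev in self.events:
            if upto is not None and ev.seq > upto:
                break
            if ev.kind == "CreateToken":
                state.setdefault(ev.payload["placeId"], []).append(ev.payload["data"])
            elif ev.kind == "TokenConsumed":
                tokens = state.get(ev.payload["placeId"], [])
                if tokens:
                    tokens.pop(0)   # consume oldest token first (FIFO assumption)
        return state

# Demo: two tokens created, one consumed.
log = EventLog()
log.append("CreateToken", {"placeId": "inbox", "data": {"orderId": "ORD-123"}})
log.append("CreateToken", {"placeId": "inbox", "data": {"orderId": "ORD-124"}})
log.append("TokenConsumed", {"placeId": "inbox"})
```

Because state is always derived from the log, "what did the system look like at event #2?" is just `log.replay(upto=2)`.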


The Agent Transition: A New Usage Pattern

Here’s where Agentic-Nets diverge fundamentally from traditional workflow systems and from pure AI agents.

An Agent Transition is a transition in the Petri net that hosts an AI agent. But unlike a simple “call the LLM” step, it has unique properties:

  • Configurable agent type: Choose which agent runs (Claude, llama3.2, specialized domain agents)
  • Context-aware: Reads from input places containing documentation, lessons learned, and task instructions
  • Self-improving: Execution results feed back as new context tokens
  • Structurally empowered: Can propose new places and transitions it thinks it needs

[Figure: The Agent Transition, self-learning execution. Inputs: a task (NL instruction), context (lessons learned), and tooling (API docs). A configurable AI (Claude, llama3.2, etc.) produces a result, structural proposals, and new lesson tokens that feed back into the context place (the reflexive loop). Across executions the context place grows (3 → 14 → 30+ tokens) while new lessons per run decline (5 → 2 → converged): the agent gets smarter without code changes, quality improves with each execution, and documentation IS the program.]

The Reflexive Loop

This is where Agentic-Nets become genuinely novel. The agent doesn’t just execute tasks—it learns from each execution and captures that learning as documentation tokens:

Execution #1 (Learning Phase):
├─ Reads: 5 API docs + 3 basic lessons = 8 tokens
├─ Encounters pitfalls (bulk import fails, parsing errors)
├─ Captures 5 new lessons about what works
└─ Context grows: 3 → 8 tokens

Execution #2 (Improved):
├─ Reads: 5 API docs + 8 lessons = 13 tokens
├─ Applies learned patterns → smooth execution
├─ Captures 2 refinement lessons
└─ Context grows: 8 → 10 tokens

Execution #3+ (Converged):
├─ Reads: 5 API docs + 10 lessons = 15 tokens
├─ Perfect execution on first try
└─ Quality has stabilized

The paradigm shift: Traditional systems separate code from documentation. Code executes, documentation explains. In Agentic-Nets, documentation IS the program. The agent reads documentation tokens to understand how to execute, and execution results become new documentation.
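The loop above can be sketched in a few lines (hypothetical code: `execute` stands in for the real agent call, and the convergence schedule is invented for illustration):

```python
def run_agent_transition(context_place: list, task: str, execute):
    """execute(task, context) -> (result, new_lessons). Lessons feed back as tokens."""
    result, lessons = execute(task, list(context_place))
    context_place.extend(lessons)  # the reflexive loop: output becomes future input
    return result

# The context place starts with documentation tokens only.
context = ["api-doc-1", "api-doc-2"]

def execute(task, ctx):
    # Stand-in for the agent: the more context it reads, the fewer new pitfalls
    # it hits, so it captures fewer new lessons each run (converging toward zero).
    new = max(0, 3 - len(ctx) // 3)
    return f"done:{task}", [f"lesson-{len(ctx) + i}" for i in range(new)]

lesson_counts = []
for i in range(3):
    before = len(context)
    run_agent_transition(context, f"task-{i}", execute)
    lesson_counts.append(len(context) - before)
```

Each run captures fewer new lessons than the last, which is exactly the learning-then-convergence shape traced above.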


Three Distinct Agent Patterns

Agentic-Nets support three distinct patterns for AI integration, each serving a different purpose:

Pattern 1: The External Builder (PetriGuru)

A universal agent that lives outside the net and creates entire net structures from natural language descriptions.

Use case: Initial creation from scratch.

Human: "Create a document approval workflow with draft, review, and approval states"

PetriGuru:
  → Generates complete PNML structure
  → Creates places: draft-submitted, under-review, approved, rejected
  → Creates transitions: start-review, approve-document, reject-document
  → Wires arcs and initial tokens
  → Deploys ready-to-run net

This is what you use when you’re starting from zero. The agent designs the entire system.

Pattern 2: The Agent Transition (The Core Pattern)

An agent embedded inside the net as a transition. This is the self-learning, self-improving pattern described above.

Use case: Ongoing intelligent execution with continuous improvement.

Net running:
  → Task token arrives in input place
  → Agent Transition reads task + context + tooling
  → Agent executes task using learned patterns
  → Agent captures lessons as new context tokens
  → Agent may propose structural changes

This is the pattern that makes the cost curve bend down over time.

Pattern 3: The Monitoring Agent (Nets Monitoring Nets)

A net that observes other running nets over time and proposes optimizations.

Use case: Crystallization and cost reduction.

Monitoring Net observes Production Net:
  → Tracks which agent transitions fire repeatedly
  → Identifies stable patterns (same input → same output)
  → Proposes: "Replace LLM call with HTTP transition"
  → Human approves
  → Production Net becomes cheaper

This is the meta-structure that enables genuine self-evolution.

[Figure: Three agent patterns in Agentic-Nets. 1. External Builder (PetriGuru): lives outside the net, creates entire net structures from natural language; a one-time design task ("Build me a new workflow from this description"). 2. Agent Transition (the core pattern): embedded inside the net, self-learning through context tokens, can propose structural changes; a continuous improvement loop ("Execute this task using what you've learned"). 3. Monitoring Agent (meta-structure): a net that observes other nets, identifies crystallization opportunities, and proposes cost-reducing optimizations ("This pattern is stable—make it deterministic"). Pattern 2 is where the self-improvement happens; Pattern 3 is what makes costs decline over time.]

The Crucial Distinction: LLM Transition vs Agent Transition

Not all AI integration is equal. Agentic-Nets draw a hard line between two types of AI transitions:

Aspect              LLM Transition               Agent Transition
Purpose             Single inference call        Autonomous task execution
Side effects        Token flow only              Can create places, transitions, tokens
Presets/postsets    Required and fixed           Discovered from NL instruction
Scope               “Summarize this document”    “Build me a monitoring workflow”
Learning            None                         Captures lessons, improves over time

LLM Transition: A simple wrapper around an LLM call. It takes input tokens, calls the model, and produces output tokens. No side effects beyond token flow.

Example: "Classify this support ticket as urgent/normal/low"
→ Input: ticket content
→ Output: classification label
→ No structural changes to the net

Agent Transition: A full autonomous agent that can reshape the net itself. It reads context, reasons about what structures it needs, and proposes changes.

Example: "Set up monitoring for our API response times"
→ Input: task description + API documentation + context
→ Output: execution result + proposals for new places/transitions
→ May propose: "I need a place for metrics and a transition to alert on thresholds"

This distinction is critical because it defines the boundary of autonomous capability. LLM transitions are safe—they can’t break anything. Agent transitions are powerful—they can evolve the system. The power differential is intentional.


The Core USP: Self-Improving Nets That Reduce AI Usage

Now we arrive at the central value proposition. Let’s trace through a concrete example.

The Weather Data Scenario

Week 1: Your agent needs weather data for shipping decisions.

Agent reasoning (every execution):
  "I need current weather for Chicago..."
  "Let me search for weather APIs..."
  "OpenWeatherMap seems appropriate..."
  "I'll construct the API call..."
  "Parse the response..."
  "Temperature is 45°F, clear skies..."

Cost: ~$0.05 per execution (full LLM reasoning)

Week 2: Agent notices the pattern.

Agent observation:
  "I've called this same weather API 47 times"
  "The pattern is always: city → API call → parse → temperature"
  "This doesn't require my reasoning anymore"

Agent proposal:
  "Let me add an HTTP transition for weather data"
  "Configuration: endpoint, API key, response mapping"
  "Human should approve and bind the API key"

Week 3: After human approval, the net evolves.

Before: Agent Transition (LLM reasoning for every weather call)
After:  HTTP Transition (deterministic API call, no LLM)

Cost: ~$0.0001 per execution (just the API call)

The math: If you make 1,000 weather calls per month:

  • Traditional: 1,000 × $0.05 = $50/month forever
  • Agentic-Nets: $50 in month 1, then $0.10/month forever

[Figure: Crystallization, from AI reasoning to deterministic structure. Week 1: the agent reasons with full LLM inference, discovers the API, and learns the pattern ($0.05/call). Week 2: the agent detects the repetition, proposes an HTTP transition, and requests credentials (awaiting approval). A human reviews the proposal, chooses the API provider, and binds credentials, keeping governance intact. Week 3 onward: the HTTP transition runs deterministically with no LLM involved; fast, reliable, cheap ($0.0001/call). Result: $50/month before, $0.10/month after, a 500x reduction. The key insight: traditional AI uses the LLM for everything, so costs scale linearly with usage; Agentic-Nets use the LLM to discover patterns, then crystallize, so costs decline over time.]
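The arithmetic is worth making explicit; a quick check using the per-call figures from the trace above:

```python
calls_per_month = 1_000
llm_cost_per_call = 0.05     # full LLM reasoning (week 1 figure above)
http_cost_per_call = 0.0001  # deterministic HTTP transition (week 3 figure)

traditional = calls_per_month * llm_cost_per_call    # paid every month, forever
crystallized = calls_per_month * http_cost_per_call  # paid from month 2 onward

reduction = traditional / crystallized  # ~500x once the pattern crystallizes
```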

Why This Works

The crystallization pattern works because of a fundamental truth about work: most tasks are variations of patterns we’ve seen before. Novel work is rare. Routine work is common.

Traditional AI systems treat every request as potentially novel—they apply full reasoning power every time. Agentic-Nets flip this: they assume work is routine unless proven otherwise, and they actively look for patterns to crystallize.

Teach once → formalize → automate forever.


Human-in-the-Loop by Design

Here’s where Agentic-Nets differ fundamentally from “fully autonomous” AI systems: agents may propose, but humans decide.

This isn’t a limitation—it’s a design choice. Here’s why:

The Capability Model

Agent Transitions can:

  • Read from any place they’re given access to
  • Execute tasks using learned patterns
  • Propose new places and transitions
  • Capture lessons for future executions

Agent Transitions cannot:

  • Deploy structural changes without approval
  • Bind credentials or external services
  • Bypass rate limits or budgets
  • Override human-defined guardrails

The Approval Flow

1. Agent proposes: "I need an HTTP transition for weather data"
   ├─ Proposal stored as token in "proposals" place
   └─ Includes: configuration, rationale, expected benefit

2. Human reviews:
   ├─ Which API provider?
   ├─ What credentials?
   ├─ What rate limits?
   └─ Approve or modify

3. System deploys:
   ├─ Structural change recorded as event
   ├─ Credentials bound securely
   └─ Transition becomes active

4. Audit trail complete:
   ├─ Who proposed (agent ID)
   ├─ Who approved (user ID)
   ├─ When deployed (timestamp)
   └─ Full provenance
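A minimal sketch of that flow (hypothetical names and shapes; the real proposal and event schemas are not shown in this post):

```python
def propose(proposals: list, agent_id: str, change: dict) -> int:
    """An agent's proposal is just a token waiting in the proposals place."""
    proposals.append({"proposedBy": agent_id, "change": change, "status": "pending"})
    return len(proposals) - 1

def approve(proposals: list, idx: int, user_id: str, event_log: list) -> None:
    """Only a human approval turns a proposal into a deployed structural change."""
    proposal = proposals[idx]
    proposal["status"] = "approved"
    # Deployment is recorded as an event: who proposed, who approved, what changed.
    event_log.append({
        "kind": "StructuralChangeDeployed",
        "change": proposal["change"],
        "proposedBy": proposal["proposedBy"],
        "approvedBy": user_id,
    })

proposals, events = [], []
i = propose(proposals, "t-agent-1", {"kind": "http", "id": "t-weather-api"})
approve(proposals, i, "user-42", events)
```

The agent can append to `proposals` all it wants; nothing reaches `events`, and nothing deploys, until `approve` runs with a human's user ID.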

Why This Matters

Every structural change in the system has clear origin:

  • Who proposed it: Agent transition ID, execution context
  • Who approved it: Human user ID, approval timestamp
  • When it became active: Event sequence number
  • What it changed: Before/after state diff

This is critical for:

  • Compliance: Regulatory requirements for AI decision-making
  • Debugging: When something goes wrong, trace the causal chain
  • Trust: Humans remain in control of system evolution
  • Security: No autonomous credential binding or external access

Command Transitions and Remote Executors

Beyond AI-powered transitions, Agentic-Nets support Command Transitions for executing shell commands, scripts, and external tools.

The Command Token Schema

{
  "kind": "command",
  "id": "fetch-logs-001",
  "executor": "bash",
  "command": "exec",
  "args": {
    "command": "kubectl logs -n production deployment/api-gateway --tail=100",
    "workingDir": "/home/ops",
    "timeoutMs": 30000,
    "captureStderr": true
  },
  "expect": "text"
}

Remote Executor Architecture

Executors can run on remote machines with a critical security property: no inbound ports required. The executor polls the master for work, executes locally, and reports results back.

Remote Machine                    AgenticNetOS Master
    │                                 │
    │──────── Poll for work ─────────>│
    │<─────── Command token ──────────│
    │                                 │
    │    [Execute locally]            │
    │                                 │
    │──────── Report result ─────────>│
    │                                 │

This means you can deploy executors inside firewalled environments, air-gapped networks, or customer premises without exposing any attack surface.
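The outbound-only loop is simple enough to sketch (hypothetical: `poll`, `run`, and `report` stand in for the real master API):

```python
import time

def executor_loop(poll, run, report, max_iterations: int, idle_sleep: float = 1.0):
    """Poll the master for work, execute locally, report back.
    Every connection is outbound, so the executor never listens on a port."""
    for _ in range(max_iterations):
        token = poll()                # outbound request: "any work for me?"
        if token is None:
            time.sleep(idle_sleep)    # queue empty: back off, then poll again
            continue
        result = run(token["args"])   # executes inside the firewalled network
        report(token["id"], result)   # outbound again: push the result back

# Demo with stand-ins: one queued command token, then one empty poll.
queue = [{"id": "c1", "args": "kubectl logs deployment/api-gateway"}]
results = {}
executor_loop(
    poll=lambda: queue.pop(0) if queue else None,
    run=lambda args: f"ran: {args}",
    report=results.__setitem__,
    max_iterations=2,
    idle_sleep=0,
)
```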

Integration with BlobStore

For binary artifacts (logs, files, reports), executors integrate with a distributed BlobStore:

  1. Command produces large output
  2. Executor uploads to BlobStore cluster
  3. Result token contains blob reference
  4. Downstream transitions fetch blob as needed
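A content-addressed handoff might look like this (a sketch; the real BlobStore API is not shown in this post, and SHA-256 addressing is an assumption):

```python
import hashlib

def store_blob(blobstore: dict, data: bytes) -> str:
    """Upload once; the key is a content hash, so identical blobs deduplicate."""
    ref = hashlib.sha256(data).hexdigest()
    blobstore[ref] = data
    return ref

def make_result_token(blobstore: dict, command_id: str, output: bytes) -> dict:
    # The token stays small: it carries a blob reference, not the payload itself.
    return {"commandId": command_id, "blobRef": store_blob(blobstore, output)}

store = {}
token = make_result_token(store, "fetch-logs-001", b"...100 lines of log output...")
```

Downstream transitions that need the raw output dereference `blobRef`; everything else routes the lightweight token.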

Nets That Run While You Sleep

Traditional workflows are reactive—they wait for triggers. Agentic-Nets are living systems—they run continuously, driven by time, events, and state.

Example: Personal Calendar Assistant

A net that manages your calendar autonomously:

Places:

  • `calendar-events`: Upcoming meetings and appointments
  • `context`: Your preferences, recurring patterns, travel times
  • `proposals`: Suggested schedule changes
  • `notifications`: Alerts requiring your attention

Transitions:

  • `scan-conflicts`: Identifies overlapping meetings
  • `suggest-reschedule`: Proposes alternatives for conflicts
  • `check-travel-time`: Validates buffer between locations
  • `morning-briefing`: Generates daily summary at 7am

Behavior:

Overnight:
  → New meeting requests arrive in calendar-events
  → scan-conflicts fires, detects overlap with important meeting
  → suggest-reschedule proposes moving less important meeting
  → Proposal waits in proposals place

Morning:
  → morning-briefing fires at 7am
  → Generates summary: "2 conflicts detected, 1 proposal pending"
  → You review proposals, approve/reject
  → Approved changes execute automatically

Everything that happened overnight is captured as events. You can inspect what the net did, why it made proposals, and what state it’s in when you wake up.


Nets Monitoring Nets: Meta-Structures

The most powerful pattern in Agentic-Nets is using nets to monitor and improve other nets.

The Architecture

Production Net (doing work):
  └─ Places, transitions, tokens flowing
  └─ Metrics emitted on every transition fire
  └─ State visible through standard APIs

Monitoring Net (observing):
  ├─ metrics-collector: Gathers execution statistics
  ├─ pattern-detector: Identifies recurring sequences
  ├─ cost-analyzer: Calculates per-transition costs
  └─ proposal-generator: Suggests optimizations

What the Monitoring Net Detects

Stable patterns: “This agent transition produces the same output type 95% of the time”

Cost hotspots: “This transition accounts for 60% of AI costs”

Reliability issues: “This transition fails 12% of the time—here’s the common factor”

Crystallization candidates: “Replace this LLM call with a MAP transition”
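The stability check behind "crystallization candidates" can be sketched as a frequency test over recent firings (hypothetical; the threshold and signature encoding are illustrative):

```python
from collections import Counter

def is_crystallization_candidate(firings: list, threshold: float = 0.95) -> bool:
    """firings: (input_signature, output_signature) pairs from recent executions.
    If one output pattern dominates, the transition no longer needs reasoning."""
    if not firings:
        return False
    top = Counter(out for _, out in firings).most_common(1)[0][1]
    return top / len(firings) >= threshold

# A weather lookup that almost always behaves the same way vs. a genuinely
# variable judgment call.
stable = [("city", "temperature")] * 95 + [("city", "error")] * 5
volatile = [("doc", "summary")] * 50 + [("doc", "rejection")] * 50
```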

The Reflexive Loop Visualized

[Figure: Nets monitoring nets, the meta-structure. A production net (places plus AI and HTTP transitions) emits metrics on every fire. A monitoring net (pattern-detector → cost-analyzer → proposal-generator) observes those metrics and concludes, for example, "AI transition #2 is stable—crystallize to HTTP." The reflexive loop in action: 1. the production net executes and emits metrics on every fire; 2. the monitoring net detects "AI transition fires 200x/day, 95% same pattern"; 3. it proposes replacing the LLM call with a deterministic HTTP transition; 4. a human reviews and approves, and the production net evolves; 5. costs drop and reliability improves—automatically discovered.]

Token Intelligence Layer

Tokens in Agentic-Nets aren’t just data blobs—they’re semantically queryable entities. The ArcQL query language enables intelligent reasoning about system state.

Example Queries

-- Find all failed orders with high retry counts
FROM $ WHERE status=="failed" AND retryCount > 3

-- Get urgent tickets that have been waiting more than an hour
FROM $ WHERE priority=="high" AND waitingMs > 3600000

-- Find tokens created by a specific agent transition
FROM $ WHERE _meta.createdBy=="t-process-orders"
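The first query above has a direct analogue as a plain predicate over token data (a Python sketch for intuition, not ArcQL itself):

```python
tokens = [
    {"status": "failed", "retryCount": 5, "orderId": "ORD-201"},
    {"status": "failed", "retryCount": 1, "orderId": "ORD-202"},
    {"status": "shipped", "retryCount": 0, "orderId": "ORD-203"},
]

# Equivalent of: FROM $ WHERE status=="failed" AND retryCount > 3
matches = [t for t in tokens if t["status"] == "failed" and t["retryCount"] > 3]
```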

Token Explorer

The AgenticNetOS GUI includes a Token Explorer that lets you:

  • Browse tokens across all places in real-time
  • Query using ArcQL with syntax highlighting
  • Trace token lineage (what created this token, what consumed it)
  • Inspect full token history through event sourcing

This isn’t just “here are your tokens”—it’s “here’s what your system has been doing, and why.”


Design-Time vs Run-Time Intelligence

Agentic-Nets separate two distinct phases with different intelligence requirements:

Design-Time (AI-Heavy)

  • Agents generate PNML structures from natural language
  • Agents create transition inscriptions
  • Agents propose structural changes
  • Humans review and approve

This is where AI creativity and reasoning matter most.

Run-Time (Structure-Heavy)

  • Deterministic engine executes transitions
  • Event sourcing captures every state change
  • AI only involved for explicitly AI-typed transitions
  • Minimal inference cost for routine work

This is where efficiency and reliability matter most.

[Figure: Design-time vs run-time, where intelligence lives. Design-time (AI-heavy): natural language → PNML generation, transition inscription creation, structural change proposals, pattern discovery and learning; AI does the creative work. Run-time (structure-heavy): deterministic transition execution, event sourcing for all state changes, token flow through places, AI only where explicitly typed; the engine does the routine work. The universal agent designs the system; the engine executes it.]

What Emerges: An Agentic Framework Without Chaos

When you put all these pieces together, something interesting emerges. Agentic-Nets become:

  • A framework for living automation: Systems that run continuously, adapt over time, and improve without code changes
  • Self-improving, but not self-uncontrolled: Agents can propose, but humans decide
  • AI-assisted, but not AI-dependent: Intelligence where it matters, structure everywhere else
  • Transparent, auditable, and explainable: Every change traced, every decision logged

This is not “let the AI do everything.” It’s a more nuanced proposition:

Teach once → formalize → automate forever

The first time you solve a problem, the AI reasons through it. The second time, it applies learned patterns. The third time, it proposes crystallizing the pattern into structure. From then on, no AI needed for that pattern—until something genuinely novel appears.


The Practical Advantages

Let’s be concrete about what you gain:

Cost Budgets per Workflow

Decide upfront how much AI you’re willing to spend. Route execution accordingly. Predictable operational costs instead of runaway inference bills.
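Such routing could be as simple as this sketch (hypothetical; in practice the set of crystallized patterns would come from the net's own structure, not a literal):

```python
def route(task_kind: str, budget_remaining: float, llm_cost: float,
          crystallized: set) -> str:
    """Spend the LLM budget only on work with no crystallized pattern."""
    if task_kind in crystallized:
        return "deterministic"   # known pattern: structure handles it, near-free
    if budget_remaining >= llm_cost:
        return "llm"             # genuinely novel: worth the inference spend
    return "escalate"            # budget exhausted: queue for human review

crystallized = {"weather-lookup", "order-validation"}
```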

Progressive Automation

Start with AI-heavy nets when you’re figuring things out. Migrate steps to deterministic as they stabilize. Continuous improvement built into the architecture.

Safer Autonomy

Deterministic guardrails. Explicit AI “decision points.” Approvals only where risk is real. No black box reasoning about critical business logic.

Reproducibility

Same net definition + token history = reproducible execution. Replay for debugging and audits. Full traceability of what happened and why.

Composability

Nets can call nets. Meta-nets can generate or refactor other nets. Build complex systems from simple, tested parts.

Knowledge Capture

Patterns encoded as nets = shareable knowledge. No more tribal chat lore. Operational knowledge preserved in executable form.


The Energy Equation

There’s an environmental dimension worth mentioning. AI inference is computationally expensive—it consumes real energy, produces real carbon.

Agentic-Nets address this by:

  1. Minimizing redundant inference: Patterns crystallize to deterministic steps
  2. Right-sizing AI usage: Simple tasks don’t need powerful models
  3. Caching learned patterns: Context tokens prevent re-learning
  4. Structural execution: Most work handled by efficient engine code

Fewer tokens burned on predictable glue work. AI reserved for genuine ambiguity.

The net handles the routine. The AI handles the novel. That’s the split that scales.


A Practical Mental Model

Chat agents are great for exploration. When you’re figuring out what you need, how to approach a problem, what APIs to use—that’s where AI reasoning shines.

Agentic-Nets are what you use when exploration turns into operations. When you’ve solved the problem enough times that you see the pattern. When you want to stop explaining and start executing.

AgenticNetOS is the bridge: it helps you move from “AI helps me do work” to “work executes itself—with AI only where it truly adds value.”

[Figure: The journey from exploration to operations. Exploration: chat agents, human-in-the-loop, figuring things out. AgenticNetOS / Agentic-Nets: patterns crystallize → costs decline → systems evolve. Operations: living nets, self-improving execution, work runs itself.]

Final Thought

Most AI systems get more expensive the longer they run. They scale linearly with usage because every request, no matter how routine, gets the same expensive treatment.

Agentic-Nets do the opposite.

They learn from execution. They crystallize patterns into structure. They replace reasoning with rules where rules are sufficient. They keep humans in charge of evolution while automating the routine.

Over time, the cost curve bends down. The system gets cheaper, faster, and more reliable—not despite the AI, but because the AI knows when to step back.

That is the real value proposition of self-evolving automation: not more AI everywhere, but AI where it matters—and structure everywhere else.


Built with AgenticNetOS and Agentic-Nets—the framework for living automation that gets smarter over time.

Date: January 2026
