The Agent Transition is the core pattern in Agentic-Nets. It’s not an AI that executes everything—it’s an AI that creates the structures it needs and tracks what it’s waiting for.
The Central Insight
Here’s the key idea that makes Agentic-Nets different from typical AI automation:
The Agent Transition cannot make external HTTP calls. It cannot execute shell commands. It cannot directly interact with the outside world.
What it can do is far more powerful: it can create new transitions and places directly in the net, track what it needs via tokens, and use the outputs once humans deploy those structures.
This distinction is everything.
How It Actually Works
Let’s trace through a concrete scenario to see this pattern in action.
Step 1: The Agent Receives a Task
You have an Agent Transition with a simple goal: “Monitor our API health and alert when response times exceed 500ms.”
The agent reads this task from its input place. It also reads from a context place containing lessons learned from previous executions.
Input places:
├─ p-task: "Monitor API health, alert on slow responses"
├─ p-context: [lessons about how to structure monitoring]
└─ p-want: [empty - no outstanding requests yet]
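The three input places above can be sketched as plain data. This is a minimal illustration, not an Agentic-Nets API: the `Place` class and the token contents are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    id: str
    tokens: list = field(default_factory=list)

# Hypothetical input places for the monitoring task
p_task = Place("p-task", ["Monitor API health, alert on slow responses"])
p_context = Place("p-context", ["lesson: poll the endpoint, compare to a threshold"])
p_want = Place("p-want")  # empty: no outstanding requests yet
```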
Step 2: The Agent Reasons and Creates Structure
The agent thinks through the problem:
Agent reasoning:
"To monitor API health, I need to:
1. Periodically call the API endpoint
2. Check if response time > 500ms
3. If slow, send an alert
But I cannot make HTTP calls myself. I need:
- An HTTP transition to call the API
- A place to store the response
I will CREATE these structures directly in the net."
Here’s the key difference: the agent doesn’t just emit a “proposal token”. It actually creates the transition and place directly in the net (or in a new version of the net):
Agent CREATES in the net:
├─ Place: p-health-metrics
└─ Transition: t-check-api-health (kind: http)
├─ url: https://api.example.com/health
├─ method: GET
└─ postset: p-health-metrics
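A minimal sketch of what “creates directly in the net” could look like, assuming a hypothetical dict-based net representation (the real structure is not specified here). The crucial detail is the `deployed: False` flag: the agent mutates the net definition, but nothing runs yet.

```python
# Hypothetical net representation: the agent adds structure to the
# net definition, but nothing it creates runs until a human deploys it.
net = {"places": {}, "transitions": {}}

def create_place(net, place_id):
    net["places"][place_id] = {"tokens": []}

def create_transition(net, tid, kind, config, postset):
    net["transitions"][tid] = {
        "kind": kind,
        "config": config,
        "postset": postset,
        "deployed": False,  # created by the agent, not yet active
    }

create_place(net, "p-health-metrics")
create_transition(net, "t-check-api-health", kind="http",
                  config={"url": "https://api.example.com/health",
                          "method": "GET"},
                  postset=["p-health-metrics"])
```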
Step 3: The Agent Creates a “Want” Token for Itself
After creating the structure, the agent emits a token back to its own input place that tracks what it’s waiting for:
{
"want": "t-check-api-health",
"expecting": "tokens in p-health-metrics",
"reason": "Need API response times to evaluate threshold",
"created_at": "2026-01-29T14:30:00Z"
}
This “want” token goes to p-want—a preset place that the agent reads on every firing.
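Emitting a want token could be sketched like this; `emit_want` is a hypothetical helper, and the token shape mirrors the JSON above.

```python
from datetime import datetime, timezone

# Hypothetical helper: append a want token to the agent's own
# preset place so the next firing can see what is still pending.
def emit_want(p_want_tokens, transition_id, expecting, reason):
    p_want_tokens.append({
        "want": transition_id,
        "expecting": expecting,
        "reason": reason,
        "created_at": datetime.now(timezone.utc).isoformat(),
    })

p_want = []  # token list of the p-want place
emit_want(p_want, "t-check-api-health",
          "tokens in p-health-metrics",
          "Need API response times to evaluate threshold")
```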
Step 4: Human Sees New Version and Deploys
The human opens the net and sees: “New version available with 1 transition, 1 place”
New structures (not yet deployed):
┌─────────────────────────────────────────────────────┐
│ Transition: t-check-api-health (HTTP) │
│ Created by: Agent Transition (execution #1) │
│ Purpose: Fetch API health metrics │
│ Target: https://api.example.com/health │
│ │
│ Requires deployment: │
│ • Bind credentials: api-key │
│ • Set schedule: [every 60s] [every 5m] [manual] │
│ │
│ [Deploy] [Modify] [Discard] │
└─────────────────────────────────────────────────────┘
You decide:
- Which API endpoint to actually use
- What credentials to bind
- What schedule makes sense
- Whether to deploy at all
Once you click Deploy, the transition becomes active:
Event: TransitionDeployed
├─ transitionId: t-check-api-health
├─ createdBy: agent-transition-001 (execution #1)
├─ deployedBy: alexejsailer
├─ timestamp: 2026-01-29T14:35:00Z
└─ credentials: [api-key bound from vault]
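Deployment itself could be sketched as flipping the transition active and attaching a credential *reference* (a vault pointer, never the secret itself). All names below are hypothetical.

```python
# Hypothetical sketch: a human activates an agent-created transition.
# The agent only ever sees the reference, never the resolved secret.
def deploy(net, tid, deployed_by, credential_ref):
    t = net["transitions"][tid]
    t["deployed"] = True
    t["credential_ref"] = credential_ref
    return {"event": "TransitionDeployed",
            "transitionId": tid,
            "deployedBy": deployed_by}

net = {"transitions": {"t-check-api-health": {"deployed": False}}}
event = deploy(net, "t-check-api-health", "alexejsailer",
               "vault://production/api/key")
```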
Step 5: Next Firing — Agent Checks What It Got
The next time the Agent Transition fires, it reads from p-want and sees its outstanding request:
Agent reasoning (execution #2):
"I have a want token: expecting tokens in p-health-metrics
from t-check-api-health.
Let me check... YES! p-health-metrics now has tokens:
{ response_time_ms: 234, status: 'ok' }
I got what I wanted. I can now use this data."
The agent:
- Reads the health metrics from the deployed transition’s output
- Evaluates: 234ms < 500ms threshold → no alert needed
- Updates or removes its “want” token (fulfilled)
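The fulfillment check on the next firing could be sketched as follows. The place-name parsing and the token shape are assumptions for illustration.

```python
# Hypothetical sketch: read outstanding wants, see whether the expected
# output place has tokens, evaluate the threshold, and retire the want.
def check_wants(wants, places, threshold_ms=500):
    alerts = []
    for want in list(wants):
        place_id = want["expecting"].split()[-1]  # "tokens in p-health-metrics"
        tokens = places.get(place_id, [])
        if tokens:                                # want fulfilled
            alerts += [t for t in tokens
                       if t["response_time_ms"] > threshold_ms]
            wants.remove(want)                    # drop the fulfilled want
    return alerts

places = {"p-health-metrics": [{"response_time_ms": 234, "status": "ok"}]}
wants = [{"want": "t-check-api-health",
          "expecting": "tokens in p-health-metrics"}]
alerts = check_wants(wants, places)
# 234ms is under the 500ms threshold, so no alert is raised,
# and the fulfilled want is removed from the list
```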
Step 6: The Pattern Continues
If the agent needs more structure (e.g., an alert transition), it repeats the cycle:
Agent reasoning (execution #3):
"API response was 650ms — that exceeds 500ms!
I need to send an alert, but I cannot make HTTP calls.
Let me create an alert transition and track what I want."
Agent CREATES:
├─ Transition: t-send-slack-alert (kind: http)
└─ Emits want token: {"want": "t-send-slack-alert", ...}
Human deploys t-send-slack-alert
Agent (execution #4):
"t-send-slack-alert is now deployed. My alert was sent.
Want fulfilled."
The Self-Aware Feedback Loop
The “want” token pattern is powerful because the agent tracks its own state:
The agent doesn’t need external memory or a database to remember what it asked for. The “want” place IS its memory. Each firing:
- Read outstanding wants
- Check if they’re fulfilled (do the created structures have output?)
- If yes: use the data, mark want as done
- If no: wait, or create more structure
Why This Pattern Is Powerful
1. The Agent Creates, Humans Deploy
The agent can create any structure it needs directly in the net. But those structures don’t become active until a human deploys them. This gives you:
- Full visibility: See exactly what the agent wants to do
- Full control: Decide what actually runs
- Full audit: Every deployment has human approval
2. The Agent Tracks Its Own State
The “want” token pattern means the agent knows:
- What it asked for
- Whether it got it
- What to do next
No external state management needed. The net IS the state.
3. Costs Decrease Over Time
Execution 1: Agent creates HTTP transition → $0.05 inference
Execution 2: Agent checks, structure deployed, uses data → $0.02 inference
Execution 10+: Structure handles everything → $0.00 inference
Once deployed transitions are running, the agent doesn’t need to reason about them anymore.
The Crystallization Effect
Over multiple executions, the net accumulates structure: early firings create new transitions and places, while later firings simply read their outputs.
Comparing to Traditional Approaches
Traditional AI Agent (Direct Execution)
User: "Monitor our API"
AI Agent:
├─ Calls API directly
├─ No record of what it did
├─ Calls API again
├─ No way to know what it's waiting for
└─ Repeats forever at full cost
Problems:
├─ No self-awareness of outstanding requests
├─ No human control over external calls
├─ AI cost on every single iteration
└─ No structure learned
Agentic-Nets Agent Transition
User: "Monitor our API"
Agent Transition (Execution 1):
├─ Reasons: "I need HTTP transition"
├─ CREATES: t-check-api-health in net
├─ WRITES: want token to p-want
└─ AI cost: $0.05
Human deploys t-check-api-health
Agent Transition (Execution 2):
├─ READS: want token from p-want
├─ CHECKS: p-health-metrics has data? YES
├─ USES: real API data for evaluation
├─ MARKS: want as fulfilled
└─ AI cost: $0.02
Execution 10+:
├─ Structure runs autonomously
├─ Agent has no outstanding wants
└─ AI cost: $0.00
The Human’s Role
The human isn’t just a rubber stamp for approvals. When the agent creates structure, you decide:
What to Deploy
Agent created: t-check-api-health
Agent created: t-send-slack-alert
Agent created: t-log-metrics
You might deploy:
✓ t-check-api-health (we need this)
✓ t-send-slack-alert (yes, alert us)
✗ t-log-metrics (we already have logging)
How to Configure
Agent created with: url = "https://api.example.com/health"
You might change:
url = "https://api.internal.example.com/health"
schedule = "every 5 minutes" (not 60 seconds)
timeout = 10s
retryPolicy = 3 attempts
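The human’s changes can be thought of as a config merge over the agent’s starting values, where later keys win. The key names below are illustrative only.

```python
# Hypothetical sketch: the agent's config is a starting point that the
# human revises before deployment; override keys replace agent values.
agent_config = {"url": "https://api.example.com/health",
                "schedule": "every 60s"}
human_overrides = {"url": "https://api.internal.example.com/health",
                   "schedule": "every 5 minutes",
                   "timeout_s": 10,
                   "retry_attempts": 3}
deployed_config = {**agent_config, **human_overrides}
```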
What Credentials to Bind
The agent creates the structure. You bind the secrets:
Agent created: t-check-api-health
requires: api-key
You bind:
api-key → vault://production/api/key
The agent never sees the actual credentials.
Summary
| Aspect | Agent Transition |
|---|---|
| Can Do | Read tokens, create structures, track wants |
| Cannot Do | HTTP calls, commands, deploy, bind credentials |
| Creates | Transitions and places directly in the net |
| Tracks | “Want” tokens in its own preset place |
| Next Firing | Checks if wants are fulfilled |
| Human Role | Review created structures, deploy, bind credentials |
| Cost Trajectory | Decreases as structure crystallizes |
The Agent Transition pattern is fundamentally about self-awareness:
- The agent creates what it needs
- The agent tracks what it’s waiting for
- The agent checks if it got what it wanted
- The human controls what actually runs
This isn’t AI that does everything. It’s AI that knows what it needs, creates the structure for it, tracks what it’s waiting for, and uses the results once humans deploy.
That’s the difference between chaotic automation and structured, governable intelligence.
The agent creates. The agent tracks. The human deploys. The structure executes. And over time, the AI cost approaches zero while the capability only grows.