AgenticOS Remote Executors: Run Automation Anywhere—Even Behind Firewalls
A deep dive into AgenticOS’s polling-based executor architecture: how Petri nets can orchestrate work across networks, clouds, and corporate firewalls—without requiring a single inbound port.
The question that started it all
What if your automation platform could reach anywhere? Not just servers in your cloud VPC, but machines behind corporate firewalls. Legacy systems in air-gapped networks. Edge devices in retail stores. Production nodes that security teams refuse to expose.
Traditional orchestration tools fail this test. They require the orchestrator to push work to agents—which means those agents need open inbound ports. The moment you say “open port 8080 on that production server,” security teams reach for their veto stamps.
AgenticOS takes a different approach: executors pull work via outbound connections only. They poll for tasks, execute locally, and report results—all without exposing a single port to the network.
This isn’t just a nice-to-have. It’s the key to running agentic processes across enterprise environments, hybrid clouds, and edge deployments where security constraints are non-negotiable.
The five pillars of AgenticOS
Before diving into the executor architecture, let’s understand the five core services that make AgenticOS work. Think of them as specialized organs in a distributed brain:
- agentic-net-node (port 8080): The persistence layer. Stores all Petri net data as a tree-structured meta-filesystem with event sourcing. Think of it as the long-term memory—every token, every place, every transition definition lives here.
- agentic-net-master (port 8082): The orchestration hub. Handles LLM integration, token binding, transition scheduling, and result routing. When an executor asks “what should I do?”, master figures it out by querying node and sending back the work.
- agentic-net-gui (port 4200): The visual interface. An Angular app where you design Petri nets, watch tokens flow, and inspect execution state in real-time.
- agentic-net-executor (port 8084, optional): The action runner. Executes shell commands, file operations, and external tool invocations. This is where the actual work happens—and where the magic of “run anywhere” comes in.
- SA-BlobStore (port 8095): Binary artifact storage. When commands produce large outputs (logs, PDFs, screenshots), they upload here and pass only a URN reference through the net.
The polling architecture: why “pull” beats “push”
Here’s the core innovation: executors don’t listen for incoming connections. They poll the master every 2 seconds asking “do you have work for me?”
This simple inversion has profound implications:
- No inbound ports on executors: Firewall rules stay closed. Security teams stay happy.
- Works behind NAT: Executors in home offices, branch networks, or containerized environments just work.
- Survives network partitions: If the connection drops, the executor just keeps polling. Work queues up on master until connectivity returns.
- Stateless scaling: Spin up 10 executors, and they all poll independently. No coordination needed.
The polling loop in detail
Every 2 seconds, the executor calls master’s poll endpoint with its identity and currently deployed transitions:
```
GET /api/transitions/poll?executorId=exec-prod-01&modelId=default&deployed=t-grep,t-analyze
```
Master responds with lifecycle commands: DEPLOY a new transition, START/STOP polling, or—the important one—FIRE with pre-bound tokens ready for execution.
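As a rough sketch of this cycle, the executor’s loop might look like the following. The `/api/transitions/poll` path, the query parameters, and the DEPLOY/START/STOP/FIRE commands come from the description above; the response JSON shape and the helper names are illustrative assumptions, not the actual agentic-net-executor code:

```typescript
// Lifecycle commands as described above; the exact wire format is assumed.
type LifecycleCommand =
  | { type: "DEPLOY"; transitionId: string }
  | { type: "START"; transitionId: string }
  | { type: "STOP"; transitionId: string }
  | { type: "FIRE"; transitionId: string; tokens: unknown[] };

function buildPollUrl(master: string, executorId: string, deployed: string[]): string {
  return `${master}/api/transitions/poll` +
    `?executorId=${executorId}&modelId=default&deployed=${deployed.join(",")}`;
}

// Apply master's lifecycle commands to the local set of deployed transitions;
// FIRE hands the pre-bound tokens to a caller-supplied runner.
function handleCommands(
  commands: LifecycleCommand[],
  deployed: Set<string>,
  fire: (transitionId: string, tokens: unknown[]) => void,
): void {
  for (const cmd of commands) {
    if (cmd.type === "DEPLOY") deployed.add(cmd.transitionId);
    else if (cmd.type === "STOP") deployed.delete(cmd.transitionId);
    else if (cmd.type === "FIRE") fire(cmd.transitionId, cmd.tokens);
  }
}

// The loop itself: poll, apply, wait 2 seconds, repeat. A failed poll is
// simply skipped; work queues up on master until the next successful cycle.
async function pollLoop(master: string, executorId: string, deployed: Set<string>) {
  for (;;) {
    try {
      const res = await fetch(buildPollUrl(master, executorId, [...deployed]));
      if (res.ok) handleCommands(await res.json(), deployed, runTransition);
    } catch { /* network partition: keep polling, outbound-only */ }
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}

function runTransition(transitionId: string, tokens: unknown[]): void {
  // Local execution and result reporting are omitted from this sketch.
}
```

Note that the error path does nothing special: because the executor owns the connection, recovering from a partition is just the next poll.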
AWS VPC deployment: the enterprise pattern
Let’s make this concrete with a real-world AWS deployment. This pattern works for any cloud provider, but AWS terminology makes it easy to visualize.
The key insight: executors live anywhere. They could be:
- EC2 instances in another AWS account
- VMs in an on-premise data center
- Kubernetes pods in a different cluster
- Raspberry Pis at retail locations
- Developer laptops during testing
As long as they can make HTTPS calls to your master endpoint, they can participate in agentic processes.
How command transitions actually work
Now let’s trace the complete journey of a command through the system. This is where the distributed Petri net model really shines:
Command token structure
A command token is just JSON that tells the executor what to run:
```json
{
  "kind": "command",
  "id": "cmd-grep-001",
  "executor": "bash",
  "command": "exec",
  "args": {
    "command": "grep -rn 'TODO' ./src",
    "workingDir": "/app",
    "timeoutMs": 60000
  },
  "expect": "text"
}
```
The executor field routes the token to the right handler (bash, filesystem, MCP tools, etc.). The args object contains everything needed for execution, and the expect field tells the handler how to format the response.
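To illustrate the routing step, here is a minimal dispatch sketch. The kind/executor/command/args/expect fields match the token shown above; the handler registry, its stub members, and the result envelope are assumptions for illustration, not the actual agentic-net-executor API:

```typescript
// Command token shape, mirroring the JSON example above.
interface CommandToken {
  kind: "command";
  id: string;
  executor: string;                    // selects the handler: "bash", "filesystem", ...
  command: string;                     // operation within that handler, e.g. "exec"
  args: Record<string, unknown>;       // everything the handler needs to run
  expect: "text" | "json" | "binary";  // how the handler should format the response
}

type Handler = (command: string, args: Record<string, unknown>) => Promise<unknown>;

// Hypothetical handler registry; real handlers would shell out, touch the
// filesystem, or call MCP tools. These stubs only echo their input.
const handlers: Record<string, Handler> = {
  bash: async (_command, args) => ({ stdout: `ran: ${args.command}` }),
  filesystem: async () => ({}),
};

async function dispatch(token: CommandToken): Promise<unknown> {
  const handler = handlers[token.executor];
  if (!handler) throw new Error(`no handler for executor '${token.executor}'`);
  const raw = await handler(token.command, token.args);
  // 'expect' decides the response formatting; here we only tag it.
  return { tokenId: token.id, expect: token.expect, result: raw };
}
```

The point of the indirection is that new executor kinds become a one-line registry entry rather than a change to the polling loop.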
Binary artifacts: when tokens shouldn’t carry files
What happens when your command produces megabytes of output? A log tail, a PDF report, a screenshot, a heap dump?
Embedding large content in tokens breaks everything. The Petri net slows down, debugging becomes impossible, and you’re one buffer overflow away from disaster.
The solution: store the artifact in SA-BlobStore, pass only a URN reference.
To enable this pattern, add two fields to your command token:
```json
{
  "resultAs": "binaryUrn",
  "blobStore": {
    "host": "http://blobstore:8095",
    "idStrategy": "timestamp"
  }
}
```
The executor automatically uploads output files and returns a lightweight URN reference. Downstream transitions fetch the actual bytes only when needed.
Real-world patterns this enables
Once you have executors that can run anywhere and BlobStore for artifact handoff, entire categories of distributed automation become straightforward.
The security model: designed for zero-trust
The polling architecture isn’t just about network traversal—it fundamentally changes the security posture.
Putting it all together: a complete distributed agentic process
Let’s trace a realistic scenario end-to-end: automated security scanning across a hybrid environment.
Notice how different executors handle different parts of the agentic process—each in its own security zone—while the Petri net in the cloud orchestrates the overall flow. Raw scan data never leaves the data center as a token; only the BlobStore URN reference passes through.
Key takeaways
The AgenticOS executor architecture solves a fundamental problem in distributed automation: how do you orchestrate work across network boundaries without compromising security?
The answers:
- Polling over pushing: Executors pull work via outbound HTTPS. No inbound ports, no firewall exceptions, no attack surface.
- URN over embedding: Large artifacts go to BlobStore; tokens carry only references. The net stays fast and debuggable.
- Stateless by design: Executors don’t store state. Kill them, move them, scale them—the Petri net keeps running.
- Single net, many environments: The same agentic process definition drives dev, staging, and prod—executors in each environment handle their part.
- Event-sourced audit trail: Every token movement, every execution, every result—captured in agentic-net-node’s immutable event log.
This isn’t just architecture for architecture’s sake. It’s the foundation for automation that works in the real world—where security teams have veto power, where legacy systems exist, where “just open a port” isn’t an option.
Built with AgenticOS, agentic-net-executor, SA-BlobStore, and the polling architecture that makes it all work.
Date: January 24, 2026