The Conversational Control Plane — Managing Agentic-Nets from Telegram, CLI, and Beyond

A Petri net that nobody can reach is a Petri net that nobody uses. Until now, AgenticOS processes lived behind a browser tab — the Angular GUI was the sole entry point for creating places, starting transitions, and inspecting tokens. That constraint vanished. AgenticOS now ships a command-line interface and a Telegram bot that expose the full power of Agentic-Nets through natural language. Query tokens from your phone. Deploy a transition from a terminal. Summarize a 70 KB HTML token in one sentence — all through the same conversational agent loop that powers the GUI.

This is not a thin wrapper around an API. It is a complete autonomous agent that reasons about your Petri net, decides which tools to call, executes multi-step plans, and reports results — whether you typed the request in a chat bubble or a shell prompt.

[Architecture diagram. Entry points: the Telegram bot (grammy, multi-session, auto-compaction), the AgenticOS CLI (agenticos chat / agenticos agent, interactive and scripted modes), and the Angular GUI (visual Petri net editor with drag-and-drop and real-time updates). All three feed a single agent loop: system prompt plus history, LLM (Claude / Ollama), tool-call decision, execute and collect result, then DONE or the next iteration. The loop targets the AgenticOS backend: agentic-net-node on :8080 (tree engine, ArcQL, places, tokens, events), agentic-net-master on :8082 (designtime and deployment, net creation, inscriptions), agentic-net-executor on :8084 (runtime execution of transitions, commands, and agents). 29 tools (query, create, deploy, extract, inspect) are shared across all channels.]

Three entry points — Telegram, CLI, Angular GUI — converge on a single agent loop that orchestrates 29 tools against the AgenticOS backend.

* * *

Why a Control Plane Matters

Traditional orchestration engines give you two options: click through a UI, or write raw API calls. The first is convenient but slow. The second is powerful but brittle. Neither scales to a world where an operator might need to inspect tokens from a moving train, deploy a transition during a video call, or ask “what went wrong with yesterday’s scraping run?” and get a structured answer in seconds.

The conversational control plane eliminates this tradeoff. You describe intent in natural language. The agent loop translates intent into tool calls. The tools execute against the exact same APIs that power the GUI. Nothing is lost in translation because the same 29 tools — from QUERY_TOKENS to DEPLOY_TRANSITION — are available regardless of channel.

Key insight: The agent does not replace the GUI. It complements it. Use the visual editor to design nets. Use the CLI to deploy and test them. Use Telegram to monitor and debug them from anywhere. Each channel optimizes for a different moment in the agentic process lifecycle.

* * *

The Agent Loop: Think, Act, Observe, Repeat

Every interaction — whether from Telegram, CLI, or an internal transition — runs through the same core loop. A user message enters. The LLM reads the system prompt (which encodes AgenticOS knowledge and all 29 tool definitions), reasons about the request, and emits a structured tool call. The tool executor dispatches the call to the AgenticOS backend. The result feeds back into the conversation. The LLM decides: call another tool, or declare the task done.
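The core of this cycle can be sketched in a few lines of TypeScript. This is an illustrative reconstruction, not the shipped agentLoop: the `think` and `act` parameters stand in for the LLM provider and the tool executor, and the message shape is an assumption.

```typescript
// Minimal think-act-observe loop (illustrative; names and shapes assumed).
type Message = { role: "user" | "assistant" | "tool"; content: string };
type ToolCall = { tool: string; args: Record<string, unknown> };
type LLMStep = { call?: ToolCall; done?: string };

async function agentLoop(
  task: string,
  think: (history: Message[]) => Promise<LLMStep>, // LLM provider
  act: (call: ToolCall) => Promise<string>,        // tool executor
  maxIterations = 10,
): Promise<string> {
  const history: Message[] = [{ role: "user", content: task }];
  for (let i = 0; i < maxIterations; i++) {
    const step = await think(history);             // think
    if (step.done !== undefined) return step.done; // DONE signal ends the loop
    if (!step.call) throw new Error("provider returned neither a tool call nor DONE");
    history.push({ role: "assistant", content: JSON.stringify(step.call) });
    const result = await act(step.call);           // act
    history.push({ role: "tool", content: result }); // observe: result feeds back
  }
  throw new Error("iteration budget exhausted");
}
```

Note that the history is an ordinary array rebuilt and passed in full on every `think` call, which is exactly what makes the loop provider-agnostic.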

[Diagram: one loop iteration. The user message "list tokens in p-raw-html" goes to the LLM together with the system prompt, tool definitions, and conversation history. The LLM emits QUERY_TOKENS {placePath, query}; ArcQL executes against node :8080 and returns 4 tokens; the JSON result joins the history. In the next iteration the LLM notices the truncated values, calls EXTRACT_TOKEN_CONTENT {tokenName, mode: "text"}, reduces 69 KB of HTML to 4 KB of plain text, and signals DONE. Two iterations, two tool calls, one coherent answer; the agent decided what to do at each step.]

The agent loop processes a user request through multiple iterations — each iteration calls one tool, collects the result, and decides whether to continue or finish.

This loop is the same regardless of LLM provider. The system supports Claude (direct API), Claude Code CLI (spawned subprocess), and Ollama (local models). Each iteration is stateless at the provider level — the full conversation history is reconstructed from the session and passed with every call. This makes the architecture resilient: if a single call fails, the session manager retries or auto-compacts the history to fit within provider limits.

* * *

29 Tools, One Vocabulary

The agent’s tool set maps directly to AgenticOS’s REST APIs. No abstraction gap, no leaky translation layer. Every tool the agent calls produces the same outcome as clicking the equivalent button in the GUI or issuing the equivalent curl command.

| Category | Tools | What They Do |
| --- | --- | --- |
| Token Operations | QUERY_TOKENS, CREATE_TOKEN, DELETE_TOKEN, EXTRACT_TOKEN_CONTENT | Read, write, analyze token data |
| Place Discovery | LIST_PLACES, GET_PLACE_INFO | Inspect runtime places and token counts |
| Net Structure | CREATE_NET, CREATE_PLACE, CREATE_TRANSITION, CREATE_ARC, DELETE_* | Build and modify Petri net topology |
| Inscriptions | SET_INSCRIPTION, GET_TRANSITION, LIST_ALL_INSCRIPTIONS | Configure transition runtime behavior |
| Deployment | DEPLOY_TRANSITION, START_TRANSITION, STOP_TRANSITION, FIRE_ONCE | Lifecycle management for runtime transitions |
| Reasoning | THINK, DONE, FAIL | Planning checkpoints and task completion signals |

Tools are assigned through a role-based access control system. A read-only role (r) gets query and inspection tools. A read-write role (rw) adds creation and modification tools. A full role (rwx) adds deployment and execution tools. The same RBAC applies to CLI users, Telegram bot sessions, and GUI-initiated agent transitions.
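The role gating can be pictured as a simple superset relation: each role inherits everything the weaker role can do. The tool names below come from the table above; the partitioning into read/write/execute tiers is a plausible sketch, not the actual RBAC implementation.

```typescript
// Illustrative role-based tool assignment (tiering is an assumption).
type Role = "r" | "rw" | "rwx";

const READ_TOOLS = ["QUERY_TOKENS", "LIST_PLACES", "GET_PLACE_INFO", "EXTRACT_TOKEN_CONTENT"];
const WRITE_TOOLS = ["CREATE_TOKEN", "DELETE_TOKEN", "CREATE_NET", "SET_INSCRIPTION"];
const EXEC_TOOLS = ["DEPLOY_TRANSITION", "START_TRANSITION", "STOP_TRANSITION", "FIRE_ONCE"];

function toolsForRole(role: Role): string[] {
  const tools = [...READ_TOOLS];                  // every role can query and inspect
  if (role === "rw" || role === "rwx") tools.push(...WRITE_TOOLS);
  if (role === "rwx") tools.push(...EXEC_TOOLS);  // deployment needs the full role
  return tools;
}
```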

* * *

EXTRACT_TOKEN_CONTENT: Solving the 70 KB Problem

Real-world tokens are not 50-byte JSON blobs. A web scraping transition emits 70 KB of raw HTML per page. A document processing pipeline produces multi-page contracts. An API response from a complex query returns kilobytes of nested JSON. When QUERY_TOKENS returns these tokens, the full content would consume the LLM’s context window in a single result — leaving no room for reasoning.

The solution is a two-tier architecture. QUERY_TOKENS auto-truncates values to 500 characters and includes a hint: “Values truncated. Use EXTRACT_TOKEN_CONTENT for full content.” The agent reads the hint and calls EXTRACT_TOKEN_CONTENT with the specific token it wants to analyze.
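The first tier amounts to a guard applied to every token value before it enters the result. A minimal sketch, assuming a `_hint` field alongside the truncated value (the exact response shape is an assumption):

```typescript
// Sketch of QUERY_TOKENS auto-truncation (response shape assumed).
function truncateTokenValue(value: string, maxValueLength = 500) {
  if (value.length <= maxValueLength) return { value, truncated: false };
  return {
    value: value.slice(0, maxValueLength),
    truncated: true,
    // The hint is what nudges the agent toward the second tier.
    _hint: "Values truncated. Use EXTRACT_TOKEN_CONTENT for full content.",
  };
}
```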

[Diagram: token content pipeline, from 69,417 characters of raw HTML, CSS, and JS to actionable intelligence. QUERY_TOKENS auto-truncates the value to 500 characters and attaches the hint "Use EXTRACT_TOKEN_CONTENT". EXTRACT_TOKEN_CONTENT then applies one of four modes: text (strip HTML, about 4 KB of clean plain text), links (12 URLs extracted), structure (H1-H3 outline plus forms), or summarize (strip tags, chunk into 15 KB segments, one Haiku call per chunk, aggregate to a roughly 200-word summary). Context-window cost drops from 69 KB to about 4 KB.]

EXTRACT_TOKEN_CONTENT reduces 69 KB of raw HTML to a few kilobytes of structured content, preserving context window budget for the agent’s reasoning.

The summarize mode is the most powerful. It strips HTML, chunks the plain text into 15 KB segments, sends each chunk to a helper LLM (Claude Haiku — fast and cheap), collects per-chunk summaries, and aggregates them into a final coherent summary. The main agent never sees the raw 70 KB. It receives a 200-word summary that captures the essential content.
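The summarize pipeline has a simple shape: strip, chunk, map, reduce. The sketch below captures that structure; the helper-LLM call is injected as a parameter so the pipeline itself is testable, and all names are illustrative rather than the shipped code.

```typescript
// Structural sketch of summarize mode: strip HTML, chunk to 15 KB,
// summarize each chunk, aggregate (names and details are illustrative).
const CHUNK_SIZE = 15_000; // 15 KB of plain text per helper-LLM call

function stripHtml(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

function chunk(text: string, size = CHUNK_SIZE): string[] {
  const out: string[] = [];
  for (let i = 0; i < text.length; i += size) out.push(text.slice(i, i + size));
  return out;
}

async function summarizeToken(
  html: string,
  summarizeChunk: (t: string) => Promise<string>, // e.g. one Claude Haiku call
): Promise<string> {
  const parts = await Promise.all(chunk(stripHtml(html)).map(summarizeChunk));
  // Single chunk: done. Multiple chunks: one aggregation pass over the summaries.
  return parts.length === 1 ? parts[0] : await summarizeChunk(parts.join("\n"));
}
```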

The regex-based modes — text, links, structure, head — require zero API calls. They process content locally using compiled regular expressions, returning results in milliseconds. This makes them ideal for quick inspection before committing to an LLM summarization call.
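For example, a links mode needs nothing beyond a compiled regular expression over the raw HTML. This is an illustrative version, not the shipped extractor:

```typescript
// Sketch of the zero-API "links" mode: pull href targets with a regex.
function extractLinks(html: string): string[] {
  const hrefRe = /<a\b[^>]*\bhref\s*=\s*["']([^"']+)["']/gi;
  const links: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = hrefRe.exec(html)) !== null) links.push(m[1]); // capture group 1 = URL
  return links;
}
```

Because no LLM is involved, the cost of a quick inspection like this is effectively zero, which is what makes it a sensible first step before paying for a summarization call.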

* * *

The Telegram Bot: Agentic-Nets in Your Pocket

The Telegram bot is built on grammy and runs as a standalone Node.js process. Each Telegram user gets an isolated session with its own conversation history. When the history grows beyond 30,000 tokens, the bot auto-compacts by asking the LLM to summarize the conversation so far, then replaces the full history with the summary. This keeps sessions bounded while preserving context.
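The compaction step can be sketched as a pure function over the session history. Token counting and the summarization call are injected here because both are provider-specific; the threshold comes from the article, the rest is illustrative.

```typescript
// Sketch of 30K-token auto-compaction (threshold from the article;
// counting and summarization strategies are assumptions).
const COMPACT_THRESHOLD = 30_000;

async function maybeCompact(
  history: string[],
  countTokens: (h: string[]) => number,
  summarize: (h: string[]) => Promise<string>, // one LLM call over the history
): Promise<string[]> {
  if (countTokens(history) <= COMPACT_THRESHOLD) return history;
  const summary = await summarize(history);
  // Replace the full history with a single summary message, preserving context.
  return [`[Conversation summary] ${summary}`];
}
```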

AgenticOS Chat Bridge
  Provider: claude-code
  Model:    autonomous-agent
  Role:     rwx

[Telegram] Message from user 664865552
  [tool] QUERY_TOKENS(placePath, query, maxValueLength)
  [result] {"resultCount":4,"results":[...]}
  [tool] EXTRACT_TOKEN_CONTENT(placePath, tokenName, mode)
  [result] {"mode":"text","originalLength":69417,"text":"..."}
  [DONE]

The verbose log above shows a real session. The agent received a question about tokens in p-raw-html, queried the place, found 4 tokens with truncated HTML content, decided to extract the full text of the first token, and delivered a structured summary — all without human intervention between steps.

Error recovery is built in. If the LLM provider crashes (prompt too long, rate limit, network timeout), the session manager catches the error, auto-compacts the history, and tells the user to retry. If compaction also fails, it clears the session entirely. The user never sees a stack trace — only a human-readable message explaining what happened and what to do next.

* * *

The CLI: Scripted and Interactive

The AgenticOS CLI serves two audiences. In interactive mode (agenticos chat), it behaves like the Telegram bot — conversational, multi-turn, with the same agent loop and session management. In scripted mode (agenticos agent), it takes a single task description and runs the agent loop to completion, making it composable with shell pipelines and automation scripts.

Both modes share the same ToolExecutor, agentLoop, and buildSystemPrompt implementations. The only difference is transport: stdin/stdout for the CLI, the grammy bot transport for Telegram. This guarantees that an agentic process tested via the CLI behaves identically when triggered from Telegram or an internal agent transition.

$ agenticos chat
AgenticOS Agent (model: autonomous-agent, role: rwx)

You: deploy t-scrape and start it
Agent: [THINK] Planning deployment...
Agent: [DEPLOY_TRANSITION] t-scrape → assigned to executor
Agent: [START_TRANSITION] t-scrape → running
Agent: Transition t-scrape is now running. It will poll
       for tokens every 2 seconds.

* * *

Why This Matters for Agentic-Nets

Agentic-Nets are Petri nets where AI agents read documentation tokens, execute tasks, and write results back as new tokens — creating a self-improving loop. The conversational control plane completes this picture by making the human part of the loop as fluid as the machine part.

[Diagram: the complete Agentic-Nets feedback loop. The human control plane (Telegram: "analyze p-raw-html", CLI: "deploy t-scrape", GUI: drag, drop, connect) and machine execution (agent transitions fire, command transitions execute, HTTP transitions call APIs) meet in the event-sourced Petri net engine with ArcQL token queries. Transitions emit results as tokens (HTML, JSON, summaries, learned patterns, errors); results become new tokens; humans observe token summaries, status, and metrics, then inspect and adjust.]

Humans and machines share the same Petri net — humans control through conversation, machines execute through transitions, results flow back as tokens that both can read.

Consider a concrete scenario. A web scraping transition produces 70 KB HTML tokens in p-raw-html. From your phone, you ask the Telegram bot: “What did the scraper find?” The agent calls QUERY_TOKENS, sees truncated content, calls EXTRACT_TOKEN_CONTENT in text mode, and tells you: “4 pages scraped from alexejsailer.com — blog posts about agent transitions, command transitions, and template interpolation.” You reply: “Clean up duplicates and keep only the latest.” The agent calls DELETE_TOKEN three times and confirms. Total time: 30 seconds. No browser. No terminal. No context switch.

This is what makes Agentic-Nets different from traditional orchestration engines. The boundary between “designing an agentic process” and “operating an agentic process” dissolves. You are always one sentence away from any operation on any token in any place.

* * *

Lessons from Production: Three Bugs That Shaped the Architecture

Building the conversational control plane revealed three architectural assumptions that broke under real-world load. Each fix made the system more resilient.

Bug 1: ArcQL Scope Resolution. The QUERY_TOKENS tool was sending a path string (root/workspace/places/p-raw-html) where the ArcQL engine expected a UUID. Without a valid UUID, the engine defaulted to the root node, returning every leaf in the entire model rather than just the target place’s children: an 8-token result that mixed unrelated agentic process instructions with scraped HTML. The fix: resolve the path to a UUID before querying. One line of code, hours of debugging.
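The shape of the fix is a lookup that fails loudly instead of falling back to the root. A minimal sketch, with a plain Map standing in for the real model tree:

```typescript
// Sketch of path-to-UUID resolution before an ArcQL query (illustrative).
function resolvePlaceId(path: string, pathToUuid: Map<string, string>): string {
  const uuid = pathToUuid.get(path);
  // The original bug: an unresolved path silently scoped the query to root.
  if (!uuid) throw new Error(`unknown place path: ${path}`);
  return uuid; // ArcQL scopes the query to this node's children
}
```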

Bug 2: Prompt Size Limits. The Claude Code CLI provider spawns claude -p as a subprocess, passing the system prompt via command-line argument and the conversation history via stdin. When QUERY_TOKENS returned 8 tokens of full agentic process instructions (30+ KB), the second iteration exceeded the CLI’s prompt size limit. The fix: cap tool results at 8 KB in the conversation history, with a truncation hint pointing to EXTRACT_TOKEN_CONTENT. Total user prompt capped at 80 KB with graceful degradation.
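The per-result cap is a one-function guard applied as tool output enters the history. Limits are taken from the article; the hint wording and function shape are illustrative.

```typescript
// Sketch of the 8 KB tool-result cap with a truncation hint (illustrative).
const MAX_TOOL_RESULT = 8 * 1024;

function capToolResult(result: string, max = MAX_TOOL_RESULT): string {
  if (result.length <= max) return result;
  return (
    result.slice(0, max) +
    "\n[truncated — use EXTRACT_TOKEN_CONTENT for full content]"
  );
}
```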

Bug 3: Silent Failure Diagnostics. The claude -p subprocess exited with code 1 and empty stderr. The error message — "claude-code exited with code 1: " — told us nothing. The fix: capture 2 KB of stderr on failure, falling back to the last 500 bytes of stdout if stderr is empty. The next time the CLI fails, the error message will explain why.
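That fallback logic fits in one function, assuming the captured streams are available as strings when the subprocess exits (the message format mirrors the article's error string; the function itself is a sketch):

```typescript
// Sketch of the diagnostics fix: prefer up to 2 KB of stderr, fall back
// to the last 500 bytes of stdout when stderr is empty (illustrative).
function describeFailure(code: number, stderr: string, stdout: string): string {
  const detail = stderr.trim() ? stderr.slice(0, 2048) : stdout.slice(-500);
  return `claude-code exited with code ${code}: ${detail}`;
}
```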

These are not mere bug fixes. Each one exposed a contract boundary between components — path vs. UUID, prompt size vs. provider capacity, error signal vs. error detail. Robust distributed systems are built by finding and hardening these boundaries.

* * *

Summary

  • Conversational Control Plane — Telegram bot and CLI join the Angular GUI as first-class interfaces to AgenticOS, sharing the same 29-tool agent loop and role-based access control.
  • EXTRACT_TOKEN_CONTENT — Five processing modes (text, links, structure, head, summarize) reduce 70 KB tokens to kilobytes of actionable content without consuming the agent’s context window.
  • Provider-Agnostic Agent Loop — The same think → act → observe → repeat cycle works across Claude API, Claude Code CLI, and Ollama, with automatic session compaction and error recovery.
  • Multi-Session Telegram Architecture — Each user gets an isolated session with auto-compaction at 30K tokens, queued message processing, and graceful error recovery.
  • ArcQL Scope Fix — Token queries now resolve paths to UUIDs before execution, ensuring results are scoped to the target place instead of the entire model tree.
  • Prompt Budget Management — Tool results are capped at 8 KB in conversation history, total prompts at 80 KB, with truncation hints guiding the agent to specialized extraction tools.

The conversational control plane transforms Agentic-Nets from a system you operate through a browser into a system you operate through language — from any device, any channel, any moment. The Petri net is always one sentence away.
