LLM Transitions – The Complete Guide to AI-Powered Token Processing in Agentic Nets
The LLM transition is the intelligence engine in Agentic Nets. While Pass and Map transitions handle simple routing and transformation, and HTTP transitions communicate with external APIs, LLM transitions bring artificial intelligence directly into your agentic processes – enabling classification, analysis, enrichment, translation, and any task that benefits from natural language understanding.
In this comprehensive guide, we’ll explore every capability of LLM transitions:
- Prompt templates – dynamic prompts with ${…} interpolation from input tokens
- System prompts – customize LLM behavior and output format
- JSON response handling – automatic parsing with markdown code block stripping
- Conditional routing – route to different postsets based on LLM response fields
- Multi-emit patterns – send responses to multiple destinations
- Token metadata – include token ID, name, and parent info in prompts
- Timeout handling – configurable timeouts for LLM operations
- Error handling – graceful handling of empty inputs, special characters, and edge cases
Anatomy of an LLM Transition Inscription
Every LLM transition inscription follows this structure:
```json
{
  "id": "t-my-llm-task",
  "kind": "task",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "analysis-queue",
      "host": "myModel@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "output": { "placeId": "results-queue", "host": "myModel@localhost:8080" },
    "error": { "placeId": "error-queue", "host": "myModel@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "nl": "Analyze this data: ${input.data.content}. Return JSON with 'sentiment', 'confidence', and 'summary' fields.",
    "system": "You are a sentiment analysis expert. Always respond with valid JSON only, no markdown.",
    "timeoutMs": 60000
  },
  "emit": [
    { "to": "output", "from": "@response.json", "when": "success" },
    { "to": "error", "from": "@input.data", "when": "error" }
  ]
}
```
Key elements:
- kind: "task" – identifies this as an action transition
- action.type: "llm" – specifies the LLM action handler
- action.nl – the prompt template with ${…} interpolation
- action.system – optional system prompt for LLM behavior customization
- emit.from – what to emit: @response.json for the parsed response or @input.data for the original token
Template Interpolation
LLM prompts support full ${...} interpolation from input tokens. You can access any field from the token data or metadata:
Accessing Token Data Fields
```json
{
  "action": {
    "type": "llm",
    "nl": "Analyze order ${input.data.orderId} for customer ${input.data.customerId}. Amount: ${input.data.amount}, Status: ${input.data.status}. Classify as 'urgent' or 'normal'."
  }
}
```
If your input token has:
```json
{
  "orderId": "ORD-12345",
  "customerId": "CUST-789",
  "amount": "1500.00",
  "status": "pending"
}
```
The prompt becomes: “Analyze order ORD-12345 for customer CUST-789. Amount: 1500.00, Status: pending. Classify as ‘urgent’ or ‘normal’.”
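The substitution described above can be sketched as a small resolver that walks each ${…} path through the token. This is an illustrative implementation only, not the engine's actual resolver:

```python
import re

def interpolate(template: str, token: dict) -> str:
    """Replace each ${path.to.field} placeholder with the value found by
    walking that dot-separated path through the token dict. Sketch only --
    the real AgenticOS resolver may handle missing fields differently."""
    def resolve(match):
        value = token
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

token = {"input": {"data": {"orderId": "ORD-12345", "status": "pending"}}}
prompt = interpolate(
    "Analyze order ${input.data.orderId}, status ${input.data.status}.", token
)
```

Resolving the path left to right is what lets the same syntax reach both data fields (input.data.*) and metadata (input._meta.*).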
Accessing Token Metadata
Every token carries metadata that can be included in prompts:
```json
{
  "action": {
    "type": "llm",
    "nl": "Process token with ID: ${input._meta.id}, Name: ${input._meta.name}. Data: ${input.data.payload}. Return JSON with 'tokenId', 'processed', and 'result' fields."
  }
}
```
Available metadata fields:
- ${input._meta.id} – token UUID
- ${input._meta.name} – token name
- ${input._meta.parentId} – parent place UUID
Custom System Prompts
System prompts shape how the LLM behaves and responds. Use them to enforce output formats, define roles, or set constraints:
Enforcing JSON Output
```json
{
  "action": {
    "type": "llm",
    "nl": "Analyze this text: ${input.data.content}",
    "system": "You are a text analyst. Always respond with valid JSON containing 'sentiment' (positive/negative/neutral), 'complexity' (simple/moderate/complex), 'wordCount', and 'characterCount' fields. Never include markdown formatting."
  }
}
```
Defining Expert Roles
```json
{
  "action": {
    "type": "llm",
    "nl": "Review this code: ${input.data.code}",
    "system": "You are a senior code reviewer specializing in security and performance. Analyze code for vulnerabilities, inefficiencies, and best practice violations. Return JSON with 'issues' (array), 'severity' (critical/high/medium/low), and 'recommendations' fields."
  }
}
```
Multi-Language Tasks
```json
{
  "action": {
    "type": "llm",
    "nl": "Translate: ${input.data.text}",
    "system": "You are a professional translator. Translate text from ${input.data.sourceLanguage} to ${input.data.targetLanguage}. Maintain tone and context. Return JSON with 'translation', 'confidence' (0-100), and 'notes' (array of translation decisions) fields."
  }
}
```
JSON Response Handling
LLM transitions automatically handle JSON parsing, including stripping markdown code blocks that models often include:
Automatic Markdown Stripping
If an LLM returns:
```json
{
"classification": "urgent",
"confidence": 95
}
```
The LLM handler automatically strips the markdown fencing and parses the JSON, making the fields available in the emitted token.
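The stripping step can be sketched as: detect an optional fenced block, unwrap it, then parse. This is a minimal illustration of the behavior described above, not the handler's actual code:

```python
import json
import re

def parse_llm_json(raw: str):
    """Strip an optional ```json ... ``` fence from an LLM reply, then
    parse the remainder as JSON. Sketch of the documented behavior; the
    real handler's edge-case rules may differ."""
    text = raw.strip()
    fence = re.match(r"^```(?:json)?\s*\n(.*)\n```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    return json.loads(text)

# Works whether or not the model wrapped its reply in a code block:
parsed = parse_llm_json('```json\n{"classification": "urgent", "confidence": 95}\n```')
bare = parse_llm_json('{"classification": "urgent", "confidence": 95}')
```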
Nested JSON Responses
Complex nested responses are fully supported:
```json
{
  "action": {
    "type": "llm",
    "nl": "Process order ${input.data.orderId} with ${input.data.itemCount} items. Return JSON with 'orderId', 'itemCount', 'status', and 'summary' object containing 'totalItems' and 'processedAt' fields."
  }
}
```
Response (nested objects preserved):
```json
{
  "orderId": "ORD-999",
  "itemCount": "2",
  "status": "processed",
  "summary": {
    "totalItems": 2,
    "processedAt": "2026-01-05T12:00:00Z"
  }
}
```
Response Routing
The power of LLM transitions comes from routing responses to different postsets based on the LLM’s output. This enables intelligent agentic process branching.
Basic Success/Error Routing
```json
{
  "emit": [
    { "to": "success", "from": "@response.json", "when": "success" },
    { "to": "error", "from": "@input.data", "when": "error" }
  ]
}
```
The when field can be:
- "success" – LLM completed successfully
- "error" – LLM call failed or timed out
Conditional Routing by Response Field
IMPORTANT: LLM transitions use a two-field pattern for conditional routing:
- "when": "success" – enables the emit rule (required)
- "condition": "field == 'value'" – evaluates the actual field condition
```json
{
  "postsets": {
    "approved": { "placeId": "approved-queue", "host": "myModel@localhost:8080" },
    "rejected": { "placeId": "rejected-queue", "host": "myModel@localhost:8080" },
    "review": { "placeId": "review-queue", "host": "myModel@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "nl": "Evaluate request ${input.data.requestId} with amount ${input.data.amount}. Return JSON with 'status' (approved/rejected/review) and 'reason' fields."
  },
  "emit": [
    { "to": "approved", "from": "@response.json", "when": "success", "condition": "status == 'approved'" },
    { "to": "rejected", "from": "@response.json", "when": "success", "condition": "status == 'rejected'" },
    { "to": "review", "from": "@response.json", "when": "success", "condition": "status == 'review'" }
  ]
}
```
Numeric Comparison Routing
Route based on numeric thresholds:
```json
{
  "emit": [
    { "to": "high-confidence", "from": "@response.json", "when": "success", "condition": "confidence >= 80" },
    { "to": "low-confidence", "from": "@response.json", "when": "success", "condition": "confidence < 80" }
  ]
}
```
Boolean Field Routing
```json
{
  "emit": [
    { "to": "valid", "from": "@response.json", "when": "success", "condition": "isValid == true" },
    { "to": "invalid", "from": "@response.json", "when": "success", "condition": "isValid == false" }
  ]
}
```
Multi-Emit Patterns
A single LLM response can be emitted to multiple postsets simultaneously:
```json
{
  "postsets": {
    "primary": { "placeId": "main-results", "host": "myModel@localhost:8080" },
    "audit": { "placeId": "audit-log", "host": "myModel@localhost:8080" },
    "metrics": { "placeId": "metrics-queue", "host": "myModel@localhost:8080" }
  },
  "emit": [
    { "to": "primary", "from": "@response.json", "when": "success" },
    { "to": "audit", "from": "@response.json", "when": "success" },
    { "to": "metrics", "from": "@response.json", "when": "success" }
  ]
}
```
This pattern is useful for:
- Audit logging – every LLM decision automatically logged
- Metrics collection – feed LLM outputs to analytics pipelines
- Parallel processing – multiple downstream transitions process the same result
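Conceptually, multi-emit is a fan-out loop over the emit rules: every rule whose when matches the outcome delivers its chosen payload to its postset. A minimal sketch of that idea, with illustrative names (queues as plain lists) rather than real AgenticOS internals:

```python
def apply_emit_rules(rules, response_json, input_data, outcome, queues):
    """Deliver a payload to every postset whose emit rule matches the
    outcome. 'queues' maps postset name -> list; purely illustrative."""
    sources = {"@response.json": response_json, "@input.data": input_data}
    for rule in rules:
        if rule["when"] == outcome:
            queues[rule["to"]].append(sources[rule["from"]])

queues = {"primary": [], "audit": [], "metrics": [], "error": []}
rules = [
    {"to": "primary", "from": "@response.json", "when": "success"},
    {"to": "audit",   "from": "@response.json", "when": "success"},
    {"to": "metrics", "from": "@response.json", "when": "success"},
    {"to": "error",   "from": "@input.data",   "when": "error"},
]
apply_emit_rules(rules, {"sentiment": "positive"}, {"raw": "text"}, "success", queues)
```

Note that the same parsed response lands in all three success queues, while the error rule stays silent: fan-out and outcome gating in one pass.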
Real-World Use Cases
1. Sentiment Analysis Pipeline
Route customer feedback to appropriate teams based on sentiment:
```json
{
  "id": "t-sentiment-analyzer",
  "kind": "task",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "feedback-queue",
      "host": "myModel@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "positive": { "placeId": "positive-feedback", "host": "myModel@localhost:8080" },
    "negative": { "placeId": "escalation-queue", "host": "myModel@localhost:8080" },
    "neutral": { "placeId": "archive", "host": "myModel@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "nl": "Analyze customer feedback: ${input.data.feedback}. Customer ID: ${input.data.customerId}. Return JSON with 'sentiment' (positive/negative/neutral), 'confidence' (0-100), and 'keywords' (array) fields.",
    "system": "You are a customer sentiment analyst. Be accurate in detecting negative sentiment that may require escalation. Respond with JSON only."
  },
  "emit": [
    { "to": "positive", "from": "@response.json", "when": "success", "condition": "sentiment == 'positive'" },
    { "to": "negative", "from": "@response.json", "when": "success", "condition": "sentiment == 'negative'" },
    { "to": "neutral", "from": "@response.json", "when": "success", "condition": "sentiment == 'neutral'" }
  ]
}
```
2. Data Enrichment
Enrich tokens with LLM-generated data:
```json
{
  "id": "t-data-enricher",
  "kind": "task",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "raw-products",
      "host": "myModel@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "enriched": { "placeId": "enriched-products", "host": "myModel@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "nl": "Enrich product: ${input.data.productName}, Category: ${input.data.category}, Price: ${input.data.price}. Return JSON with original fields plus 'description' (marketing copy), 'tags' (array of SEO keywords), and 'targetAudience' fields.",
    "system": "You are a product data enrichment specialist. Generate compelling descriptions and relevant tags. Respond with JSON only."
  },
  "emit": [
    { "to": "enriched", "from": "@response.json", "when": "success" }
  ]
}
```
3. Code Review Automation
Automated code analysis and routing:
```json
{
  "id": "t-code-reviewer",
  "kind": "task",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "code-submissions",
      "host": "myModel@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "approved": { "placeId": "ready-to-merge", "host": "myModel@localhost:8080" },
    "needs-review": { "placeId": "human-review-queue", "host": "myModel@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "nl": "Review this ${input.data.language} code:\n${input.data.code}\nReturn JSON with 'hasIssues' (boolean), 'issues' (array), 'severity' (none/low/medium/high), and 'recommendation' fields.",
    "system": "You are an expert code reviewer. Focus on security vulnerabilities, performance issues, and code quality. Be conservative - when in doubt, flag for human review.",
    "timeoutMs": 120000
  },
  "emit": [
    { "to": "approved", "from": "@response.json", "when": "success", "condition": "hasIssues == false" },
    { "to": "needs-review", "from": "@response.json", "when": "success", "condition": "hasIssues == true" }
  ]
}
```
4. Document Translation
```json
{
  "id": "t-translator",
  "kind": "task",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "translation-queue",
      "host": "myModel@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "translated": { "placeId": "completed-translations", "host": "myModel@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "nl": "Translate from ${input.data.sourceLanguage} to ${input.data.targetLanguage}: ${input.data.text}. Return JSON with 'originalText', 'translatedText', 'sourceLanguage', 'targetLanguage', and 'confidence' (0-100) fields.",
    "system": "You are a professional translator. Preserve meaning, tone, and cultural context. Return JSON only."
  },
  "emit": [
    { "to": "translated", "from": "@response.json", "when": "success" }
  ]
}
```
Timeout Configuration
LLM operations can take significant time. Configure timeouts appropriately:
```json
{
  "action": {
    "type": "llm",
    "nl": "Complex analysis prompt...",
    "timeoutMs": 120000
  }
}
```
Here 120000 ms gives a complex analysis task a two-minute budget.
Recommended timeouts:
- Simple classification: 30-60 seconds
- Content generation: 60-90 seconds
- Code analysis: 90-180 seconds
- Long document processing: 180-300 seconds
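The effect of timeoutMs can be illustrated by wrapping a blocking call in a wall-clock budget. This sketch uses Python's standard library to mimic the behavior; the engine's own timeout mechanism is simply configured via action.timeoutMs:

```python
import concurrent.futures

def call_with_timeout(call, timeout_ms: int):
    """Run a blocking call under a wall-clock budget, returning an
    ('outcome', value) pair like the success/error split in emit rules.
    Illustrative only -- not the engine's implementation."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call)
        try:
            return "success", future.result(timeout=timeout_ms / 1000)
        except concurrent.futures.TimeoutError:
            return "error", f"LLM call exceeded {timeout_ms} ms"

# A fast call completes within even a generous budget:
outcome, value = call_with_timeout(lambda: {"sentiment": "positive"}, 60000)
```

The returned outcome maps directly onto the when field: "success" feeds the normal emit rules, "error" the error postset.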
Edge Cases and Robustness
Empty Input Handling
LLM transitions gracefully handle empty or minimal inputs:
```json
{
  "action": {
    "type": "llm",
    "nl": "Process: ${input.data.content}. Return JSON with 'processed' and 'isEmpty' fields.",
    "system": "If input is empty or minimal, set isEmpty to true and provide appropriate response."
  }
}
```
Special Characters
Special characters in token data are properly escaped in prompts:
```json
{
  "action": {
    "type": "llm",
    "nl": "Process text with special chars: ${input.data.text}. Handle quotes, newlines, and unicode properly."
  }
}
```
Input tokens with special characters like "Hello \"World\"!\nLine 2" are handled correctly.
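To see why escaping matters, consider embedding such a value yourself: inserted verbatim, quotes and newlines pass through untouched, while a JSON-escaped form is needed wherever the value must sit inside a JSON-formatted prompt section. A small sketch (illustrative, not the engine's escaping code):

```python
import json

text = 'Hello "World"!\nLine 2'

# Verbatim interpolation keeps the raw characters in the prompt:
prompt = "Process text with special chars: " + text

# JSON-escaping produces a form that survives inside a JSON context:
escaped = json.dumps(text)
```

Round-tripping the escaped form (json.loads(escaped)) recovers the original text exactly, which is the property the LLM handler relies on.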
Long Prompts
Complex prompts with detailed instructions work seamlessly:
```json
{
  "action": {
    "type": "llm",
    "nl": "Comprehensive analysis of document ${input.data.documentId}.\n\nContent: ${input.data.content}\n\nAnalyze for:\n1. Key themes and topics\n2. Sentiment and tone\n3. Factual accuracy concerns\n4. Readability score\n\nReturn JSON with 'themes' (array), 'sentiment', 'accuracyConcerns' (array), 'readabilityScore' (0-100), and 'summary' fields.",
    "system": "You are a comprehensive document analyst. Be thorough but concise. Always respond with valid JSON.",
    "timeoutMs": 180000
  }
}
```
The emit.from Field Reference
The from field in emit rules specifies what data to emit:
| Value | Description | Use Case |
|---|---|---|
| @response.json | Parsed JSON from LLM response | Standard output with LLM analysis results |
| @input.data | Original input token data | Error handling, preserving original for retry |
| @response | Raw response object | Access full response metadata |
Complete Integration Test Suite
AgenticOS includes a comprehensive integration test suite for LLM transitions with 18 tests covering all capabilities:
Category A: Basic LLM Operations
- Simple LLM Classification – basic prompt with JSON response
- Template Interpolation – multiple fields in prompts
- Custom System Prompt – specialized LLM behavior
- Token Metadata in Prompts – using _meta.id and _meta.name
Category B: Response Routing
- Conditional Routing by Field – status-based routing
- Numeric Comparison Routing – confidence thresholds
- Multi-Emit to Multiple Postsets – parallel output
- Default Emit (No Condition) – unconditional routing
Category C: JSON Response Handling
- Markdown JSON Stripping – handles code block responses
- Nested JSON Response – complex object structures
- Boolean Field Routing – true/false decisions
- Multi-Category Sentiment – 3+ routing destinations
Category D: Advanced Use Cases
- Data Enrichment – adding AI-generated fields
- Code Analysis – technical content review
- Translation Task – language conversion
Category E: Edge Cases
- Empty Input Handling – graceful empty data processing
- Special Characters – quotes, newlines, unicode
- Long Prompt Handling – complex multi-line prompts
Run the test suite:
```bash
cd agentic-net-test-client
mvn test -Dtest=LlmTransitionIntegrationTest
```
Key Differences: LLM vs HTTP Transitions
| Aspect | HTTP Transition | LLM Transition |
|---|---|---|
| Purpose | Call external REST APIs | Invoke AI models for analysis |
| Response Format | Depends on API | Structured JSON (auto-parsed) |
| Conditional Routing | when: "field == 'value'" | when: "success", condition: "field == 'value'" |
| Typical Timeout | 5-30 seconds | 30-180 seconds |
| Retry Logic | Built-in with backoff | Typically no retry (stateless) |
| Authentication | Basic, Bearer, API Key | Handled by LLM service config |
Summary
LLM transitions bring artificial intelligence directly into your Agentic Nets. Key capabilities:
- Template Interpolation – build dynamic prompts from token data and metadata
- Custom System Prompts – control LLM behavior and output format
- JSON Response Handling – automatic parsing with markdown stripping
- Intelligent Routing – branch agentic processes based on AI decisions
- Multi-Emit – send results to multiple destinations
- Robustness – handles edge cases gracefully
Combined with HTTP, Map, and Pass transitions, LLM transitions enable you to build sophisticated AI-powered agentic processes that can classify, analyze, enrich, translate, and make intelligent decisions – all within the elegant framework of Petri net execution semantics.