LLM Transitions – AI-Powered Token Processing
LLM (Large Language Model) transitions bring the power of AI directly into your agentic processes. They enable natural language processing, text generation, sentiment analysis, entity extraction, and intelligent routing – all orchestrated through simple inscriptions.
Key Innovation: LLM transitions use a two-field emit pattern where when checks execution status (success/error) and condition evaluates the AI response content. Both must match for emission.
The Two-Field Emit Pattern
Unlike Pass, Map, and HTTP transitions, LLM transitions use a two-field emit pattern that separates execution status from content evaluation:
Basic LLM Inscription Structure
```json
{
  "id": "t-llm-basic",
  "kind": "llm",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "llm-input",
      "host": "my-model@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "output": { "placeId": "llm-output", "host": "my-model@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "systemPrompt": "You are a helpful assistant. Respond with JSON only.",
    "userPrompt": "Summarize: ${input.data.text}"
  },
  "emit": [
    { "to": "output", "from": "@response.json", "when": "success" }
  ]
}
```
Key Fields:
- `kind: "llm"` – identifies this as an LLM transition
- `systemPrompt` – defines AI behavior and role
- `userPrompt` – the actual request, with template interpolation
- `emit.from: "@response.json"` – routes the parsed LLM response
- `emit.when: "success"` – only emit when the LLM call succeeds
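The `${…}` placeholders in `userPrompt` are filled from the consumed token before the model is called. Here is a minimal sketch of that interpolation in Python; the `interpolate` helper and the token shape are illustrative assumptions, not the engine's actual implementation:

```python
import re

def interpolate(template: str, context: dict) -> str:
    """Replace ${path.to.field} placeholders with values looked up in context."""
    def resolve(match: re.Match) -> str:
        value = context
        for key in match.group(1).split("."):  # walk the dotted path
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

token = {"input": {"data": {"text": "Petri nets model concurrency."}}}
prompt = interpolate("Summarize: ${input.data.text}", token)
# prompt == "Summarize: Petri nets model concurrency."
```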
Sentiment Analysis with Content-Based Routing
This example demonstrates routing based on AI classification using both when and condition:
```json
{
  "id": "t-sentiment-analyzer",
  "kind": "llm",
  "mode": "SINGLE",
  "presets": {
    "input": {
      "placeId": "sentiment-input",
      "host": "llm-demo@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },
  "postsets": {
    "positive": { "placeId": "sentiment-positive", "host": "llm-demo@localhost:8080" },
    "negative": { "placeId": "sentiment-negative", "host": "llm-demo@localhost:8080" },
    "neutral": { "placeId": "sentiment-neutral", "host": "llm-demo@localhost:8080" }
  },
  "action": {
    "type": "llm",
    "systemPrompt": "You are a sentiment analyzer. Respond ONLY with valid JSON.",
    "userPrompt": "Analyze sentiment: '${input.data.review}'\n\nRespond: {\"sentiment\": \"positive\"|\"negative\"|\"neutral\", \"confidence\": 0.0-1.0}"
  },
  "emit": [
    { "to": "positive", "from": "@response.json", "when": "success", "condition": "sentiment == 'positive'" },
    { "to": "negative", "from": "@response.json", "when": "success", "condition": "sentiment == 'negative'" },
    { "to": "neutral", "from": "@response.json", "when": "success", "condition": "sentiment == 'neutral'" }
  ]
}
```
Critical Pattern: Each emit rule has BOTH "when": "success" (execution succeeded) AND "condition": "sentiment == 'value'" (content matches).
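The conjunctive check above can be sketched in Python: an emit rule fires only when the execution status matches `when` and, if a `condition` is present, the parsed response satisfies it. The `evaluate_emits` helper and its simple `field == 'value'` condition parser are illustrative assumptions, not the engine's actual condition grammar:

```python
import re

def check_condition(condition: str, response: dict) -> bool:
    """Evaluate a simple "field == 'value'" condition against the response."""
    m = re.fullmatch(r"(\w+)\s*==\s*'([^']*)'", condition)
    return m is not None and response.get(m.group(1)) == m.group(2)

def evaluate_emits(emits: list, status: str, response: dict) -> list:
    """Return the target places whose rules match BOTH status and condition."""
    targets = []
    for rule in emits:
        if rule["when"] != status:          # field 1: execution status
            continue
        cond = rule.get("condition")        # field 2: content check (optional)
        if cond is None or check_condition(cond, response):
            targets.append(rule["to"])
    return targets

emits = [
    {"to": "positive", "when": "success", "condition": "sentiment == 'positive'"},
    {"to": "negative", "when": "success", "condition": "sentiment == 'negative'"},
    {"to": "neutral",  "when": "success", "condition": "sentiment == 'neutral'"},
]
evaluate_emits(emits, "success", {"sentiment": "negative", "confidence": 0.91})
# -> ["negative"]
```

Note that a rule with `when: "success"` never fires on an error, no matter what the condition says, and vice versa.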
Prompt Engineering Best Practices
Design prompts for predictable, parseable output: instruct the model in the system prompt to respond ONLY with valid JSON, and spell out the exact response schema in the user prompt, as the sentiment example above does.
Error Handling Pattern
```json
{
  "id": "t-safe-llm",
  "kind": "llm",
  "postsets": {
    "success": { "placeId": "processed", "host": "..." },
    "error": { "placeId": "failed", "host": "..." }
  },
  "action": {
    "type": "llm",
    "systemPrompt": "...",
    "userPrompt": "..."
  },
  "emit": [
    { "to": "success", "from": "@response.json", "when": "success" },
    { "to": "error", "from": "@response.json", "when": "error" }
  ]
}
```
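Since `emit.from: "@response.json"` implies the raw model output is parsed as JSON, one plausible way a runtime could derive the success/error status is to treat a parse failure as an error, routing the token to the `error` place instead of losing it. This sketch is an assumption about the mechanism, not the engine's documented behavior:

```python
import json

def process_llm_result(raw_output: str):
    """Classify an LLM call result: success with parsed JSON, or error with context."""
    try:
        return "success", json.loads(raw_output)
    except json.JSONDecodeError as exc:
        # Malformed output flows to the "when": "error" emit rule
        return "error", {"raw": raw_output, "error": str(exc)}

process_llm_result('{"sentiment": "neutral"}')  # -> ("success", {"sentiment": "neutral"})
process_llm_result("Sorry, I can't do that.")   # -> ("error", {...})
```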
Summary
- `kind: "llm"` – identifies the transition type
- Two-field emit – `when` (execution) + `condition` (content)
- `@response.json` – access the parsed AI response
- Template prompts – interpolate token data with `${input.data.field}`
- JSON output – design prompts for predictable, parseable responses
LLM transitions enable powerful AI-augmented agentic processes where tokens advance through intelligent processing stages.