AgenticOS Building Blocks: LLM Transitions – AI-Powered Token Processing

LLM (Large Language Model) transitions bring the power of AI directly into your agentic processes. They enable natural language processing, text generation, sentiment analysis, entity extraction, and intelligent routing – all orchestrated through simple inscriptions.

Key Innovation: LLM transitions use a two-field emit pattern where "when" checks execution status (success/error) and "condition" evaluates the AI response content. Both must match for a token to be emitted.

[Diagram: LLM transition flow – an input token enters the LLM transition; postsets receive a success output (when: "success") or an error output (when: "error"); the inscription carries kind: "llm", systemPrompt, userPrompt, and emit.from: "@response.json".]

The Two-Field Emit Pattern

Unlike Pass, Map, and HTTP transitions, LLM transitions use a two-field emit pattern that separates execution status from content evaluation:

Two-field emit pattern: "when" + "condition"

  • "when" – checks the LLM EXECUTION STATUS: "success" (the LLM completed OK) or "error" (the LLM call failed)
  • "condition" – evaluates the RESPONSE CONTENT, e.g. "sentiment == 'positive'" or "category == 'urgent'"
  • BOTH must match for a token to be emitted to the postset
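The combined check can be sketched in a few lines of Python (an illustrative sketch only – `should_emit`, the rule shape, and the tiny condition evaluator are assumptions, not part of the AgenticOS runtime):

```python
# Hypothetical sketch of the two-field emit check: a rule fires only when
# BOTH the execution status matches "when" AND the optional "condition"
# holds on the parsed response.
import re

def check_condition(condition, response):
    # Supports only the simple "field == 'value'" form used in the examples.
    m = re.fullmatch(r"(\w+)\s*==\s*'([^']*)'", condition)
    return m is not None and response.get(m.group(1)) == m.group(2)

def should_emit(rule, status, response):
    if status != rule["when"]:
        return False                    # field 1: execution status must match
    if "condition" not in rule:
        return True                     # no content check -> emit
    return check_condition(rule["condition"], response)  # field 2: content

rule = {"to": "positive", "when": "success", "condition": "sentiment == 'positive'"}
print(should_emit(rule, "success", {"sentiment": "positive"}))  # True
print(should_emit(rule, "success", {"sentiment": "negative"}))  # False
print(should_emit(rule, "error", {"sentiment": "positive"}))    # False
```

Note that a rule without a "condition" (like the basic example below) degenerates to a pure status check.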

Basic LLM Inscription Structure

{
  "id": "t-llm-basic",
  "kind": "llm",
  "mode": "SINGLE",

  "presets": {
    "input": {
      "placeId": "llm-input",
      "host": "my-model@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },

  "postsets": {
    "output": { "placeId": "llm-output", "host": "my-model@localhost:8080" }
  },

  "action": {
    "type": "llm",
    "systemPrompt": "You are a helpful assistant. Respond with JSON only.",
    "userPrompt": "Summarize: ${input.data.text}"
  },

  "emit": [
    { "to": "output", "from": "@response.json", "when": "success" }
  ]
}

Key Fields:

  • kind: "llm" – identifies this as an LLM transition
  • systemPrompt – defines AI behavior and role
  • userPrompt – the actual request with template interpolation
  • emit.from: "@response.json" – routes the parsed LLM response
  • emit.when: "success" – only emit when LLM call succeeds
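The ${input.data.text} placeholder in userPrompt resolves a dotted path against the bound preset token. A minimal sketch of that interpolation (an assumed semantics – `interpolate` is not an AgenticOS function):

```python
# Hypothetical sketch of ${...} prompt interpolation: each dotted path is
# looked up in the preset token scope and substituted into the prompt text.
import re

def interpolate(template, scope):
    def resolve(match):
        value = scope
        for key in match.group(1).split("."):   # walk the dotted path
            value = value[key]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

scope = {"input": {"data": {"text": "Petri nets model concurrency."}}}
print(interpolate("Summarize: ${input.data.text}", scope))
# Summarize: Petri nets model concurrency.
```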

Sentiment Analysis with Content-Based Routing

This example demonstrates routing based on AI classification using both when and condition:

[Diagram: Sentiment analysis routing – review tokens enter the Sentiment Analyzer (LLM) transition, which routes each token to the Positive, Neutral, or Negative place; every arc checks when: "success" plus a matching condition such as sentiment == 'positive'.]

{
  "id": "t-sentiment-analyzer",
  "kind": "llm",
  "mode": "SINGLE",

  "presets": {
    "input": {
      "placeId": "sentiment-input",
      "host": "llm-demo@localhost:8080",
      "arcql": "FROM $ LIMIT 1",
      "take": "FIRST",
      "consume": true
    }
  },

  "postsets": {
    "positive": { "placeId": "sentiment-positive", "host": "llm-demo@localhost:8080" },
    "negative": { "placeId": "sentiment-negative", "host": "llm-demo@localhost:8080" },
    "neutral": { "placeId": "sentiment-neutral", "host": "llm-demo@localhost:8080" }
  },

  "action": {
    "type": "llm",
    "systemPrompt": "You are a sentiment analyzer. Respond ONLY with valid JSON.",
    "userPrompt": "Analyze sentiment: '${input.data.review}'\n\nRespond: {\"sentiment\": \"positive\"|\"negative\"|\"neutral\", \"confidence\": 0.0-1.0}"
  },

  "emit": [
    { "to": "positive", "from": "@response.json", "when": "success", "condition": "sentiment == 'positive'" },
    { "to": "negative", "from": "@response.json", "when": "success", "condition": "sentiment == 'negative'" },
    { "to": "neutral", "from": "@response.json", "when": "success", "condition": "sentiment == 'neutral'" }
  ]
}

Critical Pattern: Each emit rule has BOTH "when": "success" (execution succeeded) AND "condition": "sentiment == 'value'" (content matches).
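Conceptually, the engine evaluates all three emit rules against the parsed response and emits to every postset whose rule matches. A sketch of that routing step (`route` is a hypothetical helper, not part of the inscription runtime):

```python
# Hypothetical routing sketch: given the LLM status and the parsed JSON
# response, return the postsets whose emit rules match.
def route(emit_rules, status, response):
    matched = []
    for rule in emit_rules:
        if status != rule["when"]:
            continue                                 # execution status gate
        field, _, literal = rule["condition"].partition(" == ")
        if response.get(field) == literal.strip("'"):  # content gate
            matched.append(rule["to"])
    return matched

emit = [
    {"to": "positive", "when": "success", "condition": "sentiment == 'positive'"},
    {"to": "negative", "when": "success", "condition": "sentiment == 'negative'"},
    {"to": "neutral",  "when": "success", "condition": "sentiment == 'neutral'"},
]
print(route(emit, "success", {"sentiment": "negative", "confidence": 0.91}))
# ['negative']
```

Because the three conditions are mutually exclusive, exactly one postset receives the token on success.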


Prompt Engineering Best Practices

  • System prompt – define the AI's role/persona and specify the output format ("Respond ONLY with JSON")
  • User prompt – include token data with ${input.data.field} and give an example of the expected schema
  • Emit rules – match the JSON field names the model returns; use "when": "success" and add "condition" for routing
  • Always request JSON output for predictable parsing, and specify exact field names that match your emit conditions
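A practical consequence of these guidelines: validate that the model actually returned the JSON shape your emit conditions reference. A minimal sketch, assuming the sentiment schema from the example above (the validator itself is illustrative, not part of AgenticOS):

```python
# Hypothetical validation sketch: parse the raw model output and verify
# that the fields referenced by emit conditions are present and allowed.
import json

EXPECTED = {"sentiment": {"positive", "negative", "neutral"}}

def parse_response(raw):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return "error", None                  # model ignored the JSON-only rule
    for field, allowed in EXPECTED.items():
        if data.get(field) not in allowed:
            return "error", None              # schema mismatch
    return "success", data

print(parse_response('{"sentiment": "positive", "confidence": 0.97}'))
# ('success', {'sentiment': 'positive', 'confidence': 0.97})
print(parse_response("The sentiment is positive!"))
# ('error', None)
```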

Error Handling Pattern

{
  "id": "t-safe-llm",
  "kind": "llm",

  "postsets": {
    "success": { "placeId": "processed", "host": "..." },
    "error": { "placeId": "failed", "host": "..." }
  },

  "action": {
    "type": "llm",
    "systemPrompt": "...",
    "userPrompt": "..."
  },

  "emit": [
    { "to": "success", "from": "@response.json", "when": "success" },
    { "to": "error", "from": "@response.json", "when": "error" }
  ]
}
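The success/error split maps naturally onto exception handling around the model call. A sketch of how an engine might derive the "when" status (`call_llm` is a stand-in for whatever client the runtime actually uses):

```python
# Hypothetical sketch of deriving the "when" status: any transport or
# parse failure becomes "error"; a clean call with valid JSON is "success".
import json

def execute_llm(call_llm, system_prompt, user_prompt):
    try:
        raw = call_llm(system_prompt, user_prompt)
        return "success", json.loads(raw)
    except Exception:
        return "error", {"reason": "llm call or parse failed"}

status, response = execute_llm(
    lambda s, u: '{"summary": "ok"}', "You are terse.", "Summarize this.")
print(status)  # success

def failing_call(s, u):
    raise RuntimeError("connection timeout")

status, response = execute_llm(failing_call, "You are terse.", "Summarize this.")
print(status)  # error
```

Routing the error token to a dedicated place, as above, keeps failed calls observable instead of silently dropping them.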

Summary

  • kind: "llm" – identifies the transition type
  • Two-field emit – when (execution status) + condition (response content)
  • @response.json – access the parsed AI response
  • Template prompts – interpolate token data with ${input.data.field}
  • JSON output – design prompts for predictable, parseable responses

LLM transitions enable powerful AI-augmented agentic processes where tokens flow through intelligent processing stages.
