Secrets That Never Touch Your Code — Credential Management with Agentic-Net-Vault
Every transition in an Agentic-Net can call external APIs — OpenAI, GitHub, Slack, AWS, databases. But where do those API keys live? Not in environment variables. Not in inscription JSON. Not in source code. They live in a purpose-built vault, scoped per transition, encrypted at rest, fetched just-in-time at execution. This is how AgenticNetOS manages secrets without leaking them into the event-sourced data model.
The Problem: Secrets in Workflow Systems
Workflow engines have a dirty secret: they are terrible at keeping secrets. Most systems store API keys in environment variables shared by every component, or embed them directly in workflow definitions. When your workflow definition is also your data model — as in event-sourced Petri nets where every state change is an immutable event — this gets worse. Credentials baked into token properties or inscription JSON become part of the permanent audit trail. They show up in exports, snapshots, debug logs, and read models.
The usual workarounds create their own problems. Centralized environment variables mean every service has access to every secret. Config files mounted as volumes are a deployment headache. And none of these approaches give you per-workflow or per-transition isolation. An HTTP transition calling the GitHub API should not have access to the Slack webhook token used by a completely different transition in a completely different net.
What AgenticNetOS needed was a credential system that understands the transition model natively — scoped per transition, versioned, encrypted, and fetched on demand rather than embedded in the data flow.
The Idea: A Vault That Speaks Petri Net
Agentic-Net-Vault is a stateless Spring Boot service (port 8085) that wraps OpenBao — the open-source fork of HashiCorp Vault — with an API designed specifically for the AgenticNetOS transition model. Instead of generic key-value paths, credentials are stored at secret/agenticos/credentials/{modelId}/{transitionId}. Every transition in every net model gets its own isolated credential namespace.
The vault never stores credentials in the event log, never embeds them in inscription JSON, and never passes them through places or tokens. Secrets exist outside the Petri net data model entirely. When a transition fires, the executor fetches credentials just-in-time from the vault, injects them into the execution context, and discards them when the transition completes.
How It Works: Three-Phase Credential Flow
The credential lifecycle in AgenticNetOS has three distinct phases: store, fetch, and inject. Each phase involves a different service, and at no point do secrets travel through the Petri net’s token flow.
Phase 1: Store — Master Deposits Credentials
When a transition is deployed, the master service stores its credentials in the vault. The API is scoped by model and transition ID — the same identifiers used throughout the Petri net model. This means credentials are automatically namespaced to the exact transition that needs them.
curl -X PUT http://localhost:8085/api/vault/intel-gather/transitions/t-crawl-web/credentials \
-H "Content-Type: application/json" \
-d '{
"openai_key": "sk-proj-...",
"github_pat": "ghp_abc123...",
"slack_webhook": "https://hooks.slack.com/services/T.../B.../xxx"
}'
The response confirms storage without echoing the secrets back — only metadata:
{
"modelId": "intel-gather",
"transitionId": "t-crawl-web",
"metadata": {
"keyNames": ["openai_key", "github_pat", "slack_webhook"],
"version": 1,
"updatedAt": "2026-04-04T10:30:00Z"
}
}
Phase 2: Fetch — Executor Retrieves at Runtime
When a transition fires, the executor receives a transition token from master containing the modelId and transitionId. It uses these identifiers to fetch the actual credentials from the vault — a single GET request returns everything the transition needs.
curl http://localhost:8085/api/vault/intel-gather/transitions/t-crawl-web/credentials
Credentials come back as plain JSON, protected by network isolation — the vault sits on an internal Docker network that is not exposed to the host. For distributed deployments, the gateway proxies vault requests at /vault-api/** with JWT authentication.
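The executor-side fetch can be sketched in a few lines. This is a hypothetical Python client, not the actual executor code: the endpoint shape comes from the curl example above, and the `VAULT_BASE` address is an assumption matching the local deployment.

```python
import json
import urllib.request

# Assumption: vault address as used in the curl examples; in a real
# deployment this would be the internal Docker network hostname.
VAULT_BASE = "http://localhost:8085"

def credentials_url(model_id: str, transition_id: str) -> str:
    """Build the URL the executor hits when a transition fires."""
    return f"{VAULT_BASE}/api/vault/{model_id}/transitions/{transition_id}/credentials"

def fetch_credentials(model_id: str, transition_id: str) -> dict:
    """Just-in-time fetch: a single GET returns every key the transition needs."""
    with urllib.request.urlopen(credentials_url(model_id, transition_id)) as resp:
        return json.load(resp)
```

The identifiers come straight from the transition token that master hands to the executor, so no extra configuration maps a transition to its secrets.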
Phase 3: Inject — Secrets Enter the Execution Context
The executor injects credentials into the transition’s runtime environment — as environment variables, command arguments, or request headers depending on the transition type. Once the transition completes, the credentials are discarded. They never become token properties, never flow to postset places, and never appear in the event log.
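The injection step for a command-style transition can be sketched as follows. This is an illustrative Python helper (the function name and uppercase-key convention are assumptions, not the executor's actual behavior): credentials are merged into a throwaway copy of the environment that lives only for the duration of the run.

```python
import os

def build_execution_env(creds: dict) -> dict:
    """Merge vault credentials into a copy of the process environment for a
    single transition run. The copy is discarded when the transition finishes;
    the real os.environ and the Petri net token flow are never touched."""
    env = dict(os.environ)
    for key, value in creds.items():
        env[key.upper()] = value  # e.g. openai_key -> OPENAI_KEY
    return env

# usage sketch: subprocess.run(command, env=build_execution_env(fetched_creds))
```

Because the merged dict is local to the run, nothing has to be scrubbed afterwards; the secrets simply go out of scope.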
What You Can Store
The vault accepts any JSON key-value map. There is no fixed schema — you store whatever a transition needs. This flexibility matters because different transition types require different credential shapes.
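A few illustrative shapes, written as Python dicts with placeholder values (all keys and values here are hypothetical examples, not a required schema):

```python
# An HTTP transition might store a bearer token plus non-secret config:
http_transition_creds = {
    "api_token": "ghp_example",
    "user_agent": "MyCrawler/1.0",
}

# An LLM transition might store an inference key and an org identifier:
llm_transition_creds = {
    "openai_key": "sk-proj-example",
    "org_id": "org-example",
}

# A database transition might store a full connection triple:
database_transition_creds = {
    "jdbc_url": "jdbc:postgresql://db:5432/app",
    "db_user": "app",
    "db_password": "example-password",
}
```

The only constraint is the one the vault's JSON map implies: flat string keys and string values, interpreted entirely by the transition that fetches them.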
The Architecture: Layered Security
Agentic-Net-Vault is a thin, stateless REST service. All persistent state lives in OpenBao 2.1.0, which handles encryption, versioning, and access control. The vault service adds transition-aware path routing, input validation, retry logic, and health monitoring.
Encryption at Rest
OpenBao encrypts all secrets with AES-256-GCM at its storage barrier before anything is written to the backend. The vault service never sees unencrypted data at the storage layer — it only passes data through the OpenBao API, which handles all cryptographic operations internally.
Path Validation
Every modelId and transitionId is validated against a strict regex: ^[a-zA-Z0-9_-]+$. This blocks path traversal attacks (../), directory escapes, and special character injection. An attacker cannot manipulate IDs to read credentials belonging to a different transition.
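The rule is simple enough to restate as a standalone sketch. This hypothetical Python version mirrors the regex quoted above (the function name and error handling are illustrative, not the service's Java implementation):

```python
import re

# IDs must match the documented rule before being spliced into a vault path.
ID_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

def credential_path(model_id: str, transition_id: str) -> str:
    """Build the OpenBao path for a transition's credentials, rejecting any
    identifier that could escape its namespace (slashes, '..', etc.)."""
    for identifier in (model_id, transition_id):
        if not ID_PATTERN.fullmatch(identifier):
            raise ValueError(f"invalid identifier: {identifier!r}")
    return f"secret/agenticos/credentials/{model_id}/{transition_id}"
```

Because `/` and `.` fail the pattern outright, a traversal string like `../other-model` can never reach the path template in the first place.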
Network Isolation
Both the vault service and OpenBao run on an internal Docker network (agenticos-backend) that is not exposed to the host. External access goes through the gateway, which requires JWT authentication. The vault itself has no public port mapping in production deployments.
Authentication Modes
For development, the vault uses token-based authentication with a configurable root token. For production, it supports AppRole authentication with automatic token renewal — a dedicated thread pool manages token lifecycle without service restarts.
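The renewal scheduling pattern is worth making concrete. This is a language-agnostic sketch in Python, not the service's actual Java thread pool, and the half-TTL renewal point is an assumed convention:

```python
# Assumption: renew at half the token's time-to-live, a common safety margin.
RENEWAL_FRACTION = 0.5

def seconds_until_renewal(lease_duration_s: int) -> int:
    """Given the lease duration returned by an AppRole login, compute when to
    schedule the next renewal so the token never lapses mid-transition."""
    return max(1, int(lease_duration_s * RENEWAL_FRACTION))

# A background worker would sleep this long, call the renew endpoint, then
# reschedule itself with the fresh lease duration -- no service restart needed.
```

The point of renewing early rather than on expiry is that a transition firing near the TTL boundary never races the token lifecycle.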
Versioning and Rotation
Because OpenBao uses a KV v2 secrets engine, every credential update is automatically versioned. Overwriting a credential with a PUT request increments the version counter. Old versions are retained, and deletes are soft — you can audit what changed and when without losing history.
Rotation is straightforward: store new credentials with the same PUT endpoint. The version increments, and the next time a transition fires, it fetches the latest version automatically. No restart, no redeployment, no configuration change.
# Rotate credentials -- just PUT new values
curl -X PUT http://localhost:8085/api/vault/intel-gather/transitions/t-crawl-web/credentials \
-H "Content-Type: application/json" \
-d '{"openai_key": "sk-proj-NEW-KEY-HERE"}'
# Version auto-increments: 1 --> 2
# Next transition fire uses the new key immediately
You can also inspect metadata without retrieving the actual secrets — useful for auditing which transitions have credentials, when they were last updated, and what keys they contain:
# Metadata only -- no secrets exposed
curl http://localhost:8085/api/vault/intel-gather/transitions/t-crawl-web/credentials/metadata
# Returns: {"keyNames": ["openai_key"], "version": 2, "updatedAt": "..."}
Real-World Example: Intel-Gather Platform
The intel-gather platform — 9 nets, 55 transitions, zero hand-written pipeline code — demonstrates the vault in action. Multiple transitions across different nets need different credentials: the web crawler needs HTTP headers for rate-limited sites, the LLM analyzer needs an OpenAI key, the publisher needs WordPress API credentials, and the notification transition needs a Slack webhook URL.
Each transition stores its own credentials independently. The crawler cannot access the publisher’s WordPress password. The LLM analyzer cannot see the Slack webhook. And none of these secrets appear in the event log, the token data, or the inscription definitions.
# Crawler: HTTP auth for rate-limited sites
curl -X PUT http://localhost:8085/api/vault/intel-gather/transitions/t-crawl-web/credentials \
-d '{"user_agent": "AgenticNetOS-Crawler/1.0", "rate_limit_key": "rl-abc..."}'
# LLM Analyzer: AI inference credentials
curl -X PUT http://localhost:8085/api/vault/intel-gather/transitions/t-analyze-content/credentials \
-d '{"anthropic_key": "sk-ant-...", "model": "claude-sonnet-4-6-20250514"}'
# Publisher: WordPress API access
curl -X PUT http://localhost:8085/api/vault/intel-gather/transitions/t-publish/credentials \
-d '{"wp_user": "publisher", "wp_app_password": "xxxx xxxx xxxx"}'
# Notifier: Slack integration
curl -X PUT http://localhost:8085/api/vault/intel-gather/transitions/t-notify/credentials \
-d '{"slack_webhook": "https://hooks.slack.com/services/T.../B.../xxx"}'
The API Surface
The vault exposes five endpoints. No more, no less. The API is intentionally minimal — it does one thing well.
| Method | Endpoint | Purpose |
|---|---|---|
| PUT | /api/vault/{modelId}/transitions/{transitionId}/credentials | Store or update credentials |
| GET | /api/vault/{modelId}/transitions/{transitionId}/credentials | Retrieve credentials + metadata |
| GET | /api/vault/{modelId}/transitions/{transitionId}/credentials/metadata | Metadata only (no secrets) |
| DELETE | /api/vault/{modelId}/transitions/{transitionId}/credentials | Soft-delete (versioned) |
| GET | /api/health | Service + backend health |
The gateway at port 8083 also proxies vault at /vault-api/**, which means remote executors and external tools can access credentials through the same JWT-authenticated gateway they use for everything else.
Why Not Just Use Environment Variables?
Environment variables are global. Every service, every transition, every container sees the same set. In a system where 55 transitions across 9 nets each need different credentials, that means either a massive shared environment or a complex naming convention that nobody enforces.
The vault gives you transition-scoped isolation with zero naming conventions to remember. The credential path is derived from identifiers that already exist in the system (modelId and transitionId). There is nothing extra to configure, no mapping file, no lookup table. If you know the transition, you know its credentials.
More importantly, environment variables are static. Rotating a key means restarting the service. With the vault, rotation is a single PUT request — no restart, no redeployment. The next transition fire picks up the new credentials automatically.
Try It Yourself
The vault runs alongside OpenBao in the standard AgenticNetOS deployment. Start it with the compose file, then store and retrieve credentials for any transition in your system:
# Start the vault + OpenBao
cd agentic-nets/deployment
docker compose up -d openbao agentic-net-vault
# Store credentials for a transition
curl -X PUT http://localhost:8085/api/vault/my-model/transitions/t-my-http/credentials \
-H "Content-Type: application/json" \
-d '{"api_key": "your-key-here", "secret": "your-secret-here"}'
# Verify metadata (no secrets exposed)
curl http://localhost:8085/api/vault/my-model/transitions/t-my-http/credentials/metadata
# Retrieve full credentials
curl http://localhost:8085/api/vault/my-model/transitions/t-my-http/credentials
# Rotate a key
curl -X PUT http://localhost:8085/api/vault/my-model/transitions/t-my-http/credentials \
-H "Content-Type: application/json" \
-d '{"api_key": "NEW-rotated-key", "secret": "your-secret-here"}'
# Clean up
curl -X DELETE http://localhost:8085/api/vault/my-model/transitions/t-my-http/credentials
Agentic-Net-Vault is part of the open-source AgenticNetOS ecosystem, available at agentic-nets/agentic-net-vault. It is built with Spring Boot 3.5.5, Java 21, and OpenBao 2.1.0. The entire credential management flow — from master to vault to executor — runs within the internal Docker network, with gateway proxy support for distributed deployments. No secrets in your inscription JSON. No secrets in your event log. No secrets in your tokens. Just a single REST call away when a transition needs them.