AI Agents in PURISTA
@purista/ai adds agent workloads to a PURISTA application without changing PURISTA core primitives. Agents use the same builder pattern as services, run on the same EventBridge, and follow the same validation, logging, and tracing conventions.
This chapter is written as a learning path: start simple, then move to protocol/operations topics.
When to reach for agents
Use agents when you need one or more LLM-powered workloads that must:
- share infrastructure with existing services (EventBridge, tracing, config, logging)
- stream progressive results to clients (chunks, artifacts, tool frames, telemetry)
- control parallelism through runtime worker pool settings
- reuse existing commands as tools with explicit allowlists
- keep deployment/runtime configuration outside of business logic
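The tool-allowlist point above can be sketched generically. The names below (`ToolRegistry`, `allowTool`, `invoke`) are illustrative assumptions for this sketch, not the @purista/ai API: the idea is simply that an agent can only call tools registered up front.

```typescript
// Illustrative sketch of an explicit tool allowlist (names are assumptions,
// not the @purista/ai API): an agent may only invoke tools that were
// registered explicitly, so a model cannot call arbitrary commands.
type ToolHandler = (input: string) => string

class ToolRegistry {
  private readonly allowed = new Map<string, ToolHandler>()

  // Explicitly allow a tool; anything not listed here is rejected.
  allowTool(name: string, handler: ToolHandler): this {
    this.allowed.set(name, handler)
    return this
  }

  invoke(name: string, input: string): string {
    const handler = this.allowed.get(name)
    if (!handler) {
      throw new Error(`tool "${name}" is not on the allowlist`)
    }
    return handler(input)
  }
}

const tools = new ToolRegistry().allowTool('echo', (input) => `echo: ${input}`)

console.log(tools.invoke('echo', 'hi')) // → "echo: hi"
// tools.invoke('deleteDatabase', '') would throw instead of executing.
```

The same deny-by-default posture is what makes agent behavior predictable: a tool that is not on the list cannot be reached, regardless of what the model asks for.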
Project layout & scaffolding
Use the CLI first. It creates a runnable skeleton and test:
```shell
purista add agent SupportAgent
```

The generator creates:

- `src/agents/<agentName>/v<version>/<agentName>.ts` (builder + handler skeleton)
- `src/agents/<agentName>/v<version>/<agentName>.test.ts` (deterministic test scaffold)
```
src/
├─ services/
└─ agents/
   └─ supportAgent/
      └─ v1/
         ├─ supportAgent.ts
         └─ supportAgent.test.ts
```

From there, you can expose the agent over HTTP, invoke it from commands/subscriptions, or run it in background workers.
What you will configure (and why)
| Area | Typical choice | Why it matters |
|---|---|---|
| models | defineModel(...) + runtime provider injection | quality, latency, and provider flexibility |
| conversation persistence | `persistConversation('user' \| 'agent')` | memory continuity across turns and retry-safe staging |
| tools | explicit allowTool(...) list | security and predictable behavior |
| pool/concurrency | runtime poolConfig.maxWorkers | throughput, rate-limit protection, cost control |
| transport | HTTP SSE, command invoke, queue worker | caller UX and operational profile |
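The pool row above caps how many agent workloads run at once. A minimal sketch of the effect on throughput, assuming sequential waves of at most `maxWorkers` tasks (`planBatches` is an illustrative helper, not a PURISTA API):

```typescript
// Illustrative sketch of what a maxWorkers-style limit means for throughput
// (not the PURISTA poolConfig implementation): with maxWorkers = 2, five
// queued agent invocations are processed in three waves.
function planBatches<T>(tasks: T[], maxWorkers: number): T[][] {
  if (maxWorkers < 1) throw new Error('maxWorkers must be >= 1')
  const batches: T[][] = []
  for (let i = 0; i < tasks.length; i += maxWorkers) {
    batches.push(tasks.slice(i, i + maxWorkers))
  }
  return batches
}

console.log(planBatches(['a', 'b', 'c', 'd', 'e'], 2))
// → [ [ 'a', 'b' ], [ 'c', 'd' ], [ 'e' ] ]
```

Lowering the worker count trades throughput for protection: fewer concurrent model calls means fewer simultaneous provider requests, which helps with rate limits and cost spikes.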
Where to go next
Recommended order for new users:
- The Agent Builder — purpose, main methods, and first runnable agent.
- Run & Invoke Agents — bootstrap in `src/index.ts`, call from commands/subscriptions, expose HTTP.
- Model Providers & OpenAI — wire provider instances at runtime.
- Conversation Persistence — memory strategies, summary behavior, and retry-safe staging.
- Knowledge Adapters — RAG/data-source integration and adapter options.
- AI Protocol — envelope model, frame semantics, and interoperability references.
- Protocol & Streaming — practical HTTP/SSE streaming and helper usage.
- MCP & A2A Expose — adapter endpoint patterns for protocol-specific consumers.
- Frontend Consumers — dedupe-safe streaming UIs and command-owned conversation restore.
- Testing Agents — deterministic unit/integration tests.
- Agent Evaluation — dataset-driven evaluation output and CI comparison.
The Advanced section still contains a protocol interoperability deep-dive as an entry point for operations teams.
If you are brand new to PURISTA, start with the Service and Command chapters first. The agent APIs intentionally reuse the same vocabulary.
