AI Agents in PURISTA

@purista/ai adds agent workloads to a PURISTA application without changing PURISTA core primitives. Agents use the same builder pattern as services, run on the same EventBridge, and follow the same validation, logging, and tracing conventions.

This chapter is written as a learning path: start simple, then move to protocol/operations topics.

When to reach for agents

Use agents when you need one or more LLM-powered workloads that must:

  • share infrastructure with existing services (EventBridge, tracing, config, logging)
  • stream progressive results to clients (chunks, artifacts, tool frames, telemetry)
  • control parallelism through runtime worker pool settings
  • reuse existing commands as tools with explicit allowlists
  • keep deployment/runtime configuration outside of business logic

Project layout & scaffolding

Use the CLI first. It creates a runnable skeleton and test:

```bash
purista add agent SupportAgent
```

The generator creates:

  • src/agents/<agentName>/v<version>/<agentName>.ts (builder + handler skeleton)
  • src/agents/<agentName>/v<version>/<agentName>.test.ts (deterministic test scaffold)

```
src/
 ├─ services/
 └─ agents/
     └─ supportAgent/
         └─ v1/
             ├─ supportAgent.ts
             └─ supportAgent.test.ts
```

From there, you can expose the agent over HTTP, invoke it from commands/subscriptions, or run it in background workers.
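The generated skeleton follows the same builder pattern as services. As a rough, hypothetical sketch (only `defineModel`, `allowTool`, and `persistConversation` are named in this chapter; `getAgentBuilder`, `supportAgentV1`, the method signatures, and the tool coordinates below are placeholder assumptions, not the actual @purista/ai API):

```typescript
// Hypothetical wiring sketch -- not the literal generated file.
// Check the generated supportAgent.ts for the real imports and
// builder entry point; `getAgentBuilder` is a placeholder name.
import { getAgentBuilder } from '@purista/ai'

export const supportAgentV1 = getAgentBuilder('SupportAgent', '1')
  // model choice stays declarative; the provider instance is injected at runtime
  .defineModel('chat-default')
  // explicit allowlist: only this existing command becomes a tool
  .allowTool('userService', 'v1', 'getUserProfile')
  // persist both sides of the exchange
  .persistConversation('user', 'agent')
  .setHandler(async (context, payload) => {
    // business logic: build the prompt, call the model, stream results
    return { reply: 'stub' }
  })
```

The important property carried over from services is that deployment concerns (provider instance, pool size, transport) stay out of this file and are supplied at runtime.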

What you will configure (and why)

| Area | Typical choice | Why it matters |
| --- | --- | --- |
| models | `defineModel(...)` + runtime provider injection | quality, latency, and provider flexibility |
| conversation persistence | `persistConversation('user', 'agent')` | |
| tools | explicit `allowTool(...)` list | security and predictable behavior |
| pool/concurrency | runtime `poolConfig.maxWorkers` | throughput, rate-limit protection, cost control |
| transport | HTTP SSE, command invoke, queue worker | caller UX and operational profile |
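To make the pool/concurrency row concrete: a bounded pool admits at most `maxWorkers` tasks at a time and queues the rest, which is how a single setting caps throughput, rate-limit exposure, and cost. The sketch below is illustrative only; in PURISTA you set `poolConfig.maxWorkers` in runtime configuration rather than implementing a pool by hand.

```typescript
// Conceptual sketch of a bounded worker pool: at most `maxWorkers`
// tasks run concurrently, the rest wait in FIFO order.
type Task<T> = () => Promise<T>

class BoundedPool {
  private active = 0
  private queue: Array<() => void> = []

  constructor(private readonly maxWorkers: number) {}

  async run<T>(task: Task<T>): Promise<T> {
    // if the pool is full, park this caller until a slot frees up
    if (this.active >= this.maxWorkers) {
      await new Promise<void>((resolve) => this.queue.push(resolve))
    }
    this.active += 1
    try {
      return await task()
    } finally {
      this.active -= 1
      // wake exactly one waiter, preserving the concurrency bound
      this.queue.shift()?.()
    }
  }
}
```

Because each finishing task releases exactly one waiter, the number of in-flight model calls never exceeds the configured bound.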

Where to go next

Recommended order for new users:

  1. The Agent Builder — purpose, main methods, and first runnable agent.
  2. Run & Invoke Agents — bootstrap in src/index.ts, call from commands/subscriptions, expose HTTP.
  3. Model Providers & OpenAI — wire provider instances at runtime.
  4. Conversation Persistence — memory strategies, summary behavior, and retry-safe staging.
  5. Knowledge Adapters — RAG/data-source integration and adapter options.
  6. AI Protocol — envelope model, frame semantics, and interoperability references.
  7. Protocol & Streaming — practical HTTP/SSE streaming and helper usage.
  8. MCP & A2A Expose — adapter endpoint patterns for protocol-specific consumers.
  9. Frontend Consumers — dedupe-safe streaming UIs and command-owned conversation restore.
  10. Testing Agents — deterministic unit/integration tests.
  11. Agent Evaluation — dataset-driven evaluation output and CI comparison.

The Advanced section also contains a protocol-interoperability deep dive aimed at operations teams.

If you are brand new to PURISTA, read the Service and Command chapters first; the agent APIs intentionally reuse the same vocabulary.