
Event Bridges

Event bridges are the transport backbone of PURISTA. They determine routing, scaling behavior, durability options, and delivery guarantees.

Support matrix

| bridge | scale out | durable subscriptions | manual ack | event-to-queue strongest handoff | stream support (openStream) |
| --- | --- | --- | --- | --- | --- |
| Default | no | no | no | no, auto-ack in-memory fallback | yes |
| AMQP | yes | yes | yes | yes | no (currently) |
| MQTT | yes | broker-dependent | no | no, QoS-dependent delivery only | no (currently) |
| NATS | yes | yes with JetStream, no with core NATS | yes with JetStream, no with core NATS | yes with JetStream | no (currently) |
| Dapr | yes | component-dependent | component-dependent | component-dependent | no (currently) |

For event-to-queue bindings, PURISTA requests durable, manual-ack subscription semantics when the active event bridge advertises both capabilities. When the bridge does not, the binding stays API-compatible and falls back to the bridge's best available behavior. See Event-to-queue for the exact limitation.
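The capability gate described above can be sketched as follows. The `BridgeCapabilities` shape and the `selectHandoffMode` helper are illustrative assumptions, not the actual PURISTA API:

```typescript
// Hypothetical capability shape; real bridges advertise more fields.
type BridgeCapabilities = {
  durableSubscriptions: boolean
  manualAck: boolean
}

// Pick the strongest event-to-queue handoff the active bridge supports:
// durable + manual ack when both are advertised, best-effort otherwise.
function selectHandoffMode(caps: BridgeCapabilities): 'durable-manual-ack' | 'best-effort' {
  return caps.durableSubscriptions && caps.manualAck ? 'durable-manual-ack' : 'best-effort'
}
```

The binding's public API is the same in both modes; only the delivery guarantee behind it changes.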

Command reliability model

| bridge | command transport | timeout source | pending invocation cancellation on shutdown | response confirmation |
| --- | --- | --- | --- | --- |
| Default | in-memory | PURISTA pending invocation timer | yes | none |
| AMQP | reply queue | PURISTA pending invocation timer + AMQP message expiration | yes | broker confirm |
| MQTT | topic correlation | PURISTA pending invocation timer + MQTT messageExpiryInterval | yes | protocol-level (QoS dependent) |
| NATS | request/reply | native NATS request timeout | no (request timeout owned by NATS request call) | protocol-level |
| Dapr | HTTP request | sidecar/client request timeout | no (request lifecycle owned by sidecar/client timeout) | protocol-level |

Subscription consumer failure handling

| bridge | bounded retry | delayed retry | dead-letter target | strict startup validation |
| --- | --- | --- | --- | --- |
| Default | no | no | no | yes |
| AMQP | yes | yes, broker-managed delayed retry queue when retryDelayMs > 0 on durable subscriptions | yes | yes |
| MQTT | no | no | no | yes |
| NATS | yes with JetStream | yes with JetStream | yes with JetStream | yes |
| Dapr | component-dependent | component-dependent | component-dependent | yes |

Queue bridge support

| queue bridge package | preferred workloads | compatible event bridges |
| --- | --- | --- |
| @purista/core default queue bridge | local dev, unit tests, single-instance deployments | any (in-memory inside the service) |
| @purista/redis-queue-bridge | production pull-based CQRS, delayed jobs, AI worker pools | Default, AMQP, MQTT, NATS, Dapr (Redis acts as the queue backend while the event bridge handles command/subscription traffic) |
| @purista/nats-queue-bridge | production pull-based workloads on NATS-first platforms | Default, AMQP, MQTT, NATS, Dapr (JetStream acts as the queue backend while the event bridge handles command/subscription traffic) |

Future queue bridge packages will live next to the event bridge adapters once those providers expose reliable pull + lease semantics. When evaluating infrastructure, pick an event bridge + queue bridge pair that matches your durability and scaling needs.

See the dedicated Queue Bridges page for wiring guidance and capability details.

Delivery semantics in practice

PURISTA itself provides typed message contracts and processing flow. Delivery guarantees come from the selected bridge + broker/component configuration.

  • at-most-once: low latency, but a message can be lost on failures.
  • at-least-once: safer delivery, but duplicates are possible.
  • exactly-once: generally not guaranteed end-to-end in distributed systems; design handlers to be idempotent.
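Because at-least-once delivery can produce duplicates, handlers should deduplicate their side effects. A minimal idempotency sketch in TypeScript (the names and in-memory set are illustrative; production code would use a persistent store):

```typescript
// Track which message ids have already been processed.
const processed = new Set<string>()

// Run the side effect only for the first delivery of a message id.
// Returns true if the effect ran, false if the delivery was a duplicate.
function handleOnce(messageId: string, apply: () => void): boolean {
  if (processed.has(messageId)) return false // duplicate delivery: skip side effect
  processed.add(messageId)
  apply()
  return true
}
```

With this pattern, redelivered messages are acknowledged without re-executing the side effect, which makes at-least-once delivery safe in practice.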

Late command responses are normalized across all invoke-capable bridges. Once the caller-side timeout fires, PURISTA keeps a short-lived tombstone for the correlation id and logs a warning for any later response instead of raising a bridge error.
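The tombstone pattern can be sketched as follows; the data shapes, TTL, and function names are illustrative, not PURISTA internals:

```typescript
// correlation id -> timestamp (ms) until which late responses are ignored
const tombstones = new Map<string, number>()
const TOMBSTONE_TTL_MS = 30_000 // illustrative retention window

// When the caller-side timeout fires, remember the correlation id briefly.
function onTimeout(correlationId: string, now: number): void {
  tombstones.set(correlationId, now + TOMBSTONE_TTL_MS)
}

// When a response arrives, decide whether to deliver it or ignore it
// (with a warning) because the caller already timed out.
function onResponse(correlationId: string, now: number): 'deliver' | 'ignore-late' {
  const expiry = tombstones.get(correlationId)
  if (expiry !== undefined && now < expiry) return 'ignore-late'
  return 'deliver'
}
```

The short retention window keeps memory bounded while still absorbing responses that arrive shortly after the timeout.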

Subscription retry and dead-letter handling are capability-driven bridge concerns. Service definitions can declare consumer failure handling in strict or best-effort mode. In strict mode, PURISTA fails startup if the selected adapter cannot honor the requested semantics. Messages that exhaust their retry budget are dead-lettered. Where adapter capabilities allow it, handlers can also explicitly return drop or stop-consumer.
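The bounded retry/dead-letter decision amounts to comparing the delivery count against a retry budget. A sketch (the outcome names and function signature are illustrative):

```typescript
type FailureOutcome = 'retry' | 'dead-letter'

// Retry while the budget allows, then route the message to the dead-letter target.
// deliveryCount is 1-based: the first failed delivery has count 1.
function onConsumerFailure(deliveryCount: number, maxRetries: number): FailureOutcome {
  return deliveryCount <= maxRetries ? 'retry' : 'dead-letter'
}
```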

Command registrations now follow the same strict startup validation approach for delivery semantics. If a command requests durable/manual-ack handling that the active bridge cannot provide, startup fails early instead of silently degrading. Commands are treated as single request/response operations: they are not retried by subscription retry policies. Handler failures are returned as CommandErrorResponse (UnhandledError for unhandled failures) instead of triggering transport redelivery loops.
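The single request/response error model can be sketched like this; `CommandResult` is a hypothetical stand-in shape, not the actual CommandErrorResponse type:

```typescript
// Either a successful payload or an error response payload.
type CommandResult<T> =
  | { ok: true; payload: T }
  | { ok: false; errorName: string; message: string } // stand-in for CommandErrorResponse

// A handler failure becomes an error response returned to the caller,
// not a transport-level redelivery.
function runCommand<T>(handler: () => T): CommandResult<T> {
  try {
    return { ok: true, payload: handler() }
  } catch (err) {
    const e = err instanceof Error ? err : new Error(String(err))
    return { ok: false, errorName: 'UnhandledError', message: e.message }
  }
}
```

Because the failure is encoded in the response rather than left on the transport, the broker never sees an unacknowledged command and no redelivery loop can start.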

Reliability checklist

  • configure broker durability/retry features explicitly
  • check eventBridge.capabilities before relying on durable/manual-ack event-to-queue handoff
  • keep bridge settings identical across instances
  • implement idempotency in command/subscription side effects
  • define timeout/retry policies intentionally (do not rely on defaults only)
  • keep subscription retry budgets bounded and route poison messages to a dead-letter target
  • verify shutdown, readiness, reconnect, and broker outage behavior in integration tests

When to use which bridge

  • DefaultEventBridge: local development, single-instance deployments, stream development.
  • AMQP: production systems with durable queues/retries and strong operational control.
  • MQTT: IoT/edge and broker setups where topic/QoS tuning is central.
  • NATS: low-latency eventing where simple operations are preferred.
  • NATS with JetStream: good fit when you want bounded subscription retries and dead-letter subjects without introducing a separate queue backend.
  • Dapr: polyglot/service-mesh environments leveraging Dapr components.