Kafka and Domain-Driven Design Integration

⏱ 6 min read

Executive summary

Kafka and domain-driven design integration works when event contracts represent bounded context language and evolution is governed through compatibility rules. Confluent’s compatibility and schema evolution documentation provides concrete mechanisms for safe schema evolution—critical when multiple bounded contexts consume events.

  • Event contract patterns aligned to domains
  • Compatibility policy as autonomy guardrail
  • Governance: review exceptions and deprecation
Figure 1: DDD + Kafka integration — bounded contexts mapped to topics with integration patterns

Mapping bounded contexts to Kafka topics

Figure 2: DDD + Kafka — bounded contexts mapped to topics with integration patterns

Domain-Driven Design and Kafka are natural partners. DDD provides the conceptual model (bounded contexts, aggregates, domain events). Kafka provides the technical infrastructure (topics, partitions, consumer groups). The mapping between them determines whether the architecture is clean or chaotic.

One topic per aggregate type. Each aggregate root (Order, Payment, Customer) publishes its domain events to a dedicated topic: orders.events, payments.events, customers.events. The message key is the aggregate ID, ensuring all events for a single aggregate land in the same partition — preserving ordering per aggregate.
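The key-to-partition mapping can be sketched in a few lines. Note this is a simplified illustration: Kafka's default partitioner hashes the serialized key with murmur2, not `String.hashCode()`, so the actual partition numbers differ — but the property that matters (same aggregate ID, same partition) is the same.

```java
// Simplified sketch of key-based partition assignment.
// Kafka's DefaultPartitioner uses murmur2 over the serialized key bytes;
// String.hashCode() here is a stand-in to illustrate the idea.
public class AggregatePartitioner {

    static int choosePartition(String aggregateId, int numPartitions) {
        // Math.floorMod keeps the result non-negative for negative hash codes
        return Math.floorMod(aggregateId.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int p1 = choosePartition("order-42", 12);
        int p2 = choosePartition("order-42", 12);
        // Same aggregate ID always maps to the same partition,
        // so events for one aggregate stay ordered.
        System.out.println(p1 == p2);            // true
        System.out.println(p1 >= 0 && p1 < 12);  // true
    }
}
```

Because the mapping is deterministic, every `OrderPlaced`, `OrderModified`, and `OrderCancelled` for `order-42` lands on the same partition and is consumed in production order.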

Domain events are the contract. The event schema is the public contract of the bounded context. It uses the ubiquitous language of the producing context — OrderPlaced, PaymentAuthorized, CustomerVerified. Consumers must not impose their language on the producer's events.

Anti-corruption layer for cross-context integration. When the Orders context consumes events from the Payments context, it does not use payment events directly. An anti-corruption layer translates payment events into the Orders context's language: PaymentAuthorized becomes OrderPaymentConfirmed. This prevents the coupling that DDD's bounded context pattern is designed to eliminate.

Context mapping patterns in Kafka

  • Published Language: the producing context publishes events in a well-defined schema that becomes the shared contract.
  • Conformist: the consuming context adopts the producer's language directly (acceptable when the producing context is authoritative, like a regulatory system).
  • Anti-Corruption Layer: the consumer translates between contexts (the default for most integrations).
  • Shared Kernel: two contexts share a subset of the model — use sparingly, as it creates coupling.

From event storming to Kafka topics

Figure 3: Event storming to Kafka — five steps from workshop to deployed schemas

The most effective way to design Kafka topics for a DDD system is to start with event storming — the collaborative workshop technique where domain experts and developers map business events on sticky notes before writing any code.

Step 1: Event storming workshop. Bring domain experts and developers into a room. Map the business process as a sequence of domain events: "Customer Registered," "Order Placed," "Payment Authorized," "Shipment Created." These are business events — they describe what happened in business language, not technical language. The workshop produces a timeline of 30-100 events for a typical bounded context.

Step 2: Identify aggregates. Group related events around their aggregate roots. All events about an order (OrderPlaced, OrderModified, OrderCancelled, OrderShipped) belong to the Order aggregate. All events about a payment (PaymentInitiated, PaymentAuthorized, PaymentFailed, PaymentRefunded) belong to the Payment aggregate. Aggregates are consistency boundaries — events within an aggregate are causally ordered.

Step 3: Define domain events formally. For each event, define: the event name (past tense verb: OrderPlaced, not PlaceOrder), the payload (what data the event carries), the aggregate ID (the entity this event belongs to), and the metadata (timestamp, correlation ID, producer identity). This is the domain event contract — the interface between bounded contexts.
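A contract defined this way can be expressed as a plain record. This is a minimal sketch — the field names and types are assumptions for illustration, not a prescribed schema:

```java
import java.time.Instant;

// Illustrative domain event contract (Step 3): past-tense name,
// aggregate ID, payload fields, and metadata.
public record OrderPlaced(
        String aggregateId,     // the Order this event belongs to
        String customerId,      // payload
        long totalCents,        // payload: amounts as integer cents
        Instant occurredAt,     // metadata: event timestamp
        String correlationId,   // metadata: traces the business operation
        String producer         // metadata: producing service identity
) {
    public String eventType() {
        return "OrderPlaced";   // discriminator used by the topic envelope
    }
}
```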

Step 4: Map to Kafka topics. One topic per aggregate type: orders.events, payments.events, customers.events. The message key is the aggregate ID, ensuring all events for a single aggregate land in the same partition — preserving causal ordering. Multiple event types share a topic when they belong to the same aggregate.

Step 5: Design Avro schemas. Each event type gets an Avro schema registered in the Schema Registry under the topic's subject. Use a union schema or a wrapper envelope (eventType discriminator field + event payload) to support multiple event types per topic.
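One way to express the wrapper-envelope variant is a record with an eventType discriminator plus a union payload. The namespace and record names below are assumptions for illustration, and the four event records are assumed to be already defined in the same namespace:

```json
{
  "type": "record",
  "name": "OrderEventEnvelope",
  "namespace": "com.example.orders",
  "fields": [
    {
      "name": "eventType",
      "type": {
        "type": "enum",
        "name": "OrderEventType",
        "symbols": ["OrderPlaced", "OrderModified", "OrderCancelled", "OrderShipped"]
      }
    },
    {
      "name": "payload",
      "type": [
        "com.example.orders.OrderPlaced",
        "com.example.orders.OrderModified",
        "com.example.orders.OrderCancelled",
        "com.example.orders.OrderShipped"
      ]
    }
  ]
}
```

Avro unions are self-describing, so the eventType field is technically redundant, but an explicit discriminator lets consumers filter by type without deserializing the payload.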

Anti-corruption layers in practice

When one bounded context consumes events from another, the consuming context must not adopt the producer's domain language. An anti-corruption layer translates between contexts.

In Kafka, implement the anti-corruption layer as a Kafka Streams or Flink application that consumes from the upstream topic, transforms events into the downstream context's language, and produces to an internal topic owned by the consuming context. For example: the Shipping context consumes payments.events but does not use PaymentAuthorized events directly. The ACL translates PaymentAuthorized into ShipmentPaymentConfirmed and publishes to shipping.internal.events. The Shipping domain services only consume from their internal topic — they never see the Payment context's language.

// Kafka Streams: anti-corruption layer
// Shipping context consuming from the Payments context.
// Avro Serdes for the payment and shipment types are assumed to be
// configured via default.key.serde / default.value.serde.
StreamsBuilder builder = new StreamsBuilder();
builder.<String, PaymentEvent>stream("payments.events")
    .filter((key, event) -> event.getEventType().equals("PaymentAuthorized"))
    .mapValues(payment -> ShipmentPaymentConfirmed.newBuilder()
        .setOrderId(payment.getOrderId())
        .setAmount(payment.getAmount())
        .setConfirmedAt(Instant.now())
        .build())
    .to("shipping.internal.events");

Handling eventual consistency across bounded contexts

When bounded contexts communicate via Kafka, consistency is eventual — the consuming context will see the event milliseconds to seconds after it was produced. This creates specific design challenges that DDD practitioners must address.

Saga pattern for cross-context transactions. When a business operation spans multiple bounded contexts (placing an order involves the Order context, Payment context, and Inventory context), use the saga pattern: each context publishes events representing its step's completion, and downstream contexts react. If any step fails, compensating events undo previous steps. Model sagas in the architecture repository as Business Processes spanning multiple Application Components, with explicit failure and compensation paths.
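The compensation flow described above can be sketched in plain Java, without the Kafka wiring. The step names and the forward/compensate shape are illustrative assumptions, not a framework API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.BooleanSupplier;

// Minimal in-memory sketch of a saga: each step has a forward action and
// a compensating action. On failure, compensations run in reverse order.
public class OrderSagaSketch {

    record Step(String name, BooleanSupplier action, Runnable compensation) {}

    static List<String> run(List<Step> steps) {
        List<String> log = new ArrayList<>();
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            if (step.action().getAsBoolean()) {
                log.add(step.name() + ":ok");
                completed.push(step);
            } else {
                log.add(step.name() + ":failed");
                // Undo previously completed steps, most recent first
                while (!completed.isEmpty()) {
                    Step done = completed.pop();
                    done.compensation().run();
                    log.add("compensate:" + done.name());
                }
                break;
            }
        }
        return log;
    }

    public static void main(String[] args) {
        List<String> log = run(List.of(
            new Step("ReserveInventory", () -> true,  () -> {}),
            new Step("AuthorizePayment", () -> true,  () -> {}),
            new Step("CreateShipment",   () -> false, () -> {})));
        System.out.println(log);
    }
}
```

In a real system each forward action is an event published by one bounded context and each compensation is a compensating event consumed by the earlier contexts; the reverse-order unwind is the part that carries over directly.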

Read model staleness. Consumer read models are always slightly behind the producer's state. Design consumer UIs and APIs to tolerate this: display "last updated at" timestamps, use optimistic locking for concurrent modifications, and implement polling or server-sent events for real-time updates where business requirements demand it.

Ordering guarantees. Kafka guarantees ordering within a partition, not across partitions. Since the message key determines partition assignment, use the aggregate ID as the key — this ensures all events for a single aggregate are causally ordered. Cross-aggregate ordering requires additional design: event timestamps, vector clocks, or application-level sequencing.
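Application-level sequencing can be sketched as a merge that orders events from different aggregates by timestamp, using a per-aggregate sequence number as tie-breaker. The Event shape here is an assumption for illustration:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

// Sketch: reconstructing a cross-aggregate order at the application level
// by merging events from two aggregates' partitions, sorted by event
// timestamp with the per-aggregate sequence number as tie-breaker.
public class CrossAggregateOrdering {

    record Event(String aggregateId, long sequence, long timestampMs) {}

    static List<Event> merge(List<Event> a, List<Event> b) {
        return Stream.concat(a.stream(), b.stream())
            .sorted(Comparator.comparingLong(Event::timestampMs)
                    .thenComparingLong(Event::sequence))
            .toList();
    }

    public static void main(String[] args) {
        List<Event> orders   = List.of(new Event("order-1", 1, 100),
                                       new Event("order-1", 2, 300));
        List<Event> payments = List.of(new Event("pay-9", 1, 200));
        // The payment event interleaves between the two order events
        System.out.println(merge(orders, payments));
    }
}
```

Timestamps from different producers are only approximately comparable (clock skew), which is why vector clocks or an explicit business-level sequence are the more robust options the text mentions.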

If you'd like hands-on training tailored to your team (Sparx Enterprise Architect, ArchiMate, TOGAF, BPMN, SysML, Apache Kafka, or the Archi tool), you can reach us via our contact page.

Frequently Asked Questions

How is integration architecture modeled in ArchiMate?

Integration architecture in ArchiMate is modeled using Application Components (the systems being integrated), Application Services (the capabilities exposed), Application Interfaces (the integration endpoints), and Serving relationships showing data flows. Technology interfaces model the underlying protocols and middleware.

What is the difference between API integration and event-driven integration?

API integration uses synchronous request-response patterns where a consumer calls a provider and waits for a response. Event-driven integration uses asynchronous message publishing where producers emit events that consumers subscribe to — decoupling systems and improving resilience.

How does ArchiMate model middleware and ESB?

Middleware and ESB platforms appear in ArchiMate as Application Components in the Application layer that expose Integration Services. They aggregate connections from multiple source and target systems, shown through Serving and Association relationships to all connected applications.