Your Event Bus Is a Shared Database

⏱ 20 min read

Most event buses begin life as a promise. A neat promise, too. Teams will publish facts, other teams will subscribe, and the enterprise will finally escape the old trap of direct service-to-service integration. No more brittle point-to-point calls. No more hard dependencies. Just clean, asynchronous, loosely coupled systems gliding along on streams of business truth.

Then reality arrives.

A few quarters later, the event bus starts to look less like a communication mechanism and more like a public sewer everyone taps into. Topics become integration contracts nobody owns but everybody depends on. Services stop using events as notifications and start using them as remote tables. A team changes a field name in an “OrderCreated” payload and suddenly six downstream systems break in places nobody predicted. Another team replays a topic to rebuild a read model and accidentally triggers business side effects in three consumers. Compliance asks where customer consent is stored, and the answer is: “Well, bits of it are spread across Kafka.”

At that point, it’s worth saying the uncomfortable thing out loud: your event bus has become a shared database.

Not in implementation. In behavior. In coupling. In governance. And that is the level that matters.

This is one of the most common architectural failures in modern enterprise systems. It usually appears under the banners of event-driven architecture, Kafka adoption, microservices modernization, or digital transformation. The tooling is often excellent. The architecture is often not. Teams confuse asynchronous transport with decoupling. They assume publishing an event is inherently cleaner than exposing an API. It isn’t. If many services depend on the same event schema, lifecycle, ordering guarantees, and semantic interpretation, they are coupled just as surely as if they all queried the same database table.

The bus becomes a giant distributed integration surface. Worse, it becomes one with weak transactional boundaries, partial observability, ambiguous ownership, and historical baggage preserved forever in log retention.

That doesn’t mean event buses are bad. Kafka is not the villain here. Nor are microservices. The problem is architectural misuse: treating the bus as a source of shared truth without designing clear domain boundaries, ownership, semantic contracts, and migration paths. Event streaming is powerful. But power without boundaries turns into accidental centralization.

So let’s unpack the problem properly: what coupling looks like on an event bus, why domain-driven design matters more than broker features, how this failure emerges in enterprises, and what to do about it without stopping the business.

Context

Enterprise architecture has gone through a predictable cycle.

First we had shared databases. They were efficient, straightforward, and catastrophic for autonomy. Every application was independent right up until someone altered a table used by twelve systems.

Then came service-oriented architecture. We wrapped the shared database in services, added XML, governance boards, and enough middleware to justify a new data center wing.

Then microservices arrived with a compelling message: let teams own their services and data, communicate through APIs or events, and regain local control. Event streaming platforms such as Kafka fit naturally into this model. They handled high throughput, immutable logs, replay, decoupled timing, and durable integration. Very attractive.

But enterprises rarely move from one clean architecture to another. They migrate in layers. A billing platform still emits nightly batch files. A customer platform exposes REST. The order platform publishes events. The warehouse platform consumes all of the above. The event bus ends up carrying both domain events and integration sludge: canonical customer records, denormalized order aggregates, snapshot payloads, workflow commands disguised as events, and remediation messages for systems that cannot keep up.

In theory, an event bus should carry meaningful domain facts across bounded contexts. In practice, it often becomes the easiest place to dump data so nobody has to negotiate APIs or ownership. Once that happens, the bus turns into a shared dependency substrate.

The phrase “shared database” isn’t just rhetorical. It points to a structural smell:

  • many consumers depend on one producer’s schema
  • consumers infer behavior from fields never intended as formal contract
  • replay is used as backfill and operational recovery
  • no single team can evolve the message safely
  • the platform becomes a source of unofficial system-of-record behavior
  • business semantics drift without governance

That is shared persistence by another name.

Problem

The core problem is simple: asynchronous integration does not remove coupling; it merely changes its shape.

With synchronous APIs, the coupling is obvious. A caller depends on the endpoint, latency, availability, and response schema of another service.

With event buses, coupling hides in different places:

  • topic schema and field semantics
  • delivery guarantees
  • ordering assumptions
  • replay behavior
  • retention windows
  • idempotency expectations
  • timing of eventual consistency
  • hidden consumer dependencies
  • upstream event selection and naming

This coupling is often worse because it is less visible.

A producer thinks it publishes “CustomerUpdated” as a convenient event. Ten consumers treat it as the master customer record. One uses it for invoicing. One for fraud scoring. One for eligibility. One for GDPR audit. None of them talk to the producer team regularly. The producing team adds a nested address structure to support a mobile app use case. Half the consumers quietly mis-handle it. No compile-time dependency exists. No request fails immediately. The damage arrives later, staggered and expensive.

This is the event-driven version of a shared table with undocumented columns and too many readers.
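To make that failure mode concrete, here is a toy sketch (payloads and field names are hypothetical, not from any real system) of a consumer silently bound to an undeclared field shape. Nothing fails at publish time; the break surfaces only when the consumer runs against the new payload.

```python
# Hypothetical payloads: a consumer bound to an undeclared flat "city" field
# breaks when the producer nests it under "address".

def fraud_score_v1(event: dict) -> float:
    # Consumer assumed a flat "city" field that was never a formal contract.
    return 1.0 if event["city"] == "Testville" else 0.1

old_event = {"type": "CustomerUpdated", "city": "Testville"}
new_event = {"type": "CustomerUpdated", "address": {"city": "Testville"}}

assert fraud_score_v1(old_event) == 1.0
try:
    fraud_score_v1(new_event)      # no compile-time error; fails at runtime
except KeyError as exc:
    print(f"broken consumer: missing field {exc}")

def fraud_score_v2(event: dict) -> float:
    # Defensive accessor: tolerate both shapes and make the assumption visible.
    city = event.get("city") or event.get("address", {}).get("city")
    return 1.0 if city == "Testville" else 0.1

assert fraud_score_v2(new_event) == 1.0
```

The defensive accessor is a band-aid, not a fix; the real fix is a declared contract, which the later sections get to.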

Worse, teams often collapse domain semantics into transport convenience. They publish giant “entity changed” events because they are easy to emit from CRUD services. Those events are rarely good domain events. They are often database change notifications with a business label glued on. Downstream teams then build logic on top of implementation leakage.

That’s the real coupling architecture problem: not that systems communicate, but that they communicate through unstable, poorly bounded representations of shared concepts.

Forces

A decent architecture article should admit why smart teams make this mistake. They are usually responding to legitimate pressures.

Pressure for speed

Publishing to Kafka feels faster than negotiating API design or establishing a proper anti-corruption layer. A topic is easy to create. Consumers can self-serve. For delivery teams under quarterly targets, this looks like autonomy.

Pressure for reuse

Enterprise platforms love reuse. A “Customer” topic sounds efficient. Why have every system manage customer integration separately when they can all consume a common stream?

Because common streams often become common liabilities.

Pressure for analytics and real-time processing

Event streams are attractive for operational analytics, fraud detection, recommendation engines, and reconciliation pipelines. Once the stream exists, more consumers pile on. The event bus becomes both integration fabric and reporting substrate.

Pressure from migration

During monolith decomposition, events provide a way to tap into legacy state changes without rewriting everything. This is useful. It is also how transitional integration patterns become permanent architecture.

Pressure for loose coupling

Ironically, event buses are often chosen specifically to reduce coupling. But if teams do not model bounded contexts and domain ownership, they simply replace temporal coupling with semantic coupling.

Platform team incentives

Platform teams are often rewarded for adoption metrics: number of topics, number of producers, number of consumers, throughput. Those are platform success metrics, not architecture quality metrics.

That distinction matters.

Solution

The remedy is not “stop using events.” The remedy is to stop pretending the bus is neutral.

Treat the event bus as a shared integration surface that demands the same rigor you would apply to a shared database. Better still, design so it does not become a shared database in the first place.

The practical solution has five parts:

  1. Model bounded contexts first
  2. Publish domain events, not generic data exhaust
  3. Make ownership explicit
  4. Use derived read models locally, not as enterprise truth
  5. Design for reconciliation, drift, and change

This is where domain-driven design earns its keep. DDD is not about sticky notes and strategic jargon. It is about deciding what concepts mean, who owns them, and where that meaning stops. An Order in Sales is not necessarily an Order in Fulfillment. A Customer in CRM is not identical to a Customer in Billing. If you publish one “canonical” event for all, you are not harmonizing the enterprise. You are smearing different business meanings into a single artifact and making everyone pay for the confusion.

A healthy event-driven architecture publishes domain events from within a bounded context. Those events are statements of business fact meaningful to that context: OrderPlaced, PaymentAuthorized, ShipmentDispatched. They are not generic row copies like OrderRowUpdated. Downstream consumers translate those events into their own models.

That translation is not accidental overhead. It is the mechanism that preserves autonomy.
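The difference between a row copy and a business fact is easy to show in code. A minimal sketch, with illustrative field names: the CRUD-flavoured shape carries implicit semantics in a blob, while the domain event names its meaning explicitly.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# What a CRUD-flavoured "event" tends to look like: a row copy whose
# semantics live in the producer's head, not in the contract.
@dataclass
class OrderRowUpdated:
    order_id: str
    row: dict            # the whole persisted record, meaning implicit

# A domain event states a business fact in the context's own language.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str
    total_excl_tax: int  # minor units; whether tax is included is explicit
    currency: str
    placed_at: datetime

event = OrderPlaced(
    order_id="o-42",
    customer_id="c-7",
    total_excl_tax=12_50,
    currency="EUR",
    placed_at=datetime.now(timezone.utc),
)
```

Note that `total_excl_tax` encodes in the name a decision that, in the row-copy shape, every consumer has to guess.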

A healthier shape

In this model, Billing and Fulfillment do not treat the Sales event stream as a queryable shared source of customer and order truth. They consume business facts and build local models for their own purposes.

That is not zero coupling. Nothing is. But it is bounded coupling with clearer semantics and safer evolution.

Architecture

Let’s be concrete. There are two broad shapes of event bus usage.

The bad shape: shared data distribution

A service publishes broad entity snapshots. Many consumers subscribe. They depend on fields, optional attributes, update frequency, and ordering to maintain their own functions. The producer becomes an accidental upstream master. Topic changes require enterprise coordination.

This architecture scales throughput. It does not scale change.

The better shape: context-specific contracts

Each bounded context owns its data and publishes events meaningful to its domain. Consumers subscribe where relevant, transform into local language, and preserve internal autonomy. Cross-context integration uses anti-corruption layers, process managers, or dedicated translation services when needed.

That may seem less efficient than one canonical stream, but efficiency in enterprise architecture is often a trap. Shared representations save effort early and cost a fortune later.

Domain semantics matter more than Avro schemas

A schema registry helps. Compatibility checks help. Versioning helps. But these solve syntactic stability, not semantic stability. The dangerous breakage often comes from meaning, not structure.

Examples:

  • customerStatus = ACTIVE means marketable in one context, billable in another
  • orderTotal includes tax in one version, excludes tax in another
  • shipmentDate means promised date to one consumer and actual dispatch date to another
  • deletion events imply erasure for one consumer and soft-close for another

No schema registry will save you from semantic drift. This is why ubiquitous language and bounded contexts matter. The field can be valid JSON and still be architecturally toxic.
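The `customerStatus` example above can be sketched directly. In this hypothetical, both consumers receive a schema-valid payload, but each context must translate the value at its own boundary because "ACTIVE" carries a different business meaning in each:

```python
# Sketch: one upstream field, two bounded contexts, two meanings.
# The mappings are hypothetical; the point is that translation is explicit.

UPSTREAM = {"customer_id": "c-7", "customerStatus": "ACTIVE"}

def marketing_view(event: dict) -> dict:
    # In Marketing's context, ACTIVE means "marketable".
    return {"customer_id": event["customer_id"],
            "marketable": event["customerStatus"] == "ACTIVE"}

def billing_view(event: dict) -> dict:
    # In Billing's context, ACTIVE means "billable" -- and Billing also
    # bills suspended accounts, a rule Marketing must never inherit.
    return {"customer_id": event["customer_id"],
            "billable": event["customerStatus"] in {"ACTIVE", "SUSPENDED"}}
```

Each function is small, but the separation is the architecture: neither context consumes the raw upstream meaning.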

Kafka specifically

Kafka is especially good at luring teams into shared-database behavior because it is durable, replayable, and central. Those are strengths. They are also temptations.

Teams begin to think:

  • “Why build an API? The data is already on the topic.”
  • “Why own a local model? We can rebuild from the stream.”
  • “Why define a context boundary? Everyone can subscribe.”

That logic turns Kafka into a distributed table scan with better marketing.

Use Kafka for event streaming, integration, and durable asynchronous processing. Do not use Kafka as the enterprise’s unofficial operational database unless you are willing to accept the governance and coupling burden that comes with that choice.

Migration Strategy

Most enterprises are already in the mess. They do not need purity lectures. They need a migration path.

The right approach is almost always progressive strangler migration. You do not rip out the event bus. You reduce its role as a shared semantic dependency while introducing clearer domain ownership and local models.

Step 1: Identify topic dependency hotspots

Map which topics have many consumers, especially those with business-critical logic. Look for streams that smell like shared tables:

  • CustomerUpdated
  • OrderChanged
  • ProductSnapshot
  • AccountMaster
  • giant payloads with dozens of optional fields
  • consumers using fields not declared as contract

These are usually the risky assets.

Step 2: Classify events by type

Separate:

  • domain events — business facts
  • integration events — intended for external consumption
  • change data capture events — raw persistence changes
  • commands masquerading as events — “please do X”
  • operational events — retries, dead letters, corrections

A lot of problems come from mixing these categories on the same bus without naming them honestly.

Step 3: Introduce explicit ownership

For each high-value stream, name the owning team and bounded context. They own schema evolution, semantic documentation, compatibility policy, and deprecation path. If nobody wants to own a topic, that is usually because it already behaves like a shared database.

Step 4: Build consumer-side anti-corruption layers

Do not let every consuming service bind directly to raw enterprise events. Create translation layers, stream processors, or integration adapters that convert upstream events into local concepts.

This is not glamorous work. It is architecture in the real sense: absorbing change where it belongs.
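A consumer-side anti-corruption layer can be as simple as a single adapter class. A sketch, with all names hypothetical: the adapter is the only code that knows the upstream payload shape, and the rest of the service sees only local concepts.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InvoiceCandidate:
    """A local Billing concept -- deliberately not the upstream order model."""
    order_ref: str
    amount_minor: int
    currency: str

class SalesEventAdapter:
    """Anti-corruption layer: quarantines upstream field names in one place."""

    def translate(self, payload: dict) -> InvoiceCandidate:
        # If the upstream 'OrderPlaced' payload changes shape,
        # only this adapter changes -- not the billing logic behind it.
        return InvoiceCandidate(
            order_ref=payload["orderId"],
            amount_minor=int(payload["totalExclTax"]),
            currency=payload.get("currency", "EUR"),
        )

adapter = SalesEventAdapter()
candidate = adapter.translate({"orderId": "o-42", "totalExclTax": "1250"})
```

The value is not the code itself but the dependency direction: upstream change pressure stops at the adapter.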

Step 5: Replace broad snapshots with narrower domain events

Instead of publishing one giant CustomerUpdated, publish context-meaningful events such as:

  • CustomerRegistered
  • CustomerContactPreferencesChanged
  • CustomerCreditStatusReviewed

The goal is not more events for their own sake. The goal is less ambiguity.

Step 6: Add reconciliation paths

Every serious event-driven migration needs reconciliation. Event consumers fall behind. Messages get poisoned. Upstream bugs emit wrong state. Historical streams contain old semantics. If there is no way to compare local derived models with source-of-record state and repair drift, the architecture is brittle.

A mature migration includes:

  • periodic snapshots for comparison
  • replay with side-effect isolation
  • compensating events
  • discrepancy reports
  • business-owned exception handling
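The discrepancy-report part of that list is mostly a diff between two views of the same keys. A minimal sketch, with hypothetical records: compare the source-of-record state against a derived local model and classify drift, gaps, and orphans.

```python
# Minimal reconciliation sketch: source-of-record vs derived projection.

def reconcile(source: dict, derived: dict) -> list:
    report = []
    for key, truth in source.items():
        local = derived.get(key)
        if local is None:
            report.append((key, "missing in derived model"))
        elif local != truth:
            report.append((key, f"drift: source={truth!r} derived={local!r}"))
    for key in derived.keys() - source.keys():
        report.append((key, "orphan in derived model"))
    return report

source_of_record = {"o-1": "SHIPPED", "o-2": "CANCELLED", "o-3": "PLACED"}
local_projection = {"o-1": "SHIPPED", "o-2": "PLACED"}

issues = reconcile(source_of_record, local_projection)
# o-2 has drifted; o-3 never reached the projection
```

Real reconciliation runs over snapshots and handles volume, but the shape is the same: enumerate, compare, and route discrepancies to someone who owns them.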

Step 7: Strangle direct dependence over time

As local models mature, consumers should stop treating upstream streams as complete truth. Over time, fewer services depend directly on giant shared topics. The event bus remains, but as a mechanism for bounded facts rather than a pseudo-database.

Progressive migration view

That is the practical path: isolate, translate, reconcile, then retire broad dependencies.

Enterprise Example

Consider a large retailer modernizing its order platform.

The legacy estate had an ERP system, a commerce platform, warehouse management, fraud tooling, CRM, and finance systems. During modernization, the architecture team introduced Kafka as the enterprise backbone. The first major stream was OrderChanged. It carried the full order snapshot whenever anything changed: payment state, address, fulfillment line, tax adjustment, cancellation reason, loyalty flag, and more.

At first, this looked brilliant. Teams moved fast. The warehouse subscribed. Billing subscribed. Customer support subscribed. Data science subscribed. Marketing subscribed. There was one source of order updates and no need for point-to-point APIs.

Within eighteen months, the topic had over thirty consumers.

Then the failures started.

The commerce team changed line-item discount representation to support promotion stacking. Fraud scoring consumers misread order value. Finance consumers broke during replay because they had embedded assumptions about first-seen event ordering. Warehouse systems treated partial updates as complete state and generated duplicate picks. Customer support tools showed stale cancellation reasons because local projections had silently dropped malformed messages. Nobody could safely evolve the OrderChanged contract because nobody fully knew who depended on which fields.

Classic shared database behavior. Just on a log.

The fix was not to abandon Kafka. The retailer reworked the architecture around bounded contexts:

  • Sales published OrderPlaced, OrderAmended, OrderCancelled
  • Payments published PaymentAuthorized, PaymentCaptured, PaymentFailed
  • Fulfillment published AllocationConfirmed, ShipmentDispatched, DeliveryConfirmed

Each consuming domain built a local model. Finance got explicit accounting events instead of mining order snapshots. Customer support received a dedicated support-facing projection. Data science still consumed streams, but from governed interfaces meant for analytics rather than production semantics. A reconciliation service compared financial records, shipment records, and sales order state nightly and raised discrepancies for operational review.

The migration took time. Some transitional topics remained. But the critical shift was conceptual: the bus stopped being treated as a universal order table and started behaving as a stream of context-owned business facts.

That is a real enterprise move. Not purity. Not ideology. Just better boundaries under pressure.

Operational Considerations

Event-driven systems fail operationally in ways architects often underplay on whiteboards.

Replays are dangerous

Replaying a topic sounds elegant until consumers perform side effects. If a consumer sends emails, triggers payments, opens tickets, or updates third-party systems, replay can become a production incident generator.

Separate projection rebuilds from business action handlers. If you cannot replay safely, you do not really own your event processing model.
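One common way to achieve that separation is a replay flag that suppresses business actions while still applying projection updates. A sketch under those assumptions (the flag and handlers are illustrative):

```python
# Sketch: projection updates always run; side effects are suppressed
# during replay, so rebuilding a read model cannot re-send emails.

projection: dict = {}
emails_sent: list = []

def apply_event(event: dict, *, replaying: bool = False) -> None:
    # Projection update: safe to re-run any number of times.
    projection[event["order_id"]] = event["status"]
    # Business side effect: only on live processing.
    if not replaying and event["status"] == "SHIPPED":
        emails_sent.append(event["order_id"])

log = [
    {"order_id": "o-1", "status": "PLACED"},
    {"order_id": "o-1", "status": "SHIPPED"},
]

for e in log:                      # live processing: one email goes out
    apply_event(e)

projection.clear()                 # simulate losing the read model
for e in log:                      # replay: projection rebuilt, no new email
    apply_event(e, replaying=True)
```

A cleaner long-term shape is two separate consumers (projector and reactor) on independent offsets, but the flag makes the principle visible.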

Idempotency is not optional

Consumers will see duplicates. They will retry. They will restart after partial processing. If business operations are not idempotent, the architecture is depending on luck.
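The standard defense is a deduplication key checked before any state change. A minimal in-memory sketch (a real consumer would persist the processed keys durably, ideally in the same transaction as the state change):

```python
# Sketch of an idempotent consumer: a processed-key set turns redelivery
# into a no-op. In-memory here; durable storage in production.

balance = {"acct-1": 0}
processed: set = set()

def handle_payment_captured(event: dict) -> None:
    key = event["event_id"]        # stable, producer-assigned identifier
    if key in processed:
        return                     # duplicate delivery: do nothing
    processed.add(key)
    balance[event["account"]] += event["amount_minor"]

evt = {"event_id": "e-99", "account": "acct-1", "amount_minor": 500}
handle_payment_captured(evt)
handle_payment_captured(evt)       # redelivered: balance unchanged
```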

Ordering assumptions break

Kafka can preserve partition ordering, not global business ordering. If your design assumes all events for a business process arrive strictly in sequence across services, you are building on sand.
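One defensive pattern is to carry a per-entity version (or sequence number) in the event and ignore anything stale, rather than trusting arrival order. A sketch, with hypothetical events:

```python
# Sketch: per-entity versioning instead of assumed global ordering.

current: dict = {}   # entity_id -> (version, state)

def apply_if_newer(event: dict) -> bool:
    entity, version = event["order_id"], event["version"]
    seen_version = current.get(entity, (0, None))[0]
    if version <= seen_version:
        return False               # stale or duplicate: ignore it
    current[entity] = (version, event["status"])
    return True

# Events arrive out of order across partitions and services:
apply_if_newer({"order_id": "o-1", "version": 2, "status": "SHIPPED"})
late = apply_if_newer({"order_id": "o-1", "version": 1, "status": "PLACED"})
# the late v1 event is ignored; state stays at SHIPPED
```

This only works if the producer assigns monotonic versions per entity, which is itself a contract worth writing down.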

Retention is not truth

Long retention creates the illusion that the stream is a permanent source of record. It may not be legally complete, semantically stable, or operationally suitable for that role. Governance matters.

Observability must follow business flows

Technical metrics alone are useless. You need to trace business facts across topics and contexts:

  • order placed but never invoiced
  • payment captured but shipment not dispatched
  • cancellation received after dispatch
  • customer erasure requested but still present in downstream projections

This is where reconciliation becomes a first-class operational capability, not a cleanup activity.

Data governance gets harder, not easier

Sensitive data on an event bus spreads widely. PII, financial status, consent flags, and identity attributes can end up in too many topics with too many retention policies. Shared database problems return with compliance consequences attached.

Tradeoffs

A serious architecture always buys something and pays for something.

What you gain by reducing shared-bus coupling

  • better service autonomy
  • safer schema and semantic evolution
  • clearer bounded contexts
  • improved team ownership
  • lower blast radius of change
  • more resilient migrations
  • cleaner governance

What you pay

  • more translation logic
  • more local models and duplicated data
  • more careful event design
  • more operational reconciliation
  • less illusion of enterprise-wide canonical simplicity

That last one upsets people. Executives and platform teams often love canonical enterprise models because they look tidy on slides. But enterprises are not tidy. Billing, sales, support, and fraud do not see the world the same way. Pretending they do simply pushes complexity into integration failure later.

Duplication across bounded contexts is often a feature, not a flaw. It is the price of autonomy and contextual meaning.

Failure Modes

Here are the common ways this architecture goes bad.

The canonical topic trap

One giant topic per core entity. Many subscribers. Nobody can change anything. Every change becomes political.

CRUD events dressed as business events

A service emits “updated” events directly from persistence changes. Consumers infer business meaning from low-level state deltas. This always rots.

Hidden critical consumers

A downstream team depends on a field with no contract. Producer changes it. The issue appears in production weeks later.

Replay-induced side effects

A consumer rebuild operation re-triggers business actions. Duplicate notifications, double charges, repeated workflows.

Semantic drift over time

The event name stays the same, but the business meaning changes. The nastiest breakages are legal and financial, not technical.

Bus as operational query layer

Teams read directly from topics to answer transactional questions. They confuse event history with authoritative current state.

No reconciliation path

Local read models drift silently. The business notices only when a customer complains or finance misses revenue.

When Not To Use

Event buses are not a default answer.

Do not use an event bus as the primary integration mechanism when:

  • you need immediate request-response semantics and simple synchronous coupling is sufficient
  • the domain is small and the cost of eventing exceeds the value
  • consumers really need a stable API, not a stream of inferred state
  • the organization lacks discipline around ownership, contracts, and operations
  • the events would just be database row changes with no real domain semantics
  • compliance constraints make broad event dissemination unsafe
  • the main consumer count is tiny and direct integration is clearer

And specifically, do not use the event bus as a substitute for proper transactional ownership. If multiple services need current mutable access to the same concept with strong consistency, an event stream will not magically erase that design tension.

Sometimes the right answer is a modular monolith with well-defined boundaries. Sometimes it is a few services with APIs. Sometimes it is event streaming. Architecture is choosing the tension you can live with.

A few patterns sit nearby and are worth naming.

Event Carried State Transfer

Useful when consumers need data, but dangerous when overused. It can slide quickly into shared-database behavior if payloads become enterprise reference models.

Domain Events

The healthier form. Events represent business facts from a bounded context. Strongly recommended.

Change Data Capture

Valuable in migration, especially for strangler patterns. But CDC streams are not domain events. Treat them as transitional or infrastructural unless proven otherwise.

Anti-Corruption Layer

Essential when integrating across bounded contexts or legacy systems. It prevents external models from infecting internal ones.

CQRS

Helpful for building local read models from events. Not a license to let read models become enterprise truth.

Outbox Pattern

Good for ensuring reliable publication from transactional systems. It solves consistency between database write and event publication, not semantic coupling.
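The mechanics of the outbox pattern fit in a few lines. A minimal sketch using sqlite3 (table names and columns are illustrative; a real relay would poll continuously and hand payloads to an actual broker):

```python
# Outbox sketch: the business write and the outbox row commit in ONE
# transaction; a relay later publishes unsent rows and marks them sent.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         topic TEXT, payload TEXT, sent INTEGER DEFAULT 0);
""")

def place_order(order_id: str) -> None:
    with db:  # single transaction: state change and event row, or neither
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "PLACED"))
        db.execute("INSERT INTO outbox (topic, payload) VALUES (?, ?)",
                   ("orders", json.dumps({"type": "OrderPlaced",
                                          "order_id": order_id})))

def relay_once(publish) -> int:
    rows = db.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        publish(json.loads(payload))   # hand off to the real broker here
        with db:
            db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))
    return len(rows)

published = []
place_order("o-42")
relay_once(published.append)
```

If the process crashes between publish and the sent-flag update, the relay re-publishes on restart, which is why outbox delivery is at-least-once and consumers still need idempotency.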

Saga / Process Manager

Useful for coordinating long-running business processes across services. But if sagas are compensating for poor domain boundaries, step back and reassess.

Summary

An event bus is not automatically a decoupling mechanism. In many enterprises, it becomes a shared database with better throughput and worse honesty.

The smell appears when teams publish broad, reusable entity streams; when consumers bind directly to upstream semantics; when ownership is unclear; when replay, retention, and schema evolution are treated as platform details instead of architectural decisions. Kafka can amplify both good architecture and bad architecture. It does not rescue weak boundaries.

The way out is not anti-event dogma. It is better design.

Start with domain-driven design. Define bounded contexts. Publish events as business facts, not generic row copies. Make ownership explicit. Let consumers translate into local models. Build reconciliation as a normal capability. Migrate progressively with strangler patterns instead of heroic rewrites.

The most important line in all of this is simple: shared infrastructure is not the same as shared meaning.

When teams forget that, the bus becomes a database.

When they remember it, the bus becomes what it should have been all along: a useful, bounded medium for business facts moving between autonomous systems.
