Shared Kernel in Domain-Driven Design Microservices


Microservices are often sold like a clean breakup. Split the monolith, give every team its own bounded context, let services move independently, and everyone lives happily ever after. In practice, it is less a clean breakup and more a family inheritance dispute. A few concepts refuse to stay neatly on one side of the line. Customer identity. Product classification. Regulatory codes. Country definitions. Money. Those concepts leak across boundaries because the business itself shares them.

This is where architects get into trouble.

Some pretend the overlap is not real and duplicate everything. Soon every service has its own slightly different idea of what a “customer” is. Others centralize too aggressively and build a giant shared domain library that quietly becomes a distributed monolith in disguise. Both paths are common. Neither is elegant.

The Shared Kernel pattern sits in that uncomfortable middle ground. It is one of the more dangerous patterns in Domain-Driven Design because it solves a real problem by introducing a disciplined dependency. Used well, it reduces semantic drift and duplication. Used badly, it becomes organizational glue that hardens into concrete.

The key point is simple: a shared kernel is not “shared code” in the casual enterprise sense. It is a deliberately limited, collaboratively governed slice of a domain model shared by a small number of bounded contexts. Small is doing a lot of work in that sentence. So is governed.

In microservices architecture, especially one built around Kafka events, API integrations, and progressive migration from a monolith, the shared kernel becomes even more interesting. It can stabilize meaning during decomposition. It can also destroy team autonomy if allowed to spread. The pattern is useful precisely because it acknowledges an awkward truth: some domain semantics are expensive to duplicate, expensive to translate, and too critical to let diverge.

Let’s get into the mechanics, the politics, and the failure modes.

Context

Domain-Driven Design gives us bounded contexts because large enterprises do not have one clean, universal language. They have many. The meaning of “order,” “account,” or “price” changes depending on who is speaking. Sales means one thing. Billing means another. Fulfillment means something else again.

This is not a modeling flaw. It is the business.

Microservices take that DDD insight and turn it into a deployment strategy. Each service owns its own model, persistence, behavior, and lifecycle. In theory, that allows teams to move faster, evolve independently, and localize change.

But real enterprises are messy systems, not textbooks. Some concepts genuinely span multiple subdomains. A regulated financial product code must mean the same thing in onboarding, compliance screening, and reporting. A country code set is not a local opinion. A product taxonomy used in digital commerce, warehouse systems, and returns processing may be so foundational that divergence creates operational pain immediately.

In those cases, if every bounded context invents and maintains its own version, the organization pays repeatedly in translation logic, reconciliation workflows, event mapping, and support incidents. If one team owns everything centrally, everyone else queues behind them. Shared kernel is the compromise: share only what truly must be shared, and treat that shared part as a co-owned strategic asset.

This is especially relevant during migration. When decomposing a monolith, teams often discover that some domain objects are deeply embedded in multiple modules. Pulling them apart too early causes chaos. Sharing a carefully bounded kernel for a period of time can create enough stability to let the migration proceed without semantic fragmentation.

Problem

The problem is not simply code reuse. Code reuse is usually the wrong justification.

The real problem is semantic consistency under distributed ownership.

Suppose you are breaking a retail commerce platform into Catalog, Pricing, Promotions, Order Management, and Inventory services. Catalog and Pricing both need to understand a product’s identity, category hierarchy, unit-of-measure conventions, and regional availability codes. If each service models those independently, subtle mismatches emerge:

  • one service treats category as mutable, another as historical
  • one uses gross weight, another net weight
  • one supports regional overrides, another assumes global truth
  • one emits Kafka events with internal classification codes, another expects normalized taxonomy IDs

None of these differences look catastrophic on a whiteboard. In production, they become expensive. Search results disagree with product pages. Promotions target the wrong SKU family. Inventory planning reconciles against a different product grouping than order allocation. Support teams start using phrases like “data issue” when what they really mean is “we no longer agree on the meaning of the business.”

The opposite mistake is just as damaging. A platform team creates enterprise-domain-common, a giant shared package used by twenty services. It includes customer entities, product models, order aggregates, utilities, validation rules, enum sets, API clients, and half the company’s assumptions. Every small change now requires synchronized releases or heroic backward compatibility. Services claim to be independent but cannot ship without checking whether the shared library changed. Congratulations: you have rebuilt the monolith with Maven coordinates.

The Shared Kernel pattern exists to solve the first problem without falling into the second.

Forces

Several forces pull against each other here.

Autonomy versus consistency

Microservices live on team autonomy. Shared kernels impose coordination. The more you share, the more release cycles, governance, and social negotiation you require. Yet some semantics are too important to let drift.

This is not a technical tradeoff first. It is an organizational one.

Duplication versus translation

You can duplicate a concept in multiple bounded contexts and translate between them. That preserves local freedom but creates mapping complexity. Or you can share a core model and avoid translation for that slice. The former costs runtime and operational complexity. The latter costs coupling.

There is no free lunch. There is only choosing where to pay.

Stability versus innovation

A shared kernel should contain stable concepts. If the model changes weekly, sharing it will spread turbulence through multiple services. But the more stable and foundational the concept, the more attractive a shared kernel becomes.

Stable nouns are better candidates than volatile workflows.

Synchronous dependency versus asynchronous propagation

In service architectures, not all sharing is code-level. Some shared semantics can be propagated by events over Kafka, with services maintaining local projections. But event-driven propagation introduces eventual consistency, versioning issues, replay concerns, and reconciliation work.

Sometimes that is exactly right. Sometimes it is overkill for a small, highly stable, jointly owned concept.

Migration speed versus future purity

During a strangler migration from a monolith, architectural purity is often a luxury. A temporary shared kernel may help teams extract services faster without rewriting every concept boundary upfront. But temporary architecture has a bad habit of becoming permanent.

That means migration reasoning matters. You need to know whether the shared kernel is a transitional support beam or part of the intended long-term design.

Solution

A Shared Kernel is a small subset of the domain model shared by two or more bounded contexts because the cost of divergence is higher than the cost of coordinated change.

The words “small subset” are everything.

A proper shared kernel usually includes some combination of:

  • value objects with precise business semantics
  • identifiers and code systems
  • canonical classifications
  • invariants that must hold across participating contexts
  • carefully limited domain services or validation rules
  • event schema definitions for common domain facts
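To make the first two bullets concrete, here is a minimal sketch of what kernel-grade value objects might look like. The type names (ProductId, Money), the id format, and the validation rules are illustrative assumptions, not a prescribed API — the point is that the kernel holds small, immutable types with precise semantics and enforced invariants, nothing more.

```java
// Hypothetical shared-kernel value objects -- a sketch, not a prescribed API.
import java.math.BigDecimal;
import java.util.Currency;
import java.util.Objects;

public final class SharedKernelSketch {

    // Identifier with a validated format (format is an assumption for the
    // example); services pass this type around, never raw strings.
    public record ProductId(String value) {
        public ProductId {
            Objects.requireNonNull(value);
            if (!value.matches("PRD-\\d{8}")) {
                throw new IllegalArgumentException("Invalid product id: " + value);
            }
        }
    }

    // Money as a value object: amount plus currency, with the invariant that
    // cross-currency arithmetic is rejected rather than silently coerced.
    public record Money(BigDecimal amount, Currency currency) {
        public Money add(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("Currency mismatch");
            }
            return new Money(amount.add(other.amount), currency);
        }
    }

    public static void main(String[] args) {
        var id = new ProductId("PRD-00000042");
        var a = new Money(new BigDecimal("10.00"), Currency.getInstance("EUR"));
        var b = new Money(new BigDecimal("2.50"), Currency.getInstance("EUR"));
        System.out.println(id.value() + " -> " + a.add(b).amount());
    }
}
```

Note what is absent: no repositories, no HTTP clients, no workflow logic. That absence is the design.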

It should not become a dumping ground for convenience code, cross-cutting utilities, persistence abstractions, HTTP clients, or a broad “common model.” If your kernel starts containing generic helper classes, someone has already stopped thinking.

The kernel must also be explicitly governed. Shared means co-owned, not centrally imposed. The teams that depend on it need a collaboration model: versioning rules, compatibility expectations, review practices, and decision rights. If there is no social contract, the technical artifact will rot.

A good litmus test is this: if one team can change the kernel without deeply considering the consequences for the other teams, then it is not really a shared kernel. It is a dependency accident.

What belongs in the kernel?

The answer is domain semantics that meet four conditions:

  1. They are conceptually identical across the participating bounded contexts.
  2. They change infrequently.
  3. Divergence would cause material business harm.
  4. The set of participating teams is small and stable.

A product taxonomy might qualify. A “customer aggregate” usually does not, because different contexts inevitably need different customer views. Money, tax jurisdiction codes, or risk classification codes often do qualify. Full lifecycle workflows rarely do.

Architecture

The architecture of a shared kernel in microservices is less about where the package lives and more about keeping the blast radius tight.

A useful pattern is to share semantics, not behavior-heavy orchestration. Keep the kernel limited to high-value domain concepts. Let each service own its own aggregate lifecycle, persistence, and process rules. Share the smallest possible semantic center.

Here is the shape.

(Diagram: shared kernel architecture)

In this architecture:

  • Catalog, Pricing, and Compliance share a tightly bounded kernel.
  • They still publish their own Kafka events.
  • Downstream consumers build their own projections and models.
  • The kernel does not own workflows or aggregate state transitions for the whole estate.

That distinction matters. Shared kernel is not a central domain service. It is a semantic contract embodied in code and schemas.

Shared kernel and Kafka

Kafka changes the conversation because many enterprises use event streams as the de facto integration fabric. A shared kernel can define event schema fragments or core value objects that appear in events. For example:

  • ProductId
  • TaxCategory
  • CountryCode
  • RegulatoryClassification
  • Money

This can improve consistency across producers and consumers. But there is a trap. If every event schema imports the same large model package, event evolution becomes painful. Consumers are then coupled not only to topic contracts but also to code-level semantics that may not fit their context.

A better approach is to share the semantic core and let event contracts remain explicit and versioned. The shared kernel informs event design; it should not erase bounded context boundaries.
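One way to sketch that separation in code: the event contract is its own explicit, versioned type that serializes to plain fields, while the kernel's value objects are used only at the producing boundary. The event name, fields, and versioning convention here are illustrative assumptions.

```java
// Sketch: explicit, versioned event contract that *uses* kernel semantics
// without leaking kernel classes into the wire format.
import java.math.BigDecimal;

public final class EventContractSketch {

    // Kernel value object (assumed; see the shared-kernel discussion).
    public record ProductId(String value) {}

    // Explicit v1 event payload: flat, primitive-ish fields. Consumers depend
    // on this contract, not on the kernel's code.
    public record PriceChangedV1(String productId, BigDecimal newAmount, String currency) {}

    // Mapping at the boundary: kernel type -> wire contract.
    public static PriceChangedV1 toEvent(ProductId id, BigDecimal amount, String currency) {
        return new PriceChangedV1(id.value(), amount, currency);
    }

    public static void main(String[] args) {
        var event = toEvent(new ProductId("PRD-00000042"), new BigDecimal("19.99"), "EUR");
        System.out.println(event);
    }
}
```

When the contract needs to change, you add PriceChangedV2 alongside V1 and deprecate on a schedule; the kernel's ProductId semantics stay stable underneath both.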

Shared kernel and local models

A service can use shared kernel types inside its own domain model without giving up autonomy. For instance, Pricing may use ProductId and TaxCategory from the kernel while still having its own PriceList, PriceRule, and PromotionalAdjustment aggregates. Catalog may use the same ProductId and TaxCategory but have entirely different entities and invariants around product content and lifecycle.

This is where many teams overreach. Sharing ProductId is fine. Sharing the whole Product aggregate is usually a mistake.
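The "share the identifier, not the aggregate" boundary can be sketched like this. PriceRule and its invariants are hypothetical Pricing-local names; only ProductId is assumed to come from the kernel.

```java
// Sketch: a Pricing-local aggregate that uses a kernel identifier without
// importing a whole shared Product aggregate.
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

public final class PricingLocalModel {

    // From the shared kernel (assumed): just the identifier, nothing else.
    public record ProductId(String value) {}

    // Entirely local to Pricing: lifecycle, invariants, persistence shape.
    public static final class PriceRule {
        private final ProductId product;
        private final List<BigDecimal> history = new ArrayList<>();
        private BigDecimal current;

        public PriceRule(ProductId product, BigDecimal initial) {
            this.product = product;
            this.current = initial;
        }

        // Local invariant: prices never go negative. Price history is
        // Pricing's own concern; Catalog never sees it.
        public void changeTo(BigDecimal next) {
            if (next.signum() < 0) throw new IllegalArgumentException("Negative price");
            history.add(current);
            current = next;
        }

        public BigDecimal current() { return current; }
        public int revisions() { return history.size(); }
    }

    public static void main(String[] args) {
        var rule = new PriceRule(new ProductId("PRD-00000001"), new BigDecimal("10.00"));
        rule.changeTo(new BigDecimal("12.00"));
        System.out.println(rule.current() + " after " + rule.revisions() + " change(s)");
    }
}
```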

Migration Strategy

Most enterprises do not start with beautifully separated bounded contexts. They start with a monolith, a batch ecosystem, and an integration layer that knows too much.

When moving toward microservices, a shared kernel can be a practical migration aid. The key word is practical, not idealistic.

Imagine a monolith where product classification logic is embedded across catalog maintenance, pricing calculations, promotions eligibility, and compliance exports. If you try to extract those services independently and let each create its own classification model from day one, you will spend months fighting semantic drift and endless reconciliation defects. A temporary or intentionally durable shared kernel can anchor the common semantics while the rest of the system is decomposed.

A progressive strangler migration often looks like this:

(Diagram 2: migration strategy)

Step 1: Identify semantic seams

Do not begin with code extraction. Begin with language. Which concepts are actually the same across the candidate domains? Which only look similar? Workshops with domain experts are more valuable here than static analysis tools.

This is classic DDD work: understand the ubiquitous language, the places it changes, and the places it truly does not.

Step 2: Isolate a minimal kernel

Extract only the concepts that are stable and shared. This may include IDs, code sets, classification rules, and serialization contracts. Resist the temptation to move process logic into the kernel.

Step 3: Introduce anti-corruption layers where needed

Not every part of the monolith should consume the kernel directly. In fact, during migration, anti-corruption layers can protect new services from old assumptions. The shared kernel should reduce duplication among emerging services, not force the legacy model onto all participants.

Step 4: Publish domain events and run reconciliation

As services take ownership, they will emit Kafka events. Because migration is messy, those events and the old monolith outputs will not align perfectly. Reconciliation becomes critical. Build explicit comparison processes for key business facts:

  • product classification alignment
  • tax category consistency
  • regulatory export equivalence
  • pricing eligibility match rates

Reconciliation is not a temporary nuisance. It is the price of controlled migration.
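A reconciliation pass over something like classification alignment can be as simple as the sketch below: compare the monolith's view with the new service's view and emit a mismatch report. The data shapes and source names are illustrative; real jobs would read from both systems and feed an exception queue.

```java
// Sketch of a reconciliation pass: compare the legacy (monolith) view of
// product classification with the new service's view and report mismatches.
import java.util.Map;
import java.util.TreeMap;

public final class ClassificationReconciliation {

    public static Map<String, String> mismatches(Map<String, String> legacy,
                                                 Map<String, String> service) {
        var report = new TreeMap<String, String>();
        // Same id, different classification -> drift.
        for (var entry : legacy.entrySet()) {
            String current = service.get(entry.getKey());
            if (!entry.getValue().equals(current)) {
                report.put(entry.getKey(), entry.getValue() + " != " + current);
            }
        }
        // Ids the legacy side does not know about -> also drift.
        for (var key : service.keySet()) {
            if (!legacy.containsKey(key)) report.put(key, "missing in legacy");
        }
        return report;
    }

    public static void main(String[] args) {
        var legacy  = Map.of("PRD-1", "ELECTRONICS", "PRD-2", "GROCERY");
        var service = Map.of("PRD-1", "ELECTRONICS", "PRD-2", "FRESH", "PRD-3", "TOYS");
        System.out.println(mismatches(legacy, service));
    }
}
```

The useful output of such a job is not a pass/fail bit but a trend line: mismatch counts should fall as the migration matures, and any spike is a semantic-drift alarm.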

Step 5: Decide whether the kernel is transitional or strategic

Some shared kernels should be retired once contexts are mature enough to own independent models with translation. Others represent enduring shared semantics. Make that decision consciously. Otherwise, the migration artifact quietly becomes a permanent dependency nobody understands.

Enterprise Example

Consider a global bank modernizing its onboarding and compliance architecture.

The legacy platform has a large customer master used by onboarding, KYC screening, sanctions checks, and regulatory reporting. Leadership initially wants a single “Customer microservice” to solve everything. That instinct is understandable and wrong.

In domain terms, these contexts do not mean the same thing by customer:

  • Onboarding cares about applicant journey, document collection, and channel experience.
  • KYC/AML cares about legal identity, beneficial ownership, risk factors, and screening evidence.
  • Reporting cares about regulatory classifications, filing obligations, and reporting snapshots.

Trying to force one universal customer model creates endless argument and slow delivery.

However, there is a smaller set of semantics that truly must remain consistent across these contexts:

  • legal entity identifier
  • country and jurisdiction codes
  • party type classification
  • regulatory status codes
  • risk rating scale definitions
  • some core validation rules around identity formats

That became the shared kernel.

Onboarding, Screening, and Reporting each built their own bounded contexts and local models. They consumed the shared kernel for those stable semantics. Events were published over Kafka:

  • PartyRegistered
  • IdentityValidated
  • RiskRatingAssigned
  • RegulatoryStatusUpdated

Each service maintained its own persistence. Reporting built a separate projection model optimized for filings. Screening kept richer evidence structures than anyone else needed. Onboarding evolved UI-friendly applicant states without infecting the rest of the estate.

The shared kernel prevented one of the classic bank failures: mismatched regulatory code interpretation across systems. Before the change, one code list update had to be manually patched in three platforms, and they often drifted. Afterward, coordinated kernel versioning and event schema alignment reduced those defects dramatically.

But there were tradeoffs. Kernel changes required a joint review board across the three teams. That slowed some releases. The bank accepted that cost because the business risk of semantic drift in compliance domains was far higher than the cost of coordination.

This is the right use case: small number of tightly related contexts, high cost of divergence, stable shared concepts, and clear co-ownership.

Operational Considerations

Architects often discuss shared kernel as if it ends at model design. In real systems, the hard part begins in operations.

Versioning discipline

A shared kernel needs semantic versioning, compatibility rules, and consumer upgrade expectations. Breaking changes should be rare and explicit. If every release is a breaking release, the kernel is not stable enough to be shared.

Release coordination

Not every kernel update should force synchronized deployment. If it does, you have too much coupling. Backward-compatible additions, tolerant readers, and deprecation windows matter. This is especially true when Kafka event schemas are involved.
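A tolerant reader, in sketch form: consume only the fields you need, default the optional ones, and silently ignore anything newer schema versions add. The payload is modeled as a plain map here to avoid tying the example to any particular serialization library; the field names are illustrative.

```java
// Sketch of a tolerant reader: required fields fail loudly, optional fields
// get defaults, unknown fields are ignored -- so a producer can add fields
// without forcing consumers to redeploy.
import java.util.Map;

public final class TolerantReader {

    public record RiskRating(String partyId, String rating) {}

    public static RiskRating read(Map<String, Object> payload) {
        // Required field: fail loudly if missing.
        Object partyId = payload.get("partyId");
        if (partyId == null) throw new IllegalArgumentException("partyId is required");
        // Optional field added in a later schema version: default if absent.
        Object rating = payload.getOrDefault("rating", "UNRATED");
        // Any other fields (new in v2, v3, ...) are simply not looked at.
        return new RiskRating(partyId.toString(), rating.toString());
    }

    public static void main(String[] args) {
        var v2Payload = Map.<String, Object>of(
                "partyId", "LEI-123", "rating", "LOW", "newV2Field", "ignored");
        System.out.println(read(v2Payload));
    }
}
```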

Schema evolution

If the kernel informs event structures, you need schema governance. Avro, Protobuf, or JSON Schema with compatibility checks can help. The practical concern is not tooling elegance. It is avoiding consumer breakage while allowing the domain to evolve.

Observability

Semantic drift still happens, even with a shared kernel. Monitor for it. Build dashboards around reconciliation mismatches, event validation failures, code-set discrepancies, and cross-service interpretation errors.

The operational smell is not just exceptions in logs. It is business facts that stop agreeing.

Reconciliation loops

In event-driven estates, eventual consistency is not a slogan; it is a workload. Shared semantics reduce reconciliation scope but do not eliminate it. Build recurring reconciliation jobs, exception queues, and human review paths for critical records.

For regulated or financial domains, reconciliation is architecture, not housekeeping.

Governance without bureaucracy

Shared kernels need governance, but heavy architecture boards will kill them. The best model is a small cross-team stewardship group with clear ownership and fast review cycles. Keep the participant set small. Once ten teams depend on the kernel, coordination cost starts eating the value.

Tradeoffs

Shared kernel is useful because it acknowledges tradeoffs rather than pretending to eliminate them.

Benefits

  • reduces semantic duplication
  • lowers translation complexity between closely related bounded contexts
  • stabilizes critical code systems and value objects
  • supports migration from monolith to services
  • helps align event semantics across Kafka-based systems
  • decreases reconciliation noise for foundational concepts

Costs

  • increases coordination between teams
  • creates release and versioning overhead
  • can slow independent delivery
  • invites scope creep into a generic common library
  • may hide bounded context distinctions if overused
  • can become a stepping stone to distributed monolith behavior

The deepest tradeoff is autonomy versus shared meaning. If your organization values independent team movement above all else, you should be cautious. If your domain is highly regulated and semantic drift is dangerous, the balance shifts.

Failure Modes

This pattern fails in familiar, predictable ways. Most are not technical surprises. They are governance failures expressed in code.

1. The kernel grows fat

What started as code lists and value objects becomes entities, repositories, services, API clients, and “just a few helpers.” Soon every service imports it. Build pipelines become hostage to a supposedly common package.

This is the most common failure. The cure is brutal scope discipline.

2. Shared kernel becomes central ownership

One team starts acting as the platform gatekeeper. Other teams lose influence over definitions that affect them. The result is resentment, workarounds, and local forks. At that point the kernel is no longer shared in any meaningful sense.

3. False semantic agreement

Teams think they share a concept, but they do not. “Customer status” might mean onboarding stage in one context and regulatory classification in another. Forcing both into one type creates confusion and bad workflows.

This is why domain semantics work must come before code extraction.

4. Synchronized deployment dependency

A small change in the kernel requires all dependent services to upgrade and deploy in lockstep. This is how microservices quietly become a distributed monolith. Good compatibility practices are non-negotiable.

5. Event contract contamination

Kafka events become thin wrappers around shared model classes. Consumers inherit assumptions they should not. Event evolution slows because too much meaning is packed into shared code instead of explicit, versioned contracts.

6. Migration fossilization

A kernel introduced as a temporary migration scaffold never gets revisited. Years later it still reflects monolith-era assumptions and blocks bounded context refinement.

Temporary architecture should have an expiration date, or it will become your permanent architecture.

When Not To Use

There are plenty of situations where Shared Kernel is exactly the wrong move.

Do not use it when contexts are only superficially similar

If two domains use the same words but with different meaning, sharing a model creates more damage than duplication. Use translation and anti-corruption layers instead.

Do not use it with many teams

Once the number of participating teams gets large, governance cost explodes. A shared kernel is for a small cluster of closely related contexts, not the whole enterprise.

Do not use it for fast-changing domains

If the concepts are evolving rapidly, shared code will amplify churn. Better to let teams model locally and integrate via explicit contracts.

Do not use it as a utility library

A utility library is not a shared kernel. Logging wrappers, date helpers, HTTP clients, and framework base classes belong elsewhere. Mixing technical commonality with domain sharedness is a category error.

Do not use it to avoid difficult DDD work

If teams are arguing about whether a concept really is the same, that is not a sign to share first and sort it out later. It is a sign to do more domain exploration.

Shared Kernel makes the most sense when considered alongside neighboring DDD and integration patterns.

Related Patterns

Anti-Corruption Layer

Use this when one context should protect itself from another’s model. It is often the right choice when semantics are similar but not identical. During migration, you may combine ACLs with a shared kernel for only the truly common concepts.
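An ACL in sketch form: translate the legacy record into the new context's own model at the boundary, so old assumptions stop there. The legacy field names, status codes, and mapping are illustrative.

```java
// Sketch of an anti-corruption layer: translate a legacy monolith record into
// the new context's model instead of importing the legacy type directly.
import java.util.Map;

public final class AntiCorruptionLayer {

    // Legacy shape, roughly as the monolith exposes it (assumed).
    public record LegacyCustomer(String custNo, String statusCd) {}

    // The new context's own model, in its own ubiquitous language.
    public enum Stage { STARTED, DOCUMENTS_PENDING, APPROVED }
    public record Applicant(String applicantId, Stage stage) {}

    // Illustrative code mapping; in reality this table is where most of the
    // domain archaeology ends up.
    private static final Map<String, Stage> STATUS_TO_STAGE = Map.of(
            "01", Stage.STARTED,
            "02", Stage.DOCUMENTS_PENDING,
            "09", Stage.APPROVED);

    // The translation is the whole point: old assumptions stop here.
    public static Applicant translate(LegacyCustomer legacy) {
        Stage stage = STATUS_TO_STAGE.get(legacy.statusCd());
        if (stage == null) {
            throw new IllegalArgumentException("Unknown legacy status: " + legacy.statusCd());
        }
        return new Applicant("APP-" + legacy.custNo(), stage);
    }

    public static void main(String[] args) {
        System.out.println(translate(new LegacyCustomer("778812", "02")));
    }
}
```

Contrast with the shared kernel: here the downstream context deliberately owns a different model and pays the translation cost; the kernel is reserved for the few concepts where translation is the wrong trade.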

Published Language

A published language defines a stable set of integration contracts. In Kafka-based systems, event schemas often play this role. Shared kernel can support a published language, but they are not the same thing. One is a co-owned semantic model subset; the other is an integration contract.

Conformist

Sometimes one downstream context simply conforms to an upstream model. That is easier than co-owning a shared kernel, but it sacrifices influence. In enterprise reality, this is common where one source is genuinely authoritative.

Strangler Fig

Shared kernel is often useful in strangler migrations because it creates semantic stability while capabilities are pulled out of the monolith. But the pattern must not become a dumping ground for all the unresolved monolith design.

Summary

The Shared Kernel pattern is one of those ideas that sounds modest and becomes political very quickly.

At its best, it is a sharp tool for preserving shared meaning across a small number of bounded contexts. It keeps foundational semantics aligned. It reduces translation waste. It helps microservice migrations proceed without splintering critical domain concepts. It supports Kafka event ecosystems by grounding shared value objects and code systems in a governed semantic core.

At its worst, it is a common library with delusions of grandeur. It bloats. It centralizes power. It forces synchronized change. It turns independently deployable services into a distributed monolith held together by package imports and release notes.

The difference is discipline.

Use a shared kernel only when the domain semantics are truly the same, the participating contexts are few, the concepts are stable, and the cost of divergence is meaningfully higher than the cost of collaboration. Keep it tiny. Govern it jointly. Version it carefully. Reconcile relentlessly during migration. And revisit whether it still deserves to exist.

In enterprise architecture, there are patterns that reward ambition and patterns that reward restraint. Shared Kernel is firmly in the second camp.

The memorable line here is the only one worth carrying into your next design review: share meaning, not convenience.

Frequently Asked Questions


How do you document microservices architecture for governance?

Use ArchiMate Application Cooperation diagrams for the service landscape, UML Component diagrams for internal structure, UML Sequence diagrams for key flows, and UML Deployment diagrams for Kubernetes topology. All views can coexist in Sparx EA with full traceability.

What is the difference between choreography and orchestration in microservices?

Choreography has services react to events independently — no central coordinator. Orchestration uses a central workflow engine that calls services in sequence. Choreography scales better but is harder to debug; orchestration is easier to reason about but creates a central coupling point.