Microservices usually don’t fail with a bang. They fail with a favor.
A team wants to avoid duplication, so they extract a “common” library. Another team needs the same customer object, so they add a few more fields. A platform group centralizes logging, retries, event contracts, auth helpers, money types, error wrappers, feature flags, and a bit of persistence glue “for consistency.” Six months later, nobody can upgrade anything without a release train, half the services drag around dependencies they don’t understand, and every emergency fix feels like defusing a bomb through a keyhole.
This is one of the quietest traps in microservices architecture: the shared library that looks like reuse but behaves like coupling. Not because sharing code is always wrong. It isn’t. The problem is that in a distributed system, code reuse and domain autonomy are often pulling in opposite directions. Every shared package creates a hidden dependency fan-out graph. And when the graph gets dense enough, the architecture stops being a set of independent services and starts behaving like a distributed monolith wearing a microservice costume.
That’s the real cost. Not Maven coordinates. Not package versions. Not a little CI pain. The cost is that your boundaries become performative. Teams still say “service ownership,” but they are no longer free to evolve their own models, APIs, and release cadence. They have local code and centralized consequences.
I’ve seen this pattern in banks, retailers, telecoms, and SaaS firms. The details differ, but the shape is the same: a platform intended to accelerate delivery gradually becomes the thing that prevents it. Especially when Kafka enters the picture, because then the shared library often smuggles in event schemas, serializer logic, topic naming conventions, retry semantics, and consumer assumptions. What began as convenience becomes protocol by accident.
This article is about that hidden cost. We’ll look at why shared libraries are so seductive, where they cross the line, how domain-driven design changes the conversation, and how to migrate away without setting the estate on fire. We’ll also cover the tradeoffs honestly, because there are cases where a shared library is exactly the right move. Architecture is not religion. It is judgment under constraint.
Context
Microservices promised something very specific: independent deployability, bounded context autonomy, and the ability for teams to evolve parts of a system without negotiating every change across the enterprise. That promise was never about splitting codebases for sport. It was about reducing coordination cost.
And coordination cost is the tax that kills large systems.
When teams move from a modular monolith to microservices, they often lose one thing they had become deeply addicted to: easy in-process reuse. In a monolith, a shared module is often sensible. The runtime is the same. Release cycles are unified. Type systems line up. Refactoring can be global. If a shared package changes, you can rebuild the whole thing and know where you stand.
Microservices remove that comfort. They replace local compilation errors with distributed integration problems. That makes teams reach for shared libraries as a psychological bridge back to certainty. “If we all use the same package, at least our models stay aligned.” It sounds prudent. It feels efficient. It is frequently the beginning of the trouble.
The subtle point is this: a shared library is not just code reuse. In enterprise systems it often becomes shared policy, shared semantics, shared release timing, and shared failure behavior. Once you centralize those things, your services may still run in different processes, but they no longer change independently.
This is where domain-driven design matters. DDD is not about drawing context maps in workshops and then forgetting them. It is about protecting meaning. A Customer in billing is not necessarily the same thing as a Customer in marketing. An Order in fulfillment is not the same as an Order in pricing. Same word, different job. Shared libraries tend to erase this difference because code hates ambiguity. Domains need it.
Problem
The core problem is dependency fan-out.
One shared library is published. Ten services depend on it. Then the library itself depends on a security utility, a Kafka wrapper, a serialization framework, a metrics package, and a validation module. One change in the shared package now ripples through dozens of transitive dependencies and runtime behaviors.
Picture the trap: one shared package in the middle, ten services fanned out around it, and a tangle of transitive dependencies hanging underneath.
This graph does not look dangerous at first glance. It looks efficient. But it has all the ingredients of systemic fragility:
- many services share one upgrade path
- transitive dependencies leak infrastructure concerns into every team
- a domain change in one area appears as a forced version change in unrelated services
- runtime behavior becomes standardized in places where local variation is actually valuable
The worst version of this is the “enterprise common model” library. That package contains canonical domain entities, event classes, error types, maybe some API DTOs, perhaps utility methods for persistence and validation. It is sold as consistency. In practice, it is a treaty that none of the domains truly agreed to.
A shared customer class is rarely just a customer class. It becomes a battlefield between product lines:
- marketing wants segmentation attributes
- billing wants tax identity and invoice preferences
- support wants communication settings
- fraud wants verification state
- loyalty wants points balance and tiering
Soon the object is obese. Worse, every service receives pressure to accept changes that are not part of its own bounded context. This is exactly the opposite of domain isolation.
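The shape of that battlefield is easy to sketch. The class and field names below are invented for illustration, but the pattern will look familiar: every field ends up nullable because no single context owns the invariants.

```java
// Illustrative anti-pattern: one shared Customer class for every domain.
// Every field is optional because no single context owns the invariants.
class SharedCustomer {
    String id;
    // marketing
    String segment;
    // billing
    String taxId;
    String invoicePreference;
    // support
    String preferredChannel;
    // fraud
    String verificationState;
    // loyalty
    Integer pointsBalance;
    String tier;
    // Which of these must be non-null? Nobody can say, so validation dies here.
}
```

Each team populates its own slice and ignores the rest, which is exactly how consumers end up depending on fields that producers do not reliably fill.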
And then there is Kafka. Teams often standardize around a shared event library to keep producers and consumers “in sync.” But sync is the wrong goal in event-driven architecture. Loose compatibility is the goal. A producer should not need to ship a code package to all consumers in order to evolve its event contract. If it does, the events are not integration contracts. They are remote class definitions.
Forces
A good architecture article has to admit why sensible people make the bad move. Shared libraries persist because they solve real pains.
First, duplication is emotionally expensive. Developers hate copying code. A duplicated Money type or validation rule feels amateurish, especially in organizations that reward visible standardization.
Second, security and compliance teams often want central enforcement. One auth helper. One audit interceptor. One encryption utility. On paper, this reduces risk.
Third, platform teams are measured on consistency and paved roads. Shared libraries are an easy artifact to produce. They are tangible. They look like leverage.
Fourth, language ecosystems encourage it. Java and .NET shops have mature package management and rich type systems. Publishing a common package is easier than negotiating API boundaries.
And finally, microservices create genuine infrastructure repetition. Tracing setup, correlation IDs, Kafka producer settings, retries, dead-letter policies, OpenTelemetry instrumentation, structured logging. Some of this should indeed be shared.
So this is not a morality play. The force field is real:
- autonomy vs consistency
- duplication vs coupling
- local optimization vs enterprise control
- domain purity vs delivery speed
- compile-time safety vs runtime decoupling
Architects get into trouble when they pretend one side always wins. Neither does.
The useful distinction is not “shared library good or bad.” It is what is being shared.
Sharing a date utility is not the same as sharing a domain object.
Sharing an OpenTelemetry bootstrap module is not the same as sharing Kafka event classes.
Sharing a password hashing component is not the same as sharing customer semantics.
That distinction is the whole game.
Solution
My opinion is blunt: in microservices, share technical capability sparingly, but do not share domain semantics in code across bounded contexts.
If two services need the same business concept, that does not mean they need the same class. It usually means they need a contract, translation, and a bit of humility.
The practical solution has four parts.
1. Split shared libraries into categories
Most organizations lump all shared code into one or two packages. That is lazy taxonomy, and lazy taxonomy creates expensive systems.
Use three buckets:
- Foundation libraries: narrow, technical, low-volatility concerns such as logging bootstrap, telemetry setup, auth token parsing, or secure config access.
- Platform SDKs: optional wrappers around infrastructure like Kafka producers, feature flags, or service discovery, but with strict limits and extension points.
- No shared domain model libraries across bounded contexts.
That last rule matters. The billing service can have a CustomerAccount. The CRM service can have a CustomerProfile. The loyalty service can have a Member. If they all share one Customer class, you have centralized semantics and decentralized confusion.
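In code, the split looks something like this. The type names come from the text above; the fields and invariants are illustrative. The point is that each context can now enforce rules the shared class never could.

```java
// Billing's local model: small, and able to enforce billing's invariants.
record CustomerAccount(String accountId, String taxId, String invoiceCurrency) {
    CustomerAccount {
        if (taxId == null || taxId.isBlank())
            throw new IllegalArgumentException("billing requires a tax identity");
    }
}

// Loyalty's local model: a different concept, not a variant of the same class.
record Member(String memberId, int pointsBalance, String tier) {
    Member {
        if (pointsBalance < 0)
            throw new IllegalArgumentException("points cannot be negative");
    }
}
```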
2. Use schema contracts, not shared code, for integration
For synchronous APIs, publish OpenAPI or protobuf contracts if useful. For Kafka, publish versioned schemas and compatibility rules. Consumers should generate or map local representations rather than import producer-owned code.
This keeps the dependency direction clean: consumers depend on a contract, not the producer’s internal model.
3. Translate at boundaries
Boundary translation feels like “extra code” to teams raised on DRY. In distributed systems, it is often money well spent. Mapping is not waste if it protects autonomy.
A small mapper is cheaper than a year of forced coordination.
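Here is what such a mapper can look like on the billing side. A plain Map stands in for a schema-decoded event payload (an Avro GenericRecord or a JSON node in practice), and the field names are assumptions about the contract, not a real producer's schema:

```java
import java.util.Map;

// Billing's local view of an order event. The producer's model never enters
// this codebase; only the contract fields billing cares about do.
record InvoiceCandidate(String orderId, long amountMinor, String currency) {}

class OrderEventMapper {
    // The Map stands in for a schema-decoded payload. Unknown fields are
    // simply ignored, so the producer can add fields without breaking us.
    static InvoiceCandidate toInvoiceCandidate(Map<String, Object> event) {
        return new InvoiceCandidate(
                (String) event.get("orderId"),
                ((Number) event.get("totalMinor")).longValue(),
                (String) event.get("currency"));
    }
}
```

Ten lines of mapping, and the producer can now evolve its internal model freely: the coupling surface is three named contract fields, not a shared class hierarchy.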
4. Govern by fitness functions, not architecture slogans
Measure fan-out. Measure upgrade lag. Measure how many services are pinned to old package versions. Measure how often a shared library release triggers multi-team testing. If you can’t see the coupling, you’ll underestimate it.
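A fitness function can be as small as this sketch: take a dependency inventory (library to consuming services, normally extracted from build metadata) and flag any shared library whose fan-out exceeds a budget. The inventory here is hand-written for illustration.

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Fitness function: flag shared libraries whose fan-out exceeds a budget.
// In practice the inventory comes from build metadata (Maven, Gradle, etc.).
class FanOutCheck {
    static Set<String> overBudget(Map<String, Set<String>> libToConsumers,
                                  int budget) {
        return libToConsumers.entrySet().stream()
                .filter(e -> e.getValue().size() > budget)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }
}
```

Run it in CI and the coupling stops being invisible: a library that trips the budget gets treated as a platform product with a conservative change policy, not as casually editable code.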
Architecture
A healthier target architecture keeps sharing narrow: small technical foundation libraries, registry-managed schemas for integration, and a local model inside each bounded context.
The point is not zero sharing. The point is selective sharing with the dependency direction under control.
Notice what is absent:
- no common Order, Invoice, or Customer package
- no mandatory Kafka wrapper that hides producer and consumer semantics
- no central “business model” package pretending all domains mean the same thing
This architecture aligns much better with domain-driven design.
A bounded context owns its model and language. Integration is translation between contexts, not type reuse across them. The anti-corruption layer is not some old DDD relic; it is one of the best defenses against semantic drift caused by shared libraries.
Domain semantics are where shared libraries do the most damage
Take OrderStatus. It sounds universal. It isn’t.
For sales, an order may be Pending, Confirmed, Cancelled.
For fulfillment, it may be Allocated, Picked, Packed, Shipped.
For finance, it may be Authorized, Captured, Refunded, ChargedBack.
When teams share one enum, they force a false consensus. The result is either endless expansion of the enum or awkward values nobody understands in their local context. Both are bad architecture wearing a neat type definition.
A service should model the domain language it needs to do its job. Integration should carry the relevant facts, not impose a universal ontology.
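In code, that means each context keeps its own enum (values taken from the lifecycles above) and integration translates facts rather than exporting a vocabulary. The cancellation rule below is an invented example of such a translation:

```java
// Each context keeps its own lifecycle vocabulary.
enum SalesStatus { PENDING, CONFIRMED, CANCELLED }
enum FulfillmentStatus { ALLOCATED, PICKED, PACKED, SHIPPED }

class StatusTranslation {
    // Integration carries the fact sales actually needs -- "is this order
    // still cancellable?" -- instead of a universal status enum.
    // The business rule here is illustrative.
    static boolean cancellableInSales(FulfillmentStatus f) {
        return f == FulfillmentStatus.ALLOCATED; // once picked, too late
    }
}
```

Note what sales never sees: PICKED, PACKED, SHIPPED. Those are fulfillment's words for fulfillment's job.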
Migration Strategy
The hard part is not agreeing with this in principle. The hard part is getting out once the shared library is everywhere.
The right migration is progressive, boring, and observable. This is classic strangler thinking applied to dependencies rather than endpoints.
Do not announce “we are deleting the common library” and trigger an enterprise panic. Instead, peel away one category at a time.
Step 1: Map the dependency fan-out graph
You need an inventory:
- which services depend on the library
- which modules they actually use
- which versions are in production
- which dependencies are transitive
- which parts are domain, infrastructure, and accidental convenience
A surprising amount of “critical shared code” turns out to be dead or trivial.
Step 2: Freeze semantic expansion
Before you migrate, stop making the problem worse. Put a rule in place: no new domain objects in shared libraries. Any new cross-service integration goes through schemas or APIs.
Step 3: Extract foundation from domain
If the shared package contains both telemetry setup and customer events, split it. Foundation code can remain shared if it is truly technical and stable. Domain code gets isolated and replaced with contracts.
Step 4: Introduce local models behind adapters
Service by service, replace direct use of shared domain classes with local representations. Add mappers at ingress and egress. This is tedious work, but it is straightforward and low drama if done incrementally.
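During this step the shared class is still on the wire, so the adapter's job is to stop it at the door. Everything below is a stand-in: the legacy class represents whatever your common library ships today, and the local model names are illustrative.

```java
// Stand-in for the class shipped by the shared library, still on the wire.
class LegacySharedCustomer {
    String id;
    String segment;   // marketing's field, irrelevant to billing
    String taxId;     // the one field billing actually needs
}

// Billing's local model plus an ingress adapter, so the shared type
// never spreads past the boundary.
record BillingCustomer(String id, String taxId) {}

class CustomerIngressAdapter {
    static BillingCustomer fromLegacy(LegacySharedCustomer legacy) {
        return new BillingCustomer(legacy.id, legacy.taxId);
    }
}
```

Once every internal call site uses BillingCustomer, swapping the ingress from the legacy class to a schema-mapped payload is a one-file change.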
Step 5: Move event sharing to schema governance
For Kafka, stand up schema versioning and compatibility checks. Producers publish schemas. Consumers validate and map. Avoid shipping producer-owned event classes as a library.
Step 6: Reconcile data and semantic divergence deliberately
This is the part many migration plans skip. Once services own local models, they will diverge. That is healthy, but you need reconciliation patterns where cross-domain consistency matters.
Examples:
- billing reconciling invoiceable orders with order events
- customer service reconciling profile changes with downstream subscription systems
- inventory reconciling reserved stock against fulfillment events after consumer lag or replay
Reconciliation is not a bug. In distributed architecture, it is a first-class capability.
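A reconciliation job can be structurally simple. This sketch compares source-of-truth facts against a local projection and returns the entries that need repair; the key-to-amount shape is illustrative, and "repair" in a real system means re-emitting events or patching the projection.

```java
import java.util.HashMap;
import java.util.Map;

// Nightly reconciliation: compare source-of-truth facts with a local
// projection and collect the keys that need repair.
class Reconciler {
    static Map<String, Long> drift(Map<String, Long> sourceOfTruth,
                                   Map<String, Long> projection) {
        Map<String, Long> repairs = new HashMap<>();
        sourceOfTruth.forEach((key, truth) -> {
            Long local = projection.get(key);
            if (local == null || !local.equals(truth)) {
                repairs.put(key, truth); // missed or stale: repair with truth
            }
        });
        return repairs;
    }
}
```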
Taken together, these steps form a repeatable migration pattern.
Progressive strangler in practice
If you have 40 services, do not migrate all 40. Start with one domain slice that has clear boundaries and active change demand. Active pain is your friend because it creates urgency and reveals where the coupling hurts.
A typical sequence:
- pick one producer and two consumers on Kafka
- replace shared event classes with registry-managed schemas
- let each consumer create its own local event mapping
- run dual-read or dual-deserialization for one release
- reconcile mismatches
- remove the shared event dependency
Repeat.
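The dual-deserialization step deserves a sketch, because it is what makes the cutover boring. For one release, decode each event both ways and compare, so any mismatch surfaces before the shared class is deleted. Both decoders below are stand-ins: the "shared class" path mimics what the old library produced, the "schema" path is the new mapping.

```java
import java.util.Map;

// Cutover safety net: decode each event via the old shared-class path and
// the new schema-mapping path, then compare the results.
class DualReadCheck {
    record LocalOrderEvent(String orderId, long totalMinor) {}

    // Old path: what the shared library's deserializer would have produced.
    static LocalOrderEvent viaSharedClass(Map<String, Object> raw) {
        return new LocalOrderEvent((String) raw.get("orderId"),
                ((Number) raw.get("totalMinor")).longValue());
    }

    // New path: registry-managed schema plus local mapping.
    static LocalOrderEvent viaSchemaMapping(Map<String, Object> raw) {
        return new LocalOrderEvent((String) raw.get("orderId"),
                ((Number) raw.get("totalMinor")).longValue());
    }

    static boolean matches(Map<String, Object> raw) {
        return viaSharedClass(raw).equals(viaSchemaMapping(raw));
    }
}
```

In production you would log and count mismatches rather than assert; a release with zero mismatches is your evidence that the shared dependency can go.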
That is how large estates change: one seam at a time.
Enterprise Example
Consider a large retailer running e-commerce, stores, loyalty, and finance systems. They had roughly 120 microservices, mostly Java, Kafka as the event backbone, and a widely celebrated enterprise-domain-sdk.
That SDK included:
- Customer, Order, Product, Payment entities
- Kafka serializers and topic conventions
- validation annotations
- common error types
- tracing bootstrap
- auth helpers
At first, it worked. Teams shipped faster because everyone used the same artifacts. Then the organization grew.
Loyalty wanted customer tier and household structure.
Finance needed invoice entity references and tax registrations.
Stores needed pickup preferences and point-of-sale attributes.
Digital commerce needed guest checkout semantics.
Every addition landed in the same Customer object. Fields became optional because no one agreed on invariants. Event payloads ballooned. Consumers built logic based on fields that other producers did not reliably populate. Upgrading the SDK became a quarterly coordination exercise involving more than 30 teams.
The ugly part arrived during a returns modernization program. The returns domain needed a new interpretation of order line lifecycle that conflicted with the commerce team’s shared OrderStatus enum. Instead of modeling a local returns lifecycle, they extended the enum. This broke analytics jobs, confused billing compensations, and forced downstream consumers to handle statuses they should never have seen.
The fix was not heroic. It was architectural hygiene.
The retailer split the SDK into:
- a tiny foundation package for telemetry and security bootstrap
- a Kafka platform SDK with limited helper abstractions
- external Avro schemas managed in a registry
- no shared domain model package
Returns created its own local ReturnAggregate and ReturnableLine concepts.
Commerce kept its own SalesOrder.
Billing mapped order events into InvoiceCandidate.
Loyalty mapped customer changes into MemberSnapshot.
They also introduced reconciliation jobs because once domains were independent, temporary drift became visible. For example, if a consumer missed events due to deployment lag, a nightly reconciliation compared source-of-truth order facts against local projections and repaired mismatches.
The result was not instant perfection. They had more mapping code. They had to improve schema governance. Some developers complained about “duplicate models.” But within two quarters:
- SDK upgrade coordination dropped sharply
- domain teams changed faster without central approval
- Kafka event evolution became safer
- incidents caused by transitive dependency conflicts decreased
- semantic arguments moved from package design to contract design, which is where they belonged
That is a trade worth making in almost every enterprise I’ve seen.
Operational Considerations
This architecture changes operations as much as design.
Versioning and compatibility
If you remove shared event code, you need real contract discipline:
- backward and forward compatibility rules
- consumer-driven contract checks where appropriate
- schema registry governance
- deprecation timelines
Without this, teams will simply recreate coupling through undocumented assumptions.
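One useful compatibility rule can be stated and checked mechanically: a new schema version may add optional fields, but must not drop existing fields or introduce new required ones. The sketch below encodes that rule over plain field-name sets standing in for real schema objects; a registry such as Confluent's enforces richer rules, but the shape of the check is the same.

```java
import java.util.Set;

// Minimal compatibility gate: no dropped fields, no new required fields.
// Field-name sets stand in for real schema objects.
class CompatCheck {
    static boolean backwardCompatible(Set<String> oldFields,
                                      Set<String> newRequired,
                                      Set<String> newAll) {
        // every old field must survive, and every required field in the
        // new version must already have existed in the old one
        return newAll.containsAll(oldFields)
                && oldFields.containsAll(newRequired);
    }
}
```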
Observability
One reason teams love shared libraries is built-in instrumentation. Keep that convenience for technical concerns. Foundation modules for tracing, logging correlation IDs, and metrics are perfectly reasonable if they remain narrow and do not drag in domain semantics.
Consumer lag and replay
In Kafka systems, local models plus event evolution means replay matters. Consumers must be able to deserialize older events or route them through translation logic. Reconciliation becomes the safety net for long-lived consumer groups, missed deployments, poison messages, and retroactive schema corrections.
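Routing old events through translation logic is often implemented as a chain of upcasters: each step lifts a payload one version forward until it reaches the current shape. The versions, field names, and defaults below are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Replay safety: upgrade old event versions step by step to the current
// shape instead of failing on them during replay.
class EventUpcaster {
    static Map<String, Object> toCurrent(int version, Map<String, Object> payload) {
        Map<String, Object> p = new HashMap<>(payload);
        if (version < 2) {
            // v2 added a currency field; default it for older events
            p.putIfAbsent("currency", "EUR");
        }
        if (version < 3) {
            // v3 renamed "amount" to "totalMinor"
            if (p.containsKey("amount")) p.put("totalMinor", p.remove("amount"));
        }
        return p;
    }
}
```

With this in place, a consumer group that replays two years of a topic sees one consistent shape, and the schema history is documented in code.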
Dependency management
Even technical shared libraries should be measured. Watch:
- fan-out count
- version fragmentation
- adoption lag
- breaking change frequency
- release rollback frequency
A library with high fan-out should be treated like a platform product. Stable interfaces. Conservative change policy. Real support.
Team topology
Shared libraries often reflect organizational shape. If one central team controls all integration semantics, your architecture will follow Conway’s Law straight into a bottleneck. Domain teams need ownership of their local models. Platform teams should provide capabilities, not dictate business meaning.
Tradeoffs
Let’s be honest. Avoiding shared domain libraries is not free.
You will write more translation code.
You will see similar-looking types in different services.
You will have to invest in schema governance.
You will occasionally explain to an angry developer why “just reuse the class” is the wrong move.
That is the price.
But the benefits are substantial:
- independent deployability
- bounded context integrity
- lower release coupling
- fewer forced upgrades
- clearer domain ownership
- safer event evolution
- better fit with Kafka and asynchronous integration
The deepest tradeoff is between local duplication and global coordination. In microservices, local duplication is usually cheaper. Not always prettier. Cheaper.
A little duplication is often the premium you pay to avoid enterprise-wide lockstep.
Failure Modes
This pattern has its own ways to go wrong.
1. Overcorrection into chaos
Teams hear “no shared libraries” and conclude every service should reinvent logging, auth, retries, and telemetry. That is nonsense. You still want a paved road for technical capabilities.
2. Contract theater
Organizations replace code sharing with schema sprawl but no governance. Now there are fifty event versions, no compatibility checks, and everyone blames Kafka. The problem is not Kafka. The problem is unmanaged contracts.
3. Mapping layer bloat
If teams create baroque anti-corruption layers for trivial integrations, they drown in ceremony. Keep mapping pragmatic. Not every DTO needs a philosophical treatise.
4. Hidden shared semantics in platform SDKs
Sometimes teams remove the common domain package and then smuggle the same assumptions into a Kafka SDK. The SDK starts carrying standard event envelopes, business error categories, canonical customer IDs, and required payload sections. Same coupling, different box.
5. No reconciliation plan
Once models diverge, eventual consistency becomes visible. If you have no replay strategy, no compensating actions, and no periodic reconciliation, drift turns into production incidents.
When Not To Use
There are situations where the anti-shared-library stance should be moderated.
Do not over-rotate if:
- you are building a modular monolith, not true microservices
- the services are owned by one small team with synchronized releases
- the domain is genuinely simple and stable
- the shared package is purely technical and low-volatility
- the cost of schema governance outweighs the benefit of decoupling
A startup with six services and one team does not need a grand anti-corruption strategy for every internal event. A tightly governed internal platform may reasonably publish a small SDK for security bootstrap and telemetry. The mistake is not sharing. The mistake is sharing volatile business meaning under the banner of reuse.
If your architecture depends on frequent, coordinated changes to many services, be honest and call it what it is. Maybe you want a modular monolith. There is no shame in that. The shame is pretending to have microservice autonomy while quietly rebuilding monolithic coupling through package managers.
Related Patterns
Several patterns sit close to this problem.
Bounded Context
The central DDD idea. Different parts of the business deserve different models, even if they use the same words.
Anti-Corruption Layer
A translation boundary that protects local models from external semantics. Very useful when retiring shared domain libraries.
Published Language
A shared contract for communication, often through schemas or APIs, without forcing a shared internal model.
Strangler Fig Pattern
Ideal for dependency migration. Replace usage incrementally rather than by declaration.
Backend for Frontend / API Composition
Helpful when teams are tempted to share domain response models for convenience. Better to compose at the edge than centralize semantics in a common package.
Event Carried State Transfer
Useful, but dangerous if implemented as shared producer-owned classes. Better with governed schemas and local mapping.
Data Reconciliation
Essential in event-driven systems where eventual consistency, replay, and missed updates are facts of life.
Summary
Shared libraries in microservices are dangerous not because reuse is bad, but because reuse often hides coupling until the estate is too large to ignore it.
The key architectural mistake is sharing domain semantics in code across bounded contexts. That creates a dependency fan-out graph that binds teams together through versions, transitive dependencies, release timing, and false agreement about meaning. It is one of the fastest ways to build a distributed monolith.
The better path is selective sharing:
- share narrow technical foundations
- publish integration contracts, not producer-owned classes
- keep local domain models inside bounded contexts
- translate at boundaries
- govern schemas seriously
- plan for reconciliation and replay
- migrate progressively using strangler techniques
The memorable line here is simple: in distributed systems, duplication is often cheaper than coordination.
That sounds wasteful until you’ve lived through the alternative: twenty teams waiting on a patch to a common customer jar so they can all pretend to be independent.
That is not architecture. That is synchronized suffering.
Frequently Asked Questions
What is the difference between choreography and orchestration in microservices?
Choreography has services react to events independently — no central coordinator. Orchestration uses a central workflow engine that calls services in sequence. Choreography scales better but is harder to debug; orchestration is easier to reason about but creates a central coupling point.