The Golden Source Is Usually Political


Most enterprise data arguments masquerade as technical design. They are not. They are jurisdiction fights with JSON attached.

Somewhere in every large organization, there is a slide that says “single source of truth.” It is usually delivered with confidence, as if truth were a database property rather than an organizational compromise. Then the meeting starts. Sales says the CRM owns customer data. Finance says billing owns the legal customer. Support says the service desk knows the real active account. Identity says only they know who can act on behalf of whom. Compliance quietly reminds everyone that “customer” means different things under different regulations. And just like that, the “golden source” stops being golden and starts looking like a border dispute.

This is why authority routing topology matters.

It is not just a data integration pattern. It is a way of admitting something many enterprises resist: authority is contextual. The system that is authoritative for one decision is often the wrong system for another. If you force one platform to own all meanings, you don’t get clarity. You get semantic vandalism.

A serious architecture has to begin there. Not with technology first, but with domain semantics, bounded contexts, and the uncomfortable fact that truth in enterprises is distributed. A customer can be authoritative in one bounded context as a legal billing entity, while another system is authoritative for service eligibility, and a third for communication preferences. The job of architecture is not to erase those distinctions. The job is to route authority to where it legitimately belongs and make the seams operational.

That is the heart of authority routing topology:

instead of pretending one system owns all truth, explicitly model which system is authoritative for which business decision, attribute, or lifecycle event—and route reads, writes, and reconciliation accordingly.

This pattern becomes especially valuable in event-driven organizations, Kafka-heavy estates, and microservices landscapes where the old comfort blanket of a central master data hub no longer fits. But it is also dangerous if applied lazily. Done well, it creates resilient, semantically honest architecture. Done poorly, it creates distributed ambiguity with better branding.

Let’s get into it.

Context

Large enterprises rarely start greenfield. They inherit.

They inherit CRMs bought by one division, ERP modules configured by another, customer masters that began life as spreadsheets, acquisition platforms, regional compliance variants, and “temporary” integration databases that have been running for eleven years. Then someone launches a transformation program and asks for a canonical golden record.

This is understandable. Executives want consistency. Product teams want simpler APIs. Analysts want fewer duplicates. Regulators want traceability. Architects want less point-to-point chaos. The instinct is rational.

The mistake is assuming that the path to consistency is centralized ownership of all business truth.

In domain-driven design, this is exactly where bounded contexts earn their keep. A bounded context is not an inconvenience to be normalized away. It is a statement that a concept carries different meanings in different business capabilities. “Customer” in billing is not “customer” in marketing. “Order” in fulfillment is not “order” in finance. “Account status” in identity is not “account status” in collections.

You can unify identifiers. You can map relationships. You can publish events. You can reconcile disagreements. What you should not do is flatten contextual meaning into one overloaded enterprise object and call that maturity.

Authority routing topology accepts that:

  • authority is partitioned
  • truth may be attribute-specific
  • consistency may be eventual
  • disputes are business process issues, not just data defects
  • reconciliation is a first-class architectural capability

This is especially relevant when Kafka and microservices enter the picture. Event streaming makes distribution easy. It does not make semantics easy. If five services consume a CustomerUpdated event but each interprets “customer” differently, you have built a semantic bomb with excellent throughput.

Problem

The enterprise anti-pattern usually looks like one of two things.

1. The fake golden source

One system is declared master because it is politically strong, not because it is semantically correct. The CRM becomes “customer master” because the sales organization funded the program. The ERP becomes “product master” because finance trusts it. A customer data platform is crowned sovereign because it has the nicest demo.

Then edge cases arrive. Legal name comes from ERP. Preferred name comes from CRM. Communication consent comes from marketing. Account closure comes from billing. Service entitlements come from provisioning. Nobody really owns “customer”; everyone owns part of it.

So teams start bypassing the master. They cache local copies. They build side tables. They add exception fields. They subscribe directly to source systems. In a year, the golden source is still central on diagrams and completely peripheral in reality.

2. The universal canonical model

This one is architecturally elegant for about three weeks.

A central integration model is designed to represent the enterprise truth of customer, account, order, product, party, relationship, consent, address, and every nuance across all domains. It becomes a sprawling compromise. Every new field creates governance debates. Every service maps poorly to it. Every business unit feels partially misrepresented. The model grows; trust shrinks.

In both cases, the real problem is the same:

the architecture treats authority as a global property when it is actually contextual.

And that causes practical harm:

  • wrong system approves changes
  • duplicate updates race
  • downstream systems become inconsistent
  • audit trails get muddy
  • data quality issues become endless “stewardship” queues
  • operational teams cannot explain why two systems disagree
  • migration programs stall because nobody can define the target truth

You cannot solve a jurisdiction problem with a bigger schema.

Forces

Authority routing topology exists because several forces pull in opposite directions.

Business semantics versus platform simplification

The business wants precision. Platforms want standardization. A central model reduces integration variety but often erases important distinctions. Domain-driven design says preserve the distinctions that matter. Integration teams often do the opposite because it looks cheaper up front.

It rarely is.

Local autonomy versus enterprise consistency

Microservices and federated teams want local control over their data and lifecycle. Risk, audit, and analytics want consistency across the estate. Both are right. The architecture has to support autonomous bounded contexts while still making enterprise-level queries, controls, and reporting possible.

Write authority versus read convenience

The system that should decide a value is not always the best place to read it from. A service may own legal customer status, while a read model or search index serves customer 360 queries. This is a healthy distinction. Enterprises often blur it and accidentally let read-optimized stores become write-authoritative.

That way lies chaos.

Latency versus correctness

Routing to the authoritative source on every request improves correctness but can hurt performance and resilience. Replicating authority data locally improves availability but introduces staleness. This is not a bug in the architecture. It is the architecture.

Transformation pressure versus operational reality

Migration programs want crisp end states. Operational estates are messy. During migration, multiple systems will hold overlapping truths. If the architecture does not explicitly handle split authority and reconciliation, the migration will become a trench war fought with CSV files and late-night calls.

Solution

Authority routing topology says: define authority explicitly, route decisions intentionally, and separate authoritative ownership from distribution and consumption.

At a practical level, this means you design around five ideas.

1. Authority is assigned by business decision, not by entity name

Do not say “System X owns Customer.” That sentence is usually wrong.

Say things like:

  • CRM owns prospect contact preferences
  • ERP owns bill-to legal entity and tax identifiers
  • IAM owns user credentials and delegated access rights
  • Provisioning owns active service entitlements
  • Billing owns financial delinquency status
  • Customer support owns case severity and resolution history

That is much more honest. And much more usable.
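To make this concrete, an authority map can be expressed as data keyed by business decision rather than by entity. This is an illustrative sketch; the decision names and system names below mirror the examples above and are not a standard vocabulary.

```python
# Illustrative authority map: keyed by business decision, not by entity.
# All decision and system names are hypothetical examples.
AUTHORITY_MAP = {
    "update_contact_preferences": "CRM",
    "update_tax_registration": "ERP",
    "suspend_user_login": "IAM",
    "terminate_service_line": "Provisioning",
    "set_delinquency_status": "Billing",
    "update_case_severity": "Support",
}

def authority_for(decision: str) -> str:
    """Return the system authorized to commit this business decision.

    Failing loudly on unknown decisions is deliberate: an unmapped
    decision is a governance gap, not something to route by guesswork.
    """
    try:
        return AUTHORITY_MAP[decision]
    except KeyError:
        raise LookupError(f"No authority defined for decision: {decision!r}")
```

The point of the sketch is the key: a decision, not an entity name. “Who owns Customer?” has no row in this table, and that is a feature.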

2. Authority can be attribute-scoped and lifecycle-scoped

Different stages of a lifecycle may shift authority. During onboarding, a sales workflow may capture customer details, but once the customer is activated, finance may become authoritative for billing identity and contractual status. During offboarding, collections may override certain status transitions.

This is common in real enterprises and often hidden under the phrase “complex business rules.” It is not complexity. It is domain reality.
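Lifecycle-scoped authority can be modeled the same way: the lookup key includes the lifecycle stage, so the same attribute can have different owners at different stages. The stages and owners below are illustrative assumptions, not a reference model.

```python
# Sketch of lifecycle-scoped authority: the same attribute has a different
# owner depending on lifecycle stage. Stage and system names are illustrative.
LIFECYCLE_AUTHORITY = {
    # (attribute, lifecycle_stage) -> authoritative system
    ("billing_identity", "onboarding"): "Sales",
    ("billing_identity", "active"): "Finance",
    ("account_status", "active"): "Finance",
    ("account_status", "offboarding"): "Collections",
}

def authority_for_attribute(attribute: str, stage: str) -> str:
    owner = LIFECYCLE_AUTHORITY.get((attribute, stage))
    if owner is None:
        raise LookupError(f"No authority for {attribute!r} in stage {stage!r}")
    return owner
```

A router consulting this table during offboarding would send an account status change to Collections, not Finance, without any special-case code in the callers.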

3. Route writes to authority; route reads by need

A command should generally be sent to the authority that can legitimately validate and commit the change. Reads are different. Read models, caches, Kafka-fed projections, search indices, and customer 360 stores can aggregate and serve data for convenience. But they are not sovereign unless explicitly designed to be.

4. Reconciliation is not an exception path

If multiple systems hold overlapping representations, disagreement is guaranteed. Reconciliation must be built as a normal operating capability:

  • conflict detection
  • precedence rules
  • human workflow for irreducible disputes
  • audit evidence
  • replay and correction
  • time-aware resolution

A topology without reconciliation is wishful thinking.

5. Publish events, but preserve semantic contracts

Kafka is useful here because it separates ownership from distribution. An authoritative service emits domain events; consumers build projections or react to changes. But events must be tied to bounded context semantics. Avoid generic enterprise events that pretend every consumer shares the same language.

BillingCustomerStatusChanged is often better than CustomerUpdated.
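A context-specific event might look like the following sketch. The field names are assumptions for illustration; what matters is that the event names its bounded context, carries a business key rather than a technical surrogate, and separates business-effective time from processing time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class BillingCustomerStatusChanged:
    """Context-specific domain event: 'customer' here means the billing
    customer, nothing broader. Field names are illustrative."""
    billing_account_id: str   # business key, not a technical surrogate
    new_status: str           # e.g. "delinquent", "closed"
    effective_at: datetime    # when the change is legally effective
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    schema_version: int = 1   # explicit versioning for schema evolution
```

Consumers in other contexts translate this event into their own language at the boundary, instead of pretending a shared “CustomerUpdated” means the same thing to everyone.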

Here is the core shape.


This is not “one source of truth.” It is many sources of authority, with controlled distribution.

Architecture

A sound authority routing topology usually has six architectural elements.

1. Bounded contexts with explicit authority maps

Start with domain-driven design, not plumbing. Identify bounded contexts and define:

  • business capabilities
  • core aggregates
  • authoritative decisions
  • key identifiers
  • upstream/downstream dependencies
  • allowed mutations

Then create an authority matrix. Not glamorous, but indispensable.

Example rows:

  • update tax registration → authority: ERP → consumers: billing, customer 360 read model
  • update communication preferences → authority: CRM → consumers: marketing, support
  • suspend user login → authority: IAM → consumers: all customer-facing channels

This matrix is the constitution. If you skip it, your architecture will be governed by Slack threads.

2. An authority router or policy layer

You need a mechanism that can decide where a command goes based on business semantics, not hardcoded application folklore. This can live in:

  • an API gateway with policy logic
  • an orchestration service
  • a workflow engine
  • a domain facade
  • a command gateway

The point is not a specific product. The point is centralizing routing policy enough that teams can reason about it.

For example:

  • update communication preferences → CRM
  • update tax registration → ERP
  • suspend user login → IAM
  • terminate service line → Provisioning, with downstream billing event

Sometimes a single command touches multiple bounded contexts. That is not a cue to create a giant distributed transaction. It is a cue to model a process.
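A minimal command gateway can be sketched as a registry of handlers keyed by command type. This assumes each authority exposes a handler callable; the handlers below are stubs for illustration, not real system integrations.

```python
# Minimal policy-driven command gateway. Assumes each authority exposes a
# handler callable; the registered handlers below are illustrative stubs.
from typing import Callable, Dict

class CommandGateway:
    def __init__(self) -> None:
        self._routes: Dict[str, Callable[[dict], str]] = {}

    def register(self, command_type: str,
                 handler: Callable[[dict], str]) -> None:
        self._routes[command_type] = handler

    def dispatch(self, command_type: str, payload: dict) -> str:
        handler = self._routes.get(command_type)
        if handler is None:
            # Unmapped commands are rejected loudly instead of guessed at.
            raise LookupError(f"No authority registered for {command_type!r}")
        return handler(payload)

gateway = CommandGateway()
gateway.register("update_tax_registration",
                 lambda p: f"ERP accepted {p['tax_id']}")
gateway.register("suspend_user_login",
                 lambda p: f"IAM suspended {p['user_id']}")
```

The value is not the twenty lines of code; it is that routing policy lives in one reviewable place instead of being scattered across callers as application folklore.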

3. Event backbone for propagation

Kafka fits naturally because it supports fan-out, replay, decoupling, and independent projections. But use it with discipline.

Good practice:

  • publish context-specific domain events
  • maintain schema evolution rigor
  • include business keys and event time
  • preserve causality where needed
  • support replay into projections

Bad practice:

  • one giant enterprise topic for “customer”
  • indiscriminate event-carried state transfer
  • consumers inferring authority from whichever event arrived last

Kafka is a transport and history mechanism. It is not a semantic referee.
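The good-practice list above can be sketched at the message level: context-scoped topic names and a keyed envelope that carries event type, event time, and schema version. This builds only the message shape; actual producer wiring is omitted, and the naming convention is an assumption, not a Kafka standard.

```python
# Sketch: context-specific topic naming and a keyed event envelope.
# The "context.event-type" convention is illustrative, not a Kafka standard.
import json
from datetime import datetime, timezone

def topic_for(context: str, event_type: str) -> str:
    # e.g. "billing.customer-status-changed" rather than one giant
    # enterprise-wide "customer" topic.
    return f"{context}.{event_type}"

def envelope(business_key: str, event_type: str, payload: dict,
             schema_version: int = 1) -> tuple[str, str]:
    """Return (key, value) suitable for a keyed publish.

    Keying by business key keeps all events for one entity in order
    within a partition; event_time records when it happened.
    """
    value = json.dumps({
        "event_type": event_type,
        "schema_version": schema_version,
        "event_time": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })
    return business_key, value
```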

4. Read models and materialized views

Consumers often need consolidated views: customer 360, operational dashboards, fraud screening, support workbenches. Build these as read models fed by authoritative events. Make their staleness and provenance visible.

A useful read model should answer:

  • current value
  • source of authority
  • last update time
  • confidence/reconciliation state
  • relevant business identifiers

The most valuable thing a customer 360 screen can do is not just show data. It can show which system gets to be believed.
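One way to make those answers structural is to store each read-model attribute with its provenance attached, rather than as a bare value. The field names below are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProvenancedValue:
    """One attribute in a read model, carrying its provenance.
    Field names are illustrative."""
    value: object
    source_authority: str      # which system gets to be believed
    last_updated: datetime     # when the authority last confirmed it
    reconciliation_state: str  # e.g. "confirmed", "disputed", "stale"
    business_key: str          # identifier in the owning context
```

A support screen rendering this record can show not just “suspended” but “suspended, per Billing, as of 14:02, undisputed”, which is the difference between a dashboard and an argument.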

5. Reconciliation service or workflow

Sooner or later systems will disagree because:

  • events arrive out of order
  • upstream systems are manually corrected
  • backdated changes occur
  • bulk migration loads violate precedence
  • external parties send contradictory data

You need a reconciliation capability that can:

  • detect divergence
  • apply precedence and survivorship rules
  • trigger compensating actions
  • open human review cases
  • retain evidence for audit

This service should treat time seriously. “Latest wins” is often a terrible rule in enterprise domains. A delayed old event should not overwrite a legally effective newer state simply because it arrived later.
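The difference between arrival order and business-effective order can be shown in a few lines. This is a deliberately simplified sketch with illustrative record shapes: precedence is decided by effective time, so a delayed old event loses even though it arrived last.

```python
# Time-aware precedence, sketched: a late-arriving old event must not
# overwrite a legally effective newer state. Record shapes are illustrative.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StatusEvent:
    status: str
    effective_at: datetime  # when the change is legally effective
    received_at: datetime   # when the event actually arrived

def resolve(current: StatusEvent, incoming: StatusEvent) -> StatusEvent:
    # Precedence by business-effective time, not by arrival order.
    if incoming.effective_at > current.effective_at:
        return incoming
    return current
```

With last-writer-wins, the late event would silently win because it arrived last; with effective-time precedence, the legally newer state survives.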

6. Observability with semantic diagnostics

Technical monitoring is not enough. You need semantic observability:

  • authority decision logs
  • command routing traces
  • reconciliation backlog metrics
  • divergence rates by attribute
  • stale read model age
  • policy override frequency

If you cannot explain why System A says “active” and System B says “suspended,” you do not have architecture. You have theater.

Here is a more detailed topology.


Migration Strategy

This pattern shines during migration precisely because migrations are the season when “golden source” politics become unbearable.

You are moving from legacy overlap to explicit authority. That cannot be done with a big-bang switch unless the estate is tiny or the risk appetite is reckless.

Use a progressive strangler migration.

Phase 1: Discover actual authority

Do not start from system inventories. Start from business decisions:

  • Who can legally create a customer?
  • Who can approve a status change?
  • Which status matters for billing?
  • Which attributes are regulated?
  • Which data is only advisory?

Map actual operational authority, not official PowerPoint authority. They are often different.

Phase 2: Introduce an authority map and passive read model

Without changing existing writes, build a customer/account read model from existing sources. Expose provenance and conflict states. This creates visibility and teaches the organization that disagreement exists.

This phase is politically useful. It turns vague claims into observable facts.

Phase 3: Route new writes through a command gateway

Insert a policy-driven command path for selected use cases. At first, the gateway may simply proxy to existing systems. That is fine. What matters is that routing logic becomes explicit and traceable.

Phase 4: Carve off bounded capabilities

Move one authority area at a time:

  • communication preferences
  • identity and access
  • service entitlement
  • billing status
  • onboarding decisions

Each carved capability emits events. Downstream systems subscribe or projections update. Legacy systems are gradually demoted from authority to consumer.

Phase 5: Add reconciliation and drift controls

As split authority increases during migration, divergence risk rises. Introduce automated drift detection and correction workflows before conflicts become operational folklore.

Phase 6: Retire or narrow legacy authority

Do not wait for total replacement. Often the right end state is not deleting the legacy platform but shrinking its authority scope. A system can remain operationally useful while no longer acting as enterprise authority.

That is often the most realistic enterprise outcome.

Here is the migration picture.


Migration reasoning that matters

A few hard-earned rules:

  • Migrate by authority boundary, not by application boundary. Replacing a whole system at once is often too blunt.
  • Do not centralize uncertainty. If semantics are unclear, don’t build a big master. Build visibility and routing.
  • Shadow before cutover. Compare new authority outcomes against legacy behavior.
  • Use replayable events. Migration mistakes are inevitable; replay is your friend.
  • Keep humans in the loop for disputed records. Automation is excellent until it starts making legally significant mistakes at scale.
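The “shadow before cutover” rule can be sketched as a comparison harness: the new authority runs in parallel, its outcome is recorded but never applied, and divergences become data instead of incidents. The function shape here is a hypothetical illustration.

```python
# "Shadow before cutover", sketched: run the new authority in parallel and
# compare outcomes against legacy behavior without acting on the new result.
from typing import Callable

def shadow_compare(command: dict,
                   legacy_handler: Callable[[dict], object],
                   new_handler: Callable[[dict], object]) -> dict:
    legacy_result = legacy_handler(command)  # still the system of action
    new_result = new_handler(command)        # observed only, never applied
    return {
        "command": command,
        "legacy": legacy_result,
        "new": new_result,
        "match": legacy_result == new_result,
    }
```

Aggregating the `match` field over a few weeks of real traffic gives a cutover decision grounded in evidence rather than optimism.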

Enterprise Example

Consider a multinational telecom.

This company has:

  • Salesforce for sales and account teams
  • a legacy billing platform for invoicing and credit management
  • an identity platform for customer login and delegated access
  • a provisioning platform for mobile, broadband, and TV service activation
  • a support platform with service history and complaints
  • Kafka as an event backbone introduced during a microservices modernization program

Leadership initially declared Salesforce the golden customer source. It was the most visible platform and had executive sponsorship. It failed almost immediately.

Why? Because “customer” was four different things:

  • a lead or contact in sales
  • a legally billable party in finance
  • a subscriber entitlement holder in provisioning
  • an authenticated digital identity in IAM

A support agent viewing “customer status” needed to know:

  • can this person log in?
  • are they on credit hold?
  • is the mobile line active?
  • do we have consent to message them?
  • are they authorized to manage the family account?

No single source owned all of that. Pretending otherwise caused support errors, activation delays, and compliance issues around consent.

The revised architecture used authority routing topology.

Authority decisions

  • Salesforce owned lead/contact profile and marketing preferences
  • Billing owned legal account identity, payment delinquency, and tax status
  • IAM owned authentication credentials and delegated household access
  • Provisioning owned active service status and entitlements
  • Support owned case metadata, but not customer master changes

Implementation shape

  • an API command gateway routed updates by policy
  • context-specific Kafka topics published domain events
  • a customer 360 read model consolidated data for support and digital channels
  • a reconciliation service flagged disputes, especially around household relationships and service eligibility
  • old nightly ETL jobs remained for some downstream reporting during transition, but operational reads moved to event-fed views

Results

  • support call handling improved because agents saw provenance with the value
  • billing disputes dropped because legal account changes could no longer be “fixed” in CRM alone
  • digital account takeover risk decreased because delegated access was only writable in IAM
  • migration from the legacy customer table became incremental instead of all-or-nothing

What was politically difficult

The hardest part was not Kafka. It was telling senior stakeholders that Salesforce was not the golden source. It was a source of authority for some things. Once that language changed, the architecture became tractable.

This is common in enterprises. The data fight is usually a proxy for a power fight.

Operational Considerations

Authority routing topology is not self-running.

Data lineage and audit

For regulated domains, every critical value should carry:

  • source authority
  • effective timestamp
  • ingestion timestamp
  • correlation or causation ID
  • version
  • override/reconciliation reason

Without lineage, reconciliation becomes guesswork.

Schema governance

Kafka topics need disciplined contracts. Versioning and compatibility matter. Domain events should evolve intentionally, with context ownership clear. Shared enterprise schemas become a dumping ground astonishingly fast.

Backfill and replay

You will need to rebuild read models, correct bad transformations, and replay missed events. Design for replay from day one:

  • immutable event logs where possible
  • idempotent consumers
  • deterministic projections
  • checkpointing and backpressure handling
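An idempotent, deterministic projection is the combination that makes replay safe: the same event log always produces the same state, and duplicates delivered during replay are ignored. A minimal sketch, assuming each event carries a unique `event_id`:

```python
# Idempotent, deterministic projection: replaying the same event log always
# yields the same state, and duplicate deliveries are skipped. Event shape
# (event_id, key, value) is an illustrative assumption.
def project(events: list[dict]) -> dict:
    state: dict = {}
    seen: set = set()
    for event in events:
        event_id = event["event_id"]
        if event_id in seen:  # idempotence: duplicates on replay are no-ops
            continue
        seen.add(event_id)
        state[event["key"]] = event["value"]
    return state
```

Because the projection is a pure function of the log, rebuilding a corrupted read model is just `project(replayed_events)` rather than a forensic exercise.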

Staleness budgets

Not every use case needs fresh authoritative reads. Define staleness budgets:

  • support dashboard: maybe 30 seconds
  • fraud decisioning: maybe 2 seconds
  • tax/legal status check: perhaps direct synchronous read required
  • executive reporting: maybe hours

This stops teams from overengineering real-time where it does not matter and underengineering where it does.
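Staleness budgets work best written down as data, so a caller can check whether a read model is fresh enough or must fall back to a synchronous authoritative read. The budgets below echo the examples above and are illustrative, not recommendations.

```python
# Staleness budgets as data: per use case, the maximum acceptable read-model
# age before falling back to a synchronous authoritative read. Values are
# illustrative examples, not recommendations.
from datetime import timedelta

STALENESS_BUDGETS = {
    "support_dashboard": timedelta(seconds=30),
    "fraud_decisioning": timedelta(seconds=2),
    "tax_legal_check": timedelta(0),  # always read the authority directly
    "executive_reporting": timedelta(hours=4),
}

def within_budget(use_case: str, read_model_age: timedelta) -> bool:
    """True if a read model of this age is acceptable for the use case."""
    return read_model_age <= STALENESS_BUDGETS[use_case]
```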

Ownership model

Someone must own:

  • authority policy
  • reconciliation rules
  • identifier mapping
  • event contracts
  • exception workflow

This is usually cross-domain governance with real decision rights, not a committee that only writes standards nobody follows.

Tradeoffs

Every serious pattern earns its keep by making tradeoffs explicit.

Benefits

  • semantically honest architecture
  • better fit with bounded contexts and microservices
  • reduced false centralization
  • safer progressive migration
  • clearer auditability of who owns what
  • more resilient read-side composition via Kafka and projections

Costs

  • more moving parts
  • policy complexity
  • reconciliation overhead
  • steeper operational burden
  • harder onboarding for teams used to “just update the master”
  • cultural resistance because authority becomes visible

This is not simpler than declaring a master. It is more truthful. In enterprise architecture, truth often costs extra.

Failure Modes

There are several ways to get this wrong.

1. Attribute authority without domain clarity

If the authority map is created as a technical spreadsheet divorced from business semantics, it rots immediately. Teams will interpret attributes differently and route incorrectly.

2. Last-writer-wins everywhere

This is the distributed systems equivalent of giving up. It destroys meaningful business precedence, especially when delayed events or backdated corrections are common.

3. Read models treated as masters

A customer 360 platform becomes writable “just for convenience,” and soon it is the hidden master with weak validation and no bounded context discipline.

4. Reconciliation deferred

Many teams promise to “add data quality later.” They never do. Then disputes pile up and operations teams resolve them manually in whichever screen is easiest. Congratulations, you have invented human-based eventual consistency.

5. Overly centralized routing

If every change must go through a heavyweight central architecture team, delivery slows to a crawl. The authority policy must be governable, but not bureaucratically frozen.

6. Kafka topic sprawl without semantics

Hundreds of topics, no clear ownership, overlapping payloads, and consumers choosing whichever event suits them. The event backbone becomes a gossip network.

When Not To Use

This pattern is not universally appropriate.

Do not use authority routing topology when:

The domain is genuinely simple

If one application truly owns a concept end to end and that is unlikely to change, adding routing and reconciliation is needless machinery.

Your organization cannot sustain semantic governance

If nobody can define bounded contexts, authority rules, or ownership, this pattern will degenerate into distributed confusion.

Latency requirements require single-hop transactional decisions

Some operational use cases need immediate, strongly consistent writes against one authority with minimal indirection. Use a simpler model there.

The estate is too small to justify it

A mid-sized business with one ERP, one CRM, and limited overlap may be better served by conventional master data management or even straightforward integration.

Teams are using it to avoid hard domain decisions

Authority routing is not a license to leave semantics unresolved forever. If every attribute is “shared” and every decision “depends,” you do not have a nuanced architecture. You have organizational indecision in technical clothing.

Related Patterns

Authority routing topology sits near several adjacent patterns.

Master Data Management

MDM aims to create governed enterprise entities and survivorship rules. It can complement authority routing, but authority routing is more explicit about contextual ownership and write routing.

System of Record / System of Engagement

Useful distinction, but often too coarse. Authority routing sharpens it by making ownership decision-specific.

Strangler Fig Pattern

Highly relevant for migration. Replace authority capability by capability, not system by system.

CQRS

Very compatible. Commands route to authority; queries hit projections or read models.

Event Sourcing

Helpful in some domains, especially for audit and replay, but not required. Do not confuse event sourcing with event streaming. Many enterprises do.

Data Mesh

Shares the idea of domain ownership, but authority routing is more operationally focused on decision rights and state propagation.

Summary

The golden source is usually political because enterprises use the phrase to settle arguments they have not actually resolved. It sounds precise, but it often hides semantic disagreement, overlapping responsibilities, and organizational power.

Authority routing topology is a more honest answer.

It begins with domain-driven design and bounded contexts. It assigns authority by business decision and attribute, not by inflated entity labels. It routes writes to the legitimate authority. It lets reads be composed through Kafka-fed projections and read models. It treats reconciliation as normal, not exceptional. And it supports progressive strangler migration in the real world, where old and new systems overlap for longer than anyone admits.

This pattern is not elegant in the glossy sense. It is elegant in the enterprise sense: it survives contact with reality.

And reality, in large organizations, is this:

truth is distributed, semantics are contextual, and data ownership is never just about data.

If you design with that in mind, your architecture has a fighting chance. If you don’t, your “golden source” will become what most of them become in the end—just another database everyone claims to trust and quietly works around.
