How to Run an Architecture Review in Sparx EA: Evidence


Let’s start with the uncomfortable truth.

Most architecture reviews are not really reviews.

They’re diagram walkthroughs. Or governance theatre. Or a late-stage checkpoint held after delivery has already committed to a design and bought half the software. Sometimes they turn into subjective debates where the first confident person in the room sets the tone, and everyone else spends the next hour circling that opinion.

I’ve watched this happen for years in telecom. The labels change — design authority, architecture board, solution governance, technical review forum — but the failure pattern is remarkably consistent. Someone arrives with a slide deck full of application diagrams, maybe a sequence flow or two, perhaps a cloud deployment picture covered in icons. The board asks a few predictable questions. Risks get discussed in broad terms. Standards are quoted from memory. Then a decision gets made that is only loosely tied to evidence.

That is not a review. It’s a conversation with diagrams.

A proper architecture review is a decision event. Its purpose is to answer a defined question, using explicit evidence, with traceability back to requirements, principles, constraints, strategic direction, and risk acceptance. If you can’t point to why a decision was taken, which trade-offs were accepted, and what conditions were attached, then the governance memory of the enterprise is effectively gone.

In telecom, that matters even more. Review a new 5G order capture and service activation solution and you are immediately dealing with CRM, order management, mediation, billing, inventory, service orchestration, partner APIs, IAM, observability tooling, and usually some awkward transitional integration nobody actually wanted. Complexity at that scale does not reward polished presentations. It rewards evidence.

That’s why, in Sparx EA, I think the review should revolve around an evidence package.

Not a deck. Not a dump of diagrams. A curated package of linked architecture artifacts that helps reviewers reach a decision quickly, and defend it later.

Start from the end: what decision is the review supposed to make?

This sounds obvious. Teams skip it all the time.

Before you open Sparx EA, before you generate a document, before anyone starts arguing about which ArchiMate view to use, define the decision. What exactly is the board being asked to decide?

Usually it falls into one of a few patterns:

  • approve
  • approve with conditions
  • send back for redesign
  • escalate because there is an unresolved strategic conflict

That matters because the evidence package should change depending on the decision type. A standards exception review is not the same thing as a domain architecture conformance review. A production-readiness architecture review before a 5G provisioning launch needs hard non-functional evidence, support ownership, resilience design, and operational controls. A checkpoint for a major investment wave needs strategic fit, dependency concentration, and retirement logic for transitional architecture.

I usually frame reviews with four blunt questions:

  • What is being approved?
  • What risks are acceptable?
  • Which principles are non-negotiable?
  • What trade-offs are still open?

If those are still fuzzy, the review is probably too early or badly framed.

Take a common telecom example: a team proposes a new customer 360 cache to support digital channels because the existing CRM and billing interactions are too slow for mobile app journeys. The review question is not “is this a good-looking design?” It is whether introducing another data layer creates unacceptable duplication, latency, privacy exposure, consent management complexity, and operational overhead. The decision is architectural, not aesthetic.

I’ve sat through reviews where that distinction never got made. Predictably, the conversation drifted into component naming, API style, and whether Redis was “strategic,” while the real issue — duplicating customer profile data in a way that could break privacy controls and identity consistency — stayed oddly under-examined.

What an evidence package actually is in Sparx EA

An evidence package is not a slide deck generated from EA.

It is not a random export of the package browser. It is not “the repository,” and it is definitely not one giant document containing every diagram anyone has touched in the last six months.

In practical terms, an evidence package is a review-ready bundle of linked EA artifacts that answers six things:

  • what is changing
  • why it is changing
  • what principles and standards apply
  • what dependencies are impacted
  • what risks and decisions exist
  • what alternatives were considered

That bundle will usually include a scoped model package, selected diagrams, key element relationships, requirements, constraints, decisions, risks or issues, standards references, matrix or impact views, and then some generated output for reviewers — maybe a document, maybe HTML publishing, maybe a web view depending on how your EA setup works.

The important distinction is this: the package is not the repository. It is a curated argument assembled from the repository.
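One way to keep that curation honest is to make the six questions mechanically checkable before the package goes out. Here is a minimal sketch in plain Python — the section keys and labels are hypothetical stand-ins, not anything EA-specific:

```python
# Hypothetical completeness check for an evidence package.
# The six keys mirror the questions above; the labels are illustrative.

REQUIRED_EVIDENCE = {
    "what_is_changing": "Change footprint",
    "why_it_is_changing": "Decision statement and drivers",
    "principles_and_standards": "Principle conformance mapping",
    "dependencies_impacted": "Impact and traceability views",
    "risks_and_decisions": "Risk register linked to elements",
    "alternatives_considered": "Alternatives with rejection rationale",
}

def missing_evidence(package_sections: set[str]) -> list[str]:
    """Return the human-readable names of evidence the package lacks."""
    return [label for key, label in REQUIRED_EVIDENCE.items()
            if key not in package_sections]

# A package that skipped risk linkage and alternatives is not review-ready:
draft = {"what_is_changing", "why_it_is_changing",
         "principles_and_standards", "dependencies_impacted"}
print(missing_evidence(draft))
# → ['Risk register linked to elements', 'Alternatives with rejection rationale']
```

The point is not the code; it is that "review-ready" becomes a testable property rather than a feeling.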

That curation is where most teams struggle.

They usually have plenty of content. Too much, if anything. But they haven’t modeled the evidence needed for a decision, so when review time arrives they fall back to PowerPoint and verbal explanation. The model becomes illustration, not proof.

The mistake I see constantly: teams model the solution, but not the evidence

This is one of those patterns that becomes obvious after enough painful review boards.

Teams are often quite good at modeling the proposed solution. They produce clean application cooperation diagrams, tidy deployment views, some process flows, maybe a cloud topology with Kafka clusters and API gateways and IAM integration shown in all the right places.

And the review still fails.

Why? Because the evidence is missing.

I keep seeing the same things:

Beautiful ArchiMate views with no linked requirements.

Interfaces modeled without ownership, lifecycle status, or criticality.

Principles living in PDFs on SharePoint, completely disconnected from the review package.

Risks tracked in Jira only, so reviewers can’t see the architecture rationale in context.

Alternatives discussed in workshops but never captured.

Review outcomes not written back into EA, which means the same arguments reappear every time a dependent project comes along.

This causes real damage. Not theoretical damage. Operational drag.

You get the same debates in every governance board because nothing is anchored. You get no audit trail. You get no reusable architecture knowledge. And, eventually, someone asks for “one more deck” because the repository still doesn’t tell the decision story clearly enough.

I’ve seen network API integration proposals modeled beautifully, right down to interface endpoints, without a shred of non-functional evidence on latency. In telecom, that is not a small omission. If your network exposure API sits on a near-real-time activation path, latency budgets matter. A lot.

I’ve also seen BSS-to-OSS orchestration decisions made without proper service inventory dependency analysis, which later caused some ugly fallout handling when customer orders partially completed and nobody could reliably reconcile state across order management, billing, and activation.

And I’ve seen event streaming adoption approved because “Kafka is strategic,” while operational ownership for schema governance, topic lifecycle, and replay handling was completely absent. The architecture looked modern. The evidence underneath it was thin.

What an evidence package should contain for a telecom review

Not every review needs the same depth. That matters. A minor mediation mapping change should not require a 40-page evidence pack and three domain architect workshops. Proportionality matters more than people admit.

Still, there is a core set of artifacts I would expect in most telecom architecture reviews.

Review scope statement.

This sounds dull, but it stops a lot of nonsense. Define systems, domains, business capability impact, and release boundary. If you are reviewing only the digital order journey and not downstream billing changes, say so. If network activation is in scope but field workforce systems are not, make that explicit.

Decision statement.

Put this in writing. Exactly what is the board being asked to decide? “Approve use of a transitional decomposition service for converged bundle orders through Release 3, subject to retirement conditions” is useful. “Review target architecture” is not.

Architecture context diagram.

Reviewers need orientation before detail. Show upstream and downstream systems, partner channels, network domains, trust boundaries, and major integration paths. In telecom especially, context beats detail at the start.

Change footprint.

Which applications, interfaces, data objects, and technology nodes are changing? Which are retained? Which are retired? If the answer is “everything touches everything,” the package still isn’t curated.
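To keep a footprint honest, I sometimes tabulate it straight from a model export. A hedged sketch — the element names and the `change_type` field are hypothetical; in Sparx EA this metadata would typically live in tagged values:

```python
# Illustrative change-footprint summary over a hypothetical element export.

elements = [
    {"name": "CRM", "change_type": "retained"},
    {"name": "Order Management", "change_type": "changed"},
    {"name": "Decomposition Service", "change_type": "new"},
    {"name": "Legacy Mediation Bridge", "change_type": "retired"},
]

def footprint(elems):
    """Group element names by change type, surfacing anything untagged."""
    groups = {}
    for e in elems:
        groups.setdefault(e.get("change_type", "unclassified"), []).append(e["name"])
    return groups

print(footprint(elements))
```

If the "unclassified" bucket is the biggest one, or every single element lands in "changed", that is usually the curation problem showing itself in data.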

Requirements and quality attributes.

Throughput. Resilience. Latency. Security. Data retention. Regulatory controls. IAM dependencies. Auditability. Operational recovery expectations. In telecom, non-functional requirements are often where the real architecture sits.

Principle conformance.

I like this section when it is honest and short. Reuse before buy/build. Canonical APIs. Customer data minimization. Observability by design. Cloud guardrails. Zero trust integration patterns. State the principle, show conformance or deviation, and link it to actual architecture elements.

Risk register tied to model elements.

Not a separate spreadsheet if you can avoid it. If a risk relates to a Kafka bridge, API gateway cluster, IAM federation dependency, or replicated customer cache, link it to those elements. Context matters.

Alternatives considered.

This is often the strongest part of the package if it’s done properly. Especially in integration choices, data replication, eventing patterns, and platform selection. Reviewers need to know not just what was chosen, but what was rejected and why.

Transition-state view.

Essential in telecom. Legacy and target coexist for years. Sometimes far longer than anyone admits in the review. A target-only diagram is usually fiction.

Review recommendation.

State the architect’s recommendation and conditions. Don’t make the board reverse-engineer your opinion from thirty pages of diagrams.

I’m quite opinionated about this: if the package doesn’t contain an explicit recommendation, it isn’t finished. Architects are there to advise, not just to present options indefinitely.

How to structure this in Sparx EA without creating a junk drawer

Sparx EA can turn into a junk drawer very quickly. That is not really the tool’s fault. It’s what happens when every project creates review content ad hoc and nobody separates reusable architecture knowledge from review-specific curation.

I recommend separating reusable baseline architecture content from review packages. The review package should reference baseline artifacts where possible, not duplicate them.

A practical package structure looks something like this:

  • 00 Review Charter
  • 01 Scope and Context
  • 02 Current and Target Views
  • 03 Requirements and Constraints
  • 04 Decisions and Alternatives
  • 05 Risks and Issues
  • 06 Impact and Traceability
  • 07 Review Outcome

Simple. Slightly boring. Very effective.

Where teams go wrong is pretty predictable. They copy-paste diagrams into a temporary folder. They create project-specific packages that can’t be reused. Review comments live only in meeting minutes or email chains outside EA. Baselines aren’t taken. Traceability is assumed rather than demonstrated.

A few Sparx EA tactics help a lot:

  • use tagged values for review status, criticality, lifecycle state, and change type
  • use linked documents or element discussions where they genuinely add context
  • baseline the package before review so you can see what was actually assessed
  • use relationship matrices to prove dependency and standards coverage
  • keep review stereotypes or element types for decision records, exceptions, and conditions

I’d also say this, based on experience: assign one architect to curate the package. One. Maybe with domain contributors around them. But one person needs to own coherence. Ten people editing in parallel usually gives you a repository-shaped argument rather than an actual one.

A concrete example: reviewing a new telecom order decomposition service

Here’s one that feels painfully familiar.

A telecom provider is launching converged fiber plus mobile bundles. Commercial pressure is high. The existing order management platform is fine for single-product orders but struggles to decompose multi-product bundles fast enough and cleanly enough. The delivery team proposes a new decomposition microservice between CRM and orchestration.

On paper, it sounds reasonable.

In practice, it triggers exactly the right architecture questions.

Are we solving a local performance problem by bypassing the strategic order platform? What happens to product model consistency? Where does fallout handling sit when a fiber activation succeeds but a mobile SIM provision fails? Who owns decomposition rules? How do billing and service order states stay aligned? What does this do to long-term architecture debt?

A decent evidence package in Sparx EA for this review would include:

A capability map showing impacts across product management, order capture, order decomposition, service activation, and billing assurance.

An application cooperation view showing CRM, product catalog, order management, decomposition service, orchestration, billing, and support tooling. This is also where I’d show Kafka if eventing is proposed for downstream notifications, plus IAM trust relationships if the service introduces another machine identity boundary.

A sequence or process view for bundle order flow. Not a huge BPMN opera. Just enough to show where decomposition happens, where state transitions occur, and where fallout is detected.

Data traceability for product offering, customer order, service order, and billing event. This matters because decomposition logic often creates accidental semantic drift between business and operational order objects.

Risks tied to model elements, such as:

  • duplicate decomposition logic between order platform and new service
  • operational support ambiguity for failed decomposition or replay
  • data reconciliation issues across CRM, billing, and orchestration
  • temporary service becoming permanent
  • observability gaps if event-driven interactions are introduced without end-to-end tracing

Standards mapping:

  • API standard compliance for synchronous interfaces
  • eventing standard for Kafka topics, schema versioning, and replay behavior
  • observability requirements for distributed tracing and alert ownership
  • IAM standards for service-to-service authentication and secrets rotation

And then alternatives.

This is where the board starts to trust the package or not.

For this scenario, the alternatives might be:

  1. extend existing order management to support bundle decomposition
  2. introduce decomposition service as a temporary transitional component
  3. move decomposition logic to orchestration layer

In one review I was involved in, the team had already emotionally committed to option 2. But once the alternatives were modeled properly, it became obvious that moving decomposition into orchestration would create even worse fragmentation of commercial and technical order semantics. Extending order management was strategically cleaner, but too slow for the launch timeline. So the board approved option 2 — with conditions.

That is the kind of outcome a review should produce.

The actual decision was not “great diagrams, proceed.” It was: approve with conditions; decomposition service allowed only as transitional architecture; mandatory retirement trigger tied to order platform release capability; explicit ownership model for decomposition rules; observability and support handoff required before production.

That’s a real decision. It has consequences. It can be traced later.

And it only works if the evidence package makes the trade-offs visible.


The review flow I recommend in practice

I don’t run reviews as a neat textbook sequence because real life is messier than that. But the flow is roughly this.

Clarify the review trigger and the actual decision owner first. If nobody knows who owns the decision, the board will drift.

Define the review scope in EA early. Not after all the diagrams exist. Early. That immediately exposes whether the package is about a standards exception, a design approval, a production-readiness gate, or something else.

Then assemble evidence from the existing repository. This is the point where you find out whether your architecture practice is healthy or just busy. If key requirements, standards references, risks, and ownership metadata are missing, you will feel it here.

Identify missing evidence early. That sounds banal, but it saves everyone pain. If you discover missing NFRs two days before the board, postpone. Seriously. I’ve seen too many teams push through anyway, and the result is predictable: conditional approval based on guessed evidence, followed by avoidable design churn later.

Run an internal pre-review with domain architects. Not as theatre. As a quality filter. Enterprise architecture, security, integration, operations, data, maybe network architecture depending on the change. Let them attack the package before the board does.

Publish a curated review package to reviewers. One coherent thing. If reviewers need twelve separate exports and three SharePoint folders, the package is badly curated. I feel strongly about this.

During the review itself, focus on decisions, not diagram narration. If the lead architect needs 40 minutes of explanation before the board understands the architecture, the package was not ready. The board should be debating trade-offs, conditions, and unresolved conflicts — not discovering basic context live.

Avoid live modeling in the board except for clarifications. It usually creates heat, not clarity.

Then, and this is where teams often fail, capture the outcome back into EA. Conditions. Actions. Rationale. Exception expiry. Baseline. Retirement expectations. Without that, the review becomes disposable.


Evidence package checklist by review type

Here’s a practical shorthand I’ve used with teams. Roughly, the minimum evidence per review type:

  • Standards exception review — decision statement, the deviating standard, risk acceptance, alternatives considered, explicit expiry condition
  • Design approval or domain conformance review — scope statement, context diagram, change footprint, principle conformance, alternatives, risks linked to elements
  • Production-readiness review — hard non-functional evidence, resilience design, support ownership, operational controls, observability coverage
  • Investment checkpoint — strategic fit, dependency concentration, transition-state views, retirement logic for transitional architecture

That checklist is deliberately minimal. Real reviews can go deeper. But if you do not have the minimum, you usually do not have a reviewable package.

Don’t make reviewers hunt

This sounds trivial. It isn’t.

The package should make the decision easy to find and the evidence easy to navigate. Put the decision statement on page one. Show architecture context before detailed component diagrams. Label what is new, changed, retained, and retired. Include a one-page risk summary. Make conditional approvals explicit. Highlight unresolved items honestly.

There are several decent output options from Sparx EA depending on how your organization works. Document generation from the review package can be effective if the template is lean and disciplined. HTML or web publishing is useful for distributed review. Linked diagrams with drill-down can work well for technical reviewers who want to inspect detail.

But the medium matters less than the curation.

My rule of thumb: if reviewers are hunting for the risk list, the package has failed.

Traceability is where Sparx EA actually earns its keep

This is the part people either love or overdo.

Sparx EA is genuinely better than static review decks when it is used for traceable evidence. Requirement-to-component traceability. Principle-to-decision mapping. Impact analysis via connectors and matrices. Current, transition, and target states in one repository. That is where the tool starts paying for itself.

In telecom, those traceability paths are not abstract.

A customer identity service change can impact lawful intercept data flows and consent enforcement in ways that are easy to miss if you only look at an application diagram.

A resilience requirement for a mediation platform should trace to active-active deployment nodes, failover assumptions, and operational runbooks, not just to a sentence in a non-functional requirements document.

A proposed API can and should be checked against canonical information model standards, especially when multiple channels and downstream OSS/BSS consumers rely on shared semantics.
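Those trace checks can be scripted once the links are exported. A minimal sketch, assuming hypothetical requirement IDs and (requirement, component) link pairs rather than any real EA export format:

```python
# Illustrative requirement-to-component trace check. The link data is
# hypothetical; in Sparx EA it would come from connectors or a matrix.

requirements = {
    "NFR-LAT-01": {"critical": True},   # latency budget on activation path
    "NFR-RES-02": {"critical": True},   # active-active resilience
    "FR-ORD-10": {"critical": False},
}
traces = {("NFR-LAT-01", "Network Exposure API")}  # (requirement, component)

def untraced_critical(reqs, links):
    """Return critical requirements with no component trace, sorted."""
    traced = {req for req, _ in links}
    return sorted(r for r, meta in reqs.items()
                  if meta["critical"] and r not in traced)

print(untraced_critical(requirements, traces))
# → ['NFR-RES-02']
```

Selective checks like this prove the critical things without forcing a connector onto every element.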

But there is nuance here. Traceability should support decisions, not become architecture bureaucracy. If every connector becomes mandatory and every element requires ten tagged values before it can exist, people stop trusting the model or simply stop updating it. I’ve watched that happen more than once.

Good traceability is selective and purposeful. It proves the critical things.

What to capture after the review — because this is where governance memory lives

This section gets skipped constantly, which is one reason enterprise architecture functions end up repeating themselves.

After the review, write back into Sparx EA:

  • decision outcome
  • approval conditions
  • action owners
  • review date
  • baseline reference
  • exception expiry
  • architecture debt accepted
  • planned retirement dates for transitional components

This is not admin. This is the record of what the enterprise knowingly accepted.

If you don’t capture it, the same issue resurfaces at go-live, or six months later, or during the next dependent program. Temporary architecture becomes permanent because nobody recorded what “temporary” meant.
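As an illustration of what that write-back might capture, here is a sketch of a decision record with the fields listed above. The class and its checks are my own construction, not an EA feature:

```python
# Illustrative post-review decision record. Field names mirror the
# write-back list above; the checks encode two classic governance gaps.
from dataclasses import dataclass, field

@dataclass
class ReviewOutcome:
    decision: str                                # e.g. "approve with conditions"
    conditions: list = field(default_factory=list)
    action_owners: dict = field(default_factory=dict)
    review_date: str = ""
    baseline_ref: str = ""
    exception_expiry: str = ""                   # empty means no exception granted
    accepted_debt: list = field(default_factory=list)
    retirement_dates: dict = field(default_factory=dict)  # component -> date

    def problems(self) -> list[str]:
        """Flag the classic governance-memory gaps."""
        issues = []
        if ("transitional" in " ".join(self.conditions).lower()
                and not self.retirement_dates):
            issues.append("transitional component approved without a retirement date")
        if self.decision == "approve with conditions" and not self.conditions:
            issues.append("conditional approval with no recorded conditions")
        return issues

outcome = ReviewOutcome(
    decision="approve with conditions",
    conditions=["decomposition service allowed only as transitional architecture"],
)
print(outcome.problems())
# → ['transitional component approved without a retirement date']
```

The exact fields matter less than the habit: every approval condition becomes data that a later review, or a later architect, can query.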

I once saw a temporary event broker introduced for a product launch because the strategic integration platform could not be ready in time. Fair enough. That kind of compromise happens. The problem was that no retirement condition was captured in the architecture record. Four years later it was still in production, carrying more traffic than the “strategic” platform, with fragile support ownership and no clean lineage of why it existed in the first place.

That is not unusual. It’s what happens when review outcomes live in meeting notes instead of the model.

A brief word on politics

Architecture reviews are never purely technical. Especially not in telecom.

Vendor commitments are already in play. Program deadlines are real. Domain ownership disputes are usually simmering below the surface. Every platform is “strategic” when its sponsoring executive is in the room. Operations teams will rightly resist unsupported patterns. Delivery leads will try to narrow scope to protect dates.

Evidence does not remove politics.

What it does do is depersonalize disagreement, at least a little. It lets you frame issues as trade-offs and consequences rather than territory fights. It helps show impact on existing architecture debt, not just local delivery pressure. It makes weak decisions harder to hide behind vague optimism.

That’s often the best you can hope for.

Common mistakes when using Sparx EA for architecture reviews

A punchier list, because some mistakes deserve bluntness.

Using EA as a diagramming tool only.

Do instead: model requirements, risks, decisions, standards, ownership, and traceability.

Generating 80-page documents with no narrative hierarchy.

Do instead: curate around the decision and support drill-down where needed.

Failing to distinguish current, target, and transition.

Do instead: show all three when migration reality matters. In telecom, it almost always does.

Not tagging review-critical elements.

Do instead: tag lifecycle state, change type, owner, criticality.

Keeping risk and decision logs outside the repository.

Do instead: link them to model elements so context survives.

No baseline before review.

Do instead: baseline the reviewed state. Otherwise you cannot prove what was approved.

No ownership metadata on interfaces and applications.

Do instead: make support and lifecycle ownership visible.

Presenting target architecture with no migration reality.

Do instead: show coexistence, sequencing, and debt implications.

Approving exceptions without expiry dates.

Do instead: make the end condition explicit or expect permanence.

Practical setup tips that save time

A few tactical things make a disproportionate difference.

Create a reusable evidence package template in the repository. Don’t reinvent the review structure every time.

Standardize review-relevant tagged values. Keep them few enough that people actually use them.

Maintain a decision element type or stereotype. Same for exceptions if you can.

Prebuild matrix profiles for dependency coverage and standards impact. A good relationship matrix can settle arguments quickly.

Use diagram legends to show lifecycle state and change type. New, changed, retained, retired. It reduces noise immediately.

Keep reviewer-focused document templates lean. Reviewers are not there to admire repository completeness.

And assign one architect to curate the package. I know I already said that. It matters enough to repeat.

Curation beats tool cleverness. Every time.

Architecture review quality is mostly evidence quality

That is really the whole argument.

An architecture review is a decision mechanism. Sparx EA is useful when it holds traceable evidence, not just drawings. Evidence packages reduce noise, expose risk, preserve rationale, and create actual governance memory.

That matters in every industry, but especially in telecom, where transitional states last for years, integrations multiply quietly, cloud services and on-prem platforms coexist awkwardly, Kafka gets introduced faster than operational ownership matures, IAM assumptions break in cross-domain flows, and operational risk is rarely where the first diagram says it is.

The architect’s job is not to impress the board with diagrams.

It is to make the decision easy, explicit, and defensible.

Frequently Asked Questions

What is enterprise architecture?

Enterprise architecture aligns strategy, business processes, applications, and technology. Using frameworks like TOGAF and languages like ArchiMate, it provides a structured view of how the enterprise operates and must change.

How does ArchiMate support enterprise architecture?

ArchiMate connects strategy, business operations, applications, and technology in one coherent model. It enables traceability from strategic goals through capabilities and application services to technology infrastructure.

What tools support enterprise architecture modeling?

The main tools are Sparx Enterprise Architect (ArchiMate, UML, BPMN, SysML), Archi (free, ArchiMate-only), and BiZZdesign. Sparx EA is the most feature-rich, supporting concurrent repositories, automation, and Jira integration.