Most Architecture Review Boards start with good intentions and then, before long, become a drag on delivery.
You see the pattern often enough. A cloud migration is already under pressure. One team is trying to move a citizen-facing case management service onto a managed container platform. Security wants stronger boundary controls. The data team is concerned about residency and retention. The platform team insists the standard landing zone is mandatory. Procurement has a contract renewal underway with the incumbent hosting vendor, so nobody wants to make a call that could trigger commercial complications. The solution team goes to the ARB hoping for clarity and walks out with three action items, no decision, and another meeting in two weeks.
After a while, delivery teams stop treating the board as support. It becomes something to get through, or worse, something to route around.
Government environments make this sharper than in most commercial settings. The estates are older. Ownership is less clear. Policies overlap. Shared services are often only half-built. There are records obligations, privacy concerns, auditability requirements, agency-specific mandates, and usually some inherited platform contract that continues shaping technical decisions long after it should have lost that power. Add cloud transformation to that mix and you get exactly the conditions where architecture governance matters — and exactly the conditions where poor governance does real damage.
So this is not a theory piece about TOGAF.
It is a practical setup guide for running an Architecture Review Board that genuinely helps teams move, while still giving the enterprise enough control to manage risk, standards, and long-term design integrity. I’ll cover operating model, decision scope, artefacts, cadence, triage, exceptions, and a realistic 90-day setup sequence. I’m not going to re-explain the whole ADM or pretend that dropping TOGAF terminology into every conversation somehow improves governance.
Because it doesn’t.
The first design choice is not membership — it is purpose
This is the mistake I see most often. An organization decides it needs an ARB and immediately starts debating who should sit on it. Chief architect, security lead, head of data, maybe someone from operations, perhaps a business representative if there is room.
That is backwards.
Before you decide membership, you need to decide what the board is actually for. In practice, most ARBs drift into one of three roles:
- a conformance gate
- a design assurance forum
- an enterprise trade-off decision body
Those are not the same role, and trying to be all three usually produces a board that is slow, overbooked, and unclear about its authority.
My view here is fairly straightforward: for cloud transformation, especially in government, the ARB should primarily be a time-bound enterprise trade-off forum. Not an all-purpose approval committee. Not a place where every solution goes to be blessed. And definitely not a stage on which stakeholders perform governance for each other.
If a solution architect cannot explain, in one or two sentences, why a decision has enterprise impact or crosses architectural domains, it probably should not go to the board.
A concrete example helps. Suppose you are deciding whether citizen identity services should be centralized on a shared IAM platform or remain agency-specific for now. That belongs at the ARB. It affects interoperability, user experience, security posture, funding, vendor strategy, and future reuse. It may also shape how APIs, audit logging, and delegated administration work across agencies.
By contrast, choosing the exact log retention values for a specific workload inside an already approved observability pattern is not an ARB matter. That belongs with platform standards and implementation governance.
The board should be asked to decide things that matter beyond one team.
That sounds obvious. In my experience, plenty of boards fail simply because nobody draws that line early enough.
Put the ARB in the TOGAF governance model without turning it into framework theatre
TOGAF gives you a useful home for the ARB, but you do not need to drag people through a framework lecture every time you explain it.
In practical terms, the board sits inside Architecture Governance. It exists to apply architecture principles, enforce or deliberately vary standards, and resolve design decisions that local delivery teams cannot — or should not — settle on their own. It is a mechanism, not the mechanism.
That distinction matters.
An ARB is not a replacement for design authority, security approval, service transition, operational change management, portfolio governance, or commercial review. I’ve seen organizations use the ARB as a dumping ground for everything unresolved elsewhere. Once that happens, the board turns into a clearinghouse for ambiguity. Meetings get longer. Accountability gets weaker. Nobody is quite sure whether they are making architecture decisions or just collecting objections.
TOGAF concepts are still useful if you keep them grounded:
- Architecture Principles become review criteria.
- Architecture Repository becomes the place where patterns, standards, past decisions, and approved exceptions are stored.
- Architecture Contracts help turn board decisions into implementation follow-through.
- Compliance Assessments should be evidence-based checks, not ritualized presentations.
That last point is worth underlining. Compliance should not mean making teams produce polished decks to reassure the enterprise that governance exists. It should mean checking whether a design actually conforms to agreed patterns, or if not, whether the deviation is justified and controlled.
Decide what comes to the board — and what never should
The most effective ARBs are selective. Ruthlessly selective, at times.
You need intake filters. Without them, every delivery issue eventually finds its way onto the agenda.
The categories that generally belong at the board are fairly consistent:
- cross-domain architecture decisions
- deviations from standards with material impact
- platform exceptions that affect enterprise risk or reuse
- major integration pattern choices
- data sharing, sovereignty, or residency trade-offs
- legacy modernization sequencing decisions with broad consequences
In a government context, that might include questions like:
- Can a case management platform use a public cloud PaaS service in a protected-data setting?
- Can an agency bypass the enterprise API platform because delivery deadlines are at risk?
- Can a regional office retain its own document repository even though records policy says those functions should move to a centralized platform?
- Should inter-agency event exchange use Kafka on a shared managed platform, or continue with point-to-point batch integration until the common event backbone matures?
Those are architecture decisions with enterprise implications.
Now the things that should stay out:
- minor implementation detail
- fully compliant designs using approved patterns
- vendor demos disguised as architecture proposals
- issues that are really funding, staffing, or project management problems
- local tuning decisions inside a sanctioned platform model
I have seen ARBs spend time debating whether a team should use one ingress controller or another in Kubernetes when the real architectural issue was that the landing zone itself did not yet support the required network segmentation. That is exactly the kind of confusion intake triage should stop before the meeting.
A simple intake checklist helps. Not glamorous, but effective.
ARB intake checklist
- What decision is being requested?
- Why can this not be resolved within approved patterns or local design authority?
- What enterprise standard, principle, or shared service is affected?
- What options have been considered?
- What happens if no decision is made now?
- Is this an architecture issue, or is it really security, commercial, funding, or operational governance?
- What level of turnaround is required?
If teams cannot answer those questions, the item is not ready.
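If you track submissions in even a lightweight tool, the checklist above can be enforced mechanically before an item ever reaches triage. A minimal sketch of that gate — field names and structure are illustrative, not any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class IntakeSubmission:
    # Fields mirror the intake checklist questions; names are illustrative.
    decision_requested: str = ""
    why_not_local: str = ""            # why approved patterns or local design authority can't settle it
    standards_affected: list = field(default_factory=list)
    options_considered: list = field(default_factory=list)
    cost_of_no_decision: str = ""
    governance_domain: str = ""        # "architecture", "security", "commercial", "funding", ...
    turnaround_needed_days: int = 0

def intake_gaps(sub: IntakeSubmission) -> list:
    """Return the checklist questions a submission has not yet answered."""
    gaps = []
    if not sub.decision_requested:
        gaps.append("What decision is being requested?")
    if not sub.why_not_local:
        gaps.append("Why can this not be resolved locally?")
    if len(sub.options_considered) < 2:
        gaps.append("What options have been considered?")
    if not sub.cost_of_no_decision:
        gaps.append("What happens if no decision is made now?")
    if sub.governance_domain and sub.governance_domain != "architecture":
        gaps.append(f"Redirect: this looks like a {sub.governance_domain} issue")
    return gaps
```

A submission with gaps goes straight back to the team with the unanswered questions; an empty list means it is ready for triage, nothing more.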
Build the board backward from decision speed, not organizational charts
A lot of ARBs are overstaffed because people design them to reflect the organization chart. That is almost always a mistake.
The core board should be compact. Small enough to make decisions, broad enough to represent the main enterprise concerns. In most government cloud programs, I would start with:
- chief or lead enterprise architect as chair
- business architecture representation
- security architecture lead
- data or information architect
- cloud/platform architect
- solution or delivery architecture representative
That is enough for a functioning core.
Then bring in specialists when needed: privacy, integration, operations/SRE, procurement, records management, identity, maybe legal in unusual cases. They do not all need permanent seats. In fact, they usually should not have them.
People who should not be permanent members? Easy.
Anyone attending purely for status. Senior leaders who cannot engage in trade-offs but can delay them. Stakeholders with opinions but no decision accountability. Observers from every affected team. A 16-person ARB sounds inclusive and responsible. In practice, it creates delay, diluted ownership, side conversations, and a strong tendency to defer hard calls.
You also need quorum rules and delegated authority. If the chair and two or three critical domain representatives cannot make a decision without reconvening the entire machine, the board will bottleneck. Government delivery rarely waits politely for governance calendars.
One pattern I like is simple: the full board handles significant exceptions and enterprise-impacting choices; the chair plus delegated domain leads can make urgent interim decisions, which are then logged and reviewed retrospectively.
That keeps things moving.
The minimum artefacts that make reviews useful
There is a persistent fantasy in architecture governance that more documentation leads to better decisions. Usually it just leads to slower ones.
A practical ARB submission pack should be lean. If a team needs weeks to prepare for review, your process is too heavy.
For most reviews, the minimum set is:
- a one-page decision summary
- a context diagram
- key risks and trade-offs
- standards compliance view
- exception request, if any
- implementation impact and dependencies
That is enough to have a serious conversation.
Depending on maturity, you might also ask for a capability map, target-state alignment, deployment model, or a data classification and residency view. But those should be driven by the type of decision, not by a generic template that treats every review as equally complex.
Let’s make that concrete. Suppose a service is moving from on-prem hosting to a managed container platform in the cloud. The board probably needs to understand:
- how IAM integration will work
- where audit logging lands
- what the data residency position is
- the disaster recovery approach
- which legacy systems remain coupled
- whether there are non-standard network, secrets, or integration requirements
- whether Kafka connectivity is needed and, if so, whether it follows approved enterprise patterns
What it does not need is a 70-slide deck with polished capability heat maps and generic cloud benefits slides. I say that as someone who has sat through far too many of those.
The real need is a clear decision statement.
What the ARB reviews at each level of change
This is where teams need predictability. They should know, at least roughly, what kind of review they are signing up for. A simple tiering works:

- Compliant change using approved patterns: no board review; confirmed by local design authority.
- Deviation with local impact only: delegated review by the chair plus the relevant domain lead.
- Cross-domain or enterprise-impacting decision: full board review with a complete submission pack.
- Significant exception or precedent-setting choice: full board review, with logged conditions and an expiry date.

A tiering like that is not just administration. It sets expectations, limits queue chaos, and stops every design question from being treated as constitutional law.
Cadence matters more than people expect
I’ve seen monthly boards in active cloud programs. They almost always fail.
Not because the people are bad. Because the pace is wrong.
If teams are deploying frequently, wrestling with landing zone maturity, dealing with IAM federation issues, or trying to retire brittle middleware while introducing things like managed Kafka or event-driven integration, waiting a month for a decision is absurd. They either stall or route around governance.
My recommendation for most transformation environments is:
- a short regular ARB session, weekly or fortnightly
- a separate pre-board triage every week
- an urgent out-of-cycle path for operational or delivery-critical exceptions
Weekly boards can fail too, of course. The failure mode there is performative governance: too many low-value agenda items, not enough real decisions, everyone attending because the meeting exists.
So keep the agenda sharp:
- confirm prior decisions and actions
- exception items first
- enterprise-impacting decisions second
- information-only items last, or remove them entirely
I’m not a fan of “for noting” architecture items cluttering the board. If the board is not being asked to decide, approve, or advise on a real trade-off, the item probably belongs somewhere else.
Decision logging is non-negotiable. Every decision should record:
- the decision itself
- rationale
- constraints and assumptions
- owner
- expiry date if relevant
- required follow-up actions
A good example: an agency needs a temporary exemption from the enterprise observability stack because the landing zone upgrade is three months behind and the service still has to go live. Fine. The board can approve it for 90 days, require central log export and compensating controls, and set a remediation milestone. That is a useful decision. Much better than a vague “approved subject to alignment.”
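That 90-day exemption is exactly the kind of entry the decision log has to hold. A minimal record shape is enough — this is a hedged sketch, and the field names are mine, not a TOGAF artefact:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class ArbDecision:
    # Fields mirror the decision-log list above; names are illustrative.
    decision: str
    rationale: str
    owner: str
    constraints: list = field(default_factory=list)
    follow_ups: list = field(default_factory=list)
    decided_on: date = field(default_factory=date.today)
    expires_on: Optional[date] = None   # None means a standing decision

    def is_expired(self, today: Optional[date] = None) -> bool:
        today = today or date.today()
        return self.expires_on is not None and today > self.expires_on

# The observability exemption from the text, as a logged decision (dates illustrative):
exemption = ArbDecision(
    decision="Temporary exemption from enterprise observability stack",
    rationale="Landing zone upgrade is three months behind; go-live date is fixed",
    owner="Agency delivery lead",
    constraints=["central log export required", "compensating controls in place"],
    follow_ups=["remediation milestone reviewed at day 60"],
    decided_on=date(2024, 1, 10),
    expires_on=date(2024, 1, 10) + timedelta(days=90),
)
```

The point of the structure is the expiry and owner fields: a "temporary" decision without both is the vague "approved subject to alignment" the text warns about.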
Don’t launch with slogans. Write operating rules.
A surprising number of ARB charters are full of phrases like “ensure strategic alignment” and “promote architectural excellence.”
Nobody can operate a board with that.
A usable charter needs a few plain things:
- mandate
- scope
- decision rights
- escalation path
- review categories
- service levels
- artefact expectations
- exception management rules
And it needs direct wording. For example:
- “The ARB reviews only decisions with cross-domain, exception-based, or enterprise-standard implications.”
- “Compliant solutions using approved patterns may proceed without full board review.”
- “The board may grant time-bound conditional approvals where risks are understood and remediation is owned.”
- “Decisions that fall outside architecture scope will be redirected within three business days.”
That is real governance language. It tells teams what will happen.
In government, I would add explicit references to policy alignment, privacy and records obligations, inter-agency standards, and audit trail requirements. Not as decoration. As operating constraints.
What you should avoid is ambiguity about whether the board recommends or approves. If that remains unclear, every contentious issue becomes a political negotiation afterward.
A concrete setup sequence for the first 90 days
This is where most organizations need the most help. The board does not become effective because a charter is signed. It becomes effective because the first few months are handled deliberately.
Days 1–30: map the governance problem before creating the board
Do not start by scheduling the meeting. Start by understanding what is broken.
Look at current approval bottlenecks. Where are designs waiting? Which reviews are duplicated? Which teams keep getting conflicting direction from security, platform, and enterprise architecture? What exceptions recur every month? Where do standards collide with implementation reality?
In one department I worked with, three separate forums were all reviewing cloud connectivity decisions: network governance, security design review, and an informal architecture working group. None had final authority. Teams would present the same hybrid connectivity pattern three times and still not know whether they could proceed. That is exactly the kind of duplication the ARB should absorb or rationalize.
Outputs by day 30 should include:
- current-state governance map
- list of duplicated reviews
- recurring exception themes
- candidate decision categories
- draft scope boundaries for the ARB
Days 31–60: define triggers, appoint members, publish templates
Now decide what causes an ARB review.
Define review triggers clearly. Cross-agency integration? Deviation from IAM standard? Unsupported PaaS in protected workloads? New Kafka tenancy outside the shared eventing platform? Data residency exception? Legacy hosting retention beyond target-state milestone? Be specific.
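Triggers work best when they are written as testable conditions rather than prose, so triage can apply them consistently. A toy sketch — the trigger names and submission keys are illustrative:

```python
# Each trigger is a predicate over a submission dict; names are illustrative.
TRIGGERS = {
    "cross_agency_integration":  lambda s: s.get("agencies_involved", 1) > 1,
    "iam_standard_deviation":    lambda s: s.get("deviates_from_iam", False),
    "unapproved_paas_protected": lambda s: (s.get("paas_approved") is False
                                            and s.get("data_class") == "protected"),
    "kafka_outside_shared":      lambda s: (s.get("uses_kafka", False)
                                            and not s.get("on_shared_eventing", False)),
    "residency_exception":       lambda s: s.get("residency_exception", False),
}

def fired_triggers(submission: dict) -> list:
    """Return which defined triggers require an ARB review for this submission."""
    return [name for name, pred in TRIGGERS.items() if pred(submission)]
```

If no trigger fires, the item proceeds through local design authority; if any fires, it goes to triage with the trigger list attached, which already tells the board why it is there.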
Then appoint the chair, core members, and delegates. Keep it tight.
Publish the submission pack and service levels. Teams need to know the expected turnaround. “Submit and wait” is not a service model.
I would also stand up triage during this phase, even if the board itself is not fully operational. Triage is how you stop the backlog from becoming unmanageable before launch.
Outputs by day 60:
- approved charter
- review trigger list
- core membership and delegate model
- triage process
- submission templates
- decision log format
- draft exception register
Days 61–90: run pilots, measure, and adjust
Do not open the floodgates immediately. Pilot the board with a limited scope.
For example, in a department shifting from agency-by-agency cloud decisions to a shared review model, you might initially limit the ARB to:
- platform exceptions
- data-sharing decisions
- integration pattern choices
- security architecture exceptions with enterprise impact
Run real reviews. Measure cycle time. Ask delivery teams whether the board clarified anything or just created more work. Track what was redirected away from the board. Look at where decisions were still blocked because authority sat somewhere else.
By day 90, you should have:
- pilot decisions logged
- turnaround metrics
- issue categories that should be removed from board scope
- refined quorum and delegation rules
- updated templates based on actual usage
- a short lessons-learned paper, not a grand maturity model
That last point matters. Early ARBs should optimize for clarity and predictability, not elegance.
The part many architects underestimate: pre-board triage
The real work of a good ARB happens before the meeting.
I feel strongly about this because so many boards waste their sessions trying to work out what problem they are even being asked to solve. That should never happen live if you can avoid it.
Triage should do four things:
- reject incomplete submissions
- route local decisions away from the board
- identify whether the issue is architecture, security, procurement, or operations
- help teams sharpen the decision statement
A board secretary paired with a lead architect or delegated reviewer usually works well. The secretary keeps flow, records, and agenda discipline. The architect decides whether the issue genuinely belongs.
I’ve found triage is also where you de-escalate noise. A team might submit an item saying they need ARB approval to use Kafka. Once you unpack it, the real issue is that they need a topic provisioning exception because the shared platform’s naming and retention model does not fit their agency records obligations. That is a narrower, and much more useful, conversation.
Without triage, the board meeting itself becomes discovery. And once the meeting becomes discovery, it becomes theatre.
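The four triage jobs above are essentially a routing decision. A toy sketch of that routing — the categories and dict keys are illustrative, not a standard schema:

```python
def triage(item: dict) -> str:
    """Route an intake item: reject, redirect, resolve locally, or schedule for the board.

    Keys and outcome categories are illustrative.
    """
    # 1. Reject incomplete submissions.
    if not item.get("decision_statement"):
        return "reject: no clear decision statement — help the team sharpen it"
    # 2. Identify whether this is really another governance domain's problem.
    domain = item.get("domain", "architecture")
    if domain in ("security", "procurement", "operations", "funding"):
        return f"redirect: {domain} governance owns this"
    # 3. Route local, compliant decisions away from the board.
    if item.get("uses_approved_pattern") and not item.get("requests_exception"):
        return "local: compliant design, proceed via design authority"
    # 4. Only enterprise-impacting items and exceptions reach the agenda.
    if item.get("enterprise_impact") or item.get("requests_exception"):
        return "schedule: board agenda"
    return "local: no enterprise impact identified"
```

The Kafka example fits this shape: "approval to use Kafka" fails step 1 until triage helps the team restate it as a topic provisioning exception, which then routes cleanly to the agenda.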
Architecture review is not architecture policing
If delivery teams think the ARB exists to catch them doing something wrong, you have already lost half the battle.
There is always tension between enterprise control and product autonomy, especially in cloud programs where teams are told to move fast and then discover fifteen hidden constraints around IAM, networking, secrets, logging, procurement, and data handling. The board has to reduce that tension, not amplify it.
A few practical things help:
- publish reusable reference architectures
- offer office hours before formal review
- make standards easy to find
- allow conditional approvals
- time-box exception reviews
Language matters too. “What decision are you asking us to make?” is a much better opening question than “Why did you do this?” One invites collaboration. The other triggers defensiveness.
A familiar scenario: a digital licensing team wants to adopt a managed workflow service that is not yet on the approved cloud services list. A bad ARB says no because the standard is incomplete. A useful ARB asks what capability gap the service fills, whether approved alternatives exist, what data it touches, how IAM and audit requirements would be met, and whether a conditional six-month approval with controls is enough to let delivery proceed while enterprise standards catch up.
That is governance doing its job.
The best boards I have seen are remembered for solving awkward trade-offs. Not for red-marking diagrams.
Handle exceptions like a system, not a favour
In government transformation, exception management is not edge-case administration. It is central.
Legacy dependencies persist longer than planned. Procurement timing lags technology decisions. Policy updates trail implementation realities. Critical systems have constrained migration windows. And sometimes the standard platform simply is not ready for what teams need to deliver.
So build a proper exception model. At minimum, each exception should record:
- exception type
- business justification
- risk acceptance
- compensating controls
- expiry date
- remediation owner
Then distinguish between two kinds of exceptions.
First, tolerable temporary exceptions. These are often necessary. For instance, a justice system may need a non-standard integration bridge because the enterprise API platform does not yet support a required protocol. That can be acceptable if there is a retirement plan, funded remediation, and clear controls.
Second, dangerous precedent-setting exceptions. These look temporary but actually undermine the architecture direction. Allowing each agency to stand up its own identity store “for now” often falls into this category. Once you permit enough of those, your target state becomes fiction.
The classic failure is granting temporary waivers with no end date and no reporting. Six months later they are part of the baseline.
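That "no end date, no reporting" failure is easy to catch mechanically once the register is structured. A sketch of a monthly sweep — the entries and field names are illustrative:

```python
from datetime import date

# Each entry mirrors the minimum exception fields above; values are illustrative.
register = [
    {"id": "EX-014", "type": "platform",    "expiry": date(2024, 6, 30),
     "owner": "Agency A platform lead", "controls": ["central log export"]},
    {"id": "EX-019", "type": "integration", "expiry": None,   # the classic failure
     "owner": "Agency B delivery",      "controls": []},
    {"id": "EX-021", "type": "identity",    "expiry": date(2024, 3, 1),
     "owner": "Agency C",               "controls": ["federation bridge monitored"]},
]

def sweep(entries, today):
    """Flag exceptions with no expiry date, and exceptions past their expiry."""
    no_end_date = [e["id"] for e in entries if e["expiry"] is None]
    overdue = [e["id"] for e in entries
               if e["expiry"] is not None and today > e["expiry"]]
    return no_end_date, overdue
```

Run on a schedule and reported to the board, a sweep like this is what keeps temporary waivers from quietly becoming part of the baseline.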
Measure whether the ARB is helping
Do not count meetings. Do not count slides reviewed. Do not count how many people attended.
Useful metrics are operational:
- review turnaround time
- percentage resolved without full board review
- repeated exception themes
- standards adoption rates
- implementation compliance after approval
- delivery team satisfaction
In government, I would also watch for:
- reduction in duplicated platforms across agencies
- fewer unmanaged data-sharing patterns
- improved auditability of architecture decisions
- reduced use of direct point-to-point integrations where shared API or eventing models exist
Those measures tell you whether the board is changing architecture outcomes, not just running meetings.
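Most of those metrics fall out of the decision log and triage records automatically, provided they capture dates and routing outcomes. A sketch of two of them — the record schema is illustrative:

```python
from datetime import date
from statistics import median

# Illustrative records: when each item was submitted, decided, and where it went.
items = [
    {"submitted": date(2024, 3, 1),  "decided": date(2024, 3, 8),  "route": "board"},
    {"submitted": date(2024, 3, 3),  "decided": date(2024, 3, 5),  "route": "delegated"},
    {"submitted": date(2024, 3, 4),  "decided": date(2024, 3, 4),  "route": "redirected"},
    {"submitted": date(2024, 3, 10), "decided": date(2024, 3, 21), "route": "board"},
]

def turnaround_days(records):
    """Median calendar days from submission to decision."""
    return median((r["decided"] - r["submitted"]).days for r in records)

def resolved_without_full_board(records):
    """Share of items settled by triage, delegation, or redirection."""
    off_board = sum(1 for r in records if r["route"] != "board")
    return off_board / len(records)
```

A rising off-board share is usually a good sign: it means triage and delegated authority are absorbing work the full board never needed to see.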
Common mistakes I see in ARBs
A few blunt ones.
Making every project appear before the board. That kills throughput.
Confusing architecture governance with project governance. The ARB is not there to ask whether milestones are on track or whether budget has been approved.
Letting security dominate every discussion without enterprise trade-off framing. Security matters enormously, obviously, but architecture decisions still involve usability, integration, cost, platform maturity, and delivery timing.
Approving exceptions verbally with no record. That is governance malpractice.
Forcing teams into heavyweight templates that obscure the actual issue.
Having no follow-up after approval. Conditions that are never checked are just polite suggestions.
Inviting too many observers.
Escalating unresolved design detail instead of enterprise choices.
Failing to retire outdated standards, which forces teams to ask for exceptions to rules nobody should still be enforcing.
And one government-specific mistake I see a lot: assuming a policy reference is enough architecture reasoning. It isn’t. Quoting policy does not solve implementation constraints. If the prescribed standard cannot be applied in the actual delivery environment, the board still has to reason through the architecture, not just point at a document.
Worked example: a multi-agency citizen services cloud program
Let’s end with something real enough to be useful.
Imagine a multi-agency citizen services program. Several departments are modernizing customer-facing services over a two-year period. There are shared ambitions around identity, payments, notifications, and data exchange, but hosting models are mixed and cloud maturity is uneven. One agency has a solid AWS landing zone and decent IAM federation. Another is still heavily dependent on a managed hosting provider and runs key workloads in VMs with manual deployment. A third wants event-driven integration through Kafka but has no operational model for it.
The ARB is set up with a deliberately narrow initial scope:
- shared service decisions
- security exceptions with enterprise impact
- cross-agency integration choices
- platform deviations affecting the target-state cloud model
It runs weekly triage and a fortnightly board. Membership is compact: lead enterprise architect as chair, security architect, data architect, cloud platform architect, solution architecture representative, and business architecture lead. Privacy and records specialists rotate in when needed.
In the first two months, the board handles four significant decisions.
The first is whether to use a centralized API gateway for all external-facing integrations. One agency wants to keep its own because migration risk is high. The board allows phased coexistence but mandates that new APIs register through the central model and that agency-specific gateway use expires in nine months.
The second is identity. Two agencies still maintain local citizen identity stores. The board decides that no new services can onboard to local identity, but it permits temporary federation bridges while the shared IAM platform scales. That is not a perfect answer. It is a practical one.
The third is point-to-point integration. A program team wants to connect directly between a benefits system and a notifications service because the enterprise messaging platform is not ready. The board approves it temporarily, requires event schema documentation, and sets a migration checkpoint once the shared platform is available.
The fourth is a container platform issue. One agency cannot yet meet the standard because a critical vendor product supports only a legacy deployment model. The board grants a conditional exception, but links it to contract renewal timing and requires a funded remediation path.
After several months, the outcomes are mixed in the healthy sense. There is better consistency. Fewer duplicated debates. Shared IAM and API decisions start to stabilize. Teams understand when they need the board and when they do not.
There is still friction around legacy exemptions. Some agencies feel the shared standards move slower than their deadlines. The board sometimes has to choose between architectural purity and practical transition. That is normal. In government, the job is rarely to eliminate compromise. It is to make compromise visible, controlled, and temporary where possible.
Final thought
A TOGAF-aligned ARB becomes useful when it is narrow in scope, fast in operation, and explicit about decision rights.
That sounds simple. It isn’t easy.
Especially in government cloud transformation, architecture governance has to absorb reality rather than deny it. Legacy constraints are real. Policy obligations are real. Procurement and funding boundaries are real. Delivery pressure is very real. If the board ignores those things, teams will bypass it. If it takes them seriously, while still protecting enterprise coherence, it becomes one of the few governance forums people actually value.
That is the test, really.
If your board cannot distinguish between a reusable enterprise decision and a local design detail, it will become noise. If it can, it will help the enterprise move with less confusion, fewer accidental divergences, and better reasons for the trade-offs it accepts.
FAQ: Running an ARB in practice
How often should an Architecture Review Board meet during a cloud migration?
Usually weekly or fortnightly, with weekly triage and an urgent out-of-cycle path. In most active programs, monthly is simply too slow.
Do all solution designs need ARB approval?
No. Compliant solutions using approved patterns should usually proceed through lightweight confirmation or local design authority.
What is the difference between an ARB and a Design Authority?
A Design Authority often focuses on solution quality and technical coherence within a domain or program. An ARB should focus on cross-domain, exception-based, and enterprise-impacting decisions.
How should the board handle urgent production exceptions?
Use delegated authority — typically the chair plus relevant domain leads — with retrospective review and a documented expiry.
Who owns follow-up on conditions attached to approvals?
The delivery owner should, but the ARB secretariat or architecture function needs to track and report closure. Otherwise conditions tend to disappear.
Can a vendor participate directly in ARB sessions?
Sometimes, but carefully. Vendors can clarify options or constraints, but they should not drive board decisions or turn the session into a product pitch.