How We Helped a Government Agency Implement TOGAF-Based Architecture Governance


I remember that meeting clearly, mostly because everyone in the room assumed governance was already in place.

It was a portfolio review, the kind of steering committee session that fills up with status slides, RAG indicators, and strategic language broad enough to survive almost any challenge. Five transformation initiatives were on the agenda. All five claimed alignment to the same digital strategy. All five had passed some form of review. Yet within forty minutes, it became painfully clear they were designing toward different futures.

One program was working on the assumption that a shared cloud platform would be available by Q3. Another had written its procurement documents as though on-premise hosting would remain mandatory for at least the next three years. Security had signed off on one interpretation of the compliance controls. Legal was operating with another. Procurement had accepted a third version embedded in framework call-off assumptions. The delivery teams were not blocked because nobody was doing the work. They were blocked because the institution had plenty of decision forums and no real decision system.

That distinction matters more than it sounds.

The agency was not short on architecture documents. It had principles. It had standards. It had solution diagrams, target-state decks, committee terms of reference, and enough PowerPoint to cover a corridor wall. What it lacked was decision rights, traceability, and a repeatable way to turn architectural intent into binding choices across investments, procurements, and delivery.

That, in the end, was the real assignment.

This was not a blank-sheet architecture exercise. We were not walking into a startup, or even a private company with a strong executive mandate and a tidy transformation office. We were stepping into a public institution with history, sensitivities, overlapping mandates, and the kind of accumulated procedural sediment you only really understand after a few weeks of hearing people say, “that’s not our remit.”

Why this was harder than a private-sector transformation

I’ve worked in both settings. The difference is not that government is slower or less capable. That is the lazy stereotype, and in my experience it misses the point entirely. The real difference is that public institutions operate under a denser form of legitimacy.

In a private firm, if the executive team agrees, you can often move. In a government agency, formal accountability is everywhere, but operational ownership is frequently diffuse. You may have oversight from multiple directorates. Shared services used by several departments. Data exchange obligations with member-state or national systems you do not control. Procurement cycles that last longer than the technology half-life of some of the platforms being considered. Policy commitments that create immovable dates even when the enabling architecture is still being argued over.

And then there is the reporting.

Reporting upward. Reporting sideways. Reporting externally. Reporting for audit. Reporting for funding. Reporting because the institution has to demonstrate not only that it made a decision, but that it made it proportionately, transparently, and in a way that will survive scrutiny months later from people who were never in the room.

That is why I have a fairly strong view on this now: in public institutions, architecture governance fails less because the model itself is weak and more because the model does not fit the institution that has to live with it day after day.

You can import a textbook governance structure and still fail completely. I’ve seen beautifully designed architecture boards become ceremonial. I’ve seen standards catalogs that no one really trusted. I’ve seen enterprise architects with excellent models and almost no influence because the funding process, procurement process, and project gating process were all running on different clocks.

This agency had all the familiar EU-style complexity markers. Cross-directorate service consumption. Shared identity dependencies. Transparency and records-management obligations. Security classification concerns sitting alongside open-data ambitions. Multilingual service design. Accessibility obligations. Data protection reviews. Interoperability expectations extending beyond the institution itself.

That combination changes what “good governance” actually looks like.

What we found in the first six weeks

The first six weeks were mostly about listening, reviewing documents, and trying not to be fooled by formal structures.

We interviewed the CIO office, security leadership, procurement, legal, PMO, solution architects, data specialists, infrastructure teams, and several business owners. We reviewed project gates, existing standards, exception practices, investment approvals, and prior architecture decisions. We looked at what people actually used, not just what had been published.

Quite a lot already existed.

There were architecture principles on paper. There were solution review meetings. There were domain experts in data, security, integration, and cloud. There were standards too—although “standards” often meant old PDFs in SharePoint, meeting notes in Teams, and a set of tribal rules that everyone important knew but nobody had ever written down properly in one place.

The problem was not the absence of architecture activity. It was fragmentation.

Reviews happened too late, often after procurement choices had narrowed the options so much that “review” really meant “comment on something that is now politically difficult to change.” Architects had influence, but not mandate. Exceptions were common and almost never retired. Strategic alignment meant one thing to the PMO, another to security, another to the business, and something else again to project managers trying to get through the next gate.

One example crystallized it.

Two modernization initiatives, both in the grants-management space, had moved through different governance paths. One was designing around a central federated identity model—something very similar to an EU Login-style approach, where external and internal identity patterns could be managed consistently through federation and policy-based access. The other had already initiated procurement for a standalone IAM capability bundled with the case-management platform because the vendor promised faster implementation and “compliance out of the box.”

Both had been deemed acceptable.

Not by the same people, obviously. But acceptable enough to keep moving.

When we presented the findings, that was usually the point where the room went quiet. Because everyone understood the consequence immediately: duplicated spend, inconsistent user experience, fragmented auditability, and a future integration problem disguised as short-term delivery pragmatism.

At that stage, adopting TOGAF was the easy part. Making it usable was the real work.

The mistake we made first

We did make a mistake early on, and it is worth being honest about because the polished version of these stories is usually misleading.

Our first move was too architectural.

We led with governance design artifacts. Proposed boards, role definitions, architecture deliverables, review stages, compliance checkpoints. Technically, it was all sound. If you compared it against best practice, it looked entirely reasonable. The trouble was that it landed badly.

Business leaders heard “more review.”

Delivery teams heard “slower approvals.”

Some architects heard “central control dressed up as process.”

That first draft governance model was politically tone-deaf. We were solving for structural coherence before we had properly translated the pain into the language the institution itself would actually sponsor.

In government environments, that translation matters more than many architects like to admit. Governance does not get approved because it is elegant. It gets approved because it is seen as reducing risk, improving defensibility, and helping delivery avoid avoidable collisions.

The agency did not need convincing that architecture was intellectually useful. It needed to believe that architecture governance would reduce contradictory decisions across the investment lifecycle.

That became the pivot.

Reframing the problem in language the agency would support

Once we stopped talking about “implementing TOGAF governance,” things improved quickly.

We reframed the conversation around three outcomes:

  • fewer late-stage design escalations
  • clearer reuse of shared capabilities
  • faster, documented decision-making for high-risk initiatives

These were not invented slogans. They came directly from what senior leaders were already frustrated by.

We used examples of duplicated spending. We showed where policy delivery risk was increasing because projects were making incompatible assumptions. We tied governance to auditability, procurement defensibility, and decision traceability. That last point mattered a great deal. In a public institution, being able to show why a decision was made is often almost as important as the decision itself.

The turning point was a meeting with the agency’s CIO-equivalent and a deputy director general. We did not begin with a target operating model. We began with contradictions. Two IAM paths. Three cloud assumptions. Shared integration components being bypassed by local procurement decisions. Security exceptions with no expiry. Projects escalating too late for meaningful architectural correction.

I’ve learned over the years that executives rarely sponsor governance because they want better architecture. They sponsor it because they are tired of paying for incoherence.

That was the opening.

The TOGAF pieces we used — and the parts we deliberately did not over-engineer

TOGAF helped, but not in the way framework evangelists often describe.

We used it as a toolkit.

From TOGAF, we leaned heavily on a few things: architecture principles as decision anchors, architecture governance as a formal capability rather than an informal practice, the Architecture Board concept, compliance assessments, repository structures for reusable assets, and capability-based planning to connect strategy with solution design.

We did not force every project through a full ADM cycle. That would have failed almost immediately. Some initiatives were too small. Others were already mid-flight. A few were effectively constrained by procurement timelines that did not line up with ideal sequencing.

So we simplified.

Documentation was reduced to what governance needed in order to make a decision. Viewpoints were tailored by audience. Executives got concise capability and risk views. Delivery teams got practical review packs. Domain specialists could go deeper where it mattered—data flows, integration patterns, cloud landing zone alignment, IAM decisions, records retention, security controls.

This is my honest view: TOGAF is useful in the public sector when treated as a toolkit, not as scripture.

The distinction that helped most was between governance of architecture and governance by architecture.

Governance of architecture is about running the function: boards, standards, compliance checks, repositories, roles. Necessary, yes, but not enough.

Governance by architecture is where architecture starts shaping institutional choices: what can be procured, which shared capabilities must be considered, when exceptions are acceptable, how target states influence funding and delivery. That is where the value appears. Without that, you just have meetings.

The governance model we actually implemented

What we finally put in place was layered, but not complicated for the sake of it.

At the top, we introduced a strategic architecture forum. Its purpose was not to review detailed solution diagrams. It focused on target-state alignment, capability investment choices, and cross-portfolio architectural implications. This was where strategic debates belonged: shared platforms, interoperability priorities, major cloud direction, data exchange patterns, identity models, and key transition-state decisions.

Below that sat the design authority, meeting fortnightly. This was the engine room for solution-level decisions. It handled project submissions, assessed risk, checked standards alignment, reviewed exceptions, and issued conditional approvals where necessary.

Then we established domain councils—data, security, integration, and cloud/platform. I’m generally cautious about proliferating bodies, but in this case they were needed because the domain complexity was real and the expertise was already dispersed. The trick was to make these councils advisory, with clear feeds into the design authority, not independent mini-governance kingdoms.

Decision rights had to be explicit.

The strategic forum could approve target-state directions, shared capability priorities, and major deviations with portfolio impact. The design authority could approve solution architectures within defined boundaries, issue conditions, and register time-bound exceptions. Domain councils could recommend, challenge, and maintain standards, but the escalation routes were clear.

That clarity sounds obvious. In practice, it rarely exists.
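
To show how little machinery explicit decision rights actually require, here is a minimal sketch of the split described above encoded as a lookup. The body and decision-type names are hypothetical illustrations, not the agency's actual terms.

```python
# Hypothetical sketch: explicit decision rights per governance body.
# Body and decision-type names are illustrative, not the agency's real terms.

DECISION_RIGHTS = {
    "strategic_forum": {
        "target_state_direction",
        "shared_capability_priority",
        "major_deviation_portfolio_impact",
    },
    "design_authority": {
        "solution_architecture_approval",
        "conditional_approval",
        "time_bound_exception",
    },
    "domain_council": {
        "standards_recommendation",
        "standards_maintenance",
    },
}

def can_decide(body: str, decision_type: str) -> bool:
    """True only if the named body holds the right to make this decision."""
    return decision_type in DECISION_RIGHTS.get(body, set())

def route(decision_type: str) -> list:
    """List every body entitled to make a given decision type."""
    return [b for b, rights in DECISION_RIGHTS.items() if decision_type in rights]
```

The point of writing it down this bluntly is that an escalation route becomes a lookup rather than an argument: if `can_decide` returns false, the question moves up, and everyone can see why.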

We defined roles in practical terms:

  • a head of enterprise architecture acting as chief architect
  • domain architects with named accountability areas
  • business architecture participation for capability and process impact
  • security and data protection integrated by design, not invited at the end
  • PMO linkage so architecture decisions affected delivery checkpoints and funding status

Cadence mattered too. Monthly strategic reviews. Fortnightly design authority. Lightweight pre-checks before formal submission. Those pre-checks were one of the most useful things we introduced, because they prevented formal sessions from turning into public ambushes.

The mandatory artefacts were intentionally limited:

  • a capability impact view
  • a solution context diagram
  • a standards compliance view
  • a risk and exception log

That was enough to support most decisions.
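
To illustrate how small that artefact set is, a submission pack can be checked for completeness with a handful of fields. The field names below are hypothetical; they simply mirror the four mandatory artefacts.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four mandatory artefacts as a submission record.
REQUIRED_ARTEFACTS = {
    "capability_impact_view",
    "solution_context_diagram",
    "standards_compliance_view",
    "risk_and_exception_log",
}

@dataclass
class SubmissionPack:
    project: str
    artefacts: dict = field(default_factory=dict)  # artefact name -> document ref

    def missing(self) -> set:
        """Artefacts still required before the design authority will review."""
        return REQUIRED_ARTEFACTS - self.artefacts.keys()

    def ready_for_review(self) -> bool:
        return not self.missing()
```

A pre-check like this is exactly what the lightweight sessions before formal submission did by hand: tell a team what is missing before they stand in front of the board.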

And crucially, we embedded governance into project funding and delivery checkpoints. We did not create a parallel architecture process running beside the real machinery of the institution. If architecture approval has no bearing on funding, procurement progression, or delivery gating, it quickly becomes advisory theatre.

Here’s the before-and-after view we used internally:

A real architecture example: shared case management across multiple policy domains

One of the most useful tests of any governance model is whether it can survive a real business problem.

In this agency, several departments were running broadly similar case-management processes: grants administration, compliance checks, stakeholder correspondence, document-heavy review cycles, and decision tracking. Over time, each had acquired or customized different platforms. Some were low-code workflow tools. Some were vendor case-management suites. One area had pushed heavily into cloud-hosted SaaS. Another remained anchored to a managed on-prem estate because of historical procurement and records concerns.

The result was predictable. Duplicate workflow capabilities. Inconsistent document retention controls. Different audit trail quality. Fragmented reporting. Overlapping integration work with external registries and national systems. Every local optimization made the estate harder to govern.

This was exactly where capability mapping helped.

Rather than arguing immediately about platforms, we mapped the business capabilities and separated what genuinely needed to be shared from what should remain domain-specific. Case intake, identity, document classification, audit logging, notification, integration, search, and reporting had strong commonality. Domain rules, workflow variants, policy-specific data models, and some specialist analytics needed flexibility.

That distinction changed the conversation. Instead of centralization versus autonomy, we framed the architecture around shared core capabilities with configurable domain workflows.
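
The shared-versus-domain split can be made explicit and checkable. The capability names below echo the ones listed above; the classification helper itself is a hypothetical illustration of how governance made the boundary visible to project teams.

```python
# Hypothetical sketch: partitioning case-management capabilities into
# shared-core versus domain-specific, mirroring the split described above.

SHARED_CORE = {
    "case_intake", "identity", "document_classification", "audit_logging",
    "notification", "integration", "search", "reporting",
}

DOMAIN_SPECIFIC = {
    "domain_rules", "workflow_variants", "policy_data_models",
    "specialist_analytics",
}

def classify(capability: str) -> str:
    """Tell a project team which track a capability falls into."""
    if capability in SHARED_CORE:
        return "shared core - reuse the common platform service"
    if capability in DOMAIN_SPECIFIC:
        return "domain-specific - local flexibility allowed"
    return "unclassified - raise at the design authority"
```

The third branch matters as much as the first two: anything that does not fit the agreed partition becomes a governance question by default, rather than a quiet local decision.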

Governance then anchored a set of principles:

  • federated identity and common access policies
  • central document management and records-classification controls
  • standard integration patterns for external registries and member-state exchanges
  • reusable event and API patterns, including Kafka for asynchronous processing where cross-system state changes needed reliable propagation
  • shared audit and monitoring standards

Kafka, in particular, was one of those decisions that needed governance discipline. Without it, every team would have built point-to-point integration differently. We did not mandate Kafka everywhere—that would have been absurd—but for high-volume event distribution and decoupled case updates between core services, reporting pipelines, and notification processes, it became the preferred pattern. Domain councils defined the conditions for its use, the cloud/platform team aligned it with managed hosting patterns, and the design authority stopped teams from inventing bespoke message brokers in every program.
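
Part of that discipline was a standard event shape. Here is a minimal sketch of what a governed case-update envelope might look like; the topic naming convention and field names are illustrative assumptions, not the agency's real schema.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical sketch of the standard event envelope the domain councils
# might mandate for Kafka-based case updates. Topic and field naming
# conventions here are illustrative assumptions.

def case_event(domain: str, case_id: str, event_type: str, payload: dict):
    """Build (topic, message) for a governed case-update event."""
    topic = f"cases.{domain}.{event_type}"          # assumed naming convention
    envelope = {
        "event_id": str(uuid.uuid4()),              # unique id for audit traceability
        "case_id": case_id,
        "event_type": event_type,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    return topic, json.dumps(envelope)
```

With a client library such as confluent-kafka, the pair would then be handed to `producer.produce(topic, value=message)`. The governance value is not the publishing call; it is that every team emits the same envelope, so audit and reporting pipelines consume one shape instead of a dozen.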

The final decision was a shared core platform model with configurable domain workflows, not a single monolith and not a free-for-all. That sounds moderate in hindsight. It did not feel easy at the time. Some directorates wanted full autonomy. Others wanted aggressive standardization. Governance gave us a place to make the trade-off visible and durable.

Without that, each directorate would have optimized locally and pushed long-term cost and compliance risk into the future.

Identity and access management: where politics met architecture

IAM was messier.

The agency had accumulated a familiar mix: local accounts in older applications, contractor-managed access in some externally supported systems, one federated identity service used inconsistently, and several vendor products with their own internal user stores. New digital services were increasing pressure from every direction at once: external user access, internal staff mobility, stronger auditability, and data protection requirements that made loose access governance untenable.

Security wanted strict centralization. Program teams wanted speed. Legacy vendors wanted exceptions. Business sponsors wanted not to hear the phrase “identity transformation” unless absolutely necessary.

This is where governance either becomes credible or collapses.

We defined a target IAM architecture with transition states rather than pretending everything could be made compliant immediately. The compliance assessment model was simple:

  • fully aligned
  • aligned with conditions
  • exception requiring remediation date

That sounds almost trivial, but it changed behavior. New stakeholder-facing services had to use the federated model. Internal workforce-facing modernizations had to align to central access policies and audit standards. Legacy systems could receive exceptions, but only with expiry dates, mitigation plans, and recorded ownership.
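
The rule that an exception is invalid without an expiry date and an owner is simple enough to encode directly. This is a hedged sketch of the three-state model above; the record fields are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

# Hypothetical sketch of the three-state compliance assessment described above.
class Compliance(Enum):
    FULLY_ALIGNED = "fully aligned"
    ALIGNED_WITH_CONDITIONS = "aligned with conditions"
    EXCEPTION = "exception requiring remediation date"

@dataclass
class Assessment:
    system: str
    status: Compliance
    remediation_due: Optional[date] = None
    owner: Optional[str] = None

    def __post_init__(self):
        # Governance rule: an exception is invalid without an expiry and an owner.
        if self.status is Compliance.EXCEPTION:
            if self.remediation_due is None or self.owner is None:
                raise ValueError("exceptions need a remediation date and a named owner")

    def overdue(self, today: date) -> bool:
        return (
            self.status is Compliance.EXCEPTION
            and self.remediation_due is not None
            and today > self.remediation_due
        )
```

Rejecting an exception record at creation time, rather than discovering a dateless one in an audit two years later, is the behavioral change the model produced.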

We also had to be realistic about procurement and vendor constraints. In one case, a system already under contract came with a bundled local identity component that could not be removed in the first release without derailing delivery. The governance answer was not to reject the program theatrically. It was to allow a constrained exception, require federation at the boundary where possible, tighten audit extraction, and put remediation into the funded roadmap.

That is what practical governance looks like. Direction without fantasy.

What nearly derailed it

Several things nearly did.

The standard resistance patterns showed up right on schedule.

“This duplicates the PMO.”

“Architecture cannot delay policy deadlines.”

“Procurement already decided the platform.”

“Our project is unique.”

That last one is eternal.

The closest we came to losing credibility was when a major procurement launched before the architecture guardrails had been fully approved. This is not uncommon in public institutions. Procurement calendars often move according to commitments and legal windows that do not politely wait for governance maturity.

If we had insisted on ideal-state conformance, we would have created a public collision and probably lost support.

Instead, we produced a rapid interim architecture position paper. We created an emergency review path for in-flight procurements. And we agreed minimum mandatory controls rather than perfect alignment: integration standards, IAM expectations, records obligations, hosting constraints, and key interoperability assumptions.

Was it perfect? Not remotely.

But I still think it was the right call. Sometimes you preserve the legitimacy of the governance model by accepting a few imperfect decisions in order to stop systemic incoherence from getting worse.

Architects do not always enjoy hearing that. But institutions run on legitimacy, not purity.

The mechanics that made it stick

Most governance programs fail in the plumbing, not the framework.

That was true here as well.

We started with lightweight repository and tooling choices. A simple architecture catalog first, modeling depth later. A standards register with named owners and review dates. A decision log visible beyond the architecture team. Version control for key governance records. Nothing glamorous. All essential.

The templates helped more than the repository at first.

We created a two-page architecture summary for executives. A practical solution review pack for delivery teams. An exception form with a mandatory expiry date, mitigation, and accountable owner. We published exemplar submissions so teams could see what “good enough” looked like.

That phrase matters: good enough.

If your governance artefacts require consultant-grade polishing every time, the operating model will collapse under its own weight. Public sector delivery teams are busy, often under-resourced, and already navigating compliance overhead. The artefacts have to be short, reusable, and proportionate to project size and risk.

We also tracked a small set of metrics:

  • reviews by project phase
  • exceptions opened versus retired
  • reuse rate of shared services
  • reduction in late-stage design changes
  • number of projects entering procurement with architecture pre-check completed

The metrics were not for vanity reporting. They were there to prove that governance was shifting upstream.
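
Two of those metrics can be computed from a plain decision log, which is roughly how we tracked them. The record shapes below are illustrative assumptions, not the agency's actual log format.

```python
# Hypothetical sketch: computing two of the metrics above from a simple
# exception/review log. Record shapes are illustrative assumptions.

def exception_balance(log: list) -> dict:
    """Opened vs retired exceptions - a rising gap means standards are eroding."""
    opened = sum(1 for e in log if e["event"] == "exception_opened")
    retired = sum(1 for e in log if e["event"] == "exception_retired")
    return {"opened": opened, "retired": retired, "net_open": opened - retired}

def reuse_rate(reviews: list) -> float:
    """Share of reviewed solutions that reused at least one shared service."""
    if not reviews:
        return 0.0
    reusing = sum(1 for r in reviews if r.get("shared_services_used", 0) > 0)
    return reusing / len(reviews)
```

A steadily positive `net_open` was the early-warning signal we cared about most: it meant exceptions were being granted faster than they were being retired.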

We ran office hours with architects. Pre-review coaching. Short clinics with procurement and PMO. Those sessions turned out to be unexpectedly important because they changed governance from a judgment event into a support mechanism.

Here’s a simple view of the operating flow we converged on:

Diagram 1

And the governance layers looked roughly like this:

Diagram 2

Nothing revolutionary there. Which is partly the point.

How we adapted TOGAF for public accountability and audit scrutiny

Public institutions need more than internal architecture discipline. They need defensible records.

That means explicit linkage between principles, standards, and approved decisions. Versioned governance records. Formal exception management. RACIs aligned to the mandates people actually hold, not idealized org charts invented in workshops.

We had to make auditability a first-class architectural concern. If a principle led to a standards position, and that standards position shaped a procurement boundary or solution approval, we needed to be able to show the chain. Not because architects enjoy traceability diagrams, but because leadership changes, audits happen, procurement challenges emerge, and memory is a terrible governance mechanism.

The EU-style institutional angle sharpened this further. Cross-institution interoperability requirements. Data protection and sovereignty concerns. Accessibility and multilingual service design. These were not side constraints. They had to be treated as architecture concerns from the beginning.

That changed some of the artefacts too. Accessibility and language support appeared in architecture reviews. Data residency assumptions were surfaced early. Integration with member-state systems required explicit pattern choices and risk assessment. Governance had to treat those as core design dimensions, not compliance appendices.

Results after the first year — including what did not improve enough

After the first year, the results were real, though not miraculous.

Architecture was involved earlier in investment discussions. Duplicate platform decisions became less frequent. Shared integration and IAM patterns were used more consistently. Technical debt and exceptions became more visible. More importantly, decisions became easier to trace and defend.

The less tangible shift mattered even more. Architecture began to be seen less as document production and more as decision support. Business stakeholders became more willing to escalate trade-offs openly rather than bury them in project ambiguity. That is a cultural change, and it is not trivial.

Still, some things did not improve enough.

Legacy modernization sequencing remained difficult. Business architecture maturity was uneven. Domain architects were stretched thin. And the tension between urgent policy delivery and strategic reuse never disappeared. It was managed better, but not solved.

That is worth saying plainly: governance improved decision quality more quickly than it improved the estate itself.

That is normal. Decisions change first. Landscapes change later.

Practical guidance if you are trying this in government

A few things I would recommend, based on bruises rather than theory.

Start with contradictions, not methodology. People sponsor governance when they can see the cost of incoherent decisions.

Tie governance to funding and procurement gates. If it sits outside the real lifecycle, it will become optional in all but name.

Make exceptions visible and temporary. Hidden permanent exceptions are how standards become fiction.

Separate strategic target-state debates from project design reviews. Mixing them creates endless meetings and poor decisions.

Bring legal, security, and procurement inside the model early. Do not treat them as external reviewers waiting at the edge.

Keep the mandatory artefact set small. Most teams can produce a capability impact view, context diagram, standards assessment, and risk log. Ask for much more and quality will fall fast.

Design for varying project sizes and risk profiles. A low-risk enhancement should not face the same burden as a major cross-agency platform procurement.

Expect political negotiation. Do not confuse it with failure. In public institutions, negotiation is often how legitimacy is built.

And please do not let the architecture board become a theatre of abstract debate. Publish decisions quickly. Use plain language. Make ownership obvious.

What I would do differently next time

A few things.

I would invest earlier in business capability mapping. It paid off later, but we should have done it sooner.

I would formalize product and service ownership earlier as well. Some governance friction persisted because shared capabilities existed without clear long-term stewards.

I would set quantitative thresholds sooner for which projects required full review. We converged on this eventually, but earlier clarity would have reduced noise.

I would also create a stronger onboarding pack for new project managers and vendors. Too much early energy went into re-explaining the model.

And I would spend less time perfecting the repository structure in the first quarter. Useful structure matters, but operating rhythm matters more.

Governance was not the deliverable — trust was

I often think back to that opening portfolio review, the one where five programs were all supposedly aligned and clearly were not.

A year later, the agency was not perfectly standardized. It still had legacy constraints. It still had exceptions. It still had debates about cloud, reuse, timing, procurement flexibility, and what should be centralized versus local.

But it had become more coherent in how it made technology decisions.

That is a bigger achievement than many people realize.

TOGAF-based governance can work in government. I believe that strongly. But only if it respects institutional reality, allows transition states, and stays relentlessly focused on decision quality rather than framework ceremony.

The best sign it was working was not that the architects were happier.

It was that difficult decisions stopped being accidental.

Frequently Asked Questions

What is TOGAF used for?

TOGAF provides a structured approach to developing, governing, and managing enterprise architecture. Its Architecture Development Method (ADM) guides architects through phases from vision through business, information systems, and technology architecture to migration planning and governance.

What is the difference between TOGAF and ArchiMate?

TOGAF is a process framework defining how to develop and govern architecture. ArchiMate is a modelling language defining how to represent architecture. They work together: TOGAF provides the method, ArchiMate provides the notation.

Is TOGAF certification worth it?

Yes — TOGAF Foundation and Practitioner are widely recognised, especially in consulting, financial services, and government. Combined with ArchiMate and Sparx EA skills, it significantly strengthens an enterprise architect's profile.