TOGAF vs DODAF: Which Framework for Defence and Government


There is a point in a public-sector programme when the architecture conversation stops being polite.

It usually arrives a few steering committees in, after the glossy target-state slides have done the rounds, the repository has been populated, and somebody senior—often not an architect—asks the question that cuts straight through the ceremony:

Show me how this actually supports operations.

Not in abstract language. Not as a future-state picture. They want to know which capability improves, which authority owns which decision, what fails when a national node drops offline, whether the identity federation survives degraded connectivity, and why the procurement lot structure appears to ignore operational dependencies that everyone in the room already knows are real.

I have seen this happen in EU institutional programmes more than once. A cross-agency initiative starts with a sensible brief: rationalise platforms, reduce duplication, improve interoperability, modernise legacy systems, align to enterprise standards, maybe introduce cloud-native patterns and event-driven integration where that genuinely makes sense. The architecture team is told to “use TOGAF” because it is familiar, contract-friendly, and easy to explain to procurement, PMO, and delivery partners.

Then the programme gets closer to reality.

Security classifications appear. Member-state constraints surface. One agency wants shared services, another wants sovereign control, a third insists on strict legal separation of data domains. Operational stakeholders start asking about mission continuity, failover arrangements, trust chains, command relationships, and who is accountable when an incident crosses institutional boundaries. Very quickly, the architecture pack that looked entirely respectable in a digital transformation setting starts to feel a bit thin.

That is usually when somebody brings DODAF into the room.

And then the wrong debate begins.

The discussion turns into “TOGAF vs DODAF,” as if this were a clean doctrinal choice. In practice, especially in defence and government work, that is almost never the real issue. The real mistake is treating the two frameworks as interchangeable. Or worse, treating either of them as a documentation ritual detached from governance, procurement, and delivery.

That is the argument I want to make here, from the perspective of someone who has had to make architecture useful in environments where programmes cut across agencies, countries, legal regimes, and operational cultures. I am not interested in textbook comparisons. Most architects in this space already know the official descriptions. What matters is which framework survives contact with messy governance, capability planning, supplier ecosystems, operational accountability, and the political compromises that shape most public-sector programmes long before the first interface is built.

So this is a field report.

Not neutral, because neutrality is not especially helpful here.

The EU programme that started as “just a repository”

A few years ago, I worked on an information-sharing initiative that began, as many of these things do, with a deceptively simple objective: clean up the application landscape, rationalise interfaces, reduce duplication between agencies, and create a more coherent target architecture for cross-border cooperation.

On paper, it was straightforward enough. Multiple institutions. Multiple systems. Legacy integration everywhere. Point-to-point interfaces that had grown over years of urgent policy deadlines. Some brittle middleware. Identity and access controls that made sense locally but not across the wider ecosystem. A sensible enterprise architecture effort was overdue.

The early approach leaned heavily into TOGAF-shaped thinking. Principles. Baseline architecture. Target architecture. Work packages. Transition states. Governance forums. Heatmaps. Roadmaps. If you work in government, you know the pattern. It is familiar because, up to a point, it works reasonably well.

And, to be fair, it did work at first.

We got clarity on duplicated capabilities, overlapping applications, shared data concerns, and technical debt. We identified opportunities for API standardisation, better IAM patterns, and selective use of cloud services for less sensitive components. We started sketching a more coherent integration backbone, with event streaming via Kafka for certain classes of cross-system notifications instead of relying entirely on fragile synchronous coupling. All of that was useful. Tangibly useful.
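The shift from synchronous coupling to event notification is easy to sketch. The broker is stubbed here with an in-memory bus, and the topic name is invented for illustration, but the structural point survives: the producer publishes without knowing, or blocking on, its consumers.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory stand-in for a broker such as Kafka."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher has no knowledge of who consumes the event.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []

# Two agencies subscribe independently; neither is coupled to the producer.
bus.subscribe("case.updated", lambda e: received.append(("agency-a", e["case_id"])))
bus.subscribe("case.updated", lambda e: received.append(("agency-b", e["case_id"])))

bus.publish("case.updated", {"case_id": "C-42"})
print(received)  # [('agency-a', 'C-42'), ('agency-b', 'C-42')]
```

Adding a third agency means adding a subscription, not renegotiating a point-to-point interface. That is the decoupling argument in six lines rather than sixty slides.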

Then the programme board changed the conversation.

They stopped asking only what applications could be consolidated and started asking what operational outcomes would actually improve. Which actors depended on which services? How would dependencies behave across agencies and member states? What happened if a national system was available but the trust service was not? What happened during degraded operations, during incident response, or when one participant had to fall back to manual procedures?

Those were not awkward questions. They were the right questions. Frankly, they should have come earlier.

And our architecture pack, while not bad, was not built to answer them well enough. It described business processes and application services. It did not model mission threads in a way that made operational consequences explicit. It captured interfaces, but not the dependency logic the oversight bodies actually needed. It catalogued systems, but it was still too solution-centric. Cross-organisational accountability was implied rather than modelled.

That is the point where the TOGAF-vs-DODAF discussion becomes real rather than academic.

The decision is usually framed badly

When teams ask, “Should we use TOGAF or DODAF?” my first reaction these days is that the question is often malformed.

The useful decision is not framework preference. It is this: what kind of problem are we actually trying to make architecture solve?

That sounds obvious. In practice, it is often skipped.

Are you dealing with enterprise change across a portfolio—application rationalisation, shared services, target-state planning, migration sequencing, governance alignment? Or are you dealing with a mission or capability ecosystem where operational effects, dependency chains, assurance questions, and cross-organisational responsibilities have to stand up under scrutiny?

Those are not the same thing.

Likewise, what is the main pain? Governance confusion? Interoperability? Acquisition planning? Operational assurance? Supplier accountability? Programme sequencing? A lot of architecture teams jump too quickly into notation and artefact production before they have identified the actual decision pressure.

And in my experience, public-sector architecture teams overestimate the importance of notation and underestimate the importance of decision traceability.

That is one of the recurring mistakes. People get excited about metamodels and view catalogues. Meanwhile the programme director wants to know whether Lot 2 can be procured before identity trust dependencies are resolved, whether the cloud hosting model is acceptable under data residency constraints, and whether the operational chain has a single point of failure hidden behind a “shared platform” label.

That is the comparison lens that actually matters:

  • operating model fit
  • artefact usefulness
  • governance burden
  • procurement compatibility
  • interoperability support
  • survivability in messy programmes

Not theoretical purity.

TOGAF, as it actually gets used in government

I have nothing against TOGAF. In fact, I think it is often dismissed too casually by people who have mainly seen poor implementations of it.

In practice, TOGAF gives teams something very useful: a method people can organise around. The ADM is not sacred, but it is handy. It gives structure to conversations that would otherwise drift. It creates a common language around baseline and target architecture, stakeholder concerns, principles, transition architectures, and migration planning. In institutions where architecture has to bridge business, applications, data, and technology, that matters a lot.

Public-sector organisations like TOGAF for perfectly understandable reasons.

It is widely recognised. Procurement teams understand it well enough. Integrators and consulting firms know how to produce deliverables that fit its shape. It works decently for portfolio rationalisation, digital transformation planning, shared services consolidation, data governance initiatives, and application landscape clean-up. If you are trying to connect strategy to implementation without forcing everyone into highly specialised defence architecture language, TOGAF is often the path of least resistance.

That is not a trivial advantage. Accessibility matters more than architects sometimes admit.

In EU institutional settings, TOGAF fits naturally in programmes like:

  • digital transformation across administrative functions
  • application portfolio simplification in an agency
  • shared service consolidation
  • interoperability platform modernisation
  • data governance and master data clean-up
  • IAM consolidation across institutional boundaries

I have seen it used effectively to bring order to sprawl. Especially where the real challenge is not military-style mission analysis but simply getting business and IT stakeholders to agree on what exists, what should change, and in what sequence.

But here is the quieter truth: in many government environments, TOGAF becomes a governance wrapper more than an architecture discipline.

Sometimes that is fine. Sometimes it becomes a problem.

The anti-patterns are painfully common. Architecture principles with no consequence. Target architectures detached from funding cycles. ADM phases treated as rigid linear gates. Repositories full of diagrams no delivery team ever reads. “Architecture compliance” becoming a check-box exercise while procurement lots are scoped in ways that violate the supposed target state before implementation even begins.

I have sat in architecture boards where everyone nodded at a well-structured TOGAF pack and then approved delivery decisions that made the target architecture impossible. That is not TOGAF’s fault. But it is very often how TOGAF gets used.

And in defence-adjacent programmes, its weakness tends to show up earlier.

DODAF, stripped of mythology

DODAF has a reputation problem.

Some people hear the acronym and assume it is only for US defence bureaucracy, too heavy for civilian government, and loaded with terminology that alienates half the room. There is some truth in that perception. But it also causes people to miss what DODAF is actually good at.

At its practical best, DODAF is valuable because it forces relationships to become explicit.

Capability to operational activity. Operational activity to systems. Systems to services. Services to standards. Standards to constraints. Dependencies across views. It is stronger than TOGAF when the architecture has to explain not just what the estate looks like, but why components exist in operational terms and what happens when dependencies break.
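That chain of relationships is worth making machine-checkable rather than leaving implicit in diagrams. A toy sketch, with element names invented for illustration, that answers the oversight question "what degrades if this node fails":

```python
# Hypothetical traceability links: capability -> operational activity -> system.
depends_on = {
    "Cross-border case exchange": ["Validate identity", "Exchange case data"],
    "Validate identity": ["Trust Service"],
    "Exchange case data": ["Central Hub", "National Node"],
}

def affected_by(failed: str) -> set[str]:
    """Walk the graph upward: which elements degrade if `failed` is down?"""
    hit = set()
    def walk(node):
        for parent, children in depends_on.items():
            if node in children and parent not in hit:
                hit.add(parent)
                walk(parent)
    walk(failed)
    return hit

print(sorted(affected_by("Trust Service")))
# ['Cross-border case exchange', 'Validate identity']
```

A real repository holds these links in a model, not a dict, but the query is the same: traceability is only useful if someone can actually run it when an oversight body asks.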

That is exactly why defence programmes reach for it. They need traceability. They need evidence. They need to support oversight and acquisition decisions in environments where system-of-systems complexity is real and operational context is non-negotiable.

And this is not only a military issue.

Plenty of government domains outside the armed forces have the same architecture characteristics: emergency coordination, border security, justice information exchange, civil protection, surveillance ecosystems, crisis response, command-and-control style operations. In those settings, you are not just modernising applications. You are dealing with actors, nodes, operational threads, trust relationships, timing constraints, degraded modes, and assurance questions.

DODAF fits that terrain much better.

But no, it does not magically make architecture actionable.

Without modelling discipline and governance maturity, DODAF becomes a view factory. Teams produce operational views, system views, standards views, and capability maps because the framework permits them, not because any decision owner actually needs them. The architecture looks rigorous. The programme remains confused.

I will be blunt about this: DODAF is stronger when the organisation needs evidence and traceability. It is weaker when the real need is basic change management, portfolio simplification, and practical sequencing across a broad enterprise estate.

That distinction matters a great deal.

The comparison table people usually want too early

Still, sooner or later people want the table. Fine. But it only makes sense once the context is clear.

  • Operating model fit: TOGAF suits enterprise transformation across a portfolio; DODAF suits mission and capability ecosystems.
  • Artefact usefulness: TOGAF produces roadmaps, principles, and transition states; DODAF produces operational, system, and dependency views.
  • Governance burden: TOGAF is lighter and board-friendly; DODAF demands sustained modelling discipline.
  • Procurement compatibility: TOGAF is familiar to integrators and tender teams; DODAF supports acquisition evidence and formal oversight.
  • Interoperability support: TOGAF leans on standards alignment; DODAF models exchanges and dependencies explicitly.
  • Survivability in messy programmes: badly used, TOGAF degrades into a governance wrapper; badly used, DODAF degrades into a view factory.

The important part of this table is not the labels. It is the implication: different oversight bodies respond to different kinds of evidence.

A programme board may love TOGAF-style roadmaps and transition architectures because they support budget and sequencing discussions. An operational steering group may find those artefacts almost useless unless they are backed by explicit capability and dependency views. The architecture function has to serve both realities.

That is why single-framework purism usually collapses in the real world.

Where TOGAF falls short first

TOGAF’s weaknesses in defence and government settings are not abstract. They show up in very specific ways.

Operational scenarios are often under-modelled. Capability relationships are described loosely rather than rigorously. Cross-organisational accountability becomes fuzzy. Systems and services get catalogued without enough mission context. Assurance questions—what if this node fails, which capability degrades, who compensates operationally—remain unanswered.

I have seen this very clearly in a secure cross-border case data exchange programme.

The architecture deck was, on the surface, solid. It described business processes well. It mapped application services cleanly. It had sensible principles around interoperability, trust, and reuse. There was even a credible target integration model involving API mediation, event notification through Kafka for selected asynchronous updates, and stronger IAM federation to reduce bespoke access control patterns.

But it failed in one crucial area: it did not model operational dependency between national systems, central agency hubs, identity trust services, audit chains, and incident handling responsibilities.

That gap had consequences.

Procurement assumptions were too optimistic because the architecture implied that once interfaces and standards were agreed, operational interoperability would follow. It did not. Integration risks surfaced late. Incident ownership became contentious. Resilience testing exposed hidden dependencies on trust services that had not been treated as critical operational components. The target-state model looked coherent. Reality was not.

That mistake deserves to be named plainly: the architects assumed a strong target-state design was enough.

It was not.

TOGAF is good at helping you say where you want to go. It is less naturally equipped to explain, in disciplined operational terms, what keeps working when the environment gets ugly.

And government programmes do get ugly.

And where DODAF becomes too much, too soon

The opposite failure is just as common.

A team realises they need operational traceability and overcorrects. Suddenly everything becomes a defence architecture exercise. Terminology multiplies. View production expands. Stakeholders outside the defence tradition start to disengage. The architecture team becomes very busy and the programme gets no clearer.

I saw this in a security operations coordination initiative that tried to produce a near full-spectrum defence-style architecture package. The team generated detailed operational and systems views. They had rich modelling of actors, nodes, exchanges, and standards. On paper, it was more rigorous than the earlier TOGAF-heavy efforts.

But there was a catch.

Very little of it connected back to budgeting, product increments, implementation ownership, or the migration path that delivery teams actually needed. The programme board still could not answer basic sequencing questions. Which capabilities were funded first? Which suppliers were accountable for what? Which legacy components could be retired when? What dependencies had to be resolved before onboarding the next institution?

DODAF had answered the wrong question brilliantly.

That is a hard-earned lesson. A sophisticated architecture can conceal the absence of executable planning. In fact, I have seen it do exactly that. Teams feel productive because the model is detailed. Governance still lacks decision-ready artefacts.

So yes, DODAF can be too much, too soon. Especially when delivery governance is immature. In those environments, architecture sophistication sometimes acts like camouflage.

A few real-world examples, because abstractions only get you so far

Shared EU identity and access federation

This is one of those domains where people underestimate the operational side until something fails.

TOGAF is genuinely useful here. It helps with stakeholder mapping across institutions, principles around trust and reuse, baseline/target thinking, and transition architectures for moving from fragmented identity providers to a more coherent federation model. It is good for structuring the change journey.

But DODAF-style thinking becomes important the moment you ask harder questions: what is the operational dependency between identity proofing, credential validation, token issuance, access enforcement, audit, privileged access review, and incident response? What happens if a credential authority is available but the audit chain is delayed? Which actors can continue operating under partial trust degradation?

Architects often document services and interfaces but not failure chains. In IAM, that is dangerous. Especially where cloud-based identity components, national trust anchors, and local enforcement points are all involved.
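Those failure-chain questions can be forced into concrete rules. The sketch below is illustrative only: the component names and mode labels are invented, and a real degraded-mode policy would be far richer. But it shows the kind of explicit logic the architecture should carry instead of implying.

```python
# Hypothetical degraded-mode rules for an identity federation.
def operating_mode(components: dict[str, bool]) -> str:
    """Map IAM component availability to a permitted operating mode."""
    if components["credential_authority"] and components["audit_chain"]:
        return "full"
    if components["credential_authority"]:
        # Credentials still validate, but actions cannot be audited in
        # real time: restrict to read access and queue audit records.
        return "read-only, deferred audit"
    # No credential validation at all: fall back to manual procedures.
    return "manual fallback"

print(operating_mode({"credential_authority": True, "audit_chain": False}))
# read-only, deferred audit
```

The point is not the code. It is that "credential authority up, audit chain delayed" has a defined, agreed answer before the incident, not during it.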

Border management information exchange

This is where DODAF feels natural.

Operational nodes matter. Actors matter. Systems matter. Standards matter. Capability outcomes matter. Timing, coordination, resilience, and degraded operations are central, not peripheral. If you cannot model who exchanges what information under which authority and what breaks when one participant is delayed or disconnected, the architecture is incomplete.

That said, TOGAF still helps. You still need roadmap governance, architecture contracts, transition sequencing, and a way to align implementation with funding and procurement lots. The mistake I have seen more than once is trying to force all stakeholders into one notation language. Border authorities, policy teams, technologists, procurement officers, and agency leadership do not all need the same representation.

Architects need to translate. That is part of the job, whether we like it or not.

Agency-wide application rationalisation in an EU body

This is where people sometimes overcomplicate things.

If the problem is mainly redundant applications, scattered data ownership, inconsistent hosting patterns, legacy workflows, and overlapping business support capabilities, TOGAF is usually enough. You need a business capability map, an application portfolio assessment, target-state simplification, and a phased decommissioning roadmap. You may need cloud migration patterns, data classification principles, and some integration clean-up. Fine.

You probably do not need heavy operational/system view production. If there is no strong mission-thread requirement, DODAF adds modelling burden without much payoff.

Not every public-sector programme with security concerns is a DODAF problem. That is worth saying plainly.

Crisis response coordination platform

This is where framework purism usually breaks down.

Operational workflows and command relationships require DODAF-like thinking. You need to understand decision authorities, communication paths, capability dependencies, and degraded modes. But implementation governance, platform migration, supplier coordination, and transition planning require TOGAF-like discipline.

In one such programme, we ended up using a very pragmatic split: DODAF-inspired artefacts for crisis scenarios, capability dependencies, and operational nodes; TOGAF-oriented artefacts for baseline/target architecture, transition releases, governance checkpoints, and procurement alignment. It was not doctrinally elegant. It worked.

That is usually the better test.

Diagram 1: Crisis response coordination platform

That flow is simple, but it captures the practical point: government architecture rarely succeeds if it starts and ends at the system layer.

What I would do on an EU programme today

If I were starting a defence-adjacent or security-heavy EU programme tomorrow, I would not begin by declaring TOGAF or DODAF as the answer. I have made that mistake before, or at least watched teams make it with great confidence.

I would start with the decisions the architecture must support in the next 6 to 12 months.

Not in theory. Literally write them down.

Which funding approvals are coming? Which procurement choices need justification? Which operational concerns are blocking stakeholder alignment? Which dependencies are likely to surface in oversight reviews? Which failure scenarios would embarrass the programme if they were discovered late?

Then I would do five things.

First, define decision scenarios and oversight needs. Who needs what evidence, in what form, by when? An executive board, a security authority, a procurement panel, and an operational steering group do not consume architecture in the same way.

Second, map capabilities, actors, systems, and constraints only to the level useful for governance. Not more. Public-sector teams love overproduction. Resist it.

Third, choose a small set of views and artefacts that answer real questions. A capability map. An operational dependency view. A system/service relationship model. A standards profile where interoperability matters. A target architecture and transition roadmap. That is often enough to get traction.

Fourth, build a migration path tied to funding, procurement, and operating ownership. If the architecture cannot survive contact with the budget and tender structure, it is decorative.

Fifth, institutionalise only the minimum modelling discipline needed to keep the artefacts coherent over time.

My opinion, based on experience: for most EU defence-adjacent or security-heavy government programmes, use TOGAF as the transformation and governance backbone, and borrow DODAF-style viewpoints for capability and operational traceability.

That tends to be the sweet spot.

There are exceptions. If contractual obligations, alliance commitments, or formal defence oversight require a primary defence architecture structure, then DODAF may need to lead. But even then, somewhere in the programme you still need TOGAF-like change planning, because systems do not migrate themselves and governance does not become executable just because the views are rigorous.

Diagram 2: TOGAF vs DODAF for defence and government

The uncomfortable mistakes

This is the part architects often skip because it hits a bit too close to home.

Mistaking certification for architecture capability

A team with TOGAF certification is not automatically able to run an enterprise architecture function in a politically complex programme. A team with DODAF training is not automatically able to maintain traceable models under delivery pressure.

Real capability comes from modelling discipline, facilitation skill, and governance credibility.

I have met excellent architects with no interest in framework evangelism, and weak architects with long certification lists. The market often rewards the wrong signal.

Producing artefacts before identifying decision owners

If nobody owns the decision, the diagram is decoration.

That sounds harsh, but it is true. I have seen beautifully produced operational views sit untouched because no one had linked them to a concrete governance forum or approval gate. Conversely, I have seen rougher artefacts have enormous impact because they were built around a live procurement decision.

Consequence matters more than polish.

Confusing interoperability with integration

This mistake is endemic in EU institutions.

Standards compliance does not resolve operational coordination, trust, or accountability. You can have APIs, schemas, security profiles, and message contracts all aligned on paper and still fail operationally because escalation paths, identity assurance levels, legal responsibilities, or degraded-mode procedures are unclear.

I have watched programmes discover this during testing, which is about the worst possible time.

Ignoring member-state variance

Architecture that looks neat at EU level often collapses when national constraints surface. Legal diversity. Operational diversity. Technical diversity. Funding asymmetry. Different cloud positions. Different identity infrastructures. Different tolerance for centralised services.

If the architecture does not represent that variance explicitly, it is lying.

That is not too strong a word.

Treating repositories as the outcome

Repositories matter. I like having one. They help with dependency analysis, change control, impact assessment, and procurement traceability. But they are not the outcome. The outcome is better decisions, fewer hidden dependencies, clearer supplier accountability, and more realistic migration planning.

A repository full of disconnected artefacts is just expensive wallpaper.

Using DODAF terminology to sound rigorous

This is a very real anti-pattern. People borrow defence vocabulary because it gives an impression of seriousness. But if the underlying relationships are not maintained and the views are not used in governance, the terminology creates false confidence.

I have seen programme leaders assume a model was robust because it looked “defence-grade.” It was not.

How to choose without making it a religious argument

Here is the informal decision framework I actually use.

Ask these questions:

  • Are mission outcomes and operational dependencies central to funding approval?
  • Do you need explicit traceability from capability to system to standards to operational activity?
  • Is your architecture team mature enough to sustain view consistency over time?
  • Is the immediate challenge implementation sequencing across a broad enterprise estate?
  • Will non-defence stakeholders need a more accessible transformation method?
  • Are procurement lots being shaped around operational outcomes or just technical scope?
  • Will assurance and resilience questions drive programme scrutiny?

If the programme is dominated by enterprise transformation, rationalisation, target-state design, and roadmap governance, go TOGAF-first.

If it is dominated by mission capability, operational assurance, defence-style oversight, and system-of-systems complexity, go DODAF-first.

If both are unavoidable—and in EU security and defence-adjacent programmes they often are—choose a hybrid.

But hybrid does not mean “do everything.” That is the trap. It means selecting a coherent minimum set of methods and views that serve real decisions.
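For what it is worth, the checklist above can be turned into a crude leaning score. The question keys and the equal weights below are my own shorthand, not doctrine, and I have left out the team-maturity question because it gates whether you can sustain DODAF at all rather than scoring a preference.

```python
# Illustrative only: each checklist question mapped to the framework it favours.
QUESTIONS = {
    "mission_outcomes_drive_funding": "dodaf",
    "capability_to_standards_traceability": "dodaf",
    "assurance_drives_scrutiny": "dodaf",
    "operational_outcomes_shape_lots": "dodaf",
    "sequencing_across_broad_estate": "togaf",
    "non_defence_stakeholders_need_access": "togaf",
}

def recommend(answers: dict[str, bool]) -> str:
    """Tally 'yes' answers per framework; any split means hybrid."""
    leaning = {"dodaf": 0, "togaf": 0}
    for question, framework in QUESTIONS.items():
        if answers.get(question, False):
            leaning[framework] += 1
    if leaning["dodaf"] and leaning["togaf"]:
        return "hybrid"
    return "dodaf-first" if leaning["dodaf"] else "togaf-first"

print(recommend({"sequencing_across_broad_estate": True,
                 "non_defence_stakeholders_need_access": True}))
# togaf-first
print(recommend({"mission_outcomes_drive_funding": True,
                 "sequencing_across_broad_estate": True}))
# hybrid
```

Notice how quickly a realistic EU programme answers "yes" in both columns. That is the structural reason the hybrid outcome keeps winning.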

A lightweight hybrid model that actually works

What has worked best for me is not a grand unified framework. It is a curated operating model.

From TOGAF, keep:

  • architecture principles
  • stakeholder map
  • baseline and target architecture
  • transition architectures
  • roadmap
  • governance checkpoints

From DODAF-inspired practice, keep:

  • capability mapping
  • operational activity modelling
  • system and service relationships
  • standards profile where interoperability matters
  • dependency traceability for critical scenarios

Run governance on two rhythms.

Use a TOGAF-style architecture board cadence for enterprise change, roadmap alignment, and implementation governance. Use mission or capability reviews—more DODAF in spirit—for operational evidence, resilience concerns, and cross-organisational dependency discussions.

One repository. Not two.

I cannot stress that enough. The fastest way to destroy trust is to create parallel architecture universes where the enterprise team maintains one model and the mission team maintains another. They will diverge. They always do.

The hybrid only works if it is tightly curated. Otherwise teams duplicate content, argue about semantics, and quietly stop believing the model.

What executives, programme managers, and architects each need to hear

For executives: do not ask which framework is “best” in the abstract. Ask whether architecture is improving decision quality, procurement clarity, risk visibility, and operational confidence.

For programme managers: demand artefacts that inform sequencing, dependency management, and supplier accountability. Reject framework theatre. If the architecture cannot help you structure delivery lots, identify critical path dependencies, or expose unrealistic assumptions, it is not doing its job.

For architects: be bilingual.

Learn to move between enterprise transformation language and mission/capability language. Learn to explain IAM dependencies to policy people and procurement implications to operational stakeholders. Learn when a Kafka event backbone is a useful resilience pattern and when it just adds complexity to a programme that really needs simpler ownership and clearer contracts. The frameworks matter less than your ability to connect strategy, operations, systems, and delivery reality.

That is the real craft.

Conclusion

Back to the opening scenario.

An EU programme starts with a familiar instruction: align to TOGAF. Later, defence-oriented stakeholders ask for operational views, capability traceability, and mission impact. The tension feels like a framework choice. Usually it is not.

TOGAF is generally the better starting point for broad government transformation. It gives structure, governance language, and a practical way to move from strategy to roadmap. DODAF is generally stronger where defence-style capability logic, operational traceability, and system-of-systems assurance are non-negotiable.

In EU defence and security-adjacent programmes, the most credible answer is often a disciplined hybrid rather than a doctrinal choice.

Not because compromise is fashionable. Because programme reality demands it.

The framework that wins is the one that helps people make hard decisions early, expose dependencies honestly, and survive procurement, politics, and implementation.

Frequently Asked Questions

What is TOGAF used for?

TOGAF provides a structured approach to developing, governing, and managing enterprise architecture. Its ADM guides architects through phases from vision through business, information systems, and technology architecture to migration planning and governance.

What is the difference between TOGAF and ArchiMate?

TOGAF is a process framework defining how to develop and govern architecture. ArchiMate is a modelling language defining how to represent architecture. They work together: TOGAF provides the method, ArchiMate provides the notation.

Is TOGAF certification worth it?

Yes — TOGAF Foundation and Practitioner are widely recognised, especially in consulting, financial services, and government. Combined with ArchiMate and Sparx EA skills, it significantly strengthens an enterprise architect's profile.