A lot of architecture teams say they “use TOGAF.”
What they often mean is this: they run a few ADM-shaped workshops, produce some current-state and target-state diagrams, maybe keep a standards spreadsheet alive for a while, and then rush to pull together a governance pack before the review board meeting. I’ve watched this pattern play out for years. The method gets referenced. The content discipline quietly falls away.
That missing piece matters more than many teams like to admit.
Architecture in banks rarely fails because there were not enough diagrams. It usually fails because nobody can answer basic but high-consequence questions with any real confidence: Which requirements actually drove this design? What business capability is changing? Which applications are genuinely in scope? Where is customer consent stored? What control depends on that interface? If the KYC platform changes, what downstream reporting is affected? Why was this vendor chosen instead of reusing an existing service?
Those are content questions before they are modeling questions.
In banking, that distinction is not theoretical. Banks carry fragmented estates, years—sometimes decades—of legacy, overlapping control frameworks, and a steady stream of change portfolios all touching the same data and platforms from different angles. You can get away with lightweight architecture in a startup. It is much harder to do that when a mortgage journey depends on identity verification, AML screening, document capture, underwriting rules, core lending, customer communications, regulatory reporting, IAM, and a resilience posture that somebody, sooner or later, will ask you to evidence.
So this article is really about something fairly simple: the TOGAF Architecture Content Framework is useful when you treat it as a way to structure architecture knowledge, not as a compliance ceremony. I’ll explain what it is, where teams get it wrong, and how I would apply it step by step in a banking initiative. I’ll use one running example throughout: a mid-sized retail bank launching digital mortgage origination.
That example is realistic enough to be uncomfortable.
A banking scenario before the framework
Imagine a retail bank trying to launch digital mortgage origination in one country. Not a greenfield digital bank. A normal, established institution with branch history, ageing lending platforms, and more middleware patterns than anyone would design on purpose today.
The initiative sounds straightforward in a steering deck:
- customers apply online
- documents are uploaded digitally
- identity and affordability checks happen faster
- underwriters handle exceptions instead of everything
- customers get better status visibility
- approvals happen in hours or minutes, not days
But the architecture footprint widens almost immediately.
Customer channels are involved, obviously. So are KYC and AML checks. Credit decisioning. Document management. Core lending. CRM. Data reporting. Consent capture. IAM. Security monitoring. Probably some event streaming if the bank wants responsive status updates. Maybe Kafka if the estate already uses it. Maybe not, which introduces its own politics. And because this is banking, there are external actors too: credit bureaus, identity services, perhaps open banking income verification providers, e-signature services, and regulators indirectly through reporting and controls.
This is exactly the kind of initiative where architecture should help. It spans business, data, application, and technology domains. It carries internal and external dependencies. Compliance and integration complexity are built in from the start. It is not “just a channel project,” even if someone in the budget process tried to label it that way.
Without content structure, things get messy fast.
One team defines “customer” as a party with a CRM profile. Another means an authenticated digital user. A third means the legal applicant on the mortgage. Mortgage operations map a process that differs from the one digital channels are using. The KYC team assumes their platform is the source of truth for verified identity attributes. Data teams insist warehouse definitions should prevail. Core lending owns account-level status but not application-level interaction history. Solutions get approved without a clear line back to target-state capability needs.
I’ve seen all of this. More than once, unfortunately.
And then six months later, the architecture board is looking at a vendor proposal for document capture and asking whether the bank already owns the same capability somewhere else. Nobody is sure. Not because the people involved are weak. Usually they are not. It is because the architecture content was never structured well enough for reuse or traceability.
That is where the TOGAF Architecture Content Framework earns its keep.
What the TOGAF Architecture Content Framework actually is
Let’s clear away the usual confusion first.
The TOGAF Architecture Content Framework is not a single template. It is not just a repository folder structure. And it is not the ADM.
What it gives you is a structured way to define, organize, and relate architecture outputs. In plain English: it helps you say what kinds of architecture things you produce, how they fit together, and how they support decisions, governance, and reuse.
That sounds dry right up until the moment you actually need it.
Inside TOGAF, the Content Framework exists to bring consistency to architecture work products and make traceability possible. It is one of the few things that can stop architecture from collapsing into disconnected diagrams and one-off presentations.
Three terms matter here, and teams blur them constantly:
Deliverables
These are formal work products submitted for review, approval, or sign-off. They are what governance sees. Think Architecture Definition Document, Architecture Requirements Specification, or a transition roadmap pack.
Artifacts
These are the specific architectural views, catalogs, matrices, and diagrams that architects create. A capability heatmap. An application interaction diagram. A data entity catalog. A process flow. These are the working materials.
Building Blocks
These are reusable components of architecture, whether conceptual or implementation-oriented. This is where TOGAF distinguishes Architecture Building Blocks and Solution Building Blocks, and in my experience this is one of the most misunderstood parts of the framework.
Why does the distinction matter? Because governance boards care about deliverables. Architects think and analyse through artifacts. Portfolio planning and reuse depend on building blocks. If you muddle them together, you get awkward governance, weak reuse, and poor continuity between strategy and delivery.
That happens all the time.
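To make the three-way split concrete, here is a toy model of the containment relationships. This is purely an illustrative sketch, not anything TOGAF prescribes as code; every class and field name here is my own invention.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BuildingBlock:
    """Reusable architecture component, conceptual (ABB) or concrete (SBB)."""
    name: str
    kind: str  # "ABB" or "SBB"

@dataclass
class Artifact:
    """A working view: a catalog, matrix, or diagram an architect creates."""
    name: str
    question_answered: str
    references: list[BuildingBlock] = field(default_factory=list)

@dataclass
class Deliverable:
    """Formal, versioned work product — the thing governance actually sees."""
    name: str
    version: str
    artifacts: list[Artifact] = field(default_factory=list)

# A deliverable contains artifacts; artifacts reference building blocks.
heatmap = Artifact("Capability heatmap", "What business capability is changing?")
add_doc = Deliverable("Architecture Definition Document", "0.3", [heatmap])
```

The point of the sketch is the direction of the arrows: boards review deliverables, architects work in artifacts, and reuse happens at the building-block level.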
The part people skip: content structure before modeling detail
Most teams rush straight to notation or tooling.
They want ArchiMate models, PowerPoint diagrams, a repository tool, maybe a Confluence space, maybe all four. I understand the instinct. Diagrams feel like progress. Tools feel like maturity.
But if you have not agreed the content types and relationships first, the tooling just helps you produce confusion faster.
Then the architecture board starts arguing over semantics instead of decisions. Delivery teams reinterpret architecture because the diagrams are ambiguous. Data definitions drift. Security controls get bolted on late because nobody traced them properly. And the repository becomes a graveyard of “approved” content that nobody trusts enough to reuse.
My view, after too many years in large regulated organisations, is fairly simple: architecture quality is often limited less by modeling skill than by content hygiene.
Not glamorous. Still true.
So let’s get practical. Here is the method I use in real change programmes.
Step 1 — Define the architecture scope in business terms, not framework terms
Start with the change initiative and the outcomes it is supposed to deliver.
In our mortgage case, the business outcomes might be:
- reduce mortgage pre-approval time from days to under 10 minutes for straightforward cases
- lower manual underwriting effort by 30%
- improve control evidence for affordability and identity checks
- give customers real-time status updates across the application journey
That is a usable start. Already much better than saying “this architecture covers lending systems.”
Then clarify the scope dimensions that become dangerous when left vague:
- business units
- geographies
- product types
- customer segments
- channels
- relevant regulations
- material dependencies
For this example, I would state it plainly:
In scope: retail first-lien mortgages in one country, digital and assisted digital channels, origination through approval, including KYC, affordability assessment, document collection, decisioning, and handoff into core lending.
Out of scope: commercial lending, collections, treasury systems, downstream securitization processes, and full servicing transformation beyond basic status integration.
Write assumptions early. Seriously. In banks, teams waste months rediscovering unstated assumptions.
And identify stakeholders outside IT on day one: operations, risk, compliance, legal, information security, data governance, customer servicing. Mortgage transformation is exactly the sort of initiative that gets derailed when architecture engagement is limited to application owners and project managers.
A classic mistake is defining scope as “all systems related to lending.” That sounds inclusive and mature. In practice, it creates an unbounded swamp. You need enough scope to manage dependencies, not enough scope to boil the ocean.
Step 2 — Decide which content you need, instead of producing everything TOGAF allows
This is where teams either become useful or bureaucratic.
The Content Framework gives you a meta-model and structure. It does not require you to produce every possible artifact, catalog, matrix, and view. If you try, you will create content that nobody reads and then wonder why architecture is seen as overhead.
Choose content based on stakeholder concerns, decision points, governance obligations, and delivery readiness.
For the mortgage initiative, I would normally expect a focused set like this:
- a capability map slice showing impacted business capabilities
- a value stream or process view for mortgage origination
- an application cooperation or integration view
- a logical data model and data lifecycle view
- technology environment constraints, especially cloud/on-prem integration boundaries
- gap analysis between current and target
- roadmap and transition architecture components
- control mapping where regulated decisions or evidence matter
That is enough to drive serious architecture without drowning the team.
Practical use of TOGAF content in a banking initiative
The distinction between deliverables, artifacts, and building blocks looks simple, but teams trip over it constantly.
A deck used in a workshop is not automatically a deliverable. A vendor platform name is not automatically a building block. And a beautiful process diagram that answers no decision question is just decoration.
Step 3 — Build the minimum useful set of deliverables
Deliverables are the formal containers. They are versioned, reviewed, and approved. They align with governance gates and often become the basis for audit or design authority evidence.
For this mortgage scenario, I’d expect something like:
- Architecture Definition Document
Captures baseline and target architecture, key decisions, constraints, risks, and the rationale.
- Architecture Requirements Specification
Holds the architectural requirements and non-functional needs: latency for pre-approval, IAM controls, resilience expectations, retention rules, integration standards, eventing constraints.
- Transition Roadmap Pack
Shows current state, target state, transition architectures, sequencing, dependencies, and retirement impacts.
- Compliance Impact Appendix
Not always a standalone deliverable, but often worth including. It maps the architecture to regulatory obligations, control points, model risk considerations, and audit evidence needs.
These deliverables are not the same thing as raw diagrams. They contain and reference artifacts, but they are governance products.
One lesson I learned the hard way: keep deliverables relatively stable even when artifacts evolve. The process model may change three times as operations refine exception handling. The integration view may shift once Kafka becomes preferred over synchronous APIs for some status events. That is normal. You do not want your governance structure collapsing every time a working artifact matures.
A common anti-pattern is the giant architecture document stuffed with every diagram ever created. Nobody can review it properly. Nobody can maintain it. Eventually people stop trusting it and start bypassing it.
Step 4 — Create artifacts that answer specific questions
This is the real work.
Artifacts should be organized around questions, not notation preferences. I do not care much whether the team likes ArchiMate, BPMN, Visio, a repository tool, or draw.io. I care whether the artifact answers something that matters.
For our mortgage case, those questions might be:
- What business capability is changing?
- Which process steps remain manual, and which become automated?
- Which applications exchange customer, affordability, and loan data?
- Where are the control points for identity, consent, and decision evidence?
- What technology constraints apply to cloud intake connecting to on-prem core lending?
Useful artifacts follow naturally:
- a business capability heatmap showing impact on onboarding, underwriting, and servicing handoff
- an end-to-end mortgage process showing automated versus manual activities
- a logical data diagram covering customer, application, property, collateral, income, liabilities, and loan entities
- an integration view showing API calls, event flows, and batch dependencies
- an environment view of a cloud-hosted digital intake platform connected via IAM, API gateway, middleware, and core systems in the data center
A very simple conceptual flow: application intake → identity verification → document collection → affordability assessment → decisioning → approval or exception routing → handoff to core lending.
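That flow can also be sketched as data, distinguishing the straight-through path from manual exceptions — the split the process artifact needs to show. Step names here are illustrative, not a definitive process model:

```python
# Each step tagged with how it runs in the target state.
steps = [
    ("Application intake",       "automated"),
    ("Identity verification",    "automated"),
    ("Document collection",      "automated"),
    ("Affordability assessment", "automated"),
    ("Decisioning",              "automated"),
    ("Exception underwriting",   "manual"),   # underwriters handle exceptions only
    ("Handoff to core lending",  "automated"),
]

# The artifact's job: make the manual residue visible at a glance.
manual = [name for name, mode in steps if mode == "manual"]
print(manual)
```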
Not everything needs to be in every artifact. That is another trap teams fall into.
Every artifact should support a decision, dependency analysis, control review, or implementation planning conversation. If it does none of those, I question why it exists. In architecture teams, we sometimes confuse effort with value. A polished diagram can still be useless.
And yes, always name the audience. If you cannot say who the artifact is for, there is a good chance it should not be produced.
Step 5 — Use building blocks properly: the most misunderstood part
This is the part I wish more architects treated with care.
TOGAF distinguishes Architecture Building Blocks (ABBs) from Solution Building Blocks (SBBs).
A grounded way to think about it is this:
- ABB = the conceptual capability or function required in the target architecture
- SBB = the specific product, service, or component that realizes it
For example:
- ABB: Customer identity verification capability
- SBB: External KYC vendor service plus internal orchestration API and case management integration
Another example:
- ABB: Document ingestion capability
- SBB: Upload portal, OCR service, malware scanning, storage service, metadata extraction component
This separation matters because it gives you architectural stability. The target architecture can say the bank needs a reusable document ingestion capability across mortgage, unsecured lending, and SME onboarding. That remains true even if the implementation shifts from one vendor product to another, or from a packaged service to a cloud-native assembly.
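A minimal sketch of that stability property, under the assumption that we model an ABB as a frozen concept and SBBs as swappable realizations (all names illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ABB:
    """Architecture Building Block: the conceptual capability required."""
    capability: str

@dataclass
class SBB:
    """Solution Building Block: a concrete realization of an ABB."""
    realizes: ABB
    components: list[str]

doc_ingestion = ABB("Document ingestion")

# Today's realization — a packaged assembly.
current = SBB(doc_ingestion, ["Upload portal", "OCR service",
                              "Malware scanning", "Storage service"])

# A future cloud-native realization still satisfies the same ABB.
future = SBB(doc_ingestion, ["Managed upload API", "Cloud OCR", "Object storage"])

# The architectural intent survives the implementation swap.
assert current.realizes == future.realizes
```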
Too many banks jump straight from business need to product name.
That creates a tool-centric architecture that is brittle, hard to reuse, and politically loaded. Repositories become catalogs of technologies instead of records of architectural intent. Then when procurement changes, commercial terms move, or a strategic platform is retired, the architecture itself looks obsolete because it was never abstracted properly in the first place.
I have also seen the opposite problem: ABBs described so vaguely that they become meaningless. “Customer management capability” is not a useful ABB if it lumps together onboarding, identity proofing, consent, profile maintenance, and communication preference management. The point is conceptual clarity, not abstraction for its own sake.
A short detour: where the Content Metamodel helps and where it becomes too heavy
The TOGAF content metamodel gives you standardized relationships between things like actors, business services, processes, data entities, applications, technology components, requirements, and principles.
Used well, it helps with consistency and traceability. You can connect a capability to a process, a process to application services, those services to data, data to storage and controls, and then understand the impact when something changes.
In a bank, that is not overkill. Quite often it is the only way to survive governance with your sanity intact.
But teams overdo it. They model every possible entity and relationship before solving a real problem. Suddenly the metamodel becomes a project in its own right. Architects spend weeks debating whether a concept is a business service or an application service while delivery teams are trying to decide how mortgage status events will reach the customer portal and contact center.
My view is blunt: metamodel purity can become waste.
Still, in regulated enterprises, some discipline is essential. You need enough structure for impact analysis, control mapping, and reuse. You do not need an ontological masterpiece. Use the subset your governance and change portfolio actually require.
Step 6 — Trace relationships across domains so architecture can survive governance
This is where the Content Framework moves from neat structure to actual value.
Architecture content should connect across domains. At minimum, I want to be able to trace something like:
business capability → process → application service → data entity → technology component → control requirement
For the mortgage case, an example chain might look like this:
- Capability: Mortgage affordability assessment
- Process: Underwriting and pre-approval
- Application service: Credit decisioning service
- Data entities: Applicant income, liabilities, credit exposure, declared expenses
- Technology components: Rules engine, API gateway, integration middleware, Kafka event topic for status updates
- Controls: Model risk approval, audit logging, data retention, access segregation, explainability evidence
That traceability matters in banks because architecture is constantly asked to prove itself against non-functional and regulatory concerns. Model risk wants to know where automated decisions are made. Data governance wants lineage and stewardship. Resilience teams want to understand critical dependencies. Security wants the IAM boundary. Audit wants evidence of decision rationale and control ownership.
When business, data, and application architecture live in separate silos with no cross-reference, governance becomes theatre. Everyone attends. Very little is genuinely governed.
That chain is often enough to expose hidden problems. For instance, if the requirement demands near-real-time pre-approval but the source income verification feed is overnight batch, the issue becomes visible early. Good content structure surfaces unpleasant truths before the build phase. That alone often pays for the effort.
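The chain above can be sketched as a small dependency graph with a reachability walk — one hedged illustration of why the links pay off, using node names taken from the example rather than any real repository schema:

```python
# Traceability links as adjacency lists; walk them to answer
# "if X changes, what is affected downstream?"
links = {
    "Mortgage affordability assessment": ["Underwriting and pre-approval"],
    "Underwriting and pre-approval":     ["Credit decisioning service"],
    "Credit decisioning service":        ["Applicant income", "Rules engine"],
    "Applicant income":                  ["Data retention control"],
    "Rules engine":                      ["Model risk approval"],
}

def impacted(node, graph):
    """Return everything reachable downstream from `node`."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A change to the decisioning service surfaces its control dependencies.
print(sorted(impacted("Credit decisioning service", links)))
```

This is exactly the query model risk, data governance, and resilience teams keep asking architecture to answer by hand.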
Step 7 — Turn architecture content into gap analysis and transition planning
Architecture is not there to describe the world elegantly. It is there to help change it safely.
Once current-state and target-state artifacts exist, you can do useful gap analysis:
- missing business capabilities
- redundant applications
- poor data quality or duplicated master data
- unsupported controls
- technology constraints
- operational bottlenecks
In our mortgage example, the current state might rely on manual document review, fragmented applicant data, and overnight synchronization between the front-end application tracker and core lending. The target state requires near-real-time status updates, reusable identity verification, centralized decision evidence, and better exception routing to underwriters.
That gap analysis should lead to transition architectures, not just a target-state picture on a slide.
Maybe phase one introduces a middleware orchestration layer and Kafka topics for application status events while core lending remains unchanged. Phase two consolidates duplicate document repositories. Phase three improves customer and application data mastering. That is credible. It acknowledges legacy constraints while still moving toward a coherent target.
One of the worst habits in architecture is producing target-state diagrams with no migration path. It looks visionary for ten minutes and then falls apart under delivery scrutiny.
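At its core, this kind of gap analysis is a comparison between baseline and target content. A toy sketch, with illustrative capability names standing in for real catalog entries:

```python
# Baseline and target as sets of named capabilities/components.
current = {"Manual document review", "Overnight batch sync",
           "Fragmented applicant data", "Core lending"}
target  = {"Near-real-time status events", "Reusable identity verification",
           "Centralized decision evidence", "Core lending"}

gaps   = target - current   # must be built, bought, or reused from elsewhere
retire = current - target   # decommissioning candidates for the roadmap
carry  = current & target   # unchanged in this transition

print(sorted(gaps))
```

Each item in `gaps` and `retire` then needs a home in a transition architecture, which is what keeps the target state from being just a picture.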
What this looks like in a real bank repository
In theory, architecture repositories are neat and integrated. In real banks, they are mixed landscapes.
You will usually have some combination of:
- an initiative workspace or project folder
- an enterprise library of reusable building blocks
- standards and reference architectures
- approved deliverables stored for governance evidence
- links to requirements, risks, controls, and projects
- spreadsheets that everyone insists are temporary
- documents that are more important than the repository admins would like to admit
That is fine. Mostly.
I’m not tool-agnostic in some romantic sense. Good tools absolutely help with traceability and reuse. But most banks are hybrid whether they intended to be or not. The trick is to make the relationships consistent even if the storage is imperfect.
I would rather see a modest repository with reliable links between requirements, artifacts, ABBs, and approved deliverables than a grand enterprise tool with immaculate taxonomy and no trust from delivery teams.
Repository taxonomy should support architecture thinking, not drive it.
The mistakes I see repeatedly when teams “apply” the framework
A few patterns show up again and again.
Treating TOGAF content as documentation bureaucracy
Consequence: delivery teams see architecture as delay, and the really important decisions happen informally outside governance.
Confusing deliverables, artifacts, and building blocks
Consequence: review boards receive inconsistent submissions, and reuse never materializes because nobody knows what is enterprise content versus project-specific design.
Modeling too much too early
Consequence: architecture spends weeks on repository perfection while solution choices get made elsewhere.
Creating content no stakeholder asked for
Consequence: shelfware. Pretty, expensive shelfware.
Failing to relate architecture content to risk, control, and regulation
Consequence: painful late-stage challenges from compliance, resilience, or audit.
Naming vendor products as target architecture
Consequence: shallow design, weak optionality, and duplicated platform buying.
Leaving data architecture shallow in process-heavy programs
Consequence: inconsistent definitions, poor lineage, and reporting surprises after go-live.
Producing target states without transition states
Consequence: programs approve an aspiration instead of a migration path.
Letting repository taxonomy drive architecture thinking
Consequence: content is organized for the tool, not for decision-making.
None of these are rare. If anything, they are common enough that avoiding them becomes a competitive advantage.
Step 8 — Tailor the framework for different banking change types
TOGAF becomes useful when tailored.
A digital channel enhancement needs different emphasis from a core banking modernization, which is different again from a regulatory reporting remediation.
For digital channel change, I would weight:
- customer journey and process views
- application interaction content
- IAM and consent controls
- eventing and responsiveness constraints
- reusable channel services
For core modernization, I would weight:
- transition architectures
- technology constraints and coexistence patterns
- data migration content
- resilience and operational continuity
- interface rationalization
For regulatory remediation, I would weight:
- requirements traceability
- data lineage
- control mapping
- reporting data definitions
- evidence and approval records
That is not anti-TOGAF. It is exactly how TOGAF becomes useful instead of ceremonial.
A compact worked example: from requirement to governed architecture content
Let’s take one requirement:
Customers must receive mortgage pre-approval decisions within 10 minutes.
Here is how that should flow through the content structure.
First, capture it as an architectural requirement, not just a product ambition. Clarify conditions, exclusions, and control implications. Is it for all applications or only low-complexity ones? Does the clock start after identity verification? What evidence is required for automated declines?
Then identify affected capabilities:
- affordability assessment
- customer identity verification
- application orchestration
- customer notification
Update the process artifact. The underwriting flow now distinguishes straight-through cases from manual exceptions.
Define or refine the ABB:
- automated credit and affordability decisioning capability
Assess SBB options:
- existing bank rules engine extended?
- packaged decisioning platform?
- cloud-native service with policy engine?
- hybrid because model governance rules limit where some logic can run?
Map data entities:
- applicant identity
- income
- liabilities
- application data
- property data
- bureau response
- decision evidence
Record technology constraints:
- cloud intake allowed, but customer financial data residency controls apply
- Kafka approved for asynchronous status events
- synchronous API required for immediate eligibility response
- IAM must use enterprise federation and step-up authentication for application submission
Document transition gaps:
- current bureau integration supports batch only
- no centralized decision evidence store
- inconsistent customer identifier across channel and lending systems
Assemble the deliverable for governance:
- requirement
- impacted capabilities and processes
- target application interaction
- ABB/SBB assessment
- control implications
- transition plan
- key risks and decisions requested
That is architecture content doing real work. It connects intent, design, governance, and implementation planning.
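One cheap way to enforce that assembly step is a completeness check before a pack goes to the board. A trivial sketch, with hypothetical section names mirroring the list above:

```python
# Mandatory sections a governance pack must contain to be reviewable.
REQUIRED = {"requirement", "capabilities", "target_interaction",
            "abb_sbb_assessment", "controls", "transition_plan", "risks"}

pack = {
    "requirement":        "Pre-approval decision within 10 minutes",
    "capabilities":       ["Affordability assessment", "Identity verification"],
    "target_interaction": "intake -> orchestration -> decisioning -> notification",
    "abb_sbb_assessment": "Rules engine extension vs packaged platform",
    "controls":           ["Audit logging", "Model risk approval"],
    "transition_plan":    "Batch bureau feed replaced in phase two",
    "risks":              ["Inconsistent customer identifier across systems"],
}

missing = REQUIRED - pack.keys()
assert not missing, f"Governance pack incomplete: {missing}"
```

A check this dumb catches the most common board failure mode: a beautiful target diagram submitted with no controls section and no transition plan.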
How to keep the framework lightweight enough for delivery teams
This matters. Teams fear architecture content because they expect drag.
So don’t make them right.
Define a minimum mandatory set of artifacts for regulated change. Make the rest optional based on risk and complexity. Reuse patterns aggressively. Avoid duplicate documentation between enterprise architecture and solution architecture. Review content at decision milestones, not every week out of habit.
For example, if the bank already has a standard integration pattern for external credit bureau access, reuse it across mortgage and unsecured lending rather than re-documenting the same controls and message flows every time. If IAM has a standard pattern for federated identity and step-up authentication, reference the pattern and document only deviations.
That is what mature architecture looks like in practice: not more documentation, but more reusable content with clearer intent.
I’ll put it a bit more strongly. If the framework slows delivery without improving decision quality, it has been applied badly.
That is usually not TOGAF’s fault. It is ours.
Conclusion — Use the framework to make architecture usable, not ceremonial
The TOGAF Architecture Content Framework is valuable for a simple reason: it structures architecture knowledge in a way that supports consistency, traceability, governance, and reuse.
In banking, that matters because change is rarely isolated. A mortgage journey touches customer identity, consent, data quality, integrations, controls, core platforms, external providers, and regulatory obligations. Without content discipline, architecture degrades into disconnected diagrams and governance theatre. With it, the bank has a much better chance of changing safely.
That is the real payoff. Not theoretical completeness.
Safer change. Clearer accountability. Better reuse. Fewer surprises in delivery, resilience reviews, and audit.
If I were advising a team starting this tomorrow, I would keep it simple:
Start small.
Define the right content, not all possible content.
Separate deliverables from artifacts.
Separate architecture building blocks from implementation choices.
Connect content across business, data, application, and technology domains.
And never forget that the point of the framework is not to look architecturally sophisticated. It is to help the bank make good decisions under pressure.
That, in my experience, is where TOGAF becomes real.
FAQ
Is the TOGAF Architecture Content Framework mandatory to use exactly as defined?
No. It should be tailored. The value is in structured content and relationships, not rigid adherence to every defined element.
What is the difference between the Content Framework and the Architecture Repository?
The Content Framework defines the kinds of architecture content and how they relate. The repository is where that content is stored, organized, and accessed.
Do small architecture teams need the full content metamodel?
Usually not. Most teams need a focused subset that supports their governance obligations and change types.
How do ABBs and SBBs relate to vendor products in banking programs?
ABBs define the conceptual capability needed. SBBs define how that capability is realized. Vendor products may be part of an SBB, but they should not replace the ABB concept.
What is the minimum viable set of artifacts for a regulated change initiative?
Typically: scope and requirements, a business/process impact view, an application interaction view, a data view, technology constraints, key controls, and a transition roadmap. Enough to support decisions and governance. No more than that unless complexity justifies it.