I’ve watched more than one transformation program look perfectly healthy right up to the point where it suddenly wasn’t.
One that sticks with me was a regional insurer. Mid-sized, ambitious, and generally sensible. They were modernizing policy administration, starting with personal lines. The executive brief had the usual blend of sensible objectives and dangerous shorthand: move to cloud, improve broker experience, reduce product launch time. Funding got signed off. Target-state diagrams looked convincing on slides. The CIO had been socializing a future built around API-led integration, eventing, stronger identity, more automation, and clearer data ownership.
Everyone agreed.
Then delivery started, and teams began building to different interpretations of what those goals actually meant.
The digital channel team took cloud-native to mean broad freedom to use managed services wherever they sped up delivery. Security interpreted cloud as tighter central controls, minimal variation, and strict regional deployment constraints. Integration was thinking event-driven where possible, but the core platform vendor’s implementation team leaned heavily toward batch extracts plus a small number of synchronous APIs. Operations wanted standardization around the existing Kubernetes platform because they already knew how to run it. Product wanted faster launches and, frankly, did not care whether that came from SaaS configuration, custom microservices, or luck.
What was missing wasn’t energy. And it wasn’t budget.
It was a usable Architecture Requirements Specification.
Not a generic deck of requirements. Not a BRD with some architecture terminology layered on top. Not a standards appendix that no one opens after the steering meeting. A real Architecture Requirements Specification: one that connected business outcomes to architectural consequences, made constraints explicit, quantified non-functionals, separated transitional reality from target-state aspiration, and gave delivery teams something far sharper than “cloud-first.”
That’s really the point of this article.
In TOGAF, the Architecture Requirements Specification is often treated as a compliance artifact. A thing to complete between vision and design. In practice, on serious programs, it’s much more operational than that. It becomes the working contract between strategy, architecture, engineering, risk, security, operations, and procurement. If it’s vague, ambiguity leaks downstream for years. In insurance, where platforms stay around for a long time and coexistence is rarely brief, that cost compounds fast.
So let’s talk about how to write one that people will actually use.
Insurance is exactly where this artifact matters more than most teams expect
The case I’ll use here is familiar enough to be realistic.
A mid-sized multiline insurer. Legacy policy administration on-premises. Claims already partly modernized, which is common and creates its own integration politics. Distribution spread across brokers, tied agents, and direct digital channels. Documents living somewhere between ECM, shared services, and historical compromise. Customer and party data duplicated in too many places. Some underwriting rules buried in old systems, some living in spreadsheets, and some effectively carried around in people’s heads.
The transformation drivers sounded straightforward on paper:
- faster product configuration for new offerings
- better self-service for brokers and customers
- stronger regulatory traceability
- less batch integration
- fewer manual underwriting handoffs
- shorter quote-to-bind time
All valid. None of them architecture-ready.
Insurance is awkward territory for architecture requirements because almost nothing sits in isolation. Product, policy, billing, claims, documents, identity, and finance are usually more tightly coupled than transformation business cases like to admit. Security and privacy aren’t edge concerns; they shape topology, hosting, key management, retention, access design, and auditability very early. Underwriting rules and actuarial models tend to survive platform replacement longer than anyone plans. Then there’s the time dimension: policy lifecycles are long, renewals constrain cutover windows, and transition states have a nasty habit of becoming semi-permanent.
That’s why I’m fairly opinionated about this.
In insurance, vague architecture requirements are expensive because ambiguity survives into implementation for years.
A weak requirement in a digital campaign project is annoying. A weak requirement in a core insurance modernization becomes a migration workaround, a vendor dispute, an operational exception, and eventually a permanent scar in the architecture.
Before defining the artifact, it helps to see how programs fail without it
A lot of teams only appreciate the Architecture Requirements Specification after they’ve paid for not having one.
Three quick failure snapshots.
Mistake 1: “Cloud-first” written as a slogan instead of a requirement.
I’ve seen this repeatedly. Leadership says cloud-first. Everyone hears what they want to hear. One team chooses managed messaging services tightly coupled to a hyperscaler. Another builds on containers to preserve portability. Security assumes customer PII will stay within approved regional boundaries, but nobody writes that requirement in a testable form. No one states the resilience expectation for customer journeys, or the criteria for managed keys versus customer-managed keys, or what integration patterns are allowed between regulated workloads and SaaS.
So the program gets “cloud,” just not one architecture.
Mistake 2: Non-functional requirements buried in RAID logs or left in people’s heads.
The claims platform scales nicely in test. The document service does not. During catastrophe season, intake volume spikes and downstream correspondence generation backs up. APIs technically remain available, but the overall process fails because the bottleneck was never treated as an architectural requirement. Performance and resilience were discussed informally, then captured nowhere authoritative.
That’s a design governance problem, yes. But in my experience it usually starts as a requirements problem.
Mistake 3: Business requirements copied straight from the funding deck.
“Improve customer experience” appears in the specification. Fine. But what architectural consequence follows from that? Does it mean sub-two-second quote responses for 95% of standard risks? Does it require identity federation for brokers? Does it imply straight-through processing thresholds, asynchronous document retrieval, session continuity across channels, or observability of abandonment points?
If none of that is specified, the phrase is decorative.
This is where TOGAF’s Architecture Requirements Specification earns its keep. It turns ambition into something that can actually constrain and guide architecture.
So what is the TOGAF Architecture Requirements Specification, really?
In plain language, it is the structured statement of what the architecture must do, support, constrain, enable, and prove.
That wording matters, because people tend to make the artifact either too loose or too bloated.
In the TOGAF flow, it sits in a practical relationship with several other artifacts:
- it refines the Architecture Vision into requirements that can drive architecture work
- it informs Business, Data, Application, and Technology Architecture development
- it provides a basis for evaluating options and making architecture decisions
- it feeds implementation governance, acceptance criteria, and traceability
That last point is underrated. If your architecture governance process cannot point to specific requirements when it challenges solution choices, governance quickly turns into opinion theater. I’ve seen that more than once, and it never ends well.
What it is not:
- not a business requirements document dressed up in architecture language
- not a Jira export
- not a standards catalog
- not a detailed solution design spec
I think architects misuse it for two opposite reasons.
Some make it so abstract that delivery teams can’t derive anything useful from it. You get phrases like “the solution shall be scalable, secure, and interoperable.” Nobody disagrees, and nobody can implement from it either.
Others make it so detailed that it becomes a pseudo-design document, full of implementation choices disguised as requirements. “Use Kafka.” “Use Azure AD B2B.” “Deploy on OpenShift.” Sometimes those are genuine mandates. Often they’re just prematurely collapsed trade-offs.
The useful middle ground is this: architecture-relevant, testable, traceable requirements that force or justify decisions without smuggling in every solution detail.
The document anatomy I actually use
I don’t follow a textbook TOGAF shell line by line. Very few practitioners do, at least not after the first couple of painful programs.
What matters isn’t the sequence. It’s traceability and testability.
Still, a pragmatic structure helps. The one I use most often looks something like this:
- executive context and scope boundary
- business outcome statements
- architecture drivers
- functional architecture requirements
- data requirements
- integration and eventing requirements
- security, risk, and compliance requirements
- operational and service management requirements
- non-functional quality attributes
- constraints and mandated decisions
- assumptions and dependencies
- measures of success and acceptance criteria
- open issues and requirement conflicts
- traceability to architecture decisions and work packages
That reads neatly when listed. In reality, you move back and forth between sections. Good architecture requirements writing is iterative, because decisions expose missing requirements and requirements expose hidden conflicts.
One point from experience: in insurance, I almost always separate data requirements from integration requirements. Teams constantly blur them together. “Customer data must be shared across systems in real time” sounds reasonable until you unpack it and realize it mixes ownership, consistency, latency, transport, and synchronization semantics into one muddy sentence. That is exactly how future pain gets created.
Sometimes a simple picture helps. Not because diagrams are magical, but because people need to see where the spec sits in the flow.
The real test of the structure is whether it helps people answer four questions:
- What outcomes are we trying to achieve?
- What architectural consequences follow from those outcomes?
- What constraints are immovable, temporary, or negotiable?
- How will we know whether the architecture satisfies what mattered?
If the document can’t answer those, the formatting does not matter.
What good looks like, and what gets left out
Here’s a practical view of the content. Honestly, I wish more architecture teams wrote this part with a bit more honesty and a bit less polish.
The “common weak version” column is not theoretical. That’s the kind of language I still see in architecture packs attached to very expensive programs.
Start from decisions backward, not from templates forward
This is probably the most useful practical advice I can give.
The best Architecture Requirements Specifications are driven by the decisions the architecture team already knows it will have to make.
Not by section headings.
If you begin with a template, you often produce balanced-looking emptiness. Every section has content, but very little of it forces a real decision. If you begin with the hard decisions, the writing gets sharper almost immediately.
So I usually ask: what decisions are expensive, irreversible, politically loaded, or likely to shape the next two years?
In this insurance case, several stood out:
- event-driven integration vs continued nightly batch patterns
- canonical customer model vs bounded-context ownership across domains
- single enterprise document repository vs domain-owned document services with federated retrieval
- public cloud managed services vs standardized container platform for most workloads
- centralized IAM patterns vs local identity handling by channels and vendor products
Once you know those are the decisions, you can write requirements precise enough to justify them.
For example, “support system integration” won’t help decide between Kafka-based event distribution and batch reconciliation. But “policy endorsement events must be available to claims and fraud consumers within 60 seconds, with replay capability and idempotent consumption” starts to narrow the field dramatically.
This backward approach does three useful things.
It surfaces trade-offs early.
It forces measurable language.
And it reveals stakeholder conflict faster, which is uncomfortable but healthy.
The core case: drafting the spec for policy administration modernization
Let me stay with the insurance program.
The scope looked sensible on paper: replace policy administration for personal lines first, keep the current claims platform in place, introduce new broker and customer digital journeys, and retain actuarial models plus some rating services during the first phase. Very normal. Also exactly the sort of setup where transitional architecture gets underestimated.
The first pass of the Architecture Requirements Specification was bad in a familiar way. Not incompetent. Just too broad and too polished.
It reused language from the portfolio business case. It said the platform should improve agility, support omnichannel experiences, enable cloud adoption, strengthen security, and reduce technical debt. All true. None especially useful. Worse, it mixed target-state aspirations with immediate phase-one realities as if coexistence would be trivial.
That created predictable confusion. The digital team read target-state requirements as current delivery scope. The core platform vendor optimized for package fit. Integration assumed they could defer some eventing because the target architecture allowed eventual evolution. Operations interpreted “cloud” through existing platform standards. Security saw gaps around identity federation and region-specific control expectations.
The correction was simple in concept and surprisingly powerful in practice:
we split requirements into target-state requirements, transition-state requirements, and inherited constraints.
That one move changed the quality of the architecture conversation.
Here’s the difference.
A weak requirement looked like this:
- System should integrate with claims.
Better:
- Policy creation, cancellation, reinstatement, and endorsement events must be published to downstream consumers within 60 seconds of business commit, support replay for seven days, and allow idempotent consumption by claims and fraud services.
Another weak one:
- Solution should be scalable.
Better:
- Quote service must support 3x renewal peak and 8x catastrophe-related inquiry surge without degradation beyond agreed response thresholds, with horizontal scaling initiated automatically based on transaction rate and queue depth.
Those are not just better sentences. They change architecture outcomes. They affect whether event streaming is needed, whether you choose managed messaging or self-managed Kafka, how you design retry semantics, what observability you require, and how platform engineering prepares capacity and auto-scaling rules.
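One phrase in that better requirement, “idempotent consumption,” is worth making concrete, because it is directly testable. Here is a minimal sketch of what it means as a behavior: the consumer must tolerate duplicate deliveries, which is exactly what a seven-day replay window will produce. The class and field names (`event_id`, `IdempotentConsumer`) are illustrative, not from any specific platform, and a real system would persist processed IDs durably rather than in memory.

```python
# Illustrative sketch: idempotent consumption of policy events.
# Names are hypothetical; a production consumer would track
# processed event IDs in a durable store, not an in-memory set.

class IdempotentConsumer:
    def __init__(self, handler):
        self.handler = handler
        self.processed = set()  # in production: durable, shared state

    def consume(self, event):
        """Apply the handler at most once per event_id, so replays
        during the replay window cause no duplicate side effects."""
        if event["event_id"] in self.processed:
            return False  # duplicate delivery: safely ignored
        self.handler(event)
        self.processed.add(event["event_id"])
        return True

applied = []
consumer = IdempotentConsumer(lambda e: applied.append(e["type"]))
consumer.consume({"event_id": "E1", "type": "endorsement"})
consumer.consume({"event_id": "E1", "type": "endorsement"})  # replayed
# 'applied' holds a single entry despite the duplicate delivery
```

A requirement written this way gives engineering an acceptance check almost for free: replay a day of events and assert that downstream state is unchanged.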
A few more examples from that rewrite exercise:
Weak: customer data must be consistent across platforms
Better: customer identity updates originating in policy or claims domains must be reconciled to the enterprise party service within 15 minutes, with mismatch exceptions surfaced to operations for review.
Weak: brokers must have secure access
Better: broker authentication must use federated identity with MFA enforced for privileged actions, session timeout aligned to channel risk policy, and support delegated administration by agency administrators subject to segregation-of-duties controls.
Weak: documents must be retained appropriately
Better: policy and claims documents containing regulated customer data must be retained according to product- and jurisdiction-specific retention schedules, with legal hold capability and auditable retrieval history.
What happened after the rewrite was predictable in the best possible sense.
Vendor evaluation improved because RFP questions became architecture-relevant instead of generic. Migration sequencing got sharper because transition requirements forced honest acknowledgment of coexistence windows, data authority, and reconciliation. Engineering disputes reduced because teams had fewer places to hide behind interpretation.
This is the point where an Architecture Requirements Specification stops being “architecture paperwork” and starts becoming leverage.
A second diagram shows what changed once the program separated target, transition, and constraint thinking.
The hardest section: non-functional requirements that aren’t fluff
This is where most architecture specs disappoint.
The language gets soft. “High availability.” “Low latency.” “Secure.” “Auditable.” Everyone knows these matter. Very few people write them in a way that survives design review.
The better method is scenario-based quality attributes. It sounds formal, but it’s actually very practical. Use a structure like:
- source
- stimulus
- environment
- artifact
- response
- response measure
That forces context.
For example:
- Source: catastrophe event causing surge in customer and broker activity
- Stimulus: claim intake volume spikes to 8x normal weekday peak
- Environment: production, regional impairment not present, downstream document generation under load
- Artifact: claims intake APIs, event backbone, document services
- Response: intake remains available, requests queue safely when needed, no data loss, priority workflows preserved
- Response measure: 95% of claim submissions accepted within 4 seconds; event delivery backlog cleared within 30 minutes after peak subsides
That is architecture gold compared with “system must be resilient.”
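The six-part structure also lends itself to a simple completeness check: a scenario with an empty response measure is exactly the “system must be resilient” failure mode. A sketch, with field names taken directly from the list above (the schema itself is illustrative, not anything TOGAF mandates):

```python
# Sketch: a quality-attribute scenario as a checkable record.
# Field names mirror the source/stimulus/.../response-measure
# structure; the validation rule is an illustrative assumption.

REQUIRED_FIELDS = ("source", "stimulus", "environment",
                   "artifact", "response", "response_measure")

def missing_fields(scenario: dict) -> list:
    """Return the fields a scenario leaves unspecified or empty."""
    return [f for f in REQUIRED_FIELDS if not scenario.get(f)]

cat_surge = {
    "source": "catastrophe event causing surge in customer activity",
    "stimulus": "claim intake volume spikes to 8x weekday peak",
    "environment": "production, document generation under load",
    "artifact": "claims intake APIs, event backbone, document services",
    "response": "intake stays available, requests queue, no data loss",
    "response_measure": "95% accepted within 4s; backlog cleared in 30 min",
}

vague = {"response": "system must be resilient"}  # everything else missing
```

Running `missing_fields(vague)` flags five gaps; the catastrophe scenario passes clean. That is the kind of mechanical check a governance board can apply without arguing about taste.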
Insurance-specific non-functional requirements almost always need this treatment. Good examples include:
- availability during renewal season
- resilience during catastrophe spikes
- auditability of underwriting decisions
- retention and legal hold for policy documents
- performance of broker quote journeys over constrained or inconsistent networks
- recovery objectives for policy issuance and payment processing
- observability requirements for straight-through processing failures
My blunt opinion: most architecture requirement failures are really NFR failures that nobody wanted to quantify.
Because quantification creates accountability. It also exposes cost. Once you state that a service must survive multi-day surge loads, support regional failover, or retain replayable events for seven days, the architecture implications become expensive and very real. Some stakeholders would rather keep things fuzzy until later. Later is usually where the bill shows up.
Requirement conflict is normal. Pretending otherwise is what hurts
On the insurer program, conflict wasn’t a sign of dysfunction. It was evidence that real architecture was finally happening.
Product teams wanted local flexibility so they could launch variants quickly.
Risk and compliance wanted central controls, strong traceability, and tighter policy enforcement.
Operations wanted platform standardization because they were the ones carrying the pager at 3 a.m.
Distribution wanted speed and minimal friction for brokers, and had little patience for architecture purity debates.
You will not harmonize all of that into one elegant statement. Don’t try.
Document the conflicts.
A few common ones in cloud transformation:
- low latency vs deep synchronous validation
- data minimization vs analytics demand
- SaaS adoption vs customization needs in underwriting workflows
- managed services velocity vs portability and operational familiarity
- local product variation vs enterprise consistency
The useful practice is straightforward:
- record the conflict explicitly
- identify owner and priority
- define the resolution path
- tie unresolved decisions to governance checkpoints
If you don’t, the conflict doesn’t disappear. It just reappears later as design churn, exception requests, and stakeholder escalation.
I’ve seen teams spend months arguing about whether a document service should be centralized or domain-owned when the real unresolved issue was retention policy variance by jurisdiction and business line. That should have been a documented requirement conflict from day one.
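That four-step practice can be sketched as a conflict-register entry with a simple escalation rule: an unresolved conflict surfaces at its governance checkpoint instead of reappearing later as design churn. Field names and the escalation logic are illustrative assumptions, not a standard schema.

```python
# Sketch: a documented requirement conflict, per the practice above.
# The record shape and checkpoint numbering are illustrative.

def needs_escalation(conflict: dict, current_checkpoint: int) -> bool:
    """An open conflict whose governance checkpoint has arrived
    must be escalated rather than silently deferred."""
    return (conflict["resolution"] is None
            and conflict["checkpoint"] <= current_checkpoint)

doc_service = {
    "id": "CON-009",
    "statement": "central document repository vs domain-owned services",
    "underlying_issue": "retention policy variance by jurisdiction",
    "owner": "Head of Data Governance",
    "priority": "high",
    "checkpoint": 2,      # e.g. architecture review board, phase gate 2
    "resolution": None,   # still open
}
```

The point is not the code; it is that “owner,” “priority,” and “checkpoint” become mandatory fields instead of optional conversation.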
What good insurance architecture requirements look like in practice
A few concise examples, because patterns matter.
Business and capability
- New personal lines products must be configurable and deployable to production within 20 business days without core code changes for standard rating structures.
- Straight-through processing must support at least 70% of standard-risk quote-to-bind transactions without manual underwriting intervention.
Data
- Party identity matching must reconcile policyholder records across policy, billing, and claims domains within 15 minutes of authoritative update.
- Policy version history must preserve all effective-dated changes and support regulatory reconstruction of coverage at any point in time.
- Policy and claims records must comply with jurisdiction-specific retention schedules and legal hold requirements.
Application
- Underwriting workbench must retrieve current quote, risk flags, document status, and policy history through published APIs rather than direct database access.
- Rules execution for eligibility and referral thresholds must be externally configurable and versioned independently of policy transaction orchestration.
Technology
- Customer-facing quote and self-service journeys must support deployment across at least two approved availability zones with automated failover for stateless services.
- Managed observability must provide correlation IDs across API gateway, policy services, event consumers, and document generation workflows.
Security and compliance
- Broker users must authenticate through federated IAM with MFA, agency-level delegated administration, and role-based access aligned to product and region.
- PII must be encrypted at rest using approved key management in the permitted deployment region, with access logging retained for audit review.
- Segregation of duties must prevent the same administrative identity from approving both underwriting rules changes and production deployment of those changes.
Notice what these examples do. They don’t prescribe every technology choice. But they create enough shape that architecture decisions are grounded in something real.
The mistakes I still see architects make
Some are so common they’re practically habits.
Writing requirements as solutions.
“Use Kafka” is not a requirement. “Publish policy events to multiple downstream consumers with replay and independent consumption” is. Kafka may well be the right answer. But write the need first unless the platform decision is already mandated.
Leaving transitional constraints out.
This one is huge. Architects love the target state. Delivery teams live in coexistence for 24 months. If the old rating engine, batch billing interface, or nightly regulatory extract remains in scope during transition, write it down as a constraint or transition requirement.
Mixing goals, principles, and requirements.
“Reuse before buy before build” is a principle. It may guide option assessment. It is not itself a requirement.
No ownership per requirement.
If nobody owns validation, priority, or interpretation, the requirement becomes folklore.
No testability.
If governance cannot verify it and engineering cannot derive checks from it, it probably is not written well enough.
Ignoring operational architecture.
Logging, backup, DR, support model, runbook expectations, incident telemetry, release rollback, support hours, and service dependency visibility often get omitted. Especially in cloud programs, where the platform team assumes someone else captured them.
I’ve seen this happen more than once: the workload team assumes the cloud platform handles observability, while the platform team assumes the application team will define logging and alerting needs. Six months later, there’s a major incident and no one can trace a broker transaction across API gateway, quote service, rules engine, and document generation.
That is not an observability tooling issue. It started as a missing requirement.
A writing method that works in the field
Not elegant. Effective.
Start by pulling drivers from the business case, risk registers, regulatory obligations, and operating model assumptions. Then interview the people who actually carry constraints: product, operations, security, data, integration, IAM, vendor management, and support leads.
Ask them for decision-forcing scenarios, not aspirations.
What happens during catastrophe surge?
How are broker identities provisioned today, and what must change?
What cannot move regions?
Which interfaces must remain batch during transition?
Where do underwriting decisions need audit evidence?
What failure is unacceptable?
Then draft requirements in measurable language.
Workshop the conflicts. Don’t smooth them out too early. Trace each major requirement to likely architecture building blocks and work packages. Revisit them after solution options are tested, because real options expose hidden assumptions quickly.
For cloud-heavy transformations, I’d add a few practical tips:
- distinguish platform requirements from workload requirements
- specify tenancy, residency, IAM, observability, and resilience expectations early
- make managed service usage a constrained decision, not an unspoken assumption
- document whether portability matters, and where it matters
- state event backbone requirements in terms of behavior before naming Kafka, Event Hubs, Pub/Sub, or anything else
And keep artifact hygiene. It matters more than many teams think.
For each major requirement, keep:
- ID
- source
- rationale
- priority
- owner
- verification method
- linked architecture decision or work package
That discipline is what makes the document usable after the workshop ends.
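That hygiene is easy to enforce mechanically. A sketch of the requirement record with a gap check, using the field names from the list above (the rule that every field is mandatory is my assumption about how strict you want to be):

```python
# Sketch: requirement hygiene as a checkable record.
# Field names mirror the hygiene list; the data is invented.

from dataclasses import dataclass, fields

@dataclass
class Requirement:
    req_id: str
    source: str
    rationale: str
    priority: str
    owner: str
    verification: str
    linked_decision_or_work_package: str

def hygiene_gaps(req: Requirement) -> list:
    """List the fields left blank — an unowned or unverifiable
    requirement is the 'folklore' failure mode."""
    return [f.name for f in fields(req) if not getattr(req, f.name)]

req = Requirement(
    req_id="REQ-EVT-022",
    source="Integration workshop, claims and fraud consumers",
    rationale="Near-real-time claims, fraud, and document processing",
    priority="must",
    owner="",            # nobody has accepted ownership yet
    verification="Event latency test, replay test",
    linked_decision_or_work_package="ADR-014",
)
```

A registry that refuses requirements with gaps is crude, but it stops “folklore requirements” from entering the document in the first place.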
Transitional architecture deserves its own voice
In insurance modernization, transition architecture is rarely a brief chapter. It is often the dominant operational reality.
Renewal cycles slow cutovers. Regulatory constraints limit timing. Billing dependencies linger. Customer and broker channels must remain stable while internals change underneath. Coexistence is not an edge case; it is the architecture for a long while.
So capture it explicitly:
- batch coexistence windows
- master data authority during transition
- reconciliation requirements
- temporary duplicate channels and support procedures
- dual-running constraints for policy and billing processes
- temporary identity bridging
- archival and document retrieval behavior across old and new estates
In the case study, the legacy rating engine had to stay for 18 months while a new digital quote journey was introduced. That sounds manageable until you realize the API orchestration, response-time expectations, fallback behavior, and reconciliation logic all depend on it. Because transitional requirements were weak initially, the team had to rework orchestration and support processes later.
That rework was avoidable.
How the spec connects to downstream delivery
A good Architecture Requirements Specification doesn’t sit in the architecture repository gathering polite dust. It shows up everywhere downstream.
It informs solution architecture.
It shapes vendor RFP and RFQ language.
It becomes platform engineering guardrails.
It anchors governance reviews.
It influences test strategy and operational readiness criteria.
Traceability is the real value:
- requirement to architecture decision
- requirement to work package
- requirement to control
- requirement to acceptance test
That traceability matters a lot in insurance because auditability and vendor accountability matter. If a vendor promises event-driven integration but your requirement never defined latency, replay, or consumer independence, you’ve left too much room for interpretation. If a control requirement around PII handling was implied rather than specified, expect expensive debates later.
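The most useful traceability check is also the simplest: which requirements does nothing implement, and which decisions does nothing justify? A sketch over an invented requirement-to-decision trace (real links would live in the EA repository, not a dict):

```python
# Sketch: orphan detection over a requirement-to-decision trace.
# IDs and the link structure are invented for illustration.

def orphans(requirements, decisions, links):
    """links maps requirement ID -> set of decision IDs.
    Returns (requirements with no decision, decisions with no
    requirement behind them) — both are governance findings."""
    traced_reqs = {r for r, ds in links.items() if ds}
    justified = set().union(*links.values()) if links else set()
    return (sorted(set(requirements) - traced_reqs),
            sorted(set(decisions) - justified))

reqs = ["REQ-DOC-031", "REQ-EVT-022", "REQ-IAM-014"]
decs = ["ADR-007", "ADR-014", "ADR-021"]
links = {"REQ-EVT-022": {"ADR-014"}, "REQ-IAM-014": {"ADR-007"}}

untraced, unjustified = orphans(reqs, decs, links)
# REQ-DOC-031 has no decision; ADR-021 has no requirement behind it
```

An unjustified decision is usually a design preference smuggled past the spec; an untraced requirement is usually scope nobody has committed to deliver.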
Here’s a short excerpt in the style I’d actually include.
Mini example excerpt
REQ-IAM-014
Statement: Broker users must authenticate via federated IAM with MFA required for policy bind, endorsement approval, and privileged customer data access.
Rationale: Reduce credential risk and align authentication strength to transaction sensitivity.
Verification: Design review of IAM pattern, integration test with broker federation, control validation in pre-production.
REQ-EVT-022
Statement: Policy create, cancel, reinstate, and endorsement events must be published within 60 seconds of transaction commit to the enterprise event backbone with replay capability for seven days.
Rationale: Support near-real-time claims, fraud, and document processing while preserving recovery options.
Verification: Event latency test, replay test, consumer idempotency review.
REQ-DOC-031
Statement: Policy and claims documents containing customer PII must support jurisdiction-specific retention schedules and legal hold without physical duplication across repositories.
Rationale: Meet compliance obligations while avoiding inconsistent retention behavior.
Verification: Retention rules review, legal hold simulation, audit trail inspection.
REQ-OPS-044
Statement: Claims intake services must sustain 8x normal transaction volume for 72 hours during catastrophe events with no data loss and queue-based degradation controls.
Rationale: Preserve service continuity during predictable surge scenarios.
Verification: Performance and resilience testing under surge profile.
REQ-DATA-052
Statement: Customer identity updates must reconcile between policy and claims domains within 15 minutes, with unresolved mismatches routed to operational review.
Rationale: Reduce servicing errors and duplicate party records across journeys.
Verification: Data reconciliation test, exception workflow validation.
REQ-TRN-061
Statement: During transition, the legacy rating engine remains authoritative for commercial exceptions and must be invokable from the new quote orchestration layer without direct channel access.
Rationale: Support phased migration while containing legacy exposure.
Verification: Integration testing, channel architecture review.
That kind of requirement survives contact with reality. It can be challenged, tested, and linked to actual work.
When to stop
This is a real problem. Architects can overwork these documents until they become encyclopedic and brittle.
A good rule: if a requirement cannot influence architecture, it probably doesn’t belong here. If a significant architecture decision cannot be justified from the spec, something is missing.
Signals the document is mature enough:
- stakeholders can challenge priorities meaningfully
- engineering can derive constraints and acceptance checks
- governance can assess compliance against something concrete
- vendors can respond without guessing
- transition planning has fewer hidden assumptions
My closing opinion before the conclusion: a good Architecture Requirements Specification creates productive friction early, and that is much cheaper than polite ambiguity later.
I would rather have a hard workshop in month two than a major design reversal in month fourteen.
Conclusion: the document that still matters after the slides are gone
Back to the insurer.
Once the Architecture Requirements Specification was rewritten properly, the program changed in unglamorous but important ways. Vendor evaluation became less theatrical. Migration planning got cleaner because transition-state requirements were explicit. Security debates got sharper because IAM, PII handling, residency, and control objectives were stated in architecture terms. Engineering had fewer downstream disputes because requirements were more measurable and less open to interpretation.
The target architecture diagram didn’t become irrelevant. It just stopped carrying more meaning than it should.
That’s the real takeaway.
In TOGAF, the Architecture Requirements Specification is where intent becomes executable architecture. If you write it well, it becomes one of the few documents that survives the strategy slides and still matters during delivery.
And if you write it badly, the program will still move forward. It just won’t move forward consistently.
FAQ
How is the Architecture Requirements Specification different from a Solution Requirements Specification?
The architecture spec defines cross-cutting, capability-shaping, constraint-setting requirements that guide architecture decisions across domains and solutions. A solution requirements spec goes deeper into the detailed needs of a particular solution.
Should cloud service choices appear in the requirements spec?
Usually only as constraints or mandated decisions when they are already established. Otherwise, write the required behavior first. Don’t hide design choices inside requirement language.
How detailed should non-functional requirements be at architecture stage?
Detailed enough to influence architecture, option selection, and governance. If they can’t shape topology, integration pattern, resilience model, IAM design, or operations, they’re too vague.
Who owns the document in a federated architecture team?
Usually the lead domain or enterprise architect for the initiative, but each major requirement should still have a business or technical owner responsible for validation and priority.
How often should it be updated in a multi-year insurance transformation?
More often than teams expect. Refresh it at major phase boundaries, after option evaluations, when transition assumptions change, and when governance decisions materially alter constraints.
What is TOGAF used for?
TOGAF provides a structured approach to developing, governing, and managing enterprise architecture. Its ADM guides architects through phases from vision through business, information systems, and technology architecture to migration planning and governance.
What is the difference between TOGAF and ArchiMate?
TOGAF is a process framework defining how to develop and govern architecture. ArchiMate is a modelling language defining how to represent architecture. They work together: TOGAF provides the method, ArchiMate provides the notation.
Is TOGAF certification worth it?
Yes — TOGAF Foundation and Practitioner are widely recognised, especially in consulting, financial services, and government. Combined with ArchiMate and Sparx EA skills, it significantly strengthens an enterprise architect's profile.