Most enterprise architecture repositories do not fail because Sparx EA is weak, because the database platform cannot scale, or because someone chose the wrong notation.
They fail because, quietly and almost predictably, the repository turns into a dumping ground.
I’ve watched this happen more than once in large financial institutions. A team buys the tool, builds a respectable package tree, imports some ArchiMate content, maybe loads an application catalog from a CMDB, and announces that the repository is now the enterprise source of truth. Six months later, every domain is using its own conventions. After a year, the architecture review board is debating whether a diagram uses the right color or stereotype, while nobody can answer a straightforward executive question like: which critical customer journeys depend on this identity service, what regulated data flows through it, and which controls are inherited versus implemented locally?
That is the real failure mode.
Financial institutions need traceability, control, segregation of duties, and long-lived institutional memory. They have to withstand audits, reorganizations, outsourcing changes, cloud migrations, incident response, and the occasional politically charged merger discussion. Healthcare examples are useful here because they expose the same tensions in a slightly less abstract setting. If you can model patient identity, consent, PHI flows, API gateways, retention controls, and vendor dependencies well, you are dealing with almost the same structural problem as customer onboarding, KYC/AML evidence capture, payment processing, or communications preference management in banking or insurance.
So repository design is not an admin exercise. It shapes governance. It affects whether people trust the model. It often determines whether delivery teams use the platform at all, or quietly keep architecture in PowerPoint and Confluence while Sparx EA becomes a ceremonial archive.
The warning signs are usually obvious once you know what to look for: duplicate application inventories, local naming rules by tribe, a security model that has little to do with the institution’s operating model, and review boards spending more time arguing about diagrams than making architecture decisions.
Before package trees. Before stereotypes. Before naming standards.
There is one question to settle first.
What is this repository supposed to prove?
Step 1 — Decide what the repository is supposed to prove
In a regulated institution, the repository should not merely describe architecture. It should support proof obligations.
That phrase matters more than it first appears.
You need to be able to prove ownership. Prove lineage. Prove approval. Prove impact. Prove control effectiveness, or at least trace the route to evidence. A smaller firm can live with a lightweight modeling repository that helps solution architects collaborate and maybe supports strategic roadmaps. A large bank, insurer, or capital markets firm cannot stop there. When a regulator asks for dependencies, a security incident forces rapid impact analysis, or a resilience review needs system criticality and control inheritance across cloud and on-prem estates, the repository either helps or it embarrasses you.
Those proof obligations vary a little by institution, but the use cases are usually broader than teams expect:
- strategic planning
- solution architecture delivery
- regulatory response
- control mapping
- operational dependency analysis
- merger and divestiture support
- resilience and recovery planning
- vendor and outsourcing risk analysis
A healthcare example makes this concrete. Imagine modeling patient identity services, consent management, EHR integrations, member communications, and claims interfaces. The repository should show which systems process PHI, where consent is captured, which controls are inherited from a shared IAM platform, which are local to a care management solution, and who owns the evidence. In a financial institution, swap in customer identity, marketing consent, KYC documents, fraud monitoring, and payment rails. Same pattern, just different labels.
My practical advice is blunt: write down 8 to 12 explicit questions the repository must answer within 48 hours during an audit, incident, or executive review. Not generic questions. Hard ones.
For example:
- Which applications support customer onboarding in the retail banking domain?
- Which of those process regulated personal data or payment data?
- What APIs and Kafka topics connect the onboarding journey to downstream fraud, CRM, and document services?
- Which controls are inherited from enterprise IAM and which are solution-specific?
- Who approved the current target-state architecture?
- What breaks if the identity platform fails in region A?
- Which systems still depend on a legacy integration broker scheduled for retirement?
If your repository design is not driven by questions like that, it is probably decoration.
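Questions like these should be answerable mechanically, not by archaeology. As a minimal sketch, here is the first question expressed as a query against an in-memory SQLite database whose schema is loosely inspired by Sparx EA's t_object and t_connector tables — the table layout, column names, and element names below are simplified illustrative assumptions, not the actual EA schema:

```python
import sqlite3

# Simplified, hypothetical schema inspired by Sparx EA's t_object / t_connector.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t_object (object_id INTEGER PRIMARY KEY, name TEXT, object_type TEXT);
CREATE TABLE t_connector (start_id INTEGER, end_id INTEGER, connector_type TEXT);
""")
con.executemany("INSERT INTO t_object VALUES (?,?,?)", [
    (1, "Customer Onboarding", "BusinessCapability"),
    (2, "Identity Verification Service", "ApplicationService"),
    (3, "Onboarding Portal", "ApplicationComponent"),
    (4, "Fraud Monitoring", "ApplicationComponent"),
])
con.executemany("INSERT INTO t_connector VALUES (?,?,?)", [
    (3, 2, "Realization"),   # component provides the service
    (2, 1, "Realization"),   # service realizes the capability
])

# Which applications support customer onboarding? Walk realization links two hops.
rows = con.execute("""
    SELECT comp.name
    FROM t_object cap
    JOIN t_connector c1 ON c1.end_id = cap.object_id AND c1.connector_type = 'Realization'
    JOIN t_object svc  ON svc.object_id = c1.start_id
    JOIN t_connector c2 ON c2.end_id = svc.object_id AND c2.connector_type = 'Realization'
    JOIN t_object comp ON comp.object_id = c2.start_id
    WHERE cap.name = 'Customer Onboarding'
""").fetchall()
print([r[0] for r in rows])  # -> ['Onboarding Portal']
```

If a question on your list cannot be reduced to something like this — a traversal over explicit elements and relationships — the repository probably does not capture what the question needs.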
And the common mistake is always the same: “we’ll capture everything and decide later.” No, you won’t. In practice, you’ll capture a lot of loosely governed fragments, and later discover that the meaning diverged before anyone noticed.
Step 2 — Choose the repository model: one global repository, federated repositories, or a controlled hybrid
This decision gets political very quickly in large financial institutions because it mirrors a deeper argument: who gets to define enterprise truth?
A single enterprise repository sounds elegant. One source of truth, one standards framework, one reporting layer, one place to apply controls. There are real strengths here. Consistency is easier. Enterprise-level reporting is simpler. Standards enforcement at least has a chance.
But I’ve also seen single repositories turn into central queues. Every package change needs mediation. Domain teams feel like they are borrowing someone else’s platform. Onboarding slows. The central EA function becomes a bottleneck and then gets blamed for “tool friction,” when the real issue is an operating model mismatch.
Federated repositories are the opposite. They align well when the institution already has strong domain architecture teams with real accountability. Retail banking, wealth, insurance, payments, and corporate functions can move faster. Local solution delivery often feels more natural.
The downside is semantic drift. One domain’s “application service” becomes another domain’s “system function.” Reference data gets duplicated. Reconciliation becomes a permanent tax. If there is no canonical meta-model ownership, federated turns into fragmented very quickly.
In practice, the most workable pattern for large regulated enterprises is usually the hybrid: central reference architecture plus domain workspaces.
Here is the comparison in plain form:
- Single enterprise repository: strongest consistency, simplest enterprise reporting, standards enforcement has a chance; risks becoming a central queue where every change needs mediation and the EA function becomes the bottleneck.
- Federated repositories: faster domain delivery, fits strong domain architecture teams with real accountability; risks semantic drift, duplicated reference data, and reconciliation as a permanent tax.
- Controlled hybrid: central reference architecture plus domain workspaces; more governance effort to run, but the trade-off most large regulated enterprises can actually live with.
The healthcare version of this is familiar. Central repository content holds enterprise capabilities, control taxonomy, data classifications, integration patterns, IAM standards, cloud landing zone rules, and common information concepts. Domain workspaces hold care management, billing, provider network, member engagement, and claims solution architectures. The same structure maps neatly to banking: enterprise reference at the center, domain workspaces for lending, cards, onboarding, fraud, payments, and channels.
The mistake I would actively push back on is pretending federated can work without ownership of the canonical meta-model. It cannot. If no one owns core semantics, you do not have federation. You have drift with a logo.
Once that decision is made honestly, package structure gets easier and far less ideological.
Step 3 — Design the package hierarchy around accountability, not drawing style
One of the worst Sparx EA anti-patterns is the diagram-type tree.
You know the one:
- Business
- Application
- Data
- Technology
And under each, chaos.
It looks tidy on day one and becomes nearly unusable by year two, because packages are not where accountability lives. They are where drawing preferences went to hide.
A better organizing principle is this:
- stable enterprise reference layers
- domain architecture spaces
- solution delivery spaces
- governance and decision records
- reusable standards and patterns
A top-level structure that tends to hold up in large institutions looks something like:
- Enterprise Reference
- Regulatory and Control Architecture
- Business Domains
- Solution Architecture Workspace
- Integration and Information Models
- Technology Standards
- Transition States and Roadmaps
- Archived / Superseded
Not perfect. But durable.
Business capability maps should not live inside project folders. I still see this far too often. A transformation program creates its own capability hierarchy because it needs one for funding or roadmap discussions, and suddenly the institution has three versions of “Customer Identity Management” or “Payments Orchestration.” Capabilities are enterprise reference assets. Projects consume them. They do not own them.
Healthcare offers a good example. “Clinical Data Exchange” might be an enterprise integration concept reused by patient onboarding, telehealth, claims adjudication, fraud monitoring, and analytics platforms. It should not be buried in one project package because one project happened to document it first. In the same way, a financial institution’s “Customer Event Streaming” or “Enterprise Consent Service” belongs in a reusable reference area, even if a specific onboarding or servicing program funded the initial work.
And ownership needs to map to named roles, not committees. A committee can endorse. It cannot steward content day to day.
A package owner should be a person or a clearly delegated role: enterprise business architect for capabilities, chief data architect for enterprise information concepts, domain architect for domain baseline content, security architecture lead for restricted control mappings, and so on.
Allow every program to create its own “reference architecture” subtree and you will spend the next 18 months reconciling competing truths.
A detour that matters more than people expect: security architecture inside the repository
In regulated institutions, repository access design is architecture design.
This is where many teams get naive. They treat the repository as harmless documentation and grant broad update access in the name of collaboration. Then someone casually edits a standards package, or a merger target-state model leaks more widely than intended, or vulnerability-linked infrastructure diagrams get reused in the wrong context, and trust collapses.
Repository access needs dimensions, not a single permission model:
- read vs update
- domain-limited access
- confidential initiative segregation
- external partner access
- audit and oversight access
Some content is genuinely sensitive: merger plans, remediation architecture for known control weaknesses, vulnerability-linked technology inventories, privileged integration details, network segmentation models, resilience gaps, third-party service dependencies, even detailed PHI or PCI-related data flow maps. In healthcare, architecture models that show PHI flows, retention controls, or vendor access paths are not harmless. Neither are customer identity and payment flow models in financial services.
I strongly recommend classifying repository content into tiers before broad onboarding begins. For example:
- Tier 1: broad reference content, standards, approved enterprise patterns
- Tier 2: domain baseline and solution architecture content
- Tier 3: restricted initiative content, security-sensitive flows, remediation detail, M&A content
Then define package-level access patterns accordingly. Separate broad reference content from restricted initiative models. If everyone can update the meta-model or enterprise reference packages, nobody owns meaning. That sounds harsh, but in my experience it is simply true.
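The tier model only works if it translates into enforceable rules rather than informal discretion. A minimal sketch of what that policy looks like as data — the role names and tier assignments here are illustrative assumptions, not Sparx EA security groups:

```python
# Hypothetical tier-to-access policy; role and tier names are illustrative
# assumptions, not actual Sparx EA permission groups.
TIER_UPDATE_ROLES = {
    1: {"enterprise_reference_owner"},               # standards, approved patterns
    2: {"domain_architect", "solution_architect"},   # domain/solution content
    3: {"security_architecture_lead"},               # restricted initiative content
}
TIER_READ_ROLES = {
    1: {"*"},                                        # broad read access
    2: {"*"},
    3: {"security_architecture_lead", "audit_oversight"},  # need-to-know only
}

def may_update(role: str, tier: int) -> bool:
    allowed = TIER_UPDATE_ROLES.get(tier, set())
    return role in allowed or "*" in allowed

def may_read(role: str, tier: int) -> bool:
    allowed = TIER_READ_ROLES.get(tier, set())
    return role in allowed or "*" in allowed

print(may_update("solution_architect", 1))  # False: nobody casually edits reference
print(may_read("domain_architect", 3))      # False: restricted content stays restricted
```

The design choice worth noting: update rights narrow as the tier number rises for Tier 3, but they also narrow at Tier 1 — broad reference content is broadly readable precisely because almost nobody can write to it.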
Step 4 — Define the meta-model only after you understand decisions and reporting
A lot of Sparx EA programs over-engineer the meta-model in the first three months.
They import too much notation. They create dozens of stereotypes. They debate purity. They build something intellectually satisfying and operationally awkward.
Start instead from the decisions the institution actually needs to make:
- application rationalization
- control inheritance
- data residency
- criticality and resilience
- outsourcing risk
- integration modernization
- cloud migration sequencing
- legacy retirement
- audit response
Then derive the minimum viable set of elements and relationships needed to support those decisions.
Most large financial institutions need some version of these core concepts:
- business capability
- business process or value stream
- application service
- application component
- data entity or information object
- interface, API, event stream, or file exchange
- technology component
- control objective
- regulatory obligation
- owner or steward
- environment or deployment node
- project, initiative, or transition state
That is already enough to create a useful repository.
The relationships matter even more than the element types. In practice, the ones I care about most are things like:
- capability realized by application service
- application component provides application service
- application processes classified data
- interface exchanges information object
- technology hosts application component
- control applies to process or system
- regulatory obligation satisfied by control objective
- initiative changes application component or interface
- IAM service authenticates access to application service
- Kafka topic publishes event consumed by downstream service
A healthcare example helps here. A consent management capability may be supported by a member identity platform, document service, API gateway, audit logging service, notification service, and a Kafka event stream used to propagate consent changes downstream. Those are then linked to HIPAA-related controls, retention policy, encryption requirements, and identity assurance standards. In a bank, substitute consent and communications preference management, customer profile, digital channels, and fraud/event analytics. Structurally, it is almost the same thing.
The mistake is importing every ArchiMate possibility and expecting normal architects, under delivery pressure, to use it consistently. They won’t. Keep a short modeling handbook. Define allowed relationships. Define discouraged ones. Make it easy to model correctly and mildly difficult to model sloppily.
Not a 200-page encyclopedia. Nobody reads those.
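A short handbook can literally be a short data structure. As a sketch — the element types, relationship names, and triples below are illustrative assumptions, not a prescribed ArchiMate subset — the allowed and discouraged relationships become something a validation script or import check can enforce:

```python
# A modeling handbook reduced to data: allowed (source, relationship, target)
# triples. Element and relationship names are illustrative assumptions.
ALLOWED = {
    ("ApplicationService", "realizes", "BusinessCapability"),
    ("ApplicationComponent", "provides", "ApplicationService"),
    ("ApplicationComponent", "processes", "DataObject"),
    ("TechnologyNode", "hosts", "ApplicationComponent"),
    ("ControlObjective", "applies_to", "ApplicationComponent"),
}
DISCOURAGED = {
    # Direction reversed: flag it for review rather than silently accept it.
    ("BusinessCapability", "hosts", "TechnologyNode"),
}

def check(source_type: str, rel: str, target_type: str) -> str:
    triple = (source_type, rel, target_type)
    if triple in ALLOWED:
        return "allowed"
    if triple in DISCOURAGED:
        return "discouraged"
    return "not in handbook"

print(check("ApplicationComponent", "provides", "ApplicationService"))  # allowed
print(check("BusinessCapability", "hosts", "TechnologyNode"))           # discouraged
```

The point is not the five triples; it is that “easy to model correctly, mildly difficult to model sloppily” becomes a mechanical check instead of a review-board argument.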
Step 5 — Separate enterprise reference content from project delivery content before they contaminate each other
This sounds obvious until you live through the alternative.
Enterprise architects want durable knowledge. Project and solution architects need speed, experimentation, and the freedom to be wrong for a while. Those are different needs, and if you force both into the same undifferentiated area of the repository, you get contamination. Draft assumptions start looking like enterprise facts simply because they appear on a published diagram.
A pattern I trust is:
- curated reference zone
- solution incubation zone
- approved solution baseline zone
- retirement/archive zone
A claims modernization project in healthcare, for instance, might create a temporary canonical member record model in its incubation space. That model can evolve quickly while APIs, event payloads, and document structures are being worked out. After review, only the validated abstractions move into enterprise reference. The rest remains project history or gets archived.
Do not let draft project assumptions become “enterprise truth” just because a diagram was reused in a steering pack.
Lifecycle state should exist at both package and element level. And the status needs to be visible in lists and reports, not hidden in a diagram note. If users cannot tell what is draft, approved, deprecated, or superseded at a glance, they will make bad reuse decisions.
Step 6 — Model for traceability that auditors care about, not just architects
Traceability in a large financial institution is not a vague aspiration. It usually means some chain like this:
regulation to policy, policy to control objective, control objective to process or system implementation, implementation to owner, and owner to evidence source.
That chain does not have to be modeled for everything. But for key regulated areas, it absolutely should.
Healthcare sharpens the point because privacy, consent, retention, interoperability, and access controls cut across platforms in ways that are hard to fake. If there is a requirement for access logging, you should be able to trace it through the identity platform, API gateway, EHR integration broker, analytics environment, and the specific business processes handling sensitive data. In financial services, the same applies to customer access logging, transaction monitoring, KYC evidence retention, privileged admin access, and communications consent.
My recommendation is to define five to seven mandatory traceability chains and make sure they are queryable without heroic customization. For example:
- regulatory obligation -> control objective -> system/process -> owner
- business capability -> application service -> application component -> hosting platform
- sensitive data classification -> data object -> interface -> consuming system
- critical business service -> supporting applications -> cloud services / infrastructure dependencies
- initiative -> impacted components -> standards exceptions -> approval record
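“Queryable without heroic customization” means each chain is a fixed sequence of relationship hops. A minimal sketch of the first chain — edges, identifiers, and relation names here are illustrative assumptions:

```python
# One mandatory traceability chain as explicit edges; names are illustrative.
# The point: the chain is queryable, not narrated in slide decks.
edges = {
    ("obligation:access-logging", "satisfied_by"): ["control:log-retention"],
    ("control:log-retention", "applies_to"): ["system:api-gateway", "system:iam"],
    ("system:api-gateway", "owned_by"): ["owner:platform-team"],
    ("system:iam", "owned_by"): ["owner:security-eng"],
}

def walk(start, *relations):
    """Follow a fixed relation sequence from start; return all endpoints."""
    frontier = [start]
    for rel in relations:
        frontier = [t for node in frontier for t in edges.get((node, rel), [])]
    return frontier

owners = walk("obligation:access-logging", "satisfied_by", "applies_to", "owned_by")
print(sorted(owners))  # -> ['owner:platform-team', 'owner:security-eng']
```

If the answer to “who owns the systems satisfying this obligation” cannot be produced by a traversal like this, the chain is missing a link — usually the ownership one.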
If traceability only exists in PowerPoint narration, it does not exist.
That may sound severe, but auditors and incident teams do not care how attractive your layered diagrams are. They care whether the relationships are explicit and reportable.
What usually goes wrong in year two
This part is less a method and more field notes.
By year two, the original enthusiasm is gone. The repository starts reflecting social realities rather than design intent. That is when you find out whether the operating model was ever real.
Common failure modes:
Repository administration gets delegated to tool specialists who have no architecture authority. They can manage users and scripts, but they cannot resolve semantic disputes.
Architecture teams model different abstractions for the same thing. One team models a patient identity platform as an application. Another as a service. A third as a vendor product. A fourth as an integration hub. In banking, I’ve seen exactly the same thing with IAM platforms, customer master services, and payment gateways.
Technology standards libraries go stale. Cloud reference patterns lag reality. Kafka standards are documented but not maintained. IAM patterns drift while delivery teams adopt new federation flows or token exchange mechanisms.
Publishing becomes painful, so solution teams bypass the repository. They still produce architecture, but they do it elsewhere. The repository becomes the place where models go to die after approval.
And then duplicate inventories emerge: CMDB says one thing, ITSM another, architecture repository a third. Nobody fully trusts any of them.
Recovery is possible, but it is laborious. Semantic reconciliation workshops help. So do controlled clean-up sprints. Deprecation rules matter. Ownership resets matter even more.
The hard lesson is this: repositories decay socially before they decay structurally.
Step 7 — Integrate Sparx EA with adjacent enterprise systems, but selectively
Integration is seductive. It promises freshness, scale, and reduced manual effort.
It also spreads bad semantics at machine speed.
Typical integration candidates are sensible enough: CMDB or IT asset inventory, ITSM platform, GRC platform, API management catalog, data governance catalog, project portfolio tools, identity and access systems. The issue is not whether to integrate. It is what should be mastered where.
The architecture repository should not become the system of record for everything. Usually the CMDB should master approved application inventory and ownership, at least at a baseline level. The GRC platform should master control obligations and assessment workflows. API gateways or API management platforms should master deployed API contracts and runtime metadata. IAM platforms should master identities, roles, and entitlements.
Sparx EA should hold conceptual and logical architecture relationships, reference models, approved patterns, and curated traceability links.
A healthcare example works well here: pull approved application inventory and ownership from the CMDB, link control obligations from GRC, reference active APIs from the API catalog, but maintain conceptual relationships in Sparx EA that show how identity, consent, scheduling, claims, and clinical systems interact.
In financial institutions, that often means one-way ingestion first. Load reference data. Reconcile. Assign exception ownership. Do not start with bi-directional synchronization before semantics are stable. I’ve seen organizations create endless integration churn because they synchronized names and statuses between systems that did not mean the same thing by “application,” “service,” or “owner.”
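One-way ingestion with reconciliation can be sketched very simply: match on stable IDs, surface the gaps, and assign exception ownership instead of auto-syncing. The field names and IDs below are illustrative assumptions, not a real CMDB extract format:

```python
# One-way ingestion sketch: reconcile a CMDB extract against repository
# elements by stable ID before any synchronization. Fields are assumptions.
cmdb = {
    "APP-001": {"name": "Onboarding Portal", "owner": "retail-digital"},
    "APP-002": {"name": "Fraud Monitor",     "owner": "financial-crime"},
    "APP-003": {"name": "Doc Intake",        "owner": "ops-shared"},
}
repo = {
    "APP-001": {"name": "Onboarding Portal",  "owner": "retail-digital"},
    "APP-002": {"name": "Fraud Monitoring",   "owner": "financial-crime"},  # drift
}

missing_in_repo = sorted(set(cmdb) - set(repo))
mismatched = sorted(
    app_id for app_id in set(cmdb) & set(repo)
    if cmdb[app_id]["name"] != repo[app_id]["name"]
)
print(missing_in_repo)  # -> ['APP-003']
print(mismatched)       # -> ['APP-002']  exceptions get an owner, not an auto-sync
```

Every item in those two lists is a semantic question for a human, not a write-back for an integration job. That is the discipline that keeps bi-directional sync from multiplying bad modeling.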
My view is simple: integration multiplies bad modeling faster than it multiplies value.
Step 8 — Build naming, versioning, and status conventions that survive mergers, reorganizations, and audits
Regulated enterprises need more discipline here than startups because names change more often than architecture teams expect.
Application vs product vs service should be explicit. Domain naming should be canonical. Interface identifiers matter. Control identifiers matter. Transition-state labels matter. And none of this should depend on whoever happened to publish first.
The trick is not to encode too much meaning into the visible name. If you do, every operating model change causes rename storms. Use stable IDs plus readable labels. Preserve aliases and history.
A healthcare example: a provider data service gets renamed twice after operating model changes and vendor realignment. The business-facing label changes. The stable identifier remains. Its interface contracts and regulatory mappings remain traceable. The same issue shows up in banking constantly during channel reorganizations, mergers, and platform consolidations.
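The stable-ID-plus-label pattern is simple enough to sketch directly; the class and field names here are illustrative assumptions, not a Sparx EA construct:

```python
from dataclasses import dataclass, field

# Sketch: stable identifier plus mutable business-facing label, with alias
# history preserved. Names and IDs are illustrative assumptions.
@dataclass
class RepositoryElement:
    stable_id: str                       # never changes across reorganizations
    label: str                           # business-facing name, free to change
    aliases: list = field(default_factory=list)

    def rename(self, new_label: str) -> None:
        self.aliases.append(self.label)  # keep history for audit and search
        self.label = new_label

svc = RepositoryElement("SVC-PROV-001", "Provider Data Service")
svc.rename("Provider Master Service")
svc.rename("Provider Data Platform")
print(svc.stable_id, svc.label, svc.aliases)
# -> SVC-PROV-001 Provider Data Platform ['Provider Data Service', 'Provider Master Service']
```

Interface contracts and regulatory mappings hang off the stable ID, so two rename storms later the traceability chain is untouched.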
Versioning is also more nuanced than people think. You need baseline versions for architecture states, versioning for standard patterns, and often interim-state models during major transformations. If a bank is moving from on-prem integration middleware to cloud-native event streaming with Kafka and API gateways, there may be two or three legitimate states in play for 18 months. The repository needs to represent that without collapsing into ambiguity.
Step 9 — Design review and approval workflows people will actually follow
Repository governance is not the same thing as architecture governance, though they clearly overlap.
The minimum viable workflow I’ve found useful is:
create, review, challenge, approve, publish, retire.
Not every content class needs every step with the same rigor. That is where many institutions go wrong. They route everything through one central board and then wonder why teams avoid publication.
Use thresholds. Enterprise-impacting content gets formal approval: reference capabilities, cross-domain standards, shared data objects, common integration patterns, control mappings, exception decisions. Local solution content can often be approved within domain governance, provided it conforms to reference standards and does not create enterprise obligations.
Who approves what should be clear. Domain architects approve local baselines. Enterprise reference owners approve reusable abstractions. Security architecture reviews sensitive flows, IAM patterns, trust boundaries, and restricted integration detail. Data governance reviews canonical information concepts and classification-sensitive models. Control or compliance representatives review specific content classes where regulatory traceability is involved.
A healthcare case makes this tangible: a new care coordination platform introduces external data exchange. The review checks integration pattern, PHI classification, logging control inheritance, vendor dependency, IAM approach, and retention obligations. In a financial institution, the equivalent might be a customer onboarding platform introducing new document intake, identity verification APIs, and event streaming to fraud systems.
The best workflow is not the one that looks complete on paper. It is the one that reduces argument later.
Step 10 — Make the repository usable by non-architects or accept that it will remain peripheral
This is one of my stronger opinions.
If the repository is only usable by architects, it will remain peripheral no matter how elegant the meta-model is.
Risk teams, auditors, engineering leads, resilience teams, program managers, data governance leads, security operations, and even some business stakeholders need consumable outputs. They do not want to navigate deep package trees or interpret UML-style diagrams with precision.
Give them curated views: dependency maps, heat maps, control coverage views, technology standards summaries, cloud deployment impact views, IAM dependency maps, Kafka event lineage views where relevant.
A healthcare resilience team, for example, should be able to understand the impact of an identity platform outage on scheduling, e-prescribing, claims, patient portal access, and provider integrations. In a bank, the equivalent might be understanding how an IAM outage affects digital onboarding, card servicing, fraud step-up authentication, branch-assisted journeys, and customer communications.
Role-based views should be defined early and tested with real stakeholders. Not imagined stakeholders. Real ones.
Step 11 — Plan repository operations as a product, not a side responsibility
This is where seriousness shows.
Treat the repository as an internal platform with operating responsibilities:
- meta-model stewardship
- package administration
- onboarding and training
- quality control
- reporting support
- integration maintenance
- archival management
And then measure it. Active usage by role. Stale content rate. Orphan elements. Duplicate rate. Approval cycle time. Query and report success rate. Number of unresolved semantic exceptions. Reference standard freshness.
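Two of those measures — stale content rate and orphan elements — can be computed from nothing more than modification dates and connectivity flags. A sketch, where the record shape, dates, and the 365-day staleness threshold are all illustrative assumptions:

```python
from datetime import date

# Repository health sketch: stale rate and orphan elements over simplified
# element records. Field names and thresholds are illustrative assumptions.
elements = [
    {"id": "E1", "modified": date(2023, 1, 10), "connected": True},
    {"id": "E2", "modified": date(2024, 11, 2), "connected": True},
    {"id": "E3", "modified": date(2022, 6, 5),  "connected": False},  # orphan, stale
    {"id": "E4", "modified": date(2024, 9, 20), "connected": False},  # orphan
]

def health(elements, today=date(2025, 1, 1), stale_days=365):
    stale = [e for e in elements if (today - e["modified"]).days > stale_days]
    orphans = [e for e in elements if not e["connected"]]
    return {
        "stale_rate": len(stale) / len(elements),
        "orphans": [e["id"] for e in orphans],
    }

print(health(elements))  # -> {'stale_rate': 0.5, 'orphans': ['E3', 'E4']}
```

A monthly health review is just this kind of computation run over the real repository, with a named owner for every orphan and every stale reference standard it surfaces.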
I like monthly repository health reviews. In one healthcare-style environment, those reviews flagged obsolete vendor integrations still shown as active after a claims platform retirement. In financial institutions, the same review can expose dead APIs, retired Kafka topics still appearing in diagrams, stale cloud standards, or applications left in “target” state long after the program changed direction.
If nobody funds repository operations, the institution does not really want architecture discipline. It may want architecture theater. Different thing.
A concrete example: a repository slice for member onboarding and consent management
Let’s make this less abstract.
Suppose we are designing a repository slice for healthcare member onboarding across digital portal, call center, broker channel, and provider referral. The equivalent in financial services would be customer onboarding with KYC/AML evidence capture, consent and communications preferences, and downstream account setup.
First, package placement:
- Enterprise Reference / Business Capabilities contains Member Onboarding
- Enterprise Reference / Information Concepts contains Member, Consent Record, Eligibility Result, Document
- Regulatory and Control Architecture contains logging, retention, encryption, identity assurance, and consent controls
- Business Domain / Member Services contains the domain baseline
- Solution Architecture Workspace / Onboarding Modernization contains draft and active solution models
- Approved Solution Baseline stores the promoted approved state
Now key elements:
- capability: Member Onboarding
- process: Identity Verification
- application services: Consent Capture, Eligibility Verification, Document Intake
- application components: member portal, CRM, call center platform, document service, IAM service, rules engine
- interfaces: API to enrollment platform, event to CRM over Kafka, document transfer to content repository
- controls: access logging, consent retention, encryption in transit, privileged access review
- data classifications: PHI or sensitive personal data on relevant objects
- owners: product owner, solution architect, control owner, data steward
Relationships:
- Member Onboarding capability realized by Consent Capture, Eligibility Verification, Document Intake
- Identity Verification process uses IAM platform and external verification service
- Consent Capture exchanges Consent Record with content repository and downstream member systems
- Kafka event publishes onboarding status changes to CRM and analytics
- Controls apply to document service, IAM service, API gateway, and integration flows
- Regulatory obligations trace to consent retention and access logging controls
Restricted access decisions:
detailed data flow diagrams showing PHI paths and external partner connectivity might sit in a restricted package, while broader capability and solution views remain visible to a wider audience.
Expected reporting outputs:
- systems supporting onboarding
- interfaces and events involved
- control coverage by component
- owner matrix
- impact of IAM outage
- approved versus draft solution elements
- inherited versus local controls
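The “impact of IAM outage” output is worth making concrete, because it is just a reverse dependency traversal. A sketch, with system names and the dependency edges as illustrative assumptions drawn from the slice above:

```python
from collections import deque

# Outage impact sketch: "depends on" edges point from consumer to dependency;
# walk them in reverse from the failed platform. Names are illustrative.
depends_on = {
    "member-portal":   ["iam-service", "consent-capture"],
    "consent-capture": ["iam-service", "document-service"],
    "eligibility":     ["iam-service"],
    "crm":             ["consent-capture"],
    "analytics":       ["crm"],
}

def impacted_by(failed: str) -> set:
    # Invert the edges: dependency -> direct consumers.
    consumers = {}
    for consumer, deps in depends_on.items():
        for dep in deps:
            consumers.setdefault(dep, []).append(consumer)
    # Breadth-first walk over consumers gives the transitive blast radius.
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for c in consumers.get(node, []):
            if c not in seen:
                seen.add(c)
                queue.append(c)
    return seen

print(sorted(impacted_by("iam-service")))
# -> ['analytics', 'consent-capture', 'crm', 'eligibility', 'member-portal']
```

Note that analytics shows up even though it never touches IAM directly — exactly the kind of second-order dependency that PowerPoint narration misses and a resilience review needs.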
One intentional modeling compromise I would accept: for speed, I might initially model the external identity verification vendor as an application component rather than splitting conceptual service, vendor product, and deployed service representation. If the decision at hand is dependency and control mapping, that simplification is acceptable. Purity can wait. Clarity and consistency matter more.
Translate that back to finance and you have customer onboarding, KYC checks, consent capture, document intake, CRM updates, event publication, IAM dependency, and control mapping for audit and retention.
That is exactly the point. The pattern generalizes.
Common mistakes I would actively look for in a repository assessment
This is the short diagnostic list.
Too many custom stereotypes. Usually a sign that the meta-model is compensating for weak governance.
No authoritative business capability map. Then every program invents its own planning language.
Project teams cloning packages instead of reusing objects. That guarantees divergence.
Every integration modeled differently. APIs, files, Kafka topics, ETL jobs, and event streams all blurred together.
Archived content mixed with live standards. Dangerous during audit and design review.
Security classifications absent. In regulated environments that is not a nice-to-have.
No distinction between conceptual, logical, and physical views. Then people compare unlike things and governance turns argumentative.
Quick remediation sequence? Stop new sprawl. Define ownership. Clean reference content. Rationalize relationships. Automate only after semantic stabilization.
Not glamorous, but it works.
Closing argument — good repository design is really institutional memory with controls
The longer I do this work, the less I think of repository design as a tool topic.
It is institutional memory with controls.
A good Sparx EA repository for a large financial institution is not trying to model the universe. It is trying to create dependable meaning over time, across reorganizations, cloud migrations, platform replacements, audits, and leadership changes. That only happens when structure follows accountability, the meta-model follows decision need, access follows sensitivity, traceability follows regulatory proof, and operations follow product thinking.
Chief architects sometimes aim for elegance. I understand the instinct. But the better target is more practical than elegant: strict in meaning, flexible in usage, and boring in operation.
Boring is underrated.
The repository will never be perfect. Large institutions are too messy, and the architecture is always moving. But it absolutely must be dependable. That is the bar that matters.
FAQ — Questions senior architecture teams usually ask late in the program
How much of ArchiMate should we actually use in Sparx EA?
Less than you think. Use the subset your architects can apply consistently and your stakeholders can understand. Breadth is not maturity.
When should application inventory remain outside the repository?
Often. If CMDB or another asset source is already authoritative for inventory and ownership, keep it there and integrate selectively.
How do we handle confidential M&A or remediation programs?
Separate restricted packages, tight access controls, and explicit publication boundaries. Do not rely on informal discretion.
What is the minimum viable governance model for federated repositories?
Central ownership of meta-model, naming, core reference content, and reconciliation rules. Without that, federation becomes fragmentation.
How do we prevent delivery teams from bypassing architecture publication entirely?
Reduce friction. Limit mandatory fields to what drives decisions. Provide useful outputs back to teams. If publication only serves central governance, teams will route around it.