How to Model Data Architecture with ArchiMate

The moment it went wrong was not dramatic in the cinematic sense. There was no outage. No red screen. No trading halt.

It was worse.

During a liquidity review, a regulator asked a very ordinary question: show us the lineage behind this reported exposure figure. The number was already in the submission. Treasury had signed off on it. Finance had reconciled it. Risk had approved the exposure logic weeks earlier.

And then three different teams explained the same number in three different ways.

Risk said the figure came from the exposure engine, net of collateral, after legal-entity mapping. Finance said it came from ledger-aligned balances with reporting adjustments applied in the warehouse. The data team pulled a lineage export from a catalog showing batch jobs, landing zones, transformation scripts, and a reporting mart—but it still didn’t explain business meaning, ownership, or why one “customer” turned into three different identifiers on the path to the report.

We had architecture diagrams. Lots of them. Application landscapes. Integration maps. Infrastructure views. A couple of heroic attempts at “enterprise information architecture.” None of them answered the regulator’s actual question: what does this number mean, where did it come from, who owns the underlying data, what controls apply, and what changes if one upstream source moves?

That gap matters in banking, and it matters quickly. If you operate in a regulated environment—especially under anything close to BCBS 239 expectations—you are not judged only on whether a number exists. You are judged on whether the number is attributable, controlled, reproducible, and accountable. Auditability is not optional. Neither is ownership of critical data elements. If product systems, AML controls, risk engines, finance consolidations, and customer mastering all touch the same reporting path, change impact becomes a board-level concern faster than most teams expect.

This is where ArchiMate turned out to be much more useful than many data people initially assumed.

To be clear: ArchiMate is not a classical data modeling notation. It is not a substitute for logical data models, canonical schemas, metadata catalogs, or field-level lineage tooling. I would never use it to design a warehouse star schema or document physical database structures in detail. That would be the wrong tool at the wrong level of abstraction.

But if your real problem is that the business, application, technology, governance, and control communities cannot see the same data story at the same time, ArchiMate can be excellent. Not perfect. Excellent.

This is the story of how we used it in a mid-to-large banking environment, what we got wrong first, which modeling patterns actually worked, and what I would tell any chief architect who wants data architecture to make sense outside the architecture team.

The bank we inherited

This was not a greenfield digital bank with tidy domain boundaries and a clean event backbone. It was a real bank. Which meant history was in the walls.

Multiple countries. Multiple legal entities. A merger trail you could practically reconstruct from duplicate customer masters alone. Legacy core banking platforms in some markets, a separate payments hub, a CRM stack that had been “strategic” for years without ever becoming truly authoritative, a data warehouse serving finance and reporting, risk engines with their own curated datasets, and a newer cloud analytics platform where everyone hoped the future would finally arrive.

There was Kafka in the middle of some flows, mostly around payments and customer events. There were APIs where the bank wanted near-real-time servicing. There were still overnight batches for the things that really mattered at month-end. IAM was modernizing, but identity propagation across legacy platforms and cloud analytics was patchy enough that access lineage and data accountability were often discussed separately, which, in my experience, is always a smell.

The pressure stack was familiar.

Regulatory reporting deadlines were fixed. M&A history had left fragmented customer and product records. Controls had been duplicated across functions because nobody fully trusted the other team’s version. Ownership was blurry in the classic enterprise way: application support owners were easy to find, business data owners much less so. Everyone could point to a system. Far fewer could point to a decision-right.

The architectural problem was actually quite precise, even if the organization described it vaguely. The bank had system diagrams and process maps, but no shared model linking business meaning, data objects, applications, integrations, stores, controls, and reporting use.

That is a data architecture problem. Not merely a systems integration problem.

If all you model is interfaces, you can tell me that System A sends a file to System B every night. You cannot tell me whether the “customer” in that file is the legal counterparty, the servicing party, the group parent, or the reporting party after jurisdiction-specific consolidation rules. In banking, those distinctions drive risk, finance, AML, sanctions, onboarding, and reporting outcomes.

The recurring example through this case was customer exposure for liquidity reporting. One figure. On paper, simple enough. In reality it depended on customer master data, account balances, collateral positions, legal entity mapping, reporting adjustments, and timing conventions that varied between intraday and month-end views.

That one figure exposed almost every weakness in the bank’s data architecture.

Why ArchiMate, despite its limits

I’ve seen two bad reactions to ArchiMate in data-heavy programs.

One camp says it is too abstract to be useful. The other tries to use it for absolutely everything and turns the repository into a graveyard of unreadable diagrams.

Both reactions are understandable. Neither helps.

ArchiMate is not ideal for detailed logical data modeling. If you try to force every attribute, entity relationship, payload structure, and transformation rule into it, you will lose the audience and probably your own patience. I’ve watched teams produce enormous “data architecture” views so dense that even the architects stopped trusting them.

Still, it was the right choice here for one reason: we needed a common language across business, application, technology, and governance concerns.

We needed to connect data to business capabilities like Regulatory Reporting and Customer Management. We needed to show which processes created, transformed, validated, and consumed critical data. We needed to show which applications realized which data concepts, and where controls sat. We needed to support traceability discussions with auditors and design authorities without asking them to decipher ten different notations.

That is ArchiMate’s sweet spot, assuming you stay disciplined.

Our practical stance became very simple:

  • use ArchiMate for enterprise-level data architecture relationships
  • use ER models, schema definitions, metadata catalogs, lineage tools, OpenAPI specs, Kafka event schemas, and governance platforms for deeper detail

The key principle was this: model enough to answer risk, ownership, lineage, and change questions.

No more than that.

The moment you start using ArchiMate as a physical data design tool, you are already drifting off course.

The first mistake: we started from applications, not data

This is embarrassingly common, and yes, we did it too.

The first pass looked sensible enough. We cataloged major applications: core banking, CRM, collateral platform, risk engine, finance ledger, warehouse, reporting engine, cloud lakehouse. We mapped interfaces. We drew flows. Kafka topics appeared where relevant. Batch jobs were noted. API gateways were included. It looked like architecture.

It was not useless. But it did not answer the real questions.

Who owned “customer”? Not the application support team—the business meaning. Which version of customer was used in the liquidity report? Why did the risk engine’s exposure not match treasury’s liquidity snapshot on some days? Where did the post-close adjustments sit? Which controls applied before the number was submitted?

None of that became clear from the application-centric view, because application-centric modeling confuses plumbing with semantics.

In banking, business terms are overloaded. “Customer” can mean individual, corporate entity, reporting party, obligor, account holder, beneficial owner, or group parent depending on context. A field in one application does not equal a governed enterprise concept. Regulatory reporting depends on meaning, timing, and control—not just movement.

That was the lesson.

Begin with critical business meaning and data objects. Applications come after that.

Once we accepted that, the model improved quickly.

A better entry point: identify the handful of critical data objects that matter

Do not start with “enterprise data.” That phrase has ruined more architecture workshops than I care to count.

Start with five to twelve data objects tied to a pressing use case. Small enough to force precision. Important enough that people show up prepared.

For our case, the initial set was:

  • Customer
  • Account
  • Exposure
  • Collateral
  • Cash Position
  • Legal Entity
  • Transaction
  • Regulatory Reporting Adjustment

That list does not sound revolutionary. It wasn’t. What mattered was the discipline around separating three things that organizations constantly blur:

  1. Business object meaning: the enterprise concept, what “Customer” or “Exposure” means in business terms.

  2. Application data object representation: the system-specific form, such as CRM Customer Record, Core CIF Record, Risk Exposure Position, Reporting Party Dimension.

  3. Storage location: database, warehouse table, Kafka topic, cloud object store, reporting mart, spreadsheet, whatever ugly place it actually lives.

When those collapse into one conversation, arguments become endless and sterile. One domain insists its system is authoritative. Another insists the report uses a different construct. Everyone is right inside their own frame and wrong at the enterprise level.

We ran workshops with risk, finance, operations, data governance, and application owners in the same room. Sometimes tense rooms, frankly. Architects had to force agreement on terms before drawing relationships. Not perfect agreement—usually that was impossible—but enough shared understanding that the model reflected reality rather than slogans.

And yes, politics appeared immediately. Every domain believed its definition was authoritative. That is normal. The architect’s job is not to pretend there is one clean truth if the bank does not operate that way. The job is to model current-state plurality honestly, then create a path toward cleaner governance.

The core modeling pattern we used in ArchiMate

This pattern became the backbone of the work.

At the enterprise meaning level, we used Business Object for concepts like Customer, Exposure, Cash Position, and Legal Entity.

At the system representation level, we used Data Object for things like CRM Customer Record, Core CIF Record, Risk Exposure Position, Treasury Liquidity Exposure Snapshot, Reporting Party Dimension, and Post-Close Adjustment File.

Applications were represented as Application Components: core banking platform, CRM/master data hub, collateral management system, risk engine, finance ledger, warehouse/lakehouse, reporting engine.

We used Business Process and Application Process to show creation, transformation, validation, reconciliation, and reporting activities: onboard customer, book transaction, calculate exposure, consolidate balances, produce regulatory report, approve reporting adjustment.

Where platform context mattered, we used Technology Node and System Software sparingly—mostly for cloud data platform components, integration runtime, and streaming infrastructure. For example, showing that specific data object realizations were processed through Kafka-based event ingestion versus overnight ETL on a warehouse platform mattered in change discussions.

Ownership and accountability sat with Business Actor and Business Role. That mattered more than many teams first expected.

We also linked the whole thing upward to Capability—especially Regulatory Reporting, Treasury Management, Customer Management, and Data Governance—because without that, data architecture remains oddly disconnected from strategic conversation.

Regulatory expectations, policy-level requirements, and control obligations were represented through Constraint and Requirement where it helped clarify why a flow or checkpoint existed.

The most useful relationships turned out to be:

  • access — which applications or processes create, read, write, or use data objects
  • realization — how business objects are represented by application-level data objects
  • serving — useful for showing how applications or services support processes or roles
  • flow — to show movement or handoff where sequence mattered
  • assignment — for roles and responsibilities
  • association — only when the specific semantics of another relationship were not justified

That last point is worth stressing. Notation purity matters less than consistency, but laziness with relationships creates mush. Every relationship should answer a stakeholder question. If it doesn’t, don’t draw it.
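One way to see why typed relationships pay off: treated as typed triples, the model can be queried directly. Here is a minimal Python sketch (not a real repository export; the element names and the tiny `query` helper are illustrative) showing how each relationship type answers a distinct stakeholder question:

```python
# Minimal sketch of the ArchiMate-style pattern as typed triples.
# All element names are illustrative; the point is that each
# relationship type answers a different stakeholder question.

MODEL = [
    # (source, relationship, target)
    ("Customer",              "realized_by", "CRM Customer Record"),
    ("Customer",              "realized_by", "Core CIF Record"),
    ("Exposure",              "realized_by", "Risk Exposure Position"),
    ("Calculate Exposure",    "accesses",    "Risk Exposure Position"),
    ("Risk Engine",           "serves",      "Calculate Exposure"),
    ("Head of Treasury Data", "assigned_to", "Produce Regulatory Report"),
]

def query(rel, source=None, target=None):
    """Return triples matching a relationship type and optional endpoints."""
    return [t for t in MODEL
            if t[1] == rel
            and (source is None or t[0] == source)
            and (target is None or t[2] == target)]

# "Which data objects realize the Customer business object?"
realizations = [t[2] for t in query("realized_by", source="Customer")]
print(realizations)  # -> ['CRM Customer Record', 'Core CIF Record']
```

A real repository does this with proper tooling, of course; the sketch only illustrates why "every relationship should answer a stakeholder question" is more than a slogan.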

Here’s a simplified version of the core case pattern:

Diagram 1: Business objects, their application-level data object realizations, and the processes that access them.

Not perfect ArchiMate notation in a mermaid shortcut, obviously. But it captures the pattern: business meaning first, application realization second, process linkage all the way through.

What we modeled in ArchiMate vs what we kept elsewhere

This boundary saved us from turning the repository into a dumping ground.

Teams often know what they can model but not why they shouldn’t. In practice, saying no to detail is part of good architecture.

Building the main case view: customer exposure for liquidity reporting

This was the flagship view, the one that finally got traction outside architecture circles.

We started with the business intent. Treasury and finance needed reliable liquidity and exposure figures. The regulator expected traceability and controlled adjustments. That sounds obvious, but starting from intent matters because it keeps the conversation tied to decisions rather than technology.

Then we modeled the business objects:

  • Customer
  • Account
  • Exposure
  • Cash Position
  • Legal Entity

Those became the nouns everyone could see.

Next came the business processes:

  • onboard customer
  • book transaction
  • calculate exposure
  • consolidate balances
  • produce regulatory report

Only then did we bring in the applications:

  • core banking platform
  • CRM / master data hub
  • collateral management system
  • risk engine
  • finance ledger
  • enterprise data warehouse / lakehouse
  • reporting engine

And then the data object realizations that made the pain visible:

  • Customer realized by CRM Customer Record, Core CIF Record, Reporting Party Dimension
  • Exposure realized by Risk Exposure Position and Treasury Liquidity Exposure Snapshot
  • Cash Position realized by Ledger Balance Extract and Intraday Cash Consolidation Record
  • Legal Entity realized by Party Hierarchy Mapping and Legal Entity Reference Table

This is where the model started doing real work.

Two systems may both “store exposure,” but for different purposes, different timing, and different control regimes. The risk engine’s exposure position might be fit for credit risk aggregation but not for intraday treasury liquidity. The warehouse snapshot might be adjusted for reporting consistency and therefore unsuitable as a source of operational truth. The CRM might own customer onboarding attributes but not reporting-party hierarchies after legal-entity consolidation.

That semantic drift was the actual problem, and the model let people see it without drowning them in detail.
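That drift is easy to surface mechanically once realizations are made explicit. A small sketch, assuming a hand-maintained mapping of business objects to their representations (names and purposes below are illustrative):

```python
# Sketch: surface semantic drift by listing every realization of a
# business object together with its declared fit-for-purpose note.
# Names and purposes are illustrative, not a real repository export.

realizations = {
    "Customer": [
        ("CRM Customer Record",                  "onboarding attributes"),
        ("Core CIF Record",                      "account servicing"),
        ("Reporting Party Dimension",            "regulatory reporting"),
    ],
    "Exposure": [
        ("Risk Exposure Position",               "credit risk aggregation"),
        ("Treasury Liquidity Exposure Snapshot", "liquidity reporting"),
    ],
}

def drift_report(realizations):
    """Business objects with more than one representation need an explicit
    purpose statement per representation, or the definition debates restart."""
    return {obj: reps for obj, reps in realizations.items() if len(reps) > 1}

for obj, reps in drift_report(realizations).items():
    print(f"{obj}: {len(reps)} representations")
```

The useful output is not the count but the forced conversation: each representation must state what it is fit for, which is exactly what the ArchiMate view made visible.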

We also had to show how data moved. Some customer changes arrived through APIs. Some account and balance data still came through overnight batch. Certain transaction events were published over Kafka into the cloud analytics platform, but regulatory reporting remained dependent on controlled batch snapshots because the reconciliations and signoff process were built that way. This was one of the more important tradeoffs: modern streaming architecture existed, but the report still depended on slower, more controlled flows.

That bothered some engineers. It shouldn’t have. Architecture is there to describe operating reality, not to flatter modern platforms.

A simplified flow looked like this:

Diagram 2: Simplified flow of customer, account, and transaction data through API, batch, and Kafka paths into the controlled reporting snapshots, including the manual adjustment file.

That manual adjustment file in the middle? Politically inconvenient, architecturally essential.

Once it appeared in the view, the conversations improved dramatically. Architecture gained credibility because it was finally exposing the concrete reconciliation pain everyone privately knew existed.

The second mistake: we modeled a perfect target and hid the ugly current state

This one nearly undermined the whole exercise.

After the application-centric false start, the team swung too far the other way and produced a clean target model: canonical customer, canonical exposure, tidy ownership, standardized data quality checkpoints, rationalized reporting path.

It was lovely.

It was also misleading.

In regulated banking, auditors and regulators care deeply about what actually happens. Remediation depends on transparency about current-state mess. Hidden manual controls are often the real risk points. If local country teams use spreadsheet adjustments, end-user computing, or overnight reconciliations to bridge known gaps, that is not noise. That is architecture.

So we had to add the things people wanted to leave out:

  • manual adjustment files
  • end-user computing steps
  • local country reporting marts
  • overnight batch consolidations
  • exception-handling workflows outside strategic tooling
  • regional identity workarounds where IAM integration was incomplete

I have a fairly strong opinion here: if the official flow and the actual flow differ, model both. Call it out explicitly. Do not sanitize architecture for governance committees.

You can use plateaus and transition concepts if the organization is mature enough to maintain them. Most are not. We kept transition views focused and limited because stale transition models become fiction very quickly.

How we handled ownership, stewardship, and accountability in the model

Many architecture diagrams show systems everywhere and accountability nowhere.

That is a serious weakness in banking.

Critical data elements require accountable owners. Issue remediation needs named roles. Control signoff needs someone who cannot shrug and say “that sits with the other team.” So we modeled roles directly where they mattered.

Examples included:

  • Business Role: Head of Treasury Data
  • Business Role: Customer Domain Owner
  • Business Role: Regulatory Reporting Steward
  • Business Role: Group Finance Controller

We assigned roles to processes and linked them to business objects where ownership or stewardship mattered. Not every governance committee, not every forum, not every policy board. That way lies nonsense. We focused on decision rights relevant to the data path.

A useful pattern emerged:

  • Customer owned in business terms by the Customer Domain Owner
  • Reporting Party Dimension managed operationally by the Data Warehouse team
  • Liquidity Report production stewarded by Regulatory Reporting
  • Final report signoff sitting with the Regulatory Reporting function and finance control roles

That distinction between business ownership and operational management resolved repeated meeting arguments. The CRM team stopped claiming they “owned customer” simply because they supported the platform. The warehouse team stopped being blamed for business-definition disputes they had no authority to settle.

This sounds basic, but it changed behavior.

Modeling controls and regulatory constraints without making the diagram unreadable

Control-heavy organizations can drown architecture in annotations. I’ve seen diagrams so littered with policy references and checkpoints that nobody could tell where the data actually flowed.

What was worth modeling?

  • data quality validation checkpoints
  • reconciliation controls
  • retention and classification constraints
  • approval workflow for post-close adjustments
  • segregation of duties where it was architecturally relevant
  • regulatory requirements attached to capabilities or reporting processes

Examples from the bank:

  • reconciliation between core balances and reporting snapshots
  • approval control on manual reporting adjustments after close
  • retention constraint on customer KYC records
  • restricted access to customer identifiers enforced through IAM controls on cloud analytics workspaces
  • maker-checker approval on certain reporting transformations

The trick was not to put all of that on one mega-diagram.

We ended up maintaining three related views for critical domains:

  1. an executive view — simple, directional, business-friendly
  2. an architecture traceability view — enough detail for design authorities and auditors
  3. a control overlay — focused on checkpoints, constraints, and ownership

Readability is not cosmetic. It is a governance asset. If nobody can read the view, nobody will use it during incidents or changes.

Where ArchiMate helped most: impact analysis during change

Static documentation rarely justifies itself. Change does.

The model really earned its keep when the bank started planning major changes: CRM replacement in one region, migration of reporting workloads to cloud, decommissioning a local customer master, and introduction of a new real-time payments platform publishing events to Kafka.

Without the model, every change discussion started from scratch. With the model, we could ask useful questions immediately:

  • which business objects are affected?
  • which application data object realizations change?
  • which reports consume them?
  • what controls break if the lineage shifts?
  • which roles need to approve?
  • where does IAM policy need to change because access paths move with the platform?

For example, decommissioning a local customer master sounded like a straightforward simplification. The model showed otherwise. That local master was not the business owner of Customer, but it did feed a country reporting mart used for a liquidity adjustment step. It also carried jurisdiction-specific legal-entity mapping attributes that had never been fully migrated. Remove it carelessly and you broke reporting lineage, control evidence, and month-end reconciliation.
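Questions like these reduce to reachability over the modeled flows. A minimal sketch of that idea, with stand-in systems and feed edges loosely based on the decommissioning example (none of the names are from a real repository):

```python
# Sketch: downstream impact of removing a node, as a breadth-first
# walk over "feeds" edges. Systems and edges are illustrative.
from collections import deque

FEEDS = {
    "Local Customer Master":    ["Country Reporting Mart", "Legal Entity Mapping"],
    "Country Reporting Mart":   ["Liquidity Adjustment Step"],
    "Legal Entity Mapping":     ["Liquidity Report"],
    "Liquidity Adjustment Step": ["Liquidity Report"],
}

def downstream(node, feeds):
    """Everything reachable from `node` along feed edges."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in feeds.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(downstream("Local Customer Master", FEEDS)))
```

Even this toy graph shows the point: the "simple" decommissioning touches the adjustment step and the report itself, which is what the real model made visible before anyone switched anything off.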

Similarly, moving reporting workloads to cloud looked attractive from a platform perspective. The model exposed that some controls were embedded in warehouse-era operational procedures, not in code. If the lineage moved into cloud pipelines, control design and IAM needed to move too. Otherwise you got technical modernization with weakened regulatory assurance. That is a very expensive way to learn that architecture matters.

Data architecture models earn their keep during change, remediation, and incident response. Not in repository demos.

The arguments we had over “one source of truth”

Architects love this phrase. So do executives. It is usually too simplistic to survive contact with a bank.

Treasury said the ledger was authoritative. Risk said the exposure engine was authoritative. Customer teams said CRM owned party data. Reporting teams said only curated warehouse data was fit for purpose because that was the controlled and reconciled version.

They were all partly right.

The better way to model this was not “single source of truth” but distinctions between:

  • system of record
  • system of reference
  • system of reporting consumption

For Customer, CRM might be the system of record for onboarding attributes, while a mastered party hub served as reference for cross-entity identity, and the reporting warehouse held the approved consumption representation for regulatory use.

For Exposure, the risk engine could be the operational calculation source, while treasury used a curated liquidity exposure snapshot as its reporting-consumption representation.

Truth is contextual. It depends on purpose and lifecycle stage.

ArchiMate helped because it let us show these distinctions without pretending they were the same thing. We added lightweight annotations and view conventions rather than inventing some giant theory of authority. That was enough.
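One lightweight way to make those distinctions operational is to key authority by purpose rather than by system. A sketch, with illustrative entries drawn from the Customer and Exposure examples above:

```python
# Sketch: "truth" keyed by (business object, purpose) instead of a
# single source-of-truth flag. All entries are illustrative.

AUTHORITY = {
    ("Customer", "system_of_record"):       "CRM (onboarding attributes)",
    ("Customer", "system_of_reference"):    "Mastered Party Hub",
    ("Customer", "reporting_consumption"):  "Reporting Warehouse",
    ("Exposure", "system_of_record"):       "Risk Engine",
    ("Exposure", "reporting_consumption"):  "Treasury Liquidity Snapshot",
}

def authoritative_source(obj, purpose):
    """Fail loudly when no source is declared for a purpose, instead of
    letting the nearest system win by default."""
    try:
        return AUTHORITY[(obj, purpose)]
    except KeyError:
        raise LookupError(f"No declared source for {obj} / {purpose}")

print(authoritative_source("Customer", "reporting_consumption"))
```

The design choice is the key type: asking "what is the source of Customer?" is unanswerable, while "what is the source of Customer for reporting consumption?" has exactly one declared answer or a loud gap.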

My advice here is blunt: do not try to resolve semantic governance disputes inside the architecture tool. Use the model to make disagreement visible, bounded, and actionable. Governance must settle the actual policy.

The minimum set of views a chief architect should demand

Not fifty. Five.

That is my opinion, and I’m sticking with it.

1. Data concept and ownership view

Audience: business leaders, data governance, chief architect

Detail: business objects, roles, ownership boundaries

Update frequency: quarterly or on major governance change

2. Application realization and flow view

Audience: architects, engineering leads, program teams

Detail: business objects, data objects, applications, key flows, manual interventions

Update frequency: per major release or integration change

3. Regulatory reporting traceability view

Audience: finance, risk, audit, regulators, control teams

Detail: critical data path from source meaning to reporting output, including adjustments

Update frequency: before material reporting change and at least each reporting cycle for critical submissions

4. Control overlay for critical data elements

Audience: control owners, operational risk, internal audit

Detail: validation, reconciliation, approval, retention, access constraints

Update frequency: when controls or obligations change

5. Target transition view for remediation or modernization

Audience: portfolio governance, transformation leads

Detail: current state, target state, transition dependencies, risk hotspots

Update frequency: tied to program increments

Five good views beat fifty stale ones.
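Staleness is also checkable if each view carries an update budget matching its stated frequency. A sketch, with illustrative budgets and dates (the view names mirror the list above; nothing else is from a real registry):

```python
# Sketch: the five views as a registry with an update budget, so
# shelfware is detectable instead of discovered during an audit.
# Budgets and last-updated dates are illustrative.
from datetime import date

VIEWS = {
    "Data concept and ownership":        {"budget_days": 90,  "last_updated": date(2024, 1, 10)},
    "Application realization and flow":  {"budget_days": 60,  "last_updated": date(2024, 3, 1)},
    "Regulatory reporting traceability": {"budget_days": 30,  "last_updated": date(2024, 3, 20)},
    "Control overlay":                   {"budget_days": 120, "last_updated": date(2023, 11, 5)},
    "Target transition":                 {"budget_days": 90,  "last_updated": date(2024, 2, 14)},
}

def stale_views(views, today):
    """Views whose age exceeds their declared update budget."""
    return sorted(name for name, v in views.items()
                  if (today - v["last_updated"]).days > v["budget_days"])

print(stale_views(VIEWS, today=date(2024, 4, 1)))  # -> ['Control overlay']
```

This is governance plumbing, not architecture, but it is the kind of plumbing that keeps the five views alive instead of fossilizing.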

Warning signs of shelfware are easy to spot: diagrams nobody uses in change boards, ownership views with ex-employees still listed, target-state pictures detached from actual migration waves, and architecture repositories filled with application objects that never connect to business meaning.

Practical modeling heuristics that saved time

A few tactical rules made a disproportionate difference:

  • model nouns before pipelines
  • limit each view to one decision question
  • separate semantic concepts from technical payloads
  • use color carefully and consistently
  • mark manual interventions explicitly
  • avoid relationship overload just because the tool permits it
  • version views when regulations or org structures change materially
  • validate diagrams with the people who reconcile numbers, not just architects
  • do not collapse month-end and intraday paths if control expectations differ
  • where Kafka topics exist, model them only when they matter to lineage or control—not because event-driven architecture is fashionable
  • include IAM dependencies when access control affects data accountability or segregation of duties

That last one is often missed. In cloud-heavy banking environments, data architecture and identity architecture overlap more than teams expect. If critical reporting data moves into cloud analytics but role-based access and approval paths are not modeled, you are leaving out part of the control story.

What the finished architecture actually changed

It didn’t solve everything. That is important.

But it changed enough to matter.

The bank got faster at responding to regulator lineage questions. Not instantly, but in hours or a couple of days instead of weeks of email archaeology. Ownership ambiguity reduced because business roles were visible and repeatedly referenced in governance. The remediation roadmap for duplicated customer and reporting data stores became clearer because the model showed which duplications were merely untidy and which were genuinely risky. Platform modernization discussions improved because impact analysis had a usable baseline.

Governance meetings also became less theatrical. Fewer circular debates about whose system was “the truth.” More concrete discussion about purpose, lifecycle stage, and control point.

What did not change?

Detailed metadata quality still depended on tooling and discipline. Some local workarounds remained because the cost of removing them exceeded the current risk appetite. ArchiMate did not improve source data quality by itself. It did not magically align every domain definition. And it certainly did not eliminate spreadsheet behavior where business deadlines still outran platform change.

That is fine. Architecture is not sorcery.

Common traps when modeling data architecture with ArchiMate in banking

A few mistakes show up repeatedly.

Treating every table as a first-class architecture element.

Don’t. Enterprise architecture is not a database inventory.

Confusing data ownership with application support ownership.

The team running the CRM platform is not automatically the owner of Customer as a business concept.

Skipping manual adjustments because they are embarrassing.

Those are often the highest-risk control points.

Creating one giant end-to-end diagram no one can read.

If the diagram needs a tour guide, it has failed.

Trying to settle semantic disputes inside the tool instead of through governance.

The repository should expose ambiguity, not adjudicate it on its own.

Never linking data objects to business capabilities or reporting obligations.

Then the model becomes technical scenery.

Letting repository hygiene collapse after the first program wave.

This happens constantly. The initial program funds the model. Then ownership fades and the architecture fossilizes.

None of these are theoretical. I’ve seen every one of them, more than once.

Closing argument: model for accountability, not decoration

In regulated banking, data architecture has to do more than look coherent. It has to explain responsibility, lineage, meaning, and operational dependency.

That is why ArchiMate can be so useful. Not because it is the best notation for data in the abstract, but because it can connect those concerns across business, application, and technology layers in a way people can actually discuss.

Used well, it helps answer uncomfortable questions quickly. Used badly, it becomes a prettier inventory.

If I were advising a chief architect starting this tomorrow, I would keep it simple:

Start with one critical reporting or risk use case.

Model the few data objects that matter.

Expose the ugly current truth.

Connect data to process, systems, ownership, and controls.

Keep the views alive through change.

Because the next time the regulator asks where a number came from, the architecture should help you answer in hours, not weeks.

FAQ

Is ArchiMate enough for data lineage?

No. It is enough for high-level lineage and traceability conversations. Use dedicated lineage tooling for field-level detail.

How detailed should data objects be in enterprise architecture models?

Detailed enough to answer ownership, meaning, flow, and change-impact questions. Not detailed enough to reproduce schema design.

Can ArchiMate represent data governance roles effectively?

Yes, if you keep the focus on decision rights and accountability relevant to the data path. Don’t model every committee.

How do you model manual adjustments and spreadsheets without clutter?

Represent them explicitly as data objects or process steps in a dedicated traceability or control view. Don’t hide them, but don’t put every detail on the executive diagram.

What is the right boundary between ArchiMate and logical/physical modeling tools?

ArchiMate for enterprise relationships and decision support. Logical and physical tools for schema, payload, attribute, and implementation detail. Keep that boundary firm.
