ArchiMate for Security Architecture: Zero Trust Modeled in Banking


Zero Trust has become one of those phrases that gets approved in steering committees before anyone has done the harder work of defining what it actually means in the architecture.

I’ve seen this pattern repeatedly in banks. The program begins as a policy statement, or an IAM modernization stream, or a network segmentation investment, or a cloud security initiative. Procurement starts moving. Product owners get assigned. Vendor decks multiply. A few principles are drafted, usually some variation of never trust, always verify. Everyone nods along.

And yet, six months later, the mobile banking team is doing one thing, the branch environment another, the call center another, and the privileged admin estate something else again. Fraud teams score transactions. IAM teams score identities. Network teams still treat internal traffic as semi-trusted. Operations teams maintain emergency exceptions that quietly turn permanent. Audit asks a simple question — where is the actual trust decision made for this journey? — and the room goes quiet.

That is usually the moment when it becomes obvious that Zero Trust was declared before it was modeled.

In banking, that gap is expensive. It creates fragmented controls, duplicated decision points, unclear ownership, contradictory customer journeys, and painful audit cycles. Worse, it creates a false sense of progress, because plenty of technology may have been deployed while the underlying trust model remains inherited, inconsistent, and mostly implicit.

This is exactly where ArchiMate earns its keep.

Used properly, ArchiMate gives you a way to connect strategy, regulatory drivers, business processes, application services, runtime enforcement, and evidence. Not as separate documents. As one traceable architecture. In regulated banking, that matters because security controls are never purely technical; they shape customer experience, operational resilience, fraud loss, outsourcing posture, and auditability.

So the argument in this article is straightforward: Zero Trust only becomes credible when trust assumptions, policy decisions, identity context, enforcement points, and business impact are modeled end to end.

Not as slogans. As architecture.

The banking scenario

To keep this practical, I’ll use a composite but very familiar banking environment throughout.

Think of a mid-to-large bank with retail channels, SME lending, payment processing, a customer service center, and a growing set of third-party fintech integrations. Its digital channels run in a hybrid cloud model. Customer-facing mobile and web services sit across private and public cloud, often containerized, usually fronted by an API gateway, with Kafka carrying event streams for fraud, servicing, and operational telemetry. Core banking and some payment systems remain on-prem, including a legacy mainframe and a couple of systems that everyone is quietly afraid to touch.

That is not unusual. In my experience, it is normal.

The regulatory backdrop is just as recognizable: data protection obligations, operational resilience expectations, third-party and outsourcing scrutiny, strong identity and access auditability, and a constant drumbeat around fraud and customer harm. Add to that the board-level expectation that the bank can explain who accessed what, under what authority, with what evidence, and whether the control actually worked.

The key actors are easy to recognize:

  • customer
  • call center agent
  • fraud analyst
  • payments operations user
  • privileged platform engineer
  • external fintech partner

And the journeys we care about are not abstract boxes on a capability map. They are concrete:

  • a customer logs into mobile banking and initiates a high-value transfer
  • an employee accesses customer records from a managed device
  • a partner API requests account verification
  • an engineer performs production support on the payments platform during an incident

These journeys matter because Zero Trust does not look the same in each of them. Nor should it. The trust signals, risk tolerance, enforcement points, and evidence expectations are different. If you do not model those differences deliberately, teams will invent their own local version of Zero Trust and call it enterprise architecture.

I’ve seen that movie more than once. It doesn’t end well.

Before you draw anything: define what “trust” means

Here is a blunt observation from practice: many teams model controls, but they do not model trust assumptions.

That sounds subtle. It really isn’t.

A lot of security architecture repositories are full of components called IAM Platform, API Gateway, SIEM, PAM, and EDR. Fine. Useful. But those component names tell you almost nothing about the actual trust model of the enterprise. They don’t tell you who is requesting access, to what asset, under what conditions, based on what evidence, decided by whom, enforced where, and observed how.

Those are the questions that matter.

If I were setting up an ArchiMate usage guide for Zero Trust in a bank, I would insist on standardizing a small set of modelable concerns very early:

  • identity confidence
  • device posture
  • session risk
  • data sensitivity
  • transaction criticality
  • workload identity
  • network location as a weak signal, not a primary trust anchor
  • policy decision authority
  • enforcement point
  • evidence artifact

That last one often gets ignored, which is odd given how much time banks spend with auditors, regulators, and risk committees.

Trust in this context is not a moral category, and it is not a slogan. It is an architectural property describing the confidence and conditions under which an access request, transaction, or administrative action is allowed to proceed. It is contextual, revocable, and increasingly dynamic.

A high-value transfer from a known customer on a registered device is a good example. The customer may be “known.” The device may be healthy. The session may look ordinary. But the action still needs a stronger policy path than a balance inquiry because the transaction itself carries greater fraud and business risk. That is not because the customer is less trusted as a person. It is because the requested action demands more assurance.

If that distinction is not explicit in the model, teams collapse identity assurance and transaction authorization into one muddy idea of “good login.”

Here is what teams often miss: if these concepts are not normalized in your architecture metamodel usage guide, every program creates its own vocabulary. IAM says assurance level. Fraud says risk score. Device teams say posture. API teams talk about client trust. Operations teams say approved access. Before long, you have five architectures describing the same trust problem in incompatible terms.

That is where governance starts to fray.

A pragmatic viewpoint set for Zero Trust

Do not try to model Zero Trust in one all-knowing security diagram. That diagram will be unreadable by week two.

In practice, I recommend a small stack of views, in an order that reflects how architects actually work:

  1. Motivation view
  2. Business interaction or process view
  3. Application cooperation view
  4. Technology or infrastructure view
  5. Implementation and migration view

I’m fairly opinionated on this. Capability maps alone are too abstract for this topic. Pure technology views become product inventories. Process views without decision and enforcement logic are misleading. You need the stack.

The motivation view answers why controls exist at all: fraud pressure, resilience expectations, audit findings, cloud adoption, third-party growth. It captures drivers, assessments, goals, principles, requirements, and constraints.

The business/process view shows where trust decisions affect operations and customer outcomes. Not in theory. In actual journeys.

The application cooperation view is where Zero Trust usually becomes real. It identifies which services participate in policy administration, policy decision, telemetry enrichment, enforcement, exception handling, and evidence logging.

The technology view places those services in runtime context: endpoints, network paths, service mesh, cloud segments, PAM gateways, mainframe brokers, telemetry pipelines.

The migration view forces honesty about current state and transition. Most banks are moving from perimeter-heavy, app-specific, inherited trust patterns. They are not teleporting to a clean policy-centric target state.

A few conventions help enormously:

  • model Policy Decision Service as an application service
  • model Policy Administration Service separately from decision and enforcement
  • model Device Posture Collection and Segmentation Control as technology services
  • use business objects or data objects for things like Customer Profile, Access Policy, Risk Signal, Consent Artifact
  • use Assessment, Requirement, and Constraint for regulatory and control drivers
  • use Plateau, Gap, and Work Package for transition planning

That may sound dry, but consistency matters. Without it, the model becomes interpretive art.

Start with motivation and risk, not tools

The first model should not be a product architecture.

It should be the rationale.

In the banking scenario, the drivers are obvious: fraud pressure on digital channels, regulatory scrutiny, cloud adoption, increased third-party integration, and the need to reduce blast radius when something inevitably goes wrong. Assessments might include inconsistent trust decisions across channels, excessive standing privilege in operations, weak lateral movement controls, and poor traceability of access decisions.

From there, goals become meaningful:

  • reduce implicit trust
  • improve traceability of access decisions
  • contain breaches
  • tighten privileged access governance
  • align customer and workforce decision patterns

The outcomes should be measurable enough to matter: lower fraud loss, faster production of audit evidence, fewer standing privileged accounts, cleaner partner access governance.

Then come principles. In my experience, these are only useful if they can be tested against design choices:

  • verify explicitly
  • least privilege by default
  • assume compromise
  • policy decisions must be observable
  • business criticality influences assurance level

Requirements and constraints give the model regulatory teeth: strong customer authentication, segregation of duties, production admin session recording, data residency constraints for security telemetry, and so on.

A real-world example from payments operations illustrates the value of this. Many banks still have operations users with broad access across internal applications because speed and exception handling historically trumped control design. That condition should be modeled as an assessment and linked to a principle that replaces inherited trust with contextual authorization and observable decisioning. Otherwise, the broad access survives every transformation under a new label.

Common mistake? Writing Zero Trust principles so vague they cannot be traced to architecture decisions. If a principle says “all access must be secure,” that is not architecture. That is wallpaper.

Model the business journey where trust is actually decided

Before expanding into a grand enterprise repository exercise, pick one end-to-end scenario.

For this article, the core scenario is a mobile banking customer initiating a high-value international transfer.

At the business layer, the model should include:

  • customer authentication
  • transfer initiation
  • transaction review
  • fraud screening
  • step-up verification
  • payment authorization
  • exception handling

The point is not to show a nice process flow for the sake of BPM aesthetics. The point is to surface where trust decisions happen. Login is only one part of the story. The transfer itself carries its own trust questions: Is this a known beneficiary? Is the session behavior consistent? Is the customer traveling? Has device posture changed? Is the amount unusual? Is the destination account associated with known risk indicators?

Zero Trust, in a banking journey, is not just about whether a subject can access an application. It is about whether the requested action should proceed under current conditions.

So in ArchiMate terms, I would model the Customer as a business actor using a Mobile Banking Service, supported by business processes such as Authenticate Customer, Submit Transfer Request, Assess Transaction Risk, Perform Step-Up Challenge, and Authorize Payment. Relevant business objects would include Transfer Request, Customer Profile, Beneficiary Details, and Challenge Result. A Step-Up Challenge Triggered event should be modeled as an event caused by risk-based decisioning, not as some mysterious UX branch.
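Sketched as a flow, with the risk-triggered event made explicit rather than hidden as a UX branch:

```mermaid
flowchart LR
    A["Authenticate Customer"] --> B["Submit Transfer Request"]
    B --> C["Assess Transaction Risk"]
    C -->|risk acceptable| E["Authorize Payment"]
    C -->|"Step-Up Challenge Triggered"| D["Perform Step-Up Challenge"]
    D --> E
```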

That distinction matters because many banks still treat fraud architecture and access control architecture as separate worlds. The result is contradictory decisions in the same journey: IAM says authenticated, fraud says suspicious, payments says proceed, customer experience says simplify. Nobody owns the decision composition.

Here is what teams often miss: if fraud signals are outside the core architecture model, your Zero Trust design is incomplete. In banking, transaction trust and access trust are intertwined whether the org chart likes it or not.

Drop to application cooperation: who decides, who enforces

This is the layer where things stop being philosophy.

In the banking example, the key application services and components might include:

  • customer identity provider
  • authentication service
  • device intelligence service
  • fraud and risk engine
  • policy decision service
  • policy administration service
  • API gateway
  • mobile channel backend
  • payments orchestration service
  • customer profile service
  • case management service
  • audit evidence service
  • event backbone, often Kafka, for risk and decision telemetry

Now the important distinction: architects need to model policy administration, policy decision, policy enforcement, and telemetry collection separately.

Those are not the same thing. They are often owned by different teams, implemented in different platforms, and changed at different rates. When they get blurred together, the architecture becomes very hard to reason about.

A typical interaction looks something like this:

  1. Mobile app submits transfer request.
  2. API gateway validates token and channel context.
  3. Mobile banking backend calls customer profile service and device intelligence service.
  4. Risk engine enriches the request with behavior signals, geo-velocity anomalies, beneficiary novelty, and recent fraud indicators.
  5. Policy decision service evaluates transaction conditions, session conditions, and business policy.
  6. Payments orchestration service either proceeds, blocks, or requires step-up verification.
  7. Decision, inputs, and resulting action are logged to an audit evidence service, with selected events published to Kafka for fraud review, SIEM correlation, and downstream investigation.
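Those seven steps can be sketched as a sequence. Participant names follow the service list above; Kafka publication is folded into the evidence step, and this is a simplification rather than a complete cooperation view:

```mermaid
sequenceDiagram
    participant App as Mobile App
    participant GW as API Gateway
    participant BE as Mobile Banking Backend
    participant Risk as Risk Engine
    participant PDP as Policy Decision Service
    participant Pay as Payments Orchestration
    participant Ev as Audit Evidence Service

    App->>GW: Submit transfer request
    GW->>BE: Validated token + channel context
    BE->>Risk: Profile, device and behavior signals
    Risk->>PDP: Enriched risk context
    PDP->>Pay: Decision: proceed / block / step-up
    Pay-->>App: Challenge or confirmation
    Pay->>Ev: Decision, inputs, action (events to Kafka)
```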

That is the kind of cooperation model chief architects should want. It shows where the deny decision is made, where challenge is triggered, what data informs the decision, and where evidence is preserved.

If the model cannot answer "where is the deny decision made?", it is too vague.

In banking environments, enforcement is usually scattered. Some decisions sit in channel code. Some in IAM policies. Some in API gateways. Some in fraud tools. Some in operational procedures. The model should make that visible, not sanitize it away.

Partner APIs deserve their own pattern

Take a fintech account-verification API. This is not just “external integration.”

The application cooperation view should show workload identity, certificate trust, API authorization, contract-based data minimization, and decision logging. The fintech partner is an external actor consuming an application interface governed by a contract and constrained by policy. The API gateway may enforce client certificate validation and OAuth scopes. A policy decision service may evaluate whether the requested data attributes are permitted under the contract. Payload filtering may enforce minimization. Every call should produce evidence.

Banks often over-trust approved partners because the due diligence process felt rigorous. But approved does not mean broadly trusted. It means contextually permitted under specific conditions.

A small Mermaid sketch, just to make the cooperation concrete:

Diagram 1: partner API verification and enforcement flow
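In Mermaid form, following the description above; the account data service is an illustrative participant name:

```mermaid
sequenceDiagram
    participant FP as Fintech Partner
    participant GW as API Gateway
    participant PDP as Policy Decision Service
    participant Acct as Account Data Service
    participant Ev as Audit Evidence Service

    FP->>GW: Verification call (client certificate + OAuth scopes)
    GW->>PDP: Requested attributes + contract context
    PDP-->>GW: Permitted attribute set
    GW->>Acct: Minimized query
    Acct-->>GW: Account verification result
    GW-->>FP: Filtered payload
    GW->>Ev: Call and decision evidence
```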

Not a complete model, obviously. But enough to show the moving parts without drowning in notation.

Zero Trust concerns mapped to ArchiMate focus

The mapping below is one of the more useful artifacts to standardize across teams.

| Zero Trust concern | Where it lives in the model | Evidence | Known failure mode |
|---|---|---|---|
| Identity confidence | Application service (authentication, identity provider) | Authentication and challenge logs | Treating a successful login as permanent trust |
| Device posture | Technology service (device posture collection) | Posture attestation records | Posture checked once, never re-evaluated |
| Transaction criticality | Business process and business object (Transfer Request) | Risk scores and decision records | Collapsing transaction authorization into "good login" |
| Policy decision authority | Application service (policy decision service) | Decision logs with their inputs | Decisions scattered across channel code |
| Enforcement point | API gateway, service mesh, PAM proxy | Enforcement and session logs | Enforcement blurred into decisioning |
| Evidence artifact | Business or data object linked to services | Audit evidence store entries | Evidence living in slides, not the repository |

What matters here is not the table itself. It is the discipline behind it. You are telling architecture teams that a Zero Trust concern is never just a technology item. It has a place in the model, a type of evidence, and a known failure mode.

That saves a lot of confusion later.

Do not skip the technology layer, but do not let it take over

Banks love turning security architecture into product diagrams.

It is understandable. Technology is visible. It has owners, budgets, implementation plans. But a Zero Trust technology view should model runtime context and enforcement topology, not just logos.

What should be there?

  • managed and unmanaged endpoints
  • workload runtime environments
  • branch and remote access patterns
  • private and public cloud segments
  • privileged access paths
  • telemetry pipelines
  • enforcement technologies
  • critical communication paths

In our scenario, that might mean a branch thin-client environment, VDI for offshore operations, a service mesh in the digital banking platform, a legacy mainframe access broker, a PAM vault and session proxy, and a SIEM or data lake retaining decision evidence. Kafka may sit in the telemetry path, carrying high-value transaction events, policy outcomes, and fraud alerts into monitoring and investigation pipelines.

How do you model “assume breach” in practical terms? Not with a principle box alone. You show segmentation boundaries, constrained east-west access, short-lived credentials, monitored admin paths, isolation of high-risk workloads, and runtime telemetry feeding centralized observation.

My view is that network location still matters operationally. Of course it does. A managed branch network and a coffee-shop Wi-Fi are not equivalent. But location should not carry most of the trust burden anymore. It is one signal among many. A lot of banks say this out loud while quietly preserving flat authorization inside “trusted” zones. That is not Zero Trust. It is perimeter thinking with better branding.

The awkward bit: legacy core banking and inherited trust

This section belongs in the middle because legacy constraints shape the architecture more than target-state purity ever will.

Typical realities in banks include shared service accounts, terminal-based admin access, broad middleware trust, brittle integration patterns, and incomplete telemetry. Some core systems cannot enforce modern fine-grained policy. Some batch interfaces were designed for a different era. Some admins still access production through patterns that would never be approved in a greenfield cloud platform.

Pretending otherwise helps nobody.

So model compensating architecture honestly:

  • access broker as interim enforcement point
  • privileged session gateway
  • command logging
  • batch interface restrictions
  • stronger monitoring around weak native controls
  • transaction mediation at the front door
  • constrained operator roles around the legacy platform

A very common example is a core payment switch that cannot enforce fine-grained access policy itself. In that case, Zero Trust is achieved through front-door mediation, transaction risk controls, tightly constrained operator access, and comprehensive evidence capture around the weak native control surface.
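That compensating pattern, sketched around the weak control surface:

```mermaid
flowchart LR
    Ch["Channels and partners"] --> Med["Front-door mediation: gateway + payments orchestration + transaction risk controls"]
    Med --> SW["Legacy payment switch (weak native policy)"]
    Ops["Constrained operator roles"] --> Brk["Access broker / PAM session gateway"]
    Brk --> SW
    Med --> Ev["Evidence capture"]
    Brk --> Ev
```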

Residual risk should be explicit in the model. That is not a failure. It is architecture honesty.

One mistake I see far too often: target-state diagrams quietly omitting the legacy systems auditors care about most. If your shiny Zero Trust architecture excludes the mainframe, the payment switch, or the old middleware estate, it is not the bank’s architecture. It is a selective illustration.

Privileged access is where credibility gets tested

Customer journeys get the funding and the executive airtime. Administrator journeys reveal whether the architecture is serious.

Take a familiar scenario: a platform engineer needs emergency production access during a payment incident.

The model should capture the request and approval flow, just-in-time privilege issuance, session brokering, command control or restricted shell access, session recording, break-glass path, and post-event review. This is a place where resilience and security genuinely pull in different directions. Operations teams want speed. Control teams want friction. Good architecture reconciles both without relying on heroics.

At the business layer, you model the emergency support workflow and the triggering event. At the application layer, you show approval service, credential issuance service, privileged session proxy, and audit evidence service. At the technology layer, you show bastion, PAM, logging, and target nodes.

A compact sketch:

Diagram 2: emergency privileged access flow
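In Mermaid form, with participant names following the application-layer description above:

```mermaid
sequenceDiagram
    participant Eng as Platform Engineer
    participant Appr as Approval Service
    participant Cred as Credential Issuance Service
    participant PAM as Privileged Session Proxy
    participant Tgt as Payments Platform
    participant Ev as Audit Evidence Service

    Eng->>Appr: Emergency access request (incident reference)
    Appr-->>Eng: Time-bound approval
    Eng->>Cred: Request just-in-time privilege
    Cred-->>Eng: Short-lived credential
    Eng->>PAM: Open brokered session
    PAM->>Tgt: Constrained commands
    PAM->>Ev: Session recording + command log
    Ev-->>Appr: Input to post-event review
```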

My advice from experience: design a distinct emergency pattern. Do not let temporary exceptions become the real operating model. In many banks, customer-facing Zero Trust controls are maturing while administrators still operate with standing privilege on trusted networks. Once auditors see that asymmetry, every maturity claim starts to wobble.

Connect the architecture to control frameworks and audit

Elegant diagrams are not enough in regulated environments.

Chief architects need the model to support control mapping, evidence lineage, policy ownership, and operational accountability. One of the most valuable trace patterns is this:

driver → requirement → principle → service → process → evidence

That simple chain turns architecture into something auditors, risk teams, and control owners can actually use.

A few concrete examples:

  • a strong customer authentication requirement traces to authentication service, step-up challenge event, and authentication logs
  • a privileged access control requirement traces to PAM services, approval workflow, session proxy, and session recordings
  • a data minimization constraint traces to partner API contract, payload filtering service, and API access logs
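The first of those traces, written out as the chain suggests (a sketch, not a normative mapping):

```mermaid
flowchart LR
    D["Driver: fraud pressure on digital channels"] --> R["Requirement: strong customer authentication"]
    R --> P["Principle: verify explicitly"]
    P --> S["Service: authentication service"]
    S --> Pr["Process: Authenticate Customer / Perform Step-Up Challenge"]
    Pr --> E["Evidence: authentication and challenge logs"]
```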

Auditors care about more than whether a control exists. They care where it operates, who owns it, what business process it affects, and how effectiveness is evidenced. If the architecture repository cannot answer those questions, the GRC repository ends up inventing a parallel reality. Then neither repository reflects implementation.

Practical tip: attach evidence-producing artifacts in the architecture repository itself, not just in meeting notes or slides. The repository should not become a dumping ground, but it should preserve linkage to evidence classes and ownership.

This is one of those unglamorous practices that pays for itself during audits and remediation planning.

Migration: from perimeter-heavy to policy-centric

Transition architecture matters because most banks have to improve while continuing to operate, satisfy regulators, and avoid breaking customer journeys.

A sensible migration sequence usually looks like this:

  • inventory critical journeys and their trust dependencies
  • establish a shared policy vocabulary
  • centralize or federate policy decision patterns
  • remove broad internal trust for privileged access
  • introduce workload and service authentication for APIs
  • tighten segmentation around crown-jewel services
  • unify decision logging and evidence generation
  • retire duplicate embedded authorization logic over time

ArchiMate gives you the right concepts for this: plateaus, gaps, work packages, deliverables.

An example set of plateaus might be:

  • Current: perimeter VPN, app-specific access rules, fragmented fraud and IAM decisions, standing admin privilege
  • Transitional: centralized identity, partial device posture integration, PAM uplift, API gateway policy enforcement, Kafka-based decision telemetry
  • Target: contextual policy decisions across user, workload, transaction, and admin access paths, with consistent evidence generation
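The plateau progression in Mermaid form, with each gap bridged by work packages (the work package groupings are illustrative):

```mermaid
flowchart LR
    C["Current: perimeter VPN, app-specific rules, fragmented decisions, standing privilege"]
    T["Transitional: centralized identity, partial posture, PAM uplift, Kafka decision telemetry"]
    G["Target: contextual decisions across user, workload, transaction, admin paths, with consistent evidence"]
    C -->|gap + work packages| T
    T -->|gap + work packages| G
```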

One lesson learned the hard way: trying to centralize every decision engine at once usually fails. Teams underestimate embedded business logic, local exceptions, regulatory nuances, and delivery dependencies. Start with consistency of policy semantics and evidence patterns. Tool consolidation can follow. Sometimes it should follow.

That is not the fashionable answer, but it is the one that tends to work.

What architects usually get wrong

Let me be candid.

A lot of Zero Trust modeling in ArchiMate is not wrong because the notation is weak. It is wrong because the thinking behind it is shallow.

Here are the common mistakes:

1. Equating Zero Trust with MFA rollout

Better approach: model trust decisions across transaction, workload, and privileged access paths, not just login.

2. Modeling products instead of decision architecture

Better approach: show who decides, who enforces, who observes, and what evidence is produced.

3. Ignoring transaction-level trust

Better approach: include fraud and payment risk in the core model for high-risk journeys.

4. Separating fraud architecture from security architecture

Better approach: treat them as cooperating domains in customer and payment journeys.

5. Omitting legacy platforms

Better approach: model compensating controls and residual risk explicitly.

6. No distinction between policy decision and enforcement

Better approach: separate policy administration, decision, enforcement, and telemetry in the model.

7. Assuming segmentation equals least privilege

Better approach: show authorization semantics inside segments, not just boundaries between them.

8. Not modeling evidence generation

Better approach: every critical access path should show how decision evidence is produced and retained.

9. Using one giant “security view”

Better approach: create a connected set of views for motivation, business, application, technology, and migration.

10. Treating partner connectivity as exempt because of contracts

Better approach: model workload identity, API authorization, minimization, and observability.

I’ll add one more, because I keep running into it.

11. Letting security architecture become a parallel shadow model

Better approach: integrate with business, platform, integration, and resilience architecture so delivery teams see one enterprise picture, not competing diagrams.

A worked mini-model set

Let’s pull the pieces together.

Motivation snapshot

The bank faces rising fraud on digital transfers, increased regulatory scrutiny of access decisions, growth in fintech integrations, and operational risk from broad admin privilege. Principles are set: verify explicitly, least privilege by default, assume compromise, and ensure decisions are observable.

Business journey snapshot

A customer initiates a high-value transfer from the mobile app while traveling. Authentication succeeds, but the session risk increases due to geo-velocity and a newly added beneficiary. A business event triggers step-up challenge and fraud review before authorization can proceed.

Application cooperation snapshot

The mobile channel submits the request through the API gateway. Authentication service validates identity. Device intelligence contributes posture and registration signals. Risk engine enriches context using profile, behavior history, and fraud indicators. Policy decision service evaluates the combined conditions. Payments orchestration either pauses for challenge, rejects, or proceeds. All decision data is logged and relevant events are streamed over Kafka to fraud operations and monitoring.

Technology enforcement snapshot

The mobile backend runs in cloud Kubernetes clusters with service mesh policy for east-west communication. API gateway terminates external channel requests. Risk and decision services publish telemetry to Kafka, then to SIEM and the evidence store. Payment core connectivity is constrained through mediated services. Admin access to the payments environment is only via PAM proxy with session recording.

Audit trace snapshot

A strong customer authentication requirement links to authentication service and challenge records. A fraud control requirement links to risk scoring and case management events. A payment authorization policy links to the policy decision service and the recorded decision outcome. An admin control requirement links to PAM approval, session recording, and post-event review.

A second, shorter example makes the point from the workforce side.

During a payment outage, an engineer requests break-glass access. Incident declaration triggers emergency workflow. Approval is recorded. JIT privilege is issued for a limited duration. Session is brokered and recorded. Post-event review confirms commands executed and whether the access remained within scope.

That is Zero Trust in practice: not one diagram, but a coherent set of connected views.

Practical modeling conventions for EA teams

A few conventions make these models much easier to use.

Use names that describe decisions and control intent, not vague platform labels:

  • Policy Decision Service
  • Transaction Risk Assessment
  • Privileged Session Enforcement
  • Customer Transfer Authorization
  • Partner API Contract Enforcement

Keep repository hygiene under control. Reuse security concepts across domains instead of inventing local variants in each program. Avoid separate metamodel folklore for IAM, fraud, network, and cloud teams.

My suggested minimum rules are simple:

  • every critical access path must show decision, enforcement, and evidence
  • every crown-jewel service must show trust dependencies
  • every exception path must identify compensating controls
  • every partner integration must show authorization basis and data minimization
  • every privileged workflow must distinguish approval, credential issuance, and session control

And collaborate early. IAM, fraud, infrastructure, resilience, platform engineering, and GRC all need to see themselves in the model. Security architecture should not become an isolated expert language that nobody else reads.

If only the security architects can understand the repository, it will not change delivery behavior. It will just decorate governance.

Optional questions I get a lot

Is ArchiMate too abstract for Zero Trust implementation?

No. It is abstract enough to stay enterprise-relevant and precise enough to model decision and enforcement patterns, if you use it with discipline.

How detailed should policy logic be?

Detailed enough to show categories of decision input, ownership, and enforcement placement. Not so detailed that the repository becomes a policy rule engine.

Should fraud decisioning sit inside or beside Zero Trust architecture?

In banking, beside is usually not enough. It needs to be connected directly for high-risk journeys.

How do we model vendor tools without turning the repository into a catalog?

Model products where they realize services or enforce controls. Do not let product names substitute for architectural meaning.

What if legacy systems cannot support modern enforcement?

Then model compensating controls, residual risk, and transition plateaus honestly. That is architecture too.

Conclusion: model trust decisions, not just security components

Zero Trust in banking is not a destination pattern copied from vendor decks.

It is a disciplined way to model trust assumptions, policy logic, enforcement placement, evidence generation, and business impact across the bank’s real operating landscape: cloud, legacy, APIs, Kafka event flows, IAM, fraud engines, human workflows, and all the awkward seams between them.

ArchiMate is useful here because it lets chief architects connect those layers instead of discussing them in isolation. That connection is what makes Zero Trust governable, auditable, and real.

If I were advising a chief architect starting this journey, I would keep it very simple.

Pick one high-risk banking journey.

Model it across motivation, business, application, technology, and migration views.

Expose where trust is still implicit.

Identify who decides, who enforces, and what evidence is produced.

Then use that model to drive the roadmap, rationalize controls, and prepare for the audit questions that are definitely coming.

That is when Zero Trust stops being a slogan and starts becoming architecture.

Frequently Asked Questions

How is Zero Trust architecture modeled in ArchiMate?

Zero Trust is modeled by making identity and access control explicit at every layer. IAM components appear as Application Components with Serving relationships to the applications they protect. Network micro-segmentation appears in the Technology layer.

What ArchiMate elements are used for security architecture?

Application Components (IAM, firewall, WAF, SIEM), Application Services (authentication, authorization, logging), Technology Nodes (security infrastructure), and Constraints in the Motivation layer for regulatory requirements.

Can ArchiMate model GDPR compliance?

Yes — Requirements and Constraints in the Motivation layer represent GDPR obligations. These link to Application Components and Data Objects that must comply, creating a traceable map from regulation to technical control.