ArchiMate Physical Layer: When and How to Use It


There’s a question I wish more architecture teams asked much earlier:

Do we actually need the Physical Layer here?

Not “does ArchiMate include it?”

Not “can our tool model it?”

Not “wouldn’t it be nice to make the repository complete?”

The real question is simpler, and harder: does physical reality materially change the decision in front of us?

I’ve seen this surface very clearly in banking. A payments platform migration is under review. The business architecture looks neat. The application landscape is mapped. Technology views show cloud landing zones, Kafka-based integration, IAM dependencies, API gateways, resilience patterns — all the things people expect in a modern architecture pack. On paper, it feels complete.

Then somebody asks a very ordinary operational question.

Which branch services still depend on local devices?

Where do the cash-handling machines actually sit?

What happens if the branch WAN link fails?

Can we close that smaller data center without breaking secure printing, card issuance, or device management?

Which customer journeys still rely on equipment physically present in branches?

And the room goes quiet.

That silence is usually the tell. The architecture can be digitally elegant and still be operationally incomplete.

That, to me, is where the ArchiMate Physical Layer earns its place. Not as an exercise in modeling purity. Not because the metamodel says it exists. But because some enterprises — banks especially — still run on a combination of digital services and stubborn physical dependencies. Buildings matter. Devices matter. Network paths across real sites matter. Sometimes materials matter too, usually more than architects expect until a regulator, an auditor, or an outage reminds them.

At the same time, there’s a second failure mode. Teams discover the Physical Layer and lurch too far the other way. Suddenly the repository starts to resemble a warehouse inventory system. Every device. Every room. Every cable route. Endless equipment objects with no clear architectural purpose. Within six months it’s stale, nobody trusts it, and the whole thing gets dismissed as “too hard to maintain.”

So my view is pretty straightforward, and yes, slightly opinionated:

Most teams underuse the Physical Layer in operationally real environments. Just as many misuse it by modeling everything they can touch.

The trick is judgment. That’s what this article is really about.

Not notation theory.

Not exhaustive metamodel coverage.

Practical use.

What the Physical Layer is for — and what it definitely is not for

ArchiMate’s Physical Layer is one of those parts of the language that makes immediate sense once you’ve spent time around real operating environments, and feels oddly abstract if you haven’t.

In plain terms, the main concepts are straightforward:

  • Facility: a physical structure or place that matters operationally. A branch building. A data center. A secure room. A vault area.
  • Equipment: a physical device or machine. ATM. Cash recycler. Secure print device. Hardware security module appliance. Kiosk.
  • Distribution Network: the physical network used to transport things, energy, or signals between places. In enterprise architecture practice, this often appears as branch connectivity, leased lines, site interconnects, or other physically anchored network links where location and path really matter.
  • Material: physical matter or consumables that are operationally relevant. Cash cassettes. Card stock. Secure forms. In some industries, fuel or medicines. In banking, usually only where continuity or control depends on them.
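
Purely as an illustration of how these four concepts relate (the dataclasses and element names below are invented for this sketch; they are not part of ArchiMate or any modeling tool), they can be expressed as:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Facility:
    name: str  # e.g. a branch building, a data center, a secure room

@dataclass
class Equipment:
    name: str                              # e.g. an ATM or cash recycler
    housed_in: Optional[Facility] = None   # Facility houses Equipment

@dataclass
class DistributionNetwork:
    name: str                              # e.g. a branch WAN link
    connects: List[Facility] = field(default_factory=list)

@dataclass
class Material:
    name: str  # e.g. cash cassettes, card stock, secure forms

# Hypothetical elements, just to show the shape of the relationships
branch = Facility("Branch 0123")
atm = Equipment("ATM-0123-01", housed_in=branch)
wan = DistributionNetwork("Branch WAN link", connects=[branch])
cassettes = Material("Cash cassettes")

print(atm.housed_in.name)  # Branch 0123
```

The point is not the code; it is that each concept earns its place through a relationship to something else, not as a standalone catalog entry.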

That all sounds obvious. But the architectural intent is more specific than many people assume.

The Physical Layer exists to represent physical-world structures that matter to the enterprise, especially when those structures affect service delivery, resilience, compliance, operational risk, or transformation planning. It acts as a bridge between digital design and real-world operating constraint.

And that bridge matters.

Because a customer does not care that your core banking platform is elegantly decomposed into domain services if the branch can’t issue the regulated document, the ATM is down, the branch router is hanging off one brittle carrier link, or the card personalization hardware is still tied to a site you’re trying to exit.

Still, there’s a hard boundary here.

An enterprise architecture repository is not:

  • a CMDB
  • an asset register
  • a warehouse inventory list
  • a building blueprint
  • a maintenance scheduling system

I’m fairly firm on this, partly because I’ve seen what happens when teams blur the line. If the model does not support a decision, it probably does not belong in the architecture repository.

That sounds harsher than I mean it to. I’m not dismissing operational records. Those are essential. I’m saying they do different jobs.

A CMDB is usually trying to answer, “what assets do we have and what are their technical relationships?”

Architecture is trying to answer, “what matters for design, risk, change, and business outcome?”

Those overlap. They are not the same thing.

A good example is the word server. In technology architecture, a server is often modeled as a node because the computational role is what matters. In physical architecture, an ATM or a branch cash recycler matters because it is a tangible operational dependency in a customer-facing process. A data center might be modeled as a facility when site dependency, resilience zoning, or exit planning matters. In other cases, it may just be a location reference and not worth elevating in the architecture at all.

That distinction is where a lot of teams either become disciplined or get lost.

Before modeling anything: five decisions that justify using the Physical Layer

I don’t start with the metamodel. I start with a decision somebody genuinely needs help making.

In practice, there are five questions that usually justify Physical Layer modeling.

1. Does physical failure change a business outcome?

This is the first filter.

If an ATM outage reduces cash availability, that’s architecture-relevant. If a branch device failure disrupts assisted servicing, that matters. If the failure of a secure printer stops regulated documents being issued, then yes, the physical asset has crossed into architecture territory.

If the failure only creates a local inconvenience with no meaningful service, risk, or transformation impact, it probably belongs in operational tooling rather than your enterprise model.

2. Is there regulatory, safety, or audit relevance?

Banking has plenty of these cases.

Secure rooms. Vault areas. Hardware custody. Card stock handling. Device controls in branch environments. Hardware security modules in controlled facilities. Physical segregation or access constraints that auditors care about.

Once physical placement or custody becomes part of the control design, I would absolutely consider it fair game for enterprise architecture — at least to the extent needed to explain the control environment. In regulated environments, that line comes up more often than people expect.

3. Are physical assets part of a transformation roadmap?

This is where teams often wake up late.

Branch modernization. Smart kiosks. Assisted self-service. Data center exit. Moving from owned sites to co-location. Replacing on-prem card issuance equipment. Redesigning branch archetypes after a merger.

If physical assets are changing as part of the transformation, the architecture should make them visible. Otherwise you end up with target-state diagrams that look complete while quietly skipping half the work.

I’ve seen this repeatedly in “cloud-first” programs. The application migration roadmap looks polished. IAM is redesigned. Kafka connectivity is rerouted. But some critical physical device, local network termination, or secure hardware process is still anchored to a site nobody planned for. Then the migration suddenly isn’t just a migration anymore; it’s an operations and premises program hiding inside a technology deck.

4. Is resilience impossible to explain without physical context?

Some resilience conversations are mostly logical. Others really aren’t.

If branch continuity depends on a single WAN path, local device estate, edge processing, or a facility with limited redundancy, you will not explain resilience properly without the physical context. Same if disaster recovery assumptions rely on distinct facilities but in reality terminate through the same carrier corridor or shared service point.

This is also where cloud architecture can mislead people a bit. Cloud removes some physical concerns from your direct control, but not all of them. The accountability model changes. The physical dependency does not disappear.

5. Do multiple teams misunderstand ownership or dependency?

This one is underrated.

Facilities thinks the device belongs to workplace technology. Workplace thinks it belongs to branch operations. Network says the carrier owns the line. Security says the secure room controls are theirs. The business assumes “IT” handles all of it.

Whenever ownership and dependency are fuzzy across teams, a modest amount of physical architecture can save a lot of confusion.

And yes, there’s a classic anti-pattern here:

“We modeled it because ArchiMate has the element.”

That is not a reason. That is how repositories bloat.

Banking is where this gets very real

Banking is a particularly useful domain for talking about the Physical Layer because it still lives in both worlds.

It is highly regulated. It is geographically distributed. It still has physical customer touchpoints. And many supposedly digital services rely on hidden physical dependencies that only become visible during incidents, audits, closures, or transformation programs.

A typical banking landscape might include:

  • retail branches
  • ATMs
  • self-service kiosks
  • cash recyclers
  • secure print devices
  • branch routers and local connectivity equipment
  • owned data centers
  • a disaster recovery site
  • carrier links to branches
  • card personalization hardware
  • hardware security modules
  • secure storage areas
  • physical materials like card stock, secure stationery, or cash media

None of that means all of it belongs in the architecture repository. But a surprising amount of it becomes relevant once you’re answering real questions.

And those are exactly the questions architects get asked.

Which customer journeys still depend on in-branch equipment?

What physical assets are in scope if we close 60 low-traffic branches?

Which services fail if branch WAN links degrade?

Can we decommission a site without breaking secure operations?

What physical concentration risks exist in card issuing or cash handling?

What sits in a leased branch versus a bank-owned site?

Which dependencies are outsourced, and what does that mean for exit planning?

Those are not edge cases. They sit right in the middle of strategy, operations, and risk.

A useful rule of thumb: model exposure and dependency, not furniture

This is probably the most practical sentence in the whole article:

Model exposure and dependency, not furniture.

That’s the line.

What you want in enterprise architecture is the set of physical things that change risk, continuity, design, compliance, or transformation outcomes. Not a descriptive catalog of everything people can see in a building.

In banking, it is often worth modeling:

  • ATMs
  • self-service kiosks
  • cash recyclers
  • vault access control devices
  • branch routers where continuity depends on them
  • data center facilities
  • disaster recovery facilities
  • site interconnects or branch distribution networks
  • hardware security appliances where physical control matters
  • paper stock or card stock only when material supply affects business continuity

Usually not worth modeling:

  • every monitor
  • every desk printer
  • every cable path
  • generic office furniture
  • standard desktops unless there is a highly unusual dependency
  • room-level detail that never affects a design or risk decision

There is nuance, though. A secure printer might look trivial in one context and become critical in another. In retail banking, if that printer is tied to regulated document issuance or legal customer correspondence that must happen in branch, then suddenly it is not “just a printer.”

This is why I’m wary of blanket rules. The same object can be irrelevant in one context and central in another.

When stakeholders push for more detail than is useful, I often use a line like this:

“We are modeling the physical estate only to the level needed to support impact, risk, and design decisions.”

That usually resets the conversation.

The biggest mistakes I keep seeing

Some of these are very common. A few are almost predictable.

Mistake 1: Using the Physical Layer as a duplicate CMDB

This is the classic one.

A team starts capturing every device because it feels responsible and thorough. Soon the architecture repository contains hundreds or thousands of physical objects. Nobody has a sustainable update process. Operations changes something. The repository falls behind. Trust drops. Then every meeting includes the same caveat: “obviously the model isn’t fully current.”

At that point, the model has stopped being architecture and become bad asset management.

How it shows up in meetings:

“Can we rely on this diagram?”

“Well, broadly yes, but not for the exact current device count.”

That’s usually the smell of overmodeling.

Mistake 2: Leaving out physical assets in branch-heavy or operations-heavy environments

This is the opposite error, and honestly I see it just as often.

The architecture looks elegant: capabilities, applications, APIs, Kafka event flows, IAM controls, cloud zones. All clean. All modern. But it explains very little about how the service runs on a Tuesday morning in a branch network under degraded conditions.

Then operations people dismiss architecture as abstract theatre, and they’re not entirely wrong.

How it shows up in meetings:

“This all makes sense, but where’s the branch dependency?”

Or worse:

“So who has captured the site and device impacts?”

And no one has.

Mistake 3: Confusing location with facility

A city is not a facility.

A region is not a facility.

A branch building may be a facility. A data center might be. A secure room within a site may matter as a facility-like concept depending on what you need to show. But “London” is not a facility. It’s a location reference.

This sounds pedantic until models start implying controls or dependencies that aren’t actually there.

How it shows up in meetings:

“We have resilience because these two things are in different cities.”

Then somebody points out they are actually in the same co-location campus or depend on the same carrier entry point.

Mistake 4: Modeling technology nodes and physical equipment interchangeably

This is subtle and messy.

A node is about computational or execution structure. Equipment is tangible operational hardware. Sometimes one real-world thing can be represented in both ways depending on the question. An ATM, for example, has a physical existence and a technology role. But if you blur those concepts completely, your model gets muddy very quickly.

I’ve found it helps to ask: am I trying to explain logical processing, or tangible operational dependency?

If it’s both, be explicit.

How it shows up in meetings:

“Is this ATM here because it hosts software, because it’s a customer touchpoint, or both?”

If nobody can answer, the abstraction boundary is off.

Mistake 5: No relationship back to business services

This one kills usability.

I sometimes see physical diagrams full of branches, devices, and network links with no connection back to business services or customer impact. At that point it’s just an operational sketch floating in space.

Architecture value comes from traceability. If the ATM equipment is relevant, show what service it supports. If the facility matters, show why.

How it shows up in meetings:

“Interesting diagram. So what decision is this helping us make?”

If the answer is vague, the model isn’t finished.

Mistake 6: Trying to represent every resilience detail in one diagram

Physical views get unreadable very quickly. Add facilities, devices, branch types, carrier links, failover paths, outsourced ownership, cloud connectivity, DR routing, and control zones to one page and you’ve built a visual punishment device.

Use more than one view. Please.

How it shows up in meetings:

A room full of smart people staring at a dense picture while one person says, “Maybe zoom in?”

Mistake 7: Forgetting material dependencies

Architects often remember buildings and devices, then forget materials entirely. In many cases that’s fine. Sometimes it isn’t.

Card stock. Cash cassettes. Secure forms. Printer consumables. These can become continuity-critical in narrow but important processes.

This is not an invitation to model stationery. It’s just a reminder that physical continuity is not only about machines.

How it shows up in meetings:

“We planned the site transition, but how is regulated document issuance supposed to work in the interim?”

Or:

“Where does replacement card media sit in this target design?”

How to decide the right modeling depth in practice

The easiest way to get Physical Layer modeling wrong is to start from the notation instead of the decision.

Start with the decision.

Then choose the depth.

I generally think of three useful levels.

Level 1: Presence

At this level, you are simply acknowledging that a business service or transformation has physical dependencies.

This is enough for strategic conversations, early risk framing, and executive awareness. You might show that branch cash withdrawal depends on branch facility presence, ATM estate, and WAN connectivity, without breaking down every site or device type.

This is lightweight, and early on it is often enough.

Level 2: Operational dependency

This is where most enterprise repositories should land for a small number of important value streams.

Here you identify key facilities, equipment, and service relationships. Enough to support transformation planning, impact analysis, and ownership conversations. You’re not going into forensic detail, but you can explain how the service actually works and what it depends on.

For example, you might show that a branch withdrawal service depends on:

  • ATM equipment in branch archetype A and B
  • branch WAN distribution network
  • centralized transaction authorization platform
  • IAM-based device authentication
  • Kafka-based event publication for monitoring or fraud telemetry
  • secure cash replenishment process
  • a specific site class for business continuity

That’s useful. More importantly, it’s maintainable.

Level 3: Resilience and control

This is the deepest level and should be used selectively.

Now you add redundancy patterns, resilience zones, custody, critical materials, facility distinctions, regulatory controls, and key concentration points. This is where you go when continuity, audit, major incidents, or operational risk genuinely demand it.

Most enterprises do not need Level 3 everywhere. In fact, very few should even try.

In my experience, Level 2 for a small subset of critical services gives better value than Level 3 across everything.

The triggers for increasing detail are usually obvious:

  • a major incident
  • a regulator finding
  • a branch transformation program
  • an outsourcing transition
  • a site exit
  • a merger integration
  • a severe audit issue
  • repeated confusion over operational ownership

That’s when the architecture needs to sharpen.

Example walkthrough: modeling a branch cash withdrawal service the right way

Let’s make this concrete.

A customer withdraws cash either through an ATM or, in some branch formats, through an assisted channel supported by branch staff and local equipment.

The temptation is to model this either too abstractly or too mechanically. The better approach is to show the cross-layer dependency chain clearly enough to support decisions.

At the top, you have the business service: cash withdrawal.

Below that, a set of application services:

  • transaction authorization
  • fraud screening
  • account balance validation
  • device management support
  • customer notification or journaling where relevant

Then technology services:

  • branch connectivity
  • compute hosting
  • messaging/event streaming, maybe Kafka for operational events or fraud telemetry
  • IAM or certificate-based device identity
  • monitoring and management services

Then the physical elements:

  • ATM equipment
  • branch facility
  • WAN distribution network
  • cash cassette material
  • perhaps a secure room or cash handling area if it matters to the control design

That is enough to answer some very practical questions:

  • what happens if a branch closes?
  • where are the single points of failure?
  • what is bank-owned versus vendor-managed?
  • does continuity depend on local equipment, network path, or material supply?
  • which teams must sign off a change?

What you do not need is every internal component of the ATM, every maintenance workflow, every consumable, every software package, and every local cable. Unless the decision actually depends on those things, they are noise.
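
The dependency chain above is, in effect, a small directed graph, and the practical questions are reachability queries over it. A minimal sketch, with every element name invented for illustration:

```python
from collections import defaultdict

# "X depends on Y" edges for the walkthrough above; names are illustrative.
depends_on = {
    "Cash withdrawal (business service)": [
        "Transaction authorization (app service)",
        "ATM (equipment)",
    ],
    "Transaction authorization (app service)": [
        "Branch connectivity (tech service)",
        "Compute hosting (tech service)",
    ],
    "Branch connectivity (tech service)": ["Branch WAN link (distribution network)"],
    "Compute hosting (tech service)": ["Data center (facility)"],
    "ATM (equipment)": ["Branch (facility)", "Cash cassettes (material)"],
}

def impacted_by(failed):
    """Everything that transitively depends on the failed element."""
    dependents = defaultdict(set)
    for node, deps in depends_on.items():
        for dep in deps:
            dependents[dep].add(node)
    seen, stack = set(), [failed]
    while stack:
        for parent in dependents[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(impacted_by("Branch WAN link (distribution network)")))
```

Even at this toy scale, the query answers "which services fail if the branch WAN degrades?" directly from the model, which is exactly the kind of decision support that justifies the physical elements being there.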

A realistic complication here is ownership.

The ATM may be third-party managed. Device maintenance may be outsourced. Cash replenishment may involve another provider. Transaction processing remains bank-owned. IAM trust anchors may sit in central infrastructure. Monitoring events may feed a cloud analytics platform. Fraud services may be centralized and highly digital, while the customer interaction still depends on a physical machine in a leased branch.

That split matters. Not because architects enjoy complexity, but because transformation, exit planning, accountability, and incident response all depend on it.

Here’s a simple illustrative view.

Diagram 1: Branch cash withdrawal service with its application, technology, and physical dependencies (illustrative view).

Not perfect notation purity. Good enough for discussion. In practice, that is often the better trade.

When a data center migration needs the Physical Layer more than teams expect

This is another scenario where people think they’re dealing with a purely technology question and then discover they’re not.

Say a bank is moving from an owned data center model to a mix of co-location and cloud. The standard migration pack will usually focus on application disposition, hosting target, landing zones, network segmentation, IAM redesign, integration shifts, maybe Kafka cluster strategy, security controls, and migration waves.

All necessary.

Often insufficient.

Because data center exit has physical dependencies that application migration views tend to hide:

  • network termination points
  • hardware security modules or other regulated appliances
  • card issuance devices
  • secure print and mailing hardware
  • equipment that can’t simply be “cloud migrated”
  • physical media handling or destruction processes
  • custody controls for regulated hardware
  • facility exit sequencing and decommissioning constraints

This is where the Physical Layer brings clarity. It helps show:

  • which facilities still matter
  • which equipment is tied to regulatory or operational control
  • which distribution networks must change to maintain resilience
  • where the real accountability shifts when services move to cloud or co-location

There’s a subtle lesson here that a lot of “cloud-first” rhetoric misses:

Cloud-first does not remove physical architecture. It relocates it and changes the accountability model.

You may no longer own the facility. You may no longer operate the hardware directly. But you still depend on physical environments, physical network paths, physical security arrangements, and sometimes physical devices outside the cloud estate.

That distinction matters enormously during audits and exits.

Diagram: when to model ArchiMate Physical Layer elements in banking (summary).

Relationship patterns that make the model usable

You do not need every possible ArchiMate relationship to get value from the Physical Layer. In fact, a bit of discipline goes a long way.

A few patterns are consistently useful.

Pattern 1:

Business service depends on application service.

Application service depends on technology service.

Technology service is realized by nodes or platforms.

Where relevant, delivery also depends on physical equipment or facilities.

This keeps physical elements connected to actual enterprise outcomes.

Pattern 2:

Facility houses equipment.

Distribution network connects facilities or equipment.

Simple. Familiar. Usually enough.

Pattern 3:

Material is associated with a business process or equipment where supply matters.

Use this sparingly, but when it matters, it really matters.

Pattern 4:

Physical elements are linked to risk, requirement, or constraint viewpoints.

This is underrated. If the branch facility has a control requirement, if the equipment has a custody constraint, if the WAN path represents a concentration risk, link it.

Without relationship discipline, physical views drift into isolated operational sketches. And once they’re isolated, they stop being architecture.
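
One way to keep that discipline honest is a simple traceability check: flag any physical element that never traces forward to a business service. A sketch, assuming a hypothetical repository export of supports-relationships as pairs (all names invented):

```python
# Illustrative repository lint: find physical elements with no path
# to any business service. Element names and the export format are
# assumptions for this sketch, not a real tool's API.

physical = {"ATM (equipment)", "Branch WAN link", "Secure room (facility)"}
business = {"Cash withdrawal (business service)"}

# (source, target) means "source supports target"
edges = [
    ("ATM (equipment)", "Device channel (app service)"),
    ("Device channel (app service)", "Cash withdrawal (business service)"),
    ("Branch WAN link", "Cash withdrawal (business service)"),
    # note: nothing links the secure room to any service
]

def untraceable(physical, business, edges):
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)

    def reaches_business(start):
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in business:
                return True
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    return {p for p in physical if not reaches_business(p)}

print(untraceable(physical, business, edges))  # {'Secure room (facility)'}
```

An element that fails this check is not necessarily wrong, but it deserves the question from Mistake 5: what decision is it helping us make?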

Where architects get stuck: ownership, sourcing, and abstraction boundaries

This is where banking gets messy.

The branch premises may be leased.

The ATM may be outsourced.

The network link may be carrier-managed.

The HSM may sit in a shared facility.

The card printer may be operated by a third party under strict contractual control.

The branch router may be bank-standard but managed through a workplace or infrastructure function the business barely understands.

So what do you model?

My answer is: model what the bank owns, what is operated for the bank, and what is simply critical to the bank, even if owned by somebody else. Those are different things.

And they matter differently in transformation.

A dependency you own can be redesigned directly.

A dependency operated for you may require contract change.

A dependency merely critical to you may need contingency treatment, alternate supplier strategy, or explicit risk acceptance.

This is why I prefer architecture that shows dependency and accountability, not just asset possession.

If you only show what the bank owns, you can miss exactly the dependencies most likely to hurt you in an outage or a transition.

A contrarian point: sometimes not using the Physical Layer is the better choice

There are absolutely cases where the right answer is restraint.

If you’re doing application portfolio rationalization, capability heatmaps, early target operating model work, or a strategy paper about channel shift, the Physical Layer may add very little. In those situations, physical detail often creates clutter and false precision.

And false precision is dangerous. It makes immature thinking look mature.

There is also a real cost to unnecessary physical modeling:

  • repository sprawl
  • stakeholder confusion
  • maintenance burden
  • arguments over detail that do not improve decisions
  • views that age quickly and lose credibility

A practical alternative is to capture physical considerations first as requirements, assumptions, risks, or constraints. Then, once a decision genuinely depends on them, promote only the necessary parts into explicit Physical Layer modeling.

That’s a mature move, not a shortcut.

Sometimes the best architecture choice is to say, “not yet.”

Build views people can actually use

One diagram will not satisfy executives, operations teams, and delivery teams.

Don’t try.

For this topic, I usually recommend a small set of audience-specific views:

Executive impact view

Shows customer service to physical dependency. Clear, sparse, outcome-oriented.

Transformation view

Shows current-to-target physical change. Useful for branch modernization, site exit, outsourcing, and cloud transition.

Operational resilience view

Shows facilities, equipment, distribution concentration, and key control points.

Branch archetype view

A reusable model for branch types: full-service, light branch, self-service zone, advisory-only format, and so on.

This matters more than in many ArchiMate topics because physical diagrams clutter quickly. If you overload them, even experienced stakeholders stop reading them.

A little visual discipline helps:

  • use layers sparingly
  • limit legends
  • avoid showing every relationship at once
  • keep the purpose of each view explicit

Here’s a simple transformation-oriented sketch.

Diagram 2: current-to-target physical change for a branch transformation (illustrative sketch).

Again, not trying to win notation awards. Trying to help people make decisions.

A practical playbook

If I had to reduce this to a working method, it would look like this:

  1. Identify the decision. If you cannot name the decision, don't start modeling physical detail.

  2. Identify which business services have real physical dependencies. Not all of them do. Pick the ones that matter.

  3. Select only the physical elements that change design or risk conclusions. This is the discipline most teams lack.

  4. Connect them back to business and technology layers. Otherwise the view will sit in isolation and slowly die.

  5. Validate with operations, facilities, and security teams. If they don't recognize the model, it's probably too abstract.

  6. Define maintenance ownership. If nobody owns updates, the model will decay quickly.

  7. Create a reusable pattern, not one-off artwork. Branch archetypes, data center patterns, secure processing patterns — these scale better than bespoke diagrams.

Two warnings from experience:

If operations cannot recognize the model, it is probably too abstract.

If architects cannot maintain it, it is probably too detailed.

That tension never completely goes away. You manage it.

Mini case: branch modernization went sideways because physical dependencies were “assumed”

A fairly typical story.

A bank wanted to consolidate teller services into self-service zones supported by improved apps, better customer guidance, and upgraded assisted-service flows. On the architecture side, the application work looked ready. IAM had been sorted. New APIs were in place. Event integration back to central monitoring through Kafka was designed. The target branch journey looked efficient.

What got missed was physical reality.

Power resilience in some branches wasn’t sufficient for the new device mix.

Network resilience assumptions didn’t match the actual carrier setup.

Secure device placement conflicted with branch layouts and control rules.

Cash replenishment processes had not been redesigned for the new operating pattern.

A supposedly minor dependency on document printing turned out to be essential for part of the assisted journey.

The result was painfully predictable: rollout delays, branch workarounds, audit concerns, staff frustration, and a measurable drop in customer experience during the first waves.

Would Physical Layer modeling have solved everything? No.

Would it have exposed bad assumptions earlier? Almost certainly.

That’s really the point. Good architecture does not eliminate complexity. It makes the consequential parts visible early enough to act.

The bottom line

Use the ArchiMate Physical Layer selectively.

But take it seriously.

Not every architecture needs it. Plenty of strategy work does not. Plenty of portfolio work does not. Plenty of high-level target-state thinking can safely leave it out.

Banking often cannot.

Especially in branches, cash operations, secure processing, card issuance, site strategy, data center exit, and resilience analysis, the physical estate is not an implementation footnote. It is part of how the enterprise actually works.

So the judgment call is this:

Use the Physical Layer when physical reality materially affects service, risk, compliance, resilience, or transformation.

Ignore it when it adds no decision value.

And resist the temptation to turn your repository into a warehouse inventory system.

If your architecture claims to explain how the bank works, but ignores the facilities, equipment, networks, and materials that customers and operators actually depend on, then however elegant it looks, it is incomplete.

FAQ

Isn’t this what the CMDB is for?

Partly, but not entirely.

A CMDB tracks operational configuration items. Architecture explains decision-relevant structure and dependency. Use the CMDB as a source where helpful. Don’t try to replace it.

How is Equipment different from a Node in practice?

A Node represents computational or execution structure. Equipment is the tangible physical thing. Sometimes one real-world asset is relevant in both ways, but the distinction should reflect the question you’re answering.

Should ATMs be modeled as technology, physical, or both?

Sometimes both. If you need to show the ATM as a customer-facing physical dependency and also as a technology execution point with software and interfaces, represent both aspects carefully rather than collapsing them into one ambiguous object.

Do cloud-heavy banks still need the Physical Layer?

Yes, in selected areas. Cloud changes ownership and control boundaries. It does not erase facilities, network termination, hardware controls, branch devices, or physical concentration risk.

How often should physical views be updated?

Only as often as needed to remain decision-useful. Tie updates to major changes, incidents, programs, and periodic review cycles. If you try to mirror every operational change, you’re probably rebuilding the wrong system.
