If you lead integration architecture in healthcare, you probably know the awkward reality already: the organization has multiple inventories, and none of them really answer the question people ask when something important breaks.
The CMDB knows about servers.
Cyber keeps a spreadsheet of “critical systems.”
Procurement has a vendor list.
Operations maintains a support wiki.
Project teams leave solution diagrams in SharePoint.
And architecture, if we are being honest, often has the neatest pictures and the stalest metadata.
Then somebody asks a very DORA-shaped question:
Which ICT assets support critical services, who owns them, what depends on them, and what happens if they fail?
That is usually the moment the room goes quiet.
In a hospital group, this gets complicated fast. You have an EHR, maybe Epic or Cerner or a regional equivalent. You have imaging platforms, LIS, identity services, patient portals, API layers, old file transfer services nobody really wants to admit are still important, cloud subscriptions hosting integration workloads, maybe Kafka somewhere because one team got serious about event streaming, and a stack of external services for messaging, prescribing, claims, or national exchange.
And as the integration lead, you often know more of the real dependency chain than almost anyone else. You know what breaks downstream when ADT stops. You know which API gateway is quietly holding up half the digital front door. You know that the “simple interface” on the diagram actually depends on certificates, IAM, queues, DNS, firewall policy, managed SQL, and one person in operations who still remembers how failover actually works.
That makes you valuable.
It also creates a trap. You can end up modeling only interfaces and application context, while missing the operational assets that keep the whole thing alive.
That is exactly why Sparx EA is worth using here instead of producing the next heroic spreadsheet. A spreadsheet can list things. EA can model relationships, ownership, hosting, traceability, dependency chains, and different views for different audiences without duplicating the truth five different ways. If you set it up well, it becomes the place where business services, applications, integration assets, platforms, data stores, third parties, and controls can actually connect.
This article is about a practical path, not a theoretical one:
- define what counts as an ICT asset
- design a metamodel that people can maintain
- structure the repository so it does not decay in three months
- populate real healthcare assets
- connect them to critical services
- expose risk, dependency, and ownership views
- avoid the usual traps
I’m writing this from the perspective of someone who has seen hospital estates that looked very well governed on paper and then turned into archaeology digs during incidents.
Start with the problem you already have, not the regulation
I think this matters more than a lot of DORA guidance admits.
If you start with regulation wording alone, teams tend to produce compliance artifacts. They create columns. They classify things. They spend weeks debating definitions. Then, six months later, nobody can use the result in an outage bridge, architecture review, resilience test, or supplier risk discussion.
Start with the operational question instead.
In a realistic healthcare estate, a single business service like patient admission and registration may rely on:
- the EHR registration module
- an HL7 engine routing ADT messages
- an IAM platform for clinician access and downstream provisioning
- a bed management application
- a radiology or lab downstream subscription
- a patient portal context API
- a SQL platform
- virtual or container hosting
- certificates
- monitoring
- backup and recovery tooling
- maybe an SMS provider if notifications are part of onboarding
None of those things sit neatly in one inventory today.
That is the actual problem. DORA just gives it sharper consequences.
So the objective is not “build a DORA repository.” The objective is to create a trustworthy model that can answer impact, dependency, ownership, and resilience questions quickly enough to be useful.
Before touching Sparx EA, decide what “ICT asset” means here
Most teams fail before they even open the tool.
If “asset” means literally everything, your EA repository becomes unusable. You drown in noise, and nobody maintains it. If “asset” means only servers and databases, your traceability is too shallow to answer any meaningful service question.
You need a pragmatic definition.
A workable healthcare definition is this:
An ICT asset is any technology component, platform, application, service, interface, or supporting configuration item required to deliver, secure, monitor, or recover a regulated or important business service.
That is broad enough to matter and still narrow enough to model.
In practice, I would split assets into a small number of categories that architects, operations, and risk teams can all live with:
- business-facing digital services
- applications
- integration assets
- infrastructure or platform assets
- security and operational tooling
- externally provided ICT services
- critical data repositories where relevant
That means you absolutely include things like:
- EHR application
- Mirth or NextGen Connect integration engine
- FHIR gateway
- Kafka event platform if it supports operationally critical data movement
- Azure subscription or tenant context hosting integration services
- radiology archive platform
- SSO / IAM platform
- SIEM or managed SOC service
- managed backup platform
- patient messaging SaaS
And at least initially, you do not model as first-class assets:
- every workstation
- every switch port
- every HL7 message flavor
- every test clone
- every clinic scanner unless it is directly tied to critical service continuity
This is where politics appears very quickly.
Infrastructure teams often want maximum technical granularity because they are used to CI-level detail. Risk teams want accountability and evidence. Architecture has to sit in the middle and design a model that scales without collapsing under too much detail. I learned the hard way that if you do not hold that line early, the repository becomes a duplicate CMDB with prettier diagrams and worse data.
Step 1 — Identify the healthcare services DORA questions point back to
Inventory should begin from service impact, not technology cataloging.
That sounds obvious. Teams still skip it.
A DORA-style ICT inventory only becomes useful when you can trace assets back to important business services. Without that, you just have a long list of technology objects and no way to prioritize what matters.
For a healthcare provider, a sensible first-pass service list might include:
- patient admission and registration
- clinical documentation
- medication management
- lab order and result exchange
- diagnostic imaging workflow
- patient portal access
- claims and billing submission
- identity and access for clinicians
Keep it short at first. Honestly, very short.
In Sparx EA, I would create Business Service elements, or use your existing capability/service pattern if your metamodel already exists and people actually understand it. Add a few key properties:
- service criticality
- service owner
- regulatory sensitivity
- recovery expectation
- maybe operating hours if that matters to support
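For illustration only, a service record with those properties amounts to something this small. The field names and example values here are assumptions, not a Sparx EA schema:

```python
from dataclasses import dataclass

# A Business Service record carrying the handful of properties suggested
# above. Field names and values are illustrative.
@dataclass
class BusinessService:
    name: str
    criticality: str            # e.g. "critical", "high", "medium"
    owner: str
    regulatory_sensitivity: str
    recovery_expectation: str   # e.g. "RTO 4h"
    operating_hours: str = "24x7"

admission = BusinessService(
    name="Patient admission and registration",
    criticality="critical",
    owner="Patient Access Directorate",  # illustrative owner name
    regulatory_sensitivity="high",
    recovery_expectation="RTO 4h",
)
print(admission.operating_hours)  # 24x7
```

If you cannot fill a record this small for a service, that is itself useful information: the service is not yet understood well enough to trace assets from.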
Do not disappear into process modeling. That is another common detour. You do not need every BPMN flow for every hospital department before you can model the asset chain behind medication management.
The mistake I see over and over is starting with 1,500 applications because somebody has an application register, then discovering much later that nobody knows which of those really support the most important services.
That is backwards.
Step 2 — Design a metamodel people can actually maintain
This is the center of the whole thing.
Sparx EA does not fail because it lacks capability. It fails because teams create a metamodel that is too abstract, too pure, too large, or too inconsistent to survive contact with real delivery and operations teams.
My advice is simple: use a deliberately small set of asset classes.
Suggested core asset classes
- Business Service
- Application
- Integration Component
- Platform / Infrastructure Service
- Data Repository
- Security Service / Control Platform
- External ICT Third Party Service
- Support Team / Owner
- Business Process if useful
- Location / Hosting Context
You can implement these with ArchiMate if your organization already uses it and people are fluent enough not to spend every workshop arguing over notation. If not, use a lightweight custom profile in EA with clear stereotypes and tagged values. I’m not precious about UML purity here. Stakeholder comprehension beats notation elegance every time.
That may irritate purists. Fine.
Suggested relationship types
Use relationship names people understand:
- Business Service is enabled by Application
- Application uses Integration Component
- Application runs on Platform
- Application stores data in Data Repository
- Asset is owned by Team
- Asset is provided by Third Party
- Asset is hosted in Location
- Asset depends on Asset
- Asset supports Business Process
- Security Service protects/monitors Asset
You can map these to formal connector types under the hood if you want. What matters is consistency.
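If you want to enforce that consistency mechanically, the agreed vocabulary can live in one small table that import jobs and validation scripts check against. A sketch, mirroring the class and relationship names above (this is not a Sparx EA API):

```python
# Agreed (source class, relationship, target class) triples, mirroring
# the relationship vocabulary listed above.
ALLOWED = {
    ("Business Service", "is enabled by", "Application"),
    ("Application", "uses", "Integration Component"),
    ("Application", "runs on", "Platform"),
    ("Application", "stores data in", "Data Repository"),
    ("Asset", "is owned by", "Team"),
    ("Asset", "is provided by", "Third Party"),
    ("Asset", "is hosted in", "Location"),
    ("Asset", "depends on", "Asset"),
    ("Asset", "supports", "Business Process"),
    ("Security Service", "protects", "Asset"),
    ("Security Service", "monitors", "Asset"),
}

def check_relationship(source_class, verb, target_class):
    """True if the triple is part of the agreed vocabulary."""
    return (source_class, verb, target_class) in ALLOWED

print(check_relationship("Application", "uses", "Integration Component"))  # True
print(check_relationship("Application", "emails", "Team"))                 # False
```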
Key tagged values
Do not go wild. Start with metadata you will actually use:
- asset criticality
- service criticality
- confidentiality / integrity / availability profile
- production status
- lifecycle state
- recovery tier
- support owner
- technical owner
- vendor
- contract reference
- data classification
- interface type
- deployment model
- incident priority expectation
Naming matters more than most people think. If one team calls it “Support Owner,” another calls it “Ops Team,” and another “Resolver Group,” you will spend months cleaning reports.
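Most of that cleanup can be avoided by normalising synonyms at import time instead of after reports break. A sketch, using the synonyms above; the canonical names are assumptions:

```python
# Map the synonyms different teams use onto one canonical tagged value name.
CANONICAL = {
    "support owner": "Support Owner",
    "ops team": "Support Owner",
    "resolver group": "Support Owner",
    "technical owner": "Technical Owner",
    "tech owner": "Technical Owner",
}

def normalise_fields(record):
    """Rename known synonym fields to their canonical tagged value name."""
    return {CANONICAL.get(key.strip().lower(), key): value
            for key, value in record.items()}

imported = {"Ops Team": "Integration Support", "Vendor": "ExampleVendor"}
print(normalise_fields(imported))
# {'Support Owner': 'Integration Support', 'Vendor': 'ExampleVendor'}
```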
Also, avoid creating 40 custom stereotypes before the first inventory workshop. I have seen this happen more than once. It feels productive because the repository looks sophisticated. In reality, nobody knows which object type to use, and the first bulk import creates chaos.
A metamodel should be boring. Usually that is a very good sign.
A table worth building before the repository grows
This kind of table is not glamorous, but it prevents a lot of nonsense later: one row per asset class, with columns for what qualifies, representative examples, what explicitly does not belong, and which role maintains it.
I would build it before scaling the repository. It becomes the reference point for onboarding architects and settling recurring debates about what belongs where.
Without something like this, every domain team invents its own interpretation.
Step 3 — Set up the EA repository so the model does not decay
Structure matters more than people think.
A repository without clear package discipline turns into a junk drawer very quickly. For this purpose, a simple package structure works well:
- 00 Governance and Metamodel
- 10 Business Services
- 20 Applications
- 30 Integration Estate
- 40 Platforms and Infrastructure
- 50 Data Repositories
- 60 Security Services
- 70 Third-Party ICT Services
- 80 Views for Risk, Operations, Audits
- 90 Reference Lists and Archived Assets
A few practical rules help a lot:
- separate canonical asset records from diagrams
- separate imported source data from curated architecture objects
- use controlled vocabularies for tagged values
- enforce naming conventions
- use status values consistently
- use saved searches and matrix views to detect gaps
And be careful with edit permissions. If too many people can modify core asset records directly, quality falls apart faster than anyone expects. In one program I worked on, five architects and three analysts were all “helpfully” adjusting owner fields and names. Within two months we had duplicate integration platforms, contradictory lifecycle states, and three spellings of the same IAM service.
Governance is not bureaucracy here. It is basic hygiene.
The numbered package list above is the whole repository sketch: canonical asset records live under their domain packages, diagrams and stakeholder views are kept separately.
Nothing fancy. That is enough.
Step 4 — Model the integration estate first, because that is where healthcare complexity hides
This is where I get opinionated.
In most healthcare estates, the integration layer is the most under-modeled source of systemic ICT risk.
Applications get attention. Infrastructure gets inventoried. The integration estate sits in between, often treated as arrows on a diagram rather than assets in its own right. That is a mistake.
Integration components connect clinical systems, corporate systems, patient channels, and external networks. They are often shared. They are often operationally critical. Ownership is often fuzzy. Outages create cascading failures that are far larger than the component itself.
Model them as assets.
Real examples:
- an HL7 engine routing ADT from the EHR to bed management, LIS, RIS, and identity provisioning
- a FHIR gateway exposing patient demographics and appointment data to the portal and mobile app
- a Kafka platform moving near-real-time event streams for analytics, notifications, or operational workflow
- a secure file transfer service sending lab batches to external partners
- an integration broker connecting radiology orders to imaging systems
The key point is this: the integration engine is not just a line between applications. It is an operational asset with hosting, ownership, support expectations, dependencies, certificates, monitoring, and recovery characteristics.
In EA, represent:
- the integration component itself
- the applications that use it
- the platform it runs on
- the IAM or certificate services it depends on
- the business services it supports
Do not model every individual transaction variant as a separate first-class asset unless there is a strong operational reason. Usually, interface clusters are enough at first. For example, “ADT Distribution Service” is often more useful than 47 separate message transformations.
And identify whether the integration service is:
- shared
- dedicated
- embedded inside an application
That distinction matters during outages and modernization planning.
The classic mistake is drawing application context diagrams and assuming that counts as an inventory. It does not. A picture showing EHR connected to LIS via “HL7” tells you almost nothing useful when the integration cluster fails on a Sunday morning.
Step 5 — Bring in infrastructure and cloud platforms, but stop before atomization
Good-enough infrastructure modeling for DORA is not the same as complete infrastructure modeling.
That distinction saves a lot of pain.
In a healthcare estate, you probably have a mix like this:
- on-prem EHR database cluster in a hospital data center
- virtual server estate supporting legacy clinical apps
- Azure-hosted API and integration services
- maybe AKS or OpenShift for container workloads
- SaaS patient engagement platform
- managed backup and disaster recovery service
Model the platform services and operationally meaningful nodes. Not every VM by default.
Capture things like:
- compute cluster
- SQL or managed database service
- container platform
- identity platform
- network edge service
- backup platform
- observability platform
If a CMDB exists, consume it selectively. Do not mirror every CI into EA. That is one of the fastest ways to destroy the usefulness of the architecture repository. EA should hold the architecture-relevant operational picture, not become a second-rate infrastructure database.
I usually ask a simple question: does this infrastructure item have distinct risk, ownership, recovery, or dependency significance at the level we need to answer service-impact questions?
If yes, model it.
If no, reference it through a higher-level platform asset.
This is especially important in cloud environments. “Azure” is not an asset. But an Azure subscription hosting integration workloads, an API Management instance, a Key Vault service, an Event Hub namespace, or a managed PostgreSQL service might be.
Granularity should follow decision-making.
Step 6 — Add third-party ICT services early, not as an afterthought
This one gets neglected constantly.
Healthcare organizations are deeply dependent on external ICT services: claims clearinghouses, e-prescribing networks, cloud messaging providers, managed SOCs, outsourced hosting, identity federation services, imaging exchange services, and more. If those dependencies sit only as free text in application descriptions, you do not have usable supplier traceability.
Model the supplier and the service.
At minimum, add:
- supplier as organization
- service provided by supplier
- contract or service agreement reference
- concentration risk note
- exit complexity indicator
A few healthcare examples:
- patient portal depends on external messaging SaaS
- claims submission depends on a clearinghouse
- integration platform depends on cloud identity provider
- national health information exchange depends on an external network service
- managed SOC monitors core clinical platforms
This matters because outage and concentration questions are almost always cross-cutting. If one provider underpins identity, messaging, backup, and digital channels, that is a very different risk picture than four separate providers.
The mistake here is very common: putting vendor names in free text on application records and calling that supplier modeling. That gives you no service-level traceability and no meaningful dependency view.
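Once supplier links are actual relationships rather than free text, concentration questions become a one-pass aggregation. A sketch with invented provider names (the service list echoes the examples above):

```python
from collections import defaultdict

# (critical service, external ICT provider) pairs. Provider names are
# made up for illustration; the services echo the examples above.
DEPENDENCIES = [
    ("patient portal",      "AcmeCloud"),   # messaging SaaS
    ("identity federation", "AcmeCloud"),
    ("managed backup",      "AcmeCloud"),
    ("claims submission",   "ClearCo"),     # clearinghouse
    ("imaging exchange",    "MedNet"),
]

def concentration(deps, threshold=2):
    """Providers underpinning more than `threshold` distinct services."""
    by_provider = defaultdict(set)
    for service, provider in deps:
        by_provider[provider].add(service)
    return {p: sorted(s) for p, s in by_provider.items() if len(s) > threshold}

print(concentration(DEPENDENCIES))
# {'AcmeCloud': ['identity federation', 'managed backup', 'patient portal']}
```

One provider under three critical services is exactly the kind of picture that never falls out of vendor names buried in description fields.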
Step 7 — Link assets to ownership, support, and recovery expectations
This is where the model becomes operationally credible.
An asset inventory with no ownership or recovery metadata is mostly decorative.
You need to distinguish between:
- business owner
- technical owner
- support team
- service manager
- external provider
Those are not the same thing, and in healthcare they are often split awkwardly across central IT, digital teams, clinical systems teams, and suppliers.
Then add resilience-related attributes where they matter:
- service hours
- support tier
- RTO / RPO
- backup method
- failover pattern
- monitoring coverage
Medication management may require very different recovery expectations than a staff rota application. The architecture repository should make that visible, not assume equal treatment across all systems.
And if ownership is unclear, mark it clearly as unclear. Do not invent a placeholder and hope someone fixes it later. They usually do not.
I have seen too many inventories where every unresolved ownership issue gets assigned to “Architecture” or “Infrastructure.” That may make the spreadsheet complete. It does not make the model true.
A worked example: patient admission to downstream clinical messaging
Let’s make this concrete.
A patient is admitted in the EHR. An ADT message is generated. That event updates bed management, lab, radiology, identity provisioning, and gives the patient portal enough context to show the encounter correctly. Maybe an SMS confirmation is triggered through an external provider.
That is a very ordinary healthcare scenario.
The model might include:
- Business Service: Patient admission and registration
- Applications: EHR, bed management, LIS, RIS, patient portal
- Integration Components: HL7 engine, API gateway, maybe Kafka event bridge if used
- Platforms: Windows cluster, SQL service, Azure API hosting
- Security Services: IAM, SIEM
- External ICT Service: SMS notification provider
And the relationships matter more than the boxes.
Now ask practical questions:
What fails if the HL7 engine cluster is unavailable?
Not just one interface. Potentially downstream admissions visibility, bed-flow updates, lab awareness, radiology scheduling context, and identity provisioning timing.
Which critical services depend on Azure API Management?
Maybe not just the portal. Possibly appointment access, identity federation, mobile app functions, maybe partner APIs.
Which third parties are in the patient onboarding chain?
Messaging provider, cloud hosting platform, maybe identity provider, maybe national exchange.
Who owns the recovery plan for the admission integration estate?
That answer is often surprisingly unclear. Which is exactly why the model is useful.
And the real world is always messier than the diagram. Some downstream systems may still receive flat files. Some updates may be asynchronous. One hospital site may use a different routing pattern. The point is not to erase the mess. The point is to make enough of it visible to reason about it.
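The blast-radius question above is, mechanically, a transitive closure over reverse dependencies. A minimal Python sketch using assets from the worked example; the edge directions and the asset list are assumptions for illustration, not exported repository data:

```python
from collections import deque

# "X depends on Y" edges drawn from the worked example (illustrative).
DEPENDS_ON = {
    "bed management":        ["HL7 engine"],
    "LIS":                   ["HL7 engine"],
    "RIS":                   ["HL7 engine"],
    "identity provisioning": ["HL7 engine"],
    "patient portal":        ["API gateway"],
    "API gateway":           ["Azure API hosting", "IAM"],
    "HL7 engine":            ["Windows cluster", "SQL service", "IAM"],
    "EHR":                   ["SQL service", "IAM"],
}

def blast_radius(failed_asset):
    """Everything transitively dependent on `failed_asset` (reverse BFS)."""
    reverse = {}
    for asset, deps in DEPENDS_ON.items():
        for dep in deps:
            reverse.setdefault(dep, []).append(asset)
    impacted, queue = set(), deque([failed_asset])
    while queue:
        node = queue.popleft()
        for dependant in reverse.get(node, []):
            if dependant not in impacted:
                impacted.add(dependant)
                queue.append(dependant)
    return sorted(impacted)

print(blast_radius("HL7 engine"))
# ['LIS', 'RIS', 'bed management', 'identity provisioning']
```

The same traversal starting from IAM or the SQL service is usually the more sobering run, because shared platforms pull in assets nobody associated with them.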
Step 8 — Build the views different stakeholders will actually use
One inventory is not one diagram.
This is another Sparx EA strength if you use it properly. The same underlying model can support several viewpoints:
- executive heatmap of critical services and top dependencies
- operational dependency view for support teams
- third-party dependency map
- integration criticality landscape
- audit traceability view from service to asset to owner
Use relationship matrices. Use saved searches. Build reports for missing owner, missing lifecycle, missing hosting context, missing RTO. Tailor diagrams for the audience. Do not force everyone to consume one giant “enterprise landscape” picture.
Nobody can really read those. People just pretend they can.
Executives want concentration and criticality.
Ops wants support and dependency chains.
Audit wants traceability and evidence.
Architecture wants patterns, shared services, and modernization hotspots.
Same model. Different views.
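The gap reports mentioned above — missing owner, missing lifecycle, missing hosting context, missing RTO — are cheap to prototype over exported asset records before turning them into saved searches. A sketch; the field names and example records are assumptions:

```python
# Asset records as they might come out of a repository export
# (field names and values are illustrative).
ASSETS = [
    {"name": "HL7 engine",   "owner": "Integration Team", "lifecycle": "active", "rto": "4h"},
    {"name": "FHIR gateway", "owner": "",                 "lifecycle": "active", "rto": ""},
    {"name": "SMS provider", "owner": "Digital Team",     "lifecycle": "",       "rto": "24h"},
]

CHECKS = ["owner", "lifecycle", "rto"]

def gap_report(assets):
    """One list per check: which assets are missing that field."""
    return {check: [a["name"] for a in assets if not a.get(check)]
            for check in CHECKS}

print(gap_report(ASSETS))
# {'owner': ['FHIR gateway'], 'lifecycle': ['SMS provider'], 'rto': ['FHIR gateway']}
```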
Where teams usually get this wrong
Bluntly, the most common failure mode is treating DORA inventory as a compliance spreadsheet exercise.
That approach produces lists, not understanding.
Other mistakes I keep seeing:
- modeling applications but not shared integration and security services
- confusing process maps with asset inventory
- trying to synchronize every CMDB detail into EA
- failing to define ownership fields consistently
- modeling only production and ignoring DR or backup dependencies
- using inconsistent naming for the same service
- not capturing third-party dependencies precisely enough
And a very healthcare-specific anti-pattern: assuming the EHR is “the platform” and therefore not modeling all the surrounding operational services it relies on.
That mindset hides a lot of risk. The EHR may be central, but it still depends on identity, integration, storage, backup, monitoring, network services, third-party connectivity, and downstream clinical workflow tools. During incidents, those dependencies matter enormously.
Step 9 — Govern the inventory like a product, not a one-time project
This is not a project deliverable you finish and admire.
It needs an operating model.
A sensible split looks like this:
- architecture owns metamodel and quality rules
- domain architects maintain key asset structures
- service owners validate business criticality
- operations verify hosting and resilience attributes
- supplier management validates third-party references
- cyber reviews security service coverage and dependency assumptions
Use a cadence:
- monthly quality review for missing metadata
- quarterly review of critical service dependencies
- change-triggered update for major platform or supplier changes
Define minimum viable completeness. That phrase helps. You do not need perfect detail everywhere. You need enough trustworthy information to answer important questions. Measure coverage before demanding perfection.
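Minimum viable completeness can be made measurable rather than aspirational. A sketch; the required fields, threshold, and example records are assumptions:

```python
# "Complete enough" here means: every critical asset has owner, hosting
# context, and recovery tier populated. Fields are illustrative.
REQUIRED = ("owner", "hosting", "recovery tier")

ASSETS = [
    {"name": "EHR",        "critical": True,  "owner": "Clinical Systems",
     "hosting": "on-prem", "recovery tier": "1"},
    {"name": "HL7 engine", "critical": True,  "owner": "Integration",
     "hosting": "",        "recovery tier": "1"},
    {"name": "Staff rota", "critical": False, "owner": "",
     "hosting": "SaaS",    "recovery tier": ""},
]

def coverage(assets):
    """Share of critical assets with every required field populated."""
    critical = [a for a in assets if a["critical"]]
    complete = [a for a in critical if all(a.get(f) for f in REQUIRED)]
    return len(complete) / len(critical) if critical else 0.0

print(coverage(ASSETS))  # 0.5 (the HL7 engine is missing its hosting context)
```

A single number like this, tracked per domain over time, is usually more persuasive in governance meetings than any completeness exhortation.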
My honest view: if no one owns curation, the repository becomes a museum of confident inaccuracies. It looks impressive right up until someone relies on it during a major incident.
Step 10 — Use the model for decisions, otherwise people stop maintaining it
This is probably the most important point in the article.
If the model is only used for audit response, maintenance quality will decay. People update what helps them do real work.
A good ICT asset inventory in EA should support decisions like:
- assessing outage blast radius
- prioritizing modernization
- identifying shared integration bottlenecks
- reviewing third-party concentration risk
- planning resilience testing
- supporting audit and regulatory response
Real examples from healthcare:
- deciding how risky it is to replace a legacy HL7 engine
- consolidating identity services across hospital entities
- testing whether a patient portal is too dependent on one SaaS provider
- validating recovery design for lab result exchange
- assessing whether Kafka should become a strategic shared event platform or remain a niche implementation
- understanding whether API gateway failure affects just digital channels or also internal care workflows
Once incident reviews and investment decisions start referencing the repository, data quality improves naturally. Teams stop seeing it as architecture overhead and start seeing it as useful operational memory.
That is when the model begins to stick.
A sensible implementation sequence for the first 90 days
Do not start with enterprise-wide ingestion unless your source data is already disciplined. It almost never is.
A realistic first 90 days looks more like this:
Weeks 1–2
- define scope
- agree metamodel
- set naming rules
- identify shortlist of critical healthcare services
Weeks 3–5
- model top 20 applications and integration assets supporting 3–5 critical services
- focus on patient admission, identity, lab exchange, portal access, medication if those are central
Weeks 6–8
- add hosting and platform dependencies
- add owners, support teams, key third-party services
- capture recovery tier and monitoring coverage where available
Weeks 9–10
- create stakeholder views
- run gap reports for missing owner, missing hosting context, missing lifecycle, missing third-party reference
Weeks 11–12
- validate with operations, cyber, service owners, supplier management, and risk/internal audit
That sequence works because it proves value early. You can answer real questions on a small but important slice of the estate before scaling.
What “good” looks like
Good does not mean perfect.
Good means trustworthy enough that when someone asks about a critical healthcare service, you can quickly identify:
- supporting applications
- integration assets
- platforms and hosting contexts
- security services
- third parties
- owners
- likely failure points
- recovery expectations
That is the practical test.
Can you start from patient admission and registration and trace down to the EHR, HL7 engine, API layer, IAM, SQL platform, Azure services, SMS provider, support team, and recovery assumptions without opening six spreadsheets and three old Visio files?
If yes, you are in a decent place.
If not, the problem is not notation. It is operational truth.
And that is really the point. Modeling a DORA ICT asset inventory in Sparx EA is less about elegant architecture language and more about representing the estate honestly enough to support resilience, accountability, and decisions. For integration leads especially, that means resisting the urge to model only message flows.
Model the things that keep the message flows alive.
Optional FAQ
Do I need ArchiMate to do this properly in Sparx EA?
No. If your organization already uses ArchiMate well, use it. If not, a lightweight custom profile is often more maintainable.
Should I import CMDB data into EA or keep them separate?
Usually keep them separate and consume selectively. EA should not become a duplicate CI repository.
How detailed should interface modeling be?
Detailed enough to identify critical shared integration assets and dependency clusters. Not so detailed that every message variant becomes a maintenance burden.
Do SaaS platforms count as ICT assets even if we do not host them?
Yes. If the service is necessary to deliver, secure, monitor, or recover an important business service, it belongs in scope.
What is the minimum ownership data to insist on?
Business owner, technical owner, support team, and external provider where relevant. Anything less becomes hard to operationalize.
How do I handle shared services used by both clinical and corporate processes?
Model them once, relate them to multiple business services, and let criticality be driven by dependency context rather than duplication.
If I had to reduce all of this to one practical lesson, it would be this: start from the service, model the integration layer properly, and keep the metamodel simpler than your instincts want. In healthcare, complexity arrives on its own. You do not need to help it.
Frequently Asked Questions
What is DORA and why does it matter for enterprise architects?
DORA requires financial institutions to demonstrate ICT resilience, manage third-party risk, and maintain an ICT asset register. Enterprise architects are central because DORA requires a structured, traceable view of ICT assets and dependencies.
How do you build a DORA ICT asset inventory in Sparx EA?
Model each ICT asset as an Application Component or Technology Node with tagged values capturing criticality, supporting function, third-party dependencies, RTO, and RPO. Link assets to business functions using Serving and Assignment relationships.
What ArchiMate elements map to DORA ICT asset categories?
Application systems → Application Components; infrastructure → Technology Nodes; third-party services → Application Services from external components; data assets → Data Objects.