UML Package Diagrams for Large Systems: Structuring the Manufacturing Landscape

There is an uncomfortable truth in large manufacturing enterprises that most architecture decks tend to sidestep: the landscape is usually not hard because there are “too many applications.”

That is almost never the real issue.

The real issue is that almost nobody can explain, in plain language, what is actually allowed to depend on what across ERP, MES, SCADA, PLM, quality, warehouse, supplier, analytics, and the ever-expanding integration layer in between. People know the systems. They know the vendors. They know, broadly, where the interfaces sit. But ask three senior stakeholders whether Manufacturing Execution should depend on ERP for line-time decisions, or whether Quality belongs under ERP or beside it, and you will often get three different answers followed by a very political silence.

Nearly every transformation initiative starts the same way: someone draws boxes.

Two steering meetings later, those boxes are dead. Not technically dead. Socially dead. Nobody trusts the picture enough to make decisions from it, so the project slides back into spreadsheets, point-to-point interface lists, and conversations dominated by whichever team says “critical dependency” the loudest.

That is why I still defend UML package diagrams, even though they are often dismissed as old-fashioned, too abstract, or too “software design” for enterprise work. In large manufacturing environments, they become useful precisely when they stop trying to be elegant, complete, or UML-pure. In my experience, they start to matter when they stop acting like documentation ornaments and start behaving like governance tools.

A good package diagram for a large system is not really a drawing of reality. It is a controlled argument about dependency intent.

That distinction matters more than many architecture teams are willing to admit.

Why package diagrams have a bad reputation — and why some of that criticism is fair

The criticism is familiar.

Package diagrams are too high level. Nobody reads UML anymore. C4 is better. ArchiMate is richer. Enterprise architecture should focus on capabilities, processes, value streams, or platform maps. Package diagrams belong to developers.

Some of that is true. Quite a lot of package diagrams are terrible.

I have seen far too many diagrams labeled “target architecture” that contain somewhere between 60 and 100 applications, all on one slide, all connected to all, with middleware in the middle like a web-spinning deity. Every line means something different. Some lines are data flows. Some are API calls. Some are file exchanges. Some are wishful thinking. Nobody can tell which dependencies are strategic, which are temporary, and which are simply accidents that have survived 14 years.

That is not a package diagram. That is an application inventory having a breakdown.

The deeper problem is this: architects often try to represent reality faithfully when what they actually need is to make architectural constraints visible. Those are not the same thing. If you draw everything that exists, the very thing you need to govern disappears into the noise.

Teams also make a few predictable mistakes:

  • they use package hierarchy as a disguised org chart
  • they model interfaces at the wrong level
  • they treat every dependency as equally significant
  • they confuse “contains” with “owns”
  • they use packages as a tidy taxonomy instead of a decision aid

I’m fairly opinionated on this. A package diagram should not try to be fair to every application. It should be unfair in exactly the right way. It should elevate the boundaries that matter and demote detail that distracts.

If that annoys a few domain leads, good. Architecture is not a popularity contest.

Before the notation debate, tell the manufacturing story

Let me ground this in a scenario that is painfully common.

Imagine a global discrete manufacturer. Corporate runs SAP ERP. Engineering uses PLM for product structures and change control. Plants run different MES platforms depending on region, age, and what got approved five years ago. On the shop floor you have SCADA, PLC-connected control systems, historians, and a scattering of OT vendors no one wants to touch during peak season. Quality runs a QMS for non-conformance and CAPA. Distribution centers use WMS. Suppliers connect through EDI and some newer API-based collaboration services. There is middleware. There is Kafka. There is a cloud data lake. There is a BI estate that insists it is becoming a data product platform.

Then the business acquires two plants.

One is a greenfield smart factory with modern edge connectivity, event-driven telemetry, cloud-managed observability, and a MES stack that actually has APIs somebody can use. The other is a brownfield site where “integration” means CSV drops, custom SQL, and a line controller that only one contractor fully understands.

Leadership asks for what leadership always asks for: “one integrated application landscape.”

That phrase sounds harmless. It is not.

Because the same business capability now shows up in multiple systems. Scheduling is partly ERP, partly plant-level. Quality data exists in MES, QMS, and local historian contexts. Engineering changes originate in PLM but are interpreted differently by plants. Warehouse events affect production readiness. Traceability spans ERP batches, MES work orders, machine telemetry, and quality holds. Plant autonomy conflicts with enterprise standardization, and nobody agrees on whether dependency direction should follow policy, process, or operational reality.

This is where package diagrams earn their keep.

Not because they solve integration. They do not.

Not because they model every interface. They should not.

They matter because they force explicit grouping and explicit dependency discipline before the integration work disappears into tactical exceptions.

They are the first battle map.

What a package means here — and what it definitely does not mean

In enterprise architecture, a package is only useful if we stop overloading it.

In the context of large manufacturing landscapes, I use a package to mean a stable dependency container. That container might represent a bounded application domain, a platform responsibility area, a portfolio grouping with governance meaning, or a layer of architectural accountability.

It does not automatically mean a deployable component. It does not automatically map to one team. It is not just a vendor product box. And it is not the same thing as a business capability tile from a strategy workshop.

That sounds messy, and honestly it is a little messy. But it is the right kind of mess.

For a manufacturing landscape, sensible package candidates might include:

  • Enterprise Planning
  • Plant Execution
  • Shop-Floor Control
  • Engineering & Product Data
  • Quality & Traceability
  • Logistics & Warehousing
  • Partner & Supplier Collaboration
  • Data & Intelligence
  • Integration Services

Some of those are conceptual domains. Some are more platform-centric. Some are close to product clusters. In a textbook, that inconsistency might look wrong. In the field, it is often exactly what makes the model readable.

I would rather have a slightly impure package model that executives and plant architects can debate than a perfectly pure one that nobody uses.

The first mistake almost everyone makes: drawing by application instead of by dependency gravity

Application inventories are a bad starting point for package diagrams.

That catches some teams off guard because inventories feel concrete. They are auditable. You can export them from CMDBs and portfolio tools. They look like facts.

But facts are not structure.

The better starting point is what I call dependency gravity: which systems pull others into their change orbit, where process orchestration is actually concentrated, where master data enters or gets reinterpreted, and where a failure propagates across plants.

In manufacturing, ERP often has enormous enterprise gravity around finance, procurement, supplier master, and formal order structures. But inside a plant, MES may have stronger operational gravity because actual execution sequencing, work instruction context, labor capture, and production status coherence live there. PLM may have fewer interfaces than ERP and still dominate dependency direction because engineering changes reshape manufacturing behavior, quality plans, and material interpretation.

That is the thing people miss. Gravity is not the same as application size.

When identifying packages, I usually test for four forces first:

  • change coupling: which systems repeatedly need to evolve together
  • ownership boundary: who can approve changes and under what governance
  • integration failure blast radius: if this breaks, what stops
  • data authority and lifecycle: where truth is asserted and where it is only consumed or transformed
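As a workshop aid, those four tests can be turned into a rough scoring sketch. This is illustrative only: the candidate groupings, rating scale, and scores below are invented for the example, not a method you should take as doctrine.

```python
# Hypothetical triage sketch for "dependency gravity".
# Each candidate grouping is rated 0-3 per force; higher totals
# suggest a stronger case for treating the grouping as one package.

FORCES = ("change_coupling", "ownership_boundary",
          "blast_radius", "data_authority")

def gravity_score(ratings: dict) -> int:
    """Sum the 0-3 ratings across the four gravity forces."""
    return sum(ratings.get(force, 0) for force in FORCES)

# Invented example candidates for a workshop discussion.
candidates = {
    "Plant Execution + Shop-Floor Control": {
        "change_coupling": 3, "ownership_boundary": 2,
        "blast_radius": 3, "data_authority": 2},
    "ERP + BI dashboards": {
        "change_coupling": 1, "ownership_boundary": 0,
        "blast_radius": 1, "data_authority": 1},
}

# Rank the candidates: the high-gravity pairing surfaces first.
for name, ratings in sorted(candidates.items(),
                            key=lambda item: gravity_score(item[1]),
                            reverse=True):
    print(f"{gravity_score(ratings):>2}  {name}")
```

The point of the exercise is not the numbers. It is that the room has to argue about each rating, which is exactly the boundary conversation the diagram needs.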

If you skip this and simply group by application category, you get a neat taxonomy. It may even look sophisticated. But it has almost no architectural value because it tells you nothing about legal dependency direction or risk concentration.

That is the whole game.

A table you can actually use

I am not a big believer in method tables for their own sake, but a simple triage table built from the four gravity tests above (change coupling, ownership boundary, failure blast radius, and data authority) has held up well in workshops. It is not doctrine. It is field-tested triage.

What I like about these heuristics is that they force architecture back into operational reality. Not just notation.

A package model that ignores downtime risk is usually the work of people who do not get called when a line stops.

Draw less, mean more

Notation discipline matters. Formal UML purity matters less.

At enterprise scale, the conventions that help are fairly simple:

  • no package should exist without a real reason for containment
  • dependency arrows should show permitted dependency direction, not just the fact that integration exists
  • color should be used sparingly, ideally for ownership or lifecycle criticality
  • stereotypes should be used only when they remove ambiguity
  • do not mix runtime message paths with package dependencies on the same view unless you enjoy confusing everyone

If I were setting a simple enterprise legend, it would be this:

  • package = domain or responsibility container
  • nested package = subdomain or managed sub-landscape
  • dashed dependency = allowed usage or information dependency
  • note = governance rule, transition state, or tolerated exception

That is enough.

If your package diagram requires a 15-item legend, it has already failed. At that point you are not communicating structure; you are defending notation.

From chaotic inventory map to a package structure that actually helps

I have seen the “before” picture many times.

ERP, MES, historians, LIMS, QMS, WMS, TMS, PLM, EDI gateway, MDM, IoT platform, BI, CMMS, data lake, API gateway, IAM, and some unnamed cloud integration service are all shown as peers. Middleware sits in the center as if all meaning passes through it. Maybe Kafka is beside it, usually in a heroic color. The result is visually dense and conceptually empty.

A more useful “after” structure in manufacturing often looks something like this:

  • Enterprise Planning
  • Manufacturing Operations
    - Plant Execution
    - Shop-Floor Control
    - Maintenance
  • Product & Engineering Information
  • Quality & Compliance
  • Supply Chain Execution
  • Integration & Event Backbone
  • Data & Analytics
  • External Collaboration

What becomes visible after packaging is more important than the packages themselves.

For example:

  • shop-floor systems should not depend directly on analytics tools just because dashboards exist
  • supplier collaboration should not consume raw plant control semantics directly
  • quality in regulated production is not just an ERP module with a slightly better report
  • integration backbone is not a business owner and should not quietly become a hidden domain where business logic goes to rot

That last one deserves emphasis. I have spent enough years in integration architecture to say this plainly: if your middleware or event platform starts becoming the place where ownership ambiguity goes to hide, your architecture is decaying. Kafka is powerful. API gateways are useful. Cloud integration platforms can accelerate a lot of work. None of them should become your de facto business model.

Package boundaries also change roadmaps.

Once you can see that Product & Engineering Information should influence Manufacturing Operations through governed interfaces, not through direct custom dependencies into every site solution, your sequencing decisions change. You modernize inside packages first. You stabilize cross-package contracts before ripping out internals. Acquired plants get mapped into the structure, not forced into a fake uniformity on day one.

That is architecture doing real work.

A simple package view in Mermaid

This is deliberately sparse. That is the point.

The awkward but necessary part: package diagrams expose organizational dysfunction

This is where package diagrams stop being harmless.

A serious package diagram usually reveals that nobody agrees on ownership, that integrations reflect funding lines more than process design, and that “enterprise standard” often means “what corporate preferred when the template was approved.”

In manufacturing, I see this all the time.

Each plant has its own MES extensions because the central architecture ignored line-specific constraints and then called the local workarounds noncompliant. Quality systems duplicate ERP master data because governance over data authority was never actually settled. Event streaming platforms become shadow integration teams because nobody wanted to decide whether the enterprise integration function or the domain teams owned event contracts.

If stakeholders are comfortable with the first package diagram, it is probably too vague.

I mean that quite literally. Some discomfort is a sign the model is finally touching the real fault lines.

When facilitating these discussions, the most useful questions are not “what systems do we have?” They are:

  • who is allowed to depend on whom?
  • what dependency would we ban if we were starting over?
  • where do we intentionally tolerate local exceptions?
  • what must never bypass IAM, API policy, or plant network controls?
  • which integrations are strategic, and which are scars?

Those questions change the room.

Some common mistakes, including a few I have made myself

Let’s be candid.

One of the easiest mistakes is creating packages that mirror vendor products. SAP becomes a package. Ignition becomes a package. Siemens becomes a package. Azure becomes a package. This feels practical until procurement changes, platform strategy shifts, or a rollout introduces a second product in the same responsibility area. Then your architectural structure is trapped inside a purchasing decision.

Another classic error is putting integration middleware in the center of the picture as if it owns the enterprise. It does not. Integration is a conduit, a control point, and sometimes a transformation boundary. It is not the business structure. Same goes for your Kafka platform. Same goes for your iPaaS. Same goes for your API gateway. A platform can be central operationally without being central architecturally.

Deep nesting is another one. Beyond two or three levels, enterprise readers stop following. You may enjoy the precision. Nobody else does.

Then there is confusion between data flow and dependency. A nightly batch file from one system to another does not automatically justify a package-level dependency of strategic significance. Sometimes it does. Often it does not. Architecture should care about meaning and governance, not just traffic.

And one mistake I see repeatedly in global manufacturing: forcing all plants into one package model as if variation itself is a design failure. It is not. Some variation is architecture. A semiconductor fab, a food processing site, and a discrete assembly plant do not live at the same operational tempo or regulatory profile. Pretending they do creates beautiful target diagrams and bad implementation programs.

A more painful lesson from my own experience: I once used a package-level target structure to win a governance argument before we had properly validated operating realities in the plants. On paper, it was elegant. We had clean dependency lines, good domain separation, and a clear role for the integration backbone. In practice, one plant’s cutover window, local validation obligations, and machine-vendor constraints made the sequencing impossible. The model was not wrong, exactly. It was premature.

That is an important distinction. Good architecture can still fail if it outruns operational truth.

Why integration architects should care more than they usually do

From an integration lead’s point of view, package diagrams are not decoration. They are one of the few artifacts that can define legal integration paths before the interface portfolio explodes.

They help answer questions like:

  • where should APIs be exposed?
  • where should events be published from?
  • where are canonical models realistic, and where are they fantasy?
  • which consumers may subscribe directly to operational events?
  • where should IAM policies and trust boundaries be enforced?
  • what is an acceptable route into plant-facing data?

Take a manufacturing example. PLM publishes engineering change events. Those events should reach Manufacturing Operations through the Integration & Event Backbone, where contract governance, versioning, security policy, and subscriber control can be applied. PLM should not directly integrate into every machine-facing or plant-local system just because technically it can.
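That routing rule can be sketched in code. The class, topic name, and subscriber names below are invented for illustration; the point is only that subscription is gated by governance approval, which is what a backbone adds over point-to-point integration.

```python
# Sketch: engineering-change events routed through a governed backbone
# rather than point-to-point. All names here are illustrative.

from collections import defaultdict

class EventBackbone:
    """Minimal stand-in for the Integration & Event Backbone."""
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.allowed = set()  # (topic, subscriber) pairs approved by governance

    def approve(self, topic, subscriber):
        self.allowed.add((topic, subscriber))

    def subscribe(self, topic, subscriber, handler):
        # Governance gate: unapproved consumers cannot attach themselves.
        if (topic, subscriber) not in self.allowed:
            raise PermissionError(f"{subscriber} not approved for {topic}")
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBackbone()
bus.approve("plm.engineering-change.v1", "Manufacturing Operations")

received = []
bus.subscribe("plm.engineering-change.v1", "Manufacturing Operations",
              received.append)
bus.publish("plm.engineering-change.v1", {"change_id": "EC-1042"})
# A plant-local system without approval cannot subscribe directly:
# bus.subscribe(...) for it would raise PermissionError.
```

Real platforms enforce this with ACLs, schema registries, and API policy rather than an in-memory set, but the structural idea is the same.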

Likewise, WMS may consume shipment readiness from Enterprise Planning and Manufacturing Operations, but it should not query OT systems directly. If a warehouse planner wants near-real-time status, that is still not a license to bypass architecture.

Supplier ASN flows are another one. They belong in External Collaboration, backed by proper API and identity controls, not buried in ERP customizations where every onboarding becomes a mini-program. IAM is not a side concern here. In federated manufacturing landscapes, who is allowed to access what interface through which trust path is inseparable from structural dependency design.

Package diagrams also help rationalize interface portfolios. They make plant onboarding easier during acquisitions because you can map local systems into a dependency structure instead of debating every interface from scratch. They support modernization sequencing because they reveal where to preserve contracts while replacing internals. And in governance reviews, they give you a language for saying “no” that is more defensible than “the architecture team prefers it.”

A deliberately imperfect example package diagram in prose

Here is a practical package structure for a manufacturing enterprise, described the way I would narrate it in a workshop.

At the top level, the landscape consists of:

  • Product & Engineering Information
  • Enterprise Planning
  • Manufacturing Operations
  • Quality & Compliance
  • Supply Chain Execution
  • Integration & Event Backbone
  • Data & Analytics
  • External Collaboration

Within Manufacturing Operations, I would nest Plant Execution, Shop-Floor Control, and Maintenance. Not because they are all equivalent, but because they sit inside a managed operational dependency space and usually need shared governance around latency, resilience, and local autonomy.

Product & Engineering Information is allowed to depend on Enterprise Planning for released material and commercial alignment, and on Manufacturing Operations for execution-relevant engineering context. Enterprise Planning does not reach down into Shop-Floor Control directly. It should interact through Manufacturing Operations or through governed interfaces exposed via the Integration & Event Backbone.

Quality & Compliance depends on Manufacturing Operations because quality status, genealogy, and execution evidence often originate there. Supply Chain Execution depends on Enterprise Planning for enterprise order and inventory context. External Collaboration depends on the Integration & Event Backbone because partner traffic should enter through controlled integration services, not through random system-specific endpoints.

Data & Analytics depends on published interfaces from several packages, but it has no reverse operational authority. That is a useful rule to make explicit. Analytics can consume. It does not command production.

Now the imperfection: in one plant, Quality & Compliance has a direct dependency on a plant-local historian because a legacy traceability implementation predates the current architecture and remediation would require revalidation effort the business has not funded yet.

Leave that on the diagram.

Seriously. Do not clean it up for aesthetic reasons.

Architecture credibility depends on showing tolerated debt, not airbrushing it away.

An exception-oriented view

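One way to draw that, keeping the tolerated dependency visible instead of airbrushed: a Mermaid flowchart sketch where the labeled dashed arrow marks the plant-local exception described above.

```mermaid
flowchart TD
    QC["Quality & Compliance"]
    MO["Manufacturing Operations"]
    HIST["Plant-Local Historian"]

    QC -.-> MO
    QC -. "tolerated exception: legacy traceability" .-> HIST
```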
It is a tiny example, but it matters. People trust diagrams that admit reality.
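The same allowed-versus-tolerated distinction can even be checked mechanically in governance reviews. A minimal sketch, with package names taken from the structure above and the exception rationale paraphrased from this article; the function and set names are invented:

```python
# Sketch of an allowed-dependency check. Exceptions are kept in a
# separate set so they stay visible instead of merging into the rules.

ALLOWED = {
    ("Product & Engineering Information", "Enterprise Planning"),
    ("Product & Engineering Information", "Manufacturing Operations"),
    ("Enterprise Planning", "Manufacturing Operations"),
    ("Quality & Compliance", "Manufacturing Operations"),
    ("Supply Chain Execution", "Enterprise Planning"),
    ("External Collaboration", "Integration & Event Backbone"),
}

TOLERATED = {
    ("Quality & Compliance", "Plant-Local Historian"):
        "legacy traceability, remediation not yet funded",
}

def check(source: str, target: str) -> str:
    """Classify a proposed dependency as allowed, tolerated, or a violation."""
    if (source, target) in ALLOWED:
        return "allowed"
    if (source, target) in TOLERATED:
        return f"tolerated exception: {TOLERATED[(source, target)]}"
    return "violation"

print(check("Quality & Compliance", "Manufacturing Operations"))  # allowed
print(check("Enterprise Planning", "Shop-Floor Control"))         # violation
```

A script like this will never replace the review conversation, but it makes "who is allowed to depend on whom" cheap to ask repeatedly.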

Where package diagrams stop being useful

This is not a magic artifact.

Package diagrams are weak at detailed orchestration design. They are weak at latency and resilience analysis. They are not enough for cloud deployment topology. They do not replace security trust zoning. They will not solve event schema design, identity federation, or detailed API contract management.

That is fine.

The handoff matters:

  • use sequence diagrams for process interactions
  • use deployment views for runtime and cloud topology
  • use context maps or domain models for semantic boundaries
  • use interface catalogs for contractual detail
  • use security architecture views for IAM, trust zones, and policy enforcement

My strong opinion is that package diagrams are strategic scaffolding, not operational blueprints. If you ask them to do more than that, they become muddy. If you ask less of them, they become decorative.

A practical way to build one without boiling the ocean

The best package diagrams usually start with a transformation question, not a notation exercise.

Are you integrating acquired plants? Rolling out ERP? Harmonizing MES? Building a traceability architecture? Rationalizing middleware? Start there.

Then identify 7 to 12 meaningful packages. More than that and the model is probably too fragmented. Fewer than that and it is usually too vague to govern anything useful.

Next, test each package against ownership and dependency rules. Who approves changes? Where is data authority? What failure matters? What local variation is legitimate? This is also where cloud platform concerns matter. For example, if Data & Analytics is largely on a cloud lakehouse platform and Manufacturing Operations is largely edge-connected and plant-resident, that runtime split may justify package separation even if the business language overlaps.

After that, draw allowed dependency directions first. Not actual interfaces. Allowed directions.

Only then add exceptions.

Validate with plant architects, integration engineers, OT leads, and the people who deal with operational incidents. Not just enterprise governance. I have had package models improved more by one skeptical plant engineer than by three polished architecture review boards.

A few workshop tips from experience:

  • do not rely on projector-only sessions; print the diagram large and mark it up
  • ask each team to challenge one dependency and defend one boundary
  • capture disputes as architecture decisions, not side comments
  • separate “current tolerated” from “target allowed”
  • force naming consistency early; sloppy labels hide sloppy thinking

And then use the diagram. Actually use it. Put it in roadmap conversations, integration design reviews, API governance, acquisition onboarding, and exception approvals. If it just lands in a repository, it will die with the other boxes.

What changes when the package structure is right

You usually do not get fewer systems immediately. That is an important reality to say out loud.

What you get is a governable landscape.

New plants onboard faster because you have a federated structure: an enterprise package view plus local refinements, instead of a blank page each time. Enterprise standard services separate more cleanly from site-specific execution logic. Product introduction and engineering change produce less integration sprawl because dependencies have named pathways. Quality traceability becomes more credible because it is treated as its own structural concern, not as a reporting side effect. ERP stops being the accidental control plane for real-time operations. Modernization priorities improve because you can replace inside a package before breaking cross-package contracts.

That is not glamorous. It is useful.

And useful architecture ages better than glamorous architecture.

One final challenge

Stop asking whether UML package diagrams are fashionable.

That is the wrong question, and frankly it is a convenient distraction.

Ask instead whether your architecture can clearly state which parts of the manufacturing landscape are allowed to depend on which other parts, under what conditions, and with what exceptions. Ask whether that structure is understood by enterprise teams, platform owners, integration engineers, and plant stakeholders alike.

If the answer is no, the problem is not notation.

The problem is that the enterprise has no agreed structure.

A package diagram is valuable because it makes that absence impossible to ignore.

FAQ

Are UML package diagrams too abstract for application portfolio work?

No, but only if they express dependency intent rather than exhaustive inventory.

How are package diagrams different from application landscape maps?

Landscape maps show what exists. Package diagrams should show structural dependency boundaries and permitted direction.

Can package diagrams coexist with ArchiMate or C4?

Absolutely. They answer different questions. I often use package diagrams earlier because they force boundary conversations quickly.

Should each manufacturing plant have its own package diagram?

Usually yes, but as a refinement of an enterprise package model, not as a completely separate universe.

What if reality violates the target package structure?

Show the violations explicitly. Hidden debt is far more dangerous than visible debt.
