By the middle of week two, I was already wondering whether we’d made a very expensive mistake.
We were in a model review with architects from three workstreams in a global manufacturing transformation. One team had built an application catalog under a domain package. Another had created a second version of the same catalog inside a solution architecture area because they “needed to move faster.” A third group had modeled the same plant quality platform using a completely different naming convention. And one of the most senior architects on the program had quietly opted out altogether, still circulating Visio and PowerPoint packs by email and insisting the repository was “not mature enough for real work.”
That was the moment the Sparx EA rollout stopped looking like a training exercise and started revealing itself as an operating model problem.
The client was a large manufacturer in the middle of a cloud-led transformation. Plants in different regions were running a mix of legacy MES platforms, ERP customizations, historians, quality systems, warehouse tools, and a fair amount of local ingenuity that had never made it into any formal inventory. The architecture team had grown quickly during mobilization. We had enterprise architects, solution architects, integration specialists, data architects, cloud and infrastructure people, security architects, and business-domain architects all trying to work across the same transformation agenda.
Sparx EA had been chosen as the common modeling platform. On paper, it was a sensible choice. It could hold capability maps, application portfolios, interface catalogs, deployment views, transition states, and standards references in one place. But not everyone wanted it. And, if I’m honest, not everyone needed the same parts of it.
The program did work in the end. We got 40 people productive enough in six weeks to support architecture governance, live project delivery, and reuse across workstreams.
But it did not work because we ran classes.
It worked because we combined tool training with repository design, modeling standards, review governance, a handful of painful compromises, and the reality of live delivery pressure. That mix mattered far more than any slide deck ever could.
Why we had to do it so fast
The speed was not arbitrary. It came straight from the business context.
This manufacturer was dealing with the usual pressures, just compressed into the same period. Analytics teams wanted cleaner access to plant and quality data. Operations leaders wanted better resilience because some of the legacy integration between plant systems and corporate platforms was brittle enough to become a board-level issue after outages. Finance was looking at integration support costs and asking some pointed questions. Meanwhile, cloud teams were trying to rationalize a growing sprawl of patterns across Azure, AWS, event streaming, and security controls.
At the same time, architecture decisions were getting trapped in documents.
You would find a solid capability model in PowerPoint, a useful integration view in Visio, a deployment sketch in Lucidchart, and a standards exception buried in meeting notes. On their own, none of those things were disastrous. Together, they made reuse close to impossible. Every workstream started from a partial understanding, rebuilt context, and then spent time arguing over basics that should already have been settled.
That was the real trigger. Not “we need a better drawing tool.” We needed a place where architecture knowledge could survive beyond one architect, one project, one steering deck.
The team itself had come from very different tool cultures. A few people had worked in LeanIX-heavy environments. Some were comfortable in Archi. Some were very capable architects who had, truthfully, done most of their work in PowerPoint for years. A couple of the integration leads were effectively diagramming in spreadsheets and text documents because they cared more about interface ownership than notation. We even had analysts who had been rebadged into architecture roles because the transformation moved faster than the staffing plan.
So the challenge was never really teaching people how to make boxes and lines.
It was teaching them how to create reusable architecture under pressure.
And we had constraints that ruled out any neat, textbook rollout: six weeks, active delivery, inconsistent availability, mixed capability levels, and very visible pressure from the PMO and CIO to show progress.
Anyone who has done architecture inside a serious transformation will recognize that combination. The calendar says “enablement.” Reality says, “do this while decisions are already being made.”
The team we were actually training
On paper it was a 40-person architecture team. In practice, it was several tribes with overlapping responsibilities and very different patience levels.
Roughly speaking, we had 6 enterprise architects, 10 solution architects, 7 integration architects, 5 data architects, 4 infrastructure/cloud architects, 3 security architects, and 5 business or domain architects focused on manufacturing and supply chain. Those numbers moved around slightly week to week because contractors rolled on and off, but that was the shape of it.
The more important distinction, though, was behavioral rather than organizational.
Some people cared deeply about governance. They wanted standards, review control, traceability, and cleaner architecture packs. Others cared about speed and saw repository work as overhead unless it demonstrably saved them time. Some simply wanted to avoid another admin burden because they were already stretched across multiple projects and steering groups.
There was also a plant IT versus corporate IT dynamic that mattered more than I expected. Plant-facing architects tended to value practical specificity: what system talks to what, where the data lands, who supports it at 2 a.m., what changes during a cutover window. Corporate architects, especially around platforms and cloud, were generally more comfortable with abstraction. Both viewpoints were valid. In the repository, they collided.
Then there was the internal-versus-external split. SI architects often had stronger tool discipline because they had been asked to produce repository assets before. Internal architects usually had better institutional memory and better instincts about what would actually survive governance. Again, both useful. Also, at times, combustible.
A single training track would have failed. I’m still convinced of that.
If we had kept trying to deliver a generic Sparx curriculum to all 40 people at once, half the room would have been bored, a quarter would have been lost, and the rest would have quietly gone back to their old habits.
What we got wrong in the first five days
We made the classic mistake first.
We started with Sparx features instead of architecture outcomes.
So the early sessions covered repository views, package structures, diagram types, stereotypes, matrices, connectors, and various bits of navigation that looked perfectly sensible in a training plan and landed badly in reality. People learned just enough to become confused in their own preferred ways.
The second mistake was subtler: we assumed experienced architects would self-standardize.
They did not.
Give five good architects the same manufacturing execution scenario and they will often produce five reasonable but incompatible models unless you define what question the model is supposed to answer. One person modeled business capability impact. Another focused on application interactions. Another built a deployment view. One used ArchiMate-like semantics carefully. One drew what was effectively a solution concept sketch. None of this was irrational. It just was not reusable.
We also underestimated repository administration. That one hurt.
Access rights. Package ownership. Versioning expectations. Baselines. What could be edited directly versus reused from reference areas. Who was allowed to create top-level structures. These are not glamorous topics, so they get pushed aside in early planning. Then, inevitably, they become the thing that slows everything down.
We made another mistake I’ve seen more than once in architecture teams: trying to teach notation purity too early.
I like good notation. I care about semantics. I would always prefer clean architectural language over a repository full of improvised symbols. But in a six-week rollout inside an active transformation, usefulness beats formalism every time. If a solution architect can clearly show ERP-to-MES order flow, integration ownership, error handling, and transition-state implications, I care more about that than whether every relationship is textbook-perfect on day eight.
And maybe the biggest miss: we had no gold-standard examples in the repository at the start.
People copied the first thing they saw. That is just human nature. If the first thing they see is mediocre, you are effectively scaling mediocrity with tool support.
If I were doing it again, I’d spend much less time teaching menus and much more time showing what good looks like in context.
The pivot: we stopped teaching the tool in isolation
After the week-two near-failure, we changed the plan.
Not slightly. Properly.
The new principle was simple: every session had to produce or improve a reusable artifact for the manufacturing transformation. No abstract examples. No generic “create a component diagram” exercises. If we were spending 90 minutes together, something in the repository had to become more useful to the program by the end of it.
That changed almost everything.
First, we split training by role and use case. Enterprise and business architects did not need the same sequence as integration and data architects. Cloud and security teams needed different examples again. We kept a common orientation, but most of the useful work happened in role-based slices.
Second, we seeded the repository with real examples. Not many. Just enough. A capability map tied to manufacturing operations, an application inventory slice for a plant domain, an integration pattern around event-driven telemetry, a cloud deployment view showing landing zone dependencies, IAM boundaries, and data flow through Kafka-based event streaming into cloud analytics. Once people could see something recognizable, the resistance dropped noticeably.
Third, office hours became more important than formal classes.
That was the turning point most training plans miss. Architects do not ask their best questions during a workshop. They ask when they are trying to get a review pack ready, or when they realize they have created a duplicate application, or when they cannot decide whether a plant event should be modeled as an API interaction, a message on Kafka, or a batch transfer because the actual implementation pattern is still under debate.
That is where the learning happens.
The six-week structure we ended up using
Here’s the structure we actually used, at least as a backbone: orientation and repository conventions first, then role-based modeling against real manufacturing scenarios, then integration and cloud deployment views, then governance and review readiness, with cleanup running throughout.
It looks tidy written down. It was not tidy in practice.
Week three bled into week four because integration questions were inseparable from cloud deployment design. Week five started early because governance pressure arrived early. Attendance was inconsistent because the best architects are always half-booked and overcommitted. Every week included some amount of instruction and a surprising amount of cleanup.
That is another thing people do not say enough: on a live program, enablement is part teaching and part janitorial work.
The manufacturing examples that made the training stick
The content became real once we used scenarios painful enough to matter.
The best example was shop-floor to cloud quality analytics. It touched almost every architecture discipline at once.
We modeled PLC and SCADA signals flowing through plant collection layers into a historian, then into MES and an event broker, and onward into a cloud data platform for quality analytics. In one case, Kafka was the event backbone under discussion for streaming plant events into a broader enterprise integration pattern, while some plants were still using older middleware and file-based drops. The point was not to prematurely standardize every implementation. It was to make the architecture visible across business capability impact, integration patterns, data ownership, deployment boundaries, and security concerns.
That one scenario did more for team alignment than any amount of abstract Sparx instruction.
The OT architects saw their world represented. The cloud teams could finally connect event ingestion, landing zones, IAM boundaries, and analytics services to plant realities. Data architects could define canonical objects with actual usage in mind. Security architects could discuss trust boundaries and remote access implications instead of reviewing generic diagrams detached from manufacturing risk.
Another useful training case was ERP-to-MES production order synchronization.
That sounds simple until you put three architects in a room and ask whether the dominant pattern should be event-driven, API-mediated, or still partially batch in plants with operational constraints. We had arguments over ownership, retries, reconciliation, and how much implementation detail belonged in an architecture view. Good arguments, mostly. In Sparx, we forced the issue in a helpful way: application interaction diagrams, interface elements with owners, error handling notes, and explicit status on target versus interim patterns.
That created one of the first moments of visible reuse. Once one team had modeled the order-sync pattern properly, another plant modernization workstream could reuse the structure instead of recreating the entire conversation.
Warehouse modernization gave us the third strong example. Several regional plants were moving from older WMS variations toward a cloud-enabled platform, but nobody was doing a clean greenfield cutover. There were interim states, coexistence periods, phased integration swaps, and ugly realities around local process variation. Modeling only the target state would have been basically decorative. Transition architecture was the real work.
That mattered because it taught architects to use the repository for change over time, not just end-state aspiration.
And then there was plant network segmentation and secure remote maintenance, which is where some of the security and infrastructure architects finally became believers. Once they could trace a plant support scenario through network zones, identity controls, privileged access, remote vendor maintenance paths, and dependencies on both local plant assets and cloud management services, the repository stopped looking like admin overhead and started looking like a defensible governance asset.
People learn enterprise architecture tools much faster when the content is painful and real.
How we divided the repository so 40 people did not collide
This part was operationally critical.
We ended up with a package structure that was simple enough to understand quickly and structured enough to reduce collisions: an enterprise layer, domain architectures, solution architectures, integration and data shared assets, technology and cloud deployment views, and a standards/reference area.
Not elegant. Effective.
The ownership model mattered more than the package names. Each major package had an owner. Some areas had designated reviewers. Reference assets were read-only for most users. Shared integration objects and common applications had controlled stewardship because otherwise everyone would create their own version after the first deadline panic.
Here’s a simplified view of how we explained it:

- Enterprise layer — owned by the enterprise architects; capability maps and portfolio views
- Domain architectures — one owner per domain; plant and business-domain views
- Solution architectures — where project teams create, with designated reviewers
- Integration and data shared assets — controlled stewardship; reuse, don't copy
- Technology and cloud deployment views — owned by infrastructure and cloud architects
- Standards and reference area — read-only for almost everyone
A few conventions turned out to matter disproportionately.
Naming rules, obviously. If one team calls it “SAP S4,” another calls it “S/4HANA,” and another uses a business-friendly alias, your application portfolio becomes unreliable almost immediately. We enforced naming conventions for applications, interfaces, and capabilities early, and with more discipline than some people liked.
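To make the point concrete — this is an illustrative sketch, not tooling we ran inside Sparx — a small canonical alias map catches most naming drift before it corrupts the portfolio. The alias entries here are invented for the example:

```python
# Illustrative alias map: canonical application names on the left,
# the variants people actually type on the right.
CANONICAL = {
    "SAP S/4HANA": {"sap s4", "s4 hana", "s/4hana", "sap s/4hana"},
}

def canonical_name(raw: str) -> str:
    """Map a free-typed application name onto its canonical form.

    Unknown names are returned unchanged so they surface in review
    rather than being silently 'corrected'.
    """
    key = raw.strip().lower()
    for canonical, aliases in CANONICAL.items():
        if key == canonical.lower() or key in aliases:
            return canonical
    return raw.strip()
```

The useful property is the fallthrough: anything the map does not recognize stays visible instead of being guessed at, which is exactly what you want in a governed portfolio.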
We also added diagram purpose labels. Not because it was theoretically elegant, but because it helped people understand what they were looking at: context view, interaction view, deployment view, transition-state view, review-ready decision support, and so on. It cut down pointless debate.
Element reuse rules were another big one. We had to teach people, repeatedly, that reusing an application element is not the same as copying it. Sounds basic. In mixed-skill teams, it isn’t.
What we deliberately did not do was over-engineer the meta-model from day one. No giant formalism exercise. No attempt to lock down every stereotype and relationship in the first fortnight. That would have been a very good way to produce a beautifully governed empty repository.
We tolerated some duplicate elements briefly to keep momentum. Then we scheduled cleanup. That sounds obvious, but many teams do the opposite: they tolerate duplicates forever while telling themselves quality will improve organically. It won’t.
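The scheduled cleanup amounted to a pass like the following sketch: export element names with their packages (Sparx can export element lists to CSV; the tuple format here is assumed for illustration), normalize aggressively, and flag anything that appears more than once:

```python
from collections import defaultdict

def find_duplicates(elements):
    """Group exported repository elements by a normalized name and
    report any name that appears in more than one record.

    `elements` is a list of (name, package) tuples, e.g. from a CSV
    export of application elements.
    """
    seen = defaultdict(list)
    for name, package in elements:
        # Normalize aggressively: case, whitespace, and punctuation
        # variants are the usual sources of accidental duplicates.
        key = "".join(ch for ch in name.lower() if ch.isalnum())
        seen[key].append((name, package))
    return {k: v for k, v in seen.items() if len(v) > 1}
```

A report like this does not fix anything by itself, but it turns "quality will improve organically" into a concrete worklist with owners.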
The standards we enforced—and the ones we let go
This is one area where I have a fairly strong opinion.
In a six-week rollout, consistency beats sophistication.
So we enforced a small set of standards early: naming conventions, minimum metadata for applications and interfaces, approved diagram types by use case, package ownership, and a review workflow before anything went to the architecture board. Those gave us enough structure to make assets reusable.
We delayed the rest.
That included full ArchiMate consistency, advanced Sparx scripting and add-ins, exhaustive traceability across every architecture layer, and heavily customized profile work. All useful things in the right context. None of them were the immediate bottleneck.
One architect on the program built an exceptionally polished model early on. Perfect notation. Beautiful layout. Very little decision value. It answered no urgent question, traced to no live review, and was too intricate for most of the team to reuse under delivery pressure. Technically impressive. Operationally almost irrelevant.
That was the lesson in miniature.
If your standards help the team make better decisions faster, enforce them. If they mostly satisfy method purity while slowing adoption, defer them until the basics stick.
Here’s another simple way we framed it internally:

- Enforce now: naming conventions, minimum metadata for applications and interfaces, approved diagram types by use case, package ownership, review workflow before the board.
- Defer for now: full ArchiMate consistency, Sparx scripting and add-ins, exhaustive cross-layer traceability, heavy profile customization.
The training mechanics that actually worked
The mechanics were not glamorous.
We used 90-minute role-based sessions instead of half-day generic workshops because attention drops, calendars explode, and people need just enough structure to apply something immediately. We recorded short demos for repeatable basics so nobody had to sit through the same navigation tutorial twice.
Then we gave people homework that was intentionally inconvenient: model something real in the next 48 hours.
Not a toy exercise. A real package, a real review artifact, a real integration view, a real capability-to-application mapping. That forced the learning into delivery.
We also ran office hours twice a week. Those sessions consistently outperformed formal training. Once architects encountered friction in live work, the questions became practical and memorable: how do I reuse the enterprise application element without breaking ownership, where should an interface live if it spans plant and corporate domains, how do I show an interim Kafka bridge pattern when the target is API-led but the site is not ready, what is the right level of detail for IAM dependencies in a cloud deployment review?
Those are architecture questions wearing tool clothing.
We paired stronger modelers with skeptical senior architects. That worked better than I expected, especially when the pairing was framed around solving a review problem rather than “helping with the tool.” Pride matters in architecture teams. You have to account for it.
The cheat sheets helped too. Lightweight, practical, and intentionally short:
- how to create a solution package
- how to reuse an application element
- how to prepare a review-ready diagram
- minimum metadata for an interface
- what needs to be in the repository before architecture board review
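The "minimum metadata for an interface" cheat sheet is the kind of thing you can also check mechanically. A sketch, with a field list that is my reconstruction of a plausible minimum rather than the program's exact standard:

```python
# Hypothetical minimum metadata set; the real list came from the
# program's standards package.
REQUIRED_INTERFACE_FIELDS = {
    "name", "owner", "source_application", "target_application",
    "pattern",   # e.g. event, API, batch
    "status",    # target vs interim
}

def missing_fields(interface: dict) -> set:
    """Return which required fields are absent or blank."""
    return {
        f for f in REQUIRED_INTERFACE_FIELDS
        if not str(interface.get(f, "")).strip()
    }
```

Run over an exported interface catalog, a check like this tells you exactly which records are not yet review-ready, which is far more actionable than a general complaint about metadata quality.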
One thing surprised me: peer examples were adopted faster than official standards. If a respected architect in a live manufacturing workstream produced a useful model, others copied it almost immediately. If the same guidance appeared in a standards PDF, they skimmed it and moved on.
That is just how teams behave.
Resistance, politics, and the people side
This was never only a tooling problem.
Some resistance was straightforward. “I already know architecture tools.” Fair enough. Some people genuinely did. Others knew how to draw, which is not quite the same thing. We also heard “Sparx is too technical,” “this will slow delivery,” and “my diagrams are for my project only.” All common. None of that was solved by more enthusiastic admin emails.
Executive mandates alone did not work. We had them. They helped establish seriousness, but they did not create adoption. Long standards documents also failed. No one under transformation pressure wants to read a 35-page modeling standard before they can submit a review pack.
What changed minds was different.
First, the architecture review board began requiring repository-backed artifacts. Not in a punitive way, but in a practical one. If a solution came for review without application dependency mapping, transition-state documentation, and reference to shared standards assets where relevant, it was not considered review-ready.
Second, reuse started to save time. Once a team could pull an existing application inventory slice, interface record, or integration pattern into their own work rather than recreating it, the value became obvious.
Third, visible manufacturing examples mattered. Architects are much more likely to change behavior when they see a peer solving a real problem than when they are told a method is good for them.
One of the most respected lead architects on the program was initially one of the loudest skeptics. He became one of the strongest advocates after discovering that two plants had separately modeled and partially designed the same integration pattern for quality event ingestion. The repository did not magically prevent duplication. But it exposed it early enough to stop us institutionalizing it.
That changed his view.
Governance became the forcing function
This was probably the most important shift.
Once Sparx EA became part of how governance actually worked, adoption moved from voluntary to embedded.
We used the repository in architecture review preparation, decision traceability, standards exception handling, and transition roadmap discussions. It became the place where architecture artifacts had to exist if they were going to influence program decisions.
A good example was a cloud migration review for plant quality systems. The proposed solution had sensible cloud services, sensible analytics outcomes, and a compelling deck. But it could not pass review until the application dependency mapping and transition-state architecture were in the repository, including the interfaces to MES, the historian dependencies, the IAM implications for support access, and the interim cutover sequencing for regional plants.
That was not bureaucracy for its own sake. It prevented exactly the kind of architecture drift that causes delivery pain later.
Training sticks when the operating model requires the skill. Otherwise it remains optional, and optional practices decay very quickly under deadline pressure.
If the repository is optional, it stays optional.
What we measured, and what we refused to measure
We tracked a few things. Not too many.
Number of active contributors mattered. Percentage of architecture reviews using repository artifacts mattered more. We tracked duplicate application and interface records found and resolved because that was a direct signal of whether reuse was improving. We tracked the number of reusable reference models adopted by projects. And we looked at time to prepare review packs before and after the rollout.
That last one was useful because it reflected practical value. If preparing for review becomes faster and less chaotic, something is working.
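None of this needed more than a spreadsheet, but as a sketch of how the two headline signals combine (field names invented for illustration):

```python
def adoption_metrics(reviews, duplicates_found, duplicates_resolved):
    """Compute the two adoption signals worth watching.

    `reviews` is a list of dicts, each with a boolean 'repo_backed'
    flag indicating whether the review pack used repository artifacts.
    """
    total = len(reviews)
    repo_backed = sum(1 for r in reviews if r["repo_backed"])
    return {
        # Share of architecture reviews grounded in the repository.
        "reviews_using_repository_pct":
            round(100 * repo_backed / total, 1) if total else 0.0,
        # How much of the known duplication has actually been cleaned up.
        "duplicate_resolution_rate":
            round(duplicates_resolved / duplicates_found, 2)
            if duplicates_found else 1.0,
    }
```

Both ratios trend toward 1.0 as reuse improves, which makes them easy to report to a PMO without explaining modeling internals.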
What we did not overvalue were raw diagram counts, total elements created, or training attendance as a proxy for competency. Activity is easy to count and often misleading. A repository can be very busy and still not be very helpful.
I’ve seen teams celebrate model growth while making architecture harder to use. It is not really a metric problem. It is a judgment problem.
Results after six weeks: good, imperfect, and enough to keep going
The outcome was solid, but not magical.
Repository usage became normal for most of the team. Review-ready architecture packs were produced faster. Ownership of manufacturing applications and integration assets became clearer. We got a much better connection between business capability views and cloud transition decisions, which is often where architecture programs quietly fail—lots of target-state language, not enough line of sight into what has to move, in what order, and with what dependencies.
Reuse improved. Not everywhere. Enough to matter.
The messy parts remained messy. Notation was still inconsistent in some domains. A few senior architects remained partial adopters and continued exporting heavily into PowerPoint for executive consumption. Metadata quality lagged in places, especially where teams were rushing to support live delivery. Some duplicate elements persisted longer than I wanted.
And that was fine, honestly.
Success was not “everyone became a Sparx expert.” That would have been unrealistic and mostly unnecessary.
Success was that the team could now produce consistent enough architecture artifacts under delivery pressure, in a shared repository, with enough quality to support governance and reuse.
That is a far more useful definition.
Lessons I’d apply before doing this again
A few lessons stand out very clearly.
Start with five to seven architecture use cases, not tool modules. If you begin with menus, you are already drifting away from what the team actually needs.
Seed the repository with excellent examples before broad rollout. Not decent examples. Excellent ones. People imitate what they see first.
Design package ownership before training starts. Otherwise your early energy goes into collisions and cleanup instead of capability building.
Tie training directly to review and governance events. If people need the skill next Tuesday for a real architecture board, they will learn it. If the connection is abstract, they won’t.
Accept temporary imperfection to achieve adoption. This is hard for method-minded architects, myself included. But in active transformation, controlled messiness is often healthier than elegant irrelevance.
Do not let the most tool-savvy person define the whole method. Tool expertise is valuable. It is not the same as architecture operating model design.
And in manufacturing especially, include OT/IT boundary scenarios early. If your examples live only in corporate application landscapes, half the real complexity is invisible.
A practical playbook if you’re doing something similar
If I had to compress this into a practical sequence for another architecture lead, it would look like this:
Define the architecture outcomes you need in the next 90 days. Not the ideal future state. The real deliverables: application portfolios, integration views, transition architectures, review artifacts, standards traceability.
Map user groups by role and expected repository behavior. Who creates? Who reuses? Who reviews? Who approves? Who only consumes?
Create three to five gold-standard examples from real work. Preferably one end-to-end plant scenario if you are in manufacturing. Include application, integration, data, and deployment views. Include transition states, not just target architecture.
Establish package ownership and minimum standards before broad access. Keep the standards short enough that people will actually use them.
Train by scenario, not by menu.
Run office hours and live model clinics because that is where the practical learning happens.
Embed repository usage in governance. Otherwise you are running a side initiative.
And clean up aggressively after the first wave. Do not expect the repository to self-heal.
A few quick answers people usually ask
How much Sparx EA expertise did the team need beforehand?
Very little, actually. What they needed was architecture context, live use cases, and enough support to avoid getting lost in the repository.
Can this work outside manufacturing?
Yes. But manufacturing helped because the integration pain was obvious and the OT/IT boundary made reuse valuable very quickly.
Should you customize Sparx heavily during rollout?
Usually no. Start lighter than you think. Heavy customization too early becomes a distraction.
What’s the minimum team structure needed?
You need a named repository owner, a method lead, and domain champions. Without those roles, the burden gets diffused and quality slips fast.
Is six weeks enough?
Enough for productive use. Not mastery.
The real takeaway
I still think back to that week-two review where everything seemed to be fragmenting: duplicate catalogs, clashing naming conventions, one senior architect refusing the repository altogether.
Six weeks later, the picture was not pristine, but it was fundamentally different. Fewer collisions. Better shared models. More useful architecture reviews. More traceable decisions. Less reinvention.
That did not happen because we got better at teaching Sparx menus.
It happened because we turned a tool rollout into an architecture operating model change under real delivery pressure.
That is the part people often underestimate. Training an architecture team on Sparx EA is not mainly a learning-and-development problem. It is a discipline problem, a governance problem, and a team-design problem disguised as a tooling exercise.
The tool mattered.
But the discipline around it mattered more.
Frequently Asked Questions
What is enterprise architecture?
Enterprise architecture aligns strategy, business processes, applications, and technology. Using frameworks like TOGAF and languages like ArchiMate, it provides a structured view of how the enterprise operates and must change.
How does ArchiMate support enterprise architecture?
ArchiMate connects strategy, business operations, applications, and technology in one coherent model. It enables traceability from strategic goals through capabilities and application services to technology infrastructure.
What tools support enterprise architecture modeling?
The main tools are Sparx Enterprise Architect (ArchiMate, UML, BPMN, SysML), Archi (free, ArchiMate-only), and BiZZdesign. Sparx EA is the most feature-rich, supporting concurrent repositories, automation, and Jira integration.