The first time TOGAF became genuinely useful to me, nobody in the room wanted to discuss frameworks.
They wanted answers.
We had three manufacturing plants, two aging MES platforms that should have been retired years before, one recently acquired business running a different ERP, and quality records scattered across spreadsheets, local Access databases, enterprise systems, and whatever else capable, under-supported people had built over time to keep production moving. Then an audit landed badly. Not catastrophic, but bad enough. Traceability was slow, batch genealogy was inconsistent, and ownership of critical data was blurry in exactly the way regulators tend to dislike.
The board did not ask for an architecture vision.
They asked for a modernization roadmap. Lower integration cost. Clearer ownership. Fewer projects colliding with each other. A way to stop paying for the same problem three times in different plants.
That was the moment TOGAF stopped being theoretical for me.
Because in practice, TOGAF is rarely at its most valuable when it is used to draw a pristine target architecture. That is the marketing version of the story. In the real enterprise, especially in regulated manufacturing, TOGAF earns its keep when it helps structure decisions across business process, data, applications, technology, controls, and governance while the ground is still moving under your feet.
Yes, people are skeptical. Honestly, they should be.
“We don’t need more framework theater.”
“ADM looks too linear.”
“We’re not going to generate 80 artifacts nobody reads.”
I have said versions of those things myself, more than once.
But I have also seen what happens when there is no common decision model. Integration turns into improvisation. ERP programs become political negotiations dressed up as system design. Data platforms become expensive containers for unresolved semantics. Audit remediation devolves into document production instead of architecture correction.
So here is the thesis, plainly stated: TOGAF is most useful when it becomes a common operating model for enterprise change. Not a ceremony. Not a certification exercise. A way to make hard decisions repeatedly, with evidence, in environments where traceability and trade-offs actually matter.
That is what it is used for.
What TOGAF is actually doing in the background
I’m not going to recap TOGAF like a training manual.
In practical terms, TOGAF gives architecture teams a way to move from a concern to a roadmap without skipping the inconvenient middle. It offers a method for understanding the current mess, defining what matters, identifying trade-offs, and governing the path between where you are and where you can realistically get to.
That sounds obvious. It isn’t.
Most enterprises are surprisingly weak at that middle part.
They either jump straight to solution selection — “let’s buy a new ERP,” “let’s stand up a cloud data platform,” “let’s put Kafka in the middle of everything” — or they get trapped in analysis with no real decision mechanism. Used well, TOGAF helps avoid both failure modes.
A few ideas matter more than the rest.
The ADM, for example, is often misunderstood because people read it as a waterfall. In real work, it is an iterative decision cycle. You loop through business, data, application, and technology concerns at different depths depending on risk. You revisit assumptions. You refine transition states. If you treat it as a strict sequence, you will irritate delivery teams and probably fail. If you treat it as a repeatable way to ask the right questions in the right order, it becomes genuinely useful.
Architecture principles are another. I’ve seen teams dismiss them as posters for walls, and to be fair, that does happen when they are vague. But specific principles are powerful investment guardrails. “Cloud-first” is too generic to help much. “No new point-to-point OT-to-enterprise integrations without mediated event or API patterns” — now we are saying something. “Master data ownership must be explicitly assigned before system harmonization decisions are approved.” Also useful. In my experience, those sharper principles age better because they can survive real delivery pressure.
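One reason sharp principles survive delivery pressure is that they can be checked mechanically. A minimal sketch of that idea, assuming a hypothetical integration-inventory data model (the zone names, pattern names, and field names here are all illustrative, not from any real tool):

```python
# Encoding the "no new point-to-point OT-to-enterprise integrations without
# mediated event or API patterns" principle as a check over an inventory export.
# All zone/pattern vocabulary below is a hypothetical example.

OT_ZONES = {"plant-floor", "scada", "historian"}
ENTERPRISE_ZONES = {"erp", "plm", "data-platform"}
APPROVED_PATTERNS = {"mediated-api", "event-broker"}

def violations(integrations):
    """Return names of integrations that cross the OT/enterprise boundary
    without an approved mediation pattern."""
    bad = []
    for i in integrations:
        crosses = (i["source_zone"] in OT_ZONES
                   and i["target_zone"] in ENTERPRISE_ZONES)
        if crosses and i["pattern"] not in APPROVED_PATTERNS:
            bad.append(i["name"])
    return bad

inventory = [
    {"name": "mes-to-erp-batch", "source_zone": "plant-floor",
     "target_zone": "erp", "pattern": "mediated-api"},
    {"name": "historian-db-link", "source_zone": "historian",
     "target_zone": "data-platform", "pattern": "point-to-point"},
]

print(violations(inventory))  # the direct DB link is flagged
```

The point is not the ten lines of Python; it is that a principle precise enough to automate is also precise enough to argue about, which is exactly what an investment guardrail needs.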
Capability-based planning matters even more than many architects admit. In manufacturing, system-first thinking is a chronic disease. People argue about SAP versus Oracle, local MES versus global MES, historian replacement, SCADA segmentation, lakehouse versus warehouse. Fine. But the business problem is usually capability-related: production scheduling, quality release, supplier traceability, maintenance planning, recipe governance, deviation handling. If you start with systems, you inherit historical bias. If you start with capabilities, you can separate the needed outcome from the legacy tooling wrapped around it.
Then there is the repository and the idea of building blocks. Again, this can become bureaucratic nonsense if overdone. But reusable design assets are gold when handled properly: approved IAM patterns for plant access, canonical event models for machine telemetry, reference integration patterns for MES-to-ERP handoffs, standard security zoning models for edge deployments. You should not be solving those from scratch in every initiative. Teams that do usually end up paying for the lesson twice.
And governance. This is where TOGAF’s reputation tends to suffer.
Governance should be evidence, not ceremony. A good architecture review helps leaders decide. A bad one delays delivery and teaches teams to route around architecture altogether.
The most common misuses are predictable:
- trying to produce every artifact formally
- forcing every project through the same architecture depth
- treating architecture as disconnected from delivery and operations
- confusing documentation volume with architectural rigor
The easiest way to understand what TOGAF is used for is to look at where it actually earns its keep.
Use case 1: Stabilizing a fragmented manufacturing landscape after acquisition
Acquisitions expose architecture weaknesses immediately. They do not create them. They reveal what was already there.
One integration I worked on involved an acquired manufacturer with its own PLM, ERP, and a shop-floor historian stack that nobody in the parent company really understood. On paper, the acquisition thesis was straightforward: expand product lines, consolidate suppliers, standardize operations over time. In reality, supplier master data was duplicated, product definitions conflicted, quality workflows differed, and each side assumed its own process logic was “standard.”
The integration team’s first instinct was exactly what you would expect: point-to-point fixes everywhere. Interface this. Replicate that. Build temporary reconciliations. Add a middleware layer. Stand up Kafka. Put APIs around the problem and move on.
That is not integration. That is delay.
What TOGAF gave us was a way to slow down just enough to make better decisions. We defined the baseline architecture across both companies, not just the application inventory. That distinction matters. An application list tells you what exists. A baseline architecture starts to show how work gets done, where information originates, what systems are authoritative, which controls matter, and where dependencies are dangerous.
We mapped the business capabilities affected by the acquisition: product lifecycle management, planning, production execution, quality management, procurement, maintenance, regulatory reporting. Then we identified architecture building blocks that should be standardized versus preserved.
That was the breakthrough.
Not everything needed harmonization at once. In fact, trying to harmonize too much too early would have been reckless. One plant’s MES was locally validated, deeply embedded in quality procedures, and expensive to replace. So we preserved it. We standardized identity and access management first, because user provisioning and segregation-of-duties risk was ugly. We standardized integration patterns second, introducing a more disciplined API and event-based model rather than letting point-to-point dependencies multiply. Kafka had a role, but only for specific telemetry and event propagation scenarios, not as some magic architecture solvent.
We also used transition architectures instead of pretending there was a single immediate target state. This is one of the most useful TOGAF concepts in the real world, and one of the least appreciated by people who only know it from exams. Enterprises almost never move from baseline to target in one motion. They move through constrained, politically negotiated, budget-bound intermediate states. Naming those states matters. Governing them matters even more.
A concrete example: product lifecycle data had to be harmonized, but replacing all plant systems at once would have blown cost and validation timelines apart. So we agreed on a transitional model: common product definitions and a shared governance process, with local execution systems retained temporarily and integrated through approved services and canonical data mappings. Not elegant. Effective.
The mistake, and we made it early, was trying to standardize applications before agreeing on process ownership and information ownership. That always backfires. If two organizations disagree on what a product revision means, or who owns approved supplier status, application consolidation will simply hide the disagreement inside configuration and custom code.
If you are using TOGAF post-merger, start with capabilities, legal entities, and critical value streams. Not with an application rationalization spreadsheet by itself. That spreadsheet is useful later, but on its own it creates the illusion that architecture is just a software cleanup exercise.
It isn’t.
Used properly, TOGAF helps prevent post-merger architecture from turning into a pile of exceptions that nobody can explain a year later.
A short detour: where TOGAF frustrates delivery teams
Let me be candid.
If architects weaponize governance, teams stop listening.
If review boards mostly say “non-compliant” without helping people navigate trade-offs, shadow architecture appears. Quietly at first. Then everywhere. Plant teams buy their own tools. Integration teams bypass standards because delivery dates are immovable. Cloud teams implement something “temporary” that becomes permanent. Security complains after the fact.
I have seen architecture teams create their own irrelevance this way.
The complaints are usually legitimate:
- too many gates
- artifacts nobody reads
- reviews detached from sprint reality
- enterprise standards that seem written for someone else's context
The correction is not to abandon architecture. It is to scale it properly.
Use lightweight architecture contracts. Publish reference architectures with clear decision boundaries. Tailor the depth of the method to the risk and scope of the initiative. A validated MES change in a regulated process area needs more evidence than a low-risk internal analytics dashboard. A plant computer vision pilot should not wait six weeks for a committee slot.
TOGAF works best when adapted to context. Always. That has been true in every serious program I’ve seen.
Use case 2: Connecting strategy to plant modernization investments
This is where many leadership teams start to feel overwhelmed.
Every plant has modernization proposals. Edge platforms. IoT sensors. Warehouse automation. Predictive maintenance tooling. Vision systems. New historians. Cloud analytics. Cyber upgrades. Digital work instructions. Every proposal comes with a slide promising ROI. Very few line up cleanly against the enterprise operating model.
Without a disciplined architecture lens, capital allocation becomes a contest of storytelling.
TOGAF is extremely useful here because capability-based planning forces a different conversation. Instead of asking, “Which technology should we fund?” you ask, “Which capabilities are strategically important, underperforming, or too risky to leave as they are?”
In one program, we defined target capability states for production operations, maintenance, quality, warehouse execution, and supply chain visibility. We then mapped proposed initiatives to those capabilities and performed gap analysis. That sounds dry, but it was one of the first times the executive team could clearly see why certain investments depended on others.
For example, plants wanted advanced analytics and AI-driven maintenance, but basic network resilience and telemetry consistency were not in place. Machine events were named differently by site. Some PLC-connected data landed only in local historians. Some plants had edge infrastructure with no standard observability. A few wanted to stream everything to cloud immediately. That was fantasy, and not the useful kind.
So we separated network modernization from analytics rollout. That annoyed people at first, because analytics had the excitement factor. But it was the right call. We also established a canonical event model for machine telemetry before expanding enterprise AI initiatives. Not because architects love models, but because without agreed event semantics, your data science layer becomes a landfill of site-specific transformations.
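What "agreed event semantics" means in practice can be sketched briefly. The shape below is an assumption for illustration: one canonical machine-event structure, plus per-site adapters that translate local naming into it and fail loudly on anything unmapped.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical canonical event model for machine telemetry.
# Field names and the site vocabulary are illustrative, not a standard.

@dataclass(frozen=True)
class MachineEvent:
    site: str              # enterprise site code
    asset_id: str          # enterprise asset identifier, not the local PLC tag
    event_type: str        # from an agreed vocabulary, e.g. "STOP", "ALARM"
    occurred_at: datetime  # always UTC

# Per-site mapping: local labels -> canonical event types.
SITE_VOCAB = {
    "plant_a": {"MachineHalt": "STOP", "Alm": "ALARM"},
    "plant_b": {"stop_evt": "STOP", "alarm_evt": "ALARM"},
}

def to_canonical(site, local_label, asset_id, ts_utc):
    """Adapter: a KeyError here is a governance gap, not a bug to paper over."""
    event_type = SITE_VOCAB[site][local_label]
    return MachineEvent(site, asset_id, event_type, ts_utc)

e = to_canonical("plant_a", "MachineHalt", "A-PRESS-01",
                 datetime(2024, 5, 1, 6, 30, tzinfo=timezone.utc))
print(e.event_type)  # "STOP"
```

The deliberate choice is the loud failure: an unmapped label surfaces a naming dispute immediately, instead of burying it in a site-specific transformation downstream.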
Another hard trade-off was local autonomy. Plants are not all the same. They should not be. Some production environments genuinely need local decision speed, local redundancy, or specialized execution patterns. TOGAF helped us define where local variation was justified and where it was simply inherited habit.
A frequent mistake here is funding technology pilots before defining target business capabilities and data dependencies. Leaders think they are moving fast. Often, they are just creating future integration debt.
My advice is simple: make transition architectures explicit. Score initiatives by capability uplift, compliance impact, and technical debt reduction, not by novelty. If a shiny pilot improves a local metric but increases enterprise complexity, it may still be worth doing — but that trade-off should be visible, not accidental.
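The scoring idea can be made concrete in a few lines. The weights and initiative names below are assumptions purely for illustration; the real value is forcing every proposal onto the same three axes:

```python
# A hedged sketch of capability-based initiative scoring.
# Weights and the 0-5 scores are illustrative, not a recommendation.

WEIGHTS = {"capability_uplift": 0.4,
           "compliance_impact": 0.35,
           "debt_reduction": 0.25}

def score(initiative):
    return sum(WEIGHTS[k] * initiative[k] for k in WEIGHTS)

proposals = [
    {"name": "AI maintenance pilot",
     "capability_uplift": 4, "compliance_impact": 1, "debt_reduction": 1},
    {"name": "Network segmentation + telemetry cleanup",
     "capability_uplift": 3, "compliance_impact": 4, "debt_reduction": 4},
]

ranked = sorted(proposals, key=score, reverse=True)
print([p["name"] for p in ranked])  # the unglamorous foundation work ranks first
```

Note what happens: the exciting pilot loses to foundation work, and the ranking shows why, which is exactly the conversation the slide contest avoids.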
Use case 3: Improving regulatory traceability and audit response
In regulated manufacturing, architecture debt eventually becomes compliance debt.
You can defer that reality for a while. Then an inspection, recall, supplier incident, or deviation trend drags it into the open.
One of the worst patterns I’ve seen is the assumption that traceability problems are mainly documentation problems. They are not. Documentation may reveal them. It rarely fixes them.
Consider a familiar scenario: batch genealogy must be reconstructed across ERP, MES, LIMS, warehouse systems, and sometimes plant-specific databases or spreadsheets created because the official systems never quite met operational need. Data retention rules differ. Time stamps do not align cleanly. Material identifiers vary. Manual reconciliation happens during inspections. People know heroic workarounds. Regulators are not interested in heroism.
This is exactly where TOGAF is valuable because it connects the layers that organizations tend to treat separately.
We used it to map business processes, information flows, system ownership, and control points end to end. Not as a wall-sized diagram exercise, but as a way to answer pointed questions: Where is the system of record for batch release status? Which system is authoritative for material genealogy? Where are manual interventions occurring? Which controls depend on trust between systems rather than enforced integrity? Who owns retention policy implementation?
We also established architecture principles around data integrity, segregation of duties, traceability, and evidentiary design. I like that phrase — evidentiary design — because it shifts the conversation. Architecture in regulated environments should help generate evidence, not merely host transactions.
In one case, the target state was not “replace system X.” It was a controls architecture: clearer source system authority, aligned data retention, stronger identity controls through IAM, and reduced manual reconciliation between MES and ERP quality status. We also introduced event and API mediation patterns to improve lineage, but very selectively. Not every traceability issue justifies a streaming architecture. Sometimes Kafka is the right backbone for propagation and observability. Sometimes a disciplined transactional integration with proper audit logs is better.
TOGAF helps because it ties process, data, applications, and technology together into audit-ready logic. That matters when leadership asks why a control failed or why remediation costs so much. It also supports governance decisions on authoritative data sources, which are often political as much as technical.
The common mistake is assuming compliance gaps can be solved with SOP updates and better documentation while the underlying architecture stays fragmented. That buys time. It does not buy confidence.
Architect for lineage. Architect for retention. Architect for ownership. Treat those as first-class architecture concerns.
Here is a simple view of what that cross-layer logic can look like:
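One way to render it, with hypothetical system names and evidence types standing in for the real ones: tie each control question to its process, authoritative system, and the evidence that proves the control, so a gap is visible per layer.

```python
# An illustrative cross-layer traceability view. Systems (LIMS, MES, ERP)
# play their typical roles here; the rows themselves are invented examples.

TRACEABILITY_VIEW = {
    "Who releases the batch?": {
        "process": "quality release",
        "system_of_record": "LIMS",
        "evidence": "e-signature + release record",
    },
    "Where does genealogy live?": {
        "process": "production execution",
        "system_of_record": "MES",
        "evidence": "batch genealogy report",
    },
    "Who consumes release status?": {
        "process": "order fulfilment",
        "system_of_record": "ERP",
        "evidence": "status sync audit log",
    },
}

def gaps(view):
    """Questions with no authoritative system are architecture gaps."""
    return [q for q, row in view.items() if not row["system_of_record"]]

print(gaps(view=TRACEABILITY_VIEW))  # empty here; any entry is a finding
```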
Not sophisticated. But that is the point: the compliance problem is cross-layer, so the architecture response has to be as well.
Use case 4: Keeping ERP transformation from becoming an application-centric failure
This is where architecture teams either prove their value or expose their irrelevance.
A global ERP program in manufacturing creates pressure from every direction. Finance wants standardization. Plants insist local processes are unique. Supply chain wants one planning model. Quality and maintenance cut awkwardly across modules and often depend on surrounding systems the ERP team would prefer not to discuss. The system integrator arrives with a strong view of “best practice,” which usually means some combination of software defaults, prior client templates, and commercial convenience.
If enterprise architecture does not show up here with something more useful than generic governance, it should not be surprised when the program runs around it.
TOGAF was useful in one such transformation because it gave us a structure to separate true process differentiation from historical habit. That is a business architecture exercise first, not an ERP configuration debate. Plants often say they are unique. Sometimes they are. Sometimes they have different labels for the same thing. Sometimes they have valid regulatory or product-family differences. Sometimes they simply inherited workarounds from old systems.
The architecture work clarified that.
Information architecture mattered just as much. Common master data definitions were not optional. Vendor master, product master, material classifications, work order structures, asset identifiers — if these are unresolved, the ERP program becomes a factory for exception handling.
Application architecture was where the realism came in. The ERP was not the target architecture. It was one component inside it. Surrounding systems had to be evaluated honestly: which MES platforms survive, which maintenance systems stay local for a period, what happens to LIMS integration, where PLM remains authoritative, how warehouse execution interacts, what IAM and SoD controls must be enterprise-wide.
A specific example: we standardized work order structures globally but allowed local maintenance execution variance where site operations and regulatory validation justified it. We centralized vendor master ownership but kept plant-specific quality checkpoints. We used transition architectures for phased rollout by region and product family rather than pretending one global deployment wave was sensible.
The biggest mistake in ERP programs is treating the ERP as the architecture. That is backwards. The ERP should fit the operating model and architecture principles, not define them by default.
And define those principles before the system integrator design starts. If you wait, you will spend the rest of the program negotiating exceptions from a baseline you never intended.
There is also a political truth here: every site thinks it is special once exception logic becomes easy. Document exception rationale rigorously or local variation will expand until standardization is mostly fictional.
Use case 5: Making enterprise data architecture usable, not theoretical
Almost every manufacturing company eventually launches some version of this effort.
A cloud data platform. An industrial data lake. A lakehouse. A modern analytics foundation. A “single source of truth.”
Then the real question arrives: truth about what, according to whom?
Asset identifiers differ by plant. Product and recipe definitions vary by system. Batch structures are not aligned. OEE is calculated differently across facilities and defended fiercely. I have seen executive teams talk confidently about enterprise metrics while plants are measuring different realities.
TOGAF is useful here, but not in the way some data teams expect. It is not a reason to model every data object in the enterprise before anything gets built. That becomes architecture as hostage-taking.
What it is good for is creating enough information architecture structure to support governance and implementation choices.
We used it to define information domains and ownership, map critical data entities to business capabilities and applications, establish canonical models where interoperability truly mattered, and align platform design to the operating model.
That last point gets missed constantly.
If your operating model allows meaningful local autonomy, your data architecture should not force premature global uniformity. Decide which data must be globally consistent and which can remain local with controlled mapping. Product identifiers for enterprise planning and compliance may need strict consistency. Machine-state labels for a local optimization dashboard may not.
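That decision can be written down explicitly. A sketch under assumed names (the policy entries, identifiers, and mapping table are all invented for illustration): classify each entity, and only entities marked global must resolve to a single enterprise identifier, while local entities pass through untouched.

```python
# A hypothetical "globally consistent vs locally mapped" policy.
# Entity names and identifier formats are illustrative.

POLICY = {
    "product_id": "global",    # enterprise planning and compliance need one ID
    "machine_state": "local",  # local dashboards only
}

LOCAL_TO_GLOBAL = {
    ("plant_a", "P-100"): "PRD-000100",
    ("plant_b", "ART100"): "PRD-000100",
}

def resolve(entity, site, local_value):
    if POLICY[entity] == "local":
        return local_value  # local autonomy is explicitly allowed
    # for global entities, an unmapped value is a governance gap (KeyError)
    return LOCAL_TO_GLOBAL[(site, local_value)]

print(resolve("product_id", "plant_a", "P-100"))    # enterprise identifier
print(resolve("machine_state", "plant_b", "idle"))  # stays local
```

The useful property is that the boundary between global and local is a reviewable artifact, not an opinion rediscovered in every integration project.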
This is also where cloud architecture choices become less ideological and more practical. A cloud platform can absolutely help, especially for scalable analytics, data sharing, and cross-site reporting. But standing up the platform before resolving ownership and semantic accountability is just expensive procrastination.
I have watched companies build elegant ingestion pipelines into cloud storage while still arguing about the meaning of “planned downtime.”
Architecture review boards should be used here to control metric proliferation and enforce ownership clarity. Not every plant KPI should become an enterprise KPI. Not every local schema deserves canonical status. You need decision criteria.
And in regulated settings, weak information architecture does not just create reporting disputes. It creates quality delays, audit friction, and endless reconciliation work that nobody funds explicitly but everyone pays for.
Use case 6: Retiring legacy technology without breaking operations
This part is less glamorous and much more representative of actual enterprise architecture work.
Unsupported Windows servers on the plant floor. Bespoke integrations nobody fully understands. Historian dependencies tied to production and quality reporting. Cybersecurity pressure escalating because the compensating controls are wearing thin. Shutdown windows limited. Validation effort significant. Operations rightly nervous.
People often frame this as an infrastructure problem. It almost never is.
TOGAF helped us first by forcing a credible baseline architecture and dependency map. Not just “server X hosts system Y,” but what business processes depend on it, what data flows through it, what downstream reports use it, what manual procedures exist because of it, and which integrations are hidden in scheduled scripts built by someone who left six years ago.
That baseline work is tedious. It is also where bad surprises become visible early enough to manage.
Then we used technology standards and a risk-based target-state definition to shape the path forward. Some legacy MES environments could not be replaced immediately, so we decoupled them from downstream analytics first. In one case, we introduced API mediation around brittle interfaces rather than touching the core transaction engine during a period of operational sensitivity. In another, historian data was replicated into a more modern reporting layer so reporting dependencies could be retired before the historian itself was tackled.
Shutdown windows drove sequencing. So did validation impact. So did cyber risk. This is why architecture matters: infrastructure refresh schedules by themselves do not capture operational reality.
A useful prioritization model combined business criticality, validation impact, cybersecurity exposure, and integration complexity. Not elegant. Very effective.
The common mistake is assuming technical obsolescence can be handled by infrastructure teams alone. It cannot. Legacy risk is entangled with business process, data lineage, local workarounds, and control assumptions. If you ignore those, you modernize the platform and preserve the bad architecture.
Sometimes, frankly, a legacy system survives longer than anyone likes. TOGAF is useful partly because it helps justify why. If the replacement risk to validated operations is materially higher than the near-term technical risk, leaders need to hear that in a structured way.
A simple transition view often helps:
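A minimal rendering of that idea, with invented state names and components: name each intermediate state explicitly, say what is retired in it and what is deliberately kept, instead of pretending baseline-to-target happens in one motion.

```python
# An illustrative transition-architecture sequence for legacy retirement.
# States, components, and ordering are hypothetical examples.

TRANSITIONS = [
    {"state": "T1: decouple reporting",
     "retire": ["historian-based reports"],
     "keep": ["legacy MES", "historian"]},
    {"state": "T2: mediate brittle interfaces",
     "retire": ["direct DB integrations"],
     "keep": ["legacy MES"]},
    {"state": "T3: replace execution core",
     "retire": ["legacy MES"],
     "keep": []},
]

retired = set()
for t in TRANSITIONS:
    # governance check: a component retired in an earlier state
    # must not reappear as "kept" in a later one
    assert not retired & set(t["keep"]), t["state"]
    retired |= set(t["retire"])

print(sorted(retired))
```

Even at this toy scale, the structure carries the argument: the legacy MES is visibly kept through two states, and anyone asking "why is it still here?" can be pointed at T1 and T2 rather than at folklore.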
Again, not complicated. Real architecture often isn’t complicated in concept. It is difficult because the trade-offs are real.
Use case 7: Scaling innovation from pilot sites to enterprise patterns
This is the hopeful use case, and also one of the most deceptive.
A pilot plant succeeds with computer vision, predictive maintenance, or digital work instructions. Leadership sees results and wants rollout everywhere. The local team becomes a model site. Slides are made. Money appears.
Then architecture reality enters the room.
The pilot may have succeeded because of local heroes, exceptional vendor support, a tolerant network environment, unusual data quality, or a plant manager willing to absorb manual overhead that nobody else will tolerate. Copying the pilot stack blindly is one of the fastest ways to turn innovation into enterprise disappointment.
TOGAF helps here because it gives you a way to separate replicable architecture from non-replicable local improvisation.
We used it to define reusable building blocks and create reference architectures for edge, cloud, data ingestion, IAM, monitoring, and security zoning. In practical terms, that meant things like:
- a standard edge-to-cloud telemetry pattern
- approved integration patterns for machine data feeding maintenance workflows
- identity models for plant users and service accounts
- network and security zoning guidance for OT-connected solutions
- observability requirements so scaled deployments could be supported centrally
Kafka sometimes belonged in this pattern, particularly where event-driven telemetry and decoupled consumers made sense across multiple use cases. But not every pilot needed to become an enterprise streaming platform. Sometimes an API-first pattern with queued delivery and clear ownership was more sustainable.
The governance criteria for scale mattered as much as the technology. Before rollout, we asked: what assumptions did the pilot depend on? Are those assumptions generally available? What minimum enterprise controls must be present? Which local adaptations are permitted? What support model exists after the enthusiastic pilot team moves on?
This is where TOGAF shifts from control mechanism to accelerator. If you have reusable reference architectures and decision criteria, scaling gets faster. Teams spend less time reinventing edge security, data ingestion, IAM integration, and monitoring. More time goes into actual business value.
The mistake is obvious in hindsight: copying the pilot without understanding its operating assumptions.
Do the opposite. Standardize the parts that should be standard. Be honest about what was local improvisation.
What TOGAF did not solve on its own
TOGAF did not create executive sponsorship.
It did not resolve political conflict between global functions and plant leadership. It did not magically simplify an ugly application portfolio. It did not replace regulatory expertise, validation discipline, or product ownership. It did not make weak architects stronger.
That is worth saying because too many “TOGAF implementations” are judged as if the framework itself should compensate for leadership gaps and organizational indecision.
It won’t.
You still need executives who will make trade-offs stick. Delivery teams that will align. Process and product owners who accept accountability. Architects with domain knowledge — not just framework vocabulary. Governance that leads to decisions, not deferral.
In my experience, most failed TOGAF adoptions were not really failures of TOGAF. They were failures of tailoring, pragmatism, or leadership courage.
How a chief architect should introduce TOGAF in a regulated manufacturing enterprise
Start with one urgent enterprise problem.
Not training. Not a repository rollout. Not a giant template pack.
Pick a problem leaders already care about: post-merger integration, ERP chaos, traceability findings, legacy risk, digital pilot sprawl. Use architecture to help solve that. People trust what reduces pain.
Then define a small set of architecture principles. Keep them sharp enough to influence funding and design decisions. Establish baseline and target views only where decisions are needed. Build a lightweight governance path tied to risk and investment. Publish a few genuinely useful reference architectures. Build repository discipline gradually, as a support mechanism, not a religion.
Tailor everything.
Plants do not need the same architecture depth as enterprise platform programs. High-risk validated environments need stronger evidence trails. Innovation work needs faster review cycles. OT-connected changes need security and operational realism. If the first thing people experience is a template pack, adoption will stall. Probably deservedly.
What TOGAF is used for when the enterprise is under pressure
Back to that opening situation.
Three plants. Different ERP environments. Aging MES. Fractured quality records. Audit pressure. Too many projects. Too little coherence.
What changed was not that we suddenly possessed a perfect target architecture. We did not. What changed was that architecture became a discipline for making better decisions repeatedly. System ownership became clearer. Duplicate investments dropped. Traceability improved because ownership and flows improved. Roadmaps became more credible because transition states were explicit. Friction between local operations and enterprise standards did not disappear, but it became discussable and governable instead of tribal.
That, to me, is what TOGAF is used for.
It is used to make enterprise change coherent when transformation spans plants, platforms, regulations, and competing executive agendas. It gives leaders and architects a common way to connect strategy, operating reality, technical constraints, and governance without pretending the work is cleaner than it is.
Not a silver bullet. Not a documentation exercise.
A practical structure for making hard architecture decisions repeatedly and with evidence.
FAQ
Is TOGAF only useful for very large enterprises?
No. But it becomes far more valuable when complexity, regulation, and cross-functional change are high.
Can TOGAF work in agile delivery environments?
Yes, if governance is lightweight and artifacts are decision-focused rather than ceremonial.
Is TOGAF too generic for manufacturing?
Only if you use it generically. Combined with manufacturing process knowledge, OT realities, and regulatory requirements, it works very well.
What is the biggest mistake companies make with TOGAF?
Turning it into bureaucracy instead of tailoring it to business risk, operational reality, and actual transformation decisions.
What is TOGAF used for?
TOGAF provides a structured approach to developing, governing, and managing enterprise architecture. Its ADM guides architects through phases from vision through business, information systems, and technology architecture to migration planning and governance.
What is the difference between TOGAF and ArchiMate?
TOGAF is a process framework defining how to develop and govern architecture. ArchiMate is a modelling language defining how to represent architecture. They work together: TOGAF provides the method, ArchiMate provides the notation.
Is TOGAF certification worth it?
Yes — TOGAF Foundation and Practitioner are widely recognised, especially in consulting, financial services, and government. Combined with ArchiMate and Sparx EA skills, it significantly strengthens an enterprise architect's profile.