The core architectural difference: log-based versus queue-based
Kafka’s official documentation explicitly contrasts Kafka’s model with traditional queuing and publish-subscribe models, explaining that consumer groups combine the ability to scale consumption (like a queue) with multi-subscriber behavior (like pub-sub) without forcing you to choose one.
This is the foundational architectural difference: Kafka centers its architecture on a persistent log with consumer-controlled positions (offsets), while many traditional message brokers center on message delivery and queue-consumption semantics.
Durability and retention versus “delivery then discard”
Kafka persists published records durably with configurable retention, independent of whether they have been consumed, and consumers can reset offsets to reprocess.
This retained history enables architectural patterns such as replay-based recovery, auditability, and re-derivation of downstream projections: capabilities that differ materially from systems where messages are removed as soon as they are consumed.
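This log-plus-offsets model can be illustrated with a minimal in-memory sketch (plain Python, not Kafka's client API): records are retained regardless of consumption, each consumer group tracks its own position, and resetting an offset replays history.

```python
class Log:
    """Minimal in-memory sketch of a partition log (illustration only)."""

    def __init__(self):
        self.records = []    # retained regardless of consumption
        self.positions = {}  # consumer group -> next offset to read

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1  # the record's offset

    def poll(self, group):
        pos = self.positions.get(group, 0)
        batch = self.records[pos:]
        self.positions[group] = len(self.records)
        return batch

    def seek(self, group, offset):
        # Reset the group's position, e.g. to replay history.
        self.positions[group] = offset


log = Log()
for event in ["order-created", "order-paid", "order-shipped"]:
    log.append(event)

first = log.poll("billing")     # all three events
log.seek("billing", 0)          # rewind: the records are still there
replayed = log.poll("billing")  # the same three events again
```

In a delivery-then-discard broker, the second read would be impossible; here it is just a position reset.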
Throughput and scaling mechanics
Kafka scales by partitioning: partitions provide the unit of parallelism, distributing load across brokers and across consumer instances.
Traditional brokers can scale too, but the key difference is that Kafka’s scaling is explicitly built into the log structure and consumer group assignment, preserving per-partition order while distributing work.
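The two mechanics can be sketched as follows, with `zlib.crc32` standing in for Kafka's murmur2 key hash and simple round-robin standing in for the real group-assignment protocol (both are simplifications for illustration):

```python
import zlib


def partition_for(key: str, num_partitions: int) -> int:
    # Keyed partitioning: the same key always maps to the same partition.
    # crc32 is a stand-in for Kafka's murmur2-based default partitioner.
    return zlib.crc32(key.encode()) % num_partitions


def assign(partitions: list, consumers: list) -> dict:
    # Round-robin assignment: each partition goes to exactly one consumer
    # in the group, so adding consumers spreads the work out.
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment


assignment = assign(list(range(6)), ["c1", "c2", "c3"])
# {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

Because each partition has exactly one owner within a group, parallelism scales with partition count while per-partition order is preserved.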
Ordering guarantees and where they break
Kafka guarantees ordering within a partition. The documentation explains why ordering breaks in classic queue patterns under parallel consumption: asynchronous delivery to different consumers can reorder messages. Kafka avoids this by assigning each partition to exactly one consumer per group.
This is directly relevant in enterprise domains where ordering is a business requirement (e.g., payment events, inventory changes).
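A small sketch of why keyed partitioning preserves per-entity order (again with `crc32` as a stand-in hash): all events for one account land in one partition, in publish order, so the single consumer of that partition sees them in sequence.

```python
import zlib


def route(events, num_partitions):
    # Events with the same key hash to the same partition, so a single
    # consumer of that partition sees them in publish order.
    partitions = [[] for _ in range(num_partitions)]
    for key, payload in events:
        partitions[zlib.crc32(key.encode()) % num_partitions].append(payload)
    return partitions


events = [("acct-1", "deposit"), ("acct-2", "open"), ("acct-1", "withdraw")]
parts = route(events, 4)
acct1 = parts[zlib.crc32(b"acct-1") % 4]
# "deposit" precedes "withdraw" wherever acct-1's partition ends up
```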
Routing and integration trade-offs
Many traditional brokers excel at advanced routing patterns (topic exchanges, routing keys, dead-lettering as built-in constructs). Kafka ecosystems can implement routing and DLQ patterns too, but architectures often express them differently—via topic design conventions, stream processing, and governance rules.
Comparative guidance draws the conceptual distinction: RabbitMQ is positioned as a message broker, Kafka as a distributed streaming platform, each with different strengths across latency, throughput, scalability, and durability.
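The convention-based dead-letter pattern mentioned above can be sketched like this: an in-memory illustration using a hypothetical `<topic>.dlq` naming rule, since Kafka has no broker-level dead-letter construct.

```python
def consume_with_dlq(records, process, topics):
    # Convention-based dead-lettering: records that fail processing are
    # republished to "<topic>.dlq" (a naming convention, not a broker feature).
    for topic, record in records:
        try:
            process(record)
        except Exception as exc:
            topics.setdefault(topic + ".dlq", []).append(
                {"record": record, "error": str(exc)}
            )


def process(record):
    if record == "bad":
        raise ValueError("unparseable payload")


topics = {}
consume_with_dlq([("orders", "ok"), ("orders", "bad")], process, topics)
# the failed record now sits in "orders.dlq" with its error attached
```

In a real deployment the failed record would be produced to the DLQ topic with the error captured in headers; the convention and governance around the `.dlq` suffix are what the architecture defines.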
An enterprise decision lens
A simple architecture decision logic:
Choose Kafka when you need durable event history, replay, scalable pipelines, and stream processing as a first-class capability.
Choose classic brokers when complex routing and task-queue semantics are the primary concern and event history is not central to the architecture.
Frequently asked questions
Can Kafka behave like a queue?
Kafka documentation explicitly explains how consumer groups provide queue-like work sharing while still supporting multiple subscriber groups.
Kafka in the enterprise architecture context
Kafka is not just a messaging system — it is an architectural decision that reshapes how systems communicate, how data flows, and how teams organize. Enterprise architects must understand the second-order effects: integration topology changes from N×(N-1)/2 point-to-point connections to 2N topic-based connections, data flows become visible and governable through the topic catalog, and team structure shifts toward platform-plus-domain ownership.
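The integration-topology arithmetic works out as follows (a direct encoding of the formulas in the text):

```python
def point_to_point(n: int) -> int:
    # Every pair of services integrates directly: n * (n - 1) / 2 links.
    return n * (n - 1) // 2


def topic_based(n: int) -> int:
    # Each service connects to the platform twice: as producer and consumer.
    return 2 * n


# With 20 services: 190 point-to-point links versus 40 topic-based connections.
```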
Model Kafka infrastructure in the ArchiMate Technology Layer and the event-driven application architecture in the Application Layer. Use tagged values to track topic ownership, retention policies, and consumer dependencies. Build governance views that the architecture review board uses to approve new topics, review schema changes, and assess platform capacity.
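As one possible shape for those tagged values, a per-topic governance record might look like this. Every field name and value below is illustrative, not a Sparx EA schema or a prescribed standard.

```python
# Hypothetical record an architecture review board might keep per topic;
# all field names and values are assumptions for illustration.
topic_record = {
    "name": "payments.transactions.v1",
    "owner": "payments-domain-team",
    "retention_ms": 7 * 24 * 60 * 60 * 1000,  # 7 days
    "consumers": ["fraud-detection", "ledger", "reporting"],
    "schema_compatibility": "BACKWARD",
}
```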
Operational considerations
Kafka deployments require attention to operational fundamentals that are often underestimated during initial architecture decisions. Partition strategy determines consumer parallelism: too few partitions limit throughput, while too many create metadata overhead and lengthen leader elections during broker failures. A practical starting point: 3 partitions for low-volume topics, 6-12 for medium traffic, and 30+ only for topics exceeding 10,000 messages per second.
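These starting points can be captured as a small sizing helper. Note that the boundary between "low" and "medium" traffic (100 msg/s here) is an assumption, since the text gives no explicit threshold.

```python
def suggested_partitions(msgs_per_sec: float) -> int:
    # Encodes the rule-of-thumb starting points from the text.
    # The low/medium boundary (100 msg/s) is an assumption; tune per workload.
    if msgs_per_sec > 10_000:
        return 30  # high volume: 30+ partitions
    if msgs_per_sec > 100:
        return 6   # medium traffic: 6-12 partitions
    return 3       # low-volume topics
```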
Retention configuration directly affects storage costs and replay capability. Set retention per topic based on the business requirement: 7 days for operational events (sufficient for most consumer catch-up scenarios), 30 days for analytics events (covers monthly reporting cycles), and multi-year for regulated data (financial transactions, audit trails). Use tiered storage to move older data to object storage (S3, Azure Blob) automatically, reducing broker disk costs without losing replay capability.
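Expressed in terms of Kafka's per-topic `retention.ms` setting, that guidance translates roughly as follows; the 7-year figure for regulated data is an illustrative assumption, since the exact period is policy-driven.

```python
DAY_MS = 24 * 60 * 60 * 1000  # one day in milliseconds

# Approximate retention.ms values for the topic classes in the text.
RETENTION_MS = {
    "operational": 7 * DAY_MS,      # consumer catch-up window
    "analytics": 30 * DAY_MS,       # covers monthly reporting cycles
    "regulated": 7 * 365 * DAY_MS,  # multi-year; 7 years is an assumption
}
```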
Monitoring must cover three levels: cluster health (broker availability, partition balance, replication lag), application health (consumer group lag, producer error rates, throughput per topic), and business health (end-to-end event latency, data freshness at consumers, failed processing rates). Deploy Prometheus with JMX exporters for cluster metrics, integrate consumer lag monitoring into the platform team's alerting, and build business-level dashboards that domain teams can check independently.
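Consumer group lag, the central application-health metric above, is simply the gap between each partition's log-end offset and the group's committed offset; a sketch of the computation:

```python
def consumer_lag(log_end_offsets: dict, committed_offsets: dict) -> dict:
    # Lag per partition: how far the group's committed position trails
    # the newest offset in the log. A group with no commit starts at 0.
    return {
        p: log_end_offsets[p] - committed_offsets.get(p, 0)
        for p in log_end_offsets
    }


lag = consumer_lag({0: 1500, 1: 980}, {0: 1500, 1: 730})
# partition 0 is caught up; partition 1 trails by 250 records
```

Persistently growing lag on a partition is the alerting signal that a consumer instance is failing or under-provisioned.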
If you'd like hands-on training tailored to your team (Sparx Enterprise Architect, ArchiMate, TOGAF, BPMN, SysML, Apache Kafka, or the Archi tool), you can reach us via our contact page.
Frequently Asked Questions
What is enterprise architecture?
Enterprise architecture is a discipline that aligns an organisation's strategy, business operations, information systems, and technology infrastructure. It provides a structured framework for understanding how an enterprise works today, where it needs to go, and how to manage the transition.
How is ArchiMate used in enterprise architecture practice?
ArchiMate is used as the standard modeling language in enterprise architecture practice. It enables architects to create consistent, layered models covering business capabilities, application services, data flows, and technology infrastructure — all traceable from strategic goals to implementation.
What tools are used for enterprise architecture modeling?
Common enterprise architecture modeling tools include Sparx Enterprise Architect (Sparx EA), Archi, BiZZdesign Enterprise Studio, LeanIX, and Orbus iServer. Sparx EA is widely used for its ArchiMate, UML, BPMN and SysML support combined with powerful automation and scripting capabilities.