The Monolith Is Not Your Enemy
I want to tell you about two companies. Both were building products in the same market, around the same time, with similar funding and comparable engineering talent. Company A — my company — started with a monolith. Company B, our closest competitor, went straight to microservices on day one.
Three years later, Company A had shipped four major product lines and was processing millions of transactions daily. Company B had shipped one product line, was drowning in operational complexity, and eventually got acquired at a fraction of their peak valuation. Their CTO later told me, off the record, that premature microservice decomposition was the single biggest technical mistake they made.
This is not an argument against microservices. It is an argument against making architectural decisions based on what looks impressive rather than what solves your actual problems.
Why We Chose the Monolith
When I joined as CTO, the pressure to adopt microservices was immediate. Candidates in interviews would ask about our service architecture. Investors wanted to know we were building with "modern" patterns. The prevailing narrative in the industry was clear: monoliths are legacy technology for companies that do not know better.
I chose the monolith anyway. Here is why.
We did not know our domain boundaries yet. This is the most important reason, and it is the one most teams ignore. Microservices require you to define service boundaries, and service boundaries are really domain boundaries. If you draw those boundaries wrong — and you will, because you do not understand your domain well enough at the start — the cost of fixing them is enormous. Every boundary change means modifying inter-service communication, updating deployment configurations, migrating data between databases, and coordinating releases across teams.
In a monolith, refactoring a module boundary is a code change. In a microservice architecture, refactoring a service boundary is an infrastructure project.
We had a small team. We started with eight engineers. Microservices impose a per-service operational tax: each service needs its own CI pipeline, monitoring, alerting, logging, deployment configuration, and on-call rotation. With eight engineers, that overhead would have consumed more time than actual product development.
There is a rough heuristic I use: you need at least two to three engineers per service to maintain it effectively. If you have eight engineers and ten services, you have a problem.
We needed to move fast. Our competitive advantage was speed of execution, not architectural sophistication. A monolith let us share code trivially, refactor fearlessly, and deploy atomically. When a product manager said "can we add this feature?" the answer was almost always "yes, by next week" — because we were not negotiating API contracts between six different teams.
What a Well-Structured Monolith Looks Like
I want to be clear: choosing a monolith does not mean choosing chaos. A poorly structured monolith is genuinely terrible — a big ball of mud where everything depends on everything and changes in one area cause failures in unrelated areas.
We invested heavily in internal structure from the start. Our monolith was organized into clearly defined modules with explicit boundaries.
Module boundaries enforced in code. Each module had a public API — a set of interfaces that other modules could depend on — and an internal implementation that was off-limits. We used our language's module system to enforce this at compile time. If an engineer tried to import an internal class from another module, the build failed.
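As a concrete illustration (using Java's module system as a stand-in, since the exact stack does not matter here, and with invented module and package names), the enforcement looks roughly like this:

```java
// module-info.java for a hypothetical billing module. Only the api package
// is exported; anything under .internal is invisible to other modules, so
// importing it from elsewhere fails at compile time.
module com.example.billing {
    requires com.example.shared.platform; // shared infrastructure only
    exports com.example.billing.api;      // public interfaces and DTOs
    // com.example.billing.internal is deliberately not exported
}
```

Languages without a native module system can get the same guarantee from build tooling or architecture tests such as ArchUnit. The point is that the boundary is checked by the build, not by convention.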
Separate data ownership. Even within the monolith, each module owned its data. Modules communicated through their public APIs, not by reaching directly into each other's database tables. This was the single most important structural decision we made, because it meant that when we eventually did extract services, the data was already cleanly separated.
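A minimal sketch of the rule, again with hypothetical names: a module exports an interface for the data it owns, and no other module runs queries against its tables.

```java
import java.util.Optional;

// Exported from the billing module's public API package (names are
// illustrative). Other modules ask billing for data through this interface;
// the tables behind it belong to billing alone.
public interface BillingApi {

    record InvoiceSummary(long orderId, long amountCents, String status) {}

    // What other modules do:
    Optional<InvoiceSummary> invoiceForOrder(long orderId);

    // What they never do from their own code:
    //   SELECT * FROM billing_invoices WHERE order_id = ?
}
```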
Shared infrastructure, independent logic. Cross-cutting concerns like authentication, logging, and monitoring were shared infrastructure. Business logic was strictly modular. This gave us the operational simplicity of a single deployment while maintaining the logical separation of a distributed system.
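Here is a deliberately toy sketch, with invented names, of what that separation amounts to: cross-cutting infrastructure is constructed once in a single composition root, and each business module sees only that infrastructure plus other modules' public APIs.

```java
// Toy composition root for a modular monolith (all names are illustrative).
public final class Monolith {

    // Cross-cutting concerns, shared by every module.
    interface Auth { boolean allowed(String userId, String action); }
    interface Log  { void info(String message); }

    // Public APIs of two business modules.
    interface BillingApi { long invoiceTotalCents(long orderId); }
    interface OrdersApi  { void placeOrder(long orderId, String userId); }

    public static void main(String[] args) {
        // Shared infrastructure, built once (stand-in implementations).
        Auth auth = (user, action) -> true;
        Log log = System.out::println;

        // Business modules: independent logic behind public interfaces.
        BillingApi billing = orderId -> 4_200L;
        OrdersApi orders = (orderId, userId) -> {
            if (auth.allowed(userId, "place_order")) {
                log.info("order " + orderId + " total "
                        + billing.invoiceTotalCents(orderId) + " cents");
            }
        };

        orders.placeOrder(1L, "user-1"); // one process, one deployment
    }
}
```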
We called this approach a "modular monolith," and it gave us the best of both worlds for the first two years.
The Competitor's Cautionary Tale
Company B's story deserves telling in detail, because the failure mode is common and instructive.
They started with twelve microservices. By month six, they had twenty-eight. Each service had its own database, its own deployment pipeline, and its own technology stack — because one of the perceived benefits of microservices is "polyglot programming." So they had services in Node.js, Go, Python, and Java.
The problems cascaded quickly.
Debugging became archaeological. A single user request might touch eight services. When something went wrong, engineers had to correlate logs across multiple systems, trace requests through multiple network hops, and understand the interaction patterns between services they did not own. What would have been a 30-minute debugging session in a monolith became a multi-day investigation.
Data consistency was a nightmare. Because each service owned its data, operations that spanned multiple services required distributed transactions or eventual consistency patterns. They spent months building a saga orchestrator to handle multi-step business processes. In a monolith, those same processes would have been a single database transaction.
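For contrast, here is a simplified sketch (hypothetical tables and class names) of the same kind of multi-step process inside a monolith with one database: two writes, one transaction, no saga.

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical example: debit an account and record the order atomically.
// Split across two services with two databases, these steps would need a
// saga with compensating actions for partial failure.
public final class PlaceOrder {

    private final DataSource db;

    public PlaceOrder(DataSource db) {
        this.db = db;
    }

    public void place(long accountId, long orderId, long amountCents) throws SQLException {
        try (Connection conn = db.getConnection()) {
            conn.setAutoCommit(false);
            try (var debit = conn.prepareStatement(
                     "UPDATE accounts SET balance_cents = balance_cents - ? WHERE id = ?");
                 var insert = conn.prepareStatement(
                     "INSERT INTO orders (id, account_id, amount_cents) VALUES (?, ?, ?)")) {

                debit.setLong(1, amountCents);
                debit.setLong(2, accountId);
                debit.executeUpdate();

                insert.setLong(1, orderId);
                insert.setLong(2, accountId);
                insert.setLong(3, amountCents);
                insert.executeUpdate();

                conn.commit();   // both steps succeed together...
            } catch (SQLException e) {
                conn.rollback(); // ...or neither does
                throw e;
            }
        }
    }
}
```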
Testing was nearly impossible. To test a feature end-to-end, you needed a running instance of every service in the chain. Their local development environment required fourteen Docker containers and 32 GB of RAM. New engineers spent their first two days just getting the system to run on their machines.
Deployment coordination consumed enormous effort. Despite the promise that microservices enable independent deployment, in practice their services were tightly coupled. Deploying Service A required a compatible version of Service B, which required a specific version of Service C. They built an internal tool just to manage deployment ordering. They called it "the dependency orchestrator." It had its own bugs.
Hiring and onboarding suffered. Because they had four different programming languages in production, they needed specialists in each. Knowledge was siloed not just by domain but by technology stack. An engineer who was an expert in the Go services could not easily help with the Python services.
The cumulative effect was devastating. Their feature velocity, which had been comparable to ours in the early months, dropped to roughly one-third of ours by the end of year one. They were spending the majority of their engineering effort on operational overhead rather than product development.
When to Decompose
I am not arguing that you should run a monolith forever. There are real, legitimate reasons to extract services. The key is recognizing when you have those reasons versus when you are decomposing for fashion.
Decompose when you have proven domain boundaries. After running our monolith for eighteen months, we had a deep understanding of our domain. We knew which modules changed together and which were truly independent. We knew where the natural seams were. Our first extraction — pulling out the payment processing module into its own service — took two weeks and caused zero production incidents. It worked because we were formalizing a boundary that already existed, not inventing one.
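To show the shape of that kind of extraction (simplified, with hypothetical names and endpoints): the module's public interface survives unchanged, and only the implementation behind it turns into a remote call.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The interface callers already depended on inside the monolith.
public interface PaymentsApi {
    String chargeStatus(long paymentId);
}

// After extraction, the monolith swaps in this implementation; callers do
// not change. Endpoint paths and names are illustrative.
final class RemotePaymentsClient implements PaymentsApi {

    private final HttpClient http = HttpClient.newHttpClient();
    private final URI baseUri; // base URL of the extracted payments service

    RemotePaymentsClient(URI baseUri) {
        this.baseUri = baseUri;
    }

    @Override
    public String chargeStatus(long paymentId) {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(baseUri.resolve("/payments/" + paymentId + "/status"))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        } catch (IOException | InterruptedException e) {
            throw new IllegalStateException("payments service call failed", e);
        }
    }
}
```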
Decompose when you have independent scaling requirements. One of our modules handled real-time data processing and needed to scale horizontally during peak hours. The rest of the system had stable, predictable load. Extracting that module into its own service let us scale it independently without over-provisioning everything else.
Decompose when you have independent deployment requirements. Our core transaction processing module needed to be extremely stable with infrequent, carefully managed releases. Our customer-facing UI module needed to ship multiple times a day. Separating them let each team deploy at its natural cadence.
Decompose when you have team scaling challenges. At around 60 engineers, we started to feel the coordination overhead of working in a single codebase. Not because of the code, but because of the people. Merge conflicts increased. Build times grew. Teams stepped on each other. That was a legitimate signal to extract.
Our Current Architecture
Today, three years in, we have seven services extracted from the original monolith. The remaining monolith is still the largest single codebase and handles the majority of our business logic. And that is fine.
Each extraction was driven by a specific, measurable need. Each was executed only after the module had been running within the monolith long enough for us to be confident in its boundaries. And each came with the full operational investment: dedicated CI/CD pipeline, monitoring, alerting, on-call rotation, and documentation.
We spend about 20% of our infrastructure engineering effort on the operational overhead of those seven services. That is manageable and justified. If we had twenty-eight services like our competitor, that overhead would be crippling.
The Decision Framework
If you are starting a new project and trying to decide between a monolith and microservices, here is the framework I use.
Start with a monolith if you have fewer than 40 engineers, if your domain is not yet well-understood, or if speed of iteration is your primary competitive advantage. Invest in internal modularity from day one. Enforce module boundaries, separate data ownership, and keep your modules loosely coupled.
Consider extracting a service when you can articulate a specific, measurable benefit — independent scaling, independent deployment cadence, or team autonomy at scale. If the only reason is "microservices are best practice," that is not a reason. That is peer pressure.
The monolith is not your enemy. Premature complexity is. The best architecture is the simplest one that solves your current problems while leaving the door open for future evolution. For most teams, at most stages of growth, that architecture is a well-structured monolith.
The industry's obsession with microservices has caused more damage than the technical debt it was supposed to prevent. Build what you need. Decompose when you must. And never let conference talks dictate your architecture.