Microservices Are Overrated for Most Startups

A contrarian but technically grounded case for starting with a well-structured monolith. Distributed transaction costs, network latency math, observability overhead, and when to actually break services apart.

Microservices vs Monolith: The "Complexity Tax"

Mental Model

Connecting isolated components into a resilient, scalable, and observable distributed web, and understanding what that wiring actually costs.

The choice between a monolith and microservices is one of the most debated topics in modern engineering. While microservices are the standard at FAANG-scale companies, they can be a death sentence for an early-stage startup.

1. Visualizing the Architecture

graph LR
    subgraph Monolith
        M[Unified App Code] --- DB[(Single DB)]
    end
    
    subgraph Microservices
        S1[Auth Service] --- DB1[(Auth DB)]
        S2[Order Service] --- DB2[(Order DB)]
        S3[Payment Service] --- DB3[(Payment DB)]
        S1 <--> S2
        S2 <--> S3
    end

2. The Complexity Tax

A call inside a monolith is an in-process method call, measured in nanoseconds. A call between microservices is a network request, typically a few milliseconds once serialization, TLS, and queuing are included. That latency increase of several orders of magnitude, paid on every hop, must be justified by real scale; the sketch after the table below makes the trade-off concrete.

Feature          Monolith                  Microservices
Deployment       Simple (single unit)      Complex (K8s/Docker)
Consistency      ACID (single DB)          Eventual (Sagas/2PC)
Observability    Easy (single log)         Hard (Tracing/ELK)
Scaling          All or nothing            Fine-grained
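
To make the tax concrete, here is a minimal Java sketch contrasting the two call styles. The pricing-service host, port, and endpoint are illustrative assumptions; the point is that the remote variant needs an HTTP client, serialization, and an explicit timeout before it is even safe to call.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ComplexityTax {

    // Monolith: an in-process method call. Nanoseconds, and the only failure mode is an exception.
    static long priceLocally(long orderId) {
        return orderId * 10;
    }

    // Microservices: the same logic behind HTTP. Milliseconds per hop, plus DNS, serialization,
    // retries, and partial failure. A missing timeout here is a latent outage.
    static String priceRemotely(HttpClient client, long orderId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://pricing-service:8080/price/" + orderId)) // hypothetical service
                .timeout(Duration.ofMillis(200))
                .GET()
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(priceLocally(42));

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofMillis(100))
                .build();
        System.out.println(priceRemotely(client, 42)); // fails fast if pricing-service is unreachable
    }
}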

3. The Migration Path: Strangler Fig Pattern

Don't rewrite. "Strangle" the monolith by gradually moving features behind an API gateway and into new services (a routing sketch follows the diagram).

graph TD
    User((User)) --> Gateway[API Gateway]
    Gateway -- 1. Legacy Path --> Monolith[Monolith]
    Gateway -- 2. New Path --> S1[New Service A]
    Gateway -- 3. New Path --> S2[New Service B]
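
In code, the gateway is mostly route ordering. Below is a minimal sketch assuming Spring Cloud Gateway, with hypothetical orders-service, payments-service, and legacy-monolith hosts: routes for already-migrated features are matched first, and everything else keeps falling through to the monolith.

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StranglerFigRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // New path: features already carved out of the monolith
                .route("orders", r -> r.path("/api/orders/**")
                        .uri("http://orders-service:8080"))
                .route("payments", r -> r.path("/api/payments/**")
                        .uri("http://payments-service:8080"))
                // Legacy path: everything not yet migrated falls through to the monolith
                .route("legacy", r -> r.path("/**")
                        .uri("http://legacy-monolith:8080"))
                .build();
    }
}

As each feature moves, its route simply stops pointing at the monolith; the client never notices.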

4. Engineering Standard: The "Staff" Perspective

In high-throughput distributed systems, the code we write is often the easiest part. The difficulty lies in how that code interacts with other components in the stack.

Data Integrity and the "P" in CAP

Whenever you are dealing with state (Databases, Caches, or In-memory stores), you must account for Network Partitions. In a standard Java microservice, we often choose Availability (AP) by using Eventual Consistency patterns. However, for financial ledgers, we must enforce Strong Consistency (CP), which usually involves distributed locks or a strictly linearizable sequence.
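
As a minimal sketch of the CP side, here is a ledger transfer while both accounts still live in one relational database (the accounts table and its columns are assumptions). The row lock serializes concurrent transfers inside a single ACID transaction; once the accounts are split across services, the same operation becomes a saga or 2PC, which is exactly the complexity tax described above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LedgerTransfer {

    public static void transfer(Connection conn, long fromId, long toId, long cents) throws Exception {
        conn.setAutoCommit(false); // one ACID transaction = strong consistency
        try {
            // Lock the source row so concurrent transfers on this account serialize
            try (PreparedStatement lock = conn.prepareStatement(
                    "SELECT balance_cents FROM accounts WHERE id = ? FOR UPDATE")) {
                lock.setLong(1, fromId);
                try (ResultSet rs = lock.executeQuery()) {
                    if (!rs.next() || rs.getLong(1) < cents) {
                        throw new IllegalStateException("insufficient funds");
                    }
                }
            }
            try (PreparedStatement debit = conn.prepareStatement(
                    "UPDATE accounts SET balance_cents = balance_cents - ? WHERE id = ?")) {
                debit.setLong(1, cents);
                debit.setLong(2, fromId);
                debit.executeUpdate();
            }
            try (PreparedStatement credit = conn.prepareStatement(
                    "UPDATE accounts SET balance_cents = balance_cents + ? WHERE id = ?")) {
                credit.setLong(1, cents);
                credit.setLong(2, toId);
                credit.executeUpdate();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}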

The Observability Pillars

Writing logic without observability is like flying a plane without a dashboard. Every production service must implement the three pillars below (a minimal sketch follows the list):

  • Tracing (OpenTelemetry): Track a single request across 50 microservices.
  • Metrics (Prometheus): Monitor Heap usage, Thread saturation, and P99 latencies.
  • Structured Logging (ELK/Splunk): Never log raw strings; use JSON so you can query logs like a database.
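
A minimal sketch of the tracing pillar using the OpenTelemetry Java API (service and span names are illustrative; exporter wiring and the JSON log encoder are configuration and not shown):

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class CheckoutHandler {

    private final Tracer tracer = GlobalOpenTelemetry.getTracer("order-service");

    public void checkout(long orderId) {
        // One span per unit of work; the trace/span IDs propagate to downstream calls
        Span span = tracer.spanBuilder("checkout").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("order.id", orderId); // recorded on the span, queryable in the tracing backend
            chargeCard(orderId);
        } catch (RuntimeException e) {
            span.recordException(e);
            span.setStatus(StatusCode.ERROR);
            throw e;
        } finally {
            span.end();
        }
    }

    private void chargeCard(long orderId) {
        // downstream HTTP/DB calls made here inherit the current span as their parent
    }
}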

5. Critical Interview Nuance

When an interviewer asks "When should we move to microservices?", focus on Team Scaling, not just Technical Scaling.

The Staff Answer: "I would only recommend microservices when the organizational overhead of a monolith becomes the primary bottleneck. This usually happens when 50+ developers are working in a single repository, leading to constant merge conflicts and deployment gridlock. Technically, I look for modules with radically different resource profiles—like an image processing module that needs 10x more CPU than the rest of the app. Until then, a Modulith (Structured Monolith) provides 90% of the benefits with 10% of the operational cost."

6. Optimization Summary for High-Load Systems

  1. Reduce Context Switching: Use non-blocking I/O (Netty or virtual threads via Project Loom).
  2. Minimize GC Pressure: Prefer primitive-specialized collections over collections of boxed objects.
  3. Data Sharding: Use Consistent Hashing to avoid "Hot Shards" (see the sketch after this list).
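
A minimal sketch of point 3: a consistent-hash ring built on a sorted map, with virtual nodes so keys spread evenly and only roughly 1/N of the keys move when a node joins or leaves. The hash function choice and node naming scheme are assumptions.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {

    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    public void addNode(String node) {
        // Each physical node owns many positions on the ring to smooth out the distribution
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove(hash(node + "#" + i));
        }
    }

    public String nodeFor(String key) {
        if (ring.isEmpty()) throw new IllegalStateException("no nodes registered");
        // First ring position at or after the key's hash; wrap around to the start if none
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xFF); // first 8 bytes as ring position
            return h;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

Usage is a single lookup per key, e.g. ring.nodeFor("user:42") returns the shard that owns that user.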

Key Takeaways

  • Default to a well-structured monolith (a "Modulith"): a single deployable with ACID transactions covers most startups for years.
  • Every service boundary trades a method call for a network request and ACID for eventual consistency; that complexity tax is paid in infrastructure, tooling, and on-call load.
  • When you do split, don't rewrite: strangle the monolith behind an API gateway, one feature at a time.
  • The real trigger for microservices is team scaling (merge conflicts, deployment gridlock), not just technical scaling.

Production Readiness Checklist

Before deploying this architecture to a production environment, ensure the following Staff-level criteria are met:

  • High Availability: Have we eliminated single points of failure across all layers?
  • Observability: Are we exporting structured JSON logs, custom Prometheus metrics, and OpenTelemetry traces?
  • Circuit Breaking: Do all synchronous service-to-service calls have timeouts and fallbacks (e.g., via Resilience4j)? See the sketch after this checklist.
  • Idempotency: Can our APIs handle retries safely without causing duplicate side effects?
  • Backpressure: Does the system gracefully degrade or return HTTP 429 when resources are saturated?
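
For the circuit-breaking item above, a minimal Resilience4j sketch (the thresholds and the payment-service call are assumptions): the breaker wraps the remote call, and the caller falls back when the call fails or the breaker is open.

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;

import java.time.Duration;
import java.util.function.Supplier;

public class PaymentClient {

    private final CircuitBreaker breaker = CircuitBreaker.of("payment-service",
            CircuitBreakerConfig.custom()
                    .failureRateThreshold(50)                         // open after 50% failures...
                    .slidingWindowSize(20)                            // ...over the last 20 calls
                    .waitDurationInOpenState(Duration.ofSeconds(30))  // probe again after 30s
                    .build());

    public String charge(long orderId) {
        Supplier<String> decorated = CircuitBreaker.decorateSupplier(breaker,
                () -> callPaymentServiceOverHttp(orderId));           // the call itself still needs a timeout
        try {
            return decorated.get();
        } catch (Exception e) {
            return fallback(orderId);                                 // e.g. queue for retry, degrade gracefully
        }
    }

    private String callPaymentServiceOverHttp(long orderId) {
        throw new UnsupportedOperationException("HTTP call omitted in this sketch");
    }

    private String fallback(long orderId) {
        return "PENDING";
    }
}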

Verbal Interview Script

Interviewer: "How would you ensure high availability and fault tolerance for this specific architecture?"

Candidate: "To achieve 'Five Nines' (99.999%) availability, we must eliminate all Single Points of Failure (SPOF). I would deploy the API Gateway and stateless microservices across multiple Availability Zones (AZs) behind an active-active load balancer. For the data layer, I would use asynchronous replication to a read-replica in a different region for disaster recovery. Furthermore, it's not enough to just deploy redundantly; we must protect the system from cascading failures. I would implement strict timeouts, retry mechanisms with exponential backoff and jitter, and Circuit Breakers (using a library like Resilience4j) on all synchronous network calls between microservices."
