
System Design: Designing a Distributed Search Engine (Elasticsearch)

How does a search engine query billions of documents in milliseconds? A technical deep dive into Web Crawling, Inverted Indexes, and Ranking.


Case Study: Design a Search System (Google-like)

Mental Model

Connecting isolated components into a resilient, scalable, and observable distributed web.

Designing a search engine at scale requires more than a LIKE '%word%' query. It involves three pillars: Web Crawling, Indexing, and Ranking.

1. Requirement Clarification

graph LR
    Crawler[Web Crawler] -->|Fetch Pages| Blob[(Blob Storage)]
    Blob -->|Raw Documents| Indexer[Indexer Service]
    Indexer -->|Build| Index[(Inverted Index)]
    User[User Query] --> Searcher[Searcher Service]
    Searcher -->|Lookup + Rank| Index

Functional

  • Crawl the web and find new pages.
  • Index the content of these pages.
  • Provide a search interface to find pages by keywords.

Non-Functional

  • Low Latency: Sub-second search results.
  • Freshness: Index should update within hours of a page changing.
  • Scalability: Handle billions of documents and queries.

2. The Core Component: Inverted Index

Standard DBs keep a forward index, mapping Document -> Words. An Inverted Index flips this, mapping Word -> List of Documents (the "posting list").

  • Example: java -> [doc1, doc42, doc500].
  • This allows an average-case $O(1)$ hash lookup of any keyword's posting list, as in the sketch below.
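
To make this concrete, here is a minimal in-memory inverted index in Java. It is a sketch of the data structure only (real engines like Lucene add analysis pipelines, term frequencies, and on-disk segments); the class and method names are illustrative:

import java.util.*;

// Minimal in-memory inverted index: maps each term to the set of
// document IDs that contain it (its "posting list").
public class InvertedIndex {
    private final Map<String, Set<Integer>> postings = new HashMap<>();

    // Tokenize a document and record each term -> docId mapping.
    public void index(int docId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            if (token.isEmpty()) continue;
            postings.computeIfAbsent(token, t -> new TreeSet<>()).add(docId);
        }
    }

    // Average-case O(1) hash lookup of a term's posting list.
    public Set<Integer> search(String term) {
        return postings.getOrDefault(term.toLowerCase(), Collections.emptySet());
    }

    public static void main(String[] args) {
        InvertedIndex idx = new InvertedIndex();
        idx.index(1, "Java concurrency in practice");
        idx.index(42, "Effective Java");
        idx.index(500, "Java performance tuning");
        System.out.println(idx.search("java")); // [1, 42, 500]
    }
}

Note that all the expensive work (tokenizing, building the map) happens at index time; the query itself is a single hash lookup.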

3. High-Level Architecture

  1. Crawler: Fetches pages and stores them in Blob Storage.
  2. Indexer: Tokenizes text and builds the Inverted Index.
  3. Searcher: Resolves queries using the index and calculates rankings.

4. Ranking (PageRank)

Not all pages are equal. We use a Ranking Service that scores each result on:

  • Keyword Density (Relevance): How often the word appears relative to the page's length.
  • Links (Authority): How many reputable sites link to this page; this is the signal PageRank measures, as in the sketch below.
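
A toy sketch of the iterative PageRank computation in Java; the three-page link graph and the 20-iteration cutoff are illustrative assumptions, while 0.85 is the conventional damping factor:

import java.util.*;

public class PageRank {
    public static void main(String[] args) {
        // Hypothetical link graph: page -> pages it links out to.
        Map<String, List<String>> links = new HashMap<>();
        links.put("a", List.of("b", "c"));
        links.put("b", List.of("c"));
        links.put("c", List.of("a"));

        double d = 0.85;              // damping factor
        int n = links.size();
        Map<String, Double> rank = new HashMap<>();
        for (String p : links.keySet()) rank.put(p, 1.0 / n);

        // Repeatedly redistribute each page's rank across its out-links.
        for (int iter = 0; iter < 20; iter++) {
            Map<String, Double> next = new HashMap<>();
            for (String p : links.keySet()) next.put(p, (1 - d) / n);
            for (Map.Entry<String, List<String>> e : links.entrySet()) {
                double share = rank.get(e.getKey()) / e.getValue().size();
                for (String target : e.getValue()) {
                    next.merge(target, d * share, Double::sum);
                }
            }
            rank = next;
        }
        System.out.println(rank); // "c" ranks highest: two pages link to it
    }
}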

Final Takeaway

Search systems are about Data Pre-processing. The heavy lifting happens in the Indexer so that the Searcher can be lightning fast.

Engineering Standard: The "Staff" Perspective

In high-throughput distributed systems, the code we write is often the easiest part. The difficulty lies in how that code interacts with other components in the stack.

1. Data Integrity and The "P" in CAP

Whenever you are dealing with state (Databases, Caches, or In-memory stores), you must account for Network Partitions. In a standard Java microservice, we often choose Availability (AP) by using Eventual Consistency patterns. However, for financial ledgers, we must enforce Strong Consistency (CP), which usually involves distributed locks (Redis Redlock or ZooKeeper) or a strictly linearizable sequence.
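
To make the CP side concrete, here is a simplified single-instance Redis lock using the Jedis client. This is a sketch only, not the full multi-node Redlock algorithm; the key name, TTL, and non-atomic unlock are illustrative simplifications:

import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

// Simplified single-instance Redis lock (NOT full Redlock: no quorum
// across multiple Redis nodes, so it tolerates fewer failure modes).
public class RedisLock {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String lockKey = "lock:ledger:account-42";    // hypothetical resource
            String token = UUID.randomUUID().toString();  // proves ownership

            // SET key value NX PX 30000: acquire only if absent, auto-expire.
            String ok = jedis.set(lockKey, token, SetParams.setParams().nx().px(30_000));
            if ("OK".equals(ok)) {
                try {
                    // ... perform the strongly consistent write here ...
                } finally {
                    // Release only if we still own the lock. In production this
                    // compare-and-delete must be a Lua script to be atomic.
                    if (token.equals(jedis.get(lockKey))) {
                        jedis.del(lockKey);
                    }
                }
            }
        }
    }
}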

2. The Observability Pillar

Writing logic without observability is like flying a plane without a dashboard. Every production service must implement:

  • Tracing (OpenTelemetry): Track a single request across 50 microservices.
  • Metrics (Prometheus): Monitor Heap usage, Thread saturation, and P99 latencies.
  • Structured Logging (ELK/Splunk): Never log raw strings; use JSON so you can query logs like a database (see the sketch after this list).
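
A minimal sketch of that correlation using SLF4J's MDC; the traceId field name and the JSON encoder on the logging backend (e.g. logstash-logback-encoder) are assumptions, not part of this lesson's stack:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

// Attach request-scoped context (trace ID) to every log line via MDC;
// a JSON encoder then emits it as a queryable field on each log event.
public class OrderHandler {
    private static final Logger log = LoggerFactory.getLogger(OrderHandler.class);

    public void handle(String traceId, String orderId) {
        MDC.put("traceId", traceId); // picked up by the logging layout
        try {
            log.info("order_received orderId={}", orderId);
            // ... business logic ...
            log.info("order_processed orderId={}", orderId);
        } finally {
            MDC.clear(); // avoid leaking context across pooled threads
        }
    }
}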

3. Production Incident Prevention

To survive a 3:00 AM incident, we use the patterns below (sketched in code after this list):

  • Circuit Breakers: Stop the bleeding if a downstream service is down.
  • Bulkheads: Isolate thread pools so one failing endpoint doesn't crash the entire app.
  • Retries with Exponential Backoff: Avoid the "Thundering Herd" problem when a service comes back online.
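
Here is a minimal sketch wiring a Circuit Breaker and a jittered Retry together with Resilience4j (the library the interview script below also names); the thresholds, backoff values, and service name are illustrative assumptions:

import java.time.Duration;
import java.util.function.Supplier;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.core.IntervalFunction;
import io.github.resilience4j.retry.Retry;
import io.github.resilience4j.retry.RetryConfig;

public class ResilientClient {
    public static void main(String[] args) {
        // Open the circuit when 50% of recent calls fail; probe again after 30s.
        CircuitBreaker breaker = CircuitBreaker.of("searchService",
            CircuitBreakerConfig.custom()
                .failureRateThreshold(50)
                .waitDurationInOpenState(Duration.ofSeconds(30))
                .build());

        // Exponential backoff WITH randomization (jitter) to avoid
        // the thundering herd when the downstream service recovers.
        Retry retry = Retry.of("searchService",
            RetryConfig.custom()
                .maxAttempts(3)
                .intervalFunction(IntervalFunction.ofExponentialRandomBackoff(200, 2.0))
                .build());

        Supplier<String> call = ResilientClient::callDownstream;
        Supplier<String> guarded =
            Retry.decorateSupplier(retry, CircuitBreaker.decorateSupplier(breaker, call));

        System.out.println(guarded.get());
    }

    static String callDownstream() {
        return "200 OK"; // placeholder for the real synchronous network call
    }
}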

Critical Interview Nuance

When an interviewer asks you about this topic, don't just explain the code. Explain the Trade-offs. A Staff Engineer is someone who knows that every architectural decision is a choice between two "bad" outcomes. You are picking the one that aligns with the business goal.

Performance Checklist for High-Load Systems (a batching sketch follows the list):

  1. Minimize Object Creation: Use primitive arrays and reusable buffers.
  2. Batching: Group 1,000 small writes into 1 large batch to save I/O cycles.
  3. Async Processing: If the user doesn't need the result immediately, move it to a Message Queue (Kafka/SQS).
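
As an illustration of the batching item, here is a hypothetical buffered writer; the batch size and the bulkPersist sink are assumptions:

import java.util.ArrayList;
import java.util.List;

// Hypothetical write batcher: buffer small writes and flush them
// as one bulk I/O operation once the batch is full.
public class BatchWriter {
    private static final int BATCH_SIZE = 1_000;
    private final List<String> buffer = new ArrayList<>(BATCH_SIZE);

    public synchronized void write(String record) {
        buffer.add(record);
        if (buffer.size() >= BATCH_SIZE) {
            flush();
        }
    }

    public synchronized void flush() {
        if (buffer.isEmpty()) return;
        // One bulk call instead of 1,000 tiny ones (e.g. a bulk index
        // request or a multi-row INSERT). Real batchers also flush on
        // a timer so a half-full buffer does not sit forever.
        bulkPersist(new ArrayList<>(buffer));
        buffer.clear();
    }

    private void bulkPersist(List<String> batch) {
        System.out.println("persisting " + batch.size() + " records in one I/O call");
    }
}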

Technical Trade-offs: Messaging Systems

Pattern                      | Ordering               | Durability | Throughput | Complexity
Log-based (Kafka)            | Strict (per partition) | High       | Very High  | High
Memory-based (Redis Pub/Sub) | None                   | Low        | High       | Very Low
Push-based (RabbitMQ)        | Per-queue (FIFO)       | Medium     | Medium     | Medium

Key Takeaways

  • An Inverted Index (Word -> List of Documents) is what makes keyword lookups fast at scale.
  • Do the heavy lifting at index time (crawl, tokenize, build the index) so the Searcher stays lightning fast.
  • Rank results by combining relevance (keyword density) with authority (links/PageRank).

Verbal Interview Script

Interviewer: "How would you ensure high availability and fault tolerance for this specific architecture?"

Candidate: "To achieve 'Five Nines' (99.999%) availability, we must eliminate all Single Points of Failure (SPOF). I would deploy the API Gateway and stateless microservices across multiple Availability Zones (AZs) behind an active-active load balancer. For the data layer, I would use asynchronous replication to a read-replica in a different region for disaster recovery. Furthermore, it's not enough to just deploy redundantly; we must protect the system from cascading failures. I would implement strict timeouts, retry mechanisms with exponential backoff and jitter, and Circuit Breakers (using a library like Resilience4j) on all synchronous network calls between microservices."
