System Design: Designing a Database Proxy for Sharding
Mental Model
The database is the system's source of truth, where persistence, consistency, and retrieval speed must be balanced. A proxy lets you scale that source of truth horizontally without leaking the complexity into application code.
Scaling a relational database like MySQL or PostgreSQL is one of the hardest challenges in engineering. When a single database server can't handle the load, you must Shard (partition) your data. But sharding manually in your application code is a nightmare. This is why we need a Database Proxy.
1. What is a Database Proxy?
```mermaid
graph LR
App[Application] -->|SQL| Proxy[Database Proxy]
Proxy -->|shard key routing| Shard1[(Shard 1)]
Proxy --> Shard2[(Shard 2)]
Proxy --> ShardN[(Shard N)]
```
A proxy (like Vitess, ProxySQL, or Prisma Data Proxy) sits between your application and your database nodes. The application talks to the proxy as if it were a single, giant database, and the proxy handles the complexity of sharding, routing, and replication in the background.
2. The Sharding Coordinator
The proxy's most important job is Routing.
- The Logic: You define a "Shard Key" (e.g., `user_id`). When the application runs `SELECT * FROM users WHERE user_id = 123`, the proxy calculates `shard = hash(123) % N` and routes the query to the correct physical server (see the routing sketch after this list).
- Cross-Shard Queries: If a query doesn't include the shard key, the proxy must "Scatter-Gather": sending the query to all shards and merging the results.
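To make the routing rule concrete, here is a minimal Java sketch of modulo-based shard routing. The `ShardRouter` class and its list of JDBC URLs are illustrative names, not any real proxy's API:

```java
// Illustrative modulo-based shard routing; ShardRouter and the JDBC URL
// list are hypothetical, not part of any real proxy's API.
import java.util.List;

public class ShardRouter {
    private final List<String> shardUrls; // one JDBC URL per physical shard

    public ShardRouter(List<String> shardUrls) {
        this.shardUrls = shardUrls;
    }

    /** Routes a query that contains the shard key (e.g. user_id). */
    public String routeByKey(long shardKey) {
        // Math.floorMod keeps the index non-negative even for negative hashes.
        int shard = Math.floorMod(Long.hashCode(shardKey), shardUrls.size());
        return shardUrls.get(shard);
    }

    /** No shard key present: scatter-gather must query every shard and merge. */
    public List<String> routeToAll() {
        return shardUrls;
    }
}
```

Worth flagging in an interview: plain `hash % N` reshuffles almost every key whenever `N` changes, which is why real proxies lean on consistent hashing for resharding (see the optimization summary later in this piece).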
3. Connection Pooling at Scale
Opening a new database connection is expensive (handshakes, authentication).
- The Problem: 10,000 application containers each opening 10 connections = 100,000 connections, far beyond what a single MySQL server can serve (each connection consumes memory, and the server caps them via `max_connections`).
- The Solution: The proxy maintains a small, fixed pool of persistent connections to each database node and multiplexes application requests over them. This allows thousands of app instances to share a handful of DB connections, as in the sketch below.
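A hedged sketch of the pooling idea using plain JDBC. `FixedConnectionPool` is a toy name, and a real proxy multiplexes at the MySQL wire-protocol level rather than handing out whole `Connection` objects:

```java
// Toy fixed-size pool: many callers share a handful of persistent
// backend connections instead of each opening their own.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FixedConnectionPool {
    private final BlockingQueue<Connection> pool;

    public FixedConnectionPool(String jdbcUrl, String user, String password,
                               int size) throws SQLException {
        this.pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            // The pool holds a small, fixed number of persistent connections.
            pool.add(DriverManager.getConnection(jdbcUrl, user, password));
        }
    }

    /** Blocks until a backend connection frees up; callers share the pool. */
    public Connection acquire() throws InterruptedException {
        return pool.take();
    }

    /** Hands the connection back so another request can reuse it. */
    public void release(Connection connection) {
        pool.offer(connection);
    }
}
```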
4. Query Rewriting and Optimization
A smart proxy can improve performance without changing application code:
- Query Sanitization: Blocking slow or dangerous queries (e.g., `SELECT *` without a `LIMIT`).
- Read-Write Splitting: Automatically routing `SELECT` queries to Read Replicas and `INSERT`/`UPDATE` statements to the Primary node. Both rules are sketched below.
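A minimal sketch of both rules, assuming queries arrive as raw SQL strings. The `QueryRouter` name and its naive string matching are illustrative only; real proxies parse the SQL into an AST before deciding:

```java
// Naive sanitization + read-write splitting on raw SQL strings.
import java.util.Locale;

public class QueryRouter {
    public enum Target { PRIMARY, REPLICA, REJECTED }

    public Target route(String sql) {
        String q = sql.strip().toLowerCase(Locale.ROOT);
        // Sanitization: reject an unbounded full scan (SELECT * with no LIMIT).
        if (q.startsWith("select *") && !q.matches(".*\\blimit\\b.*")) {
            return Target.REJECTED;
        }
        // Read-write splitting: reads go to a replica, writes to the primary.
        return q.startsWith("select") ? Target.REPLICA : Target.PRIMARY;
    }
}
```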
5. Handling Database Failovers
When a primary database node dies, the proxy detects the failure through continuous health checks, typically within seconds.
- Automatic Routing: The proxy redirects all traffic to a promoted replica. Ideally the application never sees a connection error, only a brief latency spike. A rough sketch of the detection loop follows.
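This sketch assumes a bare TCP reachability probe on port 3306 and a caller-supplied `promoteReplica` callback, both simplifications; real proxies use protocol-level pings and coordinate promotion through a consensus-backed topology store:

```java
// Simplified primary health monitor: probe, and on failure invoke the
// caller-supplied promotion hook.
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FailoverMonitor {
    private final String primaryHost;

    public FailoverMonitor(String primaryHost) {
        this.primaryHost = primaryHost;
    }

    public void start(Runnable promoteReplica) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try (Socket probe = new Socket()) {
                // 500 ms connect timeout so a dead primary fails fast.
                probe.connect(new InetSocketAddress(primaryHost, 3306), 500);
            } catch (Exception e) {
                promoteReplica.run(); // reroute traffic to the promoted replica
                scheduler.shutdown(); // a real monitor would track the new primary
            }
        }, 0, 2, TimeUnit.SECONDS);
    }
}
```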
6. Real-world Architectures: Vitess
Vitess (used by YouTube and Slack) takes this further by adding:
- VTGate: The proxy layer.
- VTTablet: A sidecar that runs alongside every MySQL instance to monitor health and enforce query limits.
- Topology Store: A Zookeeper/Etcd cluster that stores the global sharding map.
Summary
Building a database proxy is about Abstracting Complexity. By moving sharding and connection management into a dedicated infrastructure layer, you can scale your relational data to millions of users while keeping your application code clean and simple.
Engineering Standard: The "Staff" Perspective
In high-throughput distributed systems, the code we write is often the easiest part. The difficulty lies in how that code interacts with other components in the stack.
1. Data Integrity and The "P" in CAP
Whenever you are dealing with state (Databases, Caches, or In-memory stores), you must account for Network Partitions. In a standard Java microservice, we often choose Availability (AP) by using Eventual Consistency patterns. However, for financial ledgers, we must enforce Strong Consistency (CP), which usually involves distributed locks (Redis Redlock or Zookeeper) or a strictly linearizable sequence.
2. The Observability Pillar
Writing logic without observability is like flying a plane without a dashboard. Every production service must implement:
- Tracing (OpenTelemetry): Track a single request across 50 microservices.
- Metrics (Prometheus): Monitor Heap usage, Thread saturation, and P99 latencies.
- Structured Logging (ELK/Splunk): Never log raw strings; use JSON so you can query logs like a database.
3. Production Incident Prevention
To survive a 3:00 AM incident, we use:
- Circuit Breakers: Stop the bleeding if a downstream service is down.
- Bulkheads: Isolate thread pools so one failing endpoint doesn't crash the entire app.
- Retries with Exponential Backoff: Avoid the "Thundering Herd" problem when a service comes back online.
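As a concrete example of the third point, here is a minimal retry helper with exponential backoff and full jitter. `Retry.withBackoff` is an invented utility name; production Java code would typically use Resilience4j's retry module instead:

```java
// Retry with exponential backoff and full jitter.
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

public final class Retry {
    private Retry() {}

    public static <T> T withBackoff(Callable<T> call, int maxAttempts,
                                    long baseDelayMs) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // out of budget: surface the failure
                }
                // Full jitter: sleep a random time in [0, base * 2^attempt).
                // Randomizing spreads retries out so a recovering service
                // isn't hit by a thundering herd of synchronized clients.
                long cap = baseDelayMs << attempt;
                Thread.sleep(ThreadLocalRandom.current().nextLong(cap));
            }
        }
    }
}
```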
Critical Interview Nuance
When an interviewer asks you about this topic, don't just explain the code. Explain the Trade-offs. A Staff Engineer is someone who knows that every architectural decision is a choice between two "bad" outcomes. You are picking the one that aligns with the business goal.
Performance Checklist for High-Load Systems:
- Minimize Object Creation: Use primitive arrays and reusable buffers.
- Batching: Group 1,000 small writes into 1 large batch to save I/O cycles (see the JDBC sketch after this list).
- Async Processing: If the user doesn't need the result immediately, move it to a Message Queue (Kafka/SQS).
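The batching point maps directly onto the standard JDBC batch API. A sketch, assuming a hypothetical `events(user_id, event_ts)` table:

```java
// One executeBatch() round trip replaces rows.size() separate writes.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchWriter {
    public void writeAll(Connection conn, List<long[]> rows) throws SQLException {
        String sql = "INSERT INTO events (user_id, event_ts) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (long[] row : rows) {
                ps.setLong(1, row[0]);
                ps.setLong(2, row[1]);
                ps.addBatch(); // buffer the write locally instead of sending it
            }
            ps.executeBatch(); // flush the whole batch in one I/O cycle
        }
    }
}
```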
Advanced Architectural Blueprint: The Staff Perspective
In modern high-scale engineering, the primary differentiator between a Senior and a Staff Engineer is the ability to see beyond the local code and understand the Global System Impact. This section provides the exhaustive architectural context required to operate this component at a "MANG" (Meta, Amazon, Netflix, Google) scale.
1. High-Availability and Disaster Recovery (DR)
Every component in a production system must be designed for failure. If this component resides in a single availability zone, it is a liability.
- Multi-Region Active-Active: To achieve "Five Nines" (99.999%) availability, we replicate state across geographical regions using asynchronous replication or global consensus (Paxos/Raft).
- Chaos Engineering: We regularly inject "latency spikes" and "node kills" using tools like Chaos Mesh to ensure the system gracefully degrades without a total outage.
2. The Data Integrity Pillar (Consistency Models)
When managing state, we must choose our position on the CAP theorem spectrum.
| Model | Latency | Complexity | Use Case |
|---|---|---|---|
| Strong Consistency | High | High | Financial Ledgers, Inventory Management |
| Eventual Consistency | Low | Medium | Social Media Feeds, Like Counts |
| Monotonic Reads | Medium | Medium | User Profile Updates |
3. Observability and "Day 2" Operations
Writing the code is only 10% of the lifecycle. The remaining 90% is spent monitoring and maintaining it.
- Tracing (OpenTelemetry): We use distributed tracing to map the request flow. This is critical when a P99 latency spike occurs in a mesh of 100+ microservices.
- Structured Logging: We avoid unstructured text. Every log line is a JSON object containing `correlationId`, `tenantId`, and `latencyMs` (see the sketch below).
- Custom Metrics: We export business-level metrics (e.g., "Orders processed per second") to Prometheus to set up intelligent alerting with PagerDuty.
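A hand-rolled illustration of a JSON log line carrying the fields named above. Real services would use a logging framework's JSON encoder (e.g., Logback with a logstash encoder) rather than string formatting:

```java
// Emits a machine-queryable JSON line instead of a raw string.
public final class StructuredLog {
    private StructuredLog() {}

    public static String entry(String correlationId, String tenantId,
                               long latencyMs) {
        return String.format(
                "{\"correlationId\":\"%s\",\"tenantId\":\"%s\",\"latencyMs\":%d}",
                correlationId, tenantId, latencyMs);
    }

    public static void main(String[] args) {
        System.out.println(entry("req-42", "tenant-7", 113));
    }
}
```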
4. Production Readiness Checklist for Staff Engineers
- Capacity Planning: Have we performed load testing to find the "Breaking Point" of the service?
- Security Hardening: Is all communication encrypted using mTLS (Mutual TLS)?
- Backpressure Propagation: Does the service correctly return HTTP 429 or 503 when its internal thread pools are saturated?
- Idempotency: Can the same request be retried 10 times without side effects? (Critical for Payment systems).
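For the idempotency item, a toy in-memory guard that caches one result per idempotency key; a production system would back this with Redis or a database unique constraint plus a TTL on stored results:

```java
// Runs an action at most once per key; retries get the cached result.
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class IdempotencyGuard<R> {
    private final ConcurrentHashMap<String, R> results = new ConcurrentHashMap<>();

    /** Replayed requests with the same key return the original result. */
    public R executeOnce(String idempotencyKey, Supplier<R> action) {
        return results.computeIfAbsent(idempotencyKey, key -> action.get());
    }
}
```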
Critical Interview Reflection
When an interviewer asks "How would you improve this?", they are looking for your ability to identify Bottlenecks. Focus on the network I/O, the database locking strategy, or the memory allocation patterns of the JVM. Explain the trade-offs between "Throughput" and "Latency." A Staff Engineer knows that you can never have both at their theoretical maximums.
Optimization Summary:
- Reduce Context Switching: Use non-blocking I/O (Netty/Project Loom).
- Minimize GC Pressure: Prefer primitive specialized collections over standard Generics.
- Data Sharding: Use Consistent Hashing to avoid "Hot Shards."
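To illustrate the last bullet, a compact consistent-hash ring with virtual nodes. `ConsistentHashRing` and the MD5-based hash are illustrative choices, not a prescribed design:

```java
// Consistent hashing: shards own many virtual points on a ring, and a key
// routes to the first shard at or after its ring position.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.TreeMap;

public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    /** Each shard owns many points on the ring to spread load evenly. */
    public void addShard(String shard) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(shard + "#" + i), shard);
        }
    }

    /** Routes a key clockwise to the nearest shard point. */
    public String route(String key) {
        Map.Entry<Long, String> entry = ring.ceilingEntry(hash(key));
        return entry == null ? ring.firstEntry().getValue() : entry.getValue();
    }

    private static long hash(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF); // fold 8 bytes into a long
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("MD5 unavailable", e);
        }
    }
}
```

With virtual nodes, adding or removing a shard only remaps the keys adjacent to its points on the ring, and the extra points smooth out load so no single shard runs hot.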
Technical Trade-offs: Messaging Systems
| Pattern | Ordering | Durability | Throughput | Complexity |
|---|---|---|---|---|
| Log-based (Kafka) | Strict (per partition) | High | Very High | High |
| Memory-based (Redis Pub/Sub) | None | Low | High | Very Low |
| Push-based (RabbitMQ) | Per-queue FIFO | Medium | Medium | Medium |
Key Takeaways
- The Logic: You define a "Shard Key" (e.g., `user_id`). When the application runs `SELECT * FROM users WHERE user_id = 123`, the proxy calculates `shard = hash(123) % N` and routes the query to the correct physical server.
- Cross-Shard Queries: If a query doesn't include the shard key, the proxy must "Scatter-Gather": sending the query to all shards and merging the results.
- The Problem: 10,000 application containers each opening 10 connections = 100,000 connections, more than a single MySQL server can serve. The proxy's multiplexed connection pool is what makes this tractable.
Read Next
- HyperLogLog at Scale: Billion-Cardinality Estimation
- gRPC vs REST: A Decision-Maker's Guide for Backend Architecture
- System Design: Building a Secrets Management Platform
Verbal Interview Script
Interviewer: "How would you ensure high availability and fault tolerance for this specific architecture?"
Candidate: "To achieve 'Five Nines' (99.999%) availability, we must eliminate all Single Points of Failure (SPOF). I would deploy the API Gateway and stateless microservices across multiple Availability Zones (AZs) behind an active-active load balancer. For the data layer, I would use asynchronous replication to a read-replica in a different region for disaster recovery. Furthermore, it's not enough to just deploy redundantly; we must protect the system from cascading failures. I would implement strict timeouts, retry mechanisms with exponential backoff and jitter, and Circuit Breakers (using a library like Resilience4j) on all synchronous network calls between microservices."