Lesson 33 of 70 6 min

MANG Problem #6: Sliding Window Maximum (Hard)

Learn how to find the maximum in every sliding window of size K in O(n) time using a Monotonic Deque. Master the concept of "Useless Candidates" in algorithms.


1. Problem Statement


You are given an array of integers nums and a sliding window of size k that moves from the very left of the array to the very right. You can only see the k numbers in the window. Each time, the sliding window moves right by one position.

Return the max sliding window.

Input: nums = [1,3,-1,-3,5,3,6,7], k = 3
Output: [3,3,5,5,6,7]
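Before reaching for the deque, it is worth writing the brute-force baseline the optimal solution will beat. A minimal O(N·K) sketch (the class name is ours, chosen for illustration):

```java
import java.util.Arrays;

class BruteForceSlidingMax {
    // Scan each window of size k and take its max: O(n * k) time, O(1) extra space.
    static int[] maxSlidingWindow(int[] nums, int k) {
        int n = nums.length;
        int[] result = new int[n - k + 1];
        for (int start = 0; start + k <= n; start++) {
            int max = nums[start];
            for (int j = start + 1; j < start + k; j++) {
                max = Math.max(max, nums[j]);
            }
            result[start] = max;
        }
        return result;
    }

    public static void main(String[] args) {
        int[] out = maxSlidingWindow(new int[]{1, 3, -1, -3, 5, 3, 6, 7}, 3);
        System.out.println(Arrays.toString(out)); // [3, 3, 5, 5, 6, 7]
    }
}
```

Every element is rescanned up to k times, which is exactly the redundancy the monotonic deque eliminates.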

2. The Mental Model: The "Useless Candidates" Intuition

Imagine a talent competition. A new candidate (current element) arrives.

  • If the new candidate is stronger (larger) than previous candidates in the window, those previous candidates can never be the maximum again: they will expire from the window sooner than the new candidate, yet the new candidate is already better.
  • Rule: If nums[new] > nums[old], we can permanently discard old.

This allows us to maintain a Monotonic Deque (double-ended queue) whose elements are kept in non-increasing order (the code below pops only strictly smaller values, so ties with the maximum are retained). The "winner" is always at the front.
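To make the discard rule concrete, here is a small sketch tracing the deque as the first elements of the example array arrive (the push helper and class name are illustrative, not part of the final solution):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class DequeTrace {
    // Feed values left to right; weaker candidates are popped from the back,
    // so the deque stays in non-increasing order with the "winner" in front.
    static Deque<Integer> push(Deque<Integer> deque, int x) {
        while (!deque.isEmpty() && deque.peekLast() < x) {
            deque.pollLast(); // a "useless candidate" is discarded forever
        }
        deque.offerLast(x);
        return deque;
    }

    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<>();
        for (int x : new int[]{1, 3, -1, -3, 5}) {
            push(deque, x);
            System.out.println("after " + x + ": " + deque);
        }
        // after 1: [1]
        // after 3: [3]        <- 1 was discarded
        // after -1: [3, -1]
        // after -3: [3, -1, -3]
        // after 5: [5]        <- 3, -1, -3 all discarded at once
    }
}
```

Note that each value is pushed once and popped at most once, which is the whole O(N) argument.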

3. Visual Execution (Monotonic Deque)

graph LR
    subgraph "Monotonic Deque (Indices)"
        Front[Front: Max Candidate] --- D1[Candidate 2] --- D2[Candidate 3] --- Back[Back: Smallest]
    end
    
    New[New Element] --> Check{Larger than Back?}
    Check -- Yes --> PopBack[Pop Back] --> New
    Check -- No --> PushBack[Push Back to Deque]

4. Java Implementation

// Requires: import java.util.ArrayDeque; import java.util.Deque;
public int[] maxSlidingWindow(int[] nums, int k) {
    if (nums == null || nums.length == 0 || k <= 0 || k > nums.length) return new int[0];
    int n = nums.length;
    int[] result = new int[n - k + 1];
    int ri = 0;
    
    // Deque stores indices of elements
    Deque<Integer> deque = new ArrayDeque<>();
    
    for (int i = 0; i < n; i++) {
        // 1. Remove indices of elements smaller than current from the back
        // They are no longer candidates for the maximum
        while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) {
            deque.pollLast();
        }
        
        // 2. Add current index to the back
        deque.offerLast(i);
        
        // 3. Remove the front index if it has fallen out of the window
        if (deque.peekFirst() == i - k) {
            deque.pollFirst();
        }
        
        // 4. Once we have a full window (size k), the front is our maximum
        if (i >= k - 1) {
            result[ri++] = nums[deque.peekFirst()];
        }
    }
    return result;
}
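A quick harness to sanity-check the solution, with the method repeated compactly so the snippet compiles on its own (the wrapper class name is ours):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

class SlidingWindowDemo {
    // Compact copy of the monotonic-deque solution above.
    static int[] maxSlidingWindow(int[] nums, int k) {
        int n = nums.length;
        int[] result = new int[n - k + 1];
        Deque<Integer> deque = new ArrayDeque<>(); // stores indices
        for (int i = 0, ri = 0; i < n; i++) {
            while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) deque.pollLast();
            deque.offerLast(i);
            if (deque.peekFirst() == i - k) deque.pollFirst();
            if (i >= k - 1) result[ri++] = nums[deque.peekFirst()];
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(maxSlidingWindow(new int[]{1, 3, -1, -3, 5, 3, 6, 7}, 3))); // [3, 3, 5, 5, 6, 7]
        // Edge cases: k == 1 echoes the array; k == n returns a single global max.
        System.out.println(Arrays.toString(maxSlidingWindow(new int[]{4, 2, 12}, 1))); // [4, 2, 12]
        System.out.println(Arrays.toString(maxSlidingWindow(new int[]{4, 2, 12}, 3))); // [12]
    }
}
```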

5. Verbal Interview Script (Staff Tier)

Interviewer: "Why is a Deque better than a Priority Queue for this problem?"

You: "While a Priority Queue can find the maximum, removing an arbitrary element (the one that fell out of the window) takes $O(K)$ time in most implementations, leading to an overall $O(N \cdot K)$ or $O(N \log N)$ complexity. The Monotonic Deque optimization leverages the contiguous nature of the window. By maintaining the deque in decreasing order, we ensure that every element is added and removed at most once, so the total work amortizes to $O(N)$. This pattern is the gold standard for 'sliding window extrema' problems."
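For comparison, here is a sketch of the heap-based alternative mentioned in the answer, using lazy deletion of stale indices (the class name is ours; note the heap can grow to O(N) when stale entries are never at the top):

```java
import java.util.Arrays;
import java.util.PriorityQueue;

class HeapSlidingMax {
    // Max-heap of {value, index} pairs; stale maxima are discarded lazily.
    static int[] maxSlidingWindow(int[] nums, int k) {
        int n = nums.length;
        int[] result = new int[n - k + 1];
        PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> Integer.compare(b[0], a[0]));
        for (int i = 0; i < n; i++) {
            heap.offer(new int[]{nums[i], i}); // O(log size)
            if (i >= k - 1) {
                // Pop maxima whose index has slid out of the window [i-k+1, i].
                while (heap.peek()[1] <= i - k) heap.poll();
                result[i - k + 1] = heap.peek()[0];
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(maxSlidingWindow(new int[]{1, 3, -1, -3, 5, 3, 6, 7}, 3))); // [3, 3, 5, 5, 6, 7]
    }
}
```

This is the O(N log N) baseline the interviewer expects you to improve on, not the final answer.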

6. Real-World Application: Stream Processing

This algorithm is the foundation of Windowed Aggregates in stream processing engines like Apache Flink or Kafka Streams.

  1. Threshold Monitoring: Alerting when a sensor reading stays above a certain level for an entire 5-minute window (this is the sliding-window minimum, i.e., the same deque technique with the comparison mirrored).
  2. Trading Systems: Calculating the "High-Water Mark" for a stock price over a moving time interval.
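A hypothetical online variant of the same deque, shaped like a stream-processing operator: readings arrive one at a time and the current window maximum is available after each one. The class and method names are illustrative, not from any real engine's API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Online sliding maximum over the last k readings of an unbounded stream.
class StreamingWindowMax {
    private final int k;
    private long count = 0;                               // arrival index of the next reading
    private final Deque<long[]> deque = new ArrayDeque<>(); // {value, arrival index}

    StreamingWindowMax(int k) { this.k = k; }

    // Ingest one reading; returns the max over the last min(count, k) readings.
    long add(long value) {
        while (!deque.isEmpty() && deque.peekLast()[0] <= value) deque.pollLast();
        deque.offerLast(new long[]{value, count});
        if (deque.peekFirst()[1] <= count - k) deque.pollFirst(); // expire the old front
        count++;
        return deque.peekFirst()[0];
    }

    public static void main(String[] args) {
        StreamingWindowMax w = new StreamingWindowMax(3);
        for (long v : new long[]{1, 3, -1, -3, 5}) {
            System.out.println(w.add(v)); // 1, 3, 3, 3, 5
        }
    }
}
```

Real engines add watermarks, event-time semantics, and state checkpointing on top, but the core aggregate is this deque.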

7. Performance Nuances (Staff Level)

  1. ArrayDeque vs. LinkedList: In Java, always use ArrayDeque for this problem. LinkedList creates a new Node object for every insertion, creating massive GC pressure. ArrayDeque uses a circular array buffer and is significantly faster.
  2. Primitive Specialization: If performance is absolutely critical (e.g., HFT), I would use a primitive int array to simulate a deque, avoiding the overhead of the Deque<Integer> object and autoboxing.
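A sketch of that primitive specialization: since the deque only ever holds indices in [0, n), a plain int[] with head/tail pointers replaces ArrayDeque<Integer> and removes all boxing (the class name is ours):

```java
import java.util.Arrays;

class PrimitiveDequeSlidingMax {
    // Same algorithm as the ArrayDeque version, but zero object allocation
    // beyond the two int arrays: no Integer boxing, no GC pressure.
    static int[] maxSlidingWindow(int[] nums, int k) {
        int n = nums.length;
        int[] result = new int[n - k + 1];
        int[] deque = new int[n];    // stores indices; live region is deque[head..tail]
        int head = 0, tail = -1;
        for (int i = 0; i < n; i++) {
            while (tail >= head && nums[deque[tail]] < nums[i]) tail--; // drop weaker candidates
            deque[++tail] = i;
            if (deque[head] == i - k) head++;                           // expire the old front
            if (i >= k - 1) result[i - k + 1] = nums[deque[head]];
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(maxSlidingWindow(new int[]{1, 3, -1, -3, 5, 3, 6, 7}, 3))); // [3, 3, 5, 5, 6, 7]
    }
}
```

Each index is written once, so tail never exceeds n - 1 and the int[] of size n is always enough.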

8. Summary Comparison Table

| Approach        | Time Complexity | Space Complexity | Recommendation              |
|-----------------|-----------------|------------------|-----------------------------|
| Brute Force     | $O(N \cdot K)$  | $O(1)$           | Never use.                  |
| Priority Queue  | $O(N \log K)$   | $O(K)$           | Good for non-contiguous.    |
| Monotonic Deque | $O(N)$          | $O(K)$           | Optimal for contiguous.     |
| Segment Tree    | $O(N \log N)$   | $O(N)$           | Use for dynamic updates.    |

9. Staff-Level Interview Follow-Ups

Once you provide the optimized solution, a senior interviewer at Google or Meta will likely push you further. Here is how to handle the most common follow-ups:

Follow-up 1: "How does this scale to a Distributed System?"

If the input data is too large to fit on a single machine (e.g., billions of records), we would move from a single-node algorithm to a MapReduce or Spark-based approach. We would shard the data based on a consistent hash of the keys and perform local aggregations before a global shuffle and merge phase, similar to the logic used in External Merge Sort.

Follow-up 2: "What are the Concurrency implications?"

In a multi-threaded Java environment, we must ensure that any shared state (here, the deque and the output array) is thread-safe. While we could use synchronized blocks, a higher-performance approach would be to use AtomicVariables, or ConcurrentHashMap where a shared map is involved. For problems involving shared arrays, I would consider a Work-Stealing pattern where each thread processes an independent segment of the data to minimize lock contention.
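A minimal sketch of that segment-partitioning idea: each thread owns a disjoint range of window start positions, so output slots never conflict and no locking is needed. The inner loop here is the simple O(K) scan to keep the sketch short; each thread could run its own monotonic deque over its slice instead. All names are illustrative:

```java
import java.util.Arrays;

class ParallelSlidingMax {
    // Partition the n-k+1 window start positions among threads. Input ranges
    // overlap by k-1 elements, but each result slot has exactly one writer,
    // and Thread.join() publishes the writes safely before we read them.
    static int[] maxSlidingWindow(int[] nums, int k, int threads) throws InterruptedException {
        int windows = nums.length - k + 1;
        int[] result = new int[windows];
        Thread[] pool = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            int from = (int) ((long) windows * t / threads);
            int to = (int) ((long) windows * (t + 1) / threads);
            pool[t] = new Thread(() -> {
                for (int s = from; s < to; s++) {
                    int max = nums[s];
                    for (int j = s + 1; j < s + k; j++) max = Math.max(max, nums[j]);
                    result[s] = max;
                }
            });
            pool[t].start();
        }
        for (Thread th : pool) th.join();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(Arrays.toString(maxSlidingWindow(new int[]{1, 3, -1, -3, 5, 3, 6, 7}, 3, 2))); // [3, 3, 5, 5, 6, 7]
    }
}
```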

10. Performance Nuances (The Java Perspective)

  1. Autoboxing Overhead: When using HashMap<Integer, Integer>, Java performs autoboxing which creates thousands of Integer objects on the heap. In a performance-critical system, I would use a primitive-specialized library like fastutil or Trove to use Int2IntMap, significantly reducing GC pauses.
  2. Recursion Depth: Recursive solutions are elegant but risky for deep inputs. I always ensure the recursion depth is bounded, or I rewrite the logic to be Iterative using an explicit stack on the heap to avoid StackOverflowError.

Key Takeaways

  • If the new candidate is stronger (larger) than previous candidates in the window, those previous candidates can never be the maximum again: they will expire from the window sooner than the new candidate, yet the new candidate is already better.
  • Rule: If nums[new] > nums[old], we can permanently discard old.
