Day207 - Leetcode: Python 53 & SQL 185 & DL Review
Python 53: Maximum Subarray / SQL 185: Department Top Three Sales / DL Review: RNNs, LSTM Networks & Gradient Vanishing & Exploding

🟩 Python Review
53. Maximum Subarray (M)
- Given an integer array nums, find the subarray with the largest sum, and return its sum.
Solution
Given an integer array nums, find the contiguous subarray (containing at least one number) that has the largest sum, and return its sum. For example:
- nums = [-2,1,-3,4,-1,2,1,-5,4]
- Output = 6
- Explanation: [4,-1,2,1] has the largest sum = 6
Key Concepts
- Brute Force could work, but it is very slow: I could check all subarrays (O(n^2) subarrays) and compute their sums (O(n) each). That’s O(n^3) total, or O(n^2) with prefix sums (see the sketch after this list). It would work, but it’s not an efficient solution.
- A subarray’s maximum sum depends on a choice at each element:
- Extend the previous subarray by adding this element.
- Start a new subarray at this element.
If the previous sum is negative, carrying it forward only hurts. So at each step:
cur = max(nums[i], cur + nums[i])
- Kadane’s Algorithm: This hybrid of greedy and dynamic programming is the optimal solution.
- cur (current maximum ending here): the best subarray sum ending at index i.
- best (global maximum): the largest subarray sum seen so far.
Recurrence:
```
cur[i] = max(nums[i], nums[i] + cur[i-1])
best = max(best, cur[i])
```
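For contrast, here is a short sketch of the O(n^2) brute force flagged above (the helper name max_subarray_bruteforce is mine, added for illustration); the running sum plays the role of the prefix sums:

```python
def max_subarray_bruteforce(nums):
    best = nums[0]
    for i in range(len(nums)):           # every start index: O(n) choices
        running = 0
        for j in range(i, len(nums)):    # extend the end index: O(n) per start
            running += nums[j]           # running sum = prefix-sum difference
            best = max(best, running)
    return best

print(max_subarray_bruteforce([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```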
Solution Code:
```python
from typing import List

class Solution:
    def maxSubArray(self, nums: List[int]) -> int:
        cur = best = nums[0]             # initialize with the first element
        for x in nums[1:]:
            cur = max(x, cur + x)        # extend previous subarray or start new
            best = max(best, cur)        # update global max
        return best
```
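A quick check against the example above:

```python
print(Solution().maxSubArray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```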
🟨 SQL Review
185. Department Top Three Sales
A company’s executives are interested in seeing who earns the most money in each of the company’s departments. A high earner in a department is an employee who has a salary in the top three unique salaries for that department.
Write a solution to identify employees who are high earners in each department.
Return the result table in any order.
My Answer
```sql
-- the top three unique salaries for that department
-- who are high earners
-- in any order
-- How can we deal with the dupes?
select b.name as Department, a.name as Employee, a.salary as Salary
from Employee as a
join Department as b on a.departmentId = b.id
-- group by ?
order by Salary desc
limit 3;
```
There were three main concerns regarding this:
- GROUP BY misuse: In PostgreSQL, every non-aggregated column in `SELECT` must appear in `GROUP BY`. I tried to group only by b.name, which was incorrect; the engine doesn’t know which employee/salary to select for that department, causing an error.
- Not “top 3 per department”: Even if the error were fixed, GROUP BY ... LIMIT 3 would return just three rows total across all departments, not the top 3 unique salaries within each department.
- Doesn’t handle “unique salaries” (ties): I need to treat equal salaries as the same rank. GROUP BY here won’t give me “top unique salary tiers.”
Solution 1 – Using Correlated Subquery
```sql
SELECT d.name AS Department, e.name AS Employee, e.salary AS Salary
FROM Employee e
JOIN Department d ON d.id = e.departmentId
WHERE (
    SELECT COUNT(DISTINCT e2.salary)
    FROM Employee e2
    WHERE e2.departmentId = e.departmentId
      AND e2.salary >= e.salary
) <= 3
ORDER BY Department, Salary DESC, Employee;
```
- For each employee e, the subquery counts the number of distinct salaries in the same department that are greater than or equal to e.salary.
- If that count is ≤ 3, then e.salary is in one of the top 3 unique salary tiers of that department, so e is a “high earner.”
- Using COUNT(DISTINCT ...) addresses duplicates (same salary values): tied salaries share the same rank/tier and are all included.
Solution 2 – Using Window Function
```sql
SELECT Department, Employee, Salary
FROM (
    SELECT d.name AS Department,
           e.name AS Employee,
           e.salary AS Salary,
           DENSE_RANK() OVER (
               PARTITION BY e.departmentId
               ORDER BY e.salary DESC
           ) AS rnk
    FROM Employee e
    JOIN Department d ON e.departmentId = d.id
) ranked
WHERE rnk <= 3;
```
- PARTITION BY e.departmentId → restart the ranking for each department.
- ORDER BY e.salary DESC → sort employees by salary within that department.
- DENSE_RANK() → assigns ranks: 1 for the highest salary, 2 for the second-highest unique salary, etc. (ties share the same rank).
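As a side note, the same dense-rank logic can be sketched in pandas; the tiny emp/dept frames below are made-up sample data, not the LeetCode tables:

```python
import pandas as pd

# Made-up sample frames standing in for the Employee/Department tables
emp = pd.DataFrame({"name": ["Joe", "Henry", "Sam", "Max"],
                    "salary": [85000, 80000, 60000, 90000],
                    "departmentId": [1, 2, 2, 1]})
dept = pd.DataFrame({"id": [1, 2], "name": ["IT", "Sales"]})

df = emp.merge(dept, left_on="departmentId", right_on="id",
               suffixes=("", "_dept"))                 # dept name -> name_dept
df["rnk"] = df.groupby("departmentId")["salary"].rank(method="dense",
                                                      ascending=False)
print(df.loc[df["rnk"] <= 3, ["name_dept", "name", "salary"]])
```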
🟦 DL Review
1. Recurrent Neural Networks (RNNs)
RNNs are neural networks designed to handle sequential data by maintaining a hidden state that captures information from previous time steps. They process inputs one element at a time, updating the hidden state recursively.
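A minimal NumPy sketch of that recursive update (the dimensions and initialization are made-up illustration, not a reference implementation):

```python
import numpy as np

# h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b), one update per time step
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5

W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                           # initial hidden state
for x_t in rng.normal(size=(seq_len, input_dim)):  # one element at a time
    h = np.tanh(W_xh @ x_t + W_hh @ h + b)         # recursive state update
print(h.shape)  # (8,) -- the state summarizes the sequence seen so far
```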
Why It Matters
- Good at modeling time series, speech, and text where order matters.
- However, they suffer from vanishing/exploding gradient problems, which limit their ability to learn long-range dependencies.
“RNNs are designed for sequential data, maintaining a hidden state to capture temporal dependencies. They work well for short sequences but struggle with long-term dependencies due to vanishing gradients.”
MLOps Angle
- RNNs are often replaced by LSTMs, GRUs, and Transformers, but they remain in use in resource-constrained applications. Deployment requires monitoring latency, as sequential processing is slower than in parallelizable architectures such as transformers.
2. Long Short-Term Memory (LSTM) Networks
LSTMs are a type of RNN that uses gates (input, forget, output) and a cell state to regulate information flow. This helps preserve information across longer sequences.
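A minimal NumPy sketch of a single LSTM step (the stacked-gate layout and shapes are my illustrative choices, not a reference implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b, H):
    z = W @ x + U @ h + b          # all four gate pre-activations, stacked
    i = sigmoid(z[0:H])            # input gate: how much new info to write
    f = sigmoid(z[H:2*H])          # forget gate: how much of c to keep
    o = sigmoid(z[2*H:3*H])        # output gate: how much of c to expose
    g = np.tanh(z[3*H:4*H])        # candidate cell update
    c = f * c + i * g              # cell state: additive path eases gradient flow
    h = o * np.tanh(c)             # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 8
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):  # run a short sequence through the cell
    h, c = lstm_step(x, h, c, W, U, b, H)
print(h.shape, c.shape)            # (8,) (8,)
```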
Why It Matters
- Overcomes vanishing gradient issues.
- Can model long-term dependencies in sequences (e.g., text, stock predictions, speech).
- They were the state of the art before transformers.
“LSTM extends RNNs by adding gates and a cell state to control information flow, enabling them to capture long-term dependencies effectively. They became a core architecture in NLP and time series before transformers.”
MLOps Angle
- In production, LSTMs are still used in real-time sequence tasks (finance, IoT, sensor analysis). However, careful optimization is needed for low-latency inference because they are heavier than GRUs.
3. Gradient Vanishing & Exploding
The vanishing gradient problem occurs when gradients diminish during backpropagation, preventing effective learning in earlier layers. In contrast, the exploding gradient problem occurs when gradients become excessively large, resulting in unstable updates.
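A toy illustration of the mechanism: backpropagating through T steps of the scalar recurrence h_t = w * h_{t-1} multiplies the gradient by one factor of w per step (the numbers are just this toy, not a real network):

```python
T = 50
for w in (0.5, 1.5):       # |w| < 1 shrinks the gradient, |w| > 1 grows it
    grad = 1.0
    for _ in range(T):
        grad *= w          # one chain-rule factor per time step
    print(f"w={w}: gradient after {T} steps = {grad:.3e}")
# w=0.5 -> ~8.9e-16 (vanishing); w=1.5 -> ~6.4e+08 (exploding)
```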
Why It Matters
- A fundamental limitation in deep RNNs and very deep networks.
- Directly affects the ability to train effectively.
“Vanishing gradients occur when updates become too small, slowing training and preventing early layers from learning. Exploding gradients occur when updates become too large, leading to instability. Solutions include better initialization, ReLU activations, residual connections, and gradient clipping.”
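As a sketch of the last of those mitigations, gradient clipping with PyTorch’s clip_grad_norm_ utility (the tiny model and batch are made up for illustration):

```python
import torch

model = torch.nn.Linear(10, 1)                      # made-up tiny model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(32, 10), torch.randn(32, 1)      # made-up batch
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap grad norm
opt.step()
```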
MLOps Angle
- In deployment, monitoring training stability is key. Exploding gradients often appear as NaNs in training logs, and vanishing gradients manifest as stagnant loss. Proper monitoring, logging, and auto-restarts are critical in production ML pipelines.
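A minimal sketch of that kind of guard, assuming a hypothetical check_step helper wired into a training loop:

```python
import logging
import math

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("train")

def check_step(step: int, loss: float) -> bool:
    # Return False (and log) when loss is NaN/inf, a classic exploding-gradient symptom
    if not math.isfinite(loss):
        log.error("non-finite loss at step %d; consider restarting from a checkpoint", step)
        return False
    return True

assert check_step(10, 0.42)                # healthy step
assert not check_step(11, float("nan"))    # triggers the alert
```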