This repository categorizes and labels frequently asked Leetcode coding questions into major categories such as linked list, binary search, two pointers, and backtracking, to help interviewees and developers practice common programming interview questions.
Two great online resources have been leveraged to compile the list: Huahua and CyC2018.
Classification of Leetcode Problems
ID | Topic | Number of problems |
---|---|---|
01 | Linked List | 12 |
02 | Two pointers | 18 |
03 | Binary Search | 18 |
04 | Binary Search Tree | 15 |
05 | Stack and Queue | 14 |
06 | Hash table and set | 6 |
07 | Tree | 35 |
08 | Divide and Conquer | 6 |
09 | Graph | 25 |
10 | Greedy Algorithm | 10 |
11 | Backtracking | 22 |
12 | Dynamic Programming | 29 |
13 | Miscellaneous (string, array, math, bit manipulation) | 35 |
14 | Advanced Topics (Trie, Topological sorting, Union find) | 14 |
| Total | 259 |
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
206 | Reverse Linked List | Easy | | Iteratively is straightforward, but recursively is a little bit tricky. |
21 | Merge Two Sorted Lists | Easy | 23 | Recursion makes the solution much cleaner for Problem 21. Use a heap for Problem 23. |
160 | Intersection of Two Linked Lists | Easy | | Connect the end of one list to the head of the other. |
19 | Remove Nth Node From End of List | Medium | | Use two pointers and let one move N times first. |
24 | Swap Nodes in Pairs | Medium | | Use recursion and divide & conquer. |
2 | Add Two Numbers | Medium | 445 | Iterate, add digits, and build a new linked list. |
725 | Split Linked List in Parts | Medium | | Find the length n, and use `n//k` and `n%k` to determine the size of each part. |
328 | Odd Even Linked List | Medium | | Use odd and even pointers, and a node to save the head of the even list. |
148 | Sort List | Medium | | Slow and fast pointers + merge sort. |
234 | Palindrome Linked List | Medium | | Use slow and fast pointers to cut the list into halves. Reverse the second half and compare it with the first half. |
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
167 | Two Sum II - Input array is sorted | Easy | 15, 16 | Head and tail pointers move toward the middle. |
633 | Sum of Square Numbers | Easy | | Head and tail (square root) pointers move toward the middle. |
345 | Reverse Vowels of a String | Easy | | Head and tail pointers move toward the middle. |
917 | Reverse Only Letters | Easy | | Head and tail pointers move toward the middle. |
125 | Valid Palindrome | Easy | 680 | Head and tail pointers move toward the middle. |
88 | Merge Sorted Array | Easy | | Two pointers initialized at the ends. |
977 | Squares of a Sorted Array | Easy | | Head and tail pointers move based on a certain condition. |
925 | Long Pressed Name | Easy | 56, 986 | Two pointers, one moves based on a certain condition. |
141 | Linked List Cycle | Medium | 142 | Slow and fast pointers. |
524 | Longest Word in Dictionary through Deleting | Medium | | Use two pointers to check if one string is a subsequence of another. |
11 | Container With Most Water | Medium | 42 | Head and tail pointers move toward the middle. |
Please watch this video for an introductory tutorial and this video (part 1, part 2) for a great summary of binary search. It is extremely important to implement your own versions of the binary search algorithms, and to map your versions to `bisect.bisect()`/`bisect.bisect_right()` and `bisect.bisect_left()`.
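As a sanity check, hand-rolled versions can be lined up against the `bisect` module (a minimal sketch; the function names below are mine, not from the list):

```python
import bisect

def bisect_left_manual(nums, target):
    """Leftmost insertion index: first index i with nums[i] >= target."""
    left, right = 0, len(nums)
    while left < right:
        mid = (left + right) // 2
        if nums[mid] < target:
            left = mid + 1
        else:
            right = mid
    return left

def bisect_right_manual(nums, target):
    """Rightmost insertion index: first index i with nums[i] > target."""
    left, right = 0, len(nums)
    while left < right:
        mid = (left + right) // 2
        if nums[mid] <= target:
            left = mid + 1
        else:
            right = mid
    return left

nums = [1, 2, 2, 2, 5]
assert bisect_left_manual(nums, 2) == bisect.bisect_left(nums, 2) == 1
assert bisect_right_manual(nums, 2) == bisect.bisect_right(nums, 2) == 4
```

Note the only difference is `<` versus `<=` against the pivot; with duplicates present, the two functions bracket the run of equal elements.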
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
69 | Sqrt(x) | Easy | | |
35 | Search Insert Position | Easy | 704 | `bisect.bisect_left()` |
278 | First Bad Version | Easy | 981 | |
744 | Find Smallest Letter Greater Than Target | Easy | | Be careful about the 'wrap around' constraint and duplicates. Use the `bisect.bisect()` function. |
875 | Koko Eating Bananas | Medium | 1011 | Search by checking validity. |
74 | Search a 2D Matrix | Medium | | Search the row and then the column, or treat the 2D matrix as a 1D array. |
34 | Find First and Last Position of Element in Sorted Array | Medium | | Use both `bisect.bisect_left()` and `bisect.bisect()`. |
33 | Search in Rotated Sorted Array | Medium | 81, 153, 154 | `left += 1` and `right -= 1` if `nums[left] == nums[mid] == nums[right]`. |
540 | Single Element in a Sorted Array | Medium | | This problem is tricky. If `mid` is even and `nums[mid] == nums[mid+1]`, then `left = mid+2`. If `mid` is odd, then `mid -= 1` and check the condition. |
378 | Kth Smallest Element in a Sorted Matrix | Medium | | Nested binary searches. The outer one searches for the target, and the inner one calculates how many elements in each row are less than or equal to the target candidate. Another idea is to use a min heap. |
719 | Find K-th Smallest Pair Distance | Hard | | Sort the array first, and then search by checking validity. |
4 | Median of Two Sorted Arrays | Hard | | Binary search on the shorter list, based on comparing `nums_short[mid1]` with `nums_long[mid2-1]`. |
A special characteristic of BST is that its inorder traversal yields a sorted array. Note that the definitions of BST might be different for different problems. Moreover, you can recover a binary search tree from its preorder traversal.
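The inorder property can be verified in a few lines (a minimal sketch, assuming a standard `TreeNode` class):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(root):
    """Inorder traversal (left, node, right) of a BST yields sorted values."""
    return inorder(root.left) + [root.val] + inorder(root.right) if root else []

#      4
#     / \
#    2   6
#   / \
#  1   3
root = TreeNode(4, TreeNode(2, TreeNode(1), TreeNode(3)), TreeNode(6))
assert inorder(root) == [1, 2, 3, 4, 6]
```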
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
700 | Search in a Binary Search Tree | Easy | 701 | Recursion; search half of the tree each time. |
669 | Trim a Binary Search Tree | Easy | | Recursion; check the current node's value against the lower and upper bounds. |
653 | Two Sum IV - Input is a BST | Easy | | Use inorder traversal to get a sorted array, then use two pointers. |
235 | Lowest Common Ancestor of a Binary Search Tree | Easy | 236 | Recursion; compare the current node's value with the values of the two given nodes. |
98 | Validate Binary Search Tree | Medium | 530 | Check each node's value against lower and upper bounds. Another way is to leverage inorder traversal. |
230 | Kth Smallest Element in a BST | Medium | | Inorder traversal. |
108 | Convert Sorted Array to Binary Search Tree | Easy | 109 | Divide the array in half and use recursion. |
501 | Find Mode in Binary Search Tree | Easy | | Be careful about the BST definition, and use inorder traversal. |
450 | Delete Node in a BST | Medium | | The deleted node could have no child, one child, or two children (the trickiest case). |
1008 | Construct Binary Search Tree from Preorder Traversal | Medium | | Use recursion, and for each node, update the lower and upper bounds for its children. |
99 | Recover Binary Search Tree | Hard | | Inorder traversal to find the two nodes, then swap them. |
The main characteristics of a stack and a queue are LIFO (Last In First Out) and FIFO (First In First Out) respectively. Stacks and queues are frequently used in tree and graph traversal problems, leveraging a stack for DFS and a queue for BFS. Such problems are not repeated in this list.
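The two access orders can be sketched in a few lines with Python's built-in `list` (as a stack) and `collections.deque` (as a queue):

```python
from collections import deque

# LIFO: the last item pushed is the first popped.
stack = []
for x in [1, 2, 3]:
    stack.append(x)
assert [stack.pop() for _ in range(3)] == [3, 2, 1]

# FIFO: the first item enqueued is the first dequeued.
queue = deque()
for x in [1, 2, 3]:
    queue.append(x)
assert [queue.popleft() for _ in range(3)] == [1, 2, 3]
```

`deque` is preferred over `list` for queues because `popleft()` is O(1), while `list.pop(0)` is O(n).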
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
155 | Min Stack | Easy | | Push a tuple `(x, minimum)` onto the stack. |
232 | Implement Queue using Stacks | Easy | | Use two stacks. Be careful about the amortized time complexity of each operation. |
1047 | Remove All Adjacent Duplicates In String | Easy | 1209 | Use a stack. |
20 | Valid Parentheses | Easy | 921, 1249 | Be careful about the types of parentheses. A dict may simplify the code. |
739 | Daily Temperatures | Medium | 402 | Iterate through the array, and pop until the current temperature is smaller than or equal to the temperature at the top of the stack. Otherwise, push. |
503 | Next Greater Element II | Medium | 496 | Iterate through the concatenated array, e.g., given a list `nums`, iterate through `nums+nums`. |
581 | Shortest Unsorted Continuous Subarray | Medium | | Use a monotone stack. |
239 | Sliding Window Maximum | Hard | 1438 | Use a monotone queue. |
The hash table/set is one of the most frequently asked data structures in coding interviews. When stuck, try a hash table.
Hash tables and hash sets are also very useful for time-space trade-offs, since they enable O(1)-time lookups.
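For example, the classic Two Sum (Problem 1) trades O(n) space for O(1) lookups with a one-pass hash map (a minimal sketch):

```python
def two_sum(nums, target):
    """Return indices of the two numbers that add up to target (Problem 1)."""
    seen = {}  # value -> index of values visited so far
    for i, num in enumerate(nums):
        if target - num in seen:      # O(1) lookup instead of a second scan
            return [seen[target - num], i]
        seen[num] = i
    return []

assert two_sum([2, 7, 11, 15], 9) == [0, 1]
```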
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
169 | Majority Element | Easy | | Hash table. |
1 | Two Sum | Easy | | Iterate through the array and check whether `target - nums[i]` is in the hash map. |
217 | Contains Duplicate | Easy | | Hash set. |
594 | Longest Harmonious Subsequence | Easy | | Iterate through the array and do `max(ans, count[num] + count[num+1])`. |
128 | Longest Consecutive Sequence | Hard/Medium | | Convert the array to a hash set. Iterate through the array and check whether the next element is in the set. |
560 | Subarray Sum Equals K | Medium | | Use cumulative sums and the Two Sum (Problem 1) idea. |
Most tree problems are traversal problems. Traversal algorithms, such as DFS (including pre-order, in-order, and post-order) and BFS (level-order traversal), are extremely important. Generally speaking, the time complexity of a DFS algorithm is O(n), where n is the number of nodes, and the space complexity is O(h), where h is the height of the tree, due to the implicit stack used in recursion. BFS problems generally have O(n) time and O(w) space complexity, where w is the maximum width of the tree. Moreover, it is very important to know concepts such as 'depth', 'height', 'level', 'complete tree', and 'perfect tree'.
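The two traversal patterns and their complexities can be sketched as follows (assuming a standard `TreeNode` class; `max_depth` follows Problem 104 and `level_order` follows Problem 102):

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_depth(root):
    """DFS: O(n) time, O(h) space on the implicit recursion stack."""
    if not root:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

def level_order(root):
    """BFS: O(n) time, O(w) space in the queue (one level at a time)."""
    levels, queue = [], deque([root] if root else [])
    while queue:
        levels.append([node.val for node in queue])
        queue = deque(child for node in queue
                      for child in (node.left, node.right) if child)
    return levels

root = TreeNode(1, TreeNode(2, TreeNode(4)), TreeNode(3))
assert max_depth(root) == 3
assert level_order(root) == [[1], [2, 3], [4]]
```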
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
104 | Maximum Depth of Binary Tree | Easy | 110, 111, 112, 226, 617, 101, 404, 814, 1325 | Simple recursion. |
508 | Most Frequent Subtree Sum | Medium | | Use a hash table to collect all subtree sums, then find the answer from the hash table. |
437 | Path Sum III | Easy/Medium | 113, 129, 257 | For Problem 437, use hash table + DFS to achieve linear time complexity, similar to the Two Sum problem. Check out similar Problem 560. |
124 | Binary Tree Maximum Path Sum | Hard/Medium | 543 | Compute the left and right sums, use both to calculate the path sum with the current node as the root, but return only one of them. |
572 | Subtree of Another Tree | Easy/Medium | 687 | Recursively check whether t is the same tree as the tree rooted at the current node, or whether t is a subtree of the current node's left or right child. Problem 687 can be solved in linear time by calculating the longest path using the left and right paths while returning only the max of the two. |
102 | Binary Tree Level Order Traversal | Medium | 637, 513, 429, 1302 | Level-order traversal. Use BFS or a `defaultdict(list)` to add each node to its corresponding level. |
144 | Binary Tree Preorder Traversal | Medium | 589, 145, 590, 94 | Recursively is trivial, but iteratively is quite involved. IMO, iterative in-order traversal is the hardest. |
297 | Serialize and Deserialize Binary Tree | Hard | 449, 106, 105 | For a general binary tree, we need a "null" indicator to represent null nodes in the serialized string, while a BST does not. The BST problem serializes with pre-order traversal and deserializes by recursively building trees while checking lower and upper bounds for subtrees. |
968 | Binary Tree Cameras | Hard | 979 | Bottom-up. DFS + greedy. |
The divide and conquer approach decomposes a big problem into smaller problems, solves the smaller problems recursively, and then combines their results. When stuck, think about the base case (the smallest problem) and see whether a solution can be built up recursively from there. The merge sort and quick sort algorithms both leverage the divide and conquer approach.
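Merge sort is a canonical instance of this pattern (a minimal sketch):

```python
def merge_sort(nums):
    """Divide & conquer: split, sort each half recursively, merge."""
    if len(nums) <= 1:  # base case: the smallest problem
        return nums
    mid = len(nums) // 2
    left, right = merge_sort(nums[:mid]), merge_sort(nums[mid:])
    # Combine step: merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
```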
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
912 | Sort an Array | Medium | 215, 148 | Merge/quick sort, quick select. |
241 | Different Ways to Add Parentheses | Medium | | Divide the string into two parts at each operator, recurse on each part, and combine the results. |
95 | Unique Binary Search Trees II | Medium | | Iterate through the possible values ("array"), use the current value as root and pivot to divide the "array" into two parts, and combine the recursion results of the two parts to enumerate all possible trees. |
315 | Count of Smaller Numbers After Self | Hard | | Merge sort. The key is gradually updating the answer in the merge step. |
Graph problems are a generalization of tree problems. DFS and BFS traversals are extremely important algorithms in graph problems. One key difference between graph and tree problems is that graph traversal requires a `visited` variable to remember nodes that have been visited before. Moreover, use DFS to solve connectivity problems and BFS to solve shortest path problems. The complexity of graph traversal is generally O(V+E) in time and O(V) in space, where V is the number of vertices/nodes and E is the number of edges.
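A minimal sketch of DFS with a `visited` set, here counting connected components over an adjacency list (the function name is mine; this is the pattern behind problems like Number of Islands):

```python
def count_components(graph):
    """Count connected components of an undirected graph.

    graph: adjacency list, node -> list of neighbors.
    O(V+E) time, O(V) space; `visited` prevents revisiting nodes.
    """
    visited, components = set(), 0

    def dfs(node):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                dfs(neighbor)

    for node in graph:
        if node not in visited:   # each unvisited node starts a new component
            components += 1
            dfs(node)
    return components

graph = {0: [1], 1: [0], 2: [3], 3: [2], 4: []}
assert count_components(graph) == 3
```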
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
997 | Find the Town Judge | Easy | | Use both in-degrees and out-degrees. |
133 | Clone Graph | Medium | 138 | Use a hash table and DFS. The hash table also serves as `visited`. |
200 | Number of Islands | Medium | 547, 695, 733, 841, 827, 1202, 130, 417 | A connected components problem; use DFS. The given grid may itself be usable as `visited`. |
1162 | As Far from Land as Possible | Medium | 433, 863, 1129, 1091, 279, 127, 399, 542, 934 | A shortest/longest path problem; use BFS. |
785 | Is Graph Bipartite? | Medium | 886, 1042 | Graph coloring. |
A greedy algorithm is a simple, heuristic algorithm that follows the problem-solving heuristic of making the locally optimal choice, which may, but does not necessarily, lead to a globally optimal solution. It builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Sometimes we solve a problem by instinct without even realizing it is a greedy algorithm problem.
The difference between greedy algorithms and dynamic programming (DP) is subtle. A detailed comparison is highlighted in this article. In general, a greedy algorithm is simpler and myopic, making heuristic decisions based on locally optimal choices. DP is generally harder, and makes a decision at each step considering both the current problem and the solutions to previously solved subproblems.
In practice, a problem may be solvable both ways; as pointed out in this article, "beneath every greedy algorithm, there is almost always a more cumbersome dynamic programming solution".
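As a concrete example, the greedy idea behind Non-overlapping Intervals (Problem 435) can be sketched as follows (the function name is mine):

```python
def erase_overlap_intervals(intervals):
    """Minimum number of intervals to remove so the rest don't overlap.

    Greedy: sort by end point and always keep the interval that frees
    up the timeline earliest (the locally optimal choice).
    """
    intervals.sort(key=lambda iv: iv[1])
    removed, prev_end = 0, float('-inf')
    for start, end in intervals:
        if start >= prev_end:   # no overlap: keep this interval
            prev_end = end
        else:                   # overlap: drop the later-ending one
            removed += 1
    return removed

assert erase_overlap_intervals([[1, 2], [2, 3], [3, 4], [1, 3]]) == 1
```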
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
455 | Assign Cookies | Easy | | Have kids and you will know 😂 |
605 | Can Place Flowers | Easy | | Pad each side of the flowerbed with `0`, and iteratively check whether we can place a flower via `flowerbed[i-1] == flowerbed[i] == flowerbed[i+1] == 0`. |
121 | Best Time to Buy and Sell Stock | Easy | 122 | Keep track of the min price. You might have solved this without realizing it is a greedy algorithm problem. Greedy algorithms allow you to follow your instinct. |
53 | Maximum Subarray | Easy/Medium | 152 | Repeatedly update the current sum: `cur_sum = max(cur_sum + num, num)`. The boundary between greedy and DP is blurry for this problem. |
435 | Non-overlapping Intervals | Medium | 452 | Sort the intervals by end point. |
763 | Partition Labels | Medium | | Use a hash table to track the last position (largest index) at which each letter appears. |
406 | Queue Reconstruction by Height | Medium | | Sort the list by height h in descending order and number of people k in ascending order. Construct the queue by iteratively calling `queue.insert(k, h)` for each pair in the sorted list. |
Backtracking is an exhaustive search algorithm for solving combination, permutation, subset, and constraint satisfaction problems. It solves a problem recursively by trying to build a solution incrementally while discarding candidates that fail to satisfy the problem's constraints. The time complexity of a backtracking algorithm is generally large, e.g., at least C(n, k) for combination problems and at least P(n, k) for permutation problems (reference). This video gives a great tutorial on how to implement backtracking for combination and permutation problems.
There are three key pillars in backtracking algorithms: (1) the base case, exit condition, or goal to achieve; (2) the choices to make at the next building step; and (3) pruning, which discards candidates that fail to satisfy the problem's constraints. A great high-level introduction to backtracking can be found here.
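The three pillars can be seen in a sketch of the combinations pattern (Problem 77; the function names are mine):

```python
def combinations(n, k):
    """All k-element combinations of 1..n via backtracking."""
    result = []

    def backtrack(start, path):
        if len(path) == k:               # (1) base case / goal
            result.append(path[:])
            return
        for num in range(start, n + 1):  # (2) choices
            path.append(num)
            backtrack(num + 1, path)     # (3) prune: only larger numbers
            path.pop()                   # undo the choice

    backtrack(1, [])
    return result

assert combinations(4, 2) == [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
```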
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
17 | Letter Combinations of a Phone Number | Medium | 77, 39, 40, 216 | Combinations. |
46 | Permutations | Medium | 47, 784, 996 | Permutations. |
78 | Subsets | Medium | 90 | Subsets. |
79 | Word Search | Medium | 212 | DFS for Problem 79 and DFS + Trie for Problem 212. |
22 | Generate Parentheses | Medium | 301 | Add `(` whenever the number of `(` is smaller than n, and add `)` whenever the number of `)` is smaller than the number of open parentheses. |
131 | Palindrome Partitioning | Medium | 93, 698, 842, 282 | Partition. |
37 | Sudoku Solver | Hard | 51 | |
Dynamic programming (DP) solves a problem by breaking it down into simpler, smaller subproblems and exploiting the fact that the optimal solution to the overall problem depends on the optimal solutions to these subproblems. DP is similar to the greedy and divide & conquer approaches in breaking the problem into smaller subproblems. However, these subproblems are not solved independently: the results of smaller subproblems are remembered and reused for similar and overlapping subproblems.
Please watch this video for a brief introductory tutorial on DP, and this video for a great summary of DP algorithms. DP comes in two forms: top-down (recursion + memoization) and iterative bottom-up. The main goals of DP are reducing the time complexity of a problem solved by plain recursion from exponential to something lower, such as linear, and finding counting or globally optimal solutions. The key to solving a DP problem is to find an iterative or recursive formula.
Asking DP problems in coding interviews is a little bit controversial: an interviewee will very likely either solve the problem in several minutes or be stuck for the whole coding session. Additionally, a lot of DP problems are quite involved and complicated. However, DP is one of the most important algorithms in practice, and has proven very useful for solving optimal control/decision problems.
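Both forms can be illustrated with Climbing Stairs (Problem 70), whose recursive formula is f(n) = f(n-1) + f(n-2) (a minimal sketch; the function names are mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_top_down(n):
    """Top-down: recursion + memoization turns O(2^n) into O(n)."""
    return n if n <= 2 else climb_top_down(n - 1) + climb_top_down(n - 2)

def climb_bottom_up(n):
    """Bottom-up: iterate the same recurrence with O(1) extra space."""
    prev, cur = 0, 1
    for _ in range(n):
        prev, cur = cur, prev + cur
    return cur

assert climb_top_down(10) == climb_bottom_up(10) == 89
```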
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
70 | Climbing Stairs | Easy | 746, 1137 | Simple DP. |
198 | House Robber | Easy | 213, 337, 740 | For Problem 213, try twice: once without the first house and once without the last house, then take the maximum. One way to solve Problem 740 is to convert it into a house robber problem. |
1218 | Longest Arithmetic Subsequence of Given Difference | Medium | 413, 300, 646, 1143 | For Problem 1218, use hash table + DP. |
416 | Partition Equal Subset Sum | Medium | 494, 474, 322, 518, 139, 377 | Knapsack problem. |
62 | Unique Paths | Medium | 63, 64, 120, 931 | DP in 2-dimensional space. Deal with the boundaries first for Problems 62-64. |
85 | Maximal Rectangle | Hard/Medium | 221, 1277 | Leverage the stack-based solution to Problem 84 for Problems 85 and 221. |
309 | Best Time to Buy and Sell Stock with Cooldown | Medium/Hard | 801 | Multi-state (rest, sold, hold) DP. |
Some of the most famous string problems include anagram, palindrome, substring, and rotation. If you are not familiar with the definition of anagram or palindrome, it might be a good idea to understand these concepts before the interview.
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
242 | Valid Anagram | Easy | 438, 205 | Hash table. |
409 | Longest Palindrome | Easy | | Hash table. |
9 | Palindrome Number | Easy | | Convert to a string, or reverse the number. |
3 | Longest Substring Without Repeating Characters | Medium | | Hash set. |
647 | Palindromic Substrings | Medium | 5 | Expand the string around each center letter. |
796 | Rotate String | Easy | | A naive solution is to check whether `B in A+A`. |
151 | Reverse Words in a String | Medium | | Reverse each word and reverse the whole string. |
Indices play an important role in array problems. In some problems, the values in the array are used as indices as well.
Quick select is a great algorithm to find the k-th smallest element in an array in O(n) average time, which outperforms heap/priority queue and sorting algorithms with time complexities of O(nlog(k)) and O(nlog(n)) respectively.
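A minimal sketch of quick select (this list-partition variant trades some space for clarity; the in-place Lomuto/Hoare partition is the usual interview version, and the function name is mine):

```python
import random

def quickselect(nums, k):
    """k-th smallest element of nums (k is 1-indexed), average O(n) time."""
    pivot = random.choice(nums)   # random pivot avoids worst-case inputs
    smaller = [x for x in nums if x < pivot]
    equal = [x for x in nums if x == pivot]
    larger = [x for x in nums if x > pivot]
    if k <= len(smaller):                       # answer is left of the pivot
        return quickselect(smaller, k)
    if k <= len(smaller) + len(equal):          # answer is the pivot itself
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

assert quickselect([3, 2, 1, 5, 4], 2) == 2
```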
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
566 | Reshape the Matrix | Easy | | Keep a counter `cnt`; the row index is `cnt//c` and the column index is `cnt%c`. |
645 | Set Mismatch | Easy | | Hash table. |
697 | Degree of an Array | Easy/Medium | | Hash table. |
442 | Find All Duplicates in an Array | Medium | | Leverage the input list, and use negation. |
565 | Array Nesting | Medium | | Use the input as the `visited` array. |
769 | Max Chunks To Make Sorted | Medium | | Iterate through the array. If the current max is smaller than or equal to the current index, the number of chunks increases by 1. |
462 | Minimum Moves to Equal Array Elements II | Medium | 215 | Quick select algorithm. |
287 | Find the Duplicate Number | Medium | | Slow and fast pointers + cycle detection. |
Typical math problems include finding prime numbers, gcd (greatest common divisor), lcm (least common multiple), converting base, and check if a number is a power of 2/3/4, etc.
There are a couple of tips related to gcd and lcm.
- Find gcd: `def gcd(a, b): return a if b == 0 else gcd(b, a%b)`, with time complexity `O(log(min(a, b)))`.
- Find lcm: `def lcm(a, b): return a*b//gcd(a, b)`.
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
204 | Count Primes | Easy | | Sieve of Eratosthenes. |
504 | Base 7 | Easy | 405, 168 | Repeatedly use `digit = n%base` and `n = n//base`, where the integer `base` is the target base. |
67 | Add Binary | Easy | 415 | |
367 | Valid Perfect Square | Easy | | Binary search with time complexity `O(log(n))`, or `1+3+...+(2n-1) = n*n` with time complexity `O(sqrt(n))`. |
326 | Power of Three | Easy | 342 | |
628 | Maximum Product of Three Numbers | Easy | | The max product is `max(max1*max2*max3, max1*min1*min2)`. |
238 | Product of Array Except Self | Medium | | Leverage the output array for O(1) extra space, and take cumulative products from left to right and from right to left. |
Bit manipulation problems can be very tricky. If you don't know the concept, it is highly likely you will not be able to solve the problem, and there is generally no workaround by using other algorithms.
The fundamental operations of bit manipulation are:
```
x ^ 0s = x      x & 0s = 0      x | 0s = x
x ^ 1s = ~x     x & 1s = x      x | 1s = 1s
x ^ x = 0       x & x = x       x | x = x
```
where `0s` and `1s` represent a sequence of 0s and 1s respectively.
There are a few tips that might help solve bit manipulation problems.
- Leveraging `x ^ 0s = x` and `x ^ x = 0`, we can remove or find duplicate numbers, e.g., `1^1^2 = 2`.
- `x&(x-1)` will remove the last 1 in the binary representation of `x`, and `x&(-x)` will get the last 1 in the binary representation.
- The binary representation of `-k` as an n-bit number is `concat(1, bin(2^(n-1)-k))`, where `bin` denotes binary representation.
- In Python, converting an integer to a binary string can be done using `bin(num)`. Be careful when `num` is negative.
- `x << n` will multiply `x` by 2^n, and `x >> n` will divide `x` by 2^n.
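These tips can be checked directly in Python:

```python
from functools import reduce
from operator import xor

# x & (x-1) clears the lowest set bit; x & (-x) isolates it.
x = 0b10110
assert x & (x - 1) == 0b10100
assert x & (-x) == 0b00010

# XOR cancels pairs, which is the trick behind Single Number (Problem 136).
assert reduce(xor, [1, 1, 2]) == 2

# Shifts multiply/divide by powers of two.
assert (3 << 2) == 12 and (12 >> 2) == 3
```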
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
136 | Single Number | Easy | 268, 260 | XOR. |
318 | Maximum Product of Word Lengths | Medium | | Use a mask/filter to indicate which letters appear in each word. |
338 | Counting Bits | Medium | | DP with `dp[i] = dp[i&(i-1)] + 1`. |
A Trie is a special tree data structure that stores the letters of strings in an ordered manner. It enables fast search of a string by traversing down a branch path of the tree. It behaves like a dictionary: we can navigate from the current letter to the next letter. Each Trie node has a `children` variable and an `is_word` variable. The `children` variable is generally a hash table with a letter as the key and a Trie node as the value, and the `is_word` variable is a boolean.
There are three common methods for the Trie data structure: `insert`, `search`, and `start_with`. The difference between the `search` and `start_with` functions is whether the `is_word` flag is true after iterating through the searched string.
When encountering keywords such as prefix, dictionary, search, and word in a coding problem, think about the Trie data structure.
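A minimal sketch following the description above, using a `defaultdict(TrieNode)` for `children` (the helper method `_walk` is mine):

```python
from collections import defaultdict

class TrieNode:
    def __init__(self):
        self.children = defaultdict(TrieNode)  # letter -> child node
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for letter in word:
            node = node.children[letter]  # defaultdict creates the path
        node.is_word = True

    def _walk(self, prefix):
        """Return the node at the end of prefix, or None if the path breaks."""
        node = self.root
        for letter in prefix:
            if letter not in node.children:
                return None
            node = node.children[letter]
        return node

    def search(self, word):          # full word: is_word must be set
        node = self._walk(word)
        return node is not None and node.is_word

    def start_with(self, prefix):    # prefix: reaching a node is enough
        return self._walk(prefix) is not None

trie = Trie()
trie.insert("apple")
assert trie.search("apple") and not trie.search("app")
assert trie.start_with("app")
```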
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
208 | Implement Trie (Prefix Tree) | Medium | 677, 648, 676, 720, 211 | Defining a TrieNode class whose `children` is a `defaultdict(TrieNode)` variable will facilitate coding. |
Topological sort is only feasible for a DAG (directed acyclic graph). One way to implement topological sorting is BFS. First, add the nodes with in-degree (or out-degree) equal to 0 to a queue. Then pop a node, traverse to its neighbors, and subtract 1 from their in-degrees/out-degrees. If a neighbor's in-degree/out-degree becomes 0, add it to the queue.
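This BFS procedure (Kahn's algorithm) can be sketched as follows (the function name is mine):

```python
from collections import deque

def topological_sort(num_nodes, edges):
    """Kahn's algorithm: repeatedly remove nodes with in-degree 0.

    Returns a valid topological order, or [] if the graph has a
    cycle (i.e., it is not a DAG).
    """
    graph = [[] for _ in range(num_nodes)]
    indegree = [0] * num_nodes
    for u, v in edges:                 # directed edge u -> v
        graph[u].append(v)
        indegree[v] += 1
    queue = deque(node for node in range(num_nodes) if indegree[node] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            indegree[neighbor] -= 1    # "remove" the edge node -> neighbor
            if indegree[neighbor] == 0:
                queue.append(neighbor)
    return order if len(order) == num_nodes else []

assert topological_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]) == [0, 1, 2, 3]
assert topological_sort(2, [(0, 1), (1, 0)]) == []  # cycle: not a DAG
```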
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
207 | Course Schedule | Medium | 210, 802 | Build a graph and an in-degree/out-degree list, then do topological sort. |
The Union Find algorithm, or disjoint-set data structure, determines whether two nodes are in the same set in amortized O(1) time. There are two operations in the `UnionFind` class: `find` and `union`. To enable amortized O(1) `find`, two optimizations are needed: path compression and union by rank.
A Python implementation of the disjoint-set data structure is given below.
```python
class UnionFind:
    def __init__(self, n):
        self.parents = [i for i in range(n)]
        self.ranks = [1 for _ in range(n)]

    def find(self, u):
        if u != self.parents[u]:
            # Path compression: point u directly at its root.
            self.parents[u] = self.find(self.parents[u])
        return self.parents[u]

    def union(self, u, v):
        pu, pv = self.find(u), self.find(v)
        if pu == pv:
            return False
        # Union by rank: attach the shallower tree under the deeper one.
        if self.ranks[pu] < self.ranks[pv]:
            self.parents[pu] = pv
        elif self.ranks[pu] > self.ranks[pv]:
            self.parents[pv] = pu
        else:
            self.parents[pv] = pu
            self.ranks[pu] += 1
        return True
```
In a coding interview, if you are asked a Union Find problem and you are not familiar with this algorithm, try other algorithms such as DFS or BFS, and it is highly likely that you will be still able to solve the problem.
ID | Problem Name | Difficulty | Similar problems | Main Idea |
---|---|---|---|---|
684 | Redundant Connection | Medium | 1319, 990, 721, 685 | |
Algorithm complexity is a measure of the amount of time and/or space required by an algorithm as a function of the input size. Time complexity describes the amount of time it takes to run an algorithm, and space complexity is the amount of memory space the algorithm requires.
The following picture depicts the Big O complexity of an algorithm as a function of the input size n.
We observe from the picture that when n is large,
`O(n!) > O(2^n) > O(n^2) > O(nlog(n)) > O(n) > O(log(n)) > O(1)`,
which gives the orders of complexity in decreasing order: factorial, exponential, polynomial, linearithmic, linear, logarithmic, and constant.
The following table summarizes the complexity of common algorithms.
Algorithm | Best Time | Average Time | Worst Time | Worst Space | Comments |
---|---|---|---|---|---|
Quick sort | O(nlog(n)) | O(nlog(n)) | O(n^2) | O(n) | Unstable |
Merge sort | O(nlog(n)) | O(nlog(n)) | O(nlog(n)) | O(n) | Stable |
Tim sort | O(n) | O(nlog(n)) | O(nlog(n)) | O(n) | Stable. Python uses Tim sort. |
Heap sort | O(nlog(n)) | O(nlog(n)) | O(nlog(n)) | O(1) | Unstable |
Binary search | | | O(log(n)) | O(1) | |
Tree DFS | | | O(n) | O(h) | h is the height of the tree |
Tree BFS | | | O(n) | O(w) | w is the maximum width of the tree |
Graph DFS | | | O(V+E) | O(V) | V and E are the numbers of vertices and edges respectively |
Graph BFS | | | O(V+E) | O(V) | |
Permutation | | | O(n!) | O(n) | Permutation (backtracking) |
Combination | | | O(2^n) | O(n) | Combination (backtracking) |
Many Leetcode questions present not only the problem description but also constraints on the input size. Interestingly, we can infer the time complexity of the algorithm we are supposed to develop from the input size. The following picture shows time complexity versus input size. For example, if the input size is less than 10, the problem is probably a backtracking (permutation) problem. Please refer to this great video for details. Such information might not be available in a coding interview, but it doesn't hurt to ask.
I recommend getting a copy of Gayle McDowell's coding interview preparation book, Cracking the Coding Interview, and following the recommended seven steps (illustrated below) to answer any coding question.
- Listen carefully to the problem description and ask clarifying questions

Don't miss any important information/keywords that might help you find an (optimal) solution. Most of the time, the interviewer will type the problem description into the collaborative coding environment. Do ask questions to clarify all your doubts, such as input, output, edge cases, and constraints. Even if the problem description is crystal clear to you, or you have solved the problem before, it is still helpful to repeat the problem back to the interviewer in a different way, to make sure you really understand it and to engage the interviewer.
- Present a nontrivial example if one is not given in the problem description

Sometimes the interviewer will give you an example input and an expected output. Be careful: the given example might be too trivial, might be just an edge case, or might purposely omit edge cases or information that could help you develop a solution. In that case, clarify with your interviewer by presenting a typical, non-trivial example and explaining the expected output. This step can be consolidated with the first step depending on the situation.
- Describe a brute force solution ASAP

Do not dive into coding immediately. First give an intuitive, naive solution and explain its time and space complexity, then engage and discuss with the interviewer to see whether you can optimize it. Note that you need to lead the conversation. The motivation for this step is that it is the natural way to solve a problem you have never seen before. The interviewer cares more about how you get to the optimal solution than about the solution itself. If you rush to an optimal solution immediately, without any explanation, it gives the impression that you have solved this problem before and are reproducing what you memorized. Companies try to hire developers with great problem-solving skills, not people with good memories.
- Optimize your solution to get an optimal one

This is the most important step. You need to lead the discussion, explaining why you use a certain data structure and/or algorithm to optimize your brute force solution, and how you get there. Thinking out loud and explaining your thought process is extremely important in the coding interview. Leverage the BUD (bottleneck, unnecessary work, duplicated work) framework and other tips from Cracking the Coding Interview. This is where all your practice and hard work pays off. Without sufficient practice and summarization across the different types and categories of coding questions, it will be challenging to come up with an optimal solution, especially for a problem you have never seen before.
- Walk through your optimal solution

Now that you have an optimal solution and have explained how you got there, walk through it with your interviewer to make sure both of you understand everything.
- Implement your algorithm and solution

Finally, we arrive at the coding step. Imagine you had dived into coding immediately after the interviewer gave you the problem description: how many important steps you would have missed! Remember to write clean code and modularize it with standard variable and function names. PEP 8 and the Google C++ Style Guide are great guidelines for writing clean, readable, production-quality code in Python and C++ respectively.
- Test your code

It is super important to start testing your code before your interviewer asks you to. First, eyeball your code for obvious bugs such as keyword errors, incorrect indentation, inconsistent variable or function names, missing colons, and indexing errors. Then use small cases, special cases, and edge cases to test your code. Do not panic if there are bugs; everybody makes mistakes, and that is why we test. Fix the bugs carefully and gracefully.
In a typical coding interview session, we generally need to solve one hard-level or two medium-level Leetcode questions in 45 minutes, including brief self-introductions and questions, etc. Therefore, use your time smartly and do not dogmatically follow the above 7 steps. Rather, be flexible and agile.
- Did not ask clarifying questions or confirm/repeat the question
- Did not explain your idea clearly or walk through your solution
- Did not listen to or take interviewer's suggestions/hints
- Dived into coding immediately
- Did not actively engage or interact/discuss with the interviewer
- Communication was poor or defensive
- Did not test your algorithm
- Poor naming of variables or functions
- Code had bugs or missed edge cases
- Code was not clean or modularized
- Did not respect the interviewer or was too arrogant
Now that you have mastered the skills and tips for cracking coding interviews, it is time to practice them in mock interviews before taking a real job interview. This step is extremely valuable: coding under pressure is very different, and you need practice to be at your best.
You can start by pairing with a coding buddy: give each other a medium-level question to solve in 25 minutes, and exchange feedback at the end. CoderPad is a great online collaborative coding environment for mock interviews. After that, you might want to leverage Pramp.com and interviewing.io to practice mock interviews with anonymous partners, which will give you the "real" interview feeling.