
Commit 3aebf5d

committed
issue 24 - update grammer
1 parent 8ce4df6 commit 3aebf5d

File tree

1 file changed (+17, -17 lines)

1 file changed

+17
-17
lines changed

interview/LRU Algorithm.md

@@ -2,17 +2,17 @@
 
 **Author: [labuladong](https://github.com/labuladong)**
 
-# Detaild Analysis of LRU Algorithm
+# Detailed Analysis of LRU Algorithm
 
 ### 1. What is LRU Algorithm
 
 It is just a cache clean-up strategy.
 
-A computer has limited memory cache. If the cache is full, some contents need to be removed from cache to provide space for new content. However, which part of the cache should be removed? We hope to remove not so useful contents, while leaving useful contents untouched for future usage. So the question is, what is the criteria to determine if the data is _useful_ or not?
+A computer has a limited memory cache. If the cache is full, some contents need to be removed from the cache to provide space for new content. However, which part of the cache should be removed? We hope to remove the less useful contents, while leaving the useful contents untouched for future usage. So the question is, what are the criteria to determine whether data is _useful_ or not?
 
-LRU (Least Recently Used) cache clean-up algorithm is a common strategy. According to the name, the latest used data should be _useful_. Hence, when the memory cache is full, we should priortize to remove those data that haven't been used for long are not useful.
+The LRU (Least Recently Used) cache clean-up algorithm is a common strategy. According to the name, the most recently used data should be _useful_. Hence, when the memory cache is full, we should prioritize removing the data that hasn't been used for the longest time.
 
-For example, an Android phone can run apps in the backgroud. If I opened in sequence: Settings, Phone Manager, and Calendar, their order in the background will be shown as following:
+For example, an Android phone can run apps in the background. If I opened, in sequence, Settings, Phone Manager, and Calendar, their order in the background will be as follows:
 
 ![jietu](../pictures/LRU%E7%AE%97%E6%B3%95/1.jpg)
 
@@ -22,7 +22,7 @@ If I switch to Settings now, Settings will be brought to the first:
 
 Assume that my phone only allows me to open 3 apps simultaneously, then the cache is already full by now. If I open another app, Clock, then I have to close another app to free up space for Clock. Which one should be closed?
 
-Accoording to LRU strategy, the lowest app, Phone Manager, should be closed, because it is the longest unused app. Afterwards, the newly opened app will be on the top:
+According to the LRU strategy, the lowest app, Phone Manager, should be closed, because it has been unused for the longest time. Afterwards, the newly opened app will be on the top:
 
 ![jietu](../pictures/LRU%E7%AE%97%E6%B3%95/3.jpg)
 
@@ -34,10 +34,10 @@ LRU algorithm is actually about data structure design:
 1. Take a parameter, `capacity`, as the maximum size; then
 2. Implement two APIs:
 * `put(key, val)`: to store key-value pair
-* `get(key)`: return the value associated with the key; return -1 if key doesn't exist.
+* `get(key)`: return the value associated with the key; return -1 if the key doesn't exist.
 3. The time complexity for both `get` and `put` should be __O(1)__.
 
-Let's use an example to understand how LRU algorithm works.
+Let's use an example to understand how the LRU algorithm works.
 
 ```cpp
 /* Cache capacity is 2 */
@@ -59,7 +59,7 @@ cache.put(3, 3);
 // cache = [(3, 3), (1, 1)]
 // Remarks: the memory capacity is full
 // We need to remove some contents to free up space
-// Removal will priortize longest unused data, which is at the tail
+// Removal will prioritize the longest unused data, which is at the tail
 // Afterwards, insert the new data at the head
 cache.get(2); // return -1 (not found)
 // cache = [(3, 3), (1, 1)]
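As a quick cross-check of the trace in the `cpp` comments above: Java's standard `LinkedHashMap`, constructed with `accessOrder = true` and an overridden `removeEldestEntry`, exhibits exactly this eviction behavior. This snippet is illustrative only and is not part of the commit (the class name `LruCache` and helper `getOr` are mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap in access order is effectively a ready-made hash linked list:
// get()/put() move the touched entry to the most-recent end, and
// removeEldestEntry lets us evict once capacity is exceeded.
class LruCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);            // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;          // evict the least recently used entry
    }

    int getOr(int key) {                   // mirror the article's "-1 if absent" contract
        return getOrDefault(key, -1);
    }

    public static void main(String[] args) {
        LruCache cache = new LruCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        System.out.println(cache.getOr(1));  // prints 1
        cache.put(3, 3);                     // evicts key 2, the least recently used
        System.out.println(cache.getOr(2));  // prints -1
    }
}
```

The re-invented wheel later in the article does the same work by hand, which is the point of the exercise.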
@@ -79,21 +79,21 @@ Through analysis of the above steps, if time complexity for `put` and `get` are
 - _Fast Deletion_: If the cache is full, we need to delete the last element.
 - _Fast Insertion_: We need to insert the data to the head upon each visit.
 
-Which data structure can fulfill the above requirements? Hash table can search fast, but the data is unordered. Data in linked list is ordered, and can be inserted or deleted fast, but is hard to be searched. Combining these two, we can come up with a new data structure: __hash linked list__.
+Which data structure can fulfill the above requirements? A hash table can search fast, but its data is unordered. Data in a linked list is ordered, and can be inserted or deleted fast, but is hard to search. Combining these two, we come up with a new data structure: the __hash linked list__.
 
 The core data structure of LRU cache algorithm is hash linked list, a combination of doubly linked list and hash table. Here is how the data structure looks:
 
 ![HashLinkedList](../pictures/LRU%E7%AE%97%E6%B3%95/5.jpg)
 
-The idea is simple - using hash table to provide the ability of fast earch to linked list. Think again about the previous example, isn't this data structure the perfect solution for LRU cache data structure?
+The idea is simple: use a hash table to give the linked list the ability of fast search. Think again about the previous example. Isn't this data structure the perfect fit for an LRU cache?
 
 Some audience may wonder, why doubly linked list? Can't single linked list work? Since key exists in hash table, why do we have to store the key-value pairs in linked list instead of values only?
 
-The answers only afloat when we actually do it. We can only understand the rationale behind the design after we implement LRU algorithm ourselves. Let's look at the code.
+The answers surface only when we actually implement it. We can only understand the rationale behind the design after we implement the LRU algorithm ourselves. Let's look at the code.
 
 ### 4. Implementation
 
-A lot of programming languages has built-in hash linked list, or LRU-alike functions. To help understand the details of LRU algorithm, let's use Java to re-invent the wheel.
+Many programming languages have a built-in hash linked list, or LRU-like functionality. To help understand the details of the LRU algorithm, let's use Java to reinvent the wheel.
 
 First, define the `Node` class of doubly linked list. Assuming both `key` and `val` are of type `int`.
 
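The `Node` definition itself falls between hunks and is not shown in this diff. A minimal version consistent with the description above (`int` key and value, doubly linked) would look like this; it is a reconstruction, not the committed code:

```java
// Node of the doubly linked list: stores the key-value pair plus
// prev/next pointers. Storing the key in the node matters later,
// when eviction must also remove the entry from the hash table.
class Node {
    public int key, val;
    public Node next, prev;

    public Node(int k, int v) {
        this.key = k;
        this.val = v;
    }
}
```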
@@ -115,7 +115,7 @@ class DoubleList {
     // Add x at the head, time complexity O(1)
     public void addFirst(Node x);
 
-    // Delete node x in the linked list (x is guarenteed to exist)
+    // Delete node x in the linked list (x is guaranteed to exist)
     // Given a node in a doubly linked list, time complexity O(1)
     public void remove(Node x);
 
@@ -129,9 +129,9 @@ class DoubleList {
 
 P.S. This is the typical interface of a doubly linked list. In order to focus on the LRU algorithm, we'll skip the detailed implementation of functions in this class.
 
-Now we can answer the question, why we have to use doubly linked list. In order to delete a node, we not only need to get the pointer of the node itself, but also need to update the node before and the node after. Only using a doubly linked list, we can guarentee the time complexity is O(1).
+Now we can answer the question of why we have to use a doubly linked list. In order to delete a node, we not only need the pointer of the node itself, but also need to update the nodes before and after it. Only with a doubly linked list can we guarantee O(1) time complexity.
 
-With the doubly linked list, we just need to use it in with hash table in LRU algorithm. Let's sort out the logic with pseudo code:
+With the doubly linked list ready, we just need to combine it with a hash table in the LRU algorithm. Let's sort out the logic with pseudocode:
 
 ```java
 // key associated with Node(key, val)
@@ -158,7 +158,7 @@ void put(int key, int val) {
             delete the last node in the linked list;
             delete the associated value in map;
         }
-        inseart the new node x to the head;
+        insert the new node x at the head;
         associate the new node x with key in map;
     }
 }
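The pseudocode in the hunk above maps almost line-for-line onto Java. The sketch below is my own filling-in, not code from this commit: the `addFirst`/`remove` names follow the article's `DoubleList` interface, while the sentinel head/tail nodes and the `LRUCacheSketch` class name are assumptions for a self-contained example:

```java
import java.util.HashMap;

// HashMap gives O(1) lookup; the hand-rolled doubly linked list gives
// O(1) insertion at the head and O(1) removal of an arbitrary node.
class LRUCacheSketch {
    private static class Node {
        int key, val;
        Node prev, next;
        Node(int k, int v) { key = k; val = v; }
    }

    private final HashMap<Integer, Node> map = new HashMap<>();
    private final int capacity;
    // Sentinel head/tail nodes simplify the edge cases
    private final Node head = new Node(0, 0), tail = new Node(0, 0);

    LRUCacheSketch(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    private void addFirst(Node x) {       // insert right after head, O(1)
        x.next = head.next;
        x.prev = head;
        head.next.prev = x;
        head.next = x;
    }

    private void remove(Node x) {         // unlink x, O(1) thanks to the prev pointer
        x.prev.next = x.next;
        x.next.prev = x.prev;
    }

    public int get(int key) {
        if (!map.containsKey(key)) return -1;
        int val = map.get(key).val;
        put(key, val);                    // re-insert to refresh recency, as in the pseudocode
        return val;
    }

    public void put(int key, int val) {
        Node x = new Node(key, val);
        if (map.containsKey(key)) {
            remove(map.get(key));         // delete the old node; map entry is updated below
        } else if (map.size() == capacity) {
            Node last = tail.prev;        // least recently used node sits at the tail
            remove(last);
            map.remove(last.key);         // the node stores its key precisely for this step
        }
        addFirst(x);
        map.put(key, x);
    }
}
```

Note how `map.remove(last.key)` answers the earlier question of why the linked list stores key-value pairs rather than values only: during eviction, the node is the only way to recover which key to delete from the hash table.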
@@ -228,6 +228,6 @@ If the cache is full, we not only need to delete the last node, but also need to
 
 Till now, you should have understood the idea and implementation of the LRU algorithm. One common mistake is forgetting to update the associated entries in the hash table when you deal with nodes in the linked list.
 
-**To make algorithm clear! Subscribe to my WeChat blog labuladong, and find more easy-to-understand articles.**
+**Explain the algorithms clearly! Subscribe to my WeChat blog labuladong, and find more easy-to-understand articles.**
 
 ![labuladong](../pictures/labuladong.png)
