Most algorithms are affected by the size of the input (`n`). Let's say you need to arrange numbers in ascending order. Sorting ten items will naturally take less time than sorting 2 million. But how much longer? As the input size grows, some algorithms take proportionally more time; we classify them as <<part01-algorithms-analysis#linear, linear>> runtime [or `O(n)`]. Others might take time proportional to the square of the input; we call them <<part01-algorithms-analysis#quadratic, quadratic>> running time [or `O(n^2^)`].
From another perspective, if you keep the input size the same and run different algorithm implementations, you would notice the difference between an efficient algorithm and a slow one. For example, a good sorting algorithm is <<part04-algorithmic-toolbox#merge-sort>>, and an inefficient algorithm for large inputs is <<part04-algorithmic-toolbox#selection-sort>>.
Organizing 1 million elements with merge sort takes 20 seconds, while selection sort takes 12 days. Ouch!
The amazing thing is that both programs solve the same problem with the same data and hardware, and yet there's a big difference in time!
After completing this book, you are going to _think algorithmically_.
There’s a notation called *Big O*, where `O` refers to the *order of the function*.
TIP: Big O = Big Order of a function.
If you have a program that has a runtime of:
_7n^3^ + 3n^2^ + 5_
You can express it in Big O notation as _O(n^3^)_. The other terms (_3n^2^ + 5_) become insignificant as the input grows.
Big O notation only cares about the “biggest” term in the time/space complexity. It combines what we learned about time and space complexity with asymptotic analysis and adds a worst-case scenario.
.All algorithms have three scenarios:
* Best-case scenario: the most favorable input arrangement where the program will take the fewest operations to complete. E.g., an array that's already sorted is beneficial for some sorting algorithms.
* Average-case scenario: this is the most common case. E.g., array items in random order for a sorting algorithm.
* Worst-case scenario: the inputs are arranged in such a way that causes the program to take the longest to complete. E.g., array items in reversed order will take the longest to run for some sorting algorithms.
TIP: Big O only cares about the highest order of the runtime function and the worst-case scenario.
WARNING: Don't drop terms that are multiplying other terms. _O(n log n)_ is not equivalent to _O(n)_. However, _O(n + log n)_ is.
There are many common notations, like polynomial _O(n^2^)_ as we saw in the `getMin` example, constant _O(1)_, and many more that we are going to explore in the next chapter.
Again, time complexity is not a direct measure of how long a program takes to execute, but rather how many operations it performs given the input size. Nevertheless, there’s a relationship between time complexity and clock time as we can see in the following table.
(((Tables, Intro, Input size vs clock time by Big O)))
Another real-life example is adding an element to the beginning of a <<part02-linear-data-structures#linked-list>>. You can check out the implementation <<part02-linear-data-structures#linked-list-inserting-beginning, here>>.
As you can see in both examples (array and linked list), if the input is a collection of 10 elements or 10 million, it would take the same amount of time to execute. You can't get any more performant than this!
[[logarithmic]]
==== Logarithmic
The binary search only works for sorted lists. It starts searching for an element in the middle of the array and discards the half where the element cannot be.
This binary search implementation is a recursive algorithm, which means that the function `binarySearchRecursive` calls itself multiple times until the solution is found. The binary search splits the array in half every time.
Finding the runtime of recursive algorithms is sometimes not obvious. It requires tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call, you are most likely looking at a logarithmic runtime: _O(log n)_.
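A minimal sketch of the idea (parameter names are assumptions):

[source, javascript]
----
function binarySearchRecursive(array, search, low = 0, high = array.length - 1) {
  if (low > high) return -1; // not found
  const mid = Math.floor((low + high) / 2);
  if (array[mid] === search) return mid; // found it
  if (array[mid] < search) {
    return binarySearchRecursive(array, search, mid + 1, high); // right half
  }
  return binarySearchRecursive(array, search, low, mid - 1); // left half
}

binarySearchRecursive([1, 3, 5, 7, 9], 7); // 3 (the array must be sorted)
----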
* *Best-case scenario*: the first two elements are duplicates. It only has to visit two elements.
* *Worst-case scenario*: no duplicates or duplicates are the last two. In either case, it has to visit every item in the array.
* *Average-case scenario*: duplicates are somewhere in the middle of the collection. Only half of the array will be visited.
As we learned before, Big O cares about the worst-case scenario, where we would have to visit every element of the array. So, we have an *O(n)* runtime.
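A sketch of the linear approach (function name is an assumption), using a set to remember the values we have already seen:

[source, javascript]
----
function hasDuplicates(array) {
  const seen = new Set();
  for (const value of array) {
    if (seen.has(value)) return true; // found the second occurrence
    seen.add(value);
  }
  return false; // visited every item without finding a match
}

hasDuplicates([1, 2, 3, 1]); // true
hasDuplicates([1, 2, 3]); // false
----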
Usually, they have double-nested loops, where each one visits all or most elements in the input.
[[quadratic-example]]
===== Finding duplicates in an array (naïve approach)
If you remember, we solved this problem more efficiently in the <<part01-algorithms-analysis#linear, Linear>> section with an _O(n)_ algorithm; let’s solve it this time with an _O(n^2^)_ approach.
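A naïve sketch with two nested loops might look like this (function name is an assumption):

[source, javascript]
----
function hasDuplicates(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = i + 1; j < array.length; j++) {
      if (array[i] === array[j]) return true; // duplicate found
    }
  }
  return false; // no duplicates
}
----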
As you can see, we have two nested loops, causing the running time to be quadratic. How much difference is there between a linear and a quadratic algorithm?
Let’s say you want to find a duplicated middle name in a phone directory of a city of ~1 million people. If you use this quadratic solution, you would have to wait ~12 days to get an answer [big]#🐢#; while if you use the <<part01-algorithms-analysis#linear, linear solution>>, you will get the answer in seconds! [big]#🚀#
WARNING: This is just an example; there are better ways to solve multi-variable equations.
As you can see, three nested loops usually translate to O(n^3^). If you have a four-variable equation and four nested loops, it would be O(n^4^), and so on. When we have a runtime in the form of _O(n^c^)_, where _c > 1_, we refer to this as a *polynomial runtime*.
book/content/part02/array.asc
TIP: Strings are a collection of Unicode characters, and most of the array concepts apply to them.
.Fixed vs. Dynamic Size Arrays
****
Some programming languages, like Java and C++, have fixed-size arrays. Fixed-size arrays might be a hassle when your collection gets full, and you have to create a new one with a bigger size. For that, those programming languages also have built-in dynamic arrays: we have `vector` in C++ and `ArrayList` in Java. Dynamic programming languages like JavaScript, Ruby, and Python use dynamic arrays by default.
****
Arrays look like this:
Arrays are a sequential collection of elements that can be accessed randomly using an index.
==== Insertion
Arrays are built into most languages. Inserting an element is simple; you can either add elements at creation time or after initialization. Below you can find an example for both cases:
.Inserting elements into an array
[source, javascript]
----
// (1) Adding elements at creation time:
const array = [2, 5, 1, 9, 6, 7];

// (2) Adding elements after initialization:
const array2 = [];
array2[3] = 1;
array2[100] = 2;
array2 // [empty × 3, 1, empty × 96, 2]
----
Using the index, you can replace whatever value you want. Also, you don't have to add items next to each other. The size of the array will dynamically expand to accommodate the data. You can reference values at whatever index you like: index 3 or even 100! In `array2`, we inserted 2 numbers, but the length is 101, and there are 99 empty spaces.
Deleting from the middle might cause most of the elements of the array to move up one position to fill in for the eliminated item. Thus, runtime: O(n).
===== Deleting element from the end
[source, javascript]
----
const array = [2, 5, 1, 9, 111];
array.pop(); // ↪️111
// array: [2, 5, 1, 9]
----
No other element has been shifted, so it’s an _O(1)_ runtime.
.JavaScript built-in `array.pop`
****
The `pop` method removes the last element from an array and returns that element. This operation changes the length of the array. Runtime: _O(1)_.
****
To sum up, the time complexity of an array is:
(((Runtime, Constant)))
(((Tables, Linear DS, JavaScript Array built-in operations Complexities)))
.Array Operations time complexity
|===
| Operation | Time Complexity | Usage
| push ^| O(1) | Insert element to the right side.
| pop ^| O(1) | Remove element from the right side.
| unshift ^| O(n) | Insert element to the left side.
| shift ^| O(n) | Remove element from the left side.
|===
book/content/part02/linked-list.asc
A list (or Linked List) is a linear data structure where each node is "linked" to the next one.
.Linked Lists can be:
- Singly: every item has a pointer to the next node.
- Doubly: every node has a reference to the next and previous node.
- Circular: the last element points to the first one.
[[singly-linked-list]]
==== Singly Linked List
Each element or node is *connected* to the next one by a reference. When a node only has one connection, it's called a *singly linked list*:
.Singly Linked List Representation: each node has a reference (blue arrow) to the next one.
image::image19.png[image,width=498,height=97]
Usually, a Linked List is referenced by the first element called *head* (or *root* node). For instance, if you want to get the `cat` element from the example above, then the only way to get there is using the `next` field on the head node. You would get `art` first, then use the next field recursively until you eventually get the `cat` element.
[[doubly-linked-list]]
==== Doubly Linked List
When each node has a connection to the `next` item and also the `previous` one, it's called a *doubly linked list*.
.Doubly Linked List: each node has a reference to the next and previous element.
image::image20.png[image,width=528,height=74]
With a doubly linked list, you can move not only forward but also backward. If you keep a reference to the last element (`cat`), you can step back and reach the middle part.
If we implement the code for the `Node` elements, it would be something like this:
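A minimal sketch (field names are assumptions; the `previous` reference only applies to doubly linked lists):

[source, javascript]
----
class Node {
  constructor(value) {
    this.value = value;   // data stored in this node
    this.next = null;     // reference to the next node
    this.previous = null; // reference to the previous node (doubly linked only)
  }
}
----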
Arrays allow you to access data anywhere in the collection using an index. However, a Linked List visits nodes in sequential order. In the worst-case scenario, it takes _O(n)_ to get an element from a Linked List. You might be wondering: Isn’t an array always more efficient with _O(1)_ access time? It depends.
We also have to understand the space complexity to see the trade-offs between arrays and linked lists. An array pre-allocates contiguous blocks of memory. When it is getting full, it has to create a bigger array (usually 2x) and copy all the elements, which takes _O(n)_. On the other hand, a Linked List’s nodes only reserve precisely the amount of memory they need. They don’t have to be next to each other, nor do large chunks of memory have to be booked beforehand like arrays. A Linked List is more on a "grow as you go" basis.
Another difference is that adding/deleting at the beginning of an array takes O(n); however, for a linked list it is a constant operation, O(1), as we will implement later.
A drawback of a linked list is that if you want to insert/delete an element at the end of the list, you would have to navigate the whole collection to find the last one: O(n). However, this can be solved by keeping track of the last element in the list. We are going to implement that!
==== Implementing a Linked List
In our constructor, we keep a reference to the `first` and also the `last` node for performance reasons.
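A sketch of what that might look like (field names are assumptions):

[source, javascript]
----
class LinkedList {
  constructor() {
    this.first = null; // head of the list
    this.last = null;  // tail of the list, kept for O(1) access to the end
    this.size = 0;     // number of nodes
  }
}
----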
==== Searching by value
There’s no other way to find an element by value than iterating through the entire list.
.Linked List's searching by values
[source, javascript]
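----
// A sketch of searching by value (method name is an assumption).
// It returns the index of the first match, or undefined if there's no match.
indexOf(value) {
  let current = this.first;
  let position = 0;
  while (current) {
    if (current.value === value) return position;
    position += 1;
    current = current.next;
  }
  return undefined;
}
----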
Searching by index is very similar: we iterate through the list until we find the element at the given position.
If there’s no match, we return `undefined`. The runtime is _O(n)_. As you might notice, the search by index and by position methods look pretty similar. If you want to take a look at the whole implementation, https://github.com/amejiarosario/dsa.js/blob/7694c20d13f6c53457ee24fbdfd3c0ac57139ff4/src/data-structures/linked-lists/linked-list.js#L8[click here].
==== Insertion
For inserting an element at the middle of the list, you would need to specify the position and update the following references:

. New node's `previous`.
. New node's `next`.
. Previous node's `next`.
. New node's next `previous`.
Let’s do an example with the following doubly linked list:
----
art <-> dog <-> cat
----
Take a look at the implementation in the https://github.com/amejiarosario/dsa.js[dsa.js repository].
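A sketch of how it might look (method and field names are assumptions), annotated with the callouts explained below:

[source, javascript]
----
add(value, position = 0) {
  if (position === 0) return this.addFirst(value); // <1>
  if (position === this.size) return this.addLast(value); // <2>
  const current = this.get(position);
  if (!current) return undefined; // position doesn't exist
  const newNode = new Node(value); // <3>
  newNode.previous = current.previous; // <4>
  newNode.next = current; // <5>
  current.previous.next = newNode; // <6>
  current.previous = newNode; // <7>
  this.size += 1;
  return newNode;
}
----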
<1> If the new item goes to position 0, then we reuse the `addFirst` method, and we are done!
<2> However, if we are adding to the last position, then we reuse the `addLast` method, and done!
<3> Adding `newNode` to the middle: First, create the new node only if the position exists. Take a look at <<Searching by index>> to see the `get` implementation.
<4> Set newNode `previous` reference.
<5> Set newNode `next` link.
<6> No other node in the list is pointing to `newNode`, so we have to make the prior element point to `newNode`.
<7> Make the next element point to `newNode`.
Notice that we reused the `addFirst` and `addLast` methods. For all other cases, the insertion is in the middle. We use `current.previous.next` and `current.next` to update the surrounding elements and make them point to the new node. Inserting in the middle takes *O(n)* because we have to iterate through the list using the `get` method.
==== Deletion
Deleting the first element (or head) is a matter of removing all references to it.
.Deleting an element from the head of the list
image::image26.png[image,width=528,height=74]
For instance, to remove the head (“art”) node, we change the variable `first` to point to the second node “dog”. We also remove the variable `previous` from the "dog" node, so it doesn't point to the “art” node. The garbage collector will get rid of the “art” node when it sees nothing is using it anymore.
.Linked List's remove from the beginning of the list
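[source, javascript]
----
// A sketch of removeFirst (method and field names are assumptions):
removeFirst() {
  const head = this.first;
  if (!head) return undefined; // nothing to remove from an empty list
  this.first = head.next; // the 2nd node becomes the first one
  if (this.first) {
    this.first.previous = null; // drop the back-reference to the removed head
  } else {
    this.last = null; // the list is now empty
  }
  this.size -= 1;
  return head.value;
}
----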
As you can see, when we want to remove the first node, we make the 2nd element the first one.
===== Deleting element from the tail
Removing the last element from the list would require iterating from the head until we find the last one: that's O(n). But if we keep a reference to the last element, which we do, we can do it in _O(1)_ instead!
.Removing last element from the list using the last reference.
image::image27.png[image,width=528,height=221]
For instance, say we want to remove the last node, “cat”. We use the `last` pointer to avoid iterating through the whole list. We check `last.previous` to get the “dog” node, make it the new `last`, and remove its `next` reference to “cat”. Since nothing is pointing to “cat”, it is out of the list and will eventually be deleted from memory by the garbage collector.
.Linked List's remove from the end of the list
[source, javascript]
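----
// A sketch of removeLast using the `last` reference (names are assumptions):
removeLast() {
  const tail = this.last;
  if (!tail) return undefined; // nothing to remove from an empty list
  this.last = tail.previous; // "dog" becomes the new last node
  if (this.last) {
    this.last.next = null; // remove the next reference to "cat"
  } else {
    this.first = null; // the list is now empty
  }
  this.size -= 1;
  return tail.value;
}
----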
To remove a node from the middle, we make the surrounding nodes bypass the one we want to remove.
image::image28.png[image,width=528,height=259]
In the illustration, we are removing the middle node “dog” by making art’s `next` variable point to “cat” and cat’s `previous` point to “art”, totally bypassing “dog”.
Let’s implement it:
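A sketch of how it might look (names are assumptions), reusing `removeFirst` and `removeLast` for the edge positions:

[source, javascript]
----
remove(position = 0) {
  if (position === 0) return this.removeFirst();
  if (position === this.size - 1) return this.removeLast();
  const current = this.get(position); // node to bypass, e.g. "dog"
  if (!current) return undefined; // position doesn't exist
  current.previous.next = current.next; // art's next points to cat
  current.next.previous = current.previous; // cat's previous points to art
  this.size -= 1;
  return current.value;
}
----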
So far, we have seen two linear data structures with different use cases. Here’s a summary:
.2+.^s| Data Structure 2+^s| Searching By 3+^s| Inserting at the 3+^s| Deleting from .2+.^s| Space
If you compare the singly linked list vs. the doubly linked list, you will notice that the main difference is inserting elements at, and deleting elements from, the end. For a singly linked list, it's *O(n)*, while for a doubly linked list, it's *O(1)*.
Comparing an array with a doubly linked list, both have different use cases:
Use a doubly linked list when:
* You want to insert elements at the start and end of the list. The linked list has O(1) while the array has O(n).
* You want to save some memory when dealing with possibly large data sets. Arrays pre-allocate a large chunk of contiguous memory on initialization. Lists are more “grow as you go”.
For the next two linear data structures, <<part02-linear-data-structures#stack>> and <<part02-linear-data-structures#queue>>, we are going to use a doubly linked list to implement them. We could use an array as well, but since inserting/deleting from the start performs better with linked lists, we are going to use that.
book/content/part02/stack.asc
(((LIFO)))
The stack is a data structure that restricts the way you add and remove data. It only allows you to insert and retrieve in a *Last-In-First-Out* (LIFO) fashion.
An analogy is to think that the stack is a rod and the data are discs. You can only take out the last one you put in.
.Stack data structure is like a stack of disks: the last element in is the first element out
image::image29.png[image,width=240,height=238]
// #Change image from https://www.khanacademy.org/computing/computer-science/algorithms/towers-of-hanoi/a/towers-of-hanoi[Khan Academy]#
As you can see in the image above, if you insert the disks in the order `5`, `4`, `3`, `2`, `1`, then you can remove them in the order `1`, `2`, `3`, `4`, `5`.
The stack inserts items at the end of the collection and also removes them from the end. Both an array and a linked list would do this in constant time. However, since we don’t need the array’s random access, a linked list makes more sense.
.Stack's constructor
[source, javascript]
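----
// A sketch of the constructor (names are assumptions): the stack delegates
// to the doubly linked list from the previous section.
class Stack {
  constructor() {
    this.items = new LinkedList();
  }
}
----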
Implementing the stack with an array and a linked list would lead to the same time complexity:

|===
| Operation | Time Complexity
| push ^| O(1)
| pop ^| O(1)
|===
// end::table[]
It's not very common to search for values on a stack (other Data Structures are better suited for this). Stacks are especially useful for implementing <<part03-graph-data-structures#dfs-tree, Depth-First Search>>.
book/content/preface.asc
=== What is in this book?
_{doctitle}_ is a book that can be read from cover to cover, where each section builds on top of the previous one. Also, it can be used as a reference manual where developers can refresh specific topics before an interview or look for ideas to solve a problem optimally. (Check out the <<a-time-complexity-cheatsheet#,Time Complexity Cheatsheet>> and <<index#, topical index>>)
This publication is designed to be concise, intending to serve software developers looking to get a firm conceptual understanding of data structures in a quick yet in-depth fashion. After reading this book, the reader should have a fundamental knowledge of algorithms, including when and where to apply them and the trade-offs of using one data structure over another. The reader will then be able to make intelligent decisions about algorithms and data structures in their projects.
=== Who this book is for
This book is for software developers familiar with JavaScript looking to improve their problem-solving skills or preparing for a job interview.
NOTE: You can apply the concepts in this book to any programming language. However, instead of doing examples in pseudo-code, we are going to use JavaScript to implement the code examples.
book/part02-linear-data-structures.asc
Data Structures come in many flavors. There’s no one to rule them all, so you have to know the tradeoffs to choose the right one for the job.
Even though you might not need to re-implement them in your day-to-day work, knowing how they work internally helps you know when to use one over the other, and even how to tweak them to create new ones. We are going to explore the most common data structures' time and space complexity.
.In this part we are going to learn about the following linear data structures: