
Commit 9ae2d12

Committed on Jun 23, 2019
epub completed!!!!!

34 files changed: +205 −186 lines
 

‎book-pro/A-time-complexity-cheatsheet.asc

Lines changed: 4 additions & 2 deletions
@@ -12,13 +12,15 @@ include::content/part01/algorithms-analysis.asc[tag=table]
 
 === Linear Data Structures
 
-include::part02-linear-data-structures.asc[tag=table]
+// <<part02-linear-data-structures#linear-data-structures-table>>
+
+include::content/part02/array-vs-list-vs-queue-vs-stack.asc[tag=table]
 
 === Trees and Maps Data Structures
 
 This section covers Binary Search Tree (BST) time complexity (Big O).
 
-include::part03-graph-data-structures.asc[tag=table]
+include::content/part03/time-complexity-graph-data-structures.asc[tag=table]
 
 include::content/part03/graph.asc[tag=table]
 
‎book-pro/B-self-balancing-binary-search-trees.asc

Lines changed: 2 additions & 0 deletions
@@ -1,4 +1,5 @@
 [appendix]
+[[b-self-balancing-binary-search-trees]]
 == Self-balancing Binary Search Trees
 
 Binary Search Trees (BST) are an excellent data structure to find elements very fast _O(log n)_.
@@ -27,6 +28,7 @@ As you might notice, we balanced the tree in the example by doing a rotation.
 To be more specific we rotated node `1` to the left to balance the tree.
 Let's examine all the possible rotation we can do to balance a tree.
 
+[[tree-rotations]]
 === Tree Rotations
 (((Tree Rotations)))
 We can do single rotations left and right and also we can do double rotations.
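
For quick reference, the left rotation this hunk describes can be sketched in a few lines of JavaScript. This is a hypothetical simplification (plain `{ value, left, right }` objects), not the book's `src/data-structures/trees` implementation:

.Left rotation (hypothetical sketch)
[source, javascript]
----
// Rotate `node` to the left and return the new root of the subtree.
function rotateLeft(node) {
  const newParent = node.right; // the right child becomes the new subtree root
  node.right = newParent.left;  // adopt the new root's left subtree
  newParent.left = node;        // the old root becomes the left child
  return newParent;
}

// Unbalanced chain 1 -> 2 -> 3; rotating node `1` left makes `2` the root.
const one = { value: 1, left: null, right: { value: 2, left: null, right: { value: 3, left: null, right: null } } };
console.log(rotateLeft(one).value); // 2
----
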

‎book-pro/C-AVL-tree.asc

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
 [appendix]
+[[c-avl-tree]]
 == AVL Tree
 (((AVL Tree)))
 (((Tree, AVL)))
@@ -59,4 +60,4 @@ include::../src/data-structures/trees/avl-tree.js[tag=balance]
 The first thing we do is to see if one subtree is longer than the other.
 If so, then we check the children balance to determine if need a single or double rotation and in which direction.
 
-You can review <<Tree Rotations>> in case you want a refresher.
+You can review <<b-self-balancing-binary-search-trees#tree-rotations>> in case you want a refresher.
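
The balancing rule the hunk context describes (see if one subtree is longer, then check the child's balance to pick a single or double rotation) can be sketched as follows. The helper names are hypothetical, not necessarily those in `avl-tree.js`:

.AVL balance check (hypothetical sketch)
[source, javascript]
----
const rotateRight = (n) => { const p = n.left; n.left = p.right; p.right = n; return p; };
const rotateLeft = (n) => { const p = n.right; n.right = p.left; p.left = n; return p; };
const height = (n) => (n ? 1 + Math.max(height(n.left), height(n.right)) : 0);
const balanceFactor = (n) => height(n.left) - height(n.right);

function balance(node) {
  if (balanceFactor(node) > 1) { // left subtree is longer
    if (balanceFactor(node.left) < 0) node.left = rotateLeft(node.left); // left-right case: double rotation
    return rotateRight(node);
  }
  if (balanceFactor(node) < -1) { // right subtree is longer
    if (balanceFactor(node.right) > 0) node.right = rotateRight(node.right); // right-left case
    return rotateLeft(node);
  }
  return node; // already balanced
}
----
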

‎book-pro/config/Rakefile

Lines changed: 10 additions & 10 deletions
@@ -32,18 +32,18 @@ namespace :book do
     puts `#{cmd}`
     puts "\t-- HTML output at progit.html"
 
-    puts "\r\nConverting to PDF... (this one takes a while)"
-    cmd = "bundle exec asciidoctor-pdf #{require} #{require_pdf} #{params} #{input} 2>&1"
-    puts cmd
-    puts `#{cmd}`
-    puts " -- PDF output at progit.pdf"
-
-    # puts "\r\n>> Converting to EPub..."
-    # # cmd = "bundle exec asciidoctor-epub3 #{require} -a ebook-validate #{params} #{input} 2>&1"
-    # cmd = "bundle exec asciidoctor-epub3 -a ebook-validate #{require} #{params} #{input} 2>&1"
+    # puts "\r\nConverting to PDF... (this one takes a while)"
+    # cmd = "bundle exec asciidoctor-pdf #{require} #{require_pdf} #{params} #{input} 2>&1"
     # puts cmd
     # puts `#{cmd}`
-    # puts "\t-- Epub output at progit.epub"
+    # puts " -- PDF output at progit.pdf"
+
+    puts "\r\n>> Converting to EPub..."
+    # cmd = "bundle exec asciidoctor-epub3 #{require} -a ebook-validate #{params} #{input} 2>&1"
+    cmd = "bundle exec asciidoctor-epub3 #{require} #{params} #{input} 2>&1"
+    puts cmd
+    puts `#{cmd}`
+    puts "\t-- Epub output at progit.epub"
 
     # puts "\r\nConverting to Mobi (kf8)..."
     # puts `bundle exec asciidoctor-epub3 #{params} -a ebook-format=kf8 #{input} 2>&1`

‎book-pro/content/colophon.asc

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 [colophon#colophon%nonfacing]
-== {doctitle}
+== Data Structures & Algorithms in JavaScript
 
 Copyright © {docyear} Adrian Mejia

‎book-pro/content/part01/algorithms-analysis.asc

Lines changed: 3 additions & 15 deletions
@@ -60,27 +60,15 @@ To give you a clearer picture of how different algorithms perform as the input s
 |Find all permutations of a string |4 sec. |> vigintillion years |> centillion years |∞ |∞
 |=============================================================================================
 
-Most algorithms are affected by the size of the input (`n`). Let's say you need to arrange numbers in ascending order. Sorting ten items will naturally take less time than sorting out 2 million. But, how much longer? As the input size grow, some algorithms take proportionally more time, we classify them as <<linear, linear>> runtime [or `O(n)`]. Others might take power two longer; we call them <<quadratic, quadratic>> running time [or `O(n^2^)`].
+Most algorithms are affected by the size of the input (`n`). Let's say you need to arrange numbers in ascending order. Sorting ten items will naturally take less time than sorting out 2 million. But, how much longer? As the input size grow, some algorithms take proportionally more time, we classify them as <<part01-algorithms-analysis#linear, linear>> runtime [or `O(n)`]. Others might take power two longer; we call them <<part01-algorithms-analysis#quadratic, quadratic>> running time [or `O(n^2^)`].
 
-<<part04#merge-sort>>
-
-// <<part04#>>
-// <<part04#_algorithms_toolbox>>
-// <<part04#algorithms_toolbox>>
-
-// <<part04-algorithmic-toolbox#_algorithms_toolbox>>
-// <<part04-algorithmic-toolbox#algorithms_toolbox>>
-
-// <<ch04-git-on-the-server#_getting_git_on_a_server>>
-
-
-From another perspective, if you keep the input size the same and run different algorithms implementations, you would notice the difference between an efficient algorithm and a slow one. For example, a good sorting algorithm is <<chapter-4#merge-sort>>, and an inefficient algorithm for large inputs is <<chapter-4#selection-sort>>.
+From another perspective, if you keep the input size the same and run different algorithms implementations, you would notice the difference between an efficient algorithm and a slow one. For example, a good sorting algorithm is <<part04-algorithmic-toolbox#merge-sort>>, and an inefficient algorithm for large inputs is <<part04-algorithmic-toolbox#selection-sort>>.
 Organizing 1 million elements with merge sort takes 20 seconds while bubble sort takes 12 days, ouch!
 The amazing thing is that both programs are solving the same problem with equal data and hardware; and yet, there's a big difference in time!
 
 After completing this book, you are going to _think algorithmically_.
 You will be able to scale your programs while you are designing them.
-Find bottlenecks of existing software and have an <<Algorithmic Toolbox>> to optimize algorithms and make them faster without having to pay more for cloud computing (e.g., AWS EC2 instances). [big]#💸#
+Find bottlenecks of existing software and have an <<part04-algorithmic-toolbox#algorithms-toolbox>> to optimize algorithms and make them faster without having to pay more for cloud computing (e.g., AWS EC2 instances). [big]#💸#
 
 <<<
 ==== Increasing your code performance

‎book-pro/content/part01/big-o-examples.asc

Lines changed: 32 additions & 26 deletions
@@ -21,6 +21,7 @@ image:images/image5.png[CPU time needed vs. Algorithm runtime as the input size
 
 The above chart shows how the running time of an algorithm is related to the amount of work the CPU has to perform. As you can see O(1) and O(log n) are very scalable. However, O(n^2^) and worst can make your computer run for years [big]#😵# on large datasets. We are going to give some examples so you can identify each one.
 
+[[constant]]
 ==== Constant
 (((Constant)))
 (((Runtime, Constant)))
@@ -39,17 +40,18 @@ Let's implement a function that finds out if an array is empty or not.
 include::{codedir}/runtimes/01-is-empty.js[tag=isEmpty]
 ----
 
-Another more real life example is adding an element to the begining of a <<Linked List>>. You can check out the implementation <<linked-list-inserting-beginning, here>>.
+Another more real life example is adding an element to the begining of a <<part02-linear-data-structures#linked-list>>. You can check out the implementation <<part02-linear-data-structures#linked-list-inserting-beginning, here>>.
 
 As you can see, in both examples (array and linked list) if the input is a collection of 10 elements or 10M it would take the same amount of time to execute. You can't get any more performance than this!
 
+[[logarithmic]]
 ==== Logarithmic
 (((Logarithmic)))
 (((Runtime, Logarithmic)))
 Represented in Big O notation as *O(log n)*, when an algorithm has this running time it means that as the size of the input grows the number of operations grows very slowly. Logarithmic algorithms are very scalable. One example is the *binary search*.
 indexterm:[Runtime, Logarithmic]
 
-[#logarithmic-example]
+[[logarithmic-example]]
 ===== Searching on a sorted array
 
 The binary search only works for sorted lists. It starts searching for an element on the middle of the array and then it moves to the right or left depending if the value you are looking for is bigger or smaller.
@@ -71,7 +73,7 @@ Finding the runtime of recursive algorithms is not very obvious sometimes. It re
 (((Runtime, Linear)))
 Linear algorithms are one of the most common runtimes. It’s represented as *O(n)*. Usually, an algorithm has a linear running time when it iterates over all the elements in the input.
 
-[#linear-example]
+[[linear-example]]
 ===== Finding duplicates in an array using a map
 
 Let’s say that we want to find duplicate elements in an array. What’s the first implementation that comes to mind? Check out this implementation:
@@ -92,12 +94,13 @@ As we learned before, the big O cares about the worst-case scenario, where we wo
 
 Space complexity is also *O(n)* since we are using an auxiliary data structure. We have a map that in the worst case (no duplicates) it will hold every word.
 
+[[linearithmic]]
 ==== Linearithmic
 (((Linearithmic)))
 (((Runtime, Linearithmic)))
 An algorithm with a linearithmic runtime is represented as _O(n log n)_. This one is important because it is the best runtime for sorting! Let’s see the merge-sort.
 
-[#linearithmic-example]
+[[linearithmic-example]]
 ===== Sorting elements in an array
 
 The ((Merge Sort)), like its name indicates, has two functions merge and sort. Let’s start with the sort function:
@@ -136,10 +139,10 @@ Running times that are quadratic, O(n^2^), are the ones to watch out for. They u
 
 Usually, they have double-nested loops that where each one visits all or most elements in the input. One example of this is a naïve implementation to find duplicate words on an array.
 
-[#quadratic-example]
+[[quadratic-example]]
 ===== Finding duplicates in an array (naïve approach)
 
-If you remember we have solved this problem more efficiently on the <<Linear, Linear>> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_:
+If you remember we have solved this problem more efficiently on the <<part01-algorithms-analysis#linear, Linear>> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_:
 
 // image:images/image12.png[image,width=527,height=389]
 
@@ -151,14 +154,15 @@ include::{codedir}/runtimes/05-has-duplicates-naive.js[tag=hasDuplicates]
 
 As you can see, we have two nested loops causing the running time to be quadratic. How much different is a linear vs. quadratic algorithm?
 
-Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <<Linear, linear solution>> you will get the answer in seconds! [big]#🚀#
+Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <<part01-algorithms-analysis#linear, linear solution>> you will get the answer in seconds! [big]#🚀#
 
+[[cubic]]
 ==== Cubic
 (((Cubic)))
 (((Runtime, Cubic)))
 Cubic *O(n^3^)* and higher polynomial functions usually involve many nested loops. As an example of a cubic algorithm is a multi-variable equation solver (using brute force):
 
-[#cubic-example]
+[[cubic-example]]
 ===== Solving a multi-variable equation
 
 Let’s say we want to find the solution for this multi-variable equation:
@@ -179,14 +183,15 @@ WARNING: This just an example, there are better ways to solve multi-variable equ
 
 As you can see three nested loops usually translates to O(n^3^). If you have a four variable equation and four nested loops it would be O(n^4^) and so on when we have a runtime in the form of _O(n^c^)_, where _c > 1_, we can refer as a *polynomial runtime*.
 
+[[exponential]]
 ==== Exponential
 (((Exponential)))
 (((Runtime, Exponential)))
 Exponential runtimes, O(2^n^), means that every time the input grows by one the number of operations doubles. Exponential programs are only usable for a tiny number of elements (<100) otherwise it might not finish on your lifetime. [big]#💀#
 
 Let’s do an example.
 
-[#exponential-example]
+[[exponential-example]]
 ===== Finding subsets of a set
 
 Finding all distinct subsets of a given set can be implemented as follows:
@@ -209,6 +214,7 @@ include::{codedir}/runtimes/07-sub-sets.js[tag=snippet]
 
 Every time the input grows by one the resulting array doubles. That’s why it has an *O(2^n^)*.
 
+[[factorial]]
 ==== Factorial
 (((Factorial)))
 (((Runtime, Factorial)))
@@ -225,7 +231,7 @@ A factorial is the multiplication of all the numbers less than itself down to 1.
 - 11! = 39,916,800
 ****
 
-[#factorial-example]
+[[factorial-example]]
 ===== Getting all permutations of a word
 (((Permutations)))
 (((Words permutations)))
@@ -257,35 +263,35 @@ We went through 8 of the most common time complexities and provided examples for
 |Example(s)
 
 |O(1)
-|<<Constant>>
-|<<constant-example>>
+|<<part01-algorithms-analysis#constant>>
+|<<part01-algorithms-analysis#constant-example>>
 
 |O(log n)
-|<<Logarithmic>>
-|<<logarithmic-example>>
+|<<part01-algorithms-analysis#logarithmic>>
+|<<part01-algorithms-analysis#logarithmic-example>>
 
 |O(n)
-|<<Linear>>
-|<<linear-example>>
+|<<part01-algorithms-analysis#linear>>
+|<<part01-algorithms-analysis#linear-example>>
 
 |O(n log n)
-|<<Linearithmic>>
-|<<linearithmic-example>>
+|<<part01-algorithms-analysis#linearithmic>>
+|<<part01-algorithms-analysis#linearithmic-example>>
 
 |O(n^2^)
-|<<Quadratic>>
-|<<quadratic-example>>
+|<<part01-algorithms-analysis#quadratic>>
+|<<part01-algorithms-analysis#quadratic-example>>
 
 |O(n^3^)
-|<<Cubic>>
-|<<cubic-example>>
+|<<part01-algorithms-analysis#cubic>>
+|<<part01-algorithms-analysis#cubic-example>>
 
 |O(2^n^)
-|<<Exponential>>
-|<<exponential-example>>
+|<<part01-algorithms-analysis#exponential>>
+|<<part01-algorithms-analysis#exponential-example>>
 
 |O(n!)
-|<<Factorial>>
-|<<factorial-example>>
+|<<part01-algorithms-analysis#factorial>>
+|<<part01-algorithms-analysis#factorial-example>>
 |===
 // end::table[]
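
Since several of the renamed anchors above (`logarithmic-example` in particular) point at binary search, here is a minimal sketch of that algorithm, assuming a sorted array of numbers. It is illustrative only; the chapter's `{codedir}/runtimes` listings are the canonical versions:

.Binary search (hypothetical sketch)
[source, javascript]
----
// O(log n): start in the middle, then move right or left
// depending on whether the target is bigger or smaller.
function binarySearch(sortedArray, target) {
  let lo = 0;
  let hi = sortedArray.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sortedArray[mid] === target) return mid;
    if (sortedArray[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}

console.log(binarySearch([2, 3, 5, 7, 11, 13], 7)); // 3
----
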
book-pro/content/part02/array-vs-list-vs-queue-vs-stack.asc

Lines changed: 36 additions & 0 deletions

@@ -0,0 +1,36 @@
+=== Array vs. Linked List & Queue vs. Stack
+
+In this part of the book, we explored the most used linear data structures such as Arrays, Linked Lists, Stacks and Queues. We implemented them and discussed the runtime of their operations.
+
+.Use Arrays when…
+* You need to access data in random order fast (using an index).
+* Your data is multi-dimensional (e.g., matrix, tensor).
+
+.Use Linked Lists when:
+* You will access your data sequentially.
+* You want to save memory and only allocate memory as you need it.
+* You want constant time to remove/add from extremes of the list.
+
+.Use a Queue when:
+* You need to access your data in a first-come, first served basis (FIFO).
+* You need to implement a <<part03-graph-data-structures#bfs-tree, Breadth-First Search>>
+
+.Use a Stack when:
+* You need to access your data as last-in, first-out (LIFO).
+* You need to implement a <<part03-graph-data-structures#dfs-tree, Depth-First Search>>
+(((Tables, Linear DS, Array/Lists/Stack/Queue complexities)))
+
+[[linear-data-structures-table]]
+// tag::table[]
+.Time/Space Complexity of Linear Data Structures (Array, LinkedList, Stack & Queues)
+|===
+.2+.^s| Data Structure 2+^s| Searching By 3+^s| Inserting at the 3+^s| Deleting from .2+.^s| Space
+^|_Index/Key_ ^|_Value_ ^|_beginning_ ^|_middle_ ^|_end_ ^|_beginning_ ^|_middle_ ^|_end_
+| <<part02-linear-data-structures#array>> ^|O(1) ^|O(n) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(n) ^|O(1) ^|O(n)
+| <<part02-linear-data-structures#singly-linked-list>> ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(n)* ^|O(n)
+| <<part02-linear-data-structures#doubly-linked-list>> ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(1)* ^|O(n)
+| <<part02-linear-data-structures#stack>> ^|- ^|- ^|- ^|- ^|O(1) ^|- ^|- ^|O(1) ^|O(n)
+| Queue (w/array) ^|- ^|- ^|- ^|- ^|*O(n)* ^|- ^|- ^|O(1) ^|O(n)
+| <<part02-linear-data-structures#queue>> (w/list) ^|- ^|- ^|- ^|- ^|O(1) ^|- ^|- ^|O(1) ^|O(n)
+|===
+// end::table[]

‎book-pro/content/part02/array.asc

Lines changed: 2 additions & 0 deletions
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[array]]
 === Array
 (((Array)))
 (((Data Structures, Linear, Array)))
@@ -119,6 +120,7 @@ The `push()` method adds one or more elements to the end of an array and returns
 Runtime: O(1).
 ****
 
+[[array-search-by-value]]
 ==== Searching by value and index
 
 Searching by index is very easy using the `[]` operator:

‎book-pro/content/part02/linked-list.asc

Lines changed: 6 additions & 5 deletions
@@ -2,7 +2,7 @@
 // :imagesdir: ../images
 // :codedir: ../../src
 // endif::[]
-
+[[linked-list]]
 === Linked List
 (((Linked List)))
 (((List)))
@@ -14,7 +14,7 @@ A list (or Linked List) is a linear data structure where each node is "linked" t
 - Doubly: every node has a reference to the next and previous object
 - Circular: the last element points to the first one.
 
-
+[[singly-linked-list]]
 ==== Singly Linked List
 
 Each element or node is *connected* to the next one by a reference. When a node only has one connection it's called *singly linked list*:
@@ -24,6 +24,7 @@ image:images/image19.png[image,width=498,height=97]
 
 Usually, a Linked List is referenced by the first element in called *head* (or *root* node). For instance, if you want to get the `cat` element from the example above, then the only way to get there is using the `next` field on the head node. You would get `art` first, then use the next field recursively until you eventually get the `cat` element.
 
+[[doubly-linked-list]]
 ==== Doubly Linked List
 
 When each node has a connection to the `next` item and also the `previous` one, then we have a *doubly linked list*.
@@ -123,7 +124,7 @@ image:images/image23.png[image,width=498,height=217]
 
 To insert at the beginning, we create a new node with the next reference to the current first node. Then we make first the new node. In code, it would look something like this:
 
-[#linked-list-inserting-beginning]
+[[linked-list-inserting-beginning]]
 
 .Add item to the beginning of a Linked List
 [source, javascript]
@@ -278,9 +279,9 @@ Use arrays when:
 
 Use a doubly linked list when:
 
-* You want to access elements in a *sequential* manner only like <<Stack>> or <<Queue>>.
+* You want to access elements in a *sequential* manner only like <<part02-linear-data-structures#stack>> or <<part02-linear-data-structures#queue>>.
 
 * You want to insert elements at the start and end of the list. The linked list has O(1) while array has O(n).
 * You want to save some memory when dealing with possibly large data sets. Arrays pre-allocate a large chunk of contiguous memory on initialization. Lists are more “grow as you go”.
 
-For the next two linear data structures <<Stack>> and <<Queue>>, we are going to use a doubly linked list to implement them. We could use an array as well, but since inserting/deleting from the start perform better on linked-list, we are going use that.
+For the next two linear data structures <<part02-linear-data-structures#stack>> and <<part02-linear-data-structures#queue>>, we are going to use a doubly linked list to implement them. We could use an array as well, but since inserting/deleting from the start perform better on linked-list, we are going use that.
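
The renamed `linked-list-inserting-beginning` anchor points at the O(1) head insertion the hunk describes: create a node that references the current first node, then make it the new head. A hypothetical object-literal sketch (the book uses a class-based list):

.Insert at the beginning of a linked list (hypothetical sketch)
[source, javascript]
----
function addFirst(list, value) {
  const newNode = { value, next: list.head }; // next points to the current first node
  list.head = newNode;                        // the new node becomes the head
  list.size += 1;
}

const list = { head: null, size: 0 };
addFirst(list, 'art');
addFirst(list, 'dog');
console.log(list.head.value, '->', list.head.next.value); // dog -> art
----
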

‎book-pro/content/part02/queue.asc

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[queue]]
 === Queue
 (((Queue)))
 (((Data Structures, Linear, Queue)))

‎book-pro/content/part02/stack.asc

Lines changed: 2 additions & 1 deletion
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[stack]]
 === Stack
 (((Stack)))
 (((Data Structures, Linear, Stack)))
@@ -83,4 +84,4 @@ Implementing the stack with an array and linked list would lead to the same time
 |===
 // end::table[]
 
-It's not very common to search for values on a stack (other Data Structures are better suited for this). Stacks especially useful for implementing <<Depth-First Search for Binary Tree, Depth-First Search>>.
+It's not very common to search for values on a stack (other Data Structures are better suited for this). Stacks especially useful for implementing <<part03-graph-data-structures#dfs-tree, Depth-First Search>>.

‎book-pro/content/part03/graph-search.asc

Lines changed: 2 additions & 2 deletions
@@ -76,9 +76,9 @@ DFS and BFS can implementation can be almost identical; the difference is the un
 include::{codedir}/data-structures/graphs/graph.js[tag=graphSearch,indent=0]
 ----
 
-Using an <<Stack>> (LIFO) for DFS will make use keep visiting the last node children while having a <<Queue>> (FIFO) will allow to visit adjacent nodes first and "queue" their children for later visiting.
+Using an <<part02-linear-data-structures#stack>> (LIFO) for DFS will make use keep visiting the last node children while having a <<part02-linear-data-structures#queue>> (FIFO) will allow to visit adjacent nodes first and "queue" their children for later visiting.
 
-TIP: you can also implement the DFS as a recursive function, similar to what we did in the <<Depth-First Search for Binary Tree, DFS for trees>>.
+TIP: you can also implement the DFS as a recursive function, similar to what we did in the <<part03-graph-data-structures#dfs-tree, DFS for trees>>.
 
 You might wonder what the difference between search algorithms in a tree and a graph is? Check out the next section.
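
The point this hunk cross-references — DFS and BFS differ only in the underlying container — can be sketched like this (hypothetical node shape with an `adjacents` array; the real `graph.js` has its own node class):

.Graph search: swap a stack for a queue (hypothetical sketch)
[source, javascript]
----
function* graphSearch(start, type = 'dfs') {
  const visited = new Set();
  const container = [start]; // stack (LIFO) for DFS, queue (FIFO) for BFS
  while (container.length) {
    const node = type === 'dfs' ? container.pop() : container.shift();
    if (visited.has(node)) continue;
    visited.add(node);
    yield node.value;
    container.push(...node.adjacents); // "queue" the children for later visiting
  }
}

const a = { value: 'a', adjacents: [] };
const b = { value: 'b', adjacents: [] };
a.adjacents.push(b);
console.log([...graphSearch(a, 'bfs')]); // [ 'a', 'b' ]
----
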

‎book-pro/content/part03/graph.asc

Lines changed: 2 additions & 1 deletion
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[graph]]
 === Graph
 (((Graph)))
 (((Data Structures, Non-Linear, Graph)))
@@ -199,7 +200,7 @@ include::{codedir}/data-structures/graphs/graph.js[tag=addVertex, indent=0]
 
 If the node doesn't exist, then we create the new node and add it to a `HashMap`.
 
-TIP: <<Map>> stores key/pair value very efficiently. Lookup is `O(1)`.
+TIP: <<part03-graph-data-structures#map>> stores key/pair value very efficiently. Lookup is `O(1)`.
 
 The `key` is the node's value, while the `value` is the newly created node.
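
The `addVertex` behavior the TIP refers to — create the node only if missing, backed by a map with O(1) lookup — in a hypothetical sketch on the built-in `Map`:

.addVertex backed by a map (hypothetical sketch)
[source, javascript]
----
const nodes = new Map(); // key: the node's value; value: the node itself

function addVertex(value) {
  if (nodes.has(value)) return nodes.get(value); // node exists: reuse it
  const vertex = { value, adjacents: [] };       // otherwise create it...
  nodes.set(value, vertex);                      // ...and add it to the map
  return vertex;
}

addVertex('a');
console.log(addVertex('a') === nodes.get('a')); // true — no duplicates
----
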

‎book-pro/content/part03/hashmap.asc

Lines changed: 2 additions & 1 deletion
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[hashmap]]
 ==== HashMap
 (((HashMap)))
 (((HashTable)))
@@ -239,7 +240,7 @@ include::{codedir}/data-structures/maps/hash-maps/hash-map.js[tag=getEntry, inde
 ----
 <1> Convert key to an array index.
 <2> If the bucket is empty create a new linked list
-<3> Use Linked list's <<Searching by value>> method to find value on the bucket.
+<3> Use Linked list's <<part02-linear-data-structures#array-search-by-value>> method to find value on the bucket.
 <4> Return `bucket` and `entry` if found.
 
 With the help of the `getEntry` method, we can do the `HashMap.get` and `HashMap.has` methods:
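
The four numbered callouts of `getEntry` map to very little code. A hypothetical sketch with a toy hash function and plain arrays standing in for the bucket linked lists:

.getEntry steps (hypothetical sketch)
[source, javascript]
----
const buckets = new Array(16);
const hashIndex = (key) => // <1> convert key to an array index (toy hash)
  [...String(key)].reduce((sum, ch) => sum + ch.codePointAt(0), 0) % buckets.length;

function getEntry(key) {
  const index = hashIndex(key);
  const bucket = buckets[index] || (buckets[index] = []); // <2> create bucket if empty
  const entry = bucket.find((e) => e.key === key);        // <3> search bucket by value
  return { bucket, entry };                               // <4> return bucket and entry
}

getEntry('cat').bucket.push({ key: 'cat', value: 1 }); // set
console.log(getEntry('cat').entry.value);              // get -> 1
----
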

‎book-pro/content/part03/map.asc

Lines changed: 3 additions & 3 deletions
@@ -2,7 +2,7 @@
 // :imagesdir: ../images
 // :codedir: ../../src
 // endif::[]
-
+[[map]]
 === Map
 (((Map)))
 (((Data Structures, Non-Linear, Map)))
@@ -23,7 +23,7 @@ In short, you set `key`/`value` pair and then you can get the `value` using the
 The attractive part of Maps is that they are very performant usually *O(1)* or *O(log n)* depending on the implementation. We can implement the maps using two different underlying data structures:
 
 * *HashMap*: it’s a map implementation using an *array* and a *hash function*. The job of the hash function is to convert the `key` into an index that maps to the `value`. Optimized HashMap can have an average runtime of *O(1)*.
-* *TreeMap*: it’s a map implementation that uses a self-balanced Binary Search Tree (like <<AVL Tree>>). The BST nodes store the key, and the value and nodes are sorted by key guaranteeing an *O(log n)* look up.
+* *TreeMap*: it’s a map implementation that uses a self-balanced Binary Search Tree (like <<c-avl-tree#>>). The BST nodes store the key, and the value and nodes are sorted by key guaranteeing an *O(log n)* look up.
 
 <<<
 include::hashmap.asc[]
@@ -41,7 +41,7 @@ include::treemap.asc[]
 
 .When to use a TreeMap vs. HashMap?
 * `HashMap` is more time-efficient. A `TreeMap` is more space-efficient.
-* `TreeMap` search complexity is *O(log n)*, while an optimized `HashMap` is *O(1)* on average. 
+* `TreeMap` search complexity is *O(log n)*, while an optimized `HashMap` is *O(1)* on average.
 * `HashMap`’s keys are in insertion order (or random depending in the implementation). `TreeMap`’s keys are always sorted.
 * `TreeMap` offers some statistical data for free such as: get minimum, get maximum, median, find ranges of keys. `HashMap` doesn’t.
 * `TreeMap` has a guarantee always an *O(log n)*, while `HashMap`s has an amortized time of *O(1)* but in the rare case of a rehash, it would take an *O(n)*.

‎book-pro/content/part03/set.asc

Lines changed: 4 additions & 1 deletion
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[set]]
 === Set
 (((Set)))
 (((Data Structures, Non-Linear, Set)))
@@ -37,7 +38,7 @@ TIP: A hint... it should perform all operations in *O(1)** or at most *O(log n)*
 
 If we use a `map`, we can accomplish this. However, maps use a key/value pair. If we only use the keys, we can avoid duplicates. Since in a `map` you can only have one key at a time.
 
-As you might remember from the <<Map>> chapter, there are two ways of implementing a `map` and both can be used to create a `set`. Let's explore the difference between the two implementations are.
+As you might remember from the <<part03-graph-data-structures#map>> chapter, there are two ways of implementing a `map` and both can be used to create a `set`. Let's explore the difference between the two implementations are.
 
 ==== HashSet vs TreeSet
 
@@ -50,6 +51,7 @@ We can implement a `map` using a *balanced BST* and using a *hash function*. If
 
 Let’s implement both!
 
+[[tree-set]]
 ==== TreeSet
 (((TreeSet)))
 (((Data Structures, Non-Linear, TreeSet)))
@@ -151,6 +153,7 @@ Check out our https://github.com/amejiarosario/dsa.js/blob/f69b744a1bddd3d99243c
 
 Let’s now, implement a `HashSet`.
 
+[[hashset]]
 ==== HashSet
 (((HashSet)))
 (((Data Structures, Non-Linear, HashSet)))
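
The "keys only" observation in the hunk above is the whole trick: a set is a map whose values are ignored. A hypothetical sketch on the built-in `Map` (the chapter's `TreeSet`/`HashSet` swap a tree-map or hash-map in underneath):

.A set built on a map (hypothetical sketch)
[source, javascript]
----
class MapSet {
  constructor() { this.map = new Map(); }
  add(value) { this.map.set(value, true); return this; } // same key overwrites: no dups
  has(value) { return this.map.has(value); }
  delete(value) { return this.map.delete(value); }
  get size() { return this.map.size; }
}

const set = new MapSet();
set.add(1).add(2).add(2); // the duplicate 2 is absorbed
console.log(set.size); // 2
----
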
book-pro/content/part03/time-complexity-graph-data-structures.asc

Lines changed: 20 additions & 0 deletions

@@ -0,0 +1,20 @@
+=== Summary
+
+In this section, we learned about Graphs applications, properties and how we can create them. We mention that you can represent a graph as a matrix or as a list of adjacencies. We went for implementing the later since it's more space efficient. We cover the basic graph operations like adding and removing nodes and edges. In the algorithms section, we are going to cover searching values in the graph.
+(((Tables, Non-Linear DS, BST/Maps/Sets Complexities)))
+
+// tag::table[]
+.Time and Space Complexity for Graph-based Data Structures
+|===
+.2+.^s| Data Structure 2+^s| Searching By .2+^.^s| Insert .2+^.^s| Delete .2+^.^s| Space Complexity
+^|_Index/Key_ ^|_Value_
+| <<part03-graph-data-structures#bst, BST (unbalanced)>> ^|- ^|O(n) ^|O(n) ^|O(n) ^|O(n)
+| <<b-self-balancing-binary-search-trees#, BST (balanced)>> ^|- ^|O(log n) ^|O(log n) ^|O(log n) ^|O(n)
+| Hash Map (naïve) ^|O(n) ^|O(n) ^|O(n) ^|O(n) ^|O(n)
+| <<part03-graph-data-structures#hashmap>> (optimized) ^|O(1)* ^|O(n) ^|O(1)* ^|O(1)* ^|O(1)*
+| <<part03-graph-data-structures#treemap>> (Red-Black Tree) ^|O(log n) ^|O(n) ^|O(log n) ^|O(log n) ^|O(log n)
+| <<part03-graph-data-structures#hashset>> ^|- ^|O(n) ^|O(1)* ^|O(1)* ^|O(1)*
+| <<part03-graph-data-structures#tree-set>> ^|- ^|O(n) ^|O(log n) ^|O(log n) ^|O(log n)
+|===
+{empty}* = Amortized run time. E.g. rehashing might affect run time to *O(n)*.
+// end::table[]

‎book-pro/content/part03/tree-intro.asc

Lines changed: 9 additions & 8 deletions
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[tree]]
 === Tree
 (((Tree)))
 (((Data Structures, Non-Linear, Tree)))
@@ -15,7 +16,7 @@ As you can see in the picture above, this data structure resembles an inverted t
 
 ==== Implementing a Tree
 
-Implementing a tree is not too hard. It’s similar to a <<Linked List>>. The main difference is that instead of having a `next` and `previous` links, we have an infinite number of linked nodes (children/descendants).
+Implementing a tree is not too hard. It’s similar to a <<part02-linear-data-structures#linked-list>>. The main difference is that instead of having a `next` and `previous` links, we have an infinite number of linked nodes (children/descendants).
 
 .Tree's node constructor
 [source, javascript]
@@ -26,9 +27,9 @@ include::{codedir}/data-structures/trees/tree-node.js[tag=snippet]
 Simple! Right? But there are some constraints that you have to keep at all times.
 
 .Tree data structures constraints
-1. *Loops*: You have to be careful *not* to make a circular loop. Otherwise, this wouldn’t be a tree anymore but a <<Graph, graph data structure>>! E.g., Node A has B as a child, then Node B list Node A as its descendant forming a loop. ‍️
-2. *Parents*: A node with more than two parents. Again, if that happens is no longer a tree but a <<Graph, graph>>.
-3. *Root*: a tree must have only one root. Two non-connected parts are not a tree. <<Graph>> can have non-connected portions and doesn’t have root.
+1. *Loops*: You have to be careful *not* to make a circular loop. Otherwise, this wouldn’t be a tree anymore but a <<part03-graph-data-structures#graph, graph data structure>>! E.g., Node A has B as a child, then Node B list Node A as its descendant forming a loop. ‍️
+2. *Parents*: A node with more than two parents. Again, if that happens is no longer a tree but a <<part03-graph-data-structures#graph>>.
+3. *Root*: a tree must have only one root. Two non-connected parts are not a tree. <<part03-graph-data-structures#graph>> can have non-connected portions and doesn’t have root.
 
 ==== Basic concepts
 
@@ -66,12 +67,12 @@ image:images/image32.png[image,width=321,height=193]
 Binary trees are one of the most used kinds of tree, and they are used to build other data structures.
 
 .Binary Tree Applications
-- <<Map>>
-- <<Set>>
+- <<part03-graph-data-structures#map>>
+- <<part03-graph-data-structures#set>>
 - Priority Queues
-- <<Binary Search Tree (BST)>>
-
+- <<part03-graph-data-structures#bst>>
 
+[[bst]]
 ===== Binary Search Tree (BST)
 (((Binary Search Tree)))
 (((Data Structures, Non-Linear, Binary Search Tree)))
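
The tree-node idea referenced in the hunks above — a linked-list node generalized to many children — fits in a few lines. A hypothetical sketch (the book's `tree-node.js` may carry more bookkeeping):

.Tree node with many children (hypothetical sketch)
[source, javascript]
----
class TreeNode {
  constructor(value) {
    this.value = value;
    this.descendants = []; // any number of linked child nodes, not just next/previous
  }
}

const root = new TreeNode(10);
root.descendants.push(new TreeNode(5), new TreeNode(30));
console.log(root.descendants.map((n) => n.value)); // [ 5, 30 ]
----
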

‎book-pro/content/part03/tree-search-traversal.asc

Lines changed: 5 additions & 5 deletions
@@ -29,7 +29,7 @@ Why do we care? Well, there are specific problems that you can solve more optima
 
 Let's cover the Breadth-first search (BFS) and Depth-first search (DFS).
 
-[Breadth First Search]
+[[bfs-tree]]
 ==== Breadth-First Search for Binary Tree
 (((BFS)))
 (((Breadth-First Search)))
@@ -43,7 +43,7 @@ Let's how can we implement it!
 include::{codedir}/data-structures/trees/binary-search-tree.js[tag=bfs,indent=0]
 ----
 
-As you see, the BFS uses a <<Queue>> data structure. We enqueue all the children of the current node and then dequeue them as we visit them.
+As you see, the BFS uses a <<part02-linear-data-structures#queue>> data structure. We enqueue all the children of the current node and then dequeue them as we visit them.
 
 Note the asterisk (`*`) in front of the function means that this function is a generator that yields values.
 (((JavaScript Notes, Generators)))
@@ -83,7 +83,7 @@ console.log(Array.from(dummyIdMaker())); // [0, 1, 2]
 
 ****
 
-
+[[dfs-tree]]
 ==== Depth-First Search for Binary Tree
 (((DFS)))
 (((Depth-First Search)))
@@ -96,8 +96,8 @@ Depth-First search goes deep (depth) before going wide. It means that starting f
 include::{codedir}/data-structures/trees/binary-search-tree.js[tag=dfs,indent=0]
 ----
 
-This is an iterative implementation of a DFS using an <<Stack>>.
-It's almost identical to the BFS, but instead of using a <<Queue>> we use a Stack.
+This is an iterative implementation of a DFS using an <<part02-linear-data-structures#stack>>.
+It's almost identical to the BFS, but instead of using a <<part02-linear-data-structures#queue>> we use a Stack.
 We can also implement it as recursive functions are we are going to see in the <<Binary Tree Traversal>> section.
 
 ==== Depth-First Search vs. Breadth-First Search
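
The combination the hunks above describe — BFS driven by a queue, wrapped in a generator (note the `*`) — in a hypothetical sketch with an array standing in for the Queue class:

.BFS as a generator (hypothetical sketch)
[source, javascript]
----
function* bfs(root) {
  const queue = [root];
  while (queue.length) {
    const node = queue.shift();             // dequeue the current node...
    yield node.value;                       // ...yield it lazily...
    if (node.left) queue.push(node.left);   // ...and enqueue its children
    if (node.right) queue.push(node.right);
  }
}

const tree = { value: 10, left: { value: 5 }, right: { value: 30 } };
console.log(Array.from(bfs(tree))); // [ 10, 5, 30 ]
----
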

‎book-pro/content/part03/treemap.asc

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 // :codedir: ../../src
 // endif::[]
 
+[[treemap]]
 ==== TreeMap
 (((TreeMap)))
 (((Data Structures, Non-Linear, TreeMap)))

‎book-pro/content/part04/algorithmic-toolbox.asc

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-[[_algorithms_toolbox]]
+[[algorithms-toolbox]]
 === Algorithmic Toolbox
 
 Have you ever given a programming problem and freeze without knowing where to start?
@@ -18,7 +18,7 @@ TIP: TL;DR: Don't start coding right away. First, solve the problem, then write
 .. If anything else fails, how would you solve it the dumbest way possible (brute force). We can optimize it later.
 . *Test* your algorithm idea with multiple examples
 . *Optimize* the solution –Only optimize when you have something working don't try to do both at the same time!
-.. Can you trade-off space for speed? Use a <<HashMap>> to speed up results!
+.. Can you trade-off space for speed? Use a <<part03-graph-data-structures#hashmap>> to speed up results!
 .. Do you have a bunch of recursive and overlapping problems? Try <<Dynamic Programming>>.
 .. Re-read requirements and see if you can take advantage of anything. E.g. is the array sorted?
 . *Write Code*, yes, now you can code.
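
The "trade space for speed" step is worth one concrete illustration (hypothetical, not from the book): finding a pair that adds up to a target drops from O(n^2^) to O(n) once a map remembers what we have already seen:

.Space-for-speed with a map (hypothetical example)
[source, javascript]
----
function twoSum(numbers, target) {
  const seen = new Map(); // value -> index; the O(n) space we trade away
  for (let i = 0; i < numbers.length; i++) {
    const complement = target - numbers[i];
    if (seen.has(complement)) return [seen.get(complement), i]; // O(1) lookup
    seen.set(numbers[i], i);
  }
  return [];
}

console.log(twoSum([2, 7, 11, 15], 9)); // [ 0, 1 ]
----
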

‎book-pro/content/part04/backtracking.asc

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ it stops and steps back (backtracks) to try another alternative.
 Some examples that use backtracking is a solving Sudoku/crosswords puzzle, and graph operations.
 
 ifndef::backend-pdf[]
-image:Sudoku_solved_by_bactracking.gif[]
+image:images/Sudoku_solved_by_bactracking.gif[]
 endif::backend-pdf[]
 
 Listing all possible solutions might sound like a brute force.
@@ -58,7 +58,7 @@ For instace, if you are given the word `art` these are the possible permutations
 
 Now, let's implement the program to generate all permutations of a word.
 
-NOTE: We already solved this problem using an <<Getting all permutations of a word, iterative program>>, now let's do it using backtracking.
+NOTE: We already solved this problem using an <<part01-algorithms-analysis#factorial-example, iterative program>>, now let's do it using backtracking.
 
 .Word permutations using backtracking
 [source, javascript]
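
The listing that the last context line introduces is not part of this diff. For reference, a hypothetical sketch of word permutations via backtracking — choose, explore, then un-choose (backtrack):

.Word permutations by backtracking (hypothetical sketch)
[source, javascript]
----
function permutations(word, start = 0, chars = [...word], result = []) {
  if (start === chars.length - 1) {
    result.push(chars.join(''));
  } else {
    for (let i = start; i < chars.length; i++) {
      [chars[start], chars[i]] = [chars[i], chars[start]]; // choose
      permutations(word, start + 1, chars, result);        // explore
      [chars[start], chars[i]] = [chars[i], chars[start]]; // un-choose (backtrack)
    }
  }
  return result;
}

console.log(permutations('art')); // [ 'art', 'atr', 'rat', 'rta', 'tra', 'tar' ]
----
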

‎book-pro/content/part04/bubble-sort.asc

Lines changed: 3 additions & 3 deletions
@@ -66,15 +66,15 @@ console.log(b); //️↪️ 1
 Without the destructuring assignment, swapping two values requires a temporary variable.
 ****
 
-Bubble sort has a <<Quadratic>> running time, as you might infer from the nested for-loop.
+Bubble sort has a <<part01-algorithms-analysis#quadratic>> running time, as you might infer from the nested for-loop.
 
 ===== Bubble Sort Properties
 
 - <<Stable>>: [big]#✅# Yes
 - <<In-place>>: [big]#✅# Yes
 - <<Online>>: [big]#✅# Yes
 - <<Adaptive>>: [big]#✅# Yes, _O(n)_ when already sorted
-- Time Complexity: [big]#⛔️# <<Quadratic>> _O(n^2^)_
-- Space Complexity: [big]#✅# <<Constant>> _O(1)_
+- Time Complexity: [big]#⛔️# <<part01-algorithms-analysis#quadratic>> _O(n^2^)_
+- Space Complexity: [big]#✅# <<part01-algorithms-analysis#constant>> _O(1)_
 
 indexterm:[Runtime, Quadratic]
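
The destructuring swap the hunk context refers to, next to the temporary-variable version it replaces:

.Swapping values with and without destructuring
[source, javascript]
----
// Pre-ES6: a temporary variable is required.
let a = 1;
let b = 2;
const temp = a;
a = b;
b = temp;
console.log(a, b); // 2 1

// ES6 destructuring assignment: no temporary needed.
[a, b] = [b, a];
console.log(a, b); // 1 2
----
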

‎book-pro/content/part04/divide-and-conquer.asc

Lines changed: 4 additions & 4 deletions
@@ -8,10 +8,10 @@ It splits the input into manageable parts recursively and finally joins solved p
 We have already implemented some algorithms using the divide and conquer technique.
 
 .Examples of divide and conquer algorithms:
-- <<Merge Sort>>: *divides* the input into pairs, sort them, and them *join* all the pieces in ascending order.
-- <<Quicksort>>: *splits* the data by a random number called "pivot", then move everything smaller than the pivot to the left and anything more significant to the right. Repeat the process on the left and right side. Note: since this works in place doesn't need a "join" part.
-- <<logarithmic-example, Binary Search>>: find a value in a sorted collection by *splitting* the data in half until it sees the value.
-- <<Getting all permutations of a word, Permutations>>: *Take out* the first element from the input and solve permutation for the remainder of the data recursively, then *join* results and append the items that were taken out.
+- <<part04-algorithmic-toolbox#merge-sort>>: *divides* the input into pairs, sort them, and them *join* all the pieces in ascending order.
+- <<part04-algorithmic-toolbox#quicksort>>: *splits* the data by a random number called "pivot", then move everything smaller than the pivot to the left and anything more significant to the right. Repeat the process on the left and right side. Note: since this works in place doesn't need a "join" part.
+- <<part01-algorithms-analysis#logarithmic-example, Binary Search>>: find a value in a sorted collection by *splitting* the data in half until it sees the value.
+- <<part01-algorithms-analysis#factorial-example, Permutations>>: *Take out* the first element from the input and solve permutation for the remainder of the data recursively, then *join* results and append the items that were taken out.
 
 .In general, the divide and conquer algorithms have the following pattern:
 1. *Divide* data into subproblems.

‎book-pro/content/part04/greedy-algorithms.asc

Lines changed: 2 additions & 2 deletions
@@ -34,8 +34,8 @@ This algorithm only gives one shot at finding the solution and never goes back t
 Don't get the wrong idea; some greedy algorithms work very well if they are designed correctly.
 
 .Some examples of greedy algorithms that works well:
-- <<Selection Sort>>: we select the best (minimum value) remove it from the input and then select the next minimum until everything is processed.
-- <<Merge Sort>>: the "merge" uses a greedy algorithm, where it combines two sorted arrays by looking at their current values and choosing the best (minimum) at every time.
+- <<part04-algorithmic-toolbox#selection-sort>>: we select the best (minimum value) remove it from the input and then select the next minimum until everything is processed.
+- <<part04-algorithmic-toolbox#merge-sort>>: the "merge" uses a greedy algorithm, where it combines two sorted arrays by looking at their current values and choosing the best (minimum) at every time.
 indexterm:[Merge Sort]
 

‎book-pro/content/part04/insertion-sort.asc

Lines changed: 3 additions & 2 deletions
@@ -1,3 +1,4 @@
+[[insertion-sort]]
 ==== Insertion Sort
 
 (((Sorting, Insertion Sort)))
@@ -29,8 +30,8 @@ include::{codedir}/algorithms/sorting/insertion-sort.js[tag=sort, indent=0]
 - <<In-place>>: [big]#✅# Yes
 - <<Online>>: [big]#✅# Yes
 - <<Adaptive>>: [big]#✅# Yes
-- Time Complexity: [big]#⛔️# <<Quadratic>> _O(n^2^)_
-- Space Complexity: [big]#✅# <<Constant>> _O(1)_
+- Time Complexity: [big]#⛔️# <<part01-algorithms-analysis#quadratic>> _O(n^2^)_
+- Space Complexity: [big]#✅# <<part01-algorithms-analysis#constant>> _O(1)_
 
 (((Quadratic)))
 (((Runtime, Quadratic)))

‎book-pro/content/part04/merge-sort.asc

Lines changed: 3 additions & 3 deletions
@@ -42,7 +42,7 @@ include::{codedir}/algorithms/sorting/merge-sort.js[tag=merge, indent=0]
 <2> If `array1` current element (`i1`) has the lowest value, we insert it into the `mergedArray` if not we then insert `array2` element.
 <3> `mergedArray` is `array1` and `array2` combined in ascending order (sorted).
 
-Merge sort has an _O(n log n)_ running time. For more details about how to extract the runtime go to <<Linearithmic>> section.
+Merge sort has an _O(n log n)_ running time. For more details about how to extract the runtime go to <<part01-algorithms-analysis#linearithmic>> section.
 
 ===== Merge Sort Properties
 
@@ -51,8 +51,8 @@ Merge sort has an _O(n log n)_ running time. For more details about how to extr
 - <<Online>>: [big]#️❌# No, new elements will require to sort the whole array.
 - <<Adaptive>>: [big]#️❌# No, mostly sorted array takes the same time O(n log n).
 - Recursive: Yes
-- Time Complexity: [big]#✅# <<Linearithmic>> _O(n log n)_
-- Space Complexity: [big]#⚠️# <<Linear>> _O(n)_, use auxiliary memory
+- Time Complexity: [big]#✅# <<part01-algorithms-analysis#linearithmic>> _O(n log n)_
+- Space Complexity: [big]#⚠️# <<part01-algorithms-analysis#linear>> _O(n)_, use auxiliary memory
 
 (((Linearithmic)))
 (((Runtime, Linearithmic)))
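
The merge step annotated by callouts <1>–<3> above, as a hypothetical sketch assuming two sorted arrays of numbers (the book's `merge-sort.js` is the canonical version):

.Merging two sorted arrays (hypothetical sketch)
[source, javascript]
----
function merge(array1, array2) {
  const mergedArray = [];
  let i1 = 0;
  let i2 = 0;
  while (i1 < array1.length || i2 < array2.length) {
    // Take from array1 when its current element (i1) is the lowest or array2 ran out.
    if (i2 >= array2.length || (i1 < array1.length && array1[i1] <= array2[i2])) {
      mergedArray.push(array1[i1++]);
    } else {
      mergedArray.push(array2[i2++]);
    }
  }
  return mergedArray; // array1 and array2 combined in ascending order
}

console.log(merge([2, 5, 9], [1, 6, 7])); // [ 1, 2, 5, 6, 7, 9 ]
----
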

‎book-pro/content/part04/quick-sort.asc

Lines changed: 5 additions & 4 deletions
@@ -3,13 +3,14 @@
 // :codedir: ../../src
 // endif::[]
 
+[[quicksort]]
 ==== Quicksort
 (((Sorting, QuickSort)))
 (((QuickSort)))
 Quicksort is an efficient recursive sorting algorithm that uses <<Divide and Conquer, divide and conquer>> paradigm to sort faster. It can be implemented in-place, so it doesn't require additional memory.
 
 indexterm:[Divide and Conquer]
-In practice, quicksort outperforms other sorting algorithms like <<Merge Sort>>. And, of course, It also outperforms simple sorting algorithms like <<Selection Sort>>, <<Insertion Sort>> and <<Bubble Sort>>.
+In practice, quicksort outperforms other sorting algorithms like <<part04-algorithmic-toolbox#merge-sort>>. And, of course, It also outperforms simple sorting algorithms like <<part04-algorithmic-toolbox#selection-sort>>, <<part04-algorithmic-toolbox#insertion-sort>> and <<part04-algorithmic-toolbox#insertion-sort>>.
 
 Quicksort picks a "pivot" element (preferably random) and move all the parts that are smaller than the pivot to the right and the ones that are bigger to the left. It does this recursively until all the array is sorted.
 
@@ -72,7 +73,7 @@ And you can see the implementation of `shuffle` below:
 include::{codedir}/algorithms/sorting/sorting-common.js[tag=shuffle, indent=0]
 ----
 
-With the optimization, Quicksort has an _O(n log n)_ running time. Similar to the merge sort we divide the array into halves each time. For more details about how to extract the runtime go to <<Linearithmic>>.
+With the optimization, Quicksort has an _O(n log n)_ running time. Similar to the merge sort we divide the array into halves each time. For more details about how to extract the runtime go to <<part01-algorithms-analysis#linearithmic>>.
 
 ===== Quicksort Properties
 
@@ -81,8 +82,8 @@ With the optimization, Quicksort has an _O(n log n)_ running time. Similar to th
 - <<Adaptive>>: [big]#️❌# No, mostly sorted array takes the same time O(n log n).
 - <<Online>>: [big]#️❌# No, the pivot element can be choose at random.
 - Recursive: Yes
-- Time Complexity: [big]#✅# <<Linearithmic>> _O(n log n)_
-- Space Complexity: [big]#✅# <<Constant>> _O(1)_
+- Time Complexity: [big]#✅# <<part01-algorithms-analysis#linearithmic>> _O(n log n)_
+- Space Complexity: [big]#✅# <<part01-algorithms-analysis#constant>> _O(1)_
 
 (((Linearithmic)))
 (((Runtime, Linearithmic)))
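
A hypothetical, deliberately non-in-place sketch of the pivot idea (smaller values on one side, bigger on the other, recurse on both sides). The optimized version the hunks describe partitions in place on a shuffled array:

.Quicksort pivot idea (hypothetical sketch)
[source, javascript]
----
function quicksort(array) {
  if (array.length <= 1) return array;
  const [pivot, ...rest] = array;                // pick a pivot (here: the first element)
  const smaller = rest.filter((x) => x < pivot); // everything smaller than the pivot
  const bigger = rest.filter((x) => x >= pivot); // everything bigger or equal
  return [...quicksort(smaller), pivot, ...quicksort(bigger)];
}

console.log(quicksort([5, 3, 7, 1, 9])); // [ 1, 3, 5, 7, 9 ]
----
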

‎book-pro/content/part04/selection-sort.asc

Lines changed: 3 additions & 3 deletions
@@ -37,8 +37,8 @@ One index is for the position in question (selection/left) and another one for f
 - <<Stable>>: [big]#️️❌# No
 - <<Adaptive>>: [big]#️️❌# No
 - <<Online>>: [big]#️️❌# No
-- Time Complexity: [big]#⛔️# <<Quadratic>> _O(n^2^)_
-- Space Complexity: [big]#✅# <<Constant>> _O(1)_
+- Time Complexity: [big]#⛔️# <<part01-algorithms-analysis#quadratic>> _O(n^2^)_
+- Space Complexity: [big]#✅# <<part01-algorithms-analysis#constant>> _O(1)_
 
 *Why selection sort is not stable?*
 
@@ -50,7 +50,7 @@ Initially, we select the first element `2a` and check if there's anything less t
 Now, we have: `1, 5, 2b, 2a`.
 There you have it, `2b` now comes before `2a`.
 
-// CAUTION: In practice, selection sort performance is the worst compared <<Bubble Sort>> and <<Insertion Sort>>. The only advantage of selection sort is that it minimizes the number of swaps. In case, that swapping is expensive, then it could make sense to use this one over the others.
+// CAUTION: In practice, selection sort performance is the worst compared <<part04-algorithmic-toolbox#insertion-sort>> and <<part04-algorithmic-toolbox#insertion-sort>>. The only advantage of selection sort is that it minimizes the number of swaps. In case, that swapping is expensive, then it could make sense to use this one over the others.
 
 (((Quadratic)))
 (((Runtime, Quadratic)))

‎book-pro/content/part04/sorting-algorithms.asc

Lines changed: 15 additions & 15 deletions
@@ -4,13 +4,13 @@ Sorting is one of the most common solutions when we want to extract some insight
 We can sort to get the maximum or minimum value and many algorithmic problems involves sorting data first.
 
 .We are going to explore three basic sorting algorithms _O(n^2^)_ which have low overhead:
-- <<Bubble Sort>>
-- <<Selection Sort>>
-- <<Insertion Sort>>
+- <<part04-algorithmic-toolbox#insertion-sort>>
+- <<part04-algorithmic-toolbox#selection-sort>>
+- <<part04-algorithmic-toolbox#insertion-sort>>
 
 .and then discuss efficient sorting algorithms _O(n log n)_ such as:
-- <<Merge Sort>>
-- <<Quicksort>>
+- <<part04-algorithmic-toolbox#merge-sort>>
+- <<part04-algorithmic-toolbox#quicksort>>
 
 Before we dive into the most well-known sorting algorithms, let's discuss the sorting properties.
 
@@ -115,22 +115,22 @@ We explored many algorithms some of them simple and other more performant. Also,
 [cols="20,80"]
 |===
 | Algorithms | Comments
-| <<Bubble Sort>> | Swap pairs bubbling up largest numbers to the right
-| <<Insertion Sort>> | Look for biggest number to the left and swap it with current
-| <<Selection Sort>> | Iterate array looking for smallest value to the right
-| <<Merge Sort>> | Split numbers in pairs, sort pairs and join them in ascending order
-| <<Quicksort>> | Choose a pivot, set smaller values to the left and bigger to the right.
+| <<part04-algorithmic-toolbox#insertion-sort>> | Swap pairs bubbling up largest numbers to the right
+| <<part04-algorithmic-toolbox#insertion-sort>> | Look for biggest number to the left and swap it with current
+| <<part04-algorithmic-toolbox#selection-sort>> | Iterate array looking for smallest value to the right
+| <<part04-algorithmic-toolbox#merge-sort>> | Split numbers in pairs, sort pairs and join them in ascending order
+| <<part04-algorithmic-toolbox#quicksort>> | Choose a pivot, set smaller values to the left and bigger to the right.
 // | Tim sort | Hybrid of merge sort and insertion sort
 |===
 
 .Sorting algorithms time/space complexity and properties
 |===
 | Algorithms | Avg | Best | Worst | Space | Stable | In-place | Online | Adaptive
-| <<Bubble Sort>> | O(n^2^) | O(n) | O(n^2^) | O(1) | Yes | Yes | Yes | Yes
-| <<Insertion Sort>> | O(n^2^) | O(n) | O(n^2^) | O(1) | Yes | Yes | Yes | Yes
-| <<Selection Sort>> | O(n^2^) | O(n^2^) | O(n^2^) | O(1) | No | Yes | No | No
-| <<Merge Sort>> | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | No | No | No
-| <<Quicksort>> | O(n log n) | O(n^2^) | O(n log n) | O(log n) | Yes | Yes | No | No
+| <<part04-algorithmic-toolbox#insertion-sort>> | O(n^2^) | O(n) | O(n^2^) | O(1) | Yes | Yes | Yes | Yes
+| <<part04-algorithmic-toolbox#insertion-sort>> | O(n^2^) | O(n) | O(n^2^) | O(1) | Yes | Yes | Yes | Yes
+| <<part04-algorithmic-toolbox#selection-sort>> | O(n^2^) | O(n^2^) | O(n^2^) | O(1) | No | Yes | No | No
+| <<part04-algorithmic-toolbox#merge-sort>> | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | No | No | No
+| <<part04-algorithmic-toolbox#quicksort>> | O(n log n) | O(n^2^) | O(n log n) | O(log n) | Yes | Yes | No | No
 // | Tim sort | O(n log n) | O(log n) | Yes | No | No | Yes
 |===
 // end::table[]

‎book-pro/part02-linear-data-structures.asc

Lines changed: 7 additions & 39 deletions
@@ -6,12 +6,12 @@ Data Structures comes in many flavors. There’s no one to rule them all. You ha
 Even though in your day-to-day, you might not need to re-implementing them, knowing how they work internally would help you how when to use over the other or even tweak them to create a new one. We are going to explore the most common data structures time and space complexity.
 
 .In this part we are going to learn about the following linear data structures:
-- <<Array>>
-- <<Linked List>>
-- <<Stack>>
-- <<Queue>>
+- <<part02-linear-data-structures#array>>
+- <<part02-linear-data-structures#linked-list>>
+- <<part02-linear-data-structures#stack>>
+- <<part02-linear-data-structures#queue>>
 
-Later, in the next part, we are going to explore non-linear data structures like <<Graph, Graphs>> and <<Tree, Trees>>.
+Later, in the next part, we are going to explore non-linear data structures like <<part03-graph-data-structures#graph, Graphs>> and <<part03-graph-data-structures#tree, Trees>>.
 
 ifdef::backend-html5[]
 If you want to have a general overview of each one, take a look at the following interactive diagram:
@@ -33,39 +33,7 @@ include::content/part02/stack.asc[]
 <<<
 include::content/part02/queue.asc[]
 
+<<<
+include::content/part02/array-vs-list-vs-queue-vs-stack.asc[]
 
-=== Array vs. Linked List & Queue vs. Stack
-
-In this part of the book, we explored the most used linear data structures such as Arrays, Linked Lists, Stacks and Queues. We implemented them and discussed the runtime of their operations.
-
-.Use Arrays when…
-* You need to access data in random order fast (using an index).
-* Your data is multi-dimensional (e.g., matrix, tensor).
-
-.Use Linked Lists when:
-* You will access your data sequentially.
-* You want to save memory and only allocate memory as you need it.
-* You want constant time to remove/add from extremes of the list.
-
-.Use a Queue when:
-* You need to access your data in a first-come, first served basis (FIFO).
-* You need to implement a <<Breadth-First Search for Binary Tree, Breadth-First Search>>
-
-.Use a Stack when:
-* You need to access your data as last-in, first-out (LIFO).
-* You need to implement a <<Depth-First Search for Binary Tree, Depth-First Search>>
-(((Tables, Linear DS, Array/Lists/Stack/Queue complexities)))
 
-// tag::table[]
-.Time/Space Complexity of Linear Data Structures (Array, LinkedList, Stack & Queues)
-|===
-.2+.^s| Data Structure 2+^s| Searching By 3+^s| Inserting at the 3+^s| Deleting from .2+.^s| Space
-^|_Index/Key_ ^|_Value_ ^|_beginning_ ^|_middle_ ^|_end_ ^|_beginning_ ^|_middle_ ^|_end_
-| <<Array>> ^|O(1) ^|O(n) ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(n) ^|O(1) ^|O(n)
-| <<Singly Linked List>> ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(n)* ^|O(n)
-| <<Doubly Linked List>> ^|O(n) ^|O(n) ^|O(1) ^|O(n) ^|O(1) ^|O(1) ^|O(n) ^|*O(1)* ^|O(n)
-| <<Stack>> ^|- ^|- ^|- ^|- ^|O(1) ^|- ^|- ^|O(1) ^|O(n)
-| Queue (w/array) ^|- ^|- ^|- ^|- ^|*O(n)* ^|- ^|- ^|O(1) ^|O(n)
-| <<Queue>> (w/list) ^|- ^|- ^|- ^|- ^|O(1) ^|- ^|- ^|O(1) ^|O(n)
-|===
-// end::table[]

‎book-pro/part03-graph-data-structures.asc

Lines changed: 6 additions & 24 deletions
@@ -4,10 +4,10 @@
 Graph-based data structures are everywhere whether you realize it or not. You can find them in databases, Web (HTML DOM tree), search algorithms, finding the best route to get home and many more uses. We are going to learn the basic concepts and when to choose one over the other.
 
 .In this chapter we are going to learn:
-- Exciting <<Graph>> data structure applications
-- Searching efficiently with a <<Tree>> data structures.
-- One of the most versatile data structure of all <<HashMap>>.
-- Keeping dups out with a <<Set>>.
+- Exciting <<part03-graph-data-structures#graph>> data structure applications
+- Searching efficiently with a <<part03-graph-data-structures#tree>> data structures.
+- One of the most versatile data structure of all <<part03-graph-data-structures#hashmap>>.
+- Keeping dups out with a <<part03-graph-data-structures#set>>.
 By the end of this section, you will know the data structures trade-offs and when to use one over the other.
 
 include::content/part03/tree-intro.asc[]
@@ -33,24 +33,6 @@ include::content/part03/graph.asc[]
 <<<
 include::content/part03/graph-search.asc[]
 
+<<<
+include::content/part03/time-complexity-graph-data-structures.asc[]
 
-=== Summary
-
-In this section, we learned about Graphs applications, properties and how we can create them. We mention that you can represent a graph as a matrix or as a list of adjacencies. We went for implementing the later since it's more space efficient. We cover the basic graph operations like adding and removing nodes and edges. In the algorithms section, we are going to cover searching values in the graph.
-(((Tables, Non-Linear DS, BST/Maps/Sets Complexities)))
-
-// tag::table[]
-.Time and Space Complexity for Non-Linear Data Structures
-|===
-.2+.^s| Data Structure 2+^s| Searching By .2+^.^s| Insert .2+^.^s| Delete .2+^.^s| Space Complexity
-^|_Index/Key_ ^|_Value_
-| <<Binary Search Tree, BST (unbalanced)>> ^|- ^|O(n) ^|O(n) ^|O(n) ^|O(n)
-| <<Self-balancing Binary Search Trees, BST (balanced)>> ^|- ^|O(log n) ^|O(log n) ^|O(log n) ^|O(n)
-| Hash Map (naïve) ^|O(n) ^|O(n) ^|O(n) ^|O(n) ^|O(n)
-| <<HashMap>> (optimized) ^|O(1)* ^|O(n) ^|O(1)* ^|O(1)* ^|O(1)*
-| <<TreeMap>> (Red-Black Tree) ^|O(log n) ^|O(n) ^|O(log n) ^|O(log n) ^|O(log n)
-| <<HashSet>> ^|- ^|O(n) ^|O(1)* ^|O(1)* ^|O(1)*
-| <<TreeSet>> ^|- ^|O(n) ^|O(log n) ^|O(log n) ^|O(log n)
-|===
-{empty}* = Amortized run time. E.g. rehashing might affect run time to *O(n)*.
-// end::table[]

‎book-pro/part04-algorithmic-toolbox.asc

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ IMPORTANT: There's not a single approach to solve all problems but knowing well-
 
 We are going to start with <<Sorting Algorithms>>
 // and searching algorithms,
-such as <<Insertion Sort>>, <<Merge Sort>> and some others.
+such as <<part04-algorithmic-toolbox#insertion-sort>>, <<part04-algorithmic-toolbox#merge-sort>> and some others.
 Later, you are going to learn some algorithmic paradigms that will help you to identify common patterns and solve problems from different angles.