Commit 099e149

Committed Jun 22, 2019
dsa links working!!

4 files changed: +200 additions, -195 deletions

book-pro/ch02-git-basics-chapter.asc
Lines changed: 4 additions & 0 deletions

@@ -17,6 +17,10 @@ We'll also show you how to set up Git to ignore certain files and file patterns,
 - Section2: <<ch01-getting-started#another-level>>
 - Section3: <<ch01-getting-started#links2>>
 
+.Links to DSA
+- Chapter: <<part01#>>
+- Section: <<part01#constant-example>>
+
 In <<ch01-getting-started#_first_time>> we used it to specify our name, email address and editor preference before we even got started using Git.
 
 [source,console]
book-pro/content/part01/big-o-examples.asc
Lines changed: 192 additions & 192 deletions
@@ -31,261 +31,261 @@ Represented as *O(1)*, it means that regardless of the input size the number of
 
 Let's implement a function that finds out if an array is empty or not.
 
-//.is-empty.js
-//image:images/image6.png[image,width=528,height=401]
+// //.is-empty.js
+// //image:images/image6.png[image,width=528,height=401]
 
-[source, javascript]
-----
-include::{codedir}/runtimes/01-is-empty.js[tag=isEmpty]
-----
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/01-is-empty.js[tag=isEmpty]
+// ----
 
-Another more real life example is adding an element to the begining of a <<Linked List>>. You can check out the implementation <<linked-list-inserting-beginning, here>>.
+// Another more real life example is adding an element to the begining of a <<Linked List>>. You can check out the implementation <<linked-list-inserting-beginning, here>>.
 
-As you can see, in both examples (array and linked list) if the input is a collection of 10 elements or 10M it would take the same amount of time to execute. You can't get any more performance than this!
+// As you can see, in both examples (array and linked list) if the input is a collection of 10 elements or 10M it would take the same amount of time to execute. You can't get any more performance than this!
 
-==== Logarithmic
-(((Logarithmic)))
-(((Runtime, Logarithmic)))
-Represented in Big O notation as *O(log n)*, when an algorithm has this running time it means that as the size of the input grows the number of operations grows very slowly. Logarithmic algorithms are very scalable. One example is the *binary search*.
-indexterm:[Runtime, Logarithmic]
+// ==== Logarithmic
+// (((Logarithmic)))
+// (((Runtime, Logarithmic)))
+// Represented in Big O notation as *O(log n)*, when an algorithm has this running time it means that as the size of the input grows the number of operations grows very slowly. Logarithmic algorithms are very scalable. One example is the *binary search*.
+// indexterm:[Runtime, Logarithmic]
 
-[#logarithmic-example]
-===== Searching on a sorted array
+// [#logarithmic-example]
+// ===== Searching on a sorted array
 
-The binary search only works for sorted lists. It starts searching for an element on the middle of the array and then it moves to the right or left depending if the value you are looking for is bigger or smaller.
+// The binary search only works for sorted lists. It starts searching for an element on the middle of the array and then it moves to the right or left depending if the value you are looking for is bigger or smaller.
 
-// image:images/image7.png[image,width=528,height=437]
+// // image:images/image7.png[image,width=528,height=437]
 
-[source, javascript]
-----
-include::{codedir}/runtimes/02-binary-search.js[tag=binarySearchRecursive]
-----
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/02-binary-search.js[tag=binarySearchRecursive]
+// ----
 
-This binary search implementation is a recursive algorithm, which means that the function `binarySearch` calls itself multiple times until the solution is found. The binary search split the array in half every time.
+// This binary search implementation is a recursive algorithm, which means that the function `binarySearch` calls itself multiple times until the solution is found. The binary search split the array in half every time.
 
-Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call you are most likely in front of a logarithmic runtime: _O(log n)_.
+// Finding the runtime of recursive algorithms is not very obvious sometimes. It requires some tools like recursion trees or the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Theorem]. The `binarySearch` divides the input in half each time. As a rule of thumb, when you have an algorithm that divides the data in half on each call you are most likely in front of a logarithmic runtime: _O(log n)_.
 
-[[linear]]
-==== Linear
-(((Linear)))
-(((Runtime, Linear)))
-Linear algorithms are one of the most common runtimes. It’s represented as *O(n)*. Usually, an algorithm has a linear running time when it iterates over all the elements in the input.
+// [[linear]]
+// ==== Linear
+// (((Linear)))
+// (((Runtime, Linear)))
+// Linear algorithms are one of the most common runtimes. It’s represented as *O(n)*. Usually, an algorithm has a linear running time when it iterates over all the elements in the input.
 
-[#linear-example]
-===== Finding duplicates in an array using a map
+// [#linear-example]
+// ===== Finding duplicates in an array using a map
 
-Let’s say that we want to find duplicate elements in an array. What’s the first implementation that comes to mind? Check out this implementation:
+// Let’s say that we want to find duplicate elements in an array. What’s the first implementation that comes to mind? Check out this implementation:
 
-// image:images/image8.png[image,width=528,height=383]
+// // image:images/image8.png[image,width=528,height=383]
 
-[source, javascript]
-----
-include::{codedir}/runtimes/03-has-duplicates.js[tag=hasDuplicates]
-----
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/03-has-duplicates.js[tag=hasDuplicates]
+// ----
 
-.`hasDuplicates` has multiple scenarios:
-* *Best-case scenario*: first two elements are duplicates. It only has to visit two elements.
-* *Worst-case scenario*: no duplicated or duplicated are the last two. In either case, it has to visit every item on the array.
-* *Average-case scenario*: duplicates are somewhere in the middle of the collection. Only, half of the array will be visited.
+// .`hasDuplicates` has multiple scenarios:
+// * *Best-case scenario*: first two elements are duplicates. It only has to visit two elements.
+// * *Worst-case scenario*: no duplicated or duplicated are the last two. In either case, it has to visit every item on the array.
+// * *Average-case scenario*: duplicates are somewhere in the middle of the collection. Only, half of the array will be visited.
 
-As we learned before, the big O cares about the worst-case scenario, where we would have to visit every element on the array. So, we have an *O(n)* runtime.
+// As we learned before, the big O cares about the worst-case scenario, where we would have to visit every element on the array. So, we have an *O(n)* runtime.
 
-Space complexity is also *O(n)* since we are using an auxiliary data structure. We have a map that in the worst case (no duplicates) it will hold every word.
+// Space complexity is also *O(n)* since we are using an auxiliary data structure. We have a map that in the worst case (no duplicates) it will hold every word.
 
-==== Linearithmic
-(((Linearithmic)))
-(((Runtime, Linearithmic)))
-An algorithm with a linearithmic runtime is represented as _O(n log n)_. This one is important because it is the best runtime for sorting! Let’s see the merge-sort.
+// ==== Linearithmic
+// (((Linearithmic)))
+// (((Runtime, Linearithmic)))
+// An algorithm with a linearithmic runtime is represented as _O(n log n)_. This one is important because it is the best runtime for sorting! Let’s see the merge-sort.
 
-[#linearithmic-example]
-===== Sorting elements in an array
+// [#linearithmic-example]
+// ===== Sorting elements in an array
 
-The ((Merge Sort)), like its name indicates, has two functions merge and sort. Let’s start with the sort function:
+// The ((Merge Sort)), like its name indicates, has two functions merge and sort. Let’s start with the sort function:
 
-// image:images/image9.png[image,width=528,height=383]
+// // image:images/image9.png[image,width=528,height=383]
 
-.Sort part of the mergeSort
-[source, javascript]
-----
-include::{codedir}/algorithms/sorting/merge-sort.js[tag=splitSort]
-----
-<1> If the array only has two elements we can sort them manually.
-<2> We divide the array into two halves.
-<3> Merge the two parts recursively with the `merge` function explained below
+// .Sort part of the mergeSort
+// [source, javascript]
+// ----
+// include::{codedir}/algorithms/sorting/merge-sort.js[tag=splitSort]
+// ----
+// <1> If the array only has two elements we can sort them manually.
+// <2> We divide the array into two halves.
+// <3> Merge the two parts recursively with the `merge` function explained below
 
-// image:images/image10.png[image,width=528,height=380]
+// // image:images/image10.png[image,width=528,height=380]
 
-.Merge part of the mergeSort
-[source, javascript]
-----
-include::{codedir}/algorithms/sorting/merge-sort.js[tag=merge]
-----
+// .Merge part of the mergeSort
+// [source, javascript]
+// ----
+// include::{codedir}/algorithms/sorting/merge-sort.js[tag=merge]
+// ----
 
-The merge function combines two sorted arrays in ascending order. Let’s say that we want to sort the array `[9, 2, 5, 1, 7, 6]`. In the following illustration, you can see what each function does.
+// The merge function combines two sorted arrays in ascending order. Let’s say that we want to sort the array `[9, 2, 5, 1, 7, 6]`. In the following illustration, you can see what each function does.
 
-.Mergesort visualization. Shows the split, sort and merge steps
-image:images/image11.png[Mergesort visualization,width=500,height=600]
+// .Mergesort visualization. Shows the split, sort and merge steps
+// image:images/image11.png[Mergesort visualization,width=500,height=600]
 
-How do we obtain the running time of the merge sort algorithm? The mergesort divides the array in half each time in the split phase, _log n_, and the merge function join each splits, _n_. The total work we have *O(n log n)*. There more formal ways to reach to this runtime like using the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Method] and https://www.cs.cornell.edu/courses/cs3110/2012sp/lectures/lec20-master/lec20.html[recursion trees].
+// How do we obtain the running time of the merge sort algorithm? The mergesort divides the array in half each time in the split phase, _log n_, and the merge function join each splits, _n_. The total work we have *O(n log n)*. There more formal ways to reach to this runtime like using the https://adrianmejia.com/blog/2018/04/24/analysis-of-recursive-algorithms/[Master Method] and https://www.cs.cornell.edu/courses/cs3110/2012sp/lectures/lec20-master/lec20.html[recursion trees].
 
-[[quadratic]]
-==== Quadratic
-(((Quadratic)))
-(((Runtime, Quadratic)))
-Running times that are quadratic, O(n^2^), are the ones to watch out for. They usually don’t scale well when they have a large amount of data to process.
+// [[quadratic]]
+// ==== Quadratic
+// (((Quadratic)))
+// (((Runtime, Quadratic)))
+// Running times that are quadratic, O(n^2^), are the ones to watch out for. They usually don’t scale well when they have a large amount of data to process.
 
-Usually, they have double-nested loops that where each one visits all or most elements in the input. One example of this is a naïve implementation to find duplicate words on an array.
+// Usually, they have double-nested loops that where each one visits all or most elements in the input. One example of this is a naïve implementation to find duplicate words on an array.
 
-[#quadratic-example]
-===== Finding duplicates in an array (naïve approach)
+// [#quadratic-example]
+// ===== Finding duplicates in an array (naïve approach)
 
-If you remember we have solved this problem more efficiently on the <<Linear, Linear>> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_:
+// If you remember we have solved this problem more efficiently on the <<Linear, Linear>> section. We solved this problem before using an _O(n)_, let’s solve it this time with an _O(n^2^)_:
 
-// image:images/image12.png[image,width=527,height=389]
+// // image:images/image12.png[image,width=527,height=389]
 
-.Naïve implementation of has duplicates function
-[source, javascript]
-----
-include::{codedir}/runtimes/05-has-duplicates-naive.js[tag=hasDuplicates]
-----
+// .Naïve implementation of has duplicates function
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/05-has-duplicates-naive.js[tag=hasDuplicates]
+// ----
 
-As you can see, we have two nested loops causing the running time to be quadratic. How much different is a linear vs. quadratic algorithm?
+// As you can see, we have two nested loops causing the running time to be quadratic. How much different is a linear vs. quadratic algorithm?
 
-Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <<Linear, linear solution>> you will get the answer in seconds! [big]#🚀#
+// Let’s say you want to find a duplicated middle name in a phone directory book of a city of ~1 million people. If you use this quadratic solution you would have to wait for ~12 days to get an answer [big]#🐢#; while if you use the <<Linear, linear solution>> you will get the answer in seconds! [big]#🚀#
 
-==== Cubic
-(((Cubic)))
-(((Runtime, Cubic)))
-Cubic *O(n^3^)* and higher polynomial functions usually involve many nested loops. As an example of a cubic algorithm is a multi-variable equation solver (using brute force):
+// ==== Cubic
+// (((Cubic)))
+// (((Runtime, Cubic)))
+// Cubic *O(n^3^)* and higher polynomial functions usually involve many nested loops. As an example of a cubic algorithm is a multi-variable equation solver (using brute force):
 
-[#cubic-example]
-===== Solving a multi-variable equation
+// [#cubic-example]
+// ===== Solving a multi-variable equation
 
-Let’s say we want to find the solution for this multi-variable equation:
+// Let’s say we want to find the solution for this multi-variable equation:
 
-_3x + 9y + 8z = 79_
+// _3x + 9y + 8z = 79_
 
-A naïve approach to solve this will be the following program:
+// A naïve approach to solve this will be the following program:
 
-//image:images/image13.png[image,width=528,height=448]
+// //image:images/image13.png[image,width=528,height=448]
 
-.Naïve implementation of multi-variable equation solver
-[source, javascript]
-----
-include::{codedir}/runtimes/06-multi-variable-equation-solver.js[tag=findXYZ]
-----
+// .Naïve implementation of multi-variable equation solver
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/06-multi-variable-equation-solver.js[tag=findXYZ]
+// ----
 
-WARNING: This just an example, there are better ways to solve multi-variable equations.
+// WARNING: This just an example, there are better ways to solve multi-variable equations.
 
-As you can see three nested loops usually translates to O(n^3^). If you have a four variable equation and four nested loops it would be O(n^4^) and so on when we have a runtime in the form of _O(n^c^)_, where _c > 1_, we can refer as a *polynomial runtime*.
+// As you can see three nested loops usually translates to O(n^3^). If you have a four variable equation and four nested loops it would be O(n^4^) and so on when we have a runtime in the form of _O(n^c^)_, where _c > 1_, we can refer as a *polynomial runtime*.
 
-==== Exponential
-(((Exponential)))
-(((Runtime, Exponential)))
-Exponential runtimes, O(2^n^), means that every time the input grows by one the number of operations doubles. Exponential programs are only usable for a tiny number of elements (<100) otherwise it might not finish on your lifetime. [big]#💀#
+// ==== Exponential
+// (((Exponential)))
+// (((Runtime, Exponential)))
+// Exponential runtimes, O(2^n^), means that every time the input grows by one the number of operations doubles. Exponential programs are only usable for a tiny number of elements (<100) otherwise it might not finish on your lifetime. [big]#💀#
 
-Let’s do an example.
+// Let’s do an example.
 
-[#exponential-example]
-===== Finding subsets of a set
+// [#exponential-example]
+// ===== Finding subsets of a set
 
-Finding all distinct subsets of a given set can be implemented as follows:
+// Finding all distinct subsets of a given set can be implemented as follows:
 
-// image:images/image14.png[image,width=528,height=401]
+// // image:images/image14.png[image,width=528,height=401]
 
-.Subsets in a Set
-[source, javascript]
-----
-include::{codedir}/runtimes/07-sub-sets.js[tag=snippet]
-----
-<1> Base case is empty element.
-<2> For each element from the input append it to the results array.
-<3> The new results array will be what it was before + the duplicated with the appended element.
+// .Subsets in a Set
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/07-sub-sets.js[tag=snippet]
+// ----
+// <1> Base case is empty element.
+// <2> For each element from the input append it to the results array.
+// <3> The new results array will be what it was before + the duplicated with the appended element.
 
-//.The way this algorithm generates all subsets is:
-//1. The base case is an empty element (line 13). E.g. ['']
-//2. For each element from the input append it to the results array (line 16)
-//3. The new results array will be what it was before + the duplicated with the appended element (line 17)
+// //.The way this algorithm generates all subsets is:
+// //1. The base case is an empty element (line 13). E.g. ['']
+// //2. For each element from the input append it to the results array (line 16)
+// //3. The new results array will be what it was before + the duplicated with the appended element (line 17)
 
-Every time the input grows by one the resulting array doubles. That’s why it has an *O(2^n^)*.
+// Every time the input grows by one the resulting array doubles. That’s why it has an *O(2^n^)*.
 
-==== Factorial
-(((Factorial)))
-(((Runtime, Factorial)))
-Factorial runtime, O(n!), is not scalable at all. Even with input sizes of ~10 elements, it will take a couple of seconds to compute. It’s that slow! [big]*🍯🐝*
+// ==== Factorial
+// (((Factorial)))
+// (((Runtime, Factorial)))
+// Factorial runtime, O(n!), is not scalable at all. Even with input sizes of ~10 elements, it will take a couple of seconds to compute. It’s that slow! [big]*🍯🐝*
 
-.Factorial
-****
-A factorial is the multiplication of all the numbers less than itself down to 1.
+// .Factorial
+// ****
+// A factorial is the multiplication of all the numbers less than itself down to 1.
 
-.For instance:
-- 3! = 3 x 2 x 1 = 6
-- 5! = 5 x 4 x 3 x 2 x 1 = 120
-- 10! = 3,628,800
-- 11! = 39,916,800
-****
+// .For instance:
+// - 3! = 3 x 2 x 1 = 6
+// - 5! = 5 x 4 x 3 x 2 x 1 = 120
+// - 10! = 3,628,800
+// - 11! = 39,916,800
+// ****
 
-[#factorial-example]
-===== Getting all permutations of a word
-(((Permutations)))
-(((Words permutations)))
-One classic example of an _O(n!)_ algorithm is finding all the different words that can be formed with a given set of letters.
+// [#factorial-example]
+// ===== Getting all permutations of a word
+// (((Permutations)))
+// (((Words permutations)))
+// One classic example of an _O(n!)_ algorithm is finding all the different words that can be formed with a given set of letters.
 
-.Word's permutations
-// image:images/image15.png[image,width=528,height=377]
-[source, javascript]
-----
-include::{codedir}/runtimes/08-permutations.js[tag=snippet]
-----
+// .Word's permutations
+// // image:images/image15.png[image,width=528,height=377]
+// [source, javascript]
+// ----
+// include::{codedir}/runtimes/08-permutations.js[tag=snippet]
+// ----
 
-As you can see in the `getPermutations` function, the resulting array is the factorial of the word length.
+// As you can see in the `getPermutations` function, the resulting array is the factorial of the word length.
 
-Factorial start very slow and then it quickly becomes uncontrollable. A word size of just 11 characters would take a couple of hours in most computers!
-[big]*🤯*
+// Factorial start very slow and then it quickly becomes uncontrollable. A word size of just 11 characters would take a couple of hours in most computers!
+// [big]*🤯*
 
-==== Summary
+// ==== Summary
 
-We went through 8 of the most common time complexities and provided examples for each of them. Hopefully, this will give you a toolbox to analyze algorithms.
-(((Tables, Intro, Common time complexities and examples)))
+// We went through 8 of the most common time complexities and provided examples for each of them. Hopefully, this will give you a toolbox to analyze algorithms.
+// (((Tables, Intro, Common time complexities and examples)))
 
-// tag::table[]
-.Most common algorithmic running times and their examples
-[cols="2,2,5",options="header"]
-|===
-|Big O Notation
-|Name
-|Example(s)
+// // tag::table[]
+// .Most common algorithmic running times and their examples
+// [cols="2,2,5",options="header"]
+// |===
+// |Big O Notation
+// |Name
+// |Example(s)
 
-|O(1)
-|<<Constant>>
-|<<constant-example>>
+// |O(1)
+// |<<Constant>>
+// |<<constant-example>>
 
-|O(log n)
-|<<Logarithmic>>
-|<<logarithmic-example>>
+// |O(log n)
+// |<<Logarithmic>>
+// |<<logarithmic-example>>
 
-|O(n)
-|<<Linear>>
-|<<linear-example>>
-
-|O(n log n)
-|<<Linearithmic>>
-|<<linearithmic-example>>
-
-|O(n^2^)
-|<<Quadratic>>
-|<<quadratic-example>>
-
-|O(n^3^)
-|<<Cubic>>
-|<<cubic-example>>
-
-|O(2^n^)
-|<<Exponential>>
-|<<exponential-example>>
-
-|O(n!)
-|<<Factorial>>
-|<<factorial-example>>
-|===
-// end::table[]
+// |O(n)
+// |<<Linear>>
+// |<<linear-example>>
+
+// |O(n log n)
+// |<<Linearithmic>>
+// |<<linearithmic-example>>
+
+// |O(n^2^)
+// |<<Quadratic>>
+// |<<quadratic-example>>
+
+// |O(n^3^)
+// |<<Cubic>>
+// |<<cubic-example>>
+
+// |O(2^n^)
+// |<<Exponential>>
+// |<<exponential-example>>
+
+// |O(n!)
+// |<<Factorial>>
+// |<<factorial-example>>
+// |===
+// // end::table[]
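
The commented-out chapter above describes binary search (O(log n)), map-based duplicate detection (O(n)), subset generation (O(2^n)) and word permutations (O(n!)), but the actual `include::{codedir}/...` snippets are not part of this diff. For reference, here is a minimal sketch of what those examples might look like — hypothetical implementations, not the book's actual files under `{codedir}/runtimes/`:

```javascript
// Recursive binary search on a sorted array: O(log n).
// Returns the index of `target`, or -1 if absent.
function binarySearch(arr, target, lo = 0, hi = arr.length - 1) {
  if (lo > hi) return -1;
  const mid = Math.floor((lo + hi) / 2);
  if (arr[mid] === target) return mid;
  return arr[mid] < target
    ? binarySearch(arr, target, mid + 1, hi)   // search right half
    : binarySearch(arr, target, lo, mid - 1);  // search left half
}

// Duplicate detection with an auxiliary Set: O(n) time and space.
function hasDuplicates(words) {
  const seen = new Set();
  for (const word of words) {
    if (seen.has(word)) return true;
    seen.add(word);
  }
  return false;
}

// All subsets of a set: the result doubles with each new element,
// hence 2^n subsets and an O(2^n) runtime.
function getSubsets(elements) {
  let results = [[]]; // base case: the empty subset
  for (const el of elements) {
    // keep every existing subset, and add a copy with the new element
    results = results.concat(results.map((set) => [...set, el]));
  }
  return results;
}

// All permutations of a word: n! results, hence O(n!).
function getPermutations(word, prefix = '') {
  if (word.length <= 1) return [prefix + word];
  return Array.from(word).flatMap((letter, i) =>
    getPermutations(word.slice(0, i) + word.slice(i + 1), prefix + letter));
}

console.log(binarySearch([1, 2, 5, 6, 7, 9], 7)); // 4
console.log(hasDuplicates(['a', 'b', 'a']));      // true
console.log(getSubsets([1, 2]).length);           // 4
console.log(getPermutations('ab'));               // [ 'ab', 'ba' ]
```

Note how the subset count and the permutation count grow with the input size, which is what makes the exponential and factorial sections warn against large inputs.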

book-pro/part01-algorithms-analysis.asc
Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ In this part, we are going to cover the basics of algorithms analysis. Also, we
 
 include::content/part01/algorithms-analysis.asc[]
 
-// include::content/part01/big-o-examples.asc[]
+include::content/part01/big-o-examples.asc[]
 
 === Summary
 
book-pro/progit.asc
Lines changed: 3 additions & 2 deletions

@@ -12,11 +12,12 @@ Adrian Mejia
 
 ifdef::ebook-format[:leveloffset: -1]
 
-include::part01-algorithms-analysis.asc[]
+include::ch02-git-basics-chapter.asc[]
+
+// include::part01-algorithms-analysis.asc[]
 
 include::ch01-getting-started.asc[]
 
-include::ch02-git-basics-chapter.asc[]
 
 include::[]
 
0 commit comments
