
chore(book): improves grammar #80


Merged
merged 1 commit into from
Sep 12, 2020
30 changes: 14 additions & 16 deletions book/D-interview-questions-solutions.asc
@@ -803,21 +803,21 @@ graph G {

The red connections are critical; if we remove any, some servers won't be reachable.

We can solve this problem in one pass using DFS. But for that, we keep track of the nodes that are part of a loop (strongly connected components). To do that, we use the time of visit (or depth in the recursion) each node.
We can solve this problem in one pass using DFS. But for that, we keep track of the nodes that are part of a loop (strongly connected components). We use the time of visit (or depth in the recursion) of each node.

For example C, if we start on `c0`, it belongs to group 0; then we move to `c1`, `c2`, and `c3`, increasing the depth counter. Each one will be in its own group since there's no loop.

For example B, we can start at `b0`, and then we move to `b1` and `b2`. However, `b2` circles back to `b0`, which is in group 0. We can update the group of `b1` and `b2` to 0 since they are all connected in a loop.

For an *undirected graph*, If we found a node on our dfs, that we have previously visited, we found a loop! We can mark all of them with the lowest group number. We know we have a critical path when it's a connection that links two different groups. For example A, they all will belong to group 0, since they are all in a loop. For Example B, we will have `b0`, `b1`, and `b2` on the same group while `b3` will be on a different group.
For an *undirected graph*, if we find a node on our DFS that we have previously visited, we have found a loop! We can mark all of its nodes with the lowest group number. We know we have a critical path when a connection links two different groups. For example A, all nodes will belong to group 0, since they are all in a loop. For example B, we will have `b0`, `b1`, and `b2` in the same group, while `b3` will be in a different group.

*Algorithm*:

* Build the graph as an adjacency list (map + array)
* Run DFS on any node, e.g., `0`.
** Keep track of the nodes you have seen using a `group` array. Instead of marking them as seen or not, mark them with the `depth`.
** Visit all the adjacent nodes that are NOT the parent.
** If we see a node that we have visited yet, do a dfs on it and increase the depth.
** If we see a node that we have not visited yet, do a DFS on it and increase the depth.
** If the adjacent node has a lower grouping number, update the current node with it.
** If the adjacent node has a higher grouping number, then we found a critical path.
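The grouping idea above can be sketched in code. This is a hypothetical standalone version for illustration (the book's actual solution lives in the included interview-questions file), so the `criticalConnections` name and signature are assumptions:

[source, javascript]
----
// Tarjan-style bridge finding: `group[node]` starts as the DFS depth and
// is lowered to the smallest group reachable through a loop.
function criticalConnections(n, connections) {
  const graph = new Map(Array.from({ length: n }, (_, i) => [i, []]));
  for (const [u, v] of connections) {
    graph.get(u).push(v);
    graph.get(v).push(u); // undirected graph
  }
  const group = new Array(n).fill(-1); // -1 = not visited yet
  const critical = [];

  function dfs(node, parent, depth) {
    group[node] = depth;
    for (const adj of graph.get(node)) {
      if (adj === parent) continue; // don't go back through the same edge
      if (group[adj] === -1) dfs(adj, node, depth + 1); // unseen: go deeper
      group[node] = Math.min(group[node], group[adj]); // adopt the lower group
      if (group[adj] > depth) critical.push([node, adj]); // links two groups
    }
  }

  dfs(0, -1, 0);
  return critical;
}
----

For the B example, `criticalConnections(4, [[0, 1], [1, 2], [2, 0], [1, 3]])` returns `[[1, 3]]`: the loop `b0-b1-b2` collapses into group 0, and only the edge to `b3` is critical.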

@@ -863,18 +863,16 @@ The first thing we need to understand is all the different possibilities for ove
// my own image
image::intervals-overlap-cases-owned.png[merge intervals cases]

One way to solve this problem, is sorting by start time. That will eliminate half of the cases!

Since A will always start before B, only 3 cases apply:
- No overlap: `[[1, 3], [4, 6]]`.
- Overlap at the end: `[[1, 3], [2, 4]]`.
- Eclipse: `[[1, 9], [3, 7]]`.
One way to solve this problem is to sort by start time. That will eliminate half of the cases! Since A will always start before B, only 3 cases apply:
- No overlap: E.g., `[[1, 3], [4, 6]]`.
- Overlap at the end: E.g., `[[1, 3], [2, 4]]`.
- Eclipse: E.g., `[[1, 9], [3, 7]]`.

*Algorithm*:

* Sort intervals by start time
* If the `curr`ent interval's start time is _equal to_ or less than the `last` interval's end time, then we have an overlap.
** Overlaps has two cases: 1) `curr`'s end is larger 2) `last`'s end is larger. For both cases `Math.max` works.
** Overlaps have two cases: 1) `curr`'s end is larger; 2) `last`'s end is larger. `Math.max` works for both cases.
* If there's no overlap, we add the interval to the solution.

*Implementation*:
@@ -884,12 +882,12 @@ Since A will always start before B, only 3 cases apply:
include::interview-questions/merge-intervals.js[tags=description;solution]
----
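Since the included file isn't rendered in the diff, here is a minimal sketch of the approach described above — an assumption of what the solution looks like, not the book's exact code:

[source, javascript]
----
// Sort by start time, then either extend the previous interval or start a new one.
function mergeIntervals(intervals) {
  const sorted = [...intervals].sort((a, b) => a[0] - b[0]);
  const result = [];
  for (const [start, end] of sorted) {
    const last = result[result.length - 1];
    if (last && start <= last[1]) {
      last[1] = Math.max(last[1], end); // overlap: Math.max covers both cases
    } else {
      result.push([start, end]); // no overlap: add a new interval
    }
  }
  return result;
}
----

E.g., `mergeIntervals([[1, 3], [2, 4], [6, 8]])` returns `[[1, 4], [6, 8]]`, and the eclipse case `[[1, 9], [3, 7]]` collapses to `[[1, 9]]`.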

For the first interval, it will be added straight to the solution array. For all others, we will do the comparison.
For the first interval, it will be added straight to the solution array. For all others, we will make a comparison.

*Complexity Analysis*:

- Time: `O(n log n)`. Standard libraries has a sorting time of `O(n log n)`, then we visit each interval in `O(n)`.
- Space: `O(n)`. In the worst-case is when there's no overlapping intervals. The size of the solution array would be `n`.
- Time: `O(n log n)`. Standard libraries have a sorting time of `O(n log n)`, then we visit each interval in `O(n)`.
- Space: `O(n)`. The worst case is when there are no overlapping intervals; the size of the solution array would be `n`.



@@ -902,9 +900,9 @@ For the first interval, it will be added straight to the solution array. For all
[#sorting-q-sort-colors]
include::content/part04/sorting-algorithms.asc[tag=sorting-q-sort-colors]

We are asked to sort an array with 3 possible values. If we use the standard sorting method `Array.sort`, that will be `O(n log n)`. However, we are asked to solve in linear time and constant space complexity.
We are asked to sort an array with 3 possible values. If we use the standard sorting method `Array.sort`, that will be `O(n log n)`. However, there's a requirement to solve it in linear time and constant space complexity.

The concept on quicksort can help here. We can choose 1 as a pivot and move everything less than 1 to the left and everything bigger than 1 to the right.
The concept of quicksort can help here. We can choose `1` as a pivot and move everything less than 1 to the left and everything greater than 1 to the right.

*Algorithm*:

@@ -922,7 +920,7 @@ The concept on quicksort can help here. We can choose 1 as a pivot and move ever
include::interview-questions/sort-colors.js[tags=description;solution]
----
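The quicksort-style partition described above is known as the Dutch national flag approach. A sketch, assumed to mirror the included `sort-colors.js` rather than reproduce it exactly:

[source, javascript]
----
// One pass, constant space: 0s go left, 2s go right, 1s stay in the middle.
function sortColors(nums) {
  let left = 0; // next slot for a 0
  let right = nums.length - 1; // next slot for a 2
  let i = 0;
  while (i <= right) {
    if (nums[i] === 0) {
      [nums[left], nums[i]] = [nums[i], nums[left]]; // destructuring swap
      left++;
      i++;
    } else if (nums[i] === 2) {
      [nums[right], nums[i]] = [nums[i], nums[right]];
      right--; // don't advance i; the swapped-in value is unchecked
    } else {
      i++; // a 1: leave it in place
    }
  }
  return nums;
}
----

E.g., `sortColors([2, 0, 2, 1, 1, 0])` returns `[0, 0, 1, 1, 2, 2]` in a single pass, meeting the linear-time, constant-space requirement.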

We are using the destructive assigment to swap the elements. Here's another version a little bit more compact.
We are using destructuring assignment to swap the elements. Here's another, slightly more compact version.

[source, javascript]
----
16 changes: 8 additions & 8 deletions book/content/part04/sorting-algorithms.asc
@@ -5,8 +5,8 @@ endif::[]

=== Sorting Algorithms

Sorting is one of the most common solutions when we want to extract some insights about a collection of data.
We can sort to get the maximum or minimum value and many algorithmic problems involves sorting data first.
Sorting is one of the most common solutions when we want to extract some insights about data.
We can sort to get the maximum or minimum value, and many algorithmic problems can benefit from sorting.

.We are going to explore three basic sorting algorithms _O(n^2^)_ which have low overhead:
- <<part04-algorithmic-toolbox#bubble-sort>>
@@ -21,15 +21,15 @@ Before we dive into the most well-known sorting algorithms, let's discuss the so

==== Sorting Properties

Sorting implementations with the same time complexity might manipulate the data differently. We want to understand these differences so we can be aware of the side-effects it will have on data or extra resources they will require. For instance, some solutions will need auxiliary memory to store temporary data while sorting while others can do it in place.
Sorting implementations with the same time complexity might manipulate the data differently. We want to understand these differences to be aware of the side effects it will have on data or extra resources they will require. For instance, some solutions will need auxiliary memory to store temporary data while sorting, while others can do it in place.

Sorting properties are stable, adaptive, online and in-place. Let's go one by one.
Sorting properties are stable, adaptive, online, and in-place. Let's go one by one.

===== Stable
(((Sorting, stable)))
A ((stable sorting)) algorithm keeps the relative order of items with the same comparison criteria.

This especially useful when you want to sort on multiple phases.
This is incredibly useful when you want to sort on multiple phases.

.Let's say you have the following data:
[source, javascript]
@@ -82,7 +82,7 @@ Both results are sorted by `age`; however, having a stable sorting is better if
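To make stability concrete, here is a small demo with hypothetical data; it relies on `Array.prototype.sort` being specified as stable in modern JavaScript engines (ES2019+):

[source, javascript]
----
// Ties on `age` keep their original relative order with a stable sort.
const users = [
  { name: 'Ana', age: 30 },
  { name: 'Bob', age: 25 },
  { name: 'Cid', age: 30 },
];
const byAge = [...users].sort((a, b) => a.age - b.age);
console.log(byAge.map((u) => u.name)); // Ana stays before Cid among the 30s
----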
===== In-place
(((Sorting, in-place)))
An ((in-place sorting)) algorithm would have a _space complexity_ of O(1). In other words, it does not use any other auxiliary memory because it moves the items in the collection itself.
No requiring extra memory for sorting is especially useful for memory constraint environments like robotics, smart devices, or embedded systems in appliances.
Not requiring extra memory for sorting is especially useful for large amounts of data or in memory-constrained environments like robotics, smart devices, or embedded systems in appliances.

===== Online
(((Sorting, online)))
@@ -111,7 +111,7 @@ include::quick-sort.asc[]
<<<
==== Summary

We explored many algorithms some of them simple and other more performant. Also, we cover the properties of sorting algorithms such as stable, in-place, online and adaptive.
We explored the most common sorting algorithms, some of which are simple and others more performant. We also covered the properties of sorting algorithms, such as stable, in-place, online, and adaptive.
(((Tables, Algorithms, Sorting Complexities)))
(((Tables, Algorithms, Sorting Summary)))

@@ -162,7 +162,7 @@ We explored many algorithms some of them simple and other more performant. Also,

// end::sorting-q-merge-intervals[]

// _Seen in interviews at: X._
// _Seen in interviews at: Facebook, Amazon, Bloomberg._

*Starter code*:
