Commit d1571a9

add assumption to entropy vs class error in tree splitting
1 parent 65d656d commit d1571a9

File tree

1 file changed: +1 -1 lines changed

1 file changed

+1
-1
lines changed

faq/decisiontree-error-vs-entropy.md

@@ -60,7 +60,7 @@ Now, is it possible to learn this hypothesis (i.e., tree model) by minimizing th

 ![](./decisiontree-error-vs-entropy/Slide2.png)

-As we can see, the Information Gain after the first split is exactly 0, since average classification error of the 2 child nodes is exactly the same as the classification error of the parent node (40/120 = 0.3333333). In this case, splitting the initial training set wouldn't yield any improvement in terms of our classification error criterion, and thus, the tree algorithm would stop at this point.
+As we can see, the Information Gain after the first split is exactly 0, since average classification error of the 2 child nodes is exactly the same as the classification error of the parent node (40/120 = 0.3333333). In this case, splitting the initial training set wouldn't yield any improvement in terms of our classification error criterion, and thus, the tree algorithm would stop at this point (for this statement to be true, we have to make the assumption that splitting on feature x2 or x3 would not lead to an Information Gain either).

 Next, let's see what happens if we use Entropy as an impurity metric:

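Since the changed paragraph hinges on the weighted-impurity arithmetic, here is a minimal sketch of that computation. The concrete class counts below (a parent node of (80, 40) split into children of (40, 40) and (40, 0)) are an assumption chosen only because they reproduce the 40/120 parent error mentioned in the text; the actual counts on the slides may differ.

```python
import numpy as np

def classification_error(p):
    # Misclassification error of a binary node, where p is the fraction of one class
    return 1 - max(p, 1 - p)

def entropy(p):
    # Shannon entropy (base 2) of a binary node; defined as 0 for a pure node
    if p == 0 or p == 1:
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(parent_counts, children_counts, impurity):
    # Information Gain = I(parent) - sum_j (N_j / N) * I(child_j)
    n = sum(parent_counts)
    gain = impurity(parent_counts[0] / n)
    for counts in children_counts:
        n_child = sum(counts)
        gain -= (n_child / n) * impurity(counts[0] / n_child)
    return gain

# Assumed counts that reproduce the 40/120 parent error from the text:
parent = (80, 40)               # 80 samples of one class, 40 of the other
children = [(40, 40), (40, 0)]  # split produced by the first feature

print(information_gain(parent, children, classification_error))  # 0.0
print(information_gain(parent, children, entropy))               # ~0.2516
```

Under this assumed split, the weighted classification error of the children equals the parent's error, so the gain is 0, whereas entropy still reports a positive gain (about 0.25) for the same split.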