docs/cnn/utils/cv_train.html
+12 −57 — Lines changed: 12 additions & 57 deletions
@@ -72,38 +72,7 @@
 <div class='section-link'>
 <a href='#section-0'>#</a>
 </div>
-<h1>Cross-Validation & Early Stopping</h1>
-<p>Implementation of two fundamental techniques, namely <em>Cross-Validation</em> and <em>Early Stopping</em>.</p>
-<h3>Cross-Validation</h3>
-<p>
-Collecting data is expensive, and in some cases one has no option but to train a machine learning model on a limited amount of data.
-This is where Cross-Validation is useful. The steps are as follows (see the sketch after this list):
-<ol type="1">
-<li> Split the data into K folds </li>
-<li> Use K-1 folds to train a set of models </li>
-<li> Validate the models on the remaining fold </li>
-<li> Repeat (2) and (3) until every fold has served as the validation fold </li>
-<li> Average the performance over all runs </li>
-</ol>
-</p>
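A minimal sketch of the five steps above, using scikit-learn's KFold for the splitting; `train_model` and `evaluate` are hypothetical placeholders for the project-specific training and validation code:

```python
# K-fold cross-validation sketch following the steps above.
# `train_model` and `evaluate` are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, y, k=5):
    kf = KFold(n_splits=k, shuffle=True, random_state=0)        # (1) split into K folds
    scores = []
    for train_idx, val_idx in kf.split(X):                      # (4) repeat for every fold
        model = train_model(X[train_idx], y[train_idx])         # (2) train on K-1 folds
        scores.append(evaluate(model, X[val_idx], y[val_idx]))  # (3) validate on the held-out fold
    return float(np.mean(scores))                               # (5) average over all runs
```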
-<h3>Early-Stopping</h3>
-Deep learning networks are prone to overfitting: although an overfitted model performs well on the training set, it generalizes poorly to unseen data.
-In other words, overfitted models have low bias and high variance. The lower the bias, the better the model fits the training data; the higher the variance, the more sensitive the model is to that particular training data.
-<p>Therefore, the user has to find a tradeoff between bias and variance.</p>
-<p>Early-Stopping is one way to find this tradeoff. It helps find a good setting of the parameters, preventing overfitting on the dataset and saving computation time.
-This can be visualized through the following graph of training loss and validation loss over time:</p><br>
-<a href="https://www.deeplearningbook.org/contents/regularization.html"><img src="Cross-validation.png" alt="Training vs. validation set loss"></a>
-<br>
-<p>It can be seen that the training error continues to decrease, but the validation error starts to increase after around 40 epochs.
-Therefore, our goal is to stop training once the validation loss starts to increase.</p>
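A hedged sketch of that stopping rule with a patience window, so training survives brief fluctuations in validation loss; `train_one_epoch` and `validation_loss` are hypothetical placeholders for the real training and evaluation loops:

```python
# Patience-based early stopping sketch. `train_one_epoch` and
# `validation_loss` are hypothetical placeholders.
def train_with_early_stopping(model, max_epochs=100, patience=5):
    best_loss = float('inf')
    bad_epochs = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        val_loss = validation_loss(model)
        if val_loss < best_loss:
            best_loss, bad_epochs = val_loss, 0  # validation improved: reset counter
        else:
            bad_epochs += 1
            if bad_epochs >= patience:           # no improvement for `patience` epochs
                break                            # stop before overfitting sets in
    return model
```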
 The discriminators test whether the generated images look real.</p>
 <p>This file contains the model code as well as the training code.
 We also have a Google Colab notebook.</p>
-<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/gan/cycle_gan.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>
+<p><a href="https://colab.research.google.com/github/lab-ml/nn/blob/master/labml_nn/gan/cycle_gan/experiment.ipynb"><img alt="Open In Colab" src="https://colab.research.google.com/assets/colab-badge.svg" /></a>