Commit 43744a3

author “jake” committed
1 parent 3da9573 commit 43744a3

File tree: 4 files changed, +45 -0 lines changed

exploratory.rst (+14)

@@ -94,6 +94,20 @@ Using the 50 percentile to compare among different classes, it is easy to find f

can have high prediction importance if they do not overlap. They can also be used for outlier detection.
Features have to be **continuous**.

From different dataframes, displaying the same feature.

.. code:: python

    # collect the same feature, 'Pressure', from several dataframes into one frame
    df = pd.DataFrame({'normal': normal['Pressure'], 's1': cf6['Pressure'], 's2': cf12['Pressure'],
                       's3': cf20['Pressure'], 's4': cf30['Pressure'], 's5': cf45['Pressure']})
    df.boxplot(figsize=(10,5));

.. image:: images/box3.png
    :scale: 50 %
    :align: center

From the same dataframe, displaying a feature split by its different y-labels.

.. code:: python

    plt.figure(figsize=(7, 5))
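
The rest of that second code block falls outside this hunk. As a rough sketch of the idea (not the commit's own code), a feature split by y-label can be boxplotted directly with pandas; the ``Pressure`` and ``label`` column names below are hypothetical stand-ins:

.. code:: python

    import pandas as pd

    # hypothetical data: one continuous feature plus a categorical y-label
    df = pd.DataFrame({'Pressure': [1.0, 1.2, 0.9, 2.1, 2.3, 2.0],
                       'label': ['normal', 'normal', 'normal', 'fault', 'fault', 'fault']})

    # one box per y-label value for the same feature
    df.boxplot(column='Pressure', by='label', figsize=(7, 5));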

images/box3.PNG (15.2 KB)

images/deep-activation1.png (39.3 KB)

sl-deeplearning.rst (+31)

@@ -53,6 +53,37 @@ Model architecture can also be displayed in a graph. Or we can print as a summar

model summary printout

Activation Functions
----------------------

Hidden Layers
***************

ReLU (Rectified Linear Unit) is very popular compared to the now mostly obsolete sigmoid & tanh functions because it
avoids the vanishing gradient problem and converges faster. However, ReLU can only be used in hidden layers.
Also, some gradients can be fragile during training and can die: a single weight update can leave a unit
never activating on any data point again. In short, ReLU can result in dead neurons.

To fix this problem of dying neurons, a modification called Leaky ReLU was introduced; it gives negative inputs
a small slope so that the updates stay alive. There is a further variant, the Maxout function, built from both
ReLU and Leaky ReLU.

.. figure:: images/deep-activation.png
    :width: 500px
    :align: center

    https://towardsdatascience.com/activation-functions-and-its-types-which-is-better-a9a5310cc8f
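
As a quick illustration of the difference between the two (an editorial sketch, independent of any particular deep learning framework), ReLU and Leaky ReLU can be written in plain NumPy:

.. code:: python

    import numpy as np

    def relu(x):
        # negative inputs are clamped to zero; a unit stuck there gets no gradient ("dead neuron")
        return np.maximum(0, x)

    def leaky_relu(x, slope=0.01):
        # negative inputs keep a small slope, so a gradient always flows
        return np.where(x > 0, x, slope * x)

    x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
    print(relu(x))        # negatives become 0
    print(leaky_relu(x))  # negatives become -0.02 and -0.005
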
Output Layer
************

- Sigmoid: Binary Classification
- Softmax: Multi-Class Classification
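
A minimal sketch of the two output-layer choices, assuming a ``tf.keras`` Sequential model (the framework, input shape, and class count are placeholders, not taken from the commit):

.. code:: python

    from tensorflow.keras import layers, models

    # binary classification: a single sigmoid unit, paired with binary cross-entropy
    binary_model = models.Sequential([
        layers.Input(shape=(10,)),              # placeholder: 10 input features
        layers.Dense(32, activation='relu'),    # ReLU in the hidden layer
        layers.Dense(1, activation='sigmoid'),  # sigmoid output in [0, 1]
    ])
    binary_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # multi-class classification: one softmax unit per class, paired with categorical cross-entropy
    multi_model = models.Sequential([
        layers.Input(shape=(10,)),
        layers.Dense(32, activation='relu'),
        layers.Dense(5, activation='softmax'),  # placeholder: 5 classes, outputs sum to 1
    ])
    multi_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
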
ANN
-----------
