
Commit 8dea1c8

Committed Sep 14, 2017
Udacity Deep Learning Nanodegree
Added Weight Initialization
1 parent 6cbfca4 commit 8dea1c8

File tree

6 files changed: +941 -0 lines changed
 
@@ -0,0 +1,46 @@
# Deep Learning Nanodegree Foundation

This repository contains material related to Udacity's [Deep Learning Nanodegree Foundation](https://www.udacity.com/course/deep-learning-nanodegree-foundation--nd101) program. It consists of tutorial notebooks for various deep learning topics. In most cases, the notebooks lead you through implementing models such as convolutional networks, recurrent networks, and GANs. Other topics, such as weight initialization and batch normalization, are covered as well.

There are also notebooks used as projects for the Nanodegree program. In the program itself, the projects are reviewed by Udacity experts, but they are available here as well.

## Table Of Contents

### Tutorials

* [Sentiment Analysis with NumPy](https://github.com/udacity/deep-learning/tree/master/sentiment-network): [Andrew Trask](http://iamtrask.github.io/) leads you through building a sentiment analysis model, predicting if some text is positive or negative.
* [Intro to TensorFlow](https://github.com/udacity/deep-learning/tree/master/intro-to-tensorflow): Start building neural networks with TensorFlow.
* [Weight Initialization](https://github.com/udacity/deep-learning/tree/master/weight-initialization): Explore how initializing network weights affects performance.
* [Autoencoders](https://github.com/udacity/deep-learning/tree/master/autoencoder): Build models for image compression and denoising, using feed-forward and convolutional networks in TensorFlow.
* [Transfer Learning (ConvNet)](https://github.com/udacity/deep-learning/tree/master/transfer-learning): In practice, most people don't train their own large networks on huge datasets, but use pretrained networks such as VGGnet. Here you'll use VGGnet to classify images of flowers without training a network on the images themselves.
* [Intro to Recurrent Networks (Character-wise RNN)](https://github.com/udacity/deep-learning/tree/master/intro-to-rnns): Recurrent neural networks are able to use information about the sequence of data, such as the sequence of characters in text.
* [Embeddings (Word2Vec)](https://github.com/udacity/deep-learning/tree/master/embeddings): Implement the Word2Vec model to find semantic representations of words for use in natural language processing.
* [Sentiment Analysis RNN](https://github.com/udacity/deep-learning/tree/master/sentiment-rnn): Implement a recurrent neural network that can predict if a text sample is positive or negative.
* [TensorBoard](https://github.com/udacity/deep-learning/tree/master/tensorboard): Use TensorBoard to visualize the network graph, as well as how parameters change through training.
* [Reinforcement Learning (Q-Learning)](https://github.com/udacity/deep-learning/tree/master/reinforcement): Implement a deep Q-learning network to play a simple game from OpenAI Gym.
* [Sequence to Sequence](https://github.com/udacity/deep-learning/tree/master/seq2seq): Implement a sequence-to-sequence recurrent network.
* [Batch Normalization](https://github.com/udacity/deep-learning/tree/master/batch-norm): Learn how to improve training rates and network stability with batch normalization.
* [Generative Adversarial Network on MNIST](https://github.com/udacity/deep-learning/tree/master/gan_mnist): Train a simple generative adversarial network on the MNIST dataset.
* [Deep Convolutional GAN (DCGAN)](https://github.com/udacity/deep-learning/tree/master/dcgan-svhn): Implement a DCGAN to generate new images based on the Street View House Numbers (SVHN) dataset.
* [Intro to TFLearn](https://github.com/udacity/deep-learning/tree/master/intro-to-tflearn): A couple of introductions to a high-level library for building neural networks.

### Projects

* [Your First Neural Network](https://github.com/udacity/deep-learning/tree/master/first-neural-network): Implement a neural network in NumPy to predict bike rentals.
* [Image Classification](https://github.com/udacity/deep-learning/tree/master/image-classification): Build a convolutional neural network with TensorFlow to classify CIFAR-10 images.
* [Text Generation](https://github.com/udacity/deep-learning/tree/master/tv-script-generation): Train a recurrent neural network on scripts from The Simpsons (copyright Fox) to generate new scripts.
* [Machine Translation](https://github.com/udacity/deep-learning/tree/master/language-translation): Train a sequence-to-sequence network for English-to-French translation (on a simple dataset).
* [Face Generation](https://github.com/udacity/deep-learning/tree/master/face_generation): Use a DCGAN on the CelebA dataset to generate images of novel and realistic human faces.

## Dependencies

Each directory has a `requirements.txt` describing the minimal dependencies required to run the notebooks in that directory.

### pip

To install these dependencies with pip, you can issue `pip3 install -r requirements.txt`.

### Conda Environments

You can find Conda environment files for the Deep Learning program in the `environments` folder. Note that environment files are platform dependent. Versions with `tensorflow-gpu` are labeled in the filename with "GPU".
@@ -0,0 +1 @@
MNIST_data/
@@ -0,0 +1,116 @@
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf


def hist_dist(title, distribution_tensor, hist_range=(-4, 4)):
    """
    Display histogram of a TF distribution
    """
    with tf.Session() as sess:
        values = sess.run(distribution_tensor)

    plt.title(title)
    plt.hist(values, np.linspace(*hist_range, num=len(values)//2))
    plt.show()


def _get_loss_acc(dataset, weights):
    """
    Get losses and validation accuracy of example neural network
    """
    batch_size = 128
    epochs = 2
    learning_rate = 0.001

    features = tf.placeholder(tf.float32)
    labels = tf.placeholder(tf.float32)
    learn_rate = tf.placeholder(tf.float32)

    # Biases for the two hidden layers (256 and 128 units) and the output layer
    biases = [
        tf.Variable(tf.zeros([256])),
        tf.Variable(tf.zeros([128])),
        tf.Variable(tf.zeros([dataset.train.labels.shape[1]]))
    ]

    # Layers
    layer_1 = tf.nn.relu(tf.matmul(features, weights[0]) + biases[0])
    layer_2 = tf.nn.relu(tf.matmul(layer_1, weights[1]) + biases[1])
    logits = tf.matmul(layer_2, weights[2]) + biases[2]

    # Training loss
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(learn_rate).minimize(loss)

    # Accuracy
    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Measurements used for graphing loss
    loss_batch = []

    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        batch_count = int(dataset.train.num_examples / batch_size)

        # The training cycle
        for epoch_i in range(epochs):
            for batch_i in range(batch_count):
                batch_features, batch_labels = dataset.train.next_batch(batch_size)

                # Run optimizer and get loss
                session.run(
                    optimizer,
                    feed_dict={features: batch_features, labels: batch_labels, learn_rate: learning_rate})
                l = session.run(
                    loss,
                    feed_dict={features: batch_features, labels: batch_labels, learn_rate: learning_rate})
                loss_batch.append(l)

        valid_acc = session.run(
            accuracy,
            feed_dict={features: dataset.validation.images, labels: dataset.validation.labels, learn_rate: 1.0})

    # Hack to reset batches
    dataset.train._index_in_epoch = 0
    dataset.train._epochs_completed = 0

    return loss_batch, valid_acc


def compare_init_weights(
        dataset,
        title,
        weight_init_list,
        plot_n_batches=100):
    """
    Plot loss and print stats of weights using an example neural network
    """
    colors = ['r', 'b', 'g', 'c', 'y', 'k']
    label_accs = []
    label_loss = []

    assert len(weight_init_list) <= len(colors), 'Too many initial weights to plot'

    for i, (weights, label) in enumerate(weight_init_list):
        loss, val_acc = _get_loss_acc(dataset, weights)

        plt.plot(loss[:plot_n_batches], colors[i], label=label)
        label_accs.append((label, val_acc))
        label_loss.append((label, loss[-1]))

    plt.title(title)
    plt.xlabel('Batches')
    plt.ylabel('Loss')
    plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
    plt.show()

    print('After 858 Batches (2 Epochs):')
    print('Validation Accuracy')
    for label, val_acc in label_accs:
        print('  {:7.3f}% -- {}'.format(val_acc*100, label))
    print('Loss')
    for label, loss in label_loss:
        print('  {:7.3f} -- {}'.format(loss, label))
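To see how these helpers fit together, here is a minimal usage sketch. Nothing in it is confirmed by this commit: the module name `helper`, the TensorFlow MNIST loader, and the layer shapes are assumptions (784 → 256 → 128 → 10 is implied by the biases hard-coded in `_get_loss_acc`).

```python
# Hypothetical driver for the helpers above; assumes the file is saved as
# helper.py and that TensorFlow 1.0's bundled MNIST loader is available.
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

import helper  # module name assumed

# one_hot=True so dataset.train.labels.shape[1] == 10, as _get_loss_acc expects
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Weight shapes implied by the hard-coded biases: 784 -> 256 -> 128 -> 10
layer_shapes = [(784, 256), (256, 128), (128, 10)]

# Two candidate initialization schemes to compare
uniform_weights = [tf.Variable(tf.random_uniform(shape, -1, 1)) for shape in layer_shapes]
normal_weights = [tf.Variable(tf.truncated_normal(shape, stddev=0.1)) for shape in layer_shapes]

helper.compare_init_weights(
    mnist,
    'Uniform [-1, 1) vs. Truncated Normal',
    [(uniform_weights, 'Uniform [-1, 1)'),
     (normal_weights, 'Truncated Normal, stddev=0.1')])

# hist_dist plots a single weight distribution on its own:
helper.hist_dist('Uniform [-1, 1)', tf.random_uniform([1000], -1, 1))
```

Each `(weights, label)` pair gets its own loss curve, so the early-training effect of each initialization scheme is directly comparable; note that the `MNIST_data/` entry in the `.gitignore` above matches the download directory used here.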
@@ -0,0 +1,47 @@
appdirs==1.4.3
appnope==0.1.0
bleach==2.0.0
cycler==0.10.0
decorator==4.0.11
entrypoints==0.2.2
html5lib==0.999999999
ipykernel==4.5.2
ipython==5.3.0
ipython-genutils==0.2.0
ipywidgets==6.0.0
Jinja2==2.9.5
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.0.0
jupyter-console==5.1.0
jupyter-core==4.3.0
MarkupSafe==1.0
matplotlib==2.0.0
mistune==0.7.4
nbconvert==5.1.1
nbformat==4.3.0
notebook==4.4.1
numpy==1.12.1
packaging==16.8
pandocfilters==1.4.1
pexpect==4.2.1
pickleshare==0.7.4
prompt-toolkit==1.0.14
protobuf==3.2.0
ptyprocess==0.5.1
Pygments==2.2.0
pyparsing==2.2.0
python-dateutil==2.6.0
pytz==2017.2
pyzmq==16.0.2
qtconsole==4.3.0
simplegeneric==0.8.1
six==1.10.0
tensorflow==1.0.0
terminado==0.6
testpath==0.3
tornado==4.4.3
traitlets==4.3.2
wcwidth==0.1.7
webencodings==0.5
widgetsnbextension==2.0.0

deep-learning/udacity-deeplearning/weight-initialization/weight_initialization.ipynb

+731 lines (large diff not rendered by default)
