
Commit e83f94b

erocoar authored and soumith committed
typos (#243)
1 parent 2d44031 commit e83f94b

1 file changed: +4 −4 lines changed


advanced_source/neural_style_tutorial.py

Lines changed: 4 additions & 4 deletions
@@ -15,7 +15,7 @@
 ~~~~~~~~~~~~
 
 The Neural-Style, or Neural-Transfer, is an algorithm that takes as
-input a content-image (e.g. a tortle), a style-image (e.g. artistic
+input a content-image (e.g. a turtle), a style-image (e.g. artistic
 waves) and return the content of the content-image as if it was
 'painted' using the artistic style of the style-image:
 
@@ -202,7 +202,7 @@ def image_loader(image_name):
 
 
 ######################################################################
-# Imported PIL images has values between 0 and 255. Transformed into torch
+# Imported PIL images have values between 0 and 255. Transformed into torch
 # tensors, their values are between 0 and 1. This is an important detail:
 # neural networks from torch library are trained with 0-1 tensor image. If
 # you try to feed the networks with 0-255 tensor images the activated
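
For context on the passage this hunk touches, here is a minimal sketch (not part of the commit) of how the 0-255 versus 0-1 detail shows up in practice. It assumes torchvision is installed, uses a hypothetical image path, and picks an arbitrary resize value; ``transforms.ToTensor`` is what rescales 8-bit PIL pixel values into 0-1 float tensors.

from PIL import Image
import torchvision.transforms as transforms

# hypothetical path; any RGB image works for the demonstration
image = Image.open("images/turtle.jpg")

loader = transforms.Compose([
    transforms.Resize(512),   # size chosen arbitrarily for this sketch
    transforms.ToTensor(),    # converts 0-255 PIL pixel values into a 0-1 float tensor
])

tensor = loader(image).unsqueeze(0)              # add a batch dimension
print(tensor.min().item(), tensor.max().item())  # both values lie within [0, 1]

Feeding such 0-1 tensors matches what the pretrained torch networks in the tutorial were trained on; a 0-255 tensor would push the activations out of their expected range.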
@@ -244,7 +244,7 @@ def imshow(tensor, title=None):
 #
 # The content loss is a function that takes as input the feature maps
 # :math:`F_{XL}` at a layer :math:`L` in a network fed by :math:`X` and
-# return the weigthed content distance :math:`w_{CL}.D_C^L(X,C)` between
+# returns the weigthed content distance :math:`w_{CL}.D_C^L(X,C)` between
 # this image and the content image. Hence, the weight :math:`w_{CL}` and
 # the target content :math:`F_{CL}` are parameters of the function. We
 # implement this function as a torch module with a constructor that takes
@@ -261,7 +261,7 @@ def imshow(tensor, title=None):
 # of the neural network. The computed loss is saved as a parameter of the
 # module.
 #
-# Finally, we define a fake ``backward`` method, that just call the
+# Finally, we define a fake ``backward`` method that just calls the
 # backward method of ``nn.MSELoss`` in order to reconstruct the gradient.
 # This method returns the computed loss: this will be useful when running
 # the gradient descent in order to display the evolution of style and
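
For readers without the surrounding file, the two hunks above edit the comments describing the ``ContentLoss`` module. Below is a rough sketch of a module matching those comments; the names and signatures follow the diff text but are illustrative rather than a verbatim copy of the tutorial's code.

import torch.nn as nn

class ContentLoss(nn.Module):
    """Illustrative sketch of the content-loss module described above."""

    def __init__(self, target, weight):
        super(ContentLoss, self).__init__()
        # detach the target feature maps F_CL so gradients do not flow into them
        self.target = target.detach() * weight
        self.weight = weight            # the content weight w_CL
        self.criterion = nn.MSELoss()

    def forward(self, input):
        # weighted content distance w_CL . D_C^L(X, C), saved on the module
        self.loss = self.criterion(input * self.weight, self.target)
        # pass the input through unchanged so the rest of the network keeps running
        self.output = input
        return self.output

    def backward(self, retain_graph=True):
        # 'fake' backward: just re-run autograd through the stored MSE loss
        self.loss.backward(retain_graph=retain_graph)
        # return the loss so the training loop can display its evolution
        return self.loss

During optimization, each such module runs in the forward pass of the network; calling its ``backward`` afterwards both propagates the MSE gradient and hands back the loss value for logging.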

0 commit comments
