Adversarial example generation by FGSM: different normalization of training vs test images? #1032
Labels: Adversarial Training (issues relating to the adversarial example generation tutorial), docathon-h1-2023 (a label for the docathon in H1 2023), medium
In the Adversarial Example Generation tutorial, the classifier from https://github.com/pytorch/examples/tree/master/mnist is used. However, this classifier is trained with input normalization
transforms.Normalize((0.1307,), (0.3081,))
while in the FGSM tutorial no normalization is applied and the perturbed images are clamped to [0, 1]. Is this not a contradiction?
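To make the mismatch concrete, here is a minimal sketch (using only the mean and std from the MNIST example's `transforms.Normalize((0.1307,), (0.3081,))`; the numeric ranges below are my own arithmetic, not from either tutorial) showing that the value range the classifier saw during training is very different from the `[0, 1]` range the FGSM tutorial clamps to:

```python
# Mean and std from the MNIST example's transforms.Normalize((0.1307,), (0.3081,))
MEAN, STD = 0.1307, 0.3081

def normalize(pixel: float) -> float:
    """Apply the same per-pixel normalization the training pipeline uses."""
    return (pixel - MEAN) / STD

# Range of inputs the trained classifier actually saw:
lo, hi = normalize(0.0), normalize(1.0)
print(f"training input range: [{lo:.4f}, {hi:.4f}]")
# roughly [-0.4242, 2.8215]

# The FGSM tutorial instead feeds raw pixels clamped to [0, 1]
# (torch.clamp(perturbed_image, 0, 1)), i.e. a range the model
# never encountered during training.
```

One common resolution (an assumption on my part, not something the tutorial states) is to keep the adversarial perturbation and clamping in raw `[0, 1]` pixel space and apply the normalization as the first step inside the model's `forward`, so the classifier always receives normalized inputs while the epsilon budget remains interpretable in pixel units.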