Fixed reparametrization for shear X/Y in autoaugment ops #5384
Conversation
LGTM, with a minor nit.
I assume you will merge after fixing the issue surfaced by the tests in a separate PR, right?
Summary:
* Added ref tests for shear X/Y
* Added PIL tests and fixed tan(level) difference
* Updated tests
* Fixed reparam for shear X/Y in autoaugment
* Fixed arc_level -> level as atan is applied internally
* Fixed links

Reviewed By: NicolasHug
Differential Revision: D34140249
fbshipit-source-id: e7d984977599bbd71e57f403c022620315b052a1
Description:
Context
In this PR we fix a parametrization bug in the affine transformation matrix used by the shear ops in autoaugmentations. In torchvision, the affine transformation matrix for shear X is constructed as
[1, tan(sx), 0, 0, 1, 0]
(vision/torchvision/transforms/functional.py, lines 976 to 982 at commit 21790df)
while the official autoaugment implementations use
[1, sx, 0, 0, 1, 0]
(see, for example, https://github.com/tensorflow/models/blob/dd02069717128186b88afa8d857ce57d17957f03/research/autoaugment/augmentation_transforms.py#L290).
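To make the two parametrizations concrete, here is a minimal stdlib-only sketch (the helper `shear_x_point` is hypothetical, introduced only for illustration) applying the shear X matrix [1, coeff, 0, 0, 1, 0] to a point with both coefficient choices:

```python
import math

def shear_x_point(x, y, coeff):
    # Apply the 2x3 affine matrix [1, coeff, 0, 0, 1, 0] to (x, y):
    # new_x = x + coeff * y, new_y = y.
    return x + coeff * y, y

s = 0.3  # a shear magnitude at the edge of the autoaugment range
print(shear_x_point(10.0, 10.0, math.tan(s)))  # torchvision: coeff = tan(s)
print(shear_x_point(10.0, 10.0, s))            # official autoaugment: coeff = s
```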
The difference between the two is very small in the range [-0.3, 0.3].
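A quick stdlib-only sketch of how small the mismatch is over the autoaugment magnitude range (the bound is simply what this computation yields, not a figure from the PR):

```python
import math

# Maximum gap between torchvision's tan(s) shear coefficient and the
# official autoaugment coefficient s, over s in [-0.3, 0.3].
magnitudes = [i / 1000 for i in range(-300, 301)]
max_diff = max(abs(math.tan(s) - s) for s in magnitudes)
print(max_diff)  # just under 0.01, attained at s = +/-0.3
```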
In detail: 1) the autoaugment policy provides the shear magnitude in radians, and 2) F.affine works on shear in degrees and applies math.tan(math.radians(shear_degree)). We therefore have to modify f(magnitude) so that the equality
math.tan(math.radians(f(magnitude))) == magnitude
holds, i.e. f(magnitude) = math.degrees(math.atan(magnitude)).
I do not expect a major impact of that when training with autoaugment.
With the fix, we now get exactly the same results on Pillow images.
What was done
Fixed the shear X/Y reparametrization mismatch between torchvision and one of the official autoaugment implementations. (TF has two ways of implementing autoaugmentations: a) using PIL, as for CIFAR10, and b) using TF/Keras image ops, as for ImageNet. The difference is in padding: for ImageNet, reflect padding is used. See Added center as top-left for shear X/Y ops for autoaugment #5285.)
Added a test to check torchvision's implementation against the reference implementations using PIL
We are testing 4 cases: