Minor fixes in the VQA tutorial #196
Conversation
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"vqa_resnet = VQA_Resnet_Model(vqa_net.module.text.embedding.num_embeddings)\n", | ||
"# `device_ids` contains a list of GPU ids which are used for paralelization supported by `DataParallel`\n", | ||
"vqa_resnet = torch.nn.DataParallel(vqa_resnet, device_ids=[0,1])\n", |
Removing the fixed list of device ids so that DataParallel can take advantage of all available GPUs.
@@ -134,7 +128,7 @@
"metadata": {},
"outputs": [],
"source": [
"vqa_net = torch.nn.DataParallel(Net(num_tokens), device_ids=[0,1])\n",
"vqa_net = torch.nn.DataParallel(Net(num_tokens))\n",
Removing the fixed list of device ids so that DataParallel can take advantage of all available GPUs.
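A minimal standalone sketch of what this change does (the `nn.Linear` stand-in for `Net(num_tokens)` is hypothetical): when `device_ids` is omitted, `DataParallel` defaults to every visible CUDA device instead of a hard-coded pair.

```python
import torch
import torch.nn as nn

# Stand-in for Net(num_tokens); any nn.Module behaves the same way here.
model = nn.Linear(128, 10)

if torch.cuda.is_available():
    model = model.cuda()
    # Before: torch.nn.DataParallel(model, device_ids=[0, 1])  -- pinned to 2 GPUs.
    # After: with device_ids omitted, it defaults to
    # list(range(torch.cuda.device_count())), i.e. all visible GPUs.
    model = nn.DataParallel(model)
    print(model.device_ids)  # e.g. [0, 1, 2, 3] on a 4-GPU machine
```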
"\n", | ||
" def save_output(module, input, output):\n", | ||
" self.buffer = output\n", | ||
" with lock:\n", | ||
" self.buffer[output.device]= output\n", |
nit: space before =
LGTM (besides the nit).
Thanks, addressed the nit.
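For context, here is a simplified sketch of the pattern the hunk above changes (the `FeatureBuffer` class and the toy module are illustrative, not the tutorial's exact code): under `DataParallel` the forward hook fires once per replica, each on its own thread, so the output is stored per device under a lock rather than overwriting a single tensor.

```python
import threading
import torch
import torch.nn as nn

lock = threading.Lock()

class FeatureBuffer:
    """Collects a layer's output from every DataParallel replica."""
    def __init__(self, module):
        self.buffer = {}  # one entry per device instead of a single tensor
        module.register_forward_hook(self.save_output)

    def save_output(self, module, input, output):
        # DataParallel runs replicas in separate threads, so guard the dict.
        with lock:
            self.buffer[output.device] = output

backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
feats = FeatureBuffer(backbone[0])

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.is_available():
    backbone = nn.DataParallel(backbone.cuda())

_ = backbone(torch.randn(4, 3, 32, 32, device=device))
print(list(feats.buffer.keys()))  # one device key per replica that ran
```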
@NarineK has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
…ch#196)
Summary: allow-large-files + Tested and added proper support for DataParallel that is being executed on all GPUs. Before we used to fix it to 2 devices only.
Pull Request resolved: pytorch#196
Differential Revision: D18827192
Pulled By: NarineK
fbshipit-source-id: 1c3c0021253e35078f1a72ac879edabc482a4ccb
Added an Early Forward Termination exception for the ResNet model after accessing the 4th layer, and tested that it works properly with DataParallel (see the sketch below).
Tested and added proper support for DataParallel so that it executes on all GPUs. Previously it was fixed to 2 devices only.
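A hedged illustration of the early-termination idea from the first commit, using torchvision's `resnet18` as a stand-in for the tutorial's model (the `EarlyForwardTermination` name and the layer-4 cutoff come from the commit description; everything else is an assumption): a forward hook on `layer4` captures its output and raises, so the remaining `avgpool`/`fc` layers are skipped, and the caller catches the exception. Per the commit message, the same pattern was tested under DataParallel.

```python
import torch
import torchvision

class EarlyForwardTermination(Exception):
    """Raised once the features we need have been captured."""
    pass

resnet = torchvision.models.resnet18()
captured = {}

def stop_after_layer4(module, input, output):
    captured["layer4"] = output
    raise EarlyForwardTermination()  # skip avgpool and fc

resnet.layer4.register_forward_hook(stop_after_layer4)

try:
    resnet(torch.randn(1, 3, 224, 224))
except EarlyForwardTermination:
    pass

print(captured["layer4"].shape)  # torch.Size([1, 512, 7, 7])
```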