
Minor fixes in the VQA tutorial #196

Closed
wants to merge 4 commits

Conversation

NarineK (Contributor) commented Dec 4, 2019

  • Added an early forward termination exception for the resnet model, raised once the 4th layer's output has been captured, and tested that it works properly with DataParallel (a sketch of the idea follows this list).

  • Tested and added proper support for DataParallel running on all available GPUs; previously the tutorial was fixed to 2 devices only.
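A minimal sketch of the early-termination pattern described above, assuming a torchvision ResNet; the StopForward exception and the wrapper class are illustrative names, not the tutorial's exact code:

import torch
import torchvision

class StopForward(Exception):
    """Raised from a forward hook to cut the forward pass short once the needed activations are saved."""

class ResnetLayer4Features(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model = torchvision.models.resnet152(pretrained=True)
        self.buffer = {}

        def save_and_stop(module, input, output):
            # Keep the layer4 activations (keyed by device so DataParallel replicas
            # on different GPUs do not overwrite each other) and stop early.
            self.buffer[output.device] = output
            raise StopForward()

        self.model.layer4.register_forward_hook(save_and_stop)

    def forward(self, x):
        try:
            self.model(x)
        except StopForward:
            pass
        return self.buffer[x.device]

# Without an explicit device_ids list, DataParallel uses every visible GPU:
# features_net = torch.nn.DataParallel(ResnetLayer4Features().cuda())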

"metadata": {},
"outputs": [],
"source": [
"vqa_resnet = VQA_Resnet_Model(vqa_net.module.text.embedding.num_embeddings)\n",
"# `device_ids` contains a list of GPU ids which are used for paralelization supported by `DataParallel`\n",
"vqa_resnet = torch.nn.DataParallel(vqa_resnet, device_ids=[0,1])\n",

Removing the fixed number of devices in order to take advantage of all of them.

@@ -134,7 +128,7 @@
"metadata": {},
"outputs": [],
"source": [
"vqa_net = torch.nn.DataParallel(Net(num_tokens), device_ids=[0,1])\n",
"vqa_net = torch.nn.DataParallel(Net(num_tokens))\n",

Removing the fixed number of devices in order to take advantage of all of them.
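For context on the default behavior (an illustrative snippet, not part of the diff): when `device_ids` is omitted, `DataParallel` replicates the wrapped module across every visible GPU.

import torch
import torch.nn as nn

# Stand-in module; in the tutorial this is Net(num_tokens).
module = nn.Linear(16, 4)

# device_ids=None (the default) means all visible CUDA devices,
# i.e. list(range(torch.cuda.device_count())).
parallel_module = nn.DataParallel(module)

# The visible set can still be restricted outside the script, e.g.
#   CUDA_VISIBLE_DEVICES=0,1 python tutorial.py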

"\n",
" def save_output(module, input, output):\n",
" self.buffer = output\n",
" with lock:\n",
" self.buffer[output.device]= output\n",


nit: space before =
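Beyond the nit, the motivation for this change: DataParallel runs one replica per GPU in its own thread, so the hook now stores each replica's output in a dict keyed by device, guarded by a lock. A standalone sketch of the pattern (the lock and buffer names follow the diff; the toy module around them is made up):

import threading
import torch.nn as nn

lock = threading.Lock()

class HookedLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 8)
        self.buffer = {}

        def save_output(module, input, output):
            # Replicas created by DataParallel write here concurrently from
            # different threads, hence the lock and the per-device key.
            with lock:
                self.buffer[output.device] = output

        self.layer.register_forward_hook(save_output)

    def forward(self, x):
        return self.layer(x)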

miguelmartin75 commented Dec 5, 2019

LGTM (besides the nit)

Thanks, addressed the nit

facebook-github-bot (Contributor) left a comment:

@NarineK has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@NarineK merged this pull request in 8386d1c.

miguelmartin75 pushed a commit to miguelmartin75/captum that referenced this pull request Dec 20, 2019
…ch#196)

Summary:
allow-large-files
+ Tested and added proper support for DataParallel that is being executed on all GPUs. Before we used to fix it to 2 devices only.
Pull Request resolved: pytorch#196

Differential Revision: D18827192

Pulled By: NarineK

fbshipit-source-id: 1c3c0021253e35078f1a72ac879edabc482a4ccb
NarineK added a commit to NarineK/captum-1 that referenced this pull request Nov 19, 2020
…ch#196)

Summary:
allow-large-files
+ Tested and added proper support for DataParallel that is being executed on all GPUs. Before we used to fix it to 2 devices only.
Pull Request resolved: pytorch#196

Differential Revision: D18827192

Pulled By: NarineK

fbshipit-source-id: 1c3c0021253e35078f1a72ac879edabc482a4ccb