
Commit 0da59c1

Committed by Carlos Araya

Several minor content changes:

- Modified getting started on landing page to address changes from delta PR
- Added torchvision dependency text to tutorials (instructions on how to install via conda or pip)
- Added getting started with Captum Insights content to site
1 parent: 30c432b

8 files changed: +55 −38 lines

docs/captum_insights.md

+1 −1

@@ -7,7 +7,7 @@ Interpreting model output in complex models can be difficult. Even with interpre
  
  Captum Insights is an interpretability visualization widget built on top of Captum to facilitate model understanding. Captum Insights works across images, text, and other features to help users understand feature attribution. Some examples of the widget are below.
  
- Getting started with Captum Insights is easy. To analyze a sample model on CIFAR10 via Captum Insights execute the line below and navigate to the URL specified in the output.
+ Getting started with Captum Insights is easy. You can learn how to use Captum Insights with the [Getting started with Captum Insights](/tutorials/CIFAR_TorchVision_Captum_Insights) tutorial. Alternatively, to analyze a sample model on CIFAR10 via Captum Insights, execute the line below and navigate to the URL specified in the output.
  
  ```
  python -m captum.insights.example

tutorials/CIFAR_TorchVision_Captum_Insights.ipynb

+6 −11

@@ -4,7 +4,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "# Showcases Captum Insights with a simple model on CIFAR10 dataset"
+ "# Getting started with Captum Insights: a simple model on CIFAR10 dataset"
  ]
  },
  {
@@ -13,7 +13,11 @@
  "source": [
  "Demonstrates how to use Captum Insights embedded in a notebook to debug a CIFAR model and test samples. This is a slight modification of the CIFAR_TorchVision_Interpret notebook.\n",
  "\n",
- "More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py"
+ "More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py\n",
+ "\n",
+ "**Note:** This tutorial uses torchvision. To install torchvision, you can do one of the following:\n",
+ "- **Conda:** conda install torchvision -c pytorch\n",
+ "- **Pip:** pip3 install torchvision"
  ]
  },
  {
@@ -161,15 +165,6 @@
  {
  "data": {
  "text/html": [
- "\n",
- " <iframe\n",
- " width=\"100%\"\n",
- " height=\"500px\"\n",
- " src=\"http://127.0.0.1:53361\"\n",
- " frameborder=\"0\"\n",
- " allowfullscreen\n",
- " ></iframe>\n",
- " "
  ],
  "text/plain": [
  "<IPython.lib.display.IFrame at 0x127aeb150>"

tutorials/CIFAR_TorchVision_Interpret.ipynb

+5 −1

@@ -13,7 +13,11 @@
  "source": [
  "Demonstrates how to apply model interpretability algorithms from Captum library on CIFAR model and test samples.\n",
  "\n",
- "More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py"
+ "More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py\n",
+ "\n",
+ "**Note:** This tutorial uses torchvision. To install torchvision, you can do one of the following:\n",
+ "- **Conda:** conda install torchvision -c pytorch\n",
+ "- **Pip:** pip3 install torchvision"
  ]
  },
  {

tutorials/Multimodal_VQA_Interpret.ipynb

+5 −1

@@ -14,7 +14,11 @@
  "In this notebook we demonstrate how to apply model interpretability algorithms from captum library on VQA models. More specifically we explain model predictions by applying integrated gradients on a small sample of image-question pairs. More details about Integrated gradients can be found in the original paper: https://arxiv.org/pdf/1703.01365.pdf\n",
  "\n",
  "As a reference VQA model we use the following open source implementation:\n",
- "https://github.com/Cyanogenoid/pytorch-vqa"
+ "https://github.com/Cyanogenoid/pytorch-vqa\n",
+ "\n",
+ "**Note:** This tutorial uses torchvision. To install torchvision, you can do one of the following:\n",
+ "- **Conda:** conda install torchvision -c pytorch\n",
+ "- **Pip:** pip3 install torchvision"
  ]
  },
  {

tutorials/Resnet_TorchVision_Interpret.ipynb

+5 −1

@@ -13,7 +13,11 @@
  "source": [
  "This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using handpicked images and visualizes the attributions for each pixel by overlaying them on the image.\n",
  "\n",
- "The interpretation algorithms that we use in this notebook are Integrated Gradients (w/o noise tunnel) and GradientShap. Noise tunnel smoothens the attributions after adding Gaussian noise to each input sample."
+ "The interpretation algorithms that we use in this notebook are Integrated Gradients (w/o noise tunnel) and GradientShap. Noise tunnel smoothens the attributions after adding Gaussian noise to each input sample.\n",
+ "\n",
+ "**Note:** This tutorial uses torchvision. To install torchvision, you can do one of the following:\n",
+ "- **Conda:** conda install torchvision -c pytorch\n",
+ "- **Pip:** pip3 install torchvision"
  ]
  },
  {
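Since this notebook pairs Integrated Gradients with a noise tunnel and GradientShap, a rough sketch of what those calls look like is below. `model`, the preprocessed `input` tensor, and `pred_label_idx` (the predicted class) are assumed to come from earlier cells, and keyword names such as `n_samples` vs. `nt_samples` differ between Captum releases.

```python
import torch
from captum.attr import IntegratedGradients, NoiseTunnel, GradientShap

# Assumed from earlier cells: `model` (pretrained ResNet in eval mode),
# `input` (a normalized 1x3xHxW image tensor), `pred_label_idx` (predicted class).

# Plain Integrated Gradients against the default all-zero baseline.
ig = IntegratedGradients(model)
attributions_ig = ig.attribute(input, target=pred_label_idx, n_steps=200)

# Noise tunnel: attribute several Gaussian-perturbed copies of the input
# and aggregate the results (SmoothGrad-style smoothing).
noise_tunnel = NoiseTunnel(ig)
attributions_ig_nt = noise_tunnel.attribute(
    input,
    nt_type="smoothgrad_sq",
    n_samples=10,  # `nt_samples` in newer Captum versions
    target=pred_label_idx,
)

# GradientShap: sample baselines from a distribution, here a black image
# and the unmodified image.
gradient_shap = GradientShap(model)
rand_img_dist = torch.cat([input * 0, input * 1])
attributions_gs = gradient_shap.attribute(
    input,
    baselines=rand_img_dist,
    n_samples=50,
    stdevs=0.0001,
    target=pred_label_idx,
)
```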

website/pages/en/index.js

+25 −23

@@ -118,53 +118,51 @@ import torch
  import torch.nn as nn
  
  from captum.attr import (
-     GradientShap,
-     IntegratedGradients,
-     LayerConductance,
-     NeuronConductance,
-     NoiseTunnel,
+     IntegratedGradients
  )
  
  class ToyModel(nn.Module):
      def __init__(self):
          super().__init__()
- 
-         self.lin1 = nn.Linear(3, 4)
-         self.lin1.weight = nn.Parameter(torch.ones(4, 3))
-         self.lin1.bias = nn.Parameter(torch.tensor([-10.0, 1.0, 1.0, 1.0]))
+         self.lin1 = nn.Linear(3, 3)
          self.relu = nn.ReLU()
-         self.lin2 = nn.Linear(4, 1)
-         self.lin2.weight = nn.Parameter(torch.ones(1, 4))
-         self.lin2.bias = nn.Parameter(torch.tensor([-3.0]))
+         self.lin2 = nn.Linear(3, 2)
+         self.sigmoid = nn.Sigmoid()
+ 
+         # initialize weights and biases
+         self.lin1.weight = nn.Parameter(torch.arange(0.0, 9.0).view(3, 3))
+         self.lin1.bias = nn.Parameter(torch.zeros(1, 3))
+         self.lin2.weight = nn.Parameter(torch.arange(0.0, 6.0).view(2, 3))
+         self.lin2.bias = nn.Parameter(torch.ones(1, 2))
  
      def forward(self, input):
-         lin1 = self.lin1(input)
-         relu = self.relu(lin1)
-         lin2 = self.lin2(relu)
-         return lin2
+         return self.sigmoid(self.lin2(self.relu(self.lin1(input))))
  
  
  model = ToyModel()
  model.eval()
- torch.manual_seed(123)
- np.random.seed(124)
  `;
  // Example for defining an acquisition function
  const defineInputBaseline = `${pre}python
  input = torch.rand(2, 3)
  baseline = torch.zeros(2, 3)
  `;
+ 
+ const randomSeedsDefinition = `${pre}python
+ torch.manual_seed(123)
+ np.random.seed(123)
+ `;
  // Example for optimizing candidates
  const instantiateApply = `${pre}python
  ig = IntegratedGradients(model)
- attributions, delta = ig.attribute(input, baseline)
- print('IG Attributions: ', attributions, ' Approximation error: ', delta)
+ attributions, delta = ig.attribute(input, baseline, target=0, return_convergence_delta=True)
+ print('IG Attributions: ', attributions, ' Convergence Delta: ', delta)
  `;
  
  const igOutput = `${pre}python
- IG Attributions: tensor([[0.8883, 1.5497, 0.7550],
-                          [2.0657, 0.2219, 2.5996]])
- Approximation Error: 9.5367431640625e-07
+ IG Attributions: tensor([[0.0628, 0.1314, 0.0747],
+                          [0.0930, 0.0120, 0.1639]])
+ Convergence Delta: tensor([0., 0.])
  `;
  //
  const QuickStart = () => (

@@ -186,6 +184,10 @@ Approximation Error: 9.5367431640625e-07
            <h4>Create and prepare model:</h4>
            <MarkdownBlock>{createModelExample}</MarkdownBlock>
          </li>
+         <li>
+           <h4>To make computations deterministic, let's fix random seeds:</h4>
+           <MarkdownBlock>{randomSeedsDefinition}</MarkdownBlock>
+         </li>
          <li>
            <h4>Define input and baseline tensors:</h4>
            <MarkdownBlock>{defineInputBaseline}</MarkdownBlock>
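Assembled end to end, the updated quickstart snippets correspond to roughly the following runnable script. The `import numpy as np` line is an assumption (the page calls `np.random.seed` without showing that import), and the exact attribution values depend on the randomly drawn `input`.

```python
import numpy as np
import torch
import torch.nn as nn

from captum.attr import IntegratedGradients


class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(3, 3)
        self.relu = nn.ReLU()
        self.lin2 = nn.Linear(3, 2)
        self.sigmoid = nn.Sigmoid()

        # initialize weights and biases
        self.lin1.weight = nn.Parameter(torch.arange(0.0, 9.0).view(3, 3))
        self.lin1.bias = nn.Parameter(torch.zeros(1, 3))
        self.lin2.weight = nn.Parameter(torch.arange(0.0, 6.0).view(2, 3))
        self.lin2.bias = nn.Parameter(torch.ones(1, 2))

    def forward(self, input):
        return self.sigmoid(self.lin2(self.relu(self.lin1(input))))


model = ToyModel()
model.eval()

# Fix random seeds so the example is deterministic.
torch.manual_seed(123)
np.random.seed(123)

# Two samples with three features each, and an all-zero baseline.
input = torch.rand(2, 3)
baseline = torch.zeros(2, 3)

# Attribute the first output (target=0) to the input features and also
# return the convergence delta as a sanity check on the approximation.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(input, baseline, target=0, return_convergence_delta=True)
print('IG Attributions: ', attributions, ' Convergence Delta: ', delta)
```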

website/pages/tutorials/index.js

+4 −0

@@ -77,6 +77,10 @@ class TutorialHome extends React.Component {
            Using Captum and Integrated Gradients we interpret the output of several test questions and analyze the attribution scores
            of the text and visual parts of the model. Find the tutorial <a href="Multimodal_VQA_Interpret">here</a>.
  
+           <h4>Getting Started with Captum Insights:</h4>
+           This tutorial demonstrates how to use Captum Insights for a vision model in a notebook setting. A simple pretrained torchvision
+           CNN model is loaded and then used on the CIFAR dataset. Captum Insights is then used to visualize the attributions for specific examples.
+           Find the tutorial <a href="CIFAR_TorchVision_Captum_Insights">here</a>.
          </p>
        </body>
      </div>

website/tutorials.json

+4 −0

@@ -18,6 +18,10 @@
    {
      "id": "Multimodal_VQA_Interpret",
      "title": "Intepreting multimodal models"
+   },
+   {
+     "id": "CIFAR_TorchVision_Captum_Insights",
+     "title": "Getting started with Captum Insights"
    }
  ]
}
