- Modified getting started on landing page to address changes from delta PR
- Added torchvision dependency text to tutorials (instructions on how to install via conda or pip)
- Added "Getting started with Captum Insights" content to the site.
docs/captum_insights.md (+1 -1)
@@ -7,7 +7,7 @@ Interpreting model output in complex models can be difficult. Even with interpre
 Captum Insights is an interpretability visualization widget built on top of Captum to facilitate model understanding. Captum Insights works across images, text, and other features to help users understand feature attribution. Some examples of the widget are below.
-Getting started with Captum Insights is easy. To analyze a sample model on CIFAR10 via Captum Insights execute the line below and navigate to the URL specified in the output.
+Getting started with Captum Insights is easy. You can learn how to use Captum Insights with the [Getting started with Captum Insights](/tutorials/CIFAR_TorchVision_Captum_Insights) tutorial. Alternatively, to analyze a sample model on CIFAR10 via Captum Insights execute the line below and navigate to the URL specified in the output.
tutorials/CIFAR_TorchVision_Captum_Insights.ipynb (+6 -11)
@@ -4,7 +4,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Showcases Captum Insights with a simple model on CIFAR10 dataset"
+"# Getting started with Captum Insights: a simple model on CIFAR10 dataset"
 ]
 },
 {
@@ -13,7 +13,11 @@
 "source": [
 "Demonstrates how to use Captum Insights embedded in a notebook to debug a CIFAR model and test samples. This is a slight modification of the CIFAR_TorchVision_Interpret notebook.\n",
 "\n",
-"More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py"
+"More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py\n",
+"\n",
+"**Note:** This tutorial uses torchvision. To download torchvision you can do one of the following:\n",
tutorials/CIFAR_TorchVision_Interpret.ipynb (+5 -1)
@@ -13,7 +13,11 @@
 "source": [
 "Demonstrates how to apply model interpretability algorithms from Captum library on CIFAR model and test samples.\n",
 "\n",
-"More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py"
+"More details about the model can be found here: https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py\n",
+"\n",
+"**Note:** This tutorial uses torchvision. To download torchvision you can do one of the following:\n",
tutorials/Multimodal_VQA_Interpret.ipynb (+5 -1)
@@ -14,7 +14,11 @@
 "In this notebook we demonstrate how to apply model interpretability algorithms from captum library on VQA models. More specifically we explain model predictions by applying integrated gradients on a small sample of image-question pairs. More details about Integrated gradients can be found in the original paper: https://arxiv.org/pdf/1703.01365.pdf\n",
 "\n",
 "As a reference VQA model we use the following open source implementation:\n",
-"https://github.com/Cyanogenoid/pytorch-vqa"
+"https://github.com/Cyanogenoid/pytorch-vqa\n",
+"\n",
+"**Note:** This tutorial uses torchvision. To download torchvision you can do one of the following:\n",
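The Integrated Gradients method this tutorial applies approximates a path integral of gradients from a baseline to the input. As a rough illustration of the formula from the paper linked above (not Captum's actual implementation; the toy model F(x) = Σ xᵢ² and its analytic gradient are assumptions chosen purely for demonstration):

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients:
    attribution_i = (x_i - baseline_i) * mean over alpha of dF/dx_i
    evaluated along the straight path baseline + alpha * (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack([f_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy model F(x) = sum(x_i^2); its gradient is 2x.
grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad, x, baseline)
print(attr)        # → [1. 4. 9.]
print(attr.sum())  # → 14.0, equal to F(x) - F(baseline)
```

The completeness axiom, attributions summing to F(x) − F(baseline), is a quick sanity check for any Integrated Gradients implementation.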
tutorials/Resnet_TorchVision_Interpret.ipynb (+5 -1)
@@ -13,7 +13,11 @@
 "source": [
 "This notebook demonstrates how to apply model interpretability algorithms on a pretrained ResNet model using handpicked images and visualizes the attributions for each pixel by overlaying them on the image.\n",
 "\n",
-"The interpretation algorithms that we use in this notebook are Integrated Gradients (w/o noise tunnel) and GradientShap. Noise tunnel allows to smoothen the attributions after adding gaussian noise to each input sample."
+"The interpretation algorithms that we use in this notebook are Integrated Gradients (w/o noise tunnel) and GradientShap. Noise tunnel allows to smoothen the attributions after adding gaussian noise to each input sample.\n",
+"\n",
+"**Note:** This tutorial uses torchvision. To download torchvision you can do one of the following:\n",
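The noise-tunnel smoothing described above, averaging attributions or gradients over Gaussian-perturbed copies of the input, can be sketched in plain NumPy. This is a hand-rolled illustration under an assumed toy gradient function, not Captum's `NoiseTunnel` API:

```python
import numpy as np

def smoothgrad(f_grad, x, noise_std=0.1, n_samples=25, seed=0):
    """Average gradients over Gaussian-perturbed copies of the input.
    This is the SmoothGrad-style averaging that noise tunnel performs."""
    rng = np.random.default_rng(seed)
    grads = [f_grad(x + rng.normal(0.0, noise_std, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy gradient of F(x) = sum(x_i^2); assumed for demonstration only.
grad = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
sg = smoothgrad(grad, x)
# Because the toy gradient is linear, the noise averages out toward 2*x.
print(sg)
```

In Captum itself the same idea is exposed by wrapping an attribution method (for example Integrated Gradients) in a noise tunnel, so the averaging happens over attributions rather than raw gradients.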