Reduce number of iterations in USB tutorial #2771

Merged · 16 commits · Mar 5, 2024
1 change: 0 additions & 1 deletion .jenkins/validate_tutorials_built.py
@@ -28,7 +28,6 @@
"intermediate_source/_torch_export_nightly_tutorial", # does not work on release
"advanced_source/super_resolution_with_onnxruntime",
"advanced_source/ddp_pipeline", # requires 4 gpus
"advanced_source/usb_semisup_learn", # in the current form takes 140+ minutes to build - can be enabled when the build time is reduced
"prototype_source/fx_graph_mode_ptq_dynamic",
"prototype_source/vmap_recipe",
"prototype_source/torchscript_freezing",
Binary file added _static/img/usb_semisup_learn/code.png
33 changes: 20 additions & 13 deletions advanced_source/usb_semisup_learn.py
@@ -5,7 +5,7 @@
**Author**: `Hao Chen <https://github.com/Hhhhhhao>`_

Unified Semi-supervised learning Benchmark (USB) is a semi-supervised
-learning framework built upon PyTorch.
+learning (SSL) framework built upon PyTorch.
Based on Datasets and Modules provided by PyTorch, USB becomes a flexible,
modular, and easy-to-use framework for semi-supervised learning.
It supports a variety of semi-supervised learning algorithms, including
@@ -17,7 +17,7 @@
This tutorial will walk you through the basics of using the USB ``semilearn``
package.
Let's get started by training a ``FreeMatch``/``SoftMatch`` model on
-CIFAR-10 using pretrained ViT!
+CIFAR-10 using pretrained Vision Transformers (ViT)!
Contributor Author


I wrote out what SSL and ViT stood for (please double check) since they are used throughout the tutorial and initially I didn't know what they meant. Maybe for ML experts they are obvious...

We will also show how easy it is to change the semi-supervised algorithm and train
on imbalanced datasets.

@@ -64,6 +64,9 @@
# Now, let's use USB to train ``FreeMatch`` and ``SoftMatch`` on CIFAR-10.
# First, we need to install the USB package ``semilearn`` and import the
# necessary API functions from USB.
+# If you are running this in Google Colab, install ``semilearn`` by running:
+# ``!pip install semilearn``.
+#
# Below is a list of functions we will use from ``semilearn``:
#
# - ``get_dataset`` to load the dataset; here we use CIFAR-10
@@ -77,6 +80,10 @@
# - ``Trainer``: a Trainer class for training and evaluating the
# algorithm on the dataset
#
+# Note that a CUDA-enabled backend is required for training with the ``semilearn`` package.
+# See `Enabling CUDA in Google Colab <https://pytorch.org/tutorials/beginner/colab#using-cuda>`__
+# for instructions on enabling it.
+#
import semilearn
from semilearn import get_dataset, get_data_loader, get_net_builder, get_algorithm, get_config, Trainer
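For orientation, here is how these imports chain together end to end. This is a hedged sketch based on the upstream USB quickstart, not code from this diff: the exact ``get_algorithm``/``get_dataset`` signatures and the ``uratio`` key are assumptions that may differ across ``semilearn`` versions, and ``config`` is assumed to come from ``get_config`` applied to a dict like the one in the hunk below.

# Sketch of the full pipeline (assumed API, per the upstream USB quickstart)
# build the SSL algorithm from the config and a network builder
algorithm = get_algorithm(config, get_net_builder(config.net, from_name=False),
                          tb_log=None, logger=None)
# load the CIFAR-10 splits: labeled train, unlabeled train, and eval
dataset_dict = get_dataset(config, config.algorithm, config.dataset,
                           config.num_labels, config.num_classes)
train_lb_loader = get_data_loader(config, dataset_dict['train_lb'], config.batch_size)
train_ulb_loader = get_data_loader(config, dataset_dict['train_ulb'],
                                   int(config.batch_size * config.uratio))
eval_loader = get_data_loader(config, dataset_dict['eval'], config.eval_batch_size)
# train, then evaluate (the same calls appear later in this diff)
trainer = Trainer(config, algorithm)
trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)
trainer.evaluate(eval_loader)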

@@ -92,7 +99,7 @@

# optimization configs
'epoch': 1,
-'num_train_iter': 4000,
+'num_train_iter': 500,
'num_eval_iter': 500,
'num_log_iter': 50,
'optim': 'AdamW',
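The hunk above shows only five optimization keys; for the calls sketched earlier to work, the collapsed parts of the dict must also name the algorithm, backbone, dataset, and device. Below is a hedged reconstruction for illustration only: key names follow the upstream USB examples, and every value outside the hunk above is an assumption, not a quote from this PR.

config = get_config({
    # algorithm and backbone (assumed; the prose names FreeMatch and a pretrained ViT)
    'algorithm': 'freematch',      # 'softmatch' for the second run
    'net': 'vit_tiny_patch2_32',   # exact backbone name assumed
    'use_pretrain': True,

    # optimization configs (these five appear in the hunk above)
    'epoch': 1,
    'num_train_iter': 500,
    'num_eval_iter': 500,
    'num_log_iter': 50,
    'optim': 'AdamW',

    # further optimization keys (assumed values)
    'lr': 5e-4,
    'batch_size': 16,
    'eval_batch_size': 16,
    'uratio': 1,                   # unlabeled-to-labeled batch ratio

    # dataset configs (CIFAR-10 with 40 labels, per the tutorial)
    'dataset': 'cifar10',
    'num_labels': 40,
    'num_classes': 10,
    'img_size': 32,

    # device configs (CUDA required, as noted earlier)
    'gpu': 0,
    'world_size': 1,
    'distributed': False,
})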
@@ -141,16 +148,16 @@

######################################################################
# We can start training the algorithms on CIFAR-10 with 40 labels now.
-# We train for 4000 iterations and evaluate every 500 iterations.
+# We train for 500 iterations and evaluate every 500 iterations.
#
trainer = Trainer(config, algorithm)
trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)


######################################################################
# Finally, let's evaluate the trained model on the validation set.
-# After training 4000 iterations with ``FreeMatch`` on only 40 labels of
-# CIFAR-10, we obtain a classifier that achieves above 93 accuracy on the validation set.
+# After training 500 iterations with ``FreeMatch`` on only 40 labels of
+# CIFAR-10, we obtain a classifier that achieves around 87% accuracy on the validation set.
trainer.evaluate(eval_loader)


@@ -174,7 +181,7 @@

# optimization configs
'epoch': 1,
-'num_train_iter': 4000,
+'num_train_iter': 500,
'num_eval_iter': 500,
'num_log_iter': 50,
'optim': 'AdamW',
@@ -225,7 +232,7 @@

######################################################################
# We can start training the algorithms on CIFAR-10 with 40 labels now.
-# We train for 4000 iterations and evaluate every 500 iterations.
+# We train for 500 iterations and evaluate every 500 iterations.
#
trainer = Trainer(config, algorithm)
trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)
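Since the point of this second run is that swapping SSL algorithms is cheap, the switch from ``FreeMatch`` to ``SoftMatch`` plausibly reduces to changing one config key and rebuilding the algorithm. A hedged sketch follows; ``config_dict`` is a hypothetical name for the dict passed to ``get_config``, and the ``get_algorithm`` signature is assumed from the upstream USB quickstart.

# Hypothetical one-key switch (sketch, not this diff's code)
config_dict['algorithm'] = 'softmatch'
config = get_config(config_dict)
algorithm = get_algorithm(config, get_net_builder(config.net, from_name=False),
                          tb_log=None, logger=None)
trainer = Trainer(config, algorithm)
trainer.fit(train_lb_loader, train_ulb_loader, eval_loader)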
@@ -239,8 +246,8 @@


######################################################################
-# References
-# [1] USB: https://github.com/microsoft/Semi-supervised-learning
-# [2] Kihyuk Sohn et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
-# [3] Yidong Wang et al. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
-# [4] Hao Chen et al. SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning
+# References:
+# - [1] USB: https://github.com/microsoft/Semi-supervised-learning
+# - [2] Kihyuk Sohn et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
+# - [3] Yidong Wang et al. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
+# - [4] Hao Chen et al. SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning
8 changes: 8 additions & 0 deletions beginner_source/colab.rst
@@ -93,3 +93,11 @@ Hopefully this example will give you a good starting point for running
some of the more complex tutorials in Colab. As we evolve our use of
Colab on the PyTorch tutorials site, we'll look at ways to make this
easier for users.

+Enabling CUDA
+~~~~~~~~~~~~~~~~
+Some tutorials require a CUDA-enabled device (NVIDIA GPU), which means
+changing the runtime type before executing the tutorial.
+To change the runtime in Google Colab, open the **Runtime** menu at the top,
+then select **Change runtime type**. Under **Hardware accelerator**,
+select ``T4 GPU``, then click ``Save``.
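After switching the runtime, a quick check from Python confirms the GPU is actually visible; ``torch.cuda.is_available`` and ``torch.cuda.get_device_name`` are standard PyTorch calls.

import torch

# On a T4 runtime this should print True and a device name such as "Tesla T4"
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))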