<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>(experimental) Quantized Transfer Learning for Computer Vision Tutorial — PyTorch Tutorials 1.4.0 documentation</title>
<link rel="stylesheet" href="../_static/css/theme.css" type="text/css" />
<!-- <link rel="stylesheet" href="../_static/pygments.css" type="text/css" /> -->
<link rel="stylesheet" href="../_static/gallery.css" type="text/css" />
<link rel="index" title="Index" href="../genindex.html" />
<link rel="search" title="Search" href="../search.html" />
<link rel="next" title="(experimental) Dynamic Quantization on BERT" href="dynamic_quantization_bert_tutorial.html" />
<link rel="prev" title="(experimental) Static Quantization with Eager Mode in PyTorch" href="../advanced/static_quantization_tutorial.html" />
<script src="../_static/js/modernizr.min.js"></script>
</head>
<body class="pytorch-body">
<a class="github-ribbon" href="https://github.com/9bow/PyTorch-tutorials-kr">
<img class="ribbon-img" width="149" height="149" src="https://github.blog/wp-content/uploads/2008/12/forkme_left_red_aa0000.png?resize=149%2C149" class="attachment-full size-full" alt="Fork me on GitHub" data-recalc-dims="1">
</a>
<div class="container-fluid header-holder tutorials-header" id="header-holder">
<div class="container">
<div class="header-container">
<a class="header-logo" href="https://pytorch.org/" aria-label="PyTorch"></a>
<div class="main-menu">
<ul>
<li>
<a href="https://pytorch.org/get-started">Get Started</a>
</li>
<li>
<a href="https://pytorch.org/features">Features</a>
</li>
<li>
<a href="https://pytorch.org/ecosystem">Ecosystem</a>
</li>
<li>
<a href="https://pytorch.org/blog/">Blog</a>
</li>
<li class="active">
<a href="https://pytorch.org/tutorials">Tutorials</a>
</li>
<li>
<a href="https://pytorch.org/docs/stable/index.html">Docs</a>
</li>
<li>
<a href="https://pytorch.org/resources">Resources</a>
</li>
<li>
<a href="https://github.com/pytorch/pytorch">Github</a>
</li>
</ul>
</div>
<a class="main-menu-open-button" href="#" data-behavior="open-mobile-menu"></a>
</div>
</div>
</div>
<body class="pytorch-body">
<div class="table-of-contents-link-wrapper">
<span>Table of Contents</span>
<a href="#" class="toggle-table-of-contents" data-behavior="toggle-table-of-contents"></a>
</div>
<nav data-toggle="wy-nav-shift" class="pytorch-left-menu" id="pytorch-left-menu">
<div class="pytorch-side-scroll">
<div class="pytorch-menu pytorch-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<div class="pytorch-left-menu-search">
<div class="version">
1.4.0
</div>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="../search.html" method="get">
<input type="text" name="q" placeholder="Search Tutorials" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<p class="caption"><span class="caption-text">μμνκΈ° (Getting Started)</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../beginner/deep_learning_60min_blitz.html">νμ΄ν μΉ(PyTorch)λ‘ λ₯λ¬λνκΈ°: 60λΆλ§μ λμ₯λ΄κΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/data_loading_tutorial.html">μ¬μ©μ μ μ Dataset, Dataloader, Transforms μμ±νκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="tensorboard_tutorial.html">Visualizing Models, Data, and Training with TensorBoard</a></li>
</ul>
<p class="caption"><span class="caption-text">μ΄λ―Έμ§ (Image)</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="torchvision_tutorial.html">TorchVision κ°μ²΄ κ²μΆ λ―ΈμΈμ‘°μ (Finetuning) νν 리μΌ</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/transfer_learning_tutorial.html">μ»΄ν¨ν° λΉμ (Vision)μ μν μ μ΄νμ΅(Transfer Learning)</a></li>
<li class="toctree-l1"><a class="reference internal" href="spatial_transformer_tutorial.html">κ³΅κ° λ³νκΈ° λ€νΈμν¬(Spatial Transformer Networks) νν 리μΌ</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/neural_style_tutorial.html">PyTorchλ₯Ό μ΄μ©ν μ κ²½λ§-λ³ν(Neural-Transfer)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/fgsm_tutorial.html">μ λμ μμ μμ±(Adversarial Example Generation)</a></li>
</ul>
<p class="caption"><span class="caption-text">μ€λμ€ (Audio)</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../beginner/audio_preprocessing_tutorial.html">torchaudio Tutorial</a></li>
</ul>
<p class="caption"><span class="caption-text">ν
μ€νΈ (Text)</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="char_rnn_classification_tutorial.html">κΈ°μ΄λΆν° μμνλ NLP: λ¬Έμ-λ¨μ RNNμΌλ‘ μ΄λ¦ λΆλ₯νκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="char_rnn_generation_tutorial.html">κΈ°μ΄λΆν° μμνλ NLP: λ¬Έμ-λ¨μ RNNμΌλ‘ μ΄λ¦ μμ±νκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="seq2seq_translation_tutorial.html">κΈ°μ΄λΆν° μμνλ NLP: Sequence to Sequence λ€νΈμν¬μ Attentionμ μ΄μ©ν λ²μ</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/text_sentiment_ngrams_tutorial.html">Text Classification with TorchText</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/torchtext_translation_tutorial.html">TorchTextλ‘ μΈμ΄ λ²μνκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/transformer_tutorial.html">Sequence-to-Sequence Modeling with nn.Transformer and TorchText</a></li>
</ul>
<p class="caption"><span class="caption-text">Named Tensor (experimental)</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="named_tensor_tutorial.html">(experimental) Introduction to Named Tensors in PyTorch</a></li>
</ul>
<p class="caption"><span class="caption-text">κ°ν νμ΅</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="reinforcement_q_learning.html">κ°ν νμ΅ (DQN) νν 리μΌ</a></li>
</ul>
<p class="caption"><span class="caption-text">PyTorch λͺ¨λΈμ μ΄μνκ²½μ λ°°ν¬νκΈ°</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="flask_rest_api_tutorial.html">Flaskλ₯Ό μ΄μ©νμ¬ Pythonμμ PyTorchλ₯Ό REST APIλ‘ λ°°ν¬νκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/Intro_to_TorchScript_tutorial.html">TorchScript μκ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/cpp_export.html">C++μμ TorchScript λͺ¨λΈ λ‘λ©νκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/super_resolution_with_onnxruntime.html">(μ ν) PyTorch λͺ¨λΈμ ONNXμΌλ‘ λ³ννκ³ ONNX λ°νμμμ μ€ννκΈ°</a></li>
</ul>
<p class="caption"><span class="caption-text">λ³λ ¬ & λΆμ° νμ΅</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="model_parallel_tutorial.html">Single-Machine Model Parallel Best Practices</a></li>
<li class="toctree-l1"><a class="reference internal" href="ddp_tutorial.html">Getting Started with Distributed Data Parallel</a></li>
<li class="toctree-l1"><a class="reference internal" href="dist_tuto.html">PyTorchλ‘ λΆμ° μ΄ν리μΌμ΄μ
κ°λ°νκΈ°</a></li>
<li class="toctree-l1"><a class="reference internal" href="rpc_tutorial.html">Getting Started with Distributed RPC Framework</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/aws_distributed_training_tutorial.html">(advanced) PyTorch 1.0 Distributed Trainer with Amazon AWS</a></li>
</ul>
<p class="caption"><span class="caption-text">PyTorch νμ₯νκΈ°</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../advanced/torch_script_custom_ops.html">Extending TorchScript with Custom C++ Operators</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/torch_script_custom_classes.html">Extending TorchScript with Custom C++ Classes</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/numpy_extensions_tutorial.html">Creating Extensions Using numpy and scipy</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/cpp_extension.html">Custom C++ and CUDA Extensions</a></li>
</ul>
<p class="caption"><span class="caption-text">λͺ¨λΈ μ΅μ ν</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../advanced/dynamic_quantization_tutorial.html">(experimental) Dynamic Quantization on an LSTM Word Language Model</a></li>
<li class="toctree-l1"><a class="reference internal" href="../advanced/static_quantization_tutorial.html">(experimental) Static Quantization with Eager Mode in PyTorch</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">(experimental) Quantized Transfer Learning for Computer Vision Tutorial</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamic_quantization_bert_tutorial.html">(experimental) Dynamic Quantization on BERT</a></li>
<li class="toctree-l1"><a class="reference internal" href="pruning_tutorial.html">Pruning Tutorial</a></li>
</ul>
<p class="caption"><span class="caption-text">λ€λ₯Έ μΈμ΄μμμ PyTorch</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../advanced/cpp_frontend.html">Using the PyTorch C++ Frontend</a></li>
</ul>
<p class="caption"><span class="caption-text">PyTorch Fundamentals In-Depth</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../beginner/pytorch_with_examples.html">μμ λ‘ λ°°μ°λ νμ΄ν μΉ(PyTorch)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../beginner/nn_tutorial.html">What is <cite>torch.nn</cite> <em>really</em>?</a></li>
</ul>
</div>
</div>
</nav>
<div class="pytorch-container">
<div class="pytorch-page-level-bar" id="pytorch-page-level-bar">
<div class="pytorch-breadcrumbs-wrapper">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="pytorch-breadcrumbs">
<li>
<a href="../index.html">
Tutorials
</a> >
</li>
<li>(experimental) Quantized Transfer Learning for Computer Vision Tutorial</li>
<li class="pytorch-breadcrumbs-aside">
<a href="../_sources/intermediate/quantized_transfer_learning_tutorial.rst.txt" rel="nofollow"><img src="../_static/images/view-page-source-icon.svg"></a>
</li>
</ul>
</div>
</div>
<div class="pytorch-shortcuts-wrapper" id="pytorch-shortcuts-wrapper">
Shortcuts
</div>
</div>
<section data-toggle="wy-nav-shift" id="pytorch-content-wrap" class="pytorch-content-wrap">
<div class="pytorch-content-left">
<div class="pytorch-call-to-action-links">
<div id="tutorial-type">intermediate/quantized_transfer_learning_tutorial</div>
<div id="google-colab-link">
<img class="call-to-action-img" src="../_static/images/pytorch-colab.svg"/>
<div class="call-to-action-desktop-view">Run in Google Colab</div>
<div class="call-to-action-mobile-view">Colab</div>
</div>
<div id="download-notebook-link">
<img class="call-to-action-notebook-img" src="../_static/images/pytorch-download.svg"/>
<div class="call-to-action-desktop-view">Download Notebook</div>
<div class="call-to-action-mobile-view">Notebook</div>
</div>
<div id="github-view-link">
<img class="call-to-action-img" src="../_static/images/pytorch-github.svg"/>
<div class="call-to-action-desktop-view">View on GitHub</div>
<div class="call-to-action-mobile-view">GitHub</div>
</div>
</div>
<div class="rst-content">
<div role="main" class="main-content" itemscope="itemscope" itemtype="http://schema.org/Article">
<article itemprop="articleBody" id="pytorch-article" class="pytorch-article">
<div class="section" id="experimental-quantized-transfer-learning-for-computer-vision-tutorial">
<h1>(experimental) Quantized Transfer Learning for Computer Vision Tutorial<a class="headerlink" href="#experimental-quantized-transfer-learning-for-computer-vision-tutorial" title="Permalink to this headline">¶</a></h1>
<div class="admonition tip">
<p class="admonition-title">Tip</p>
<p>To get the most out of this tutorial, we suggest using this
<a class="reference external" href="https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/quantized_transfer_learning_tutorial.ipynb">Colab Version</a>,
which lets you experiment with the material presented below.</p>
</div>
<p><strong>Author</strong>: <a class="reference external" href="https://github.com/z-a-f">Zafar Takhirov</a></p>
<p><strong>Reviewed by</strong>: <a class="reference external" href="https://github.com/raghuramank100">Raghuraman Krishnamoorthi</a></p>
<p><strong>Edited by</strong>: <a class="reference external" href="https://github.com/jlin27">Jessica Lin</a></p>
<p>This tutorial builds on the original <a class="reference external" href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html">PyTorch Transfer Learning</a>
tutorial, written by <a class="reference external" href="https://chsasank.github.io/">Sasank Chilamkurthy</a>.</p>
<p>Transfer learning refers to techniques that make use of a pretrained model for
application on a different dataset.
There are two main ways transfer learning is used (a minimal sketch of both
follows the list):</p>
<ol class="arabic simple">
<li><p><strong>ConvNet as a fixed feature extractor</strong>: Here, you <a class="reference external" href="https://arxiv.org/abs/1706.04983">βfreezeβ</a>
the weights of all the parameters in the network except that of the final
several layers (aka βthe headβ, usually fully connected layers).
These last layers are replaced with new ones initialized with random
weights and only these layers are trained.</p></li>
<li><p><strong>Finetuning the ConvNet</strong>: Instead of random initialization, the model is
initialized with a pretrained network, after which training proceeds as
usual but with a different dataset.
Usually the head (or part of it) is also replaced when the new task has a
different number of outputs.
In this method it is common to set the learning rate to a smaller value,
because the network is already trained and only minor changes are required
to “finetune” it to a new dataset.</p></li>
</ol>
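<p>As a reference, here is a minimal floating-point sketch of the two approaches,
assuming a torchvision <code class="docutils literal notranslate"><span class="pre">resnet18</span></code> and a two-class problem; the quantized
variant of the first approach is developed step by step below.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Approach 1: ConvNet as a fixed feature extractor.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                 # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # new, randomly initialized head
optimizer = optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)

# Approach 2: finetune the whole ConvNet, typically with a smaller learning rate.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
</pre></div>
</div>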
<p>You can also combine the two methods above:
first freeze the feature extractor and train the head. After
that, unfreeze the feature extractor (or part of it), set the
learning rate to something smaller, and continue training, as sketched below.</p>
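<p>Continuing the hypothetical floating-point sketch above, the staged schedule
could look like this:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>
# Stage 1: train only the head with the backbone frozen (as in Approach 1).
# ... a few epochs ...

# Stage 2: unfreeze part of the backbone and continue with a smaller LR.
for param in model.layer4.parameters():
    param.requires_grad = True
optimizer = optim.SGD([p for p in model.parameters() if p.requires_grad],
                      lr=0.001, momentum=0.9)
# ... continue training ...
</pre></div>
</div>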
<p>In this part you will use the first method: extracting the features
using a quantized model.</p>
<div class="section" id="part-0-prerequisites">
<h2>Part 0. Prerequisites<a class="headerlink" href="#part-0-prerequisites" title="Permalink to this headline">¶</a></h2>
<p>Before diving into the transfer learning, let us review the “prerequisites”,
such as installation and data loading/visualization.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Imports</span>
<span class="kn">import</span> <span class="nn">copy</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="kn">as</span> <span class="nn">plt</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="n">plt</span><span class="o">.</span><span class="n">ion</span><span class="p">()</span>
</pre></div>
</div>
<div class="section" id="installing-the-nightly-build">
<h3>Installing the Nightly Build<a class="headerlink" href="#installing-the-nightly-build" title="Permalink to this headline">¶</a></h3>
<p>Because you will be using the experimental parts of PyTorch, it is
recommended to install the latest version of <code class="docutils literal notranslate"><span class="pre">torch</span></code> and
<code class="docutils literal notranslate"><span class="pre">torchvision</span></code>. You can find the most recent instructions on local
installation <a class="reference external" href="https://pytorch.org/get-started/locally/">here</a>.
For example, to install without GPU support:</p>
<div class="highlight-shell notranslate"><div class="highlight"><pre><span></span>pip install numpy
pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
<span class="c1"># For CUDA support use https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html</span>
</pre></div>
</div>
</div>
<div class="section" id="load-data">
<h3>Load Data<a class="headerlink" href="#load-data" title="Permalink to this headline">¶</a></h3>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This section is identical to the original transfer learning tutorial.</p>
</div>
<p>We will use <code class="docutils literal notranslate"><span class="pre">torchvision</span></code> and <code class="docutils literal notranslate"><span class="pre">torch.utils.data</span></code> packages to load
the data.</p>
<p>The problem you are going to solve today is classifying <strong>ants</strong> and
<strong>bees</strong> from images. The dataset contains about 120 training images
each for ants and bees. There are 75 validation images for each class.
This is considered a very small dataset to generalize on. However, since
we are using transfer learning, we should be able to generalize
reasonably well.</p>
<p><em>This dataset is a very small subset of ImageNet.</em></p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Download the data from <a class="reference external" href="https://download.pytorch.org/tutorial/hymenoptera_data.zip">here</a>
and extract it to the <code class="docutils literal notranslate"><span class="pre">data</span></code> directory.</p>
</div>
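<p>If you prefer to fetch the data from Python instead, a small helper along
these lines works (standard library only; the URL is the one from the note
above):</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>
import os
import urllib.request
import zipfile

DATA_URL = 'https://download.pytorch.org/tutorial/hymenoptera_data.zip'

# Download and extract the dataset into ./data, skipping the work if it is
# already there.
if not os.path.isdir('data/hymenoptera_data'):
    os.makedirs('data', exist_ok=True)
    zip_path, _ = urllib.request.urlretrieve(DATA_URL, 'data/hymenoptera_data.zip')
    with zipfile.ZipFile(zip_path, 'r') as f:
        f.extractall('data')
</pre></div>
</div>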
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">from</span> <span class="nn">torchvision</span> <span class="kn">import</span> <span class="n">transforms</span><span class="p">,</span> <span class="n">datasets</span>
<span class="c1"># Data augmentation and normalization for training</span>
<span class="c1"># Just normalization for validation</span>
<span class="n">data_transforms</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">'train'</span><span class="p">:</span> <span class="n">transforms</span><span class="o">.</span><span class="n">Compose</span><span class="p">([</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">Resize</span><span class="p">(</span><span class="mi">224</span><span class="p">),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">RandomCrop</span><span class="p">(</span><span class="mi">224</span><span class="p">),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">RandomHorizontalFlip</span><span class="p">(),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">ToTensor</span><span class="p">(),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">Normalize</span><span class="p">([</span><span class="mf">0.485</span><span class="p">,</span> <span class="mf">0.456</span><span class="p">,</span> <span class="mf">0.406</span><span class="p">],</span> <span class="p">[</span><span class="mf">0.229</span><span class="p">,</span> <span class="mf">0.224</span><span class="p">,</span> <span class="mf">0.225</span><span class="p">])</span>
<span class="p">]),</span>
<span class="s1">'val'</span><span class="p">:</span> <span class="n">transforms</span><span class="o">.</span><span class="n">Compose</span><span class="p">([</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">Resize</span><span class="p">(</span><span class="mi">224</span><span class="p">),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">CenterCrop</span><span class="p">(</span><span class="mi">224</span><span class="p">),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">ToTensor</span><span class="p">(),</span>
<span class="n">transforms</span><span class="o">.</span><span class="n">Normalize</span><span class="p">([</span><span class="mf">0.485</span><span class="p">,</span> <span class="mf">0.456</span><span class="p">,</span> <span class="mf">0.406</span><span class="p">],</span> <span class="p">[</span><span class="mf">0.229</span><span class="p">,</span> <span class="mf">0.224</span><span class="p">,</span> <span class="mf">0.225</span><span class="p">])</span>
<span class="p">]),</span>
<span class="p">}</span>
<span class="n">data_dir</span> <span class="o">=</span> <span class="s1">'data/hymenoptera_data'</span>
<span class="n">image_datasets</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="n">datasets</span><span class="o">.</span><span class="n">ImageFolder</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">data_dir</span><span class="p">,</span> <span class="n">x</span><span class="p">),</span>
<span class="n">data_transforms</span><span class="p">[</span><span class="n">x</span><span class="p">])</span>
<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="p">[</span><span class="s1">'train'</span><span class="p">,</span> <span class="s1">'val'</span><span class="p">]}</span>
<span class="n">dataloaders</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="n">torch</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">DataLoader</span><span class="p">(</span><span class="n">image_datasets</span><span class="p">[</span><span class="n">x</span><span class="p">],</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">16</span><span class="p">,</span>
<span class="n">shuffle</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">num_workers</span><span class="o">=</span><span class="mi">8</span><span class="p">)</span>
<span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="p">[</span><span class="s1">'train'</span><span class="p">,</span> <span class="s1">'val'</span><span class="p">]}</span>
<span class="n">dataset_sizes</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="nb">len</span><span class="p">(</span><span class="n">image_datasets</span><span class="p">[</span><span class="n">x</span><span class="p">])</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="p">[</span><span class="s1">'train'</span><span class="p">,</span> <span class="s1">'val'</span><span class="p">]}</span>
<span class="n">class_names</span> <span class="o">=</span> <span class="n">image_datasets</span><span class="p">[</span><span class="s1">'train'</span><span class="p">]</span><span class="o">.</span><span class="n">classes</span>
<span class="n">device</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">device</span><span class="p">(</span><span class="s2">"cuda:0"</span> <span class="k">if</span> <span class="n">torch</span><span class="o">.</span><span class="n">cuda</span><span class="o">.</span><span class="n">is_available</span><span class="p">()</span> <span class="k">else</span> <span class="s2">"cpu"</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="visualize-a-few-images">
<h3>Visualize a few images<a class="headerlink" href="#visualize-a-few-images" title="Permalink to this headline">¶</a></h3>
<p>Let’s visualize a few training images to get a feel for the data
augmentation.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torchvision</span>
<span class="k">def</span> <span class="nf">imshow</span><span class="p">(</span><span class="n">inp</span><span class="p">,</span> <span class="n">title</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span> <span class="n">ax</span><span class="o">=</span><span class="bp">None</span><span class="p">,</span> <span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">5</span><span class="p">,</span> <span class="mi">5</span><span class="p">)):</span>
<span class="sd">"""Imshow for Tensor."""</span>
<span class="n">inp</span> <span class="o">=</span> <span class="n">inp</span><span class="o">.</span><span class="n">numpy</span><span class="p">()</span><span class="o">.</span><span class="n">transpose</span><span class="p">((</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">))</span>
<span class="n">mean</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mf">0.485</span><span class="p">,</span> <span class="mf">0.456</span><span class="p">,</span> <span class="mf">0.406</span><span class="p">])</span>
<span class="n">std</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mf">0.229</span><span class="p">,</span> <span class="mf">0.224</span><span class="p">,</span> <span class="mf">0.225</span><span class="p">])</span>
<span class="n">inp</span> <span class="o">=</span> <span class="n">std</span> <span class="o">*</span> <span class="n">inp</span> <span class="o">+</span> <span class="n">mean</span>
<span class="n">inp</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">clip</span><span class="p">(</span><span class="n">inp</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="k">if</span> <span class="n">ax</span> <span class="ow">is</span> <span class="bp">None</span><span class="p">:</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">figsize</span><span class="o">=</span><span class="n">figsize</span><span class="p">)</span>
<span class="n">ax</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">inp</span><span class="p">)</span>
<span class="n">ax</span><span class="o">.</span><span class="n">set_xticks</span><span class="p">([])</span>
<span class="n">ax</span><span class="o">.</span><span class="n">set_yticks</span><span class="p">([])</span>
<span class="k">if</span> <span class="n">title</span> <span class="ow">is</span> <span class="ow">not</span> <span class="bp">None</span><span class="p">:</span>
<span class="n">ax</span><span class="o">.</span><span class="n">set_title</span><span class="p">(</span><span class="n">title</span><span class="p">)</span>
<span class="c1"># Get a batch of training data</span>
<span class="n">inputs</span><span class="p">,</span> <span class="n">classes</span> <span class="o">=</span> <span class="nb">next</span><span class="p">(</span><span class="nb">iter</span><span class="p">(</span><span class="n">dataloaders</span><span class="p">[</span><span class="s1">'train'</span><span class="p">]))</span>
<span class="c1"># Make a grid from batch</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">torchvision</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">make_grid</span><span class="p">(</span><span class="n">inputs</span><span class="p">,</span> <span class="n">nrow</span><span class="o">=</span><span class="mi">4</span><span class="p">)</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">))</span>
<span class="n">imshow</span><span class="p">(</span><span class="n">out</span><span class="p">,</span> <span class="n">title</span><span class="o">=</span><span class="p">[</span><span class="n">class_names</span><span class="p">[</span><span class="n">x</span><span class="p">]</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">classes</span><span class="p">],</span> <span class="n">ax</span><span class="o">=</span><span class="n">ax</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="support-function-for-model-training">
<h3>Support Function for Model Training<a class="headerlink" href="#support-function-for-model-training" title="Permalink to this headline">¶</a></h3>
<p>Below is a generic function for model training.
This function also</p>
<ul class="simple">
<li><p>Schedules the learning rate</p></li>
<li><p>Saves the best model</p></li>
</ul>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">train_model</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">criterion</span><span class="p">,</span> <span class="n">optimizer</span><span class="p">,</span> <span class="n">scheduler</span><span class="p">,</span> <span class="n">num_epochs</span><span class="o">=</span><span class="mi">25</span><span class="p">,</span> <span class="n">device</span><span class="o">=</span><span class="s1">'cpu'</span><span class="p">):</span>
<span class="sd">"""</span>
<span class="sd"> Support function for model training.</span>
<span class="sd"> Args:</span>
<span class="sd"> model: Model to be trained</span>
<span class="sd"> criterion: Optimization criterion (loss)</span>
<span class="sd"> optimizer: Optimizer to use for training</span>
<span class="sd"> scheduler: Instance of ``torch.optim.lr_scheduler``</span>
<span class="sd"> num_epochs: Number of epochs</span>
<span class="sd"> device: Device to run the training on. Must be 'cpu' or 'cuda'</span>
<span class="sd"> """</span>
<span class="n">since</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="n">best_model_wts</span> <span class="o">=</span> <span class="n">copy</span><span class="o">.</span><span class="n">deepcopy</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">state_dict</span><span class="p">())</span>
<span class="n">best_acc</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="k">for</span> <span class="n">epoch</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_epochs</span><span class="p">):</span>
<span class="k">print</span><span class="p">(</span><span class="s1">'Epoch {}/{}'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">epoch</span><span class="p">,</span> <span class="n">num_epochs</span> <span class="o">-</span> <span class="mi">1</span><span class="p">))</span>
<span class="k">print</span><span class="p">(</span><span class="s1">'-'</span> <span class="o">*</span> <span class="mi">10</span><span class="p">)</span>
<span class="c1"># Each epoch has a training and validation phase</span>
<span class="k">for</span> <span class="n">phase</span> <span class="ow">in</span> <span class="p">[</span><span class="s1">'train'</span><span class="p">,</span> <span class="s1">'val'</span><span class="p">]:</span>
<span class="k">if</span> <span class="n">phase</span> <span class="o">==</span> <span class="s1">'train'</span><span class="p">:</span>
<span class="n">model</span><span class="o">.</span><span class="n">train</span><span class="p">()</span> <span class="c1"># Set model to training mode</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span> <span class="c1"># Set model to evaluate mode</span>
<span class="n">running_loss</span> <span class="o">=</span> <span class="mf">0.0</span>
<span class="n">running_corrects</span> <span class="o">=</span> <span class="mi">0</span>
<span class="c1"># Iterate over data.</span>
<span class="k">for</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">labels</span> <span class="ow">in</span> <span class="n">dataloaders</span><span class="p">[</span><span class="n">phase</span><span class="p">]:</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="n">inputs</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">device</span><span class="p">)</span>
<span class="n">labels</span> <span class="o">=</span> <span class="n">labels</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">device</span><span class="p">)</span>
<span class="c1"># zero the parameter gradients</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">zero_grad</span><span class="p">()</span>
<span class="c1"># forward</span>
<span class="c1"># track history if only in train</span>
<span class="k">with</span> <span class="n">torch</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="n">phase</span> <span class="o">==</span> <span class="s1">'train'</span><span class="p">):</span>
<span class="n">outputs</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">inputs</span><span class="p">)</span>
<span class="n">_</span><span class="p">,</span> <span class="n">preds</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">outputs</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">criterion</span><span class="p">(</span><span class="n">outputs</span><span class="p">,</span> <span class="n">labels</span><span class="p">)</span>
<span class="c1"># backward + optimize only if in training phase</span>
<span class="k">if</span> <span class="n">phase</span> <span class="o">==</span> <span class="s1">'train'</span><span class="p">:</span>
<span class="n">loss</span><span class="o">.</span><span class="n">backward</span><span class="p">()</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">step</span><span class="p">()</span>
<span class="c1"># statistics</span>
<span class="n">running_loss</span> <span class="o">+=</span> <span class="n">loss</span><span class="o">.</span><span class="n">item</span><span class="p">()</span> <span class="o">*</span> <span class="n">inputs</span><span class="o">.</span><span class="n">size</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span>
<span class="n">running_corrects</span> <span class="o">+=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">preds</span> <span class="o">==</span> <span class="n">labels</span><span class="o">.</span><span class="n">data</span><span class="p">)</span>
<span class="k">if</span> <span class="n">phase</span> <span class="o">==</span> <span class="s1">'train'</span><span class="p">:</span>
<span class="n">scheduler</span><span class="o">.</span><span class="n">step</span><span class="p">()</span>
<span class="n">epoch_loss</span> <span class="o">=</span> <span class="n">running_loss</span> <span class="o">/</span> <span class="n">dataset_sizes</span><span class="p">[</span><span class="n">phase</span><span class="p">]</span>
<span class="n">epoch_acc</span> <span class="o">=</span> <span class="n">running_corrects</span><span class="o">.</span><span class="n">double</span><span class="p">()</span> <span class="o">/</span> <span class="n">dataset_sizes</span><span class="p">[</span><span class="n">phase</span><span class="p">]</span>
<span class="k">print</span><span class="p">(</span><span class="s1">'{} Loss: {:.4f} Acc: {:.4f}'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span>
<span class="n">phase</span><span class="p">,</span> <span class="n">epoch_loss</span><span class="p">,</span> <span class="n">epoch_acc</span><span class="p">))</span>
<span class="c1"># deep copy the model</span>
<span class="k">if</span> <span class="n">phase</span> <span class="o">==</span> <span class="s1">'val'</span> <span class="ow">and</span> <span class="n">epoch_acc</span> <span class="o">></span> <span class="n">best_acc</span><span class="p">:</span>
<span class="n">best_acc</span> <span class="o">=</span> <span class="n">epoch_acc</span>
<span class="n">best_model_wts</span> <span class="o">=</span> <span class="n">copy</span><span class="o">.</span><span class="n">deepcopy</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">state_dict</span><span class="p">())</span>
<span class="k">print</span><span class="p">()</span>
<span class="n">time_elapsed</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">since</span>
<span class="k">print</span><span class="p">(</span><span class="s1">'Training complete in {:.0f}m {:.0f}s'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span>
<span class="n">time_elapsed</span> <span class="o">//</span> <span class="mi">60</span><span class="p">,</span> <span class="n">time_elapsed</span> <span class="o">%</span> <span class="mi">60</span><span class="p">))</span>
<span class="k">print</span><span class="p">(</span><span class="s1">'Best val Acc: {:4f}'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">best_acc</span><span class="p">))</span>
<span class="c1"># load best model weights</span>
<span class="n">model</span><span class="o">.</span><span class="n">load_state_dict</span><span class="p">(</span><span class="n">best_model_wts</span><span class="p">)</span>
<span class="k">return</span> <span class="n">model</span>
</pre></div>
</div>
</div>
<div class="section" id="support-function-for-visualizing-the-model-predictions">
<h3>Support Function for Visualizing the Model Predictions<a class="headerlink" href="#support-function-for-visualizing-the-model-predictions" title="Permalink to this headline">¶</a></h3>
<p>Generic function to display predictions for a few images</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">visualize_model</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">rows</span><span class="o">=</span><span class="mi">3</span><span class="p">,</span> <span class="n">cols</span><span class="o">=</span><span class="mi">3</span><span class="p">):</span>
<span class="n">was_training</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">training</span>
<span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="n">current_row</span> <span class="o">=</span> <span class="n">current_col</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">(</span><span class="n">rows</span><span class="p">,</span> <span class="n">cols</span><span class="p">,</span> <span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="n">cols</span><span class="o">*</span><span class="mi">2</span><span class="p">,</span> <span class="n">rows</span><span class="o">*</span><span class="mi">2</span><span class="p">))</span>
<span class="k">with</span> <span class="n">torch</span><span class="o">.</span><span class="n">no_grad</span><span class="p">():</span>
<span class="k">for</span> <span class="n">idx</span><span class="p">,</span> <span class="p">(</span><span class="n">imgs</span><span class="p">,</span> <span class="n">lbls</span><span class="p">)</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">dataloaders</span><span class="p">[</span><span class="s1">'val'</span><span class="p">]):</span>
<span class="n">imgs</span> <span class="o">=</span> <span class="n">imgs</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span>
<span class="n">lbls</span> <span class="o">=</span> <span class="n">lbls</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span>
<span class="n">outputs</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">imgs</span><span class="p">)</span>
<span class="n">_</span><span class="p">,</span> <span class="n">preds</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">outputs</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="k">for</span> <span class="n">jdx</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">imgs</span><span class="o">.</span><span class="n">size</span><span class="p">()[</span><span class="mi">0</span><span class="p">]):</span>
<span class="n">imshow</span><span class="p">(</span><span class="n">imgs</span><span class="o">.</span><span class="n">data</span><span class="p">[</span><span class="n">jdx</span><span class="p">],</span> <span class="n">ax</span><span class="o">=</span><span class="n">ax</span><span class="p">[</span><span class="n">current_row</span><span class="p">,</span> <span class="n">current_col</span><span class="p">])</span>
<span class="n">ax</span><span class="p">[</span><span class="n">current_row</span><span class="p">,</span> <span class="n">current_col</span><span class="p">]</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s1">'off'</span><span class="p">)</span>
<span class="n">ax</span><span class="p">[</span><span class="n">current_row</span><span class="p">,</span> <span class="n">current_col</span><span class="p">]</span><span class="o">.</span><span class="n">set_title</span><span class="p">(</span><span class="s1">'predicted: {}'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">class_names</span><span class="p">[</span><span class="n">preds</span><span class="p">[</span><span class="n">jdx</span><span class="p">]]))</span>
<span class="n">current_col</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="k">if</span> <span class="n">current_col</span> <span class="o">>=</span> <span class="n">cols</span><span class="p">:</span>
<span class="n">current_row</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="n">current_col</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">if</span> <span class="n">current_row</span> <span class="o">>=</span> <span class="n">rows</span><span class="p">:</span>
<span class="n">model</span><span class="o">.</span><span class="n">train</span><span class="p">(</span><span class="n">mode</span><span class="o">=</span><span class="n">was_training</span><span class="p">)</span>
<span class="k">return</span>
<span class="n">model</span><span class="o">.</span><span class="n">train</span><span class="p">(</span><span class="n">mode</span><span class="o">=</span><span class="n">was_training</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="part-1-training-a-custom-classifier-based-on-a-quantized-feature-extractor">
<h2>Part 1. Training a Custom Classifier based on a Quantized Feature Extractor<a class="headerlink" href="#part-1-training-a-custom-classifier-based-on-a-quantized-feature-extractor" title="Permalink to this headline">¶</a></h2>
<p>In this section you will use a “frozen” quantized feature extractor, and
train a custom classifier head on top of it. Unlike with floating point
models, you don’t need to set <code class="docutils literal notranslate"><span class="pre">requires_grad=False</span></code> for the quantized
model, as it has no trainable parameters. Please refer to the
<a class="reference external" href="https://pytorch.org/docs/stable/quantization.html">documentation</a> for
more details.</p>
<p>Load a pretrained model: for this exercise you will be using
<a class="reference external" href="https://pytorch.org/hub/pytorch_vision_resnet/">ResNet-18</a>.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torchvision.models.quantization</span> <span class="kn">as</span> <span class="nn">models</span>
<span class="c1"># You will need the number of filters in the `fc` for future use.</span>
<span class="c1"># Here the size of each output sample is set to 2.</span>
<span class="c1"># Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).</span>
<span class="n">model_fe</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">resnet18</span><span class="p">(</span><span class="n">pretrained</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">progress</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">quantize</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">num_ftrs</span> <span class="o">=</span> <span class="n">model_fe</span><span class="o">.</span><span class="n">fc</span><span class="o">.</span><span class="n">in_features</span>
</pre></div>
</div>
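<p>As a quick sanity check (a sketch, not part of the original tutorial flow),
you can confirm that the quantized backbone exposes essentially no trainable
parameters; its weights are stored in packed form rather than as regular
parameters:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre>
# The quantized model should report (close to) zero trainable parameters,
# which is why setting requires_grad=False is unnecessary here.
trainable = [p for p in model_fe.parameters() if p.requires_grad]
print(len(trainable))  # expected: 0
</pre></div>
</div>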
<p>At this point you need to modify the pretrained model. The model
has quantize/dequantize blocks at the beginning and the end. However,
because you will only use the feature extractor, the dequantization layer has
to be moved right before the linear layer (the head). The easiest way to do that
is to wrap the model in the <code class="docutils literal notranslate"><span class="pre">nn.Sequential</span></code> module.</p>
<p>The first step is to isolate the feature extractor in the ResNet
model. Although in this example you are asked to use all layers except
<code class="docutils literal notranslate"><span class="pre">fc</span></code> as the feature extractor, in reality you can take as many parts
as you need. This would be useful if, for example, you would like to replace some
of the convolutional layers as well.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>When separating the feature extractor from the rest of a quantized
model, you have to manually place the quantizer/dequantizer at the
beginning and the end of the parts you want to keep quantized.</p>
</div>
<p>The function below creates a model with a custom head.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">torch</span> <span class="kn">import</span> <span class="n">nn</span>
<span class="k">def</span> <span class="nf">create_combined_model</span><span class="p">(</span><span class="n">model_fe</span><span class="p">):</span>
<span class="c1"># Step 1. Isolate the feature extractor.</span>
<span class="n">model_fe_features</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Sequential</span><span class="p">(</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">quant</span><span class="p">,</span> <span class="c1"># Quantize the input</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">conv1</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">bn1</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">relu</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">maxpool</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">layer1</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">layer2</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">layer3</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">layer4</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">avgpool</span><span class="p">,</span>
<span class="n">model_fe</span><span class="o">.</span><span class="n">dequant</span><span class="p">,</span> <span class="c1"># Dequantize the output</span>
<span class="p">)</span>
<span class="c1"># Step 2. Create a new "head"</span>
<span class="n">new_head</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Sequential</span><span class="p">(</span>
<span class="n">nn</span><span class="o">.</span><span class="n">Dropout</span><span class="p">(</span><span class="n">p</span><span class="o">=</span><span class="mf">0.5</span><span class="p">),</span>
<span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="n">num_ftrs</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span>
<span class="p">)</span>
<span class="c1"># Step 3. Combine, and don't forget the quant stubs.</span>
<span class="n">new_model</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">Sequential</span><span class="p">(</span>
<span class="n">model_fe_features</span><span class="p">,</span>
<span class="n">nn</span><span class="o">.</span><span class="n">Flatten</span><span class="p">(</span><span class="mi">1</span><span class="p">),</span>
<span class="n">new_head</span><span class="p">,</span>
<span class="p">)</span>
<span class="k">return</span> <span class="n">new_model</span>
</pre></div>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Currently, quantized models can only run on the CPU.
However, it is possible to send the non-quantized parts of the model to a GPU.</p>
</div>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch.optim</span> <span class="kn">as</span> <span class="nn">optim</span>
<span class="n">new_model</span> <span class="o">=</span> <span class="n">create_combined_model</span><span class="p">(</span><span class="n">model_fe</span><span class="p">)</span>
<span class="n">new_model</span> <span class="o">=</span> <span class="n">new_model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="s1">'cpu'</span><span class="p">)</span>
<span class="n">criterion</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">CrossEntropyLoss</span><span class="p">()</span>
<span class="c1"># Note that we are only training the head.</span>
<span class="n">optimizer_ft</span> <span class="o">=</span> <span class="n">optim</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">new_model</span><span class="o">.</span><span class="n">parameters</span><span class="p">(),</span> <span class="n">lr</span><span class="o">=</span><span class="mf">0.01</span><span class="p">,</span> <span class="n">momentum</span><span class="o">=</span><span class="mf">0.9</span><span class="p">)</span>
<span class="c1"># Decay LR by a factor of 0.1 every 7 epochs</span>
<span class="n">exp_lr_scheduler</span> <span class="o">=</span> <span class="n">optim</span><span class="o">.</span><span class="n">lr_scheduler</span><span class="o">.</span><span class="n">StepLR</span><span class="p">(</span><span class="n">optimizer_ft</span><span class="p">,</span> <span class="n">step_size</span><span class="o">=</span><span class="mi">7</span><span class="p">,</span> <span class="n">gamma</span><span class="o">=</span><span class="mf">0.1</span><span class="p">)</span>
</pre></div>
</div>
<div class="section" id="train-and-evaluate">
<h3>Train and evaluate<a class="headerlink" href="#train-and-evaluate" title="Permalink to this headline">ΒΆ</a></h3>
<p>This step takes around 15-25 minutes on a CPU. Because the quantized model can
only run on the CPU, you cannot run the training on a GPU.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">new_model</span> <span class="o">=</span> <span class="n">train_model</span><span class="p">(</span><span class="n">new_model</span><span class="p">,</span> <span class="n">criterion</span><span class="p">,</span> <span class="n">optimizer_ft</span><span class="p">,</span> <span class="n">exp_lr_scheduler</span><span class="p">,</span>
<span class="n">num_epochs</span><span class="o">=</span><span class="mi">25</span><span class="p">,</span> <span class="n">device</span><span class="o">=</span><span class="s1">'cpu'</span><span class="p">)</span>
<span class="n">visualize_model</span><span class="p">(</span><span class="n">new_model</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">tight_layout</span><span class="p">()</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="part-2-finetuning-the-quantizable-model">
<h2>Part 2. Finetuning the Quantizable Model<a class="headerlink" href="#part-2-finetuning-the-quantizable-model" title="Permalink to this headline">ΒΆ</a></h2>
<p>In this part, we fine tune the feature extractor used for transfer
learning, and quantize the feature extractor itself. Note that in both parts 1
and 2 the feature extractor is quantized. The difference is that in
part 1 we use a pretrained quantized model, whereas in this part we create a
quantized feature extractor after fine tuning on the dataset of
interest. This way, we get better accuracy with transfer learning
while keeping the benefits of quantization. Note that in our specific
example the training set is very small (120 images), so the benefits
of fine tuning the entire model are not apparent. However, the procedure
shown here will improve accuracy for transfer learning with larger
datasets.</p>
<p>The pretrained feature extractor must be quantizable.
To make sure it is quantizable, perform the following steps:</p>
<blockquote>
<div><ol class="arabic simple">
<li><p>Fuse <code class="docutils literal notranslate"><span class="pre">(Conv,</span> <span class="pre">BN,</span> <span class="pre">ReLU)</span></code>, <code class="docutils literal notranslate"><span class="pre">(Conv,</span> <span class="pre">BN)</span></code>, and <code class="docutils literal notranslate"><span class="pre">(Conv,</span> <span class="pre">ReLU)</span></code> using
<code class="docutils literal notranslate"><span class="pre">torch.quantization.fuse_modules</span></code>.</p></li>
<li><p>Connect the feature extractor with a custom head.
This requires dequantizing the output of the feature extractor.</p></li>
<li><p>Insert fake-quantization modules at appropriate locations
in the feature extractor to mimic quantization during training.</p></li>
</ol>
</div></blockquote>
<p>For step (1), we use models from <code class="docutils literal notranslate"><span class="pre">torchvision/models/quantization</span></code>, which
have a member method <code class="docutils literal notranslate"><span class="pre">fuse_model</span></code> that fuses all the <code class="docutils literal notranslate"><span class="pre">conv</span></code>,
<code class="docutils literal notranslate"><span class="pre">bn</span></code>, and <code class="docutils literal notranslate"><span class="pre">relu</span></code> modules. For a custom model, this requires calling
the <code class="docutils literal notranslate"><span class="pre">torch.quantization.fuse_modules</span></code> API manually, with the list of
modules to fuse, as sketched below.</p>
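<p>As a sketch of the manual API, the toy model below (purely illustrative, not part of this tutorial) fuses its <code class="docutils literal notranslate"><span class="pre">conv1</span></code>, <code class="docutils literal notranslate"><span class="pre">bn1</span></code>, and <code class="docutils literal notranslate"><span class="pre">relu</span></code> attributes into a single module:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # A toy model used only to illustrate manual fusion.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.bn1 = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn1(self.conv1(x)))

m = TinyNet().eval()  # fuse in eval mode for post-training quantization
# The inner list names the submodules of `m` to fuse into one module.
m_fused = torch.quantization.fuse_modules(m, [['conv1', 'bn1', 'relu']])
</pre></div>
</div>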
<p>Step (2) is performed by the <code class="docutils literal notranslate"><span class="pre">create_combined_model</span></code> function
used in the previous section.</p>
<p>Step (3) is achieved by using <code class="docutils literal notranslate"><span class="pre">torch.quantization.prepare_qat</span></code>, which
inserts fake-quantization modules.</p>
<p>As step (4), you can start finetuning the model, and after that convert
it to a fully quantized version (step 5).</p>
<p>To convert the fine tuned model into a quantized model you can call the
<code class="docutils literal notranslate"><span class="pre">torch.quantization.convert</span></code> function (in our case only
the feature extractor is quantized).</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Because of the random initialization, your results might differ from
the results shown in this tutorial.</p>
</div>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># notice `quantize=False`</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">models</span><span class="o">.</span><span class="n">resnet18</span><span class="p">(</span><span class="n">pretrained</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">progress</span><span class="o">=</span><span class="bp">True</span><span class="p">,</span> <span class="n">quantize</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
<span class="n">num_ftrs</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">fc</span><span class="o">.</span><span class="n">in_features</span>

<span class="c1"># Step 1</span>
<span class="n">model</span><span class="o">.</span><span class="n">train</span><span class="p">()</span>
<span class="n">model</span><span class="o">.</span><span class="n">fuse_model</span><span class="p">()</span>
<span class="c1"># Step 2</span>
<span class="n">model_ft</span> <span class="o">=</span> <span class="n">create_combined_model</span><span class="p">(</span><span class="n">model</span><span class="p">)</span>
<span class="n">model_ft</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">qconfig</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">quantization</span><span class="o">.</span><span class="n">default_qat_qconfig</span>  <span class="c1"># Use default QAT configuration</span>
<span class="c1"># Step 3</span>
<span class="n">model_ft</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">quantization</span><span class="o">.</span><span class="n">prepare_qat</span><span class="p">(</span><span class="n">model_ft</span><span class="p">,</span> <span class="n">inplace</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
</div>
<div class="section" id="finetuning-the-model">
<h3>Finetuning the model<a class="headerlink" href="#finetuning-the-model" title="Permalink to this headline">ΒΆ</a></h3>
<p>In this tutorial the whole model is fine tuned. In general, this leads
to higher accuracy. However, because the training set used here is so
small, we end up overfitting to it.</p>
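<p>If overfitting is a concern, a common middle ground (not used in this tutorial) is to freeze the early layers and fine tune only the last stage and the head. A minimal sketch, assuming the <code class="docutils literal notranslate"><span class="pre">nn.Sequential</span></code> layout produced by <code class="docutils literal notranslate"><span class="pre">create_combined_model</span></code> above:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span># Sketch only: freeze everything, then unfreeze the last residual stage
# and the new head. The indices below assume the layout built by
# create_combined_model and are illustrative, not part of the tutorial.
for param in model_ft.parameters():
    param.requires_grad = False
for param in model_ft[0][-3].parameters():    # layer4 in the assumed layout
    param.requires_grad = True
for param in model_ft[2].parameters():        # the new classifier head
    param.requires_grad = True
</pre></div>
</div>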
<p>Step 4. Fine tune the model</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="k">for</span> <span class="n">param</span> <span class="ow">in</span> <span class="n">model_ft</span><span class="o">.</span><span class="n">parameters</span><span class="p">():</span>
<span class="n">param</span><span class="o">.</span><span class="n">requires_grad</span> <span class="o">=</span> <span class="bp">True</span>
<span class="n">model_ft</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">device</span><span class="p">)</span> <span class="c1"># We can fine-tune on GPU if available</span>
<span class="n">criterion</span> <span class="o">=</span> <span class="n">nn</span><span class="o">.</span><span class="n">CrossEntropyLoss</span><span class="p">()</span>
<span class="c1"># Note that we are training everything, so the learning rate is lower</span>
<span class="c1"># Notice the smaller learning rate</span>
<span class="n">optimizer_ft</span> <span class="o">=</span> <span class="n">optim</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">model_ft</span><span class="o">.</span><span class="n">parameters</span><span class="p">(),</span> <span class="n">lr</span><span class="o">=</span><span class="mf">1e-3</span><span class="p">,</span> <span class="n">momentum</span><span class="o">=</span><span class="mf">0.9</span><span class="p">,</span> <span class="n">weight_decay</span><span class="o">=</span><span class="mf">0.1</span><span class="p">)</span>
<span class="c1"># Decay LR by a factor of 0.3 every several epochs</span>
<span class="n">exp_lr_scheduler</span> <span class="o">=</span> <span class="n">optim</span><span class="o">.</span><span class="n">lr_scheduler</span><span class="o">.</span><span class="n">StepLR</span><span class="p">(</span><span class="n">optimizer_ft</span><span class="p">,</span> <span class="n">step_size</span><span class="o">=</span><span class="mi">5</span><span class="p">,</span> <span class="n">gamma</span><span class="o">=</span><span class="mf">0.3</span><span class="p">)</span>
<span class="n">model_ft_tuned</span> <span class="o">=</span> <span class="n">train_model</span><span class="p">(</span><span class="n">model_ft</span><span class="p">,</span> <span class="n">criterion</span><span class="p">,</span> <span class="n">optimizer_ft</span><span class="p">,</span> <span class="n">exp_lr_scheduler</span><span class="p">,</span>
<span class="n">num_epochs</span><span class="o">=</span><span class="mi">25</span><span class="p">,</span> <span class="n">device</span><span class="o">=</span><span class="n">device</span><span class="p">)</span>
</pre></div>
</div>
<p>Step 5. Convert to quantized model</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">torch.quantization</span> <span class="kn">import</span> <span class="n">convert</span>
<span class="n">model_ft_tuned</span><span class="o">.</span><span class="n">cpu</span><span class="p">()</span>
<span class="n">model_quantized_and_trained</span> <span class="o">=</span> <span class="n">convert</span><span class="p">(</span><span class="n">model_ft_tuned</span><span class="p">,</span> <span class="n">inplace</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
</pre></div>
</div>
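<p>One benefit of quantization is a smaller model. A quick way to gauge it (the helper below is an illustrative sketch, not part of the tutorial) is to serialize the weights and compare file sizes:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import os
import torch

def model_size_mb(model, path='tmp_model.pt'):
    # Serialize the state dict and report the file size in megabytes.
    torch.save(model.state_dict(), path)
    size_mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return size_mb

print('quantized model: %.1f MB' % model_size_mb(model_quantized_and_trained))
</pre></div>
</div>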
<p>Let's see how the quantized model performs on a few images.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">visualize_model</span><span class="p">(</span><span class="n">model_quantized_and_trained</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">ioff</span><span class="p">()</span>
<span class="n">plt</span><span class="o">.</span><span class="n">tight_layout</span><span class="p">()</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
</div>
</div>
</div>
</div>
</article>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="dynamic_quantization_bert_tutorial.html" class="btn btn-neutral float-right" title="(experimental) Dynamic Quantization on BERT" accesskey="n" rel="next">Next <img src="../_static/images/chevron-right-orange.svg" class="next-page"></a>
<a href="../advanced/static_quantization_tutorial.html" class="btn btn-neutral" title="(experimental) Static Quantization with Eager Mode in PyTorch" accesskey="p" rel="prev"><img src="../_static/images/chevron-right-orange.svg" class="previous-page"> Previous</a>
</div>
<hr class="helpful-hr hr-top">
<div class="helpful-container">
<div class="helpful-question">Was this helpful?</div>
<div class="helpful-question yes-link" data-behavior="was-this-helpful-event" data-response="yes">Yes</div>
<div class="helpful-question no-link" data-behavior="was-this-helpful-event" data-response="no">No</div>
<div class="was-helpful-thank-you">Thank you</div>
</div>
<hr class="helpful-hr hr-bottom"/>
<div role="contentinfo">
<p>
© Copyright 2019, PyTorch.
</p>
</div>
<div>
Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
</div>
</footer>
</div>
</div>
<div class="pytorch-content-right" id="pytorch-content-right">
<div class="pytorch-right-menu" id="pytorch-right-menu">
<div class="pytorch-side-scroll" id="pytorch-side-scroll-right">
<ul>
<li><a class="reference internal" href="#">(experimental) Quantized Transfer Learning for Computer Vision Tutorial</a><ul>
<li><a class="reference internal" href="#part-0-prerequisites">Part 0. Prerequisites</a><ul>
<li><a class="reference internal" href="#installing-the-nightly-build">Installing the Nightly Build</a></li>
<li><a class="reference internal" href="#load-data">Load Data</a></li>
<li><a class="reference internal" href="#visualize-a-few-images">Visualize a few images</a></li>
<li><a class="reference internal" href="#support-function-for-model-training">Support Function for Model Training</a></li>
<li><a class="reference internal" href="#support-function-for-visualizing-the-model-predictions">Support Function for Visualizing the Model Predictions</a></li>
</ul>
</li>
<li><a class="reference internal" href="#part-1-training-a-custom-classifier-based-on-a-quantized-feature-extractor">Part 1. Training a Custom Classifier based on a Quantized Feature Extractor</a><ul>
<li><a class="reference internal" href="#train-and-evaluate">Train and evaluate</a></li>
</ul>
</li>
<li><a class="reference internal" href="#part-2-finetuning-the-quantizable-model">Part 2. Finetuning the Quantizable Model</a><ul>
<li><a class="reference internal" href="#finetuning-the-model">Finetuning the model</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
</div>
</section>
</div>
<script type="text/javascript" id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
<script type="text/javascript" src="../_static/jquery.js"></script>
<script type="text/javascript" src="../_static/underscore.js"></script>
<script type="text/javascript" src="../_static/doctools.js"></script>
<script type="text/javascript" src="../_static/language_data.js"></script>
<script async="async" type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/latest.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script type="text/javascript" src="../_static/js/vendor/popper.min.js"></script>
<script type="text/javascript" src="../_static/js/vendor/bootstrap.min.js"></script>
<script type="text/javascript" src="../_static/js/theme.js"></script>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.Navigation.enable(true);
});
</script>
<footer class="site-footer">
<div class="container footer-container">
<div class="footer-logo-wrapper">
<a href="https://pytorch.org/" class="footer-logo"></a>
</div>
<div class="footer-links-wrapper">
<div class="footer-links-col">
<ul>
<li class="list-title"><a href="https://pytorch.org/">PyTorch</a></li>
<li><a href="https://pytorch.org/get-started">Get Started</a></li>
<li><a href="https://pytorch.org/features">Features</a></li>
<li><a href="https://pytorch.org/ecosystem">Ecosystem</a></li>
<li><a href="https://pytorch.org/blog/">Blog</a></li>
<li><a href="https://pytorch.org/resources">Resources</a></li>
</ul>
</div>
<div class="footer-links-col">
<ul>
<li class="list-title"><a href="https://pytorch.org/support">Support</a></li>
<li><a href="https://pytorch.org/tutorials">Tutorials</a></li>
<li><a href="https://pytorch.org/docs/stable/index.html">Docs</a></li>
<li><a href="https://discuss.pytorch.org" target="_blank">Discuss</a></li>
<li><a href="https://github.com/pytorch/pytorch/issues" target="_blank">Github Issues</a></li>
<li><a href="https://pytorch.slack.com" target="_blank">Slack</a></li>
<li><a href="https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md" target="_blank">Contributing</a></li>
</ul>
</div>
<div class="footer-links-col follow-us-col">
<ul>
<li class="list-title">Follow Us</li>
<li>
<div id="mc_embed_signup">
<form
action="https://twitter.us14.list-manage.com/subscribe/post?u=75419c71fe0a935e53dfa4a3f&id=91d0dccd39"
method="post"
id="mc-embedded-subscribe-form"
name="mc-embedded-subscribe-form"
class="email-subscribe-form validate"
target="_blank"
novalidate>
<div id="mc_embed_signup_scroll" class="email-subscribe-form-fields-wrapper">
<div class="mc-field-group">
<label for="mce-EMAIL" style="display:none;">Email Address</label>
<input type="email" value="" name="EMAIL" class="required email" id="mce-EMAIL" placeholder="Email Address">
</div>
<div id="mce-responses" class="clear">
<div class="response" id="mce-error-response" style="display:none"></div>
<div class="response" id="mce-success-response" style="display:none"></div>
</div> <!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_75419c71fe0a935e53dfa4a3f_91d0dccd39" tabindex="-1" value=""></div>