Just like for training, you can run `image_sample.py` through MPI to use multiple GPUs.

You can change the number of sampling steps using the `--timestep_respacing` argument. For example, `--timestep_respacing 250` uses 250 steps to sample. Passing `--timestep_respacing ddim250` is similar, but uses the uniform stride from the [DDIM paper](https://arxiv.org/abs/2010.02502) rather than our stride.
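For intuition, the even-stride selection behind `--timestep_respacing` can be sketched in a few lines of Python. This is a simplification for illustration only; the library's actual selection logic may differ in detail.

```python
def respace_timesteps(original_steps, target_steps):
    """Pick target_steps evenly spaced timesteps out of original_steps.

    A simplified sketch of even-stride timestep respacing; not the
    library's exact implementation.
    """
    stride = original_steps / target_steps
    return [round(i * stride) for i in range(target_steps)]

# e.g. sample with 250 steps from a 4000-step diffusion process
steps = respace_timesteps(4000, 250)
```

With 4000 original steps and 250 sampling steps, this selects every 16th timestep.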
To sample using [DDIM](https://arxiv.org/abs/2010.02502), pass `--use_ddim True`.
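Putting the two flags together, a DDIM sampling invocation might look like the following. The checkpoint path is a placeholder, and `MODEL_FLAGS`/`DIFFUSION_FLAGS` are assumed to be set as in the training instructions above.

```shell
# /path/to/model.pt is a placeholder; MODEL_FLAGS and DIFFUSION_FLAGS are
# assumed to be defined as in the training section.
python scripts/image_sample.py --model_path /path/to/model.pt \
    $MODEL_FLAGS $DIFFUSION_FLAGS \
    --use_ddim True --timestep_respacing ddim250
```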

## Experiment hyper-parameters

This section includes run flags for training the main models in the paper. Note that the batch sizes are specified for single-GPU training, even though most of these runs will not naturally fit on a single GPU. To address this, either set `--microbatch` to a small value (e.g. 4) to train on one GPU, or run with MPI and divide `--batch_size` by the number of GPUs.
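As a concrete illustration of the two options, the commands below assume a paper batch size of 128 and the `MODEL_FLAGS`/`DIFFUSION_FLAGS` variables from the training section; the specific numbers are examples, not prescribed settings.

```shell
# Option 1: one GPU, keep --batch_size 128 but accumulate gradients
# in micro-batches of 4.
python scripts/image_train.py $MODEL_FLAGS $DIFFUSION_FLAGS \
    --lr 1e-4 --batch_size 128 --microbatch 4

# Option 2: 8 GPUs via MPI, dividing --batch_size by the rank count
# (128 / 8 = 16 per GPU).
mpiexec -n 8 python scripts/image_train.py $MODEL_FLAGS $DIFFUSION_FLAGS \
    --lr 1e-4 --batch_size 16
```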
Unconditional ImageNet-64 with our `L_hybrid` objective and cosine noise schedule:
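The flag listing itself is truncated in this diff. A hedged sketch of what such a run could look like is below; every value is an assumption chosen to be consistent with the flags described above (`--learn_sigma` for the hybrid objective, `--noise_schedule cosine`), not a verified setting from the paper, and the data directory is a placeholder.

```shell
# Illustrative values only -- the original flag listing is truncated,
# so these exact numbers are assumptions, not verified settings.
MODEL_FLAGS="--image_size 64 --num_channels 128 --num_res_blocks 3 --learn_sigma True"
DIFFUSION_FLAGS="--diffusion_steps 4000 --noise_schedule cosine"
TRAIN_FLAGS="--lr 1e-4 --batch_size 128"
python scripts/image_train.py --data_dir /path/to/imagenet64 \
    $MODEL_FLAGS $DIFFUSION_FLAGS $TRAIN_FLAGS
```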