Commit 40d74e6

mattip authored and facebook-github-bot committed

breakup optim, cuda documentation (pytorch#55673)

Summary: Related to pytorch#52256. Use autosummary instead of autofunction to create subpages for the optim and cuda functions/classes. Also fix some minor formatting issues in the optim.LBFGS and cuda.stream docstrings.

Pull Request resolved: pytorch#55673
Reviewed By: jbschlosser
Differential Revision: D27747741
Pulled By: zou3519
fbshipit-source-id: 070681f840cdf4433a44af75be3483f16e5acf7d
1 parent fd15557 commit 40d74e6

4 files changed: +125 / -89 lines


docs/source/cuda.rst

Lines changed: 79 additions & 43 deletions
@@ -1,70 +1,106 @@
 torch.cuda
 ===================================
-
+.. automodule:: torch.cuda
 .. currentmodule:: torch.cuda

-.. automodule:: torch.cuda
-    :members:
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    StreamContext
+    can_device_access_peer
+    current_blas_handle
+    current_device
+    current_stream
+    default_stream
+    device
+    device_count
+    device_of
+    get_arch_list
+    get_device_capability
+    get_device_name
+    get_device_properties
+    get_gencode_flags
+    init
+    ipc_collect
+    is_available
+    is_initialized
+    set_device
+    set_stream
+    stream
+    synchronize

 Random Number Generator
 -------------------------
-.. autofunction:: get_rng_state
-.. autofunction:: get_rng_state_all
-.. autofunction:: set_rng_state
-.. autofunction:: set_rng_state_all
-.. autofunction:: manual_seed
-.. autofunction:: manual_seed_all
-.. autofunction:: seed
-.. autofunction:: seed_all
-.. autofunction:: initial_seed
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    get_rng_state
+    get_rng_state_all
+    set_rng_state
+    set_rng_state_all
+    manual_seed
+    manual_seed_all
+    seed
+    seed_all
+    initial_seed


 Communication collectives
 -------------------------

-.. autofunction:: torch.cuda.comm.broadcast
-
-.. autofunction:: torch.cuda.comm.broadcast_coalesced
+.. autosummary::
+    :toctree: generated
+    :nosignatures:

-.. autofunction:: torch.cuda.comm.reduce_add
-
-.. autofunction:: torch.cuda.comm.scatter
-
-.. autofunction:: torch.cuda.comm.gather
+    comm.broadcast
+    comm.broadcast_coalesced
+    comm.reduce_add
+    comm.scatter
+    comm.gather

 Streams and events
 ------------------
+.. autosummary::
+    :toctree: generated
+    :nosignatures:

-.. autoclass:: Stream
-    :members:
-
-.. autoclass:: Event
-    :members:
+    Stream
+    Event

 Memory management
 -----------------
-.. autofunction:: empty_cache
-.. autofunction:: list_gpu_processes
-.. autofunction:: memory_stats
-.. autofunction:: memory_summary
-.. autofunction:: memory_snapshot
-.. autofunction:: memory_allocated
-.. autofunction:: max_memory_allocated
-.. autofunction:: reset_max_memory_allocated
-.. autofunction:: memory_reserved
-.. autofunction:: max_memory_reserved
-.. autofunction:: set_per_process_memory_fraction
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    empty_cache
+    list_gpu_processes
+    memory_stats
+    memory_summary
+    memory_snapshot
+    memory_allocated
+    max_memory_allocated
+    reset_max_memory_allocated
+    memory_reserved
+    max_memory_reserved
+    set_per_process_memory_fraction
+    memory_cached
+    max_memory_cached
+    reset_max_memory_cached
+    reset_peak_memory_stats
 .. FIXME The following doesn't seem to exist. Is it supposed to?
          https://github.com/pytorch/pytorch/issues/27785
          .. autofunction:: reset_max_memory_reserved
-.. autofunction:: memory_cached
-.. autofunction:: max_memory_cached
-.. autofunction:: reset_max_memory_cached
-.. autofunction:: reset_peak_memory_stats

 NVIDIA Tools Extension (NVTX)
 -----------------------------

-.. autofunction:: torch.cuda.nvtx.mark
-.. autofunction:: torch.cuda.nvtx.range_push
-.. autofunction:: torch.cuda.nvtx.range_pop
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    nvtx.mark
+    nvtx.range_push
+    nvtx.range_pop
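
For reference, the entries collected in the new ``autosummary`` tables point at the existing ``torch.cuda`` API; this commit only changes how they are rendered. A minimal usage sketch of a few of the listed functions, assuming a CUDA-capable build (the guard skips the device calls otherwise)::

    import torch

    if torch.cuda.is_available():
        # Functions from the top-level table: enumerate and name devices.
        print(torch.cuda.device_count(), torch.cuda.get_device_name(0))

        x = torch.randn(1024, 1024, device="cuda")

        # Functions from the "Memory management" table.
        print(torch.cuda.memory_allocated(), torch.cuda.max_memory_allocated())
        del x
        torch.cuda.empty_cache()

        # Block until all queued kernels on the current device have finished.
        torch.cuda.synchronize()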

docs/source/optim.rst

Lines changed: 44 additions & 44 deletions
@@ -102,33 +102,39 @@ Example::

 .. _optimizer-algorithms:

-Algorithms
+Base class
 ----------

 .. autoclass:: Optimizer
-    :members:
-.. autoclass:: Adadelta
-    :members:
-.. autoclass:: Adagrad
-    :members:
-.. autoclass:: Adam
-    :members:
-.. autoclass:: AdamW
-    :members:
-.. autoclass:: SparseAdam
-    :members:
-.. autoclass:: Adamax
-    :members:
-.. autoclass:: ASGD
-    :members:
-.. autoclass:: LBFGS
-    :members:
-.. autoclass:: RMSprop
-    :members:
-.. autoclass:: Rprop
-    :members:
-.. autoclass:: SGD
-    :members:
+
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    Optimizer.add_param_group
+    Optimizer.load_state_dict
+    Optimizer.state_dict
+    Optimizer.step
+    Optimizer.zero_grad
+
+Algorithms
+----------
+
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    Adadelta
+    Adagrad
+    Adam
+    AdamW
+    SparseAdam
+    Adamax
+    ASGD
+    LBFGS
+    RMSprop
+    Rprop
+    SGD

 How to adjust learning rate
 ---------------------------
@@ -155,26 +161,20 @@ should write your code this way:
 if you are calling ``scheduler.step()`` at the wrong time.


-.. autoclass:: torch.optim.lr_scheduler.LambdaLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.MultiplicativeLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.StepLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.MultiStepLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.ExponentialLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.CosineAnnealingLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.ReduceLROnPlateau
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.CyclicLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.OneCycleLR
-    :members:
-.. autoclass:: torch.optim.lr_scheduler.CosineAnnealingWarmRestarts
-    :members:
+.. autosummary::
+    :toctree: generated
+    :nosignatures:
+
+    lr_scheduler.LambdaLR
+    lr_scheduler.MultiplicativeLR
+    lr_scheduler.StepLR
+    lr_scheduler.MultiStepLR
+    lr_scheduler.ExponentialLR
+    lr_scheduler.CosineAnnealingLR
+    lr_scheduler.ReduceLROnPlateau
+    lr_scheduler.CyclicLR
+    lr_scheduler.OneCycleLR
+    lr_scheduler.CosineAnnealingWarmRestarts

 Stochastic Weight Averaging
 ---------------------------
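
The second hunk keeps the surrounding prose that warns about calling ``scheduler.step()`` at the wrong time; the schedulers it now lists are stepped after the optimizer. A minimal sketch of that call order, with a placeholder model and stand-in loss::

    import torch
    from torch import optim

    model = torch.nn.Linear(10, 1)                       # placeholder model
    optimizer = optim.SGD(model.parameters(), lr=0.1)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)

    for epoch in range(5):
        optimizer.zero_grad()
        loss = model(torch.randn(8, 10)).pow(2).mean()   # stand-in loss
        loss.backward()
        optimizer.step()
        scheduler.step()   # after optimizer.step(), once per epoch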

torch/cuda/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -376,7 +376,7 @@ def __exit__(self, type: Any, value: Any, traceback: Any):

 def stream(stream: Optional['torch.cuda.Stream']) -> StreamContext:  # type: ignore
     r"""Wrapper around the Context-manager StreamContext that
-        selects a given stream.
+    selects a given stream.

     Arguments:
         stream (Stream): selected stream. This manager is a no-op if it's
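
This hunk appears to adjust only the indentation of the docstring's continuation line; the behaviour of ``torch.cuda.stream`` is unchanged. A small sketch of the context manager it documents, assuming a CUDA device is present (the stream name is illustrative)::

    import torch

    if torch.cuda.is_available():
        side_stream = torch.cuda.Stream()      # illustrative name

        with torch.cuda.stream(side_stream):
            # Work issued here is queued on side_stream instead of the
            # current (default) stream.
            y = torch.ones(1024, device="cuda") * 2

        # Make the current stream wait for the queued work before using y.
        torch.cuda.current_stream().wait_stream(side_stream)
        print(y.sum().item())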

torch/optim/lbfgs.py

Lines changed: 1 addition & 1 deletion
@@ -182,7 +182,7 @@ def _strong_wolfe(obj_func,

 class LBFGS(Optimizer):
     """Implements L-BFGS algorithm, heavily inspired by `minFunc
-    <https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html>`.
+    <https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html>`_.

     .. warning::
         This optimizer doesn't support per-parameter options and parameter
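
This hunk only repairs the reStructuredText hyperlink (adding the trailing underscore of the link target). As a reminder of the API being documented, ``LBFGS.step`` expects a closure that re-evaluates the model and returns the loss; a minimal sketch with a placeholder model and data::

    import torch

    model = torch.nn.Linear(10, 1)                        # placeholder model
    data, target = torch.randn(8, 10), torch.randn(8, 1)  # placeholder data
    optimizer = torch.optim.LBFGS(model.parameters(), lr=1.0)

    def closure():
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(data), target)
        loss.backward()
        return loss

    optimizer.step(closure)   # LBFGS may call the closure several times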
