
fix missed code-blocks because of syntax error (#1496) #1497


Merged
2 commits merged on May 3, 2021
1 change: 1 addition & 0 deletions advanced_source/extend_dispatcher.rst
@@ -53,6 +53,7 @@ You can choose any of the keys above to prototype your customized backend.
To create a Tensor on the ``PrivateUse1`` backend, you need to set the dispatch key in the ``TensorImpl`` constructor.

.. code-block:: cpp
+
/* Example TensorImpl constructor */
TensorImpl(
Storage&& storage,
1 change: 1 addition & 0 deletions advanced_source/torch-script-parallelism.rst
@@ -207,6 +207,7 @@ Let's use the profiler along with the Chrome trace export functionality to
visualize the performance of our parallelized model:

.. code-block:: python
+
with torch.autograd.profiler.profile() as prof:
ens(x)
prof.export_chrome_trace('parallel.json')
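
For context, this is the pattern the fixed directive now renders. A minimal runnable sketch, where ``ens`` and ``x`` are hypothetical stand-ins for the tutorial's ensemble module and its input:

    import torch

    # Hypothetical stand-ins for the tutorial's ensemble model and input batch.
    ens = torch.nn.Linear(8, 8)
    x = torch.rand(4, 8)

    with torch.autograd.profiler.profile() as prof:
        ens(x)

    # Open the resulting JSON in chrome://tracing to inspect the timeline.
    prof.export_chrome_trace('parallel.json')
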
6 changes: 3 additions & 3 deletions advanced_source/torch_script_custom_ops.rst
@@ -605,7 +605,7 @@ Along with a small ``CMakeLists.txt`` file:

At this point, we should be able to build the application:

-.. code-block::
+.. code-block:: shell

$ mkdir build
$ cd build
@@ -645,7 +645,7 @@ At this point, we should be able to build the application:

And run it without passing a model just yet:

-.. code-block::
+.. code-block:: shell

$ ./example_app
usage: example_app <path-to-exported-script-module>
@@ -672,7 +672,7 @@ The last line will serialize the script function into a file called
"example.pt". If we then pass this serialized model to our C++ application, we
can run it straight away:

-.. code-block::
+.. code-block:: shell

$ ./example_app example.pt
terminate called after throwing an instance of 'torch::jit::script::ErrorReport'
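
For reference, a minimal sketch of the serialization step described above; the function body here is a hypothetical placeholder for the tutorial's scripted function, which calls the custom op:

    import torch

    @torch.jit.script
    def compute(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Hypothetical placeholder logic standing in for the custom-op call.
        return x + y

    # Serialize the script function to the file the C++ example_app loads.
    compute.save("example.pt")
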
2 changes: 1 addition & 1 deletion beginner_source/hyperparameter_tuning_tutorial.py
@@ -431,7 +431,7 @@ def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
######################################################################
# If you run the code, an example output could look like this:
#
-# .. code-block::
+# ::
#
# Number of trials: 10 (10 TERMINATED)
# +-----+------+------+-------------+--------------+---------+------------+--------------------+
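
The trial table above is printed by Ray Tune. A rough sketch of the kind of call that produces it, with a toy objective standing in for the tutorial's CIFAR-10 training function:

    from ray import tune

    def objective(config):
        # Hypothetical stand-in for the tutorial's training loop.
        tune.report(loss=config["lr"] * 10)

    analysis = tune.run(
        objective,
        config={"lr": tune.loguniform(1e-4, 1e-1)},
        num_samples=10,  # one row per trial in the printed status table
    )
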
3 changes: 2 additions & 1 deletion prototype_source/fx_graph_mode_ptq_static.rst
@@ -10,7 +10,8 @@ we'll have a separate tutorial to show how to make the part of the model we want
We also have a tutorial for `FX Graph Mode Post Training Dynamic Quantization <https://pytorch.org/tutorials/prototype/fx_graph_mode_ptq_dynamic.html>`_.
TL;DR: the FX Graph Mode API looks like the following:

-.. code:: python
+.. code:: python
+
import torch
from torch.quantization import get_default_qconfig
# Note that this is temporary; we'll expose these functions in torch.quantization after the official release
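
Since the snippet is cut off by the diff view, here is a minimal sketch of the FX Graph Mode flow as it stood around this release; the model and calibration data are toy placeholders, and the ``quantize_fx`` import path is the one these tutorials used at the time:

    import torch
    from torch.quantization import get_default_qconfig
    from torch.quantization.quantize_fx import prepare_fx, convert_fx

    float_model = torch.nn.Sequential(torch.nn.Linear(8, 8)).eval()
    qconfig_dict = {"": get_default_qconfig("fbgemm")}

    prepared = prepare_fx(float_model, qconfig_dict)  # insert observers
    prepared(torch.rand(1, 8))                        # calibrate with representative data
    quantized = convert_fx(prepared)                  # produce the quantized model
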
2 changes: 1 addition & 1 deletion recipes_source/android_native_app_with_custom_op.rst
@@ -704,7 +704,7 @@ If you check the Android logcat:

You should see logs with the tag 'PyTorchNativeApp' that print x, y, and the result of the model forward, which we print with the ``log`` function in ``NativeApp/app/src/main/cpp/pytorch_nativeapp.cpp``.

-.. code-block::
+::

I/PyTorchNativeApp(26968): x: -0.9484 -1.1757 -0.5832 0.9144 0.8867 1.0933 -0.4004 -0.3389
I/PyTorchNativeApp(26968): -1.0343 1.5200 -0.7625 -1.5724 -1.2073 0.4613 0.2730 -0.6789