@@ -105,6 +105,50 @@ speaking, the structure of your registrations will look like this:
that provides implementations for all basic operators on the XLA dispatch
key.
+
+ For operators that do not need autograd
+ ---------------------------------------
+
+ In the next section, we will discuss how to add autograd support to an operator.
+ But for ops that do not need autograd support, the following line can be
+ added to improve usability and make your op behave like PyTorch's built-in
+ operators.
+
+ .. code-block:: cpp
+
+   REGISTER_AUTOGRAD_NOT_IMPLEMENTED_FALLBACK(myops, "myadd");
+
+ Including the above line registers an Autograd kernel that appends a dummy
+ ``NotImplemented`` node on forward (preserving the ``requires_grad``-ness of
+ the inputs). On backward, the ``NotImplemented`` node raises an error. This
+ can be helpful for debugging in larger models, where it is otherwise hard to
+ pinpoint exactly where the ``requires_grad``-ness is lost during the forward pass.
+
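+ As a quick sketch of that behavior (assuming the ``myops::myadd`` op above is
+ registered with the fallback; ``call_myadd`` and
+ ``check_requires_grad_is_preserved`` are illustrative helpers, not part of any
+ API):
+
+ .. code-block:: cpp
+
+   #include <torch/torch.h>
+
+   // Call the custom op through the dispatcher, as earlier in this tutorial.
+   at::Tensor call_myadd(const at::Tensor& a, const at::Tensor& b) {
+     static auto op = torch::Dispatcher::singleton()
+         .findSchemaOrThrow("myops::myadd", "")
+         .typed<at::Tensor(const at::Tensor&, const at::Tensor&)>();
+     return op.call(a, b);
+   }
+
+   void check_requires_grad_is_preserved() {
+     auto a = torch::randn({3}, torch::requires_grad());
+     auto b = torch::randn({3});
+     auto out = call_myadd(a, b);
+     // The dummy NotImplemented node preserves requires_grad on the output...
+     TORCH_CHECK(out.requires_grad());
+     // ...but backpropagating through it raises an error:
+     // out.sum().backward();  // throws, since backward is NotImplemented
+   }
+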
+ This macro **always** registers both the Autograd and ADInplaceOrView kernels,
+ whether or not your operator is a view or in-place op. As a consequence,
+ **there may be additional overhead for ops that are neither views nor in-place
+ when run in inference mode on non-inference tensors.** If your operator is
+ neither a view nor in-place, and this is an issue for your use case, consider
+ registering only the Autograd kernel:
+
+ .. code-block:: cpp
+
+   TORCH_LIBRARY_IMPL(myops, Autograd, m) {
+     m.impl(op, autogradNotImplementedFallback());
+   }
+
+
+ In-place or view ops
+ ^^^^^^^^^^^^^^^^^^^^
+
+ For operators that dispatch through the Autograd boxed kernels, we rely on
+ operator schema information in our logic. To ensure correctness and the best
+ possible performance, if your op mutates an input in-place or returns a tensor
+ that aliases one of the inputs, it is important to ensure that your schema
+ properly reflects this. See
+ `here <https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/README.md>`_
+ for more information on how to annotate the schema.
+
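+ For example, a minimal sketch of such annotations (``myadd_`` and ``myadd_out``
+ are hypothetical variants of the ``myadd`` op above; the ``(a!)`` annotation
+ marks an argument that is mutated and aliased by the output):
+
+ .. code-block:: cpp
+
+   TORCH_LIBRARY(myops, m) {
+     // In-place variant: self is mutated in place and returned.
+     m.def("myadd_(Tensor(a!) self, Tensor other) -> Tensor(a!)");
+     // Out variant: the result is written into (and aliases) out.
+     m.def("myadd_out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)");
+   }
+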
.. _autograd-support:
Adding autograd support