
Commit ff78d74

zou3519 authored and facebook-github-bot committed
Don't generate named tensor functions to RegistrationFunctions.h (pytorch#26685)
Summary:
Pull Request resolved: pytorch#26685

This prevents XLA from picking up on named tensor APIs. I ran into some problems while attempting to support dimname overloads in XLA; since we don't need the first iteration of named tensors to work with XLA, this is OK.

Test Plan:
- run CI.

Differential Revision: D17538893

Pulled By: zou3519

fbshipit-source-id: 93d579c93f5b1dc68541c07c4a3d61792859507d
1 parent 05f7081 commit ff78d74
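
The summary above mentions "dimname overloads": overloads that take Dimname / DimnameList arguments and therefore exist only for the named tensor API. As a rough, hypothetical illustration of how a codegen pass could classify such declarations (this is not the actual function_wrapper.py logic; the helper name and the simplified declaration dicts are made up for the sketch):

# Hypothetical sketch, not the actual function_wrapper.py implementation.
def looks_named_tensor_only(option):
    # Treat an overload as "named tensor only" if any argument is a
    # Dimname or DimnameList, i.e. it exists solely for the named tensor API.
    return any(arg.get('dynamic_type') in ('Dimname', 'DimnameList')
               for arg in option.get('arguments', []))

# Simplified declaration dicts for illustration only.
plain_sum = {'name': 'sum',
             'arguments': [{'dynamic_type': 'Tensor'},
                           {'dynamic_type': 'IntArrayRef'}]}
named_sum = {'name': 'sum',
             'arguments': [{'dynamic_type': 'Tensor'},
                           {'dynamic_type': 'DimnameList'}]}

print(looks_named_tensor_only(plain_sum))   # False -> still emitted for XLA
print(looks_named_tensor_only(named_sum))   # True  -> skipped after this commit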


1 file changed: 5 additions, 1 deletion

aten/src/ATen/function_wrapper.py

Lines changed: 5 additions & 1 deletion
@@ -1335,7 +1335,11 @@ def add_namedtensor_enabled_macro(code):
             raise Exception("broadcasting is not yet supported for native functions, "
                             "but specified for function {}", option['name'])

-        if BUILD_NAMEDTENSOR or not is_named_tensor_only:
+        # RegistrationDeclarations.h is used downstream in XLA, where XLA uses
+        # it as the "source of truth" for pytorch ops and generates code based
+        # on it. We don't pass named tensor only functions there because XLA
+        # doesn't support them.
+        if not is_named_tensor_only:
             top_env['registration_declarations'].append(
                 REGISTRATION_DECLARATION.substitute(option))
         if (option['use_c10_dispatcher'] != 'no'):
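
To make the behavior change explicit: before this commit, named-tensor-only overloads were still written into registration_declarations (and hence RegistrationDeclarations.h) whenever BUILD_NAMEDTENSOR was set; after it, they are skipped unconditionally, so XLA never sees them. A minimal sketch of just the gate, with the codegen state reduced to plain booleans for illustration:

# Simplified illustration of the gating change; not the real codegen loop.
BUILD_NAMEDTENSOR = True       # named tensor support compiled into this build
is_named_tensor_only = True    # e.g. a Dimname overload of an op

emitted_before = BUILD_NAMEDTENSOR or not is_named_tensor_only  # True: declaration was emitted
emitted_after = not is_named_tensor_only                        # False: declaration is skipped

print(emitted_before, emitted_after)  # True False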

0 commit comments
