
Changed all code and comments that used the phrase "sparse compiler" to instead use "sparsifier" #71875

Conversation

TimAtGoogle
Contributor

The changes in this PR mostly center on the tests that use the sparse_compiler (also: sparse-compiler) flag.
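For illustration, the rename as it appears on an mlir-opt command line (the input file name below is just a placeholder; the option string matches the ones used in the updated RUN lines):

    # Before this PR: the pipeline flag was registered as "sparse-compiler".
    mlir-opt example.mlir --sparse-compiler="enable-runtime-library=true"
    # After this PR: same pipeline and options, now registered as "sparsifier".
    mlir-opt example.mlir --sparsifier="enable-runtime-library=true"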

@llvmbot llvmbot added mlir:gpu mlir:sparse Sparse compiler in MLIR mlir labels Nov 9, 2023
@TimAtGoogle TimAtGoogle requested a review from aartbik November 9, 2023 22:51
@llvmbot
Member

llvmbot commented Nov 9, 2023

@llvm/pr-subscribers-mlir-sparse

@llvm/pr-subscribers-mlir-gpu

Author: Tim Harvey (TimAtGoogle)

Changes

The changes in this PR mostly center on the tests that use the sparse_compiler (also: sparse-compiler) flag.


Patch is 215.66 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/71875.diff

113 Files Affected:

  • (modified) mlir/benchmark/python/common.py (+1-1)
  • (modified) mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h (+2-2)
  • (modified) mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h (+7-7)
  • (modified) mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp (+5-5)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir (+1-1)
  • (modified) mlir/test/Dialect/SparseTensor/sparse_vector_mv.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/block.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_f16.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/dual_sparse_conv_2d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/reshape_dot.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_abs.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_binary.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_block_matmul.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cast.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cmp.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_dim.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_codegen_foreach.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_collapse_shape.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex32.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex64.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_complex_ops.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_constant_to_sparse_tensor.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_1d_nwc_wcf.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nchw_fchw.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_2d_nhwc_hwcf.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conv_3d_ndhwc_dhwcf.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_block.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_dyn.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_element.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_ptr.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2dense.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion_sparse2sparse.mlir (+7-7)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_coo_test.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_dot.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_ds.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_expand_shape.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_filter_conv2d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_flatten.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_foreach_slices.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_index_dense.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_1d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_2d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_insert_3d.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_loose.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul.mlir (+8-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matmul_slice.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matrix_ops.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matvec.mlir (+8-8)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_mttkrp.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_mult_elt.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_reduction.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_simple.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pack_libgen.mlir (+5-5)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_pooling_nhwc.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_quantized_matmul.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_re_im.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_prod.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reduce_custom_sum.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions_min.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions_prod.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reshape.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_rewrite_push_back.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_rewrite_sort_coo.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_matmul.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_mm_fusion.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_scale.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_scf_nested.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_select.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_semiring_select.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sign.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sorted_coo.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_spmm.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_storage.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_strided_conv_2d_nhwc_hwcf.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_bf16.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_c32.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum_f16.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tanh.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tensor_mul.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_tensor_ops.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_transpose.mlir (+7-7)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_transpose_coo.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_unary.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_vector_ops.mlir (+6-6)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sm80-lt/sparse-matmul-2-4-lib-from-linalg.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sm80-lt/sparse-matmul-2-4-prune.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-gemm-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matmul-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-const.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-matvec.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-sampled-matmul-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/GPU/CUDA/sparse-sddmm-lib.mlir (+1-1)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/python/test_SDDMM.py (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/python/test_SpMM.py (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/python/test_output.py (+2-2)
  • (modified) mlir/test/Integration/Dialect/SparseTensor/python/test_stress.py (+2-2)
  • (added) mlir/test/Integration/Dialect/SparseTensor/python/tools/sparsifier.py (+39)
diff --git a/mlir/benchmark/python/common.py b/mlir/benchmark/python/common.py
index c605726df2a5f64..b2dfc134b7c9040 100644
--- a/mlir/benchmark/python/common.py
+++ b/mlir/benchmark/python/common.py
@@ -15,7 +15,7 @@ def setup_passes(mlir_module):
         "parallelization-strategy=none"
         " vectorization-strategy=none vl=1 enable-simd-index32=False"
     )
-    pipeline = f"sparse-compiler{{{opt}}}"
+    pipeline = f"sparsifier{{{opt}}}"
     PassManager.parse(pipeline).run(mlir_module)
 
 
diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
index e808057cf6b0a67..4f6984e84aecc04 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorType.h
@@ -23,7 +23,7 @@ namespace sparse_tensor {
 ///
 /// (1) To provide a uniform API for querying aspects of sparse-tensor
 /// types; in particular, to make the "dimension" vs "level" distinction
-/// overt (i.e., explicit everywhere).  Thus, throughout the sparse-compiler
+/// overt (i.e., explicit everywhere).  Thus, throughout the sparsifier
 /// this class should be preferred over using `RankedTensorType` or
 /// `ShapedType` directly, since the methods of the latter do not make
 /// the "dimension" vs "level" distinction overt.
@@ -34,7 +34,7 @@ namespace sparse_tensor {
 /// That is, we want to manipulate dense-tensor types using the same API
 /// that we use for manipulating sparse-tensor types; both to keep the
 /// "dimension" vs "level" distinction overt, and to avoid needing to
-/// handle certain cases specially in the sparse-compiler.
+/// handle certain cases specially in the sparsifier.
 ///
 /// (3) To provide uniform handling of "defaults".  In particular
 /// this means that dense-tensors should always return the same answers
diff --git a/mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h b/mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
index 4de83034b0386d1..1fa4a4bb9f0bbfd 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
+++ b/mlir/include/mlir/Dialect/SparseTensor/Pipelines/Passes.h
@@ -23,12 +23,12 @@ using namespace llvm::cl;
 namespace mlir {
 namespace sparse_tensor {
 
-/// Options for the "sparse-compiler" pipeline.  So far this only contains
+/// Options for the "sparsifier" pipeline.  So far this only contains
 /// a subset of the options that can be set for the underlying passes,
 /// because it must be manually kept in sync with the tablegen files
 /// for those passes.
-struct SparseCompilerOptions
-    : public PassPipelineOptions<SparseCompilerOptions> {
+struct SparsifierOptions
+    : public PassPipelineOptions<SparsifierOptions> {
   // These options must be kept in sync with `SparsificationBase`.
   // TODO(57514): These options are duplicated in Passes.td.
   PassOptions::Option<mlir::SparseParallelizationStrategy> parallelization{
@@ -172,15 +172,15 @@ struct SparseCompilerOptions
 // Building and Registering.
 //===----------------------------------------------------------------------===//
 
-/// Adds the "sparse-compiler" pipeline to the `OpPassManager`.  This
+/// Adds the "sparsifier" pipeline to the `OpPassManager`.  This
 /// is the standard pipeline for taking sparsity-agnostic IR using
 /// the sparse-tensor type and lowering it to LLVM IR with concrete
 /// representations and algorithms for sparse tensors.
-void buildSparseCompiler(OpPassManager &pm,
-                         const SparseCompilerOptions &options);
+void buildSparsifier(OpPassManager &pm,
+                         const SparsifierOptions &options);
 
 /// Registers all pipelines for the `sparse_tensor` dialect.  At present,
-/// this includes only "sparse-compiler".
+/// this includes only "sparsifier".
 void registerSparseTensorPipelines();
 
 } // namespace sparse_tensor
diff --git a/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp b/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
index 3ed8bba2514aaf9..09119e85ef245ea 100644
--- a/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Pipelines/SparseTensorPipelines.cpp
@@ -29,8 +29,8 @@
 // Pipeline implementation.
 //===----------------------------------------------------------------------===//
 
-void mlir::sparse_tensor::buildSparseCompiler(
-    OpPassManager &pm, const SparseCompilerOptions &options) {
+void mlir::sparse_tensor::buildSparsifier(
+    OpPassManager &pm, const SparsifierOptions &options) {
   pm.addNestedPass<func::FuncOp>(createLinalgGeneralizationPass());
   pm.addPass(createSparsificationAndBufferizationPass(
       getBufferizationOptionsForSparsification(
@@ -99,10 +99,10 @@ void mlir::sparse_tensor::buildSparseCompiler(
 //===----------------------------------------------------------------------===//
 
 void mlir::sparse_tensor::registerSparseTensorPipelines() {
-  PassPipelineRegistration<SparseCompilerOptions>(
-      "sparse-compiler",
+  PassPipelineRegistration<SparsifierOptions>(
+      "sparsifier",
       "The standard pipeline for taking sparsity-agnostic IR using the"
       " sparse-tensor type, and lowering it to LLVM IR with concrete"
       " representations and algorithms for sparse tensors.",
-      buildSparseCompiler);
+      buildSparsifier);
 }
diff --git a/mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir b/mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir
index e5521228c433a8c..12b7f7cafc9b4f2 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_vector_concat.mlir
@@ -1,4 +1,4 @@
-// RUN: mlir-opt %s --sparse-compiler="enable-runtime-library=false vl=2 reassociate-fp-reductions=true enable-index-optimizations=true"
+// RUN: mlir-opt %s --sparsifier="enable-runtime-library=false vl=2 reassociate-fp-reductions=true enable-index-optimizations=true"
 
 #MAT_D_C = #sparse_tensor.encoding<{
   map = (d0, d1) -> (d0 : dense, d1 : compressed)
diff --git a/mlir/test/Dialect/SparseTensor/sparse_vector_mv.mlir b/mlir/test/Dialect/SparseTensor/sparse_vector_mv.mlir
index 0170efeb33f561b..e25c3a02f91271c 100644
--- a/mlir/test/Dialect/SparseTensor/sparse_vector_mv.mlir
+++ b/mlir/test/Dialect/SparseTensor/sparse_vector_mv.mlir
@@ -1,4 +1,4 @@
-// RUN: mlir-opt %s -sparse-compiler="vl=8" |  FileCheck %s
+// RUN: mlir-opt %s -sparsifier="vl=8" |  FileCheck %s
 
 #Dense = #sparse_tensor.encoding<{
   map = (d0, d1) -> (d0 : dense, d1 : dense)
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/block.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/block.mlir
index b77c1b42baf7ec6..d92165e98cea4a1 100755
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/block.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/block.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -22,7 +22,7 @@
 //
 // TODO: enable!
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false
 // R_UN: %{compile} | env %{env} %{run} | FileCheck %s
 
 !Filename = !llvm.ptr
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir
index 0523ce8ed9efab4..8894b385d3cd932 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -20,11 +20,11 @@
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and vectorization.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and VLA vectorization.
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir
index ba92efc6257c333..11edd854ec08a5b 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_0_permute.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -20,11 +20,11 @@
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and vectorization.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and VLA vectorization.
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir
index e02bafe720fc7d8..48d38257009201d 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -20,11 +20,11 @@
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and vectorization.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=4 reassociate-fp-reductions=true enable-index-optimizations=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=4 reassociate-fp-reductions=true enable-index-optimizations=true
 // RUN: %{compile} | %{run} | FileCheck %s
 
 #MAT_C_C = #sparse_tensor.encoding<{map = (d0, d1) -> (d0 : compressed, d1 : compressed)}>
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir
index e0988f044454fcb..dcdaa072c02fd8b 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/concatenate_dim_1_permute.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -20,11 +20,11 @@
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and vectorization.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false enable-buffer-initialization=true vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
 // RUN: %{compile} | %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and VLA vectorization.
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
index f11d396dc6f8f7d..5f6524a4b7af9ed 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -21,11 +21,11 @@
 // RUN: %{compile} | env %{env} %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false
 // RUN: %{compile} | env %{env} %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and vectorization.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
 // RUN: %{compile} | env %{env} %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and VLA vectorization.
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
index 317c7af990f78c4..81cd2d81cbbc324 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --spars...
[truncated]

@llvmbot
Member

llvmbot commented Nov 9, 2023

@llvm/pr-subscribers-mlir

@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --sparsifier="%{sparsifier_opts_sve}"
 // DEFINE: %{run_libs} = -shared-libs=%mlir_c_runner_utils,%mlir_runner_utils
 // DEFINE: %{run_opts} = -e entry -entry-point-result=void
 // DEFINE: %{run} = mlir-cpu-runner %{run_opts} %{run_libs}
@@ -21,11 +21,11 @@
 // RUN: %{compile} | env %{env} %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false
 // RUN: %{compile} | env %{env} %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and vectorization.
-// REDEFINE: %{sparse_compiler_opts} = enable-runtime-library=false vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
+// REDEFINE: %{sparsifier_opts} = enable-runtime-library=false vl=2 reassociate-fp-reductions=true enable-index-optimizations=true
 // RUN: %{compile} | env %{env} %{run} | FileCheck %s
 //
 // Do the same run, but now with direct IR generation and VLA vectorization.
diff --git a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
index 317c7af990f78c4..81cd2d81cbbc324 100644
--- a/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
+++ b/mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output_bf16.mlir
@@ -5,10 +5,10 @@
 // config could be moved to lit.local.cfg. However, there are downstream users that
 //  do not use these LIT config files. Hence why this is kept inline.
 //
-// DEFINE: %{sparse_compiler_opts} = enable-runtime-library=true
-// DEFINE: %{sparse_compiler_opts_sve} = enable-arm-sve=true %{sparse_compiler_opts}
-// DEFINE: %{compile} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts}"
-// DEFINE: %{compile_sve} = mlir-opt %s --sparse-compiler="%{sparse_compiler_opts_sve}"
+// DEFINE: %{sparsifier_opts} = enable-runtime-library=true
+// DEFINE: %{sparsifier_opts_sve} = enable-arm-sve=true %{sparsifier_opts}
+// DEFINE: %{compile} = mlir-opt %s --sparsifier="%{sparsifier_opts}"
+// DEFINE: %{compile_sve} = mlir-opt %s --spars...
[truncated]
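
For downstream users who drive the same pipeline from the MLIR Python bindings (as the benchmark script touched by this patch does), the rename implies the same textual change in the pipeline string. A minimal sketch, assuming the upstream `mlir` Python package and that the pipeline is now registered as `sparsifier` (formerly `sparse-compiler`); the helper name and default options below are illustrative:

# Sketch: run the renamed pipeline via the MLIR Python bindings.
from mlir import ir
from mlir.passmanager import PassManager

def sparsify(asm: str, options: str = "enable-runtime-library=false") -> ir.Module:
    with ir.Context():
        module = ir.Module.parse(asm)
        # Previously: builtin.module(sparse-compiler{...})
        pm = PassManager.parse(f"builtin.module(sparsifier{{{options}}})")
        # Older bindings may expect pm.run(module) instead of the operation.
        pm.run(module.operation)
        return module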

github-actions bot commented Nov 9, 2023

✅ With the latest revision this PR passed the C/C++ code formatter.

@TimAtGoogle TimAtGoogle force-pushed the rename_all_occurences_of_sparse_compiler_to_sparsifier branch from 6e35b44 to bdee905 on November 13, 2023 at 23:49
@TimAtGoogle TimAtGoogle force-pushed the rename_all_occurences_of_sparse_compiler_to_sparsifier branch from bdee905 to 9735a54 on November 15, 2023 at 17:14
@@ -0,0 +1,39 @@
# Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
Contributor

I would expect a rename from "sparse_compiler.py"
(or at least a removal of that file then?)

Contributor Author

I just pushed it with the file removed, so give it a second to check formatting.

Contributor

I don't see the rename/removal yet?

-class SparseCompiler:
-    """Sparse compiler class for compiling and building MLIR modules."""
+class Sparsifier:
+    """sparsifier class for compiling and building MLIR modules."""
Contributor

I would make this a capital S, as in "Sparsifier class for ..." to be consistent with the original
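
A possible version incorporating that suggestion (a sketch only; the constructor and its fields are illustrative and not the exact upstream helper API):

class Sparsifier:
    """Sparsifier class for compiling and building MLIR modules."""

    def __init__(self, options: str, opt_level: int, shared_libs: str):
        # Illustrative fields; the real tools helper may differ.
        self.pipeline = f"builtin.module(sparsifier{{{options}}})"
        self.opt_level = opt_level
        self.shared_libs = shared_libs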

@TimAtGoogle TimAtGoogle merged commit dce7a7c into llvm:main Nov 15, 2023
zahiraam pushed a commit to zahiraam/llvm-project that referenced this pull request Nov 20, 2023
…to instead use "sparsifier" (llvm#71875)

The changes in this PR mostly center on the tests that use the
flag sparse_compiler (also: sparse-compiler).