Implement sparse kernel benchmarks (moderate level, independent, starter) #51650
Comments
assigned to @SaurabhJha
Hi Aart, I would like to start on this. I have contributed to Clang before, but this will be my first work in MLIR. You have listed three things needed to resolve this issue. Should I start with (1), look at how LLVM uses benchmarks, and post my findings here? We can decide on the next steps after that. Thank you,
Hi Saurabh,
Hi Aart, LLVM uses the Google Benchmark library (https://github.com/google/benchmark) for microbenchmarks. The example I looked at in particular is libc: https://github.com/llvm/llvm-project/tree/main/libc/benchmarks. We could start the same way for MLIR by adding a benchmarks directory and setting up Google Benchmark for it, similar to https://github.com/llvm/llvm-project/blob/main/libc/benchmarks/CMakeLists.txt.

Additionally, there is an external repo, https://github.com/llvm/llvm-test-suite, for the LLVM test suites, which contains microbenchmarks, some example applications to test/benchmark, and external suites like SPEC (which are not included in the test-suite repo itself). I don't think this model of keeping tests in a separate repo is relevant for the purposes of this ticket, but I wanted to mention it.

Later, we would want to integrate the MLIR benchmarks into Buildbot using these instructions, https://llvm.org/docs/HowToAddABuilder.html, which you mentioned in the third point of your numbered list.

Please let me know your thoughts, or whether you want more investigation before getting started here. Thanks,
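For context, the libc CMakeLists pattern linked above could be adapted for MLIR along roughly the following lines. This is a sketch only: the directory, target, and file names are hypothetical and not taken from the LLVM tree.

```cmake
# Hypothetical mlir/benchmarks/CMakeLists.txt sketch (all names illustrative).
# Fetch Google Benchmark and register one benchmark binary against it.
include(FetchContent)
FetchContent_Declare(
  googlebenchmark
  GIT_REPOSITORY https://github.com/google/benchmark.git
  GIT_TAG        v1.8.3
)
# Skip Google Benchmark's own unit tests to keep the build lean.
set(BENCHMARK_ENABLE_TESTING OFF CACHE BOOL "" FORCE)
FetchContent_MakeAvailable(googlebenchmark)

add_executable(MLIRSparseBenchmarks SparseKernelBenchmark.cpp)
# benchmark_main provides the main() entry point that runs registered benchmarks.
target_link_libraries(MLIRSparseBenchmarks PRIVATE benchmark::benchmark_main)
```

In practice, an in-tree integration would more likely reuse LLVM's existing CMake infrastructure and a vendored copy of the library, as libc does, rather than FetchContent.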
Hi! This issue may be a good introductory issue for people new to working on LLVM. If you would like to work on this issue, your first steps are:
For more instructions on how to submit a patch to LLVM, see our documentation. If you have any further questions about this issue, don't hesitate to ask via a comment on this GitHub issue. @llvm/issue-subscribers-good-first-issue
Hi @Endilll @aartbik @SaurabhJha, if no one is working on this issue, I would like to take it on as a way to get started with the LLVM project. Can you please assign this issue to me? Also, @SaurabhJha, can you please share your progress on it?
Having pure MLIR-source benchmarks has become a bit less interesting for our current effort. We are very interested in actually designing a suite in, e.g., PyTorch and using our current end-to-end ML compiler based on MLIR to test performance. Is that of interest to you too?
Extended Description
The sparse compiler relies on FileCheck-based tests (https://github.com/llvm/llvm-project/tree/main/mlir/test/Dialect/SparseTensor) and "regression" tests (https://github.com/llvm/llvm-project/tree/main/mlir/test/Integration/Dialect/SparseTensor/CPU). These tests make sure that the generated IR is as expected and that the lowering runs "end to end". Most of these tests were developed in conjunction with particular features as they were being added.
However, we do not yet have any benchmarks that measure the performance of the generated code (and ensure that later changes to the sparse compiler do not regress it).
This entry requests adding such benchmarks to MLIR, which requires:
(1) investigating the typical way in which LLVM at large integrates benchmarks,
(2) finding interesting sparse kernels to implement and measure, and
(3) integrating such tests into a continuous build (or at least a frequently run system).
This can act as a good independent starter task (since it does not require intimate knowledge of the inner workings of MLIR's sparse compiler). The level is moderate, however, since the engineering work is non-trivial.
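To make point (2) concrete, a classic sparse kernel worth measuring is sparse matrix-vector multiplication (SpMV) over a CSR-format matrix. The sketch below is a plain-Python model (standard library only, all names hypothetical) of what such a benchmark measures; a real MLIR benchmark would time the code generated by the sparse compiler, not a Python implementation.

```python
import random
import timeit

def dense_to_csr(dense):
    """Convert a dense row-major matrix to CSR: (values, col_indices, row_ptr)."""
    values, col_indices, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))  # each row ends where the next begins
    return values, col_indices, row_ptr

def spmv(values, col_indices, row_ptr, x):
    """Compute y = A @ x for a CSR matrix A, touching only nonzero entries."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_indices[k]]
        y[i] = acc
    return y

if __name__ == "__main__":
    random.seed(0)
    n, density = 512, 0.05  # 512x512 matrix, ~5% nonzeros
    dense = [[random.random() if random.random() < density else 0.0
              for _ in range(n)] for _ in range(n)]
    values, cols, ptr = dense_to_csr(dense)
    x = [1.0] * n
    secs = timeit.timeit(lambda: spmv(values, cols, ptr, x), number=100)
    print(f"SpMV {n}x{n}, {len(values)} nonzeros: {secs / 100 * 1e6:.1f} us/iter")
```

The interesting axis for a benchmark suite is how runtime scales with matrix size and sparsity pattern (uniform random vs. banded vs. real matrices), since that is precisely where a sparse compiler's generated code can regress.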