[Relax][PyTorch] Add support for sparse matrix multiplication #18499
tlopex merged 10 commits into apache:main from
Conversation
Summary of Changes (@gemini-code-assist): This pull request enhances the PyTorch frontend for TVM Relax by introducing support for several key operations, enabling the conversion of PyTorch models that use sparse matrix multiplications and random number generation.
Code Review
This pull request introduces support for sparse matrix multiplication and random number generation from PyTorch's ExportedProgram to Relax. The implementation for sparse operations (`_sparse_mm`, `_sparse_addmm`) works by converting sparse tensors to dense, which is a practical approach for enabling model conversion. For `torch.randn`, a placeholder of zeros is used, with clear comments about this limitation. The PR also adds a handler for `as_strided` for contiguous memory layouts and refactors tensor conversion logic into a new `_convert_pytorch_tensor_to_tvm` helper, which cleans up the code and centralizes tensor handling. Overall, the changes are well-implemented and improve the frontend's capabilities. I have one minor suggestion to simplify the code.
…dom number generation

This commit adds support for sparse matrix multiplication and random number generation in the PyTorch frontend.

Changes:
- Add `_sparse_mm()` method to handle sparse matrix multiplication
- Add `_sparse_addmm()` method to handle sparse addmm operations
- Add `_randn()` method to handle `torch.randn` random number generation
- Register these operations in the convert_map

The fix ensures that PyTorch models containing sparse matrix operations and random number generation can be successfully converted to TVM Relax modules.

Fixes #18476
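The densify-then-matmul strategy described above can be sketched without TVM itself. The snippet below is a minimal numpy illustration (the function names `coo_to_dense` and `sparse_mm` are hypothetical, not the frontend's actual helpers): a COO sparse matrix is scattered into a dense array, after which a regular matmul applies.

```python
import numpy as np

def coo_to_dense(indices, values, shape):
    # Densify a COO sparse matrix by scattering each value at its (row, col).
    dense = np.zeros(shape, dtype=values.dtype)
    for (r, c), v in zip(indices, values):
        dense[r, c] += v
    return dense

def sparse_mm(indices, values, shape, dense_b):
    # Mirror of the PR's approach: convert the sparse operand to dense,
    # then fall back to an ordinary matrix multiplication.
    return coo_to_dense(indices, values, shape) @ dense_b

# 2x2 sparse matrix [[0, 3], [4, 0]] multiplied by the identity.
idx = [(0, 1), (1, 0)]
vals = np.array([3.0, 4.0])
out = sparse_mm(idx, vals, (2, 2), np.eye(2))
```

The trade-off is that densification loses the memory savings of the sparse format, but it lets any backend without native sparse kernels run the model.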
Resolved merge conflict by keeping our sparse matrix operations support:
- `_sparse_mm()` method
- `_sparse_addmm()` method
- `_randn()` method
- Registration in convert_map

Fixes #18476
@tlopex Request review.
@Dayuxiaoshui Could you add tests for those new ops?
@tlopex No problem, I'll add it.
…number generation

This commit adds support for sparse matrix multiplication, sparse addmm, and random number generation in the PyTorch frontend.

Changes:
- Add `_sparse_mm()` method to handle sparse matrix multiplication
- Add `_sparse_addmm()` method to handle sparse addmm operations
- Add `_randn()` method to handle `torch.randn` random number generation
- Register these operations in the convert_map
- Add comprehensive tests for all three new operations

The implementation converts sparse tensors to dense format before matrix operations, which enables model conversion for PyTorch models containing sparse computations.

Fixes #18476
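The `_sparse_addmm` operation added in this commit follows PyTorch's addmm semantics, `beta * bias + alpha * (mat1 @ mat2)`, with the sparse `mat1` densified first. A minimal numpy sketch of those semantics (the function name `sparse_addmm` here is illustrative, not the frontend's actual method):

```python
import numpy as np

def sparse_addmm(bias, indices, values, shape, mat2, beta=1.0, alpha=1.0):
    # addmm semantics: beta * bias + alpha * (mat1 @ mat2),
    # where the sparse mat1 (given in COO form) is densified first.
    dense = np.zeros(shape, dtype=values.dtype)
    for (r, c), v in zip(indices, values):
        dense[r, c] += v
    return beta * bias + alpha * (dense @ mat2)

# bias of ones, sparse mat1 = [[0, 3], [4, 0]], mat2 = identity.
bias = np.ones((2, 2))
out = sparse_addmm(bias, [(0, 1), (1, 0)], np.array([3.0, 4.0]), (2, 2), np.eye(2))
```

With the default `beta=1, alpha=1` this reduces to `bias + mat1 @ mat2`, which is what the tests for the new op exercise.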
@tlopex By the way, I checked and fixed the formatting of the test file.
…n output

PyTorch's `run_decompositions()` decomposes `_sparse_mm.default` into `full.default` + `_sparse_addmm.default` (with `beta=0`). This commit updates the expected IR in `test_sparse_mm` to include the `R.full` operation that is generated by the decomposition.
Overall LGTM. Please remove the `_randn` placeholder.
Add test_sparse_addmm and test_sparse_mm to verify sparse tensor operations. Also remove randn support as requested.
# Create a tensor filled with zeros (as placeholder)
# In practice, this should use a random number generator
# For now, we use zeros as a workaround since TVM doesn't have built-in randn
I think we shouldn't add support for a function until a valid implementation is available, so for now I think we don't need it.
No problem, I agree with this point of view.