
[Relax][PyTorch] Add support for sparse matrix multiplication#18499

Merged
tlopex merged 10 commits into apache:main from Dayuxiaoshui:main
Nov 28, 2025

Conversation


@Dayuxiaoshui (Contributor) commented Nov 25, 2025

This commit adds support for sparse matrix multiplication and random number generation in the PyTorch frontend.

Changes:

  • Add _sparse_mm() method to handle sparse matrix multiplication
  • Add _sparse_addmm() method to handle sparse addmm operations
  • Register these operations in the convert_map

The fix ensures that PyTorch models containing sparse matrix operations
and random number generation can be successfully converted to TVM Relax modules.

Fixes #18476
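The densify-then-multiply strategy behind these handlers can be sketched in pure Python. This is an illustration only, with hypothetical helper names; the actual frontend uses PyTorch's `to_dense()` on the sparse operand and then emits the ordinary dense matmul lowering.

```python
# Pure-Python sketch of the densify-then-multiply strategy used for
# _sparse_mm: the sparse operand is materialized as dense first, then a
# regular matmul is applied. Names here are illustrative, not TVM APIs.

def coo_to_dense(shape, indices, values):
    """Materialize a COO-format sparse matrix as a dense list of lists."""
    rows, cols = shape
    dense = [[0.0] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        dense[r][c] = v
    return dense

def matmul(a, b):
    """Plain dense matrix multiplication."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def sparse_mm(sparse_shape, indices, values, dense_rhs):
    # Mirrors the converter: densify the sparse operand, then reuse the
    # ordinary dense matmul path.
    return matmul(coo_to_dense(sparse_shape, indices, values), dense_rhs)
```

For example, a 2x2 COO matrix with values 2.0 and 3.0 on the diagonal, multiplied by the identity, yields the dense diagonal matrix back.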

@gemini-code-assist

Summary of Changes

Hello @Dayuxiaoshui, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the PyTorch frontend for TVM Relax by introducing support for several key operations. It enables the conversion of PyTorch models that utilize sparse matrix multiplications and random number generation, as well as as_strided operations, into TVM Relax modules. This broadens the range of PyTorch models that can be successfully compiled and optimized by TVM.

Highlights

  • Sparse Matrix Operations: Added support for PyTorch sparse matrix multiplication (_sparse_mm) and sparse addmm operations (_sparse_addmm) within the Relax frontend, converting sparse inputs to dense for processing.
  • Random Number Generation: Implemented support for torch.randn by introducing a _randn method, which currently acts as a placeholder by emitting relax.op.zeros until native random number generation is available in TVM.
  • Tensor Conversion Utility: Introduced a new static method _convert_pytorch_tensor_to_tvm to handle the robust conversion of PyTorch tensors to TVM tensors, including the necessary conversion of sparse PyTorch tensors to dense format and a fallback mechanism for DLPack conversion failures.
  • Strided Tensor Support: Added support for torch.as_strided operations, specifically for view-like cases where the provided strides align with a contiguous layout, translating them to relax.op.reshape.
  • Operation Registration: Registered the newly added _sparse_mm.default, _sparse_addmm.default, mul, as_strided.default, and randn.default operations in the create_convert_map to enable their translation during the conversion process.
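The contiguity test behind the as_strided support (view-like cases only, lowered to relax.op.reshape) can be sketched as follows. This is a pure-Python illustration under the assumption of row-major layout; the helper names are hypothetical, not the actual frontend functions.

```python
# Sketch of the check behind the as_strided handler: the op can be lowered
# to a plain reshape only when the requested strides are exactly the
# row-major (contiguous) strides implied by the requested shape.

def contiguous_strides(shape):
    """Row-major strides: each stride is the product of the trailing dims."""
    strides = []
    running = 1
    for dim in reversed(shape):
        strides.append(running)
        running *= dim
    return list(reversed(strides))

def as_strided_is_view_like(shape, strides):
    """True when as_strided(shape, strides) is equivalent to a reshape."""
    return list(strides) == contiguous_strides(list(shape))
```

For a shape of (2, 3, 4) the contiguous strides are (12, 4, 1); any other stride pattern describes an overlapping or gapped view that a reshape cannot express.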

@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for sparse matrix multiplication and random number generation from PyTorch's ExportedProgram to Relax. The implementation for sparse operations (_sparse_mm, _sparse_addmm) cleverly works by converting sparse tensors to dense, which is a practical approach for enabling model conversion. For torch.randn, a placeholder of zeros is used, with clear comments about this limitation. The PR also adds a handler for as_strided for contiguous memory layouts and refactors tensor conversion logic into a new _convert_pytorch_tensor_to_tvm helper, which cleans up the code and centralizes tensor handling. Overall, the changes are well-implemented and improve the frontend's capabilities. I have one minor suggestion to simplify the code.

…dom number generation

This commit adds support for sparse matrix multiplication and random number generation in PyTorch frontend.

Changes:
- Add _sparse_mm() method to handle sparse matrix multiplication
- Add _sparse_addmm() method to handle sparse addmm operations
- Add _randn() method to handle torch.randn random number generation
- Register these operations in the convert_map

The fix ensures that PyTorch models containing sparse matrix operations
and random number generation can be successfully converted to TVM Relax modules.

Fixes #18476
Resolved merge conflict by keeping our sparse matrix operations support:
- _sparse_mm() method
- _sparse_addmm() method
- _randn() method
- Registration in convert_map

Fixes #18476

Dayuxiaoshui commented Nov 25, 2025

@tlopex Request Review


tlopex commented Nov 26, 2025

@Dayuxiaoshui Could you add tests for those new ops?

@Dayuxiaoshui

@tlopex No problem, I'll add them.

…number generation

This commit adds support for sparse matrix multiplication, sparse addmm,
and random number generation in PyTorch frontend.

Changes:
- Add _sparse_mm() method to handle sparse matrix multiplication
- Add _sparse_addmm() method to handle sparse addmm operations
- Add _randn() method to handle torch.randn random number generation
- Register these operations in the convert_map
- Add comprehensive tests for all three new operations

The implementation converts sparse tensors to dense format before
matrix operations, which enables model conversion for PyTorch models
containing sparse computations.

Fixes #18476
@Dayuxiaoshui

@tlopex By the way, I checked and fixed the formatting of the test file.

…n output

PyTorch's run_decompositions() decomposes _sparse_mm.default into
full.default + _sparse_addmm.default (with beta=0). This commit updates
the expected IR in test_sparse_mm to include the R.full operation that
is generated by the decomposition.
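The decomposition described above relies on the addmm semantics out = beta * input + alpha * (mat1 @ mat2): with beta = 0 and an input created by full(shape, 0), addmm reduces to a plain mm, which is why the expected IR gains an R.full node. A pure-Python check of that identity (illustrative function names, not PyTorch or TVM APIs):

```python
# Demonstrates why full(0) + addmm(beta=0) is equivalent to mm:
# addmm computes beta * inp + alpha * (mat1 @ mat2), so beta = 0 leaves
# only the matrix product regardless of the (zero-filled) input tensor.

def mm(a, b):
    """Plain dense matrix multiplication on lists of lists."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def addmm(inp, mat1, mat2, beta=1.0, alpha=1.0):
    """out = beta * inp + alpha * (mat1 @ mat2)."""
    prod = mm(mat1, mat2)
    return [[beta * inp[i][j] + alpha * prod[i][j]
             for j in range(len(prod[0]))] for i in range(len(prod))]

def full(shape, value):
    """A rows x cols matrix filled with a constant value."""
    rows, cols = shape
    return [[value] * cols for _ in range(rows)]
```

With beta = 0, addmm(full((n, m), 0.0), a, b) produces the same values as mm(a, b), matching the decomposed IR.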
@Dayuxiaoshui

@tlopex


tlopex commented Nov 27, 2025

Overall LGTM. Please remove the randn func. Besides, the added trailing commas in the TVMScript blocks introduce formatting inconsistencies with the existing tests. Please reformat and revert them, keeping only the logical changes. Thanks!

Add test_sparse_addmm and test_sparse_mm to verify sparse tensor
operations. Also remove randn support as requested.
@tlopex left a comment


LGTM. Thank you!


# Create a tensor filled with zeros (as placeholder)
# In practice, this should use a random number generator
# For now, we use zeros as a workaround since TVM doesn't have built-in randn
tlopex (Member) commented:

I think we shouldn't ship a placeholder for a function until a real implementation is available. So for now I think we don't need it.

Dayuxiaoshui (Author) replied:

No problem, I agree with this point of view.

@tlopex tlopex merged commit 25a37e7 into apache:main Nov 28, 2025
10 checks passed
@tlopex tlopex changed the title [Relax][PyTorch] Add support for sparse matrix multiplication and random number generation [Relax][PyTorch] Add support for sparse matrix multiplication Nov 28, 2025


Development

Successfully merging this pull request may close these issues.

[Feature Request] Support for sparse matrix multiplication and random number generation in PyTorch frontend

2 participants