fix: Added MPS support in float fallback backends list #45687
Conversation
i think it makes more sense if we continue using int as the default for any hardware we are not aware of, because it's better to have the best perf on as many hardware backends as possible and only use float where int breaks.
because int histograms are easier to compute than floating-point ones
@IlyasMoutawwakil I agree with you — using int by default makes sense for performance whenever it's reliably supported, since integer histograms are generally faster and cheaper to compute.
Because of that, I went with the explicit approach in this PR: int on CUDA (where we know it works well and gives the best perf) and float32 everywhere else for broad compatibility.
Given that the large majority of users (especially for MoE models) run on CUDA, this approach gives them the best performance while ensuring reliable behavior on other backends like MPS. We can always expand the int64 support list later as more backends (ROCm, XPU, etc.) stabilize.
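To make the shape of this concrete, here is a minimal sketch of that selection logic, assuming a `histc_input`-style helper (the set name and exact structure are illustrative, not necessarily the PR's final code):

```python
import torch

# Illustrative sketch of the dtype selection described above: keep int64 only
# on backends known to support integer torch.histc, cast to float32 elsewhere.
_INT64_HISTC_BACKENDS = {"cuda"}  # could later grow, e.g. {"cuda", "xpu"}


def histc_input(indices: torch.Tensor) -> torch.Tensor:
    if indices.device.type in _INT64_HISTC_BACKENDS:
        return indices
    return indices.to(torch.float32)
```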
It feels like this is a classic performance-vs-portability tradeoff.
One question I had: do we know how significant the performance difference is in practice for MoE workloads? If the gap is large, it might be worth keeping int as the default and falling back only when needed; if the difference is small, the current approach (float for non-CUDA) seems safer. I'm also wondering whether a "try int → fall back to float" approach could be acceptable if the failure is rare and predictable.
Yes, that's correct. However, the main issue here is more about hardware reality than a classic performance-vs-portability tradeoff. Integer histograms via torch.histc are still poorly supported outside of CUDA: they tend to fail on MPS, CPU, and several other backends (HPU, TPU, etc.), and broader support may take a long time to mature. My current approach uses int only on CUDA (where we get the best performance) and float32 everywhere else, which gives strong performance for the majority of MoE users while ensuring reliable behavior across hardware.
Regarding the "try int first, then fall back to float" idea: I considered it as well. While it could offer better performance on unknown hardware, I'm concerned it might introduce subtle issues such as problems with torch.compile, Python exception handling interfering with the memory cache, or inconsistent states on backends like Metal (MPS). That said, I might be overthinking it, so I'd appreciate the maintainers' take on how conservative to be here.
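For reference, the "try int first, then fall back to float" variant discussed above would look roughly like this (a hypothetical sketch, deliberately not what the PR implements):

```python
import torch

def histc_with_fallback(x: torch.Tensor, bins: int, max_val: int) -> torch.Tensor:
    # Hypothetical fallback variant, not the PR's approach. The except path is
    # the risky part: it can break torch.compile tracing, and the failed kernel
    # dispatch may interact badly with caching on backends like MPS.
    try:
        return torch.histc(x, bins=bins, min=0, max=max_val)
    except RuntimeError:  # NotImplementedError is a RuntimeError subclass
        return torch.histc(x.float(), bins=bins, min=0, max=max_val)
```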
i wouldn't wanna penalize xpu and xla/tpu for example or any other custom torch backend that supports int histc |
Thank you for the response. I now see that this repo follows a performance-first, stability-second approach, which is a valid stance. I had initially gotten the priorities the other way around. |
basically we've had this experts impl for a couple months now, and no reports from xpu/hpu. so i'd say they probably already support int, otherwise they'd be the first to report it (we have a couple contributors from intel). |
That's reassuring. |
IlyasMoutawwakil left a comment:
LGTM! thanks for the fix
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
I couldn't reproduce the failure with this command on my local device: `uv run pytest tests/cli/test_download.py::test_cli_download_trust_remote`. Perhaps the local cache and the automated test cache are handled differently?
Hi @IlyasMoutawwakil @ArthurZucker @Cyrilvallez, could you check the latest change? CI was failing due to a distributed race condition with pytest-xdist. |
@rigen1048 thanks for investigating, that's very helpful! let's keep it in a separate PR tho since it would require review from our ci/testing maintainers.
Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
…5687)

* fix: Made histc_input robust for broader hardware
* Fix lint issues with Ruff
* fix mps support
* Apply suggestion from @IlyasMoutawwakil
* rerun ci
* potential fix: ModuleNotFoundError caused by distributed race condition
* Apply suggestions from code review

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>

---------

Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
What does this PR do?
Fixes a `NotImplementedError: "histogram_mps" not implemented for 'Int'` when running Mixture-of-Experts (MoE) models on Apple Silicon (MPS backend).
The error occurred in `src/transformers/integrations/moe.py` because `torch.histc` does not support integer dtypes on MPS, and the original condition failed to account for the MPS backend. This PR improves the logic by checking specifically for CUDA instead, since float operations are more reliably supported across a wider range of hardware, including legacy devices. A minimal reproduction is sketched after the summary below.
Before:

* `float` on CPU
* `int` on all other backends (CUDA, MPS, XPU, TPU, etc.)

After:

* `int` on CUDA (best performance)
* `float32` on all other backends (CPU, MPS, XPU, etc.)

Fixes #45685
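For context, a minimal reproduction of the original failure on Apple Silicon (the tensor shape and bin count here are illustrative):

```python
import torch

if torch.backends.mps.is_available():
    routed = torch.randint(0, 8, (1024,), device="mps")  # int64 expert indices
    torch.histc(routed.float(), bins=8, min=0, max=7)    # float input: works
    torch.histc(routed, bins=8, min=0, max=7)            # int input: raises
    # NotImplementedError: "histogram_mps" not implemented for 'Int'
```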
Code Agent Policy
The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by code agents, and we are bottlenecked by our ability to review and respond to them. As a result, we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.
PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.
This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.
Before submitting
Was this discussed/approved via a GitHub issue? Yes, discussed in #45685.
Did you make sure to update the documentation with your changes? See the documentation guidelines and the tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
(`grouped_mm_experts_forward` logic)