| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
3,034,855,767 | static cuda launcher causes `RuntimeError: CUDA driver error: invalid device context` in torchtitan CI | bdhirsh | closed | [
"oncall: pt2",
"module: inductor",
"compile-cache"
] | 1 | CONTRIBUTOR | Here's a recent torchtitan CI job failure: https://github.com/pytorch/torchtitan/actions/runs/14691831856/job/41228192364#step:14:617
the repro command from torchtitan according to @tianyu-l is:
```
./run_train.sh --training.compile --activation_checkpoint.mode selective --activation_checkpoint.selective_ac_option op...
```
| true |
3,034,853,080 | [dynamic shapes] use try-catch instead of guard_or_true for reshape_view_helper | pianpwk | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: fx",
"ciflow/pull"
] | 17 | CONTRIBUTOR | Test Plan: test_export
Differential Revision: D74033649
| true |
3,034,823,243 | [export] Add draft-export docs | angelayi | closed | [
"Merged",
"ciflow/trunk",
"release notes: export"
] | 4 | CONTRIBUTOR | Sample page: https://docs-preview.pytorch.org/pytorch/pytorch/152637/draft_export.html
cc @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @suo @ydwu4 @penguinwu | true |
3,034,810,915 | Switch to metal kernel for mul | skotapati | closed | [
"open source",
"release notes: mps",
"ciflow/mps"
] | 2 | COLLABORATOR | Draft PR
| true |
3,034,800,361 | TestFlexAttentionCUDA.test_GQA_score_mod7_cuda_float16 fails on h100 | BoyuanFeng | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0 | CONTRIBUTOR | Command to repro (this fails on H100):
```
python test/inductor/test_flex_attention.py TestFlexAttentionCUDA.test_GQA_score_mod7_cuda_float16
```
Error:
```
File "/data/users/boyuan/pytorch/test/inductor/test_flex_attention.py", line 412, in _check_equal
self.assertTrue(False, "Output/Grad with NaN")
AssertionEr...
```
| true |
3,034,780,601 | Incorrect strides for `nonzero_static` compilation | GMNGeoffrey | open | [
"triaged",
"oncall: pt2",
"module: fakeTensor",
"module: pt2-dispatcher"
] | 0 | NONE | ### 🐛 Describe the bug
I am getting an output from `nonzero_static` with incorrect strides after being `torch.compile`'d. In older versions of torch, this manifests as some sort of runtime failure (I first encountered it as a GPU crash, which wasn't fun). In the latest stable and nightly versions, I'm instead seeing ... | true |
3,034,772,647 | [ca] wrap flex attention tests with compiled autograd | xmfan | open | [
"module: inductor",
"ciflow/inductor"
] | 2 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152633
* #152119
* #151962
* #151731
* #151860
* #149707
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauha... | true |
3,034,767,709 | DISABLED test_torchvision_models_efficientnet_v2_l (__main__.TestVisionTracing) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"module: fx"
] | 1 | NONE | Platforms: asan, linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_torchvision_models_efficientnet_v2_l&suite=TestVisionTracing&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41498178561).
Ove... | true |
3,034,665,581 | Fix two error messages involving Tensor.dense() | mhogervo | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo"
] | 5 | CONTRIBUTOR | Two error messages in the codebase instruct the user to use `Tensor.dense()`. This method doesn't exist, but `Tensor.to_dense()` does, and that is what the user should use instead.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @k... | true |
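As a quick reference, the API the corrected messages should point users to (example tensor chosen arbitrarily):

```python
# Tensor.dense() does not exist; Tensor.to_dense() is the actual method the
# error messages should name. Arbitrary sparse tensor for illustration.
import torch

s = torch.tensor([[0.0, 1.0], [2.0, 0.0]]).to_sparse()
d = s.to_dense()
print(d)  # tensor([[0., 1.], [2., 0.]])
```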
3,034,561,957 | [ROCm] Initial AITER Integration for mha_bwd asm kernels | alugorey | open | [
"module: rocm",
"triaged",
"open source",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Generates AITER plumbing via cmake. Calls into fav3 asm bwd CK kernels.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
3,034,508,195 | [DCP] Add 30min timeout for IPC communications in async checkpointing | MeetVadakkanchery | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (checkpoint)",
"oncall: distributed checkpointing"
] | 6 | CONTRIBUTOR | Summary:
### Diff Context
- Sometimes the background process can get stuck processing an async checkpoint request, and trainer shutdown can occur before the background process completes.
- Fix: time out the thread while reading the IPC queue for a response from the background process.
Differential Revision: D74017700
... | true |
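The fix described above follows a common pattern; here is a plain-stdlib sketch of bounding an IPC-queue read with a timeout (all names are illustrative, not DCP's actual API).

```python
# Illustrative sketch only: bound a queue read with a timeout so the trainer
# is not stuck forever waiting on the background checkpoint process.
# read_response and the queue contents are hypothetical, not DCP internals.
import queue

def read_response(q, timeout_s=30 * 60):  # 30-minute bound, per the summary
    try:
        return q.get(timeout=timeout_s)
    except queue.Empty:
        raise TimeoutError("background checkpoint process did not respond")

q = queue.Queue()
q.put("checkpoint-done")
print(read_response(q))  # checkpoint-done
```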
3,034,485,136 | Make PGO code state not sensitive to file path by hashing file content when the file is available. | laithsakka | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152628
In some internal frameworks, on second attempts the actual code is copied to a different path than on previous attempts,
but it is still the same code. PGO will not work in those cases due to the following: state entries before this... | true |
3,034,474,489 | [v2.7.1] Release Tracker | atalman | open | [
"oncall: releng",
"triaged",
"release tracker"
] | 11 | CONTRIBUTOR | This issue is for tracking cherry-picks to the release branch. The following is the [release branch](https://github.com/pytorch/pytorch/tree/release/2.7) for the 2.7.1 release.
Our plan from this point is roughly the following:
* Phase 1 (until 5/19): Cherry-pick post deadline (End of day 5PM PST)
* Phase 2 (after 5/19): Pe... | true |
3,034,453,325 | [Dynamo] Guard serialization for TENSOR_SUBCLASS_METADATA_MATCH | jbschlosser | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/pull"
] | 15 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152872
* #152865
* #152730
* #152729
* #152728
* #152727
* #152725
* #152724
* #152704
* __->__ #152626
This PR updates `GuardsStatePickler.reducer_override()` in `torch/_dynamo/guards.py` to handle reconstruction of traceable wrapper subc... | true |
3,034,451,300 | [CP] Fix the offsets to KV in backward | fegin | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152625
This is more semantically correct even though we currently assume KV have the same lengths.
cc @H-Huang @awgu @wanchaol @fduwjj @wz337 @wconstab @d4l3k | true |
3,034,423,623 | [pytree] make `tree_*` functions accept both Python and C++ `PyTreeSpec` | XuehaiPan | open | [
"open source",
"ciflow/trunk",
"topic: not user facing",
"module: pytree",
"module: dynamo",
"ciflow/inductor"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #148328
* #148180
* #137400
* __->__ #152624
cc @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,034,421,736 | modded-nanogpt flaky NCCL hang starting 3/30 nightly | xmfan | open | [
"needs reproduction",
"oncall: distributed",
"triaged"
] | 8 | MEMBER | ### 🐛 Describe the bug
From @YouJiaCheng,
> I evaluated performance of other nightly releases:
time and peak memory allocated
0208:≈1470s, 50380MiB
0209:≈1483s, 50380MiB
0301:1484-1487s, 50380MiB
0310:1482-1484s, 52129MiB
0315:1498-1500s, 52129MiB
0330:NCCL Hang first run
0401:NCCL Hang first run
0410:NCCL Hang fir... | true |
3,034,417,696 | Parameterized CUDA Graph Launch | galv | open | [
"open source",
"module: inductor",
"ciflow/inductor"
] | 5 | COLLABORATOR | This is a follow-on to #137318.
The main concern with that PR was robustness: We had no real way of knowing whether or not a particular 8-byte aligned 8-byte value in a parameter was really a pointer. This made it basically impossible to 100% guarantee correctness of replacing the arguments of a cuda graph, since y... | true |
3,034,386,674 | Stop proxy-ing autograd.Function.ctx into the graph | zou3519 | open | [
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152621
The reason we did this before is that it's how our older
autograd.Function x Dynamo interaction worked, but we've since adopted
newer designs that don't actually need the autograd.Function.ctx proxied
into the graph.
W... | true |
3,034,379,833 | BE: Swap functorch --> torch._higher_order_ops | seemethere | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 4 | MEMBER | Summary: Discovered when attempting to resolve arvr builds; should resolve issues around utilizing functorch through export.
Test Plan:
```
buck2 test arvr/mode/linux/opt //arvr/libraries/xrrp/ml/python/test:convert_to_etvk_test
```
Differential Revision: D74013898
| true |
3,034,352,816 | [fbgemm] Implement __obj_flatten__ for LinearPackedParamsBase | hl475 | closed | [
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization"
] | 19 | CONTRIBUTOR |
Differential Revision: D73991241
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 | true |
3,034,342,808 | [CUDA][TF32] Account for TF32 in `test_conv2d_same_padding` | eqy | closed | [
"module: cuda",
"module: convolution",
"open source",
"Merged",
"module: tf32",
"ciflow/trunk",
"topic: not user facing"
] | 3 | COLLABORATOR | cc @ptrblck @msaroufim @jerryzh168 @zasdfgbnm | true |
3,034,298,667 | Pytorch Profiler crashes while using it with Pytorch Lightning module | MKaczkow | open | [
"oncall: profiler"
] | 0 | NONE | ### 🐛 Describe the bug
PyTorch Profiler crashes when used with pytorch-lightning. I am attempting to profile some experiments but keep getting errors like those shown below. I've searched the forum and GitHub issues and I'm aware of the following:
* [issue](https://github.com/pytorch/pytorch/issues/98124) (not relevant -> di... | true |
3,034,253,834 | [dynamo] Guard serialization for FUNCTORCH_STACK_MATCH | zhxchen17 | closed | [
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 9 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* #152721
* #152716
* #152687
* __->__ #152616
* #152615
Make Functorch interpreters serializable most of the time, so that we can save the guards on functorch states.
## Test Cases:
0. torch.compile() without functorch layers... | true |
3,034,253,735 | [dynamo] Guard serialization for DUAL LEVEL. | zhxchen17 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152723
* #152721
* #152716
* #152687
* #152616
* __->__ #152615
Seems the dual level counter should be stored in OutputGraph so that the value can be preserved through round-tripping.
Differential Revision: [D74008786](https://our.internmc.faceb... | true |
3,034,252,160 | [WIP] Make FR vendor generic and try to enable it for gloo | fduwjj | open | [
"oncall: distributed",
"release notes: distributed (c10d)"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152614
* #152563
* #152585
cc @H-Huang @awgu @wanchaol @fegin @wz337 @wconstab @d4l3k | true |
3,034,214,821 | Revert "Cleanup VS 2019 refs in pytorch (#145863)" | xuhancn | open | [
"triaged",
"open source",
"ciflow/binaries",
"ciflow/trunk",
"release notes: releng",
"ciflow/xpu",
"ci-no-td"
] | 3 | COLLABORATOR | This reverts commit b45e6fa707ced2adb68eaf1a2c1ccb389a6283d7.
revert PRs:
https://github.com/pytorch/pytorch/pull/145863
https://github.com/pytorch/pytorch/pull/145319
| true |
3,034,210,237 | Enable AOTI for Metal inductor | malfet | open | [
"enhancement",
"module: mps",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 0 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Now that `torch.compile` is usable for the MPS backend, we should extend it to Python-less environments (including ExecuTorch), and one avenue of enabling this is AOTI
https://github.com/pytorch/pytorch/blob/4c8dee7986d0da5cd8485b8d84323c425d228891/aten/src/ATen/test/mps_test_metal_li... | true |
3,034,187,972 | Makefile: refactor build, setup and lint rules | ariel-anieli | open | [
"triaged",
"open source",
"topic: not user facing"
] | 2 | NONE | Hello maintainers,
This is my first ever PR to the project; your feedback is much appreciated.
I am proposing to refactor some rules in the Makefile. The output is unchanged.
After,
```shell
# make lint -n
lintrunner
# make quicklint -n
lintrunner
# make ios -n
./scripts/build_ios.sh
# make setup...
```
| true |
3,034,170,316 | Update padding_mode type annotation to use Literal type (PaddingMode) | sudiptap | open | [
"triaged",
"open source",
"topic: not user facing"
] | 3 | NONE | Fixes #152280
| true |
3,034,159,846 | [Environment Variable] Use thread-safe getenv functions | cyyever | open | [
"oncall: distributed",
"oncall: jit",
"open source",
"NNC",
"release notes: linalg_frontend"
] | 1 | COLLABORATOR | Use thread-safe getenv wrapper in remaining code.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
3,034,152,488 | [triton pin update] Run Inductor CI on pin updates for Triton and the PyTorch nightly branch | atalman | open | [
"oncall: releng",
"triaged",
"module: user triton"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
We would like to run Inductor CI on Triton pin updates so that we can see any regressions on the pin updates and notice any issues before accepting a pin update.
This will most likely require us to be able to upload Triton on a PR so that it can be tested on Inductor CI:
https://hud.pytorch.org/hud/pytor... | true |
3,034,094,316 | Loops impacting output when utilizing hooks | Thomas2419 | open | [
"module: nn",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
Hello! I believe this is hook-related behavior acting oddly under loops.
When using a hook and a loop I'm getting unexpected output logits. I'm bringing this here as it ONLY happens when I use both, and thus I believe it is some weird hook+loop PyTorch interaction and not a transformers interaction.
... | true |
3,034,063,675 | AOTI regression on SAM and tts-angular | zou3519 | open | [
"high priority",
"triage review",
"triaged",
"oncall: pt2",
"oncall: export",
"module: aotinductor"
] | 4 | CONTRIBUTOR | In aot_inductor_torchbench. See https://hud.pytorch.org/pytorch/pytorch/commit/701c0848b8695daa802c2d7ff2f9177faa6e1fe8#41477577732-box for failing logs.
It looks like these were both previously "pass" but now "fail_to_run", so at least there isn't silent incorrectness.
I'm going to flip the statuses on these so that... | true |
3,034,060,338 | Fix some inductor periodic benchmarks | zou3519 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152621
* __->__ #152605
Some were reporting "pass" consistently on https://hud.pytorch.org/
Those are fine to flip.
I filed a separate issue for the now-regressions for AOTI:
https://github.com/pytorch/pytorch/issues/152606. These should b... | true |
3,034,031,609 | [Testing] Is FindCUDA.cmake from `Modules_CUDA_fix` called at all? | malfet | open | [] | 2 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
| true |
3,034,023,783 | [BE] Delete `Module_CUDA_fix` | malfet | open | [
"release notes: build",
"topic: improvements"
] | 3 | CONTRIBUTOR | We should be using upstream find(CUDA) always, shouldn't we?
| true |
3,034,010,859 | [testing] 4 | zou3519 | closed | [
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152602
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
3,033,950,638 | [multigraph] use backend specializations in compile_and_call_fx_graph | bobrenjc93 | open | [
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152601
* #152597
* #152596
The goal of this multigraph work is to enable a compiled region that has a single dynamo trace but multiple backend specializations. This work was inspired by vLLM who does this in a somewhat hacky way whe... | true |
3,033,950,555 | store backend specializations in StatelessSymbolicContext | bobrenjc93 | closed | [
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152670
* #152601
* __->__ #152600
* #152597
* #152596
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,033,943,220 | [testing] 3 | zou3519 | closed | [
"topic: not user facing",
"ciflow/inductor-periodic"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152599
| true |
3,033,924,328 | [ez] fix grammar mistakes in StatefulSymbolicContext comment | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152650
* #152601
* #152600
* #152597
* #152596
* __->__ #152598
* #151407
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
3,033,924,252 | [multigraph] add backend_specialization kwarg to mark_dynamic | bobrenjc93 | open | [
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152601
* __->__ #152597
* #152596
The goal of this multigraph work is to enable a compiled region that has a single dynamo trace but multiple backend specializations. This work was inspired by vLLM who does this in a somewhat hacky way whe... | true |
3,033,901,683 | [not for review] benchmark script | bobrenjc93 | open | [
"topic: not user facing"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152601
* #152597
* __->__ #152596
| true |
3,033,890,781 | ROCm, 7900 XTX: Pytorch FLASH_ATTENTION SDPA is 2.5x slower than MATH (fp16, head_dim 256, seqlen 4360, 12 heads) | FeepingCreature | open | [
"module: performance",
"module: rocm",
"triaged",
"module: sdpa"
] | 13 | NONE | edit: Title changed to highlight later discovery, original contents preserved for easier reading.
This was originally "ROCm, 7900 XTX: Pytorch SDPA is 2.5x slower than manual implementation with non-continuous v", but it turned out that the non-contiguous v didn't really matter.
---
I was trying to figure out why Au... | true |
3,033,812,815 | [c10d] Add support for ReduceOp::AVG in ProcessGroupMPI for FSDP2 | nariaki3551 | open | [
"oncall: distributed",
"open source",
"release notes: distributed (c10d)"
] | 6 | CONTRIBUTOR | Hi,
Currently, running FSDP2 with the MPI backend fails. This is because `ProcessGroupMPI` does not support `reduce_scatter` with `ReduceOp::AVG`, which is used during the backward pass.
However, most MPI implementations (such as OpenMPI) do not natively support an `AVG` reduce operation. To address this, this patch adds sup... | true |
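The truncated sentence describes the standard workaround; a plain-Python sketch of the idea (sum-reduce, then divide by world size) — the function here is illustrative, not the actual `ProcessGroupMPI` code:

```python
# Illustrative stand-in for the collective: emulate ReduceOp.AVG on a backend
# that only supports SUM by summing across ranks and dividing by world size.
def allreduce_avg(rank_values):
    """rank_values: per-rank lists of floats standing in for gradient shards."""
    world_size = len(rank_values)
    summed = [sum(vals) for vals in zip(*rank_values)]
    return [s / world_size for s in summed]

# Two ranks contribute gradients; every rank receives the element-wise mean.
print(allreduce_avg([[2.0, 4.0], [4.0, 8.0]]))  # [3.0, 6.0]
```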
3,033,791,119 | Flex Attention doesn't scale with custom bias | danjenson | open | [
"triaged",
"oncall: pt2",
"module: pt2-dispatcher",
"module: flex attention"
] | 1 | NONE | ### 🐛 Describe the bug
When using FlexAttention with the following custom `RBFBias`, I cannot get FlexAttention to go above ~L=1600 without OOMing on an NVIDIA 24GB 4090. I also find that this implementation is about 6-8x slower than an implementation in JAX. Is there something I can configure to leverage the memory ... | true |
3,033,632,252 | [rattler-build] Cannot detect CUDA when building from source | hieupth | open | [
"needs reproduction",
"module: build",
"triaged"
] | 1 | NONE | Hi, I am building from source using `rattler-build` with this `recipe.yaml`
```yaml
context:
name: pytorch
version: nightly
rev: main
python: 3.12
gcc: 13.3
cuda: 12.8
cudnn: 9.8
archs: 8.6;9.0;10.0;12.0;12.6
package:
name: ${{name|lower}}
version: ${{version}}
source:
  git: https://github.com/p...
```
| true |
3,033,571,301 | Fix: promote scalar to MPS device in exec_binary_kernel | KAVYANSHTYAGI | open | [
"triaged",
"open source",
"release notes: mps"
] | 3 | NONE | **PR Summary**
This PR fixes an inconsistency in torch.copysign on the MPS backend when used with a scalar as the second operand. Scalars were being promoted to CPU tensors by default, leading to incorrect results due to cross-device operations.
**Repro Before Fix**
import torch
t = torch.tensor([1.0, 2.0, 3.... | true |
3,033,319,153 | Fix #152280: add Literal[…] PaddingMode to Conv modules | AnandVishesh1301 | open | [
"triaged",
"open source",
"release notes: AO frontend"
] | 3 | NONE | ## Description
Updates `padding_mode` type annotations in convolution modules to use `Literal` for improved type safety. This PR builds on #152458 by @sujeet4010, addressing unresolved MYPY errors in `torch/ao/nn/qat/modules/conv.py` and adding test coverage.
## Related Issues
- Resolves #152280 (original issue)
... | true |
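A minimal sketch of what such an annotation looks like (the alias name `PaddingMode` follows the PR title; the four values are the padding modes Conv modules accept):

```python
# Sketch of a Literal-based padding_mode annotation; a type checker rejects
# any string outside the four allowed values at call sites. The runtime
# validator below is illustrative, not part of the PR.
from typing import Literal, get_args

PaddingMode = Literal["zeros", "reflect", "replicate", "circular"]

def validate_padding_mode(mode: str) -> str:
    if mode not in get_args(PaddingMode):
        raise ValueError(f"unsupported padding_mode: {mode!r}")
    return mode

print(validate_padding_mode("reflect"))  # reflect
```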
3,033,318,715 | [Dynamo] Optimize dedupe region ancestor tracking | mlazos | open | [
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"merging"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152589
* #152572
* #152570
* #152506
* #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,033,316,017 | [WIP] suggest whitelist for dynamic shape recompilations | pianpwk | open | [
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2 | CONTRIBUTOR | For this toy example, running with `TORCH_LOGS="recompiles"`:
```
class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin = torch.nn.Linear(4, 4)
self.attr = torch.randn(4)
def forward(self, x, y):
return self.lin(x) + self.attr + y
fn = torch.compi...
```
| true |
3,033,254,808 | [Inductor] Introduce Wrapper IR line for symbolic call args | blaine-rister | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 3 | CONTRIBUTOR | Preparatory refactor for https://github.com/pytorch/pytorch/pull/146942.
This PR introduces a new wrapper IR line to represent symbolic call args. This deletes a little bit of duplicated code between the Python and C++ backends. In the main PR, having a Wrapper IR line for this also tells the FX backend what this pa... | true |
3,033,115,488 | [2/N] Use std::filesystem | cyyever | open | [
"oncall: distributed",
"triaged",
"open source",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 5 | COLLABORATOR | Use std::filesystem in most inductor code. This is follow-up of #152288 .
The check of `std::filesystem::create_directories` has been fixed because it may return false when the directory to create already exists.
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k | true |
3,033,099,094 | [c10d][fr] Decouple the core logic of FR with the entry and event type | fduwjj | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152614
* #152563
* __->__ #152585
We want to make FR generic enough, so the first step is to make FR a template struct so that most of the common code logic can be reused. The reason for this is that CudaEvent does not inherit c10::Even... | true |
3,033,072,933 | How does torch.cudagraph capture a hybrid graph? | ghostplant | closed | [] | 1 | NONE | I have a model containing not only CUDA operations in some places, and also CPU operators in other places. I want to capture the whole graph as a single CUDAGraph to replay. Is it possible in Pytorch? | true |
3,033,072,462 | add support for 0 size shardedTensor and recalculate metadata from all_gather | duduyi2013 | closed | [
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (sharded)"
] | 11 | CONTRIBUTOR | Summary:
Change set:
1. A ShardedTensor could have 0 size initially; the current check won't pass if the size is 0, so that case is added here
2. When we call `ShardedTensor._init_from_local_shards`, it will assume all the metadata is correct, and all_gather to double-check. In the new case, the metadata could be all 0 size, and the tensor h... | true |
3,033,047,728 | [MPS] Binary kernels produce incorrect results when one of the tensor arguments is from a wrapped scalar | qqaatw | closed | [
"triaged",
"module: regression",
"module: correctness (silent)",
"module: mps"
] | 4 | COLLABORATOR | ### 🐛 Describe the bug
Repro:
```python
import torch
tcpu = torch.tensor([1.0,2.0,3.0], device="cpu")
torch.copysign(tcpu, -2.0) # tensor([-1., -2., -3.])
t = torch.tensor([1.0,2.0,3.0], device="mps")
torch.copysign(t, -2.0) # tensor([1., 2., 3.], device='mps:0')
```
Internally, the scalar is wrapped into a cpu t... | true |
3,033,013,295 | [invoke_subgraph] rename identifiers to prevent python mangling | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152494
* #152490
* #152383
* #152384
* __->__ #152581
* #152547
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,032,942,578 | [cutlass backend] cache filtered ops based on layouts | henrylhtsang | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153006
* __->__ #152580
Differential Revision: [D73972687](https://our.internmc.facebook.com/intern/diff/D73972687/)
Add cache to store the list of filtered ops for a specific shape + layout + dtype (aka hash on input_nodes).
cc @voz... | true |
3,032,932,963 | [aoti] skip input symbol codegen for sympy expr w/ many symbols | ColinPeppler | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor (aoti)"
] | 3 | CONTRIBUTOR | Issue was that
- symbol-ids appeared out-of-order w.r.t to the order of the forward inputs
```
def forward(arg0 # [(s3 - 1) + s4, 32], arg1 #[(s3 - 1)] ..)
```
- this causes codegen to fail because it expects all the base symbols `s4,s3` to have been codegen-ed already.
- well, we can skip codegen-ing sympy expr... | true |
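The base-symbol dependency the bullets describe can be seen directly with sympy (symbol names mirror the example above; this is an illustration, not the Inductor codegen itself):

```python
# A composite input size like (s3 - 1) + s4 depends on the base symbols s3 and
# s4, which must already be emitted before the expression can be code-generated.
import sympy

s3, s4 = sympy.symbols("s3 s4")
size_expr = (s3 - 1) + s4
print(sorted(str(s) for s in size_expr.free_symbols))  # ['s3', 's4']
```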
3,032,902,015 | [testing] 1 | zou3519 | closed | [
"release notes: releng",
"fx",
"module: inductor",
"ciflow/inductor",
"ciflow/inductor-periodic"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152578
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
3,032,891,842 | [cutlass backend] Minor lru_cache to slightly speed up filtering ops | henrylhtsang | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td",
"ciflow/inductor-periodic"
] | 15 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152577
For the default level, filtering went from 0.11332 seconds to 0.10064 seconds.
You can't really apply lru_cache too aggressively. For example, hashing a cutlass op takes a long time.
Removing a log further brings it down ... | true |
3,032,875,949 | [Inductor] Fix int check again | mlazos | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Made an oss change to a diff train diff
@diff-train-skip-merge
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
3,032,861,344 | [IR] Input Adapter refactor prototype (#152459) | felixsu2006 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: export"
] | 9 | CONTRIBUTOR | Summary:
1. Adding `input` field to `_adapt_flat_args` function
2. In `process_forward_inputs`, `reorder_kwargs` will now do nothing if no kwargs are provided (previously would error)
3. Pass `args` as input to `_adapt_flat_args`
These changes are made to update the InputAdapter
see more context in D73811508
Test P... | true |
3,032,857,106 | Added documentation for nonzero_static function (#152347) | sanjai-11 | closed | [
"triaged",
"open source",
"topic: not user facing"
] | 7 | NONE | Fixes #152347
This PR adds documentation for the nonzero_static function in PyTorch. | true |
3,032,848,140 | Allow decomposeK to fuse | PaulZhang12 | open | [
"module: inductor",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152573
* #150654
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov | true |
3,032,830,059 | [Dynamo] Fix typing in graph_deduplication.py | mlazos | open | [
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152589
* __->__ #152572
* #152570
* #152506
* #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,032,815,194 | [export] Ignore None buffers | angelayi | closed | [
"Merged",
"ciflow/trunk",
"release notes: export"
] | 3 | CONTRIBUTOR | Fixes https://github.com/pytorch/pytorch/issues/152467 | true |
3,032,802,827 | [Hierarchical Compile] Replace tracing alias and mutation check with dynamo impl | mlazos | open | [
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152589
* #152572
* __->__ #152570
* #152506
* #152410
* #152505
* #152389
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,032,723,115 | [ROCm] Update spack includes | jithunnair-amd | open | [
"module: rocm",
"triaged",
"open source",
"release notes: rocm",
"ciflow/rocm"
] | 6 | COLLABORATOR | * Cleans up code in `caffe2/CMakeLists.txt` to remove individual ROCm library include paths and use `ROCM_INCLUDE_DIRS` CMake var instead
* `ROCM_INCLUDE_DIRS` CMake var is set in `cmake/public/LoadHIP.cmake` by adding all the ROCm packages that PyTorch depends on
* `rocm_version.h` is provided by the `rocm-core` pac... | true |
3,032,717,939 | Allow Metal Binary iterator to take CPUScalar operands | skotapati | closed | [
"open source",
"release notes: mps",
"ciflow/mps"
] | 2 | COLLABORATOR | Currently the metal binary kernel can only take MPSTensors and errors out if a scalar value is passed in. The following change allows the binary kernel to work with cpu scalar inputs, without the need to initialize a new MPS tensor.
This is necessary for enabling binary logical/comparison ops via the metal kernel, ... | true |
3,032,702,917 | 🐛 Add `ciflow/pull`🦋 | malfet | closed | [
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | To make it easier to work around GitHub reliability issues, when it sometimes fails to schedule `on: pull_request` workflows
See https://github.com/pytorch/pytorch/issues/151322
But alas, it does not fix the problem at hand... | true |
3,032,688,256 | [Benchmark] High compilation time variance on benchmark dashboards | huydhn | open | [
"module: ci",
"triaged",
"module: infra"
] | 0 | CONTRIBUTOR | The issue is reported by the compiler team (@zou3519, @oulgen): the compilation time seems to have higher variance across runs. This happens on both the PT2 (no compiler cache) and CacheBench dashboards, which seems to indicate an underlying problem with the runner.
 (oldest at bottom):
* __->__ #152565
>=0 is practically correct because we do model the runtime of some ops as 0.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @... | true |
3,032,678,721 | `torch.randint` can't handle large `high` argument (and in general high range of `torch.uint64`) | vadimkantorov | closed | [
"module: docs",
"triaged",
"actionable",
"module: python frontend"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
[Docs for `manual_seed`](https://pytorch.org/docs/stable/generated/torch.Generator.html#torch.Generator.manual_seed) say `The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff].`. So trying to generate a seed:
`python -c 'import torch; print(... | true |
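The seed bounds quoted in the row above span both the int64 and uint64 interpretations of a 64-bit word, which is why large `high` values are awkward for a single integer type. A hedged pure-Python sketch of that arithmetic (no PyTorch required; the constants come from the quoted docs, the interpretation is our own):

```python
# Documented manual_seed bounds, copied from the issue body above.
INT64_MIN = -0x8000_0000_0000_0000   # lowest accepted seed
UINT64_MAX = 0xFFFF_FFFF_FFFF_FFFF   # highest accepted seed

# Number of distinct integers the documented range admits:
accepted = UINT64_MAX - INT64_MIN + 1
print(accepted == 2**64 + 2**63)  # True: wider than any single 64-bit type

# A negative seed shares its 64-bit pattern with an unsigned reinterpretation:
print(((-1) & UINT64_MAX) == UINT64_MAX)  # True
```

So the accepted range is 1.5x the size of either int64 or uint64 alone, because signed and unsigned spellings of the same bit pattern are both accepted.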
3,032,645,173 | [c10d][fr] Make FR vendor neutral so that other backends can use it | fduwjj | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152614
* __->__ #152563
* #152585
Current FR code is built with `USE_C10D_NCCL` we should remove it to make it generic. And we keep existing API used by NCCL so that we can have some bc compatibility because lots of use cases are around FR... | true |
3,032,644,811 | xpu: rely on sycl/sycl.hpp to include bfloat16.hpp | dvrogozh | open | [
"triaged",
"open source",
"ciflow/trunk",
"ciflow/xpu",
"release notes: xpu",
"module: xpu"
] | 18 | CONTRIBUTOR | Fixes: https://github.com/intel/torch-xpu-ops/issues/1503
The `sycl/ext/oneapi/bfloat16.hpp` header file is a DPC++ compiler internal header. It is not documented for usage (see the extension specification linked below) and is not guaranteed to exist. Instead, the documented usage of the extension suggests relying on including `sycl... | true |
3,032,596,817 | DISABLED test_graph_partition_reorder_cpu_and_gpu_interleave (__main__.CudaGraphTreeTests) | pytorch-bot[bot] | open | [
"module: rocm",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2 | NONE | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_graph_partition_reorder_cpu_and_gpu_interleave&suite=CudaGraphTreeTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41432152468).
Over ... | true |
3,032,596,815 | DISABLED test_pending_fusion_pro_and_epi (__main__.TestPrologueFusion) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2 | NONE | Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_pending_fusion_pro_and_epi&suite=TestPrologueFusion&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41442212447).
Over the past 3... | true |
3,032,594,687 | DISABLED test_comprehensive_signal_windows_hamming_cuda_float32 (__main__.TestInductorOpInfoCUDA) | pytorch-bot[bot] | open | [
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 3 | NONE | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_signal_windows_hamming_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/414443194... | true |
3,032,594,660 | DISABLED test_comprehensive_amin_cuda_float64 (__main__.TestInductorOpInfoCUDA) | pytorch-bot[bot] | open | [
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2 | NONE | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_amin_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41442776015).
Over the past 3... | true |
3,032,496,059 | [BE] Update numba versions | malfet | open | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 18 | CONTRIBUTOR | Let's see if PyTorch is compatible with the latest numba
`test_unary_funcs` are no longer failing due to https://github.com/pytorch/pytorch/pull/148024 | true |
3,032,481,926 | [ONNX] Delete JitTraceConvertStrategy | titaiwangms | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx"
] | 10 | COLLABORATOR | Fixes #151703
| true |
3,032,258,804 | PGO does not work on jobs for frameworks that copy code to different dirs at different attempts. | laithsakka | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 0 | CONTRIBUTOR |
**internal Xrefs:**
```
attempt 0:[ https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/fire-swapna942-](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/fire-swapna942-f725974742/attempt_0/version_0/rank_0/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)[f... | true |
3,032,237,381 | Implemented `Size.__radd__` | randolf-scholz | open | [
"triaged",
"open source",
"release notes: python_frontend",
"module: python frontend"
] | 4 | CONTRIBUTOR | Fixes #144334
Builds on top of #146834 by @khushi-411 (I reused the `THPSize_add` method as-is)
The needed trick was to add `PyNumberMethods`, because the Number Protocol appears to be responsible for `__radd__` (see https://stackoverflow.com/q/18794169)
cc @albanD | true |
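The `PyNumberMethods` trick in the row above mirrors how reflected addition works at the Python level: `+` consults the right operand's `__radd__` when the left operand's type cannot handle the addition through the number protocol. A hedged pure-Python sketch (`SizeLike` is a hypothetical stand-in; the actual PR implements this in C for `torch.Size`):

```python
# SizeLike is a made-up stand-in for torch.Size. The real fix lives in C via
# PyNumberMethods, but the dispatch rules below are the same ones it relies on.
class SizeLike(tuple):
    def __add__(self, other):
        if isinstance(other, tuple):
            return SizeLike(tuple(self) + tuple(other))
        return NotImplemented

    def __radd__(self, other):
        # Called for `plain_tuple + SizeLike(...)`: tuple fills no nb_add slot,
        # so Python consults the right-hand operand's reflected method.
        if isinstance(other, tuple):
            return SizeLike(tuple(other) + tuple(self))
        return NotImplemented

result = (1, 2) + SizeLike((3, 4))
print(type(result).__name__, result)  # SizeLike (1, 2, 3, 4)
```

Without `__radd__`, `plain_tuple + SizeLike(...)` would fall back to `tuple`'s sequence concatenation and return a plain `tuple`, losing the size type.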
3,032,138,329 | [BE] Replace func_name with __func__ | malfet | closed | [
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Summary: Not sure why one needs to preserve the name by hand
Test Plan: CI
Differential Revision: D73941209
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 | true |
3,032,133,746 | Clean up conda usage in benchmark scripts | huydhn | closed | [
"Merged",
"topic: not user facing",
"test-config/default",
"module: dynamo",
"ciflow/inductor",
"suppress-bc-linter"
] | 3 | CONTRIBUTOR | Fixes https://github.com/pytorch/pytorch/issues/152123.
* Switch `benchmarks/dynamo/Makefile` to use uv. Note that these scripts are only used locally, so it's kind of ok to keep conda here IMO. But switching to uv is probably nicer to most folks.
* Delete some files that are outdated and not used anymore
cc @... | true |
3,032,096,260 | removing short-perf-test-cpu.sh and short-perf-test-gpu.sh | jeanschmidt | closed | [
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"ciflow/nightly",
"ciflow/unstable",
"ciflow/slow"
] | 4 | CONTRIBUTOR | When working on #148342 I realised that nothing references those files. So it seems they are stale and can be safely removed.
| true |
3,032,085,521 | MPS varying seq len SDPA memory leak | SalmanMohammadi | open | [
"module: memory usage",
"triaged",
"module: mps",
"module: sdpa"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
After trying the fix from #152371 (thanks so much for landing this so quickly), I was still seeing memory leaks. I found another issue where memory usage on MPS explodes when the sequence length sufficiently varies for SDPA - this does not occur with CUDA.
![Image](https://github.com/us) ... | true |
... nodes don't get re-traced
2. If a node returns a tensor with a different `dtype`, it's still considered the same tensor a lot of the time.
It's actually re... | true |
3,032,064,980 | [invoke_subgraph] Unpacked operands | anijain2305 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152494
* #152490
* #152383
* #152384
* #152581
* __->__ #152547
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauha... | true |
3,032,059,090 | Remove Conda Instructions | AlannaBurke | open | [
"module: docs",
"release notes: releng"
] | 1 | CONTRIBUTOR | Fixes #149551
Needs input on some of the instructions.
cc @svekars @sekyondaMeta | true |
3,032,051,955 | ci: Switch benchmark dependency to use pip | seemethere | open | [
"topic: not user facing"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152843
* __->__ #152545
As an effort to reduce our dependency on conda, we should use pip here.
This also pins all the dependencies based on versions that I took today
(04/30/2024); realistically this should probably be in a
requirements.txt ... | true |
3,032,048,424 | Migrate perf_test/test_[gc]pu_speed_mnist.sh from conda to venv | jeanschmidt | closed | [
"topic: not user facing"
] | 1 | CONTRIBUTOR | Replace conda with venv on:
* `.ci/pytorch/perf_test/test_cpu_speed_mnist.sh`
* `.ci/pytorch/perf_test/test_gpu_speed_mnist.sh`
Fixes #148342
| true |
3,032,027,010 | strict multidimensional slicing | avikchaudhuri | open | [
"fb-exported",
"module: dynamo",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | Differential Revision: D73937420
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
3,032,026,781 | [AOTI][CPU] Introduce config.cpp.use_decompose_tanh | hl475 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Summary: Previously D70489427 changed tanh impl to `.tanh()`, and this is causing some meta internal workload perf regression. This diff will introduce a config so we can set it based on need.
Differential Revision: D73909371
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @b... | true |
3,032,008,373 | Add parameters for monitor | yangw-dev | closed | [
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Add log interval and log-data-collect interval to all test yml
Add upload step for all test yml files
next step:
enable perf test with utilization
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | true |
3,031,973,325 | [CUDA] Reset peak memory stats before running `test_set_per_process_memory_fraction` | eqy | closed | [
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 13 | COLLABORATOR | Otherwise previous tests can cause `application = int(total_memory * 0.499) - torch.cuda.max_memory_reserved()` to go negative
Hopefully abates current flakiness (see also https://github.com/pytorch/pytorch/issues/135115#:~:text=TestCuda.test_set_per_process_memory_fraction)
cc @ptrblck @msaroufim @jerryzh168 | true |
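The failure mode described in the row above is plain arithmetic: if an earlier test left a high reserved-memory peak, the expression quoted in the PR body goes negative. A sketch with hypothetical numbers (the `0.499` factor is from the quoted expression; the sizes are made up):

```python
GiB = 1024**3
total_memory = 16 * GiB        # hypothetical device capacity
stale_peak_reserved = 9 * GiB  # hypothetical peak left behind by earlier tests

# Mirrors the expression quoted in the PR description:
application = int(total_memory * 0.499) - stale_peak_reserved
print(application < 0)  # True: the stale peak makes the budget negative

# After resetting peak stats (as the PR does), the same expression is sane:
reset_peak_reserved = 0
application = int(total_memory * 0.499) - reset_peak_reserved
print(application > 0)  # True
```

This is why the test is flaky only when it runs after other allocation-heavy tests: the negative budget depends entirely on what the previous tests reserved.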