Dataset schema (column: type, observed range):
- id: int64, 2.74B to 3.05B
- title: string, 1 to 255 chars
- user: string, 2 to 26 chars
- state: string, 2 classes
- labels: list, 0 to 24 items
- comments: int64, 0 to 206
- author_association: string, 4 classes
- body: string, 7 to 62.5k chars
- is_title: bool, 1 class
3,044,797,354
[Dynamo] Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/misc.py` [2/2]
shink
open
[ "triaged", "open source", "topic: not user facing", "module: dynamo" ]
6
CONTRIBUTOR
Part of #147913 Follow up: #152274 Replace `unimplemented` with `unimplemented_v2` in `torch/_dynamo/variables/misc.py` cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,044,749,011
Add CUDA support for Adagrad(fused=True)
MeetThePatel
open
[ "triaged", "open source", "release notes: optim" ]
4
CONTRIBUTOR
This PR adds CUDA support for Adagrad(fused=True) optimizer, along with 3 minor changes: - Add a TensorLR variant for CPU Adagrad(fused=True). - Fix error message in `test/test_optim.py`, where the incorrect optimizer name was being printed. - Fix an error message in FusedSGD, where it was giving incorrect informati...
true
3,044,663,148
Allow zero sized dimensions in padding operations
sladyn98
open
[ "open source", "topic: not user facing" ]
4
NONE
Previously, the padding implementation in PadNd.cpp required all output dimensions to be strictly positive (> 0), which caused errors when padding tensors with zero-sized dimensions even when the padding for that dimension was also zero. This change relaxes the constraint to allow non-negative (>= 0) output dimensio...
true
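The row above relaxes PadNd.cpp to allow zero-sized output dimensions. As a hedged analogy (not the PR's code), NumPy's `np.pad` already accepts zero-sized axes as long as the padding on that axis is zero, which is the semantics the change is after:

```python
import numpy as np

# A tensor with a zero-sized dimension; pad only the non-empty axis.
x = np.zeros((0, 3), dtype=np.float32)

# Zero padding on the empty axis is a no-op; padding the size-3 axis by
# (1, 1) grows it to 5 while the zero-sized axis stays at 0.
y = np.pad(x, ((0, 0), (1, 1)))
assert y.shape == (0, 5)
```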
3,044,623,989
fix test
yf225
closed
[ "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153036 * #152775 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,044,617,968
Add Split Softmax
AMindToThink
open
[ "module: nn", "triaged", "needs research" ]
2
NONE
Transformer models often forget their system prompts when processing long text due to the long distance between the source of the information and its destination. The Split Softmax function is a modification of softmax for use in attention that encourages the model to keep attending to the system prompt. It was ...
true
3,044,611,322
WIP: Fix caching when output has unbacked
aorenste
open
[ "release notes: fx", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153034
true
3,044,594,141
Misalignment with different shapes in F.linear with bf16 dtype
likelyzhao
open
[ "needs reproduction", "triaged", "module: bfloat16", "module: linear algebra", "module: padding" ]
1
NONE
### 🐛 Describe the bug For the F.linear function, when constructing matrix multiplications of varying dimensions via zero-padding, output consistency cannot be guaranteed under bf16 precision (outputs are consistent for some dimensions but inconsistent for others). ```python import torch import torch.nn.functional ...
true
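The F.linear report above boils down to reduced-precision accumulation: in exact arithmetic, zero-padding a matmul's reduction dimension cannot change the result, but in low precision it changes accumulation order and lets small addends be absorbed. A minimal sketch of the absorption effect, using float32 since NumPy has no bfloat16 (bf16 has even fewer mantissa bits, so the effect is stronger there):

```python
import numpy as np

# In float32 the spacing (ulp) between representable values near 1e8 is 8,
# so adding 1.0 is below half an ulp and is rounded away entirely.
a = np.float32(1e8)
b = np.float32(1.0)
assert a + b == a  # the small addend vanishes

# Padding a reduction with zeros regroups partial sums, so absorbed terms
# can differ between the padded and unpadded computations.
```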
3,044,591,634
DISABLED test_hook_with_closure (__main__.HooksTests)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
2
NONE
Platforms: asan, linux, mac, macos, rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_hook_with_closure&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41763750570). Over the past 3 h...
true
3,044,591,578
DISABLED test_comprehensive_svd_cuda_float32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_svd_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41765230283). Over the past...
true
3,044,591,521
DISABLED test_comprehensive_amin_cuda_float32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
4
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_amin_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41762952404). Over the pas...
true
3,044,591,468
DISABLED test_comprehensive_asinh_cuda_float32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
4
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_asinh_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41765095197). Over the pa...
true
3,044,587,182
[Typing] Improve device typing for `torch.set_default_device()`
shink
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
12
CONTRIBUTOR
Part of: #152952 Here is the definition of `torch.types.Device`: https://github.com/pytorch/pytorch/blob/ab997d9ff584e8623de146b6eb9c9074081b045b/torch/types.py#L74 So `_Optional[_Union["torch.device", str, builtins.int]]` is equivalent to it. cc: @Skylion007
true
3,044,564,077
[Typing] Apply `torch.types.Device` in `torch/cuda/memory.py`
shink
open
[ "triaged", "open source", "topic: not user facing" ]
5
CONTRIBUTOR
Part of: #152952 Here is the definition of `torch.types.Device`: https://github.com/pytorch/pytorch/blob/ab997d9ff584e8623de146b6eb9c9074081b045b/torch/types.py#L74 It contains `int`, so the `int` in `Union[Device, int]` is redundant. cc: @Skylion007
true
3,044,559,891
remove register_fake
yf225
closed
[ "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153026 * #152775 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,044,513,586
Multiple CUDA graphs utilizing multiple CUDA GPUs encounter illegal memory access during replay
Atream
open
[ "triaged", "module: cuda graphs" ]
3
NONE
### 🐛 Describe the bug When capturing multiple CUDA graphs that use multiple CUDA GPUs, only the buffers related to the last captured CUDA graph are retained. As a result, only the last captured CUDA graph can be replayed successfully, while replaying other CUDA graphs leads to illegal memory access. Testing revealed...
true
3,044,502,694
[RFC] Enable XPU+FlexAttention on Intel GPU
liangan1
open
[ "triaged", "enhancement", "oncall: pt2", "module: higher order operators", "module: pt2-dispatcher", "module: xpu", "module: flex attention" ]
1
NONE
### 🚀 The feature, motivation and pitch ## Motivation The Attention has been the critical performance bottleneck in the current LLM models, and FlexAttention is a good choice to cover the broad variants in the transformers series models. With FlexAttention, it is easy for us to enable the paged attention and fused S...
true
3,044,480,415
Fix Codegen.cmake warning
cyyever
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
6
COLLABORATOR
Fix ``` CMake Warning (dev) in cmake/Codegen.cmake: A logical block opening on the line /var/lib/jenkins/workspace/cmake/Codegen.cmake:393 (if) closes on the line /var/lib/jenkins/workspace/cmake/Codegen.cmake:401 (endif) with mis-matching arguments. ``` by removing the condition in `endi...
true
3,044,472,261
XPU inference output abnormal with device 'XPU:1'
maxwell-zhengxu
open
[ "high priority", "triage review", "triaged", "module: xpu" ]
4
NONE
### 🐛 Describe the bug In an otherwise working environment with two Intel GPUs, the inference output is always correct for device 'xpu:0', while the output is randomly abnormal for device 'xpu:1' ```python import torch import torchvision.models as models torch.manual_seed(0) model = models.resnet50(weights="ResNet50_Weights.D...
true
3,044,465,268
Adding a generic attribute for easier checkpoint discrepancy debugging.
githubsgi
open
[ "triaged", "open source" ]
5
CONTRIBUTOR
Adding a generic attribute called layer_id for the object that recompute_fn is a method of. This ties checkpointing saved-versus-recomputed discrepancies to a layer in the model. topic: not user facing
true
3,044,464,840
Add a project section to pyproject.toml, making uv sync work
ezyang
open
[ "topic: not user facing" ]
7
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153020 With this change, I can now run `uv sync -v` and get all dependencies I need and then trigger build of PyTorch. (The `-v` is good because the build takes a long time and uv hides progress by default.) Signed-off-by: Edwa...
true
3,044,455,802
[RFC][API-Unstable]Enable A16W4 on XPU Device
liangan1
open
[ "triaged", "module: xpu" ]
1
NONE
### 🚀 The feature, motivation and pitch ## Motivation As you know, the generation task with LLM is autoregressive and the GEMM computation of the decoding stage for the next token is memory bound. The weight only quantization with A16W4 has been widely adopted by the LLM inference, especially for the client GPU with...
true
3,044,434,059
DISABLED test_comprehensive_scatter_xpu_bool (__main__.TestInductorOpInfoXPU)
chuanqi129
closed
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: linux This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_bool'%2C%20'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A...
true
3,044,432,643
DISABLED test_comprehensive_scatter_xpu_int64 (__main__.TestInductorOpInfoXPU)
chuanqi129
closed
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: linux This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_bool'%2C%20'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A...
true
3,044,429,040
inconsistent grads between two types of `allgather`s
gameofdimension
open
[ "oncall: distributed", "module: autograd" ]
0
NONE
### 🐛 Describe the bug I've observed a gradient discrepancy between two PyTorch all-gather implementations: one using the DTensor API, and the other using all_gather_tensor_autograd. My goal is to implement a correct autograd-compatible all-gather operation, but I'm unsure which implementation (if either) produces th...
true
3,044,413,527
c10d/gloo: add ibverbs backend
d4l3k
open
[ "oncall: distributed", "fb-exported", "ciflow/trunk", "release notes: distributed (c10d)" ]
5
MEMBER
Summary: X-link: https://github.com/pytorch/gloo/pull/437 This provides a new "UnboundBuffer" implementation for Gloo ibverbs backend so it can be used with PyTorch. This currently is passing basic tests such as `reduce_test` and `send_recv_test` but there are a number of failures. Putting this up for review so the f...
true
3,044,402,645
Operations on different precision tensors in CPU lead to different outputs
Redempt1onzzZZ
closed
[ "module: cpu", "triaged", "module: edge cases" ]
3
NONE
### 🐛 Describe the bug A similar finding to [https://github.com/pytorch/pytorch/issues/152294](#152294): the bug also exists in "torch.addcdiv". It seems that when using only a number (65536) as input, it will be transformed to inf, whereas when using an array ([65536]), the calculation runs normally. ``` import torch inp...
true
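The 65536-becomes-inf behavior in the row above is consistent with the scalar being routed through half precision, where 65536 is not representable. A hedged sketch of that boundary using NumPy's float16 (an assumption about the mechanism, not the torch.addcdiv internals):

```python
import numpy as np

# float16's largest finite value is 65504; 65536 lies past the rounding
# boundary (65520) and overflows to infinity on conversion.
assert np.finfo(np.float16).max == 65504.0
assert np.isinf(np.float16(65536.0))

# The same value stored in float32 is exact and finite, which would explain
# why the array path behaves normally if it keeps a wider dtype.
assert np.float32(65536.0) == 65536.0
```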
3,044,401,500
[Lint] Add install command for GHA step
malfet
closed
[ "Merged", "topic: not user facing" ]
5
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152719 * __->__ #153013 Otherwise, it fails to run the script
true
3,044,401,411
[Testing] Add logic for running MPS tests
malfet
closed
[ "Merged", "topic: not user facing" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #153013 * #152719 * __->__ #153012 Prep change for getting rid of `_mac-test-mps.yml` A complete no-op for now, but will be used by PR above the stack, but they should be landed few days apart to avoid forcing lots of people to rebase thei...
true
3,044,392,004
[WIP][dynamic shapes] unbacked safer cat, repeat
pianpwk
open
[ "module: dynamo", "ciflow/inductor" ]
2
CONTRIBUTOR
With https://github.com/pytorch/pytorch/pull/150483, for https://github.com/pytorch/pytorch/issues/152473 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,044,379,231
Detect NVSHMEM location
kwen2501
closed
[ "Merged", "ciflow/trunk", "release notes: distributed (c10d)" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153010 ### Changes - Detect NVSHMEM install location via `sysconfig.get_path("purelib")`, which typically resolves to `<conda_env>/lib/python/site-packages`, and NVSHMEM include and lib live under `nvidia/nvshmem` - Added link dir...
true
3,044,337,298
DISABLED test_comprehensive_scatter_xpu_bool (__main__.TestInductorOpInfoXPU)
etaf
open
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.> This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comp...
true
3,044,335,974
DISABLED test_comprehensive_scatter_xpu_int64 (__main__.TestInductorOpInfoXPU)
etaf
open
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.> This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comp...
true
3,044,318,690
Remove redundant type aliases of _device_t for torch.Device (#152952)
sanjai-11
open
[ "oncall: distributed", "module: cpu", "triaged", "module: mkldnn", "open source", "module: amp (automated mixed precision)", "release notes: quantization", "topic: not user facing", "module: inductor", "module: dynamo", "release notes: distributed (checkpoint)", "suppress-bc-linter", "module...
3
NONE
Fixes #152952 This PR removes redundant type aliases for `_device_t` and replaces them with `torch.types.Device` where applicable, to make the typing system more consistent across PyTorch. cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei...
true
3,044,295,252
[cutlass backend] Use src code to generate cutlass gemm name
henrylhtsang
open
[ "fb-exported", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
9
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153006 * #152580 Differential Revision: [D74288965](https://our.internmc.facebook.com/intern/diff/D74288965/) This shaves off 40s for at least small cases, since we don't have to recompile the kernel again. cc @voznesenskym...
true
3,044,256,520
[autograd][docs] Add more details on why save_for_backward is important in extending autograd note
soulitzer
open
[ "release notes: autograd" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #153094 * __->__ #153005 cc @stas00
true
3,044,255,324
[WIP][Inductor-CPU] int8 WoQ concat linear
sanchitintel
open
[ "open source", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
2
COLLABORATOR
WIP - [ ] Add UT corresponding to torchao pattern - [ ] Add perf data - [ ] Refactor cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,044,226,030
[cutlass backend] Skip cuda lib path if it is torch/lib
henrylhtsang
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
7
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153003 Differential Revision: [D74284808](https://our.internmc.facebook.com/intern/diff/D74284808/) This is a bit risky for cutlass backend, so decided to separate it out. Tested offline. cc @voznesenskym @penguinwu @EikanWa...
true
3,044,220,979
[CI] Use sccache installed in docker image in xla build
clee2000
open
[ "topic: not user facing", "ciflow/pull" ]
2
CONTRIBUTOR
The edited comment should have the info. Sccache stopped working on xla at some point near Dec 17, 2023. I am not sure which commit caused it. I think it was having trouble writing to the cache. Either way, there is an sccache already installed on the docker image, so we should use that instead of a binary from s3...
true
3,044,212,130
[cutlass backend][test] re-enable test_cuda_compile_command for fbcode
henrylhtsang
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153001 Differential Revision: [D74284047](https://our.internmc.facebook.com/intern/diff/D74284047/) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chen...
true
3,044,159,796
[export] Unflatten None
angelayi
open
[ "ciflow/trunk", "release notes: export" ]
3
CONTRIBUTOR
Fixes #ISSUE_NUMBER
true
3,044,149,425
`lintrunner init` fails
malfet
open
[ "module: lint", "triaged", "module: devx" ]
2
CONTRIBUTOR
### 🐛 Describe the bug Attempting to run `lintrunner init` fails ``` % lintrunner init --take FLAKE8 Warning: Could not find a lintrunner config at: '.lintrunner.private.toml'. Continuing without using configuration file. [2025-05-06T22:17:48Z INFO lintrunner::linter] Initializing linter: 'FLAKE8' [2025-05-06T22:17:4...
true
3,044,141,928
[Dynamo][trace_rules] Add torch.distributed.fb.simple_fsdp to LEGACY_MOD_INLINELIST
yf225
closed
[ "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
CONTRIBUTOR
Functions / modules in `torch.distributed.fb.simple_fsdp` are guaranteed to be traceable, and inlining into them is prerequisite for having both pre-forward / post-forward hooks to be in the same graph as forward for SimpleFSDP modules. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __...
true
3,044,135,504
[Testing] Add copysign from scalar regression test
malfet
closed
[ "Merged", "release notes: python_frontend", "ciflow/mps" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152997 But instead of adding it just for MPS backend, add it to OpInfo Fixes https://github.com/pytorch/pytorch/issues/152582
true
3,044,082,824
DISABLED test_comprehensive_rsub_cuda_float64 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_rsub_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41741925240). Over the pas...
true
3,044,071,880
[inductor] dtype promotion error in cat decomp
pianpwk
open
[ "ciflow/trunk", "module: inductor", "module: dynamo", "ciflow/inductor", "release notes: inductor", "merging" ]
4
CONTRIBUTOR
cloning single tensor wasn't following dtype promotion rules for SAM model: https://github.com/pytorch/pytorch/issues/152606 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,044,034,530
[dynamo] Actually support functools.lru_cache
williamwen42
open
[ "triaged", "oncall: pt2", "module: dynamo", "dynamo-functools" ]
0
MEMBER
Followup to https://github.com/pytorch/pytorch/issues/146598 Currently, when Dynamo traces a `lru_cache`d function, we simply trace the underlying function. This is not sound when the underlying function depends on state outside that function (e.g. globals, cells). Fully supporting the cache lookup involved in `lru_...
true
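The soundness gap described in the row above is easy to demonstrate in plain Python: a real `lru_cache` returns the memoized value, while naively re-tracing the underlying function would recompute with whatever external state holds at trace time. A minimal sketch:

```python
from functools import lru_cache

g = 1

@lru_cache(maxsize=None)
def f(x):
    return x + g  # depends on state outside the function (a global)

assert f(1) == 2  # first call computes and caches 1 + g with g == 1
g = 10
# A faithful lru_cache serves the cached value; simply tracing the wrapped
# function body instead would recompute and get 11, diverging from eager.
assert f(1) == 2
```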
3,044,025,410
[inductor] Fix ModularIndexing assumptions
isuruf
open
[ "module: cpu", "open source", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "merging" ]
4
COLLABORATOR
Fixes https://github.com/pytorch/pytorch/issues/151198. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152993 Since the result of ModularIndexing can be zero due to the modulo operation, we should not make any assumption about ModularIndexing being positive cc @jgong5 ...
true
3,044,012,443
conv2d with int8 on CUDA: GET was unable to find an engine to execute this computation
c-f-h
open
[ "module: cuda", "module: convolution", "triaged" ]
2
NONE
### 🐛 Describe the bug The following script works fine if I switch to CPU, or change the tensor dtypes to float32. Otherwise, see the error below. ```py import torch device = torch.device("cuda") # works fine with "cpu" print(f"Using device: {device}") # works fine if both are float32 input = torch.randin...
true
3,043,985,559
[FrozenSet] Fixes for FrozenSet
guilhermeleobas
open
[ "open source", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152991 * #152990 * #152908 * #152907 * #152989 * #152906 * #152905 * #152903 * #152902 * #152901 * #152904 * #152988 * #152987 * #150792 * #152900 * #153070 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSupe...
true
3,043,985,422
[Set] Raise TypeError if set is called with the wrong number of arguments
guilhermeleobas
open
[ "open source", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152991 * __->__ #152990 * #152908 * #152907 * #152989 * #152906 * #152905 * #152903 * #152902 * #152901 * #152904 * #152988 * #152987 * #150792 * #152900 * #153070 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSupe...
true
3,043,985,252
[Set] Update `set.union` and `set.update` to support *args
guilhermeleobas
open
[ "open source", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152991 * #152990 * #152908 * #152907 * __->__ #152989 * #152906 * #152905 * #152903 * #152902 * #152901 * #152904 * #152988 * #152987 * #150792 * #152900 * #153070 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSupe...
true
3,043,984,885
[Set] Raise `TypeError` if argument is unhashable
guilhermeleobas
open
[ "open source", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152991 * #152990 * #152908 * #152907 * #152989 * #152906 * #152905 * #152903 * #152902 * #152901 * #152904 * __->__ #152988 * #152987 * #150792 * #152900 * #153070 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSupe...
true
3,043,984,736
[Set] Handle exception in ConstantVariable operation
guilhermeleobas
open
[ "open source", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152991 * #152990 * #152908 * #152907 * #152989 * #152906 * #152905 * #152903 * #152902 * #152901 * #152904 * #152988 * __->__ #152987 * #150792 * #152900 * #153070 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSupe...
true
3,043,976,311
[WIP] Add XPU support for FlightRecorder
frost-intel
open
[ "oncall: distributed", "open source", "ciflow/trunk", "release notes: distributed (c10d)", "topic: not user facing" ]
2
COLLABORATOR
This is the first part of bringing XPU/XCCL support for FlightRecorder. `AcceleratorEvent` is a generic interface for CUDAEvent and XPUEvent, which is used in FlightRecorder to work with both XCCL and NCCL. Since the actual instantiation of the FlightRecorder and DebugInfoWriter objects happens in ProcessGroupNCC...
true
3,043,971,158
`torch.load` can't deserialize `datetime` objects, even with the appropriate `safe_globals`
gtebbutt
open
[ "module: serialization", "triaged" ]
0
NONE
### 🐛 Describe the bug Spent a while chasing this one down on the assumption that a custom class from my code was being inadvertently saved, especially with the earlier message requiring `getattr` to be added to `safe_globals`, but it turns out it'll happen on any output containing a `datetime` object: ```python im...
true
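A hedged sketch of why `datetime` is awkward for allowlist-based unpicklers like the one behind `torch.load` (this shows the stdlib pickle protocol only, not torch's checker): the object reconstructs by *calling* the `datetime.datetime` class with an opaque bytes payload, so allowlisting the type for attribute access is not enough; the call itself must be permitted.

```python
import datetime
import pickle

d = datetime.datetime(2024, 1, 1)

# __reduce__ reveals the reconstruction recipe: call datetime.datetime
# with a packed bytes state, not a plain attribute dict.
cls, args = d.__reduce__()[:2]
assert cls is datetime.datetime
assert isinstance(args[0], bytes)

# Round-tripping through stdlib pickle works because it allows the call.
assert pickle.loads(pickle.dumps(d)) == d
```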
3,043,958,275
[hop_schema] support gen_schema for invoke_subgraph
ydwu4
open
[ "topic: not user facing" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152984 * #152974 * #151067
true
3,043,956,173
compile_fx: make a compile event that corresponds to the fx_compile waitcounter
c00w
open
[ "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152983 This is a pretty minor change, but by having exact correspondence, we can easily confirm data differences between perfetto and wait counters cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaoz...
true
3,043,912,132
[torch][ao] Properly strip tracking stats in _fold_conv_bn_qat for 1D
JakeStevens
open
[ "fb-exported", "release notes: quantization", "release notes: AO frontend" ]
5
NONE
Summary: _fold_conv_bn_qat has logic to remove the tracking stats. Currently, this check includes only torch.nn.modules.batchnorm.BatchNorm2d. As a result, the tracking stats are not properly removed when 1D is used. This diff fixes that. Test Plan: Run N7113483 without this fix. {F1977726982...
true
3,043,888,333
Catch TypeError from ValueRanges
jansel
open
[ "module: cpu", "fb-exported", "ciflow/trunk", "release notes: inductor" ]
3
CONTRIBUTOR
Summary: This is a possible workaround to https://fb.workplace.com/groups/1075192433118967/permalink/675836685333300/ Test Plan: Ask poster to confirm fix Differential Revision: D74268733 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
true
3,043,887,194
Fix `'TensorBox' object has no attribute 'is_input_buffer'`
jansel
open
[ "fb-exported", "ciflow/trunk", "module: inductor", "ciflow/inductor", "release notes: inductor" ]
4
CONTRIBUTOR
Summary: Fix for https://fb.workplace.com/groups/1075192433118967/permalink/1664491270855744/ Test Plan: Used reproducer from D74262030 Differential Revision: D74270090 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kaden...
true
3,043,823,259
FPE when using `torch.lcm_` with int32 tensor and int16 scalar
SilentTester73
open
[ "module: crash", "module: cpu", "module: error checking", "triaged", "module: edge cases" ]
3
NONE
### 🐛 Describe the bug ### Description When using `torch.lcm_` in-place operation between a large int32 tensor and an int16 scalar, the program crashes with a floating point exception. The operation works fine with smaller tensors, but fails with a specific large tensor containing various integer values. ### Steps ...
true
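One plausible trigger for the `torch.lcm_` crash above is that the mathematical lcm of large int32 operands overflows the fixed-width dtype, an edge case C integer kernels must guard explicitly (division edge cases in C can raise SIGFPE). Python's arbitrary-precision `math.lcm` makes the overflow visible; this is an illustrative assumption about the failure mode, not a trace of the kernel:

```python
import math

INT32_MAX = 2**31 - 1

# 2**31 - 1 is odd, so its lcm with 2 is their full product,
# which cannot be represented in a 32-bit signed integer.
result = math.lcm(INT32_MAX, 2)
assert result == 2 * INT32_MAX
assert result > INT32_MAX  # does not fit in int32
```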
3,043,782,002
[Pytorch] Add `torch.cuda.streams.Event` to save torch functions list
dongji-gao
open
[ "fb-exported" ]
4
CONTRIBUTOR
Summary: TSIA Test Plan: WIP Differential Revision: D74266940
true
3,043,769,613
[MegaCache] Make MegaCache generic to allow external plugins registration
tbohutyn
open
[ "triaged", "open source", "topic: not user facing", "module: inductor", "module: dynamo" ]
4
CONTRIBUTOR
Implements #152976 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @oulgen
true
3,043,763,742
Refactor MegaCache to make it generic
tbohutyn
open
[ "oncall: pt2" ]
0
CONTRIBUTOR
### 🚀 The feature, motivation and pitch Refactoring MegaCache to make it generic would allow for external plugins' caches to register in MegaCache. It would also remove specific cache logic from it. Related to https://github.com/pytorch/pytorch/pull/143341 Proposed PR https://github.com/pytorch/pytorch/pull/152977 ...
true
3,043,748,868
[dtensor] Extend Partial partition of replicated tensor for min/max reduce
BowenBao
open
[ "oncall: distributed", "triaged", "open source", "topic: improvements", "ciflow/inductor", "release notes: distributed (dtensor)" ]
2
COLLABORATOR
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,043,721,503
[hop_schema] add HopSchemaGenerator to make it easier to create hop schema
ydwu4
open
[ "topic: not user facing" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152984 * __->__ #152974 * #151067
true
3,043,701,258
Adding XPU support to DTensor examples.
githubsgi
open
[ "oncall: distributed", "triaged", "open source", "topic: not user facing" ]
3
CONTRIBUTOR
Adds XPU support to visualize_sharding_example.py and comm_mode_features_example.py. topic: not user facing cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,043,700,902
avoid falling back to as_strided for non-contiguous in-place reshape.
laithsakka
open
[ "oncall: pt2" ]
0
CONTRIBUTOR
When a non-contiguous tensor reshape operand has unbacked symbols, there is a very high probability of hitting data-dependent errors if we call view_symint, hence we call as_strided instead. We could have cloned as well, but as_strided sounds more efficient. ``` if (!self.sym_numel().has_hint() || !produc...
true
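The view-versus-as_strided/clone tradeoff above exists because a non-contiguous reshape often cannot be expressed as a simple view at all. A hedged NumPy analogy (NumPy's `reshape` silently falls back to a copy in exactly this case; this is an analogy, not the PyTorch code path):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
t = a.T                      # transpose: a non-contiguous view of a
flat = t.reshape(-1)         # no single-stride view exists; NumPy copies

# The flattened result holds new memory and the transposed element order.
assert not np.shares_memory(t, flat)
assert flat.tolist() == [0, 3, 1, 4, 2, 5]
```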
3,043,698,093
DISABLED test_comprehensive_scatter_xpu_int32 (__main__.TestInductorOpInfoXPU)
chuanqi129
open
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: linux This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_scatter_xpu_int32'%22%5D)). cc @gujinghui @EikanWang @fengyuan14 @guangy...
true
3,043,694,521
DISABLED test_comprehensive_gather_xpu_int64 (__main__.TestInductorOpInfoXPU)
chuanqi129
open
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: linux This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22'test%2Finductor%2Ftest_torchinductor_opinfo.py%3A%3ATestInductorOpInfoXPU%3A%3Atest_comprehensive_gather_xpu_int64'%22%5D)). cc @gujinghui @EikanWang @fengyuan14 @guangye...
true
3,043,681,942
[nativert] Move GraphSignature to pytorch core
yiming0416
open
[ "fb-exported", "topic: not user facing" ]
9
CONTRIBUTOR
Summary: Torch Native Runtime RFC: https://github.com/pytorch/rfcs/pull/72 An in-memory representation of `GraphSignature` for graph specs of an exported program, which will be consumed by the runtime. Test Plan: Added tests under `test/cpp/nativert/test_graph_signature.cpp` Differential Revision: D73895378
true
3,043,660,961
[inductor] Generate synthetic offsets appropriately for autotuning _scaled_grouped_mm
bertmaher
open
[ "topic: not user facing", "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152968 Summary: The autotuner is using zero-filled tensors to autotune _scaled_grouped_mm and that's not appropriate for the offsets tensor, since it essentially corresponds to "no input" and thus yields invalid perf results. ...
true
3,043,651,973
[ATen][CUDA] Optimize 128 bit vectorization
pytorchbot
closed
[ "open source", "release notes: cuda" ]
1
COLLABORATOR
Fixes #147376. As per request: https://github.com/pytorch/pytorch/pull/145746#pullrequestreview-2642118301 This PR excludes sm80 and older from using vec8 kernels due to long compilation times and large binary size. cc @ptrblck @msaroufim @eqy @jerryzh168 @manuelcandales @SherlockNoMad @angelayi
true
3,043,651,250
[Memento] On-demand mode using without torch api
mzzchy
open
[ "fb-exported", "topic: not user facing" ]
11
CONTRIBUTOR
Differential Revision: D74179606
true
3,043,618,388
WIP so many changes to generate non-as strided view
laithsakka
open
[ "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152965 * #152722 * #148872
true
3,043,614,211
[FSDP2] need dummy forward/backward to stay SPMD
weifengpy
open
[ "oncall: distributed", "triaged" ]
2
CONTRIBUTOR
### 🚀 The feature, motivation and pitch FSDP2 assumes SPMD on every rank, meaning every rank needs to call forward/backward to issue all-gather / reduce-scatter However, user reported two cases that some rank might be skipping forward/backward * torchtune might mask all the activations. they have to create a dummy i...
true
3,043,598,064
DTensor support for dynamic shapes is soft
bdhirsh
open
[ "oncall: distributed", "oncall: pt2" ]
1
CONTRIBUTOR
The state of DTensor + compile + dynamic shapes today is roughly: (1) for generic "pt2-friendly" tensor subclasses, we support compiling them with dynamic shapes. This includes cases where both the outer subclass shape and its inner tensor shape(s) vary independently. (2) At the same time, dynamic shapes support imp...
true
3,043,571,434
TestNestedTensorOpInfoCUDA.test_compile_backward_matmul_cuda_float32 Test Failure
nWEIdia
open
[ "module: tests", "triaged", "module: nestedtensor" ]
3
COLLABORATOR
### 🐛 Describe the bug Steps to Reproduce: please see https://github.com/pytorch/pytorch/issues/152962#issuecomment-2859328199 `Traceback (most recent call last): File "/usr/lib/python3.12/unittest/case.py", line 58, in testPartExecutor yield File "/usr/lib/python3.12/unittest/case.py", line 539, in subTest...
true
3,043,543,233
[Dynamo] Remove unused guard PYMODULE_MATCH
jbschlosser
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152729 * __->__ #152961 * #152872 * #152865 * #152730 * #152728 * #152727 * #152725 Not used anywhere: https://www.internalfb.com/code/search?q=repo%3Afbcode%20PYMODULE_MATCH cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @X...
true
3,043,469,694
Change aoti cpp tests to run serially within file
yushangdi
open
[ "ciflow/trunk", "topic: not user facing", "ciflow/inductor", "skip-url-lint" ]
7
CONTRIBUTOR
Fixes #152674 https://github.com/pytorch/pytorch/issues/152889 https://github.com/pytorch/pytorch/issues/152888 https://github.com/pytorch/pytorch/issues/152891 `--dist=loadfile` ensures all tests in the same source file run in the same worker. Tests like `FreeInactiveConstantBufferRuntimeConstantFoldingCud...
true
3,043,427,737
docs: Improve documentation for NCCL timeout / watchdog variables
booxter
open
[ "oncall: distributed", "triaged", "open source", "release notes: distributed (c10d)" ]
2
NONE
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,043,403,327
Follow up to #152209, remove compat patch
clee2000
open
[ "topic: not user facing" ]
1
CONTRIBUTOR
Remove the compat patch that lets PRs that haven't rebased past #152209 still have Docker images. Merge this next week
true
3,043,388,225
[CI] Upgrade sccache to 0.10.0
clee2000
closed
[ "Merged", "ciflow/trunk", "topic: not user facing" ]
3
CONTRIBUTOR
Newest release handles cuda better, and I think this fixes the cases I saw where some cuda related builds weren't being cached correctly
true
3,043,298,872
[ROCm] unskip test_non_standard_bool except for failing ops
pragupta
open
[ "module: rocm", "open source", "ciflow/rocm", "ciflow/inductor-rocm" ]
2
CONTRIBUTOR
Fixes #ISSUE_NUMBER cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
3,043,285,407
UNSTABLE pull / linux-docs / build-docs-functorch-false
malfet
closed
[ "module: docs", "module: ci", "triaged", "unstable" ]
2
CONTRIBUTOR
Jobs fails with infinite redirects, likely due to the changes happening to the doc website, see https://github.com/pytorch/pytorch/actions/runs/14862967281/job/41733878657 cc @svekars @sekyondaMeta @AlannaBurke @seemethere @pytorch/pytorch-dev-infra
true
3,043,272,004
DTensor placement propagation for `slice` fails during recompile due to SymInts
lw
open
[ "oncall: distributed", "oncall: pt2" ]
0
CONTRIBUTOR
### 🐛 Describe the bug This code fails: ```py import torch import torch.distributed torch.distributed.init_process_group(backend="nccl", rank=0, world_size=1, device_id=torch.device("cuda", 0), init_method="tcp://127.0.0.1:2743") device_mesh = torch.distributed.device_mesh.DeviceMesh.from_group(torch.distributed.gro...
true
3,043,159,561
[nativert] Move Placement to pytorch core
yushangdi
open
[ "fb-exported", "ciflow/trunk", "topic: not user facing" ]
13
CONTRIBUTOR
Summary: Move Placement to pytorch core. Using `torch::nativert::isSameDevice` explicitly in code to avoid confusion with the `isSameDevice` in torch namespace. Test Plan: ``` buck run fbcode//mode/dev-nosan //caffe2/test/cpp/nativert:placement_test ./bin/test_nativert ``` OSS and internal CI Diffe...
true
3,043,123,452
Remove redundant type aliases of _device for torch.Device
Skylion007
open
[ "good first issue", "triaged", "actionable" ]
5
COLLABORATOR
### 🚀 The feature, motivation and pitch We should remove redundant type aliases for `_device_t` and replace with `torch.types.Device` where appropriate to make the typing system a bit more consistent. #152935 is a good step in that direction ### Alternatives _No response_ ### Additional context _No response_
true
3,043,119,298
[ROCm] Ck gemm architecture guard
alugorey
open
[ "module: rocm", "triaged", "open source" ]
2
CONTRIBUTOR
Prevents CK GEMMs from being built unless explicitly specified. USE_ROCM_CK_GEMM controls the build and is on by default on the ROCm platform. cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
3,043,110,284
Add NestedTensorHPU to to_padded_tensor in native_functions.yaml
sfraczek
open
[ "triaged", "open source", "ciflow/xpu", "release notes: xpu" ]
5
NONE
null
true
3,043,004,042
[dtensor] add privateuse1 SDPA op support to DTensor
1274085042
open
[ "oncall: distributed", "triaged", "open source" ]
2
CONTRIBUTOR
**Summary** This PR adds _scaled_dot_product_fused_attention_overrideable and _scaled_dot_product_fused_attention_overrideable_backward to DTensor ops @drisspg @fegin @d4l3k @wanchaol @albanD cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,042,998,223
[Linter] Add linter to detect device-bias hard code in test cases.
etaf
open
[ "open source", "topic: not user facing" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152948 * #152945 Since XPU does not gate community pull requests, we’ve observed that contributors often hardcode "cuda" in functions decorated with @requires_gpu() when adding new test cases. This causes the tests to fail on XPU an...
true
3,042,984,947
Clean up of CUTLASS_VERSION
narekmalk
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
10
CONTRIBUTOR
Fixes #152847
true
3,042,957,740
[dtensor] add privateuse1 SDPA op support to DTensor
1274085042
closed
[ "oncall: distributed", "open source" ]
3
CONTRIBUTOR
**Summary** This PR adds _scaled_dot_product_fused_attention_overrideable and _scaled_dot_product_fused_attention_overrideable_backward to DTensor ops @drisspg @fegin @d4l3k @wanchaol @albanD cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,042,791,350
[Break XPU] Fix XPU UT failures introduced by community.
etaf
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "keep-going", "ciflow/xpu" ]
3
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152948 * __->__ #152945 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,042,756,025
DISABLED test_compiler_collectives_automatic_dynamic_tensor (__main__.TestMultiProc)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_compiler_collectives_automatic_dynamic_tensor&suite=TestMultiProc&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41701856727). Over th...
true
3,042,755,895
DISABLED test_comprehensive_ormqr_cuda_float64 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_ormqr_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41707147808). Over the pa...
true
3,042,748,855
aten._scaled_dot_product_efficient_attention returns LSE padded to next highest multiple of 32
a-r-r-o-w
open
[ "module: cuda", "triaged", "module: sdpa" ]
2
CONTRIBUTOR
### 🐛 Describe the bug Hi! This is less of a bug report and more of a question about why the behaviour is this way. With the following code to obtain LSE from the efficient attention backend, the shape of the LSE tensor is `[1, 2, 32]`. It is expected that the size in dim=2 should match the sequence length, which is `8` in thi...
true
3,042,681,757
ROCm: no HIP device available if device is already initialized
stefanozampini
open
[ "module: rocm", "triaged" ]
0
NONE
### 🐛 Describe the bug If I first initialize the HIP environment from `cupy`, `torch` does not detect it ``` $ python -c 'import cupy; print(cupy.cuda.is_available()); import torch; print(torch.cuda.is_available())' True False ``` However, as can be seen below, it should ``` $ python -c 'import cupy; print(cupy.cuda....
true
3,042,513,699
[Don't merge] Debug
mengfei25
open
[ "triaged", "open source", "module: dynamo" ]
3
CONTRIBUTOR
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true