id: int64 (range 2.74B–3.05B)
title: string (length 1–255)
user: string (length 2–26)
state: string (2 values)
labels: list (length 0–24)
comments: int64 (range 0–206)
author_association: string (4 values)
body: string (length 7–62.5k)
is_title: bool (1 class)
3,048,328,783
Floating Point exception in Convolution with disabled SMT
Flamefire
open
[]
0
COLLABORATOR
### 🐛 Describe the bug Using NNPACK for convolution on a system with disabled SMT causes a `Floating Point exception` (a divide-by-zero), terminating the program. This can be easily reproduced with `python nn/test_convolution.py TestConvolutionNN.test_conv2d_discontiguous_weight`. This can be traced to a cal...
true
3,048,208,603
Missing doc for torch.segment_reduce
shadow150519
open
[]
0
NONE
### 📚 The doc issue I have noticed there is a function called segment_reduce, but I can't find its doc. Will it have better performance than torch.scatter_reduce, since torch.scatter_reduce is more general? ### Suggest a potential alternative/fix _No response_
true
3,048,164,781
`torch.batch_norm` shows inconsistent error behavior between CPU and GPU
SilentTester73
open
[]
1
NONE
### 🐛 Describe the bug ## Description When `torch.batch_norm` is called with one of `running_mean` or `running_var` as a tensor and the other as `None`, an internal assertion `Expected has_running_mean == has_running_var to be true, but got false` is triggered on CUDA-enabled GPUs. However, this error is *not* trigg...
true
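The internal assertion quoted above amounts to a both-or-neither check on the running stats. A minimal pure-Python sketch of that consistency rule (the function name and arguments are illustrative, not PyTorch's actual code):

```python
def check_running_stats(running_mean, running_var):
    # Both running stats must be provided together or both omitted;
    # mixing a tensor with None trips the assertion the issue quotes.
    if (running_mean is None) != (running_var is None):
        raise ValueError(
            "Expected has_running_mean == has_running_var to be true, but got false"
        )

check_running_stats(None, None)      # OK: neither provided
check_running_stats([0.0], [1.0])    # OK: both provided
try:
    check_running_stats([0.0], None)  # mixed: raises
except ValueError as e:
    print(e)
```

Per the report, CUDA enforces this rule while the CPU path silently accepts the mixed case; the sketch shows the behavior the reporter expects on both devices.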
3,048,015,448
The parameters of in_proj_bias in MultiheadAttention are zeros
Neronjust2017
open
[]
0
NONE
### 🐛 Describe the bug I use nn.MultiheadAttention in my model ``` self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout, batch_...
true
3,047,984,875
Avoid using system_clock
cyyever
open
[ "oncall: distributed", "module: cpu", "open source", "release notes: quantization" ]
1
COLLABORATOR
This PR replaces most `std::chrono::system_clock` with `std::chrono::steady_clock` when the duration is used in condition variables. Ideally, system clocks should be used only to log wall-clock times. Some `high_resolution_clock` uses are also changed to `steady_clock` because their resolution is not required in the context. ...
true
3,047,982,027
[ROCm][CI] Update build-environment for mi300 workflows
jithunnair-amd
open
[ "module: rocm", "open source", "topic: not user facing", "ciflow/rocm" ]
1
COLLABORATOR
so their test times are tracked separately in https://raw.githubusercontent.com/pytorch/test-infra/generated-stats/stats/test-times.json. Currently, both MI200 and MI300 test times get combined into the same key `linux-focal-rocm-py3.10` cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hong...
true
3,047,909,723
Inconsistent behavior between CPU and GPU implementations of `torch.arange`
SilentTester73
open
[]
1
NONE
### 🐛 Describe the bug ## Description When using `torch.arange()` with a start value greater than the end value and a positive step, the behavior differs between CPU and GPU implementations: - GPU silently returns an empty tensor - CPU correctly raises an exception about inconsistent bounds with step sign ## Reprod...
true
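The CPU-side check described in the report can be sketched in plain Python; the function name and error message here are illustrative, not the actual ATen code:

```python
def validate_arange(start, end, step):
    # Sketch of the bounds/step consistency check the CPU path performs
    # (per the report); the GPU path skips it and returns an empty tensor.
    if step == 0:
        raise ValueError("step must be nonzero")
    if (step > 0 and start > end) or (step < 0 and start < end):
        raise ValueError("upper bound and lower bound inconsistent with step sign")

validate_arange(1, 5, 1)      # OK: ascending range, positive step
try:
    validate_arange(5, 1, 1)  # start > end with positive step: raises
except ValueError as e:
    print(e)
```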
3,047,900,019
Inconsistent behavior and misleading error message for `torch.nanmean()` with complex dtypes
SilentTester73
open
[ "topic: bug fixes" ]
2
NONE
### 🐛 Describe the bug ## Description: When using nanmean() with complex tensors, there's an inconsistent behavior between CPU and GPU implementations: - On GPU: The function works correctly with complex dtypes (complex128) - On CPU: The function fails, but with a misleading error message: "nansum does not support c...
true
3,047,833,331
fix slice w/ dynamic shapes
cgufb
open
[ "fb-exported", "ciflow/inductor" ]
4
CONTRIBUTOR
Summary: guard_size_oblivious has side effects that will result in invalid strides when slice nodes take a negative index on dynamic input shapes. Test Plan: CIs should pass. Differential Revision: D74354663
true
3,047,778,668
[Minimizer] Fix the path naming
jimone1
open
[ "fb-exported", "release notes: fx", "fx" ]
5
CONTRIBUTOR
Summary: Added some logging and captured the indexing. See below image. {F1977773416} This is why the saved module path is called `/tmp/jimwan/minimizer_a_acc.pt` Now the updated module paths are `/tmp/jimwan/minimizer_addmm_default_103_acc.pt`. Test Plan: ``` MTIAC_USE_DIST_REF_KERNELS=all buck2 run @//mode/opt mt...
true
3,047,722,761
DISABLED test_intermediary_hooks_same_on_inductor (__main__.HooksTests)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
1
NONE
Platforms: asan, linux, mac, macos, rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_intermediary_hooks_same_on_inductor&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41835113407). ...
true
3,047,721,817
Inconsistent Complex torch.Tensor.asin() Results Between CPU and GPU
SilentTester73
open
[]
1
NONE
### 🐛 Describe the bug ## Bug Description When computing the asin of a complex tensor with a very small real part and a large imaginary part using PyTorch, there is a discrepancy between the results computed on CPU versus GPU. The CPU computation often returns complex infinity, while the GPU returns finite numerical values...
true
3,047,693,561
Operations on a tensor and a scalar will cause the error on dtype of the result
Redempt1onzzZZ
closed
[]
2
NONE
### 🐛 Describe the bug It's a derived finding based on #153014. Per PyTorch's normal promotion logic, when an API deals with two tensors of different dtypes (data precisions), the result follows the higher precision, as in the example below. ``` import torch tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cud...
true
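The tensor-scalar case the report contrasts with follows a different rule: a Python scalar is a "wrapped number" that only contributes its category (bool < integral < floating), never a concrete width. A rough pure-Python model of that rule (the helper and dtype strings are illustrative, not PyTorch internals):

```python
# Category order: bool < integral < floating. A Python scalar can bump
# the result's category but never widen the tensor's dtype within one.
CATEGORY = {"bool": 0, "int64": 1, "float16": 2, "float32": 2, "float64": 2}

def promote_tensor_scalar(tensor_dtype, scalar):
    scalar_cat = 2 if isinstance(scalar, float) else 1
    if scalar_cat <= CATEGORY[tensor_dtype]:
        return tensor_dtype  # tensor dtype wins within or above the category
    # A scalar of a higher category promotes to that category's default
    # dtype (float scalars -> float32 in this sketch).
    return "float32" if scalar_cat == 2 else "int64"

print(promote_tensor_scalar("float16", 0.5))  # float16: scalar doesn't widen
print(promote_tensor_scalar("int64", 0.5))    # float32: scalar bumps category
```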
3,047,638,306
Add tests to check pretty print when padding is a string in C++ API
Alvaro-Kothe
open
[ "open source", "topic: not user facing" ]
1
CONTRIBUTOR
Currently there are no tests to verify the behaviour of pretty print when padding is `torch::kSame` or `torch::kValid`. This PR just adds these tests to guard against future regressions.
true
3,047,624,613
Add logging for guard miss failure
jamesjwu
open
[ "fb-exported", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153125 Differential Revision: [D74371381](https://our.internmc.facebook.com/intern/diff/D74371381/) This PR adds some logging for guard misses to tlparse, so that we know when AOTAutogradCache and FxGraphCache miss due to guard...
true
3,047,624,527
Turn on static cuda launcher test
jamesjwu
closed
[ "fb-exported", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * (to be filled) Differential Revision: [D74339692](https://our.internmc.facebook.com/intern/diff/D74339692/) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chen...
true
3,047,608,665
DISABLED test_input_codegen_with_sympy_expr_xpu (__main__.AOTInductorTestABICompatibleGpu)
etaf
open
[ "triaged", "skipped", "module: xpu" ]
1
COLLABORATOR
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.> This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_aot_inductor.py%3A%3AAOTInductorTestABICompatibleGpu%3A%3Atest_input_code...
true
3,047,534,169
[CUDA] test_c10d_nccl test_extra_cuda_context failure due to _helper_test_extra_cuda_context_by_memory
nWEIdia
open
[]
1
COLLABORATOR
While trying to replace cuda11.8 distributed jobs by cuda 12.6 ([PR](https://github.com/pytorch/pytorch/pull/151594/files#diff-9f639571a250cffbe9cded7d2fbb5ad6311e4be9c0c7610e5ba85930806e7f38)), test_extra_cuda_context failed and I had to increase the 1.5x heuristic to 1.7 to temporarily workaround the failure. When...
true
3,047,526,737
Revert "[CI] docker images use tags instead of image name (#152209)"
huydhn
open
[ "module: rocm", "topic: not user facing", "ciflow/inductor", "ci-no-td" ]
1
CONTRIBUTOR
This reverts commit 0145f9e29e37beb2fb03bf2538f675060ab7b4f5. DEBUG PR, no need to review cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
true
3,047,515,089
DISABLED test_nn_module (__main__.TestGuardSerialization)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
2
NONE
Platforms: asan, linux, mac, macos, rocm, slow This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nn_module&suite=TestGuardSerialization&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41828599132). Over ...
true
3,047,510,959
devmate factor out test_torch tests
bobrenjc93
open
[ "topic: not user facing" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153119 * #153118 * #152924 prompt "ok great now can you split test\_torch.py into more smaller pieces just like you did with test\_basic\_vital\_signs.py?"
true
3,047,498,747
devmate test_basic_vital_signs
bobrenjc93
open
[ "topic: not user facing" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #153119 * __->__ #153118 * #152924
true
3,047,468,182
[TESTING] Triton pin (May 7) 81f93f2c8ec7d20a1f8184def767edeaebeb6812
davidberard98
open
[ "ciflow/trunk", "topic: not user facing", "ciflow/inductor", "ciflow/rocm", "ci-no-td" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153117
true
3,047,467,718
[c10d] Reduce test verbosity
kwen2501
open
[ "module: c10d", "topic: not user facing" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153116 We have been seeing a lot of `Starting event listener thread for rank` messages recently in test print-outs. Moving them to `logger.debug`.
true
3,047,453,154
[ONNX] Implement sym_float?
justinchuby
open
[ "module: onnx", "triaged" ]
2
COLLABORATOR
Do we need sym_float in https://github.com/pytorch/pytorch/blob/main/torch/onnx/_internal/exporter/_torchlib/ops/symops.py ? @titaiwangms @xadupre
true
3,047,451,491
[BE][lint] fix PYFMT for PT-D code under torch.testing._internal, add them to the lint list
XilunWu
open
[ "oncall: distributed", "module: lint", "better-engineering", "ciflow/trunk", "topic: not user facing" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153114 cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
true
3,047,447,913
[C10D] Move getNcclDataType into NCCLUtils
GD06
closed
[ "oncall: distributed", "fb-exported", "Merged", "ciflow/trunk", "release notes: distributed (c10d)" ]
6
CONTRIBUTOR
Differential Revision: D74365214 cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,047,425,123
Support using SymInt shapes for torch.baddbmm no-broadcast case
yf225
open
[ "ciflow/trunk", "topic: not user facing" ]
2
CONTRIBUTOR
A typical `bmm` kernel in Helion needs to pass in symint shapes to `torch.baddbmm`. Currently `self.expand((dim1, dim2, dim3))` in baddbmm runs unconditionally and it doesn't work with symint shapes (it raises the following error): ``` Traceback (most recent call last): File "/home/willfeng/local/helion_yf225/heli...
true
3,047,422,533
[Graph Partition] Maintain relative order within partition during reordering
BoyuanFeng
open
[ "oncall: distributed", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
PR #151968 adds `reorder_for_minimizing_partition` for the minimal number of partitions. If reordering two nodes cannot reduce the number of partitions, `reorder_for_minimizing_partition` should maintain the relative order of these two nodes and rely on other reorder passes for some nice features, such as shorter liven...
true
3,047,414,644
[c10d] Remove unordered PG destroy test
kwen2501
open
[ "oncall: distributed", "ciflow/trunk", "topic: not user facing" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153110 torch.distributed does not support unordered ProcessGroup destroy. Removing the test. Resolves #137507 cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,047,396,841
Inconsistent size passed to custom CUDA alloc/free in torch::unique_consecutive
darrin-willis
open
[]
0
NONE
### 🐛 Describe the bug When using `CUDAPluggableAllocator`, there is a different size passed to `malloc` vs `free` for some tensor inside `torch::unique_consecutive` on the third invocation. This can impact & corrupt alternative allocators like RMM. This may be related to https://github.com/pytorch/pytorch/pull/13047...
true
3,047,361,033
Introduce unbacked friendly is_known_contiguous and use it instead of is_contiguous in all locations where there is a general path for not know_contiguous
laithsakka
open
[ "oncall: pt2", "module: dynamic shapes", "data dependent error" ]
0
CONTRIBUTOR
title. cc @chauhang @penguinwu @ezyang @bobrenjc93
true
3,047,306,800
do not reinplace diagonal_scatter
BoyuanFeng
open
[ "ciflow/trunk", "topic: not user facing", "module: functionalization", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
In the following code, `copy_` changes values of `mul` which is later read by `torch.mm`. So `torch.mm` has to happen after `copy_`. This info is captured in aot graph. We can see `mm` reads `diagonal_scatter`, which reads `copy`. So we know torch.ops.aten.mm must happen after torch.ops.aten.copy. However, in post_g...
true
3,047,291,352
[cutlass backend] Fix EVT test for fbcode post cutlass 3.9.2 upgrade
henrylhtsang
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
7
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153106 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,047,287,497
[dynamo] Fix super and classmethod binding of cls object
anijain2305
open
[ "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153105 * #152883 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,047,281,502
[FlexAttention] Remove Old Constraint on lastdim strides
drisspg
open
[ "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
https://github.com/pytorch/pytorch/pull/151959 Cherry pick cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,047,264,749
[Inductor] Investigate computing global amaxes via atomics (instead of a reduction-based approach) in triton codegen
danielvegamyhre
open
[ "oncall: pt2", "module: inductor" ]
0
CONTRIBUTOR
## Summary Tensorwise or rowwise amax values are used to compute scaling factors in float8 quantization. Computing these values in a performant way is critical for float8 training with dynamic quantization, where we are dynamically scaling the tensors at runtime in forward/backward. Currently inductor codegen uses a...
true
3,047,243,750
`bernoulli_()` produces inconsistent results between CPU and GPU
SilentTester73
closed
[]
1
NONE
### 🐛 Describe the bug ## Description The in-place `torch.Tensor.bernoulli_()` function generates significantly different results when run on CPU versus GPU. ## Minimal Reproduction Code Available on Colab: [https://colab.research.google.com/drive/1CC3VIj0FocMUu1ebozzF7IHsdBiQDPE_?usp=sharing](https://colab.researc...
true
3,047,232,042
[CUDA][CUDNN] Dispatch to cuDNN for non-batch-splittable 64-bit NCHW convolutions
eqy
open
[ "module: cuda", "module: cpu", "module: convolution", "open source", "topic: not user facing" ]
1
COLLABORATOR
For #152816 cc @ptrblck @msaroufim @jerryzh168 @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
true
3,047,230,094
DISABLED test_intermediary_hooks_same_on_aot_eager (__main__.HooksTests)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
3
NONE
Platforms: asan, linux, mac, macos, rocm, slow This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_intermediary_hooks_same_on_aot_eager&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/4181877...
true
3,047,223,137
[mm sampling] extract more triton information
coconutruben
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
8
CONTRIBUTOR
Summary: # Why capture more triton config information that was not being captured # What capture and extract - group_m - allow_tf32 - acc_type - matrix_instr_nonkdim - waves_per_eu - kpack to achieve this, add - matrix_instr_nonkdim - waves_per_eu - kpack to the info_dict of the TritonTemplateCaller Test Plan: ...
true
3,047,189,264
[Cherry-pick] Fix copysign + scalar correctness issue
malfet
open
[ "release notes: mps", "ciflow/mps" ]
1
CONTRIBUTOR
Which consists of two cherry-picks: - https://github.com/pytorch/pytorch/pull/152997 - https://github.com/pytorch/pytorch/pull/152510 (only partially, as the code paths are quite divergent between 2.7 and trunk)
true
3,047,165,801
Use std::fma for CUDA Adam kernel's lerps.
MeetThePatel
open
[ "open source", "release notes: cuda" ]
1
CONTRIBUTOR
Switch the calculation of lerps in Adam's fused CUDA kernel to use std::fma, as proposed by @crcrpar .
true
3,047,162,885
[WIP][XPU] Update Triton commit
anmyachev
open
[ "triaged", "open source", "topic: not user facing", "ciflow/inductor", "ciflow/xpu" ]
2
COLLABORATOR
To view the current pass rate on a full test suite and detect problems earlier.
true
3,047,161,717
[CUDA][cuBLASLt] Respect `allow[FP16/BF16]ReductionCuBLAS` in `cuBLASLt`
eqy
open
[ "module: cublas", "open source", "module: bfloat16", "module: half", "topic: not user facing", "matrix multiplication" ]
1
COLLABORATOR
cuBLASLt matmuls have been silently allowing all reduction types, which meant that e.g., `allow_fp16_reduced_precision_reduction = False` had no effect. In practice split-K with reduced precision reductions were unlikely to happen as the default `CUBLASLT_WORKSPACE_SIZE` of 1MiB tends to prevent this. However thi...
true
3,047,146,820
Add missing in-place on view check to custom autograd.Function
soulitzer
open
[ "release notes: autograd" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153094 * #153005 Fixes https://github.com/pytorch/pytorch/issues/152773
true
3,047,052,936
[vec128] Fix fmsub NEON definition
pytorchbot
closed
[ "module: cpu", "open source" ]
1
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152075 As reported in https://github.com/pytorch/pytorch/issues/149292, according to the manual, `vfmsq_f32` implements `c - a * b` rather than `a * b - c`, so its call must be prefixed with `vnegq_f32`. Also, adjust the tests to u...
true
3,047,034,816
[MKLDNN] Check that strides are positive
pytorchbot
closed
[ "module: cpu", "module: mkldnn", "open source", "ciflow/linux-aarch64" ]
1
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #151848 For pooling ops. Prevents division-by-zero when argument is wrong Fixes https://github.com/pytorch/pytorch/issues/149274 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCh...
true
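The guard the PR describes amounts to a positivity check before strides are used as divisors. A minimal sketch under hypothetical names (the real check lives in the MKLDNN pooling bindings, not in Python):

```python
def check_pool_strides(strides):
    # Non-positive strides would later appear as divisors when computing
    # pooling output sizes, causing the division-by-zero the PR prevents.
    if any(s <= 0 for s in strides):
        raise ValueError(f"pooling strides must be positive, got {strides}")

check_pool_strides((2, 2))      # OK
try:
    check_pool_strides((0, 2))  # zero stride: raises instead of crashing
except ValueError as e:
    print(e)
```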
3,047,030,686
Fix tensorpipe compilation with clang-17
pytorchbot
closed
[ "open source" ]
1
COLLABORATOR
By suppressing `missing-template-arg-list-after-template-kw` warning, which seems to be required to compile Google's libnop, which is in a semi-abandoned state now ``` In file included from /Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/base/variant.h:21: /Users/malfet/git/py...
true
3,046,998,970
Clean up right nav
svekars
open
[ "module: docs", "topic: docs", "topic: not user facing" ]
2
CONTRIBUTOR
- Move community and language binding links to the horizontal bar - Add an intro to the community page. - Fix the link in the ogp_image - Fix the link in the version switcher - Clean up unneeded links - Test noindex as a meta tag in fsdp doc cc @sekyondaMeta @AlannaBurke
true
3,046,977,866
[Cherry Pick] Remove cuda dependencies from non cuda builds #152333
atalman
closed
[ "topic: not user facing" ]
1
CONTRIBUTOR
Cherry Pick of https://github.com/pytorch/pytorch/pull/152333 Related to: https://github.com/pytorch/pytorch/issues/152121
true
3,046,976,974
[nativert] move recordfunction
dolpm
open
[ "fb-exported", "topic: not user facing" ]
8
CONTRIBUTOR
Summary: nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed. This diff moves our function-recording RAII wrapper into record_...
true
3,046,969,761
[nativert] move executor config to torch
dolpm
open
[ "fb-exported", "topic: not user facing" ]
3
CONTRIBUTOR
Summary: nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed. This diff moves the executor config to torch. since it's header-only...
true
3,046,947,509
Export doesn't work with patched forward
tugsbayasgalan
open
[ "oncall: pt2", "oncall: export" ]
0
CONTRIBUTOR
### 🐛 Describe the bug ``` class Foo(torch.nn.Module): def __init__(self): super().__init__() def forward(self, x): return x + 2 import functools def fancy_forward(x, y): return x + 2 + y Foo.forward = functools.partial(fancy_forward, y=torch.randn(4, 4)) torch.export.export(Foo(), (to...
true
3,046,947,193
Allow workflows to opt-out of experiments
zxiiro
open
[ "open source", "topic: not user facing", "ciflow/inductor-periodic" ]
1
COLLABORATOR
This change adds support to allow workflows to opt-out of experiments.
true
3,046,940,041
Refactor nested benchmarking functions in select_algorithm.py
masnesral
open
[ "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153084 Summary: I'll need some of the benchmark-related functions surfaced so I can use them for remote autotuning. This PR just lifts the main in-process benchmarking helpers to classmethods. It wasn't strictly necessary to also mov...
true
3,046,920,180
[CUDA][cuBLASLt] Fix scale setting for `allowFP16AccumulationCuBLAS` `true` case
eqy
open
[ "module: cuda", "triaged", "module: cublas", "open source", "module: half", "release notes: cuda" ]
1
COLLABORATOR
Also add some missing `@onlyCUDA` / support check decorators in `test_matmul_cuda.py` Should help resolve #151890 cc @ptrblck @msaroufim @jerryzh168 @csarofeen @xwang233
true
3,046,896,343
[dynamo] Harden torch function dispatchability check for attributes and methods access
StrongerXi
open
[ "ciflow/trunk", "module: dynamo", "ciflow/inductor", "release notes: dynamo" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153082 See more details in https://github.com/pytorch/pytorch/issues/151771#issuecomment-2836372110. Fixes #151771. cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayis...
true
3,046,889,949
[cutlass-3] Add cutlass key for fbcode and OSS
henrylhtsang
open
[ "fb-exported", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
7
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153081 Differential Revision: [D74337959](https://our.internmc.facebook.com/intern/diff/D74337959/) cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chen...
true
3,046,852,748
[BE] Move all lint runner to 24.04
malfet
closed
[ "topic: not user facing" ]
1
CONTRIBUTOR
As Ubuntu-20 reached EOL on Apr 1st (see https://github.com/actions/runner-images/issues/11101), this forces the oldest Python version to be 3.8. Deleted all linux-20.04 runners from lintrunner.yml. Cherry-pick of https://github.com/pytorch/pytorch/pull/150427 into the release/2.7 branch (cherry picked from commit 48af2cd...
true
3,046,834,020
[FSDP2][Doc] add pointer to torchtitan
weifengpy
open
[ "release notes: distributed (fsdp)" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153079 Summary: Test Plan: Reviewers: Subscribers: Tasks: Tags:
true
3,046,761,919
Add TensorLR variant for fused Adagrad on CPU
MeetThePatel
open
[ "triaged", "open source", "release notes: optim" ]
3
CONTRIBUTOR
This PR adds a tensor LR variant for the CPU Adagrad(fused=True). I copied the behavior from the tensor LR variant of CPU Adam(fused=True), where the `lr.item()` is cast to a double and passed in the default function.
true
3,046,743,699
Mismatch of mixed precision `cast_fn` in FSDP and FSDP2
markovka17
open
[ "oncall: distributed", "module: fsdp" ]
1
CONTRIBUTOR
### 🐛 Describe the bug FSDP2 does not work with `dataclasses` as input. More specifically, FSDP2's pre_hook does not cast tensors from dataclass. FSDP uses [_apply_to_tensors_](https://github.com/pytorch/pytorch/blob/172e6415299e93629497d9660c525c8bf60af912/torch/distributed/utils.py#L218) to handle dataclass-like ob...
true
3,046,733,312
Fix test/test_optim.py error message.
MeetThePatel
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
3
CONTRIBUTOR
Fixes an error message in test/test_optim.py Current behavior: If running the test with Adagrad, the error message reads: "SGD does not currently support capturable". Fix: The error message now says correctly: "Adagrad does not currently support capturable".
true
3,046,732,426
Delete .github/workflows/docker-cache-mi300.yml
seemethere
open
[ "topic: not user facing" ]
2
MEMBER
The runner group for this has 0 runners, we should probably just delete. ![Screenshot 2025-05-07 at 10 45 34 AM](https://github.com/user-attachments/assets/3e25220a-2a9c-427c-9839-286c56900b9c)
true
3,046,729,068
Fix TORCH_CHECK error message in FusedSgdKernel
MeetThePatel
closed
[ "open source", "Merged", "ciflow/trunk", "release notes: cuda" ]
3
CONTRIBUTOR
This fixes an issue in the TORCH_CHECK error message in the FusedSgdKernel. Current behavior: If the LR tensor is not on the same device as the parameters, the error message reads: "found_inf must be on the same GPU device as the params". Fix: The error message now correctly points out "lr must be on the same GPU...
true
3,046,724,169
[inductor] Fix #153071
rec
open
[ "open source", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153073 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,046,707,691
fbgemm Update pinned version
gchalump
open
[ "fb-exported", "topic: not user facing" ]
6
NONE
Differential Revision: D74335570
true
3,046,661,927
Link check fails on link from comment in torch/_inductor/codegen/cpp.py to Stack Overflow
rec
open
[ "module: lint", "triaged", "actionable", "bug" ]
3
COLLABORATOR
### 🐛 Describe the bug My PR kept stalling in merge complaining about link checking without providing a message, so I rebased it to reveal this: https://github.com/pytorch/pytorch/actions/runs/14888839700/job/41815590757?pr=149958 ``` [...] 200 https://github.com/pytorch/pytorch/blob/f353d17755ed23b02924c962a86ff99...
true
3,046,661,687
Fix path matching in `CPythonTestCase/setUpClass`
guilhermeleobas
open
[ "open source", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152991 * #152990 * #152908 * #152907 * #152989 * #152906 * #152905 * #152903 * #152902 * #152901 * #152904 * #152988 * #152987 * #150792 * #152900 * __->__ #153070 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSupe...
true
3,046,646,916
`torch.ldexp` goes out of range when `2**other` is out of range
roman-openai
open
[ "high priority", "triage review", "module: correctness (silent)" ]
3
NONE
### 🐛 Describe the bug ```python import torch torch.ldexp(torch.tensor([2], dtype=torch.float16), torch.tensor([-25], dtype=torch.int32)) ``` Gives ```python tensor([0.], dtype=torch.float16) ``` Even though `2 * 2**-25 = 2**-24` is non-zero and is within representable range of `torch.float16`, and ```python torch.ld...
true
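The report's claim that `2 * 2**-25` is representable in float16 can be checked without PyTorch: `2**-24` is exactly the smallest subnormal of IEEE binary16, and Python's `struct` format `'e'` round-trips it, so a correctly rounded half-precision `ldexp` should not flush it to zero.

```python
import math
import struct

# 2 * 2**-25 computed exactly in double precision.
exact = math.ldexp(2.0, -25)
assert exact == 2.0 ** -24

# Round-trip through IEEE binary16 (struct format 'e'): the value
# survives unchanged, so it is exactly representable in float16.
packed = struct.pack('e', exact)
assert struct.unpack('e', packed)[0] == exact

# Halving once more drops below the smallest binary16 subnormal and
# rounds to zero, confirming 2**-24 sits right at the edge.
below = struct.unpack('e', struct.pack('e', exact / 2))[0]
print(below)  # 0.0
```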
3,046,641,284
OSS CI Infra Storm (Scenario 1 + 2) - May 7, 2025
seemethere
open
[ "triaged" ]
2
MEMBER
## Current Status Executing Scenario 2 ## Scenario 1 Following along scenario 1 ([link, for Metamates only](https://docs.google.com/document/d/1ttAsjMrCEoEyqnIs5UdxzkAvL7xKnO10ytsm7hg9rWQ/edit?fbclid=IwZXh0bgNhZW0CMTEAYnJpZBExeXRVeUNaSlJVeG9NenBsUQEeQMjB0mGfUzUl5CQ3NcECnkY1we9HB_aw1MaM55y3smJvGT4jbkicOix5j-s_aem_tzgX...
true
3,046,634,751
Add device guard for xpu conv on multi device
guangyey
open
[ "module: cpu", "open source", "ciflow/trunk", "keep-going", "merging", "ciflow/xpu", "release notes: xpu" ]
12
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153067 # Motivation fixes https://github.com/pytorch/pytorch/issues/153022 The root cause is that the XPU backend registers the convolution op using `m.impl`, which bypasses the device guard logic typically added by the code gen...
true
3,046,576,289
fix bug with TORCHINDUCTOR_DUMP_LAUNCH_PARAMS
exclamaforte
open
[ "fb-exported", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Summary: https://fb.workplace.com/groups/1028545332188949/posts/9503194033132340/?comment_id=9504669536318123&reply_comment_id=9506405459477864&notif_id=1746154132646897&notif_t=work_group_comment_mention Aligns the arguments for the triton inputs Differential Revision: D74085173 cc @voznesenskym @penguinwu @Eika...
true
3,046,554,465
[ONNX] dynamic_shapes uses DYNAMIC
titaiwangms
closed
[ "module: onnx", "open source", "Merged", "ciflow/trunk", "release notes: onnx", "topic: improvements" ]
3
COLLABORATOR
Although Dim.AUTO covers the case where a user sets more axes to be dynamic than the model actually needs, it silently falls back to STATIC when DYNAMIC fails. This increases the difficulty of debugging.
true
3,046,532,741
Keep raw cubin file around in case it gets deleted underneath us
jamesjwu
open
[ "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor", "ciflow/pull" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153064 This diff hardens StaticCudaLauncher in the event a cubin file gets deleted under us. We store the raw cubin on the static cuda launcher, and reload it as needed. On cold start, this can happen if the cubin file is created ...
true
3,046,516,669
[FlexAttention] export fails to trace with functorch
tugsbayasgalan
open
[ "triaged", "oncall: pt2", "module: functorch", "module: flex attention" ]
0
CONTRIBUTOR
### 🐛 Describe the bug ```python import torch import torch.nn as nn from torch.func import vmap from torch.export import export # 1. Inner model (shared across batch) class TinyModel(nn.Module): def __init__(self): super().__init__() self.linear = nn.Linear(8, 4) def forward(self, x): ...
true
3,046,481,642
non-strict export should detect fake tensor leakage
tugsbayasgalan
open
[ "oncall: pt2", "oncall: export" ]
0
CONTRIBUTOR
### 🐛 Describe the bug ```python class Model(torch.nn.Module): def __init__(self): super().__init__() self.buffer = torch.nn.Buffer(torch.randn(4, 4)) def forward(self, x): return self.buffer.sum() + x.sum() class Pipeline: def __init__(self, model): self.model = model ...
true
3,046,466,098
register_constant doesn't work on simple types
tugsbayasgalan
open
[ "module: pytree", "oncall: pt2", "oncall: export" ]
1
CONTRIBUTOR
### 🐛 Describe the bug ```python from enum import Enum class Color(Enum): RED = 1 GREEN = 2 BLUE = 3 class Foo(torch.nn.Module): def __init__(self): super().__init__() def forward(self, x, col): return x + col.value torch.utils._pytree.register_constant(Color) torch.export.ex...
true
3,046,465,981
Fix misleadingly high AOT Inductor dashboard performance
benjaminglass1
open
[ "open source", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
COLLABORATOR
An [example benchmark](https://hud.pytorch.org/benchmark/timm_models/inductor_aot_inductor?dashboard=torchinductor&startTime=Wed%2C%2030%20Apr%202025%2015%3A54%3A04%20GMT&stopTime=Wed%2C%2007%20May%202025%2015%3A54%3A04%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(h100)&lBranch=main&lCommit=1...
true
3,046,429,194
DISABLED test_input_hooks_same (__main__.HooksTests)
pytorch-bot[bot]
open
[ "module: flaky-tests", "skipped", "module: unknown", "oncall: pt2", "module: dynamo" ]
3
NONE
Platforms: linux, mac, macos, rocm, asan, slow This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_input_hooks_same&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41796649006). Over the p...
true
3,046,315,502
`cuda.Event` handling in dynamo is broken
bdhirsh
open
[ "module: cuda", "oncall: pt2", "module: dynamo" ]
1
CONTRIBUTOR
Here's an example: ``` import torch lst = [] @torch.compile(backend="eager", fullgraph=True) def f(x): start_event = torch.cuda.Event(enable_timing=True) end_event = torch.cuda.Event(enable_timing=True) start_event.record() out = torch.matmul(x, x) end_event.record() lst.append(start_event) ...
true
3,046,272,178
[BE] Update ruamel to 0.18.10
malfet
closed
[ "better-engineering", "Merged", "ciflow/trunk", "topic: not user facing" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152719 * __->__ #153057 To address the feedback from https://github.com/pytorch/pytorch/pull/153013 Previously it was pinned to 0.17.4, which was released in 2021
true
3,046,252,394
Export doesn't move embedding to correct device
tugsbayasgalan
open
[ "oncall: pt2", "oncall: export" ]
0
CONTRIBUTOR
### 🐛 Describe the bug ```python import torch class Model(torch.nn.Module): def __init__(self): super().__init__() self.embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=8) def forward(self, x): token_ids = torch.randint(0, 10, (4,), device=x.device) embedded...
true
3,046,238,361
[BE]: Add PEP621 project section to pyproject.toml
Skylion007
open
[ "triaged", "open source", "better-engineering", "topic: not user facing" ]
3
COLLABORATOR
Follow-up to @ezyang's PR #153020, but this makes better use of PEP 621 to reduce redundant fields and pass metadata through to uv, setuptools, poetry, and other tooling. * Enables modern tooling like uv sync and better support for tools like poetry. * Also allows us to set project-wide settings that are respected by lin...
true
3,046,198,637
[HOP] Reworked HOPs to use FunctionalizeCtxWrapper
bohnstingl
open
[ "triaged", "open source", "topic: not user facing" ]
3
COLLABORATOR
This PR reworks the `py_functionalize_impl` of HOPs and introduces the use of `FunctionalizeCtxWrapper`. cc @ydwu4
true
3,046,102,584
[BE]: Blacklist broken setuptools until we upgrade MSVC API
Skylion007
open
[ "open source", "topic: not user facing" ]
1
COLLABORATOR
Alternative to #153052, where we just ban the broken setuptools version
true
3,046,100,119
[BE]: Use undocumented temp shim to restore setuptools compat
Skylion007
open
[ "oncall: releng", "open source", "better-engineering", "topic: not user facing" ]
2
COLLABORATOR
null
true
3,046,096,583
[Intel GPU] scalar tensor case handling in addmm, baddmm
ZhiweiYan-96
open
[ "module: cpu", "module: mkldnn", "open source", "ciflow/trunk", "topic: not user facing", "ciflow/binaries_wheel", "ciflow/xpu", "ciflow/linux-aarch64" ]
2
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153051 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
true
3,045,564,719
Process never ends when sending tensors through multiprocessing queues in Python 3.12+ on macOS
rafalh
open
[ "needs reproduction", "module: multiprocessing", "triaged", "module: macos", "module: deadlock" ]
4
NONE
### 🐛 Describe the bug If a tensor is sent through a multiprocessing queue, something blocks the process from exiting after the end of the script is reached (I have to press Ctrl+C to end the program). It seems to be related to the resource tracker (`multiprocessing.resource_tracker.ResourceTracker`) process started by Python au...
true
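A related stdlib behavior worth noting for the hang described above (a hedged sketch, not a confirmed diagnosis of this issue): `multiprocessing.Queue` starts a background feeder thread on the first `put()`, and by default the interpreter will not exit until that thread has flushed its data to the pipe. The stdlib escape hatch is `cancel_join_thread()`:

```python
import multiprocessing as mp

if __name__ == "__main__":
    q = mp.Queue()
    q.put(b"payload")       # first put() starts a background feeder thread
    q.cancel_join_thread()  # don't block interpreter exit on the feeder
    q.close()               # no more data will be put on this queue
    print("clean exit")
```

Whether the torch-specific hang goes through this path (or through the resource tracker, as the report suggests) would need verification; this only illustrates the stdlib mechanism that produces the same symptom.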
3,045,487,542
Update docs of saved_tensors_hooks to avoid ref cycle
ppwwyyxx
closed
[ "triaged", "open source", "Merged", "ciflow/trunk", "release notes: autograd", "topic: docs" ]
3
COLLABORATOR
Fixes #115255
true
3,045,394,006
🌠 Add Muon optimizer
kadirnar
open
[ "triaged", "open source", "release notes: optim" ]
3
NONE
Fixes https://github.com/pytorch/pytorch/issues/148819
true
3,045,385,461
DISABLED test_comprehensive_special_ndtri_cuda_int64 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_special_ndtri_cuda_int64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41770133820). Over ...
true
3,045,385,330
DISABLED test_comprehensive_trunc_cuda_float64 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
1
NONE
Platforms: linux This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_trunc_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41775258519). Over the past ...
true
3,045,385,194
DISABLED test_hook_with_nested_closure (__main__.HooksTests)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: dynamo" ]
3
NONE
Platforms: asan, linux, mac, macos, rocm, slow This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_hook_with_nested_closure&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41771802586). Over...
true
3,045,331,296
Unexpected float32 overflow for amp training with torch.compile
zbh2047
open
[ "high priority", "triage review", "oncall: pt2" ]
1
NONE
### 🐛 Describe the bug I recently encountered a significant precision issue when using torch.amp together with torch.compile. I was finally able to create a minimal reproducible example as shown below: ```python import torch import torch.nn as nn class Model(nn.Module): def __init__(self): super().__init__() ...
true
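The kind of silent range loss behind an overflow like the one reported above can be illustrated without torch (a stdlib-only sketch; `ctypes.c_float` here merely stands in for a downcast to float32, not for the actual autocast code path): a value that is finite in float64 becomes infinity once it exceeds float32's maximum of roughly 3.4e38.

```python
import ctypes
import math

def to_float32(x: float) -> float:
    # Emulate a float64 -> float32 cast; values beyond FLT_MAX (~3.4e38)
    # round to infinity instead of raising, mirroring silent overflow.
    return ctypes.c_float(x).value

print(to_float32(1e38))                 # still finite in float32
print(math.isinf(to_float32(3.5e38)))  # True: exceeds the float32 range
```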
3,045,117,090
[Typing] Remove redundant type aliases of `_device_t` for `torch.types.Device` in `torch/_dynamo/device_interface.py`
shink
closed
[ "triaged", "open source", "topic: not user facing", "module: dynamo" ]
3
CONTRIBUTOR
Part of: #152952 Follow up: #153007 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,045,054,926
Pytorch 2.7 crashes when using flex attention with torch.amp
zbh2047
open
[ "module: crash", "oncall: pt2", "module: higher order operators", "module: pt2-dispatcher", "module: flex attention" ]
2
NONE
### 🐛 Describe the bug I believe this bug has existed for a very long time but is still not fixed, so I am posting this new issue here. Basically, the current flex attention is incompatible with torch.amp.autocast. The bug can be reproduced with the following (extremely simple) code: ```python import torch import to...
true
3,044,898,009
gen_alias_from_base ruins the result of view after inductor generated a copy for the results of the view operations.
laithsakka
open
[ "triaged" ]
2
CONTRIBUTOR
There are three issues here: 1) Given the following aot_graph, inductor generates a copy for the view operation, which is not permitted (it should generate a view); see the view operation on the last line. cc @eellison ``` 2000 3525273 torch/fx/experimental/symbolic_shapes.py:1220] [0/0] For C++ stack trace, run ...
true
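Why a copy is not an acceptable substitute for a view can be shown without torch: a view must alias the base storage, so writes through it are visible in the base, while a copy is independent. A minimal stdlib sketch using `memoryview` as a stand-in for tensor views (illustration only, not the inductor code path):

```python
base = bytearray(b"abcdef")

view = memoryview(base)[0:3]  # aliases base's storage, like a tensor view
copy = bytes(base[0:3])       # independent storage, like a generated copy

view[0] = ord("Z")            # writing through the view mutates the base

print(base)  # bytearray(b'Zbcdef') -- the base observed the write
print(copy)  # b'abc' -- the copy did not
```

A pass that swaps the view for a copy breaks exactly this aliasing contract: subsequent mutations stop propagating to the base.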
3,044,799,011
[AOTInductor] Generate kernels separately for const graph and main graph
muchulee8
open
[ "ciflow/trunk", "module: inductor", "ciflow/inductor", "release notes: inductor (aoti)" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #153040 Summary: We should generate the kernels for the const graph and the main graph separately. The reason is that when we run autotuning, we create separate kernel calls, and we should make sure that the main graph also contains the ...
true