| id (int64) | title (string) | user (string) | state (string) | labels (list) | comments (int64) | author_association (string) | body (string) | is_title (bool) |
|---|---|---|---|---|---|---|---|---|
2,748,565,642 | Build jobs intermittently timeout | malfet | closed | [
"module: ci",
"triaged"
] | 2 | CONTRIBUTOR | ### 🐛 Describe the bug
For example, https://github.com/pytorch/pytorch/actions/runs/12392465209/job/34591708343
And the following jobs are also slow:
E.g., https://github.com/pytorch/pytorch/actions/runs/12392605222/job/34592144299 took 3.5h to finish, and the sccache stats are:
```
+ sccache --show-stats
Compile request... | true |
2,748,517,773 | [BE] Move Mac BB test to its own step | malfet | closed | [
"Merged",
"release notes: releng",
"ciflow/binaries_wheel",
"ciflow/binaries_libtorch"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143513
| true |
2,748,504,186 | [BE] Delete `install sccache` step from MacBB | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143513
* __->__ #143512
* #143511
To the best of my knowledge, this step never executed, and no macOS binary builds have been running on trunk for a while | true |
2,748,503,733 | [BE] Integrate 5 line build script into template | malfet | closed | [
"Merged",
"topic: not user facing",
"ciflow/binaries_wheel"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143513
* #143512
* __->__ #143511
| true |
2,748,456,907 | Add support for differentiable LR in SGD + test v2.0 | EmmettBicker | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: optim"
] | 10 | CONTRIBUTOR | Second PR in a larger project to broaden support for differentiable optimizers with @janeyx99! The first one had an issue near the end, so this is the second PR on that subject. See #143122 for the development up until this point. | true |
2,748,403,650 | leaking c++ singleton specifically | duduyi2013 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10 | CONTRIBUTOR | Summary:
fix forward for S477887
leaking c++ singleton specifically
During C++ shutdown, it tries to destruct the singleton and acquire the GIL; at that moment the Python runtime has already exited, causing undefined behavior.
Leaking here specifically so that we won't try to destroy the singleton during the shutdown phase
Test Plan: n/a... | true |
2,748,386,329 | Upgrade submodule ideep for bf16f32 matmul changes | aditew01 | closed | [
"module: cpu",
"triaged",
"module: mkldnn",
"open source",
"module: arm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/linux-aarch64"
] | 3 | COLLABORATOR | This change will enable PR #140159 to pick the proper kernels in bf16 mode for the SDPA layer.
cc: @yanbing-j
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal @malfet @milpuz01 | true |
2,748,379,403 | [ROCm] Fix unit test: matmul_offline_mgpu_tunableop | naromero77amd | closed | [
"module: rocm",
"open source",
"Merged",
"ciflow/trunk",
"release notes: linalg_frontend",
"ciflow/periodic"
] | 12 | COLLABORATOR | Fixes #141652
This PR contains:
- Fix for `matmul_offline_mgpu_tunableop`
- Modifications to `_checking_tuning_assertions` to enable TunableOp if it is disabled. Also moved it into the concurrent futures initializer.
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hong... | true |
2,748,320,790 | Prevent torch.jit.load path in torch.load when weights_only=True | pytorchbot | closed | [
"open source"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143403
* __->__ #143326
| true |
2,748,286,378 | [Dynamo] torch._dynamo.exc.Unsupported: sort with non-constant keys | SamGinzburg | closed | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-triage-jan2025"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
This error was encountered while trying to implement a version of [Autotuner.prune_configs](https://github.com/triton-lang/triton/blob/137bc62102f4a261cc921998221cea2b046a6c1b/python/triton/runtime/autotuner.py#L214) from Triton.
This function was modified from operating on a dict to a list ... | true |
2,748,239,659 | Fix old-compiler-unfriendly zero init of bfloat16_t array | swolchok | closed | [
"module: cpu",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: cpp",
"ciflow/linux-aarch64"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
clang versions before 17 don't like to assign 0 to a bfloat16_t. gcc versions before 13 also won't assign 0.0 to a bfloat16_t. (Citation: https://godbolt.org/z/Gzs5ebdej)
Differential Revision: [D67396740](https://our.internm... | true |
2,748,201,659 | fix: resolve recursion overflow issue by hashing weak references | aeeeeeep | closed | [
"triaged",
"open source",
"Stale"
] | 6 | NONE | Issue
Using `weakref` with recursive objects in PyTorch causes recursion overflow due to the `__hash__` method using `id(key)`.
Fix
Changed `self._id = id(key)` to `self._id = id(ref(key))` in the `__hash__` method to base the hash on the weak reference, preventing recursion overflow.
Fixes #ISSUE_NUMBER
| true |
2,748,153,149 | AssertionError: not bool like VR[1.00000000000000, 1.00000000000000] | ezyang | closed | [
"triage review",
"oncall: pt2",
"module: dynamic shapes",
"module: inductor"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
I triggered this bug while bisecting; it is not blocking me.
Backtrace:
```
Traceback (most recent call last):
File "/data/users/ezyang/fbsource/buck-out/v2/gen/fbcode/1b080a82294b728e/bento_kernels/pytorch/__bento_kernel_pytorch_binary__/bento_kernel_pytorch_binary#link-tree/torch/_d... | true |
2,748,132,812 | Fix docs load state dict | joldov | closed | [
"triaged",
"open source",
"Stale",
"module: dynamo",
"release notes: AO frontend"
] | 2 | NONE | Fixes #141364:
- Added proper indentation and formatting
- Improved readability for assign by breaking the text into shorter sentences
- Added "NamedTuple:" before the return description to clarify the type for Sphinx
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng ... | true |
2,748,078,907 | [dynamo] add two-point iter test | williamwen42 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 5 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143500
Implements the last checkbox for https://github.com/pytorch/pytorch/issues/112532.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kaden... | true |
2,747,950,887 | [Reland 2.6][dynamo][pytree] make CXX pytree traceable: `tree_{flatten,unflatten,structure,map,map_}` | XuehaiPan | closed | [
"open source",
"module: dynamo",
"ciflow/inductor",
"ci-no-td"
] | 4 | COLLABORATOR | Reland PRs:
- #137398
- #137399
These two PRs are in a series of PRs where the first one is in the release branch before the branch cut.
- 78543e60020b9fabd73d32ee7b1d5803a07d5e94
- #137397
This PR tries to add the follow-ups into the release branch as well.
cc @voznesenskym @penguinwu @EikanWang @jgong5... | true |
2,747,762,194 | AOTI_TORCH_CHECK failed in aot_compile-d model | mstebelev | closed | [
"triaged",
"oncall: pt2",
"ciflow/inductor",
"oncall: export",
"oncall: cpu inductor",
"module: aotinductor"
] | 10 | NONE | ### 🐛 Describe the bug
I exported a model using `torch.export(strict=False)`. The exported model itself works well, but if I compile it using `torch._inductor.aot_compile`, the process crashes on an internal check in the generated code.
Reproducer:
https://colab.research.google.com/drive/1U8fe9k85_S4fRurxz_M7g9kYf... | true |
2,747,683,610 | No reproducibility after ONNX export of fully converted QAT model | onnxruntime-user | open | [
"module: onnx",
"oncall: quantization",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
When I use quantization-aware training, results are not reproducible between:
1. fake quantized model
2. real quantized model
3. exported ONNX model
### Code example
```python
import torch
import onnxruntime as ort
torch.manual_seed(42)
def dummy_training(model):
model.trai... | true |
2,747,451,048 | infer_size(a, b) fails when it could return a value | xadupre | open | [
"triaged",
"oncall: pt2",
"module: fakeTensor"
] | 3 | COLLABORATOR | ### 🐛 Describe the bug
In the function [infer_size](https://github.com/pytorch/pytorch/blob/main/torch/_subclasses/fake_impls.py#L845), in the case where both conditions sizeA == 1 and sizeB == 1 are unknown, the function could (assuming the model is valid) set ``expandedSizes[i]`` instead of raising an exception:
```python... | true |
2,747,434,072 | sympy.C.ConstantInteger has no method name | xadupre | open | [
"needs reproduction",
"triaged",
"module: fx",
"oncall: pt2",
"module: dynamic shapes"
] | 3 | COLLABORATOR | ### 🐛 Describe the bug
At https://github.com/pytorch/pytorch/blob/main/torch/fx/experimental/symbolic_shapes.py#L1652, the instruction ``src.name()`` fails when src is One or Zero (sympy.S.One or sympy.S.Zero) because the method does not exist for singletons.
### Versions
```
Collecting environment information...
PyTorch... | true |
2,747,251,771 | Fix torch.histogramdd description | zeshengzong | closed | [
"open source",
"Stale",
"release notes: python_frontend"
] | 2 | CONTRIBUTOR | Fixes #124435
| true |
2,747,233,054 | [Inductor UT] Mark test case test_linear_and_cel as requires_cuda as | etaf | closed | [
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142322
* __->__ #143492
* #143491
It's only for CUDA now.
Fix #143479
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @Col... | true |
2,747,232,941 | [Inductor XPU] Add XPU check for `is_big_gpu()`. | etaf | closed | [
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #142322
* __->__ #143491
Fix #143472
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @ch... | true |
2,747,230,673 | Segmentation fault (core dumped) in `replication_pad2d` | LongZE666 | open | [
"module: crash",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 2 | NONE | ### 🐛 Describe the bug
Under specific inputs, `replication_pad2d` triggered a crash.
```python
import torch
self = torch.full((9, 9, 2, 4, 3,), 1.251e+12, dtype=torch.float)
padding = [0, 0, 0, 0]
torch._C._nn.replication_pad2d(self, padding)
```
Output
```
Segmentation fault (core dumped)
```
### Versio... | true |
2,747,218,170 | Floating point exception (core dumped) in `thnn_conv2d` | LongZE666 | closed | [
"module: crash",
"module: nn",
"module: error checking",
"module: convolution",
"triaged",
"topic: fuzzer"
] | 1 | NONE | ### 🐛 Describe the bug
Under specific inputs, `thnn_conv2d` triggered a crash.
```python
import torch
self = torch.full((9, 2, 3, 9,), 1e+13, dtype=torch.float)
weight = torch.full((8, 2, 3, 3,), 7.89645e+16, dtype=torch.float)
kernel_size = [36028797018963968, 36028797018963968]
bias = None
stride = [104857... | true |
2,747,211,871 | Aborted (core dumped) in `replication_pad3d` | LongZE666 | closed | [
"module: crash",
"module: nn",
"module: error checking",
"triaged",
"topic: fuzzer"
] | 2 | NONE | ### 🐛 Describe the bug
Under specific inputs, `replication_pad3d` triggered a crash.
```python
import torch
self = torch.full((9, 1, 1, 9, 1, 8, 8, 7, 8,), 1.4013e-45, dtype=torch.float)
padding = [0, 0, 0, 0, 0, 0]
torch.ops.aten.replication_pad3d(self, padding)
```
Output
```
double free or corruption (o... | true |
2,747,204,933 | Aborted (core dumped) in `replication_pad1d` | LongZE666 | open | [
"module: crash",
"module: error checking",
"triaged",
"actionable",
"topic: fuzzer"
] | 2 | NONE | ### 🐛 Describe the bug
Under specific inputs, `replication_pad1d` triggered a crash.
```python
import torch
self = torch.full((9, 9, 7, 1,), 3.5e+35, dtype=torch.float)
padding = [-2, -2]
torch.ops.aten.replication_pad1d(self, padding)
```
Output
```
corrupted size vs. prev_size
Aborted (core dumped)
`... | true |
2,747,201,678 | torch cumsum gives incorrect output for large tensors | mzaidi59 | closed | [
"high priority",
"module: cuda",
"triaged",
"module: 64-bit"
] | 6 | NONE | ### 🐛 Describe the bug
We (@akhilkedia @anshmn) observed that torch.cumsum() returns incorrect output for large tensors.
Correct case with a small tensor:
```
import torch
a = torch.ones((4096*8, 100000), dtype=torch.float, device='cuda')
a /= 100000
c = a.cumsum(dim=-1)
print(c[0,-5:])
```
Output
```
t... | true |
2,747,199,702 | Aborted (core dumped) in `mkldnn_rnn_layer` | LongZE666 | open | [
"module: crash",
"module: error checking",
"triaged",
"module: mkldnn",
"topic: fuzzer"
] | 1 | NONE | ### 🐛 Describe the bug
Under specific inputs, `mkldnn_rnn_layer` triggered a crash.
```python
import torch
input = torch.full((1, 8, 1,), 4.13506, dtype=torch.float)
weight0 = torch.full((5, 8,), 2.47475, dtype=torch.float)
weight1 = torch.full((5, 8,), 8.52373, dtype=torch.float)
weight2 = torch.full((5,), 5... | true |
2,747,195,247 | Segmentation fault (core dumped) in `gru_cell` | LongZE666 | open | [
"module: crash",
"module: error checking",
"triaged",
"actionable",
"module: empty tensor",
"topic: fuzzer"
] | 1 | NONE | ### 🐛 Describe the bug
Under specific inputs, `gru_cell` triggered a crash.
```python
import torch
input = torch.full((0, 8,), 0, dtype=torch.float)
hx = torch.full((0, 9,), 0, dtype=torch.float)
w_ih = torch.full((1, 8,), 1.251e+12, dtype=torch.float)
w_hh = torch.full((1, 9,), 1.4013e-45, dtype=torch.float)... | true |
2,747,188,022 | Segmentation fault (core dumped) in `embedding_bag.padding_idx` | LongZE666 | open | [
"module: crash",
"module: error checking",
"triaged",
"module: embedding",
"module: empty tensor",
"topic: fuzzer"
] | 0 | NONE | ### 🐛 Describe the bug
Under specific inputs, `embedding_bag.padding_idx` triggered a crash.
```python
import torch
weight = torch.full((3, 4,), 1.11111e+15, dtype=torch.float)
indices = torch.full((5,), -2147483648, dtype=torch.long)
offsets = torch.full((0,), 0, dtype=torch.long)
scale_grad_by_freq = False
... | true |
2,747,179,041 | Segmentation fault (core dumped) in `embedding_backward` | LongZE666 | open | [
"module: crash",
"module: error checking",
"triaged",
"module: embedding",
"module: empty tensor",
"topic: fuzzer"
] | 0 | NONE | ### 🐛 Describe the bug
Under specific inputs, `embedding_backward` triggered a crash.
```python
import torch
grad = torch.full((8, 0, 3, 7, 6, 1, 0,), 0, dtype=torch.float)
indices = torch.full((2,), 1250999896764, dtype=torch.long)
num_weights = 536870912
padding_idx = 4194304
scale_grad_by_freq = True
spar... | true |
2,747,174,664 | Segmentation fault (core dumped) in `conv3d` | LongZE666 | open | [
"module: crash",
"module: nn",
"module: error checking",
"module: convolution",
"triaged",
"topic: fuzzer"
] | 1 | NONE | ### 🐛 Describe the bug
Under specific inputs, `conv3d` triggered a crash.
```python
import torch
input = torch.full((3, 1, 3, 4, 3,), 4.44444e+12, dtype=torch.float)
weight = torch.full((3, 1, 3, 1, 3,), 1e+13, dtype=torch.float)
bias = None
stride = [1, 1, 1]
padding = "same"
dilation = [3046875451, 304687... | true |
2,747,167,862 | Segmentation fault (core dumped) in `conv1d` | LongZE666 | open | [
"module: crash",
"module: nn",
"module: error checking",
"triaged",
"module: edge cases",
"topic: fuzzer"
] | 2 | NONE | ### 🐛 Describe the bug
Under specific inputs, `conv1d` triggered a crash.
```python
import torch
input = torch.full((10, 10, 9,), 0, dtype=torch.float)
weight = torch.full((2, 10, 9,), 9.0072e+15, dtype=torch.float)
bias = None
stride = [1]
padding = "same"
dilation = [2147483648]
groups = 1
# torch.ops... | true |
2,747,158,019 | [Break XPU] Newly added test case with CUDA hard code failed on XPU. | etaf | closed | [
"triaged",
"module: xpu"
] | 0 | COLLABORATOR | ### 🐛 Describe the bug
The newly added test case `test_linear_and_cel` in test/inductor/test_inplace_padding.py has "cuda" hard-coded but runs on XPU: https://hud.pytorch.org/pr/pytorch/pytorch/142322#34573031104
```
2024-12-18T04:27:31.8569895Z =================================== FAILURES ==========================... | true |
2,747,116,491 | [DONT MERGE]Xpu win whl | chuanqi129 | closed | [
"open source",
"ciflow/binaries"
] | 2 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,747,092,342 | Tensor size for `masked_fill` exceeds the limit supported by the MPS backend: must be less than 2**32 elements | rusnov | closed | [
"module: crash",
"triaged",
"module: mps"
] | 8 | NONE | ### 🐛 Describe the bug
I get the following error when using `masked_fill` on larger tensors. See the error and the minimal code below.
**Error:**
```
/AppleInternal/Library/BuildRoots/.../Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Types/MPSNDArray.mm:850: failed assertion `[MPSNDArray in... | true |
2,747,056,176 | Address source code building command for Intel GPU support | ZailiWang | closed | [
"triaged",
"open source",
"Merged",
"topic: not user facing"
] | 14 | CONTRIBUTOR | As the title says. | true |
2,747,022,409 | reduce import torch time. | xuhancn | closed | [
"open source",
"Stale",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | COLLABORATOR | Fixes #140970
Original code:
<img width="413" alt="Image" src="https://github.com/user-attachments/assets/8035580c-f261-4b4c-a652-61d1666da894" />
It takes 2.1s
This PR loads `torch_cpu` modules to replace `import torch`:
<img width="438" alt="Image" src="https://github.com/user-attachments/assets/d6d5fe31-ae6... | true |
2,746,904,505 | [ONNX] Failed to export PyTorch-2-Export-Quantized model to onnx | veritas-Qiu | open | [
"module: onnx",
"triaged"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
I try to quantize a model as in [this link](https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html)
(differing only in model structure and dataset),
then export the quantized model to ONNX via `torch.onnx.export` (the original model can be exported), and get
```Traceback (most recent call ... | true |
2,746,898,464 | Fix space typo in warning message | SilverSoldier | closed | [
"oncall: distributed",
"triaged",
"open source",
"Merged",
"Stale",
"release notes: distributed (fsdp)"
] | 15 | CONTRIBUTOR | The warning shows up like this (no space between "will" and "be"):
```
/home/xxx/.local/lib/python3.11/site-packages/torch/distributed/fsdp/_state_dict_utils.py:827:
UserWarning: When using ``NO_SHARD`` for ``ShardingStrategy``, full_state_dict willbe returned.
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337... | true |
2,746,897,879 | [Break XPU] The device-bias hard code in `is_big_gpu` cause case failures on XPU. | etaf | closed | [
"triaged",
"module: xpu"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
We found the recent XPU CI failure https://hud.pytorch.org/pr/pytorch/pytorch/142322#34573031104, which is caused by #143339
```
Z _______________ AOTInductorTestABICompatibleGpu.test_conv3d_xpu ________________
2024-12-18T04:17:23.7324890Z Traceback (most recent call last):
2024-12-18T04... | true |
2,746,892,469 | NFS errors during DataLoader shutdown when num_workers > 1 when temporary directory is on NFS | edoyango | open | [
"triaged",
"module: data"
] | 0 | NONE | ### 🐛 Describe the bug
Hi,
This is more of a mild annoyance than a show-stopping issue. It occurs on Linux when using an NFS-mounted directory as the temporary directory.
When finished iterating over a DataLoader object, I get the following errors:
```
Traceback (most recent call last)... | true |
2,746,885,941 | [c10d] thread safety issue with CUDAEventCache | suo | closed | [
"oncall: distributed",
"module: c10d"
] | 4 | MEMBER | The following race can happen if we ever schedule NCCL work from a different thread than the original Python thread, and that thread dies before process shutdown.
1. The CUDAEventCache is [thread-local](https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L839-L841).
2. Work... | true |
2,746,853,880 | Larger numerical divergence after applying torch.compile on a batch-linear model | maybeLee | closed | [
"triaged",
"oncall: pt2",
"module: inductor"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
Hi, I am trying to use torch.compile to optimize a model's performance. However, I notice that the optimized model has larger numerical divergence compared to the original one.
Here is the simplified reproducible script:
```
import torch
from torch import nn
torch.manual_seed(0)
NUM_... | true |
2,746,738,299 | dummy pr | xuhancn | closed | [
"open source",
"topic: not user facing",
"ciflow/xpu"
] | 1 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,746,731,051 | log guard_size_oblivious call sites | bobrenjc93 | closed | [
"release notes: fx",
"topic: not user facing",
"fx",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143467
This makes it much easier to know what's going on when we guard on data-dependent operations. Currently, if we throw a guard-on-data-dependent error, we only show the Python invocation that caused it (not the underlying leaf c++ ... | true |
2,746,685,478 | Support Dict Parameter Type for custom_op | xinyu-intel | open | [
"triaged",
"module: custom-operators",
"oncall: pt2",
"module: pt2-dispatcher"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
Is it possible to support infer_schema for a custom_op that has a Dict as an input parameter? I think opschema can support a signature such as `(Tensor t, Dict(str, Any) meta) -> Tensor`. Also, can such inputs be mutated?
```python
import torch
from typing import Dict, Any
@torch.library.custom_op("h... | true |
2,746,631,749 | [ROCm] MI300X FP8 scaled_mm is extremely slow on nightly | OrenLeung | open | [
"module: performance",
"module: rocm",
"triaged"
] | 22 | CONTRIBUTOR | ### 🐛 Describe the bug
Hi AMD Team,
`torch._scaled_mm` is extremely slow on MI300X at ~100 TFLOP/s versus ~1200 TFLOP/s on H100.
Can you look into this?
cc: @hliuca
## ROCm
```
m=16384 n=8192 k=1280: 108.07154472843483
m=16384 n=1024 k=8192: 110.56206220309926
m=16384 n=8192 k=7168: 109.66662842248034
... | true |
2,746,624,804 | Add a register_replacement to fix float8 delayed scaling kernel fusion issues | y-sq | closed | [
"fb-exported",
"Stale",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8 | CONTRIBUTOR | Summary:
We previously tried the `defer_reduction_split_after_fusion` way to fix the fusion issue.
However, as we agree that the longer-term solution is cooperative reduction + tiled reduction, the defer-reduction-split approach would also only be a shorter-term solution. And we want to keep the shorter-term solution simple... | true |
2,746,613,893 | unreasonable ConstraintViolationError when using torch dynamo to compile torch model | Jason3900 | open | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"module: dynamo",
"oncall: export"
] | 3 | NONE | ### 🐛 Describe the bug
I'm using torch dynamo backend to compile model to export to tensorrt.
```python
inputs = [torch.randn(1, 3, 28, 288, 512).cuda().to(torch.float16)]
dynamic_h = torch.export.Dim("dim_3", min=224, max=640)
dynamic_w = torch.export.Dim("dim_4", min=224, max=640)
dynam... | true |
2,746,588,868 | Fix torch._refs.tensor error with empty list | zeshengzong | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 10 | CONTRIBUTOR | Fixes #143216
**Test Result**
**Before**
```python
>>> import torch
>>> torch._refs.tensor([])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zong/code/pytorch/torch/_refs/__init__.py", line 6614, in tensor
new_tensor = _internal_new_from_data(
... | true |
2,746,555,992 | [Inductor][CPU] disable bernoulli_p decomposition | blzheng | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 5 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143460
Fix https://github.com/pytorch/pytorch/issues/142853
`fallback_random=True` should cause RNG to match between compile/eager (by having compile fall back to eager for RNG ops), but the `bernoulli_p` decompose function is not ... | true |
2,746,520,622 | Add save_config and load_config arguments to torch.save/load | mikaylagawarecki | closed | [
"Stale",
"release notes: python_frontend",
"topic: improvements"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143459
* #143342
* #143324
| true |
2,746,502,522 | [Inductor] move custom pre pass | Valentine233 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | COLLABORATOR | Fixes #143363.
Move `joint_custom_pre` pass after `remove_noop_ops`/`constant_folding`, in order to get the same behavior as `pattern_matcher`.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @Coli... | true |
2,746,476,513 | [while_loop][jit inductor] auto-unspecialize int input and output to unbacked symints | ydwu4 | open | [
"Stale",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143457
cpp_wrapper codegen doesn't work yet because:
1. wrapper codegen logic assumes tensor outputs; we need to support int outputs
2. since cpp is strongly typed, we must declare the variable to be either tensor or int and ass... | true |
2,746,476,059 | [hop][inductor] track the dependency on unbacked symbols correctly with constant_args for hops | ydwu4 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ci-no-td"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143456
Before the PR, we're getting an undefined symbol error for output code when an unbacked symint is **only** used in the hop because we didn't correctly record the dependency of the unbacked symbols for hops and it gets DCEed a... | true |
2,746,464,964 | Add strict kwarg to `nn.Module.set_submodule` and fix bug for non dot delineated strings | mariovas3 | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: nn",
"topic: improvements"
] | 16 | CONTRIBUTOR | Before the fix, set_submodule used to create leaf modules when the target was not a dot-delimited string. After the fix, it will not create a new attribute if the target is a non-dot-delimited string. If you want to create leaf nodes of `nn.Module` parent nodes, you can use `replace_or_create_new_leaf_module`.
Fixes ht... | true |
2,746,453,932 | [foreach_map] Add foreach_map Adam impl to compiled optimizer tests | mlazos | closed | [
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | CONTRIBUTOR | Adds a foreach_map backed Adam to compiled optimizer tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,746,444,346 | Compiler Bisector Improvements | eellison | open | [
"triaged",
"module: inductor"
] | 6 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
@ezyang has been using Compiler Bisector internally and has run into a few feature requests.
- [ ] Query for backend, subsystems
- [ ] Config option to check meta stride for all ops, not just custom ops
- [ ] Option to specify particular backend/subsystem to iterate over
... | true |
2,746,429,384 | [Inductor] Fix _can_be_inplace function (#143279) | jiayisunx | closed | [
"open source",
"module: inductor",
"ciflow/inductor"
] | 1 | COLLABORATOR | Summary:
Modify _can_be_inplace function: return False if `_other.data` is an instance of `ir.BaseView`.
Fix https://github.com/pytorch/pytorch/issues/143280.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/143279
Approved by: https://github.com/leslie-fang-intel, https://github.com/jansel, https... | true |
2,746,381,128 | [MTIA] (4/n) Implement PyTorch APIs to query/reset device peak memory usage | chaos5958 | closed | [
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Summary: This diff implements the "reset_peak_memory_stats" PyTorch API for MTIA devices, which resets the peak device DRAM usage
Test Plan:
```
buck2 test //mtia/host_runtime/torch_mtia/tests:test_torch_mtia_api -- -r test_reset_peak_memory_stats
```
https://www.internalfb.com/intern/testinfra/testrun/28147537181229... | true |
2,746,356,265 | Make Inductor cpp backend enable_floating_point_contract_flag to take string | hl475 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 18 | CONTRIBUTOR | Differential Revision: D66269001
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,746,323,174 | [MPS] Add `aten::angle` | sezelt | closed | [
"triaged",
"open source",
"Merged",
"release notes: mps",
"ciflow/mps"
] | 6 | CONTRIBUTOR | This adds an MPS backend implementation for `aten::angle` and `aten::angle_out` (mentioned in issue #77764), following the example #78408.
| true |
2,746,310,552 | Enable CPP/CUDAExtension with py_limited_api for python agnosticism | pytorchbot | closed | [
"open source"
] | 1 | COLLABORATOR | Getting tested with ao, but now there is a real test I added.
## What does this PR do?
We want to allow custom PyTorch extensions to be able to build one wheel for multiple Python versions, in other words, achieve python agnosticism. It turns out that there is such a way that setuptools/Python provides already! N... | true |
2,746,306,961 | [dynamo] Properly model root frame globals during inlining | StrongerXi | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143447
This patch updates `InliningInstructionTranslator.STORE_GLOBAL` to
properly check whether `self.f_globals` is the same as root frame
`f_globals`. See added comments for why this is important.
Fixes #143425.
cc @voznesenskym ... | true |
2,746,283,733 | [c10d][fr] flight recorder improvements | c-p-i-o | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 12 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143446
Summary:
1. Flight recorder dumps are now automatically dumped by default upon
timeout or exception. Users don't need to opt in.
2. Change the default dump location to the `.cache` folder in the running user's home directory.
Test Pla... | true |
2,746,267,089 | update kineto to XPU Windows fixed PR. [submodule kineto] | xuhancn | closed | [
"module: windows",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"intel",
"ciflow/xpu"
] | 15 | COLLABORATOR | Include XPU Windows Fixed PR: https://github.com/pytorch/kineto/pull/1012
cc @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,746,230,091 | [ONNX] Save dynamic shapes constraints to ONNX metadata | titaiwangms | closed | [
"module: onnx",
"triaged",
"onnx-triaged"
] | 5 | COLLABORATOR | We should include shape constraints in ONNX metadata to provide more information to users. This can also reveal to users why certain axes should remain static, helping them further debug their models. | true |
2,746,225,890 | [ONNX] Rename dynamic shapes produced by ExportedProgram to dynamic_axes | titaiwangms | closed | [
"module: onnx",
"triaged",
"onnx-triaged"
] | 3 | COLLABORATOR | `torch.export.export` names dynamic shapes as s0, s1, s2, s3, ... However, in ONNX, users could pass in the naming through `dynamic_axes` and `input_names`. We need to rename them to what users request. | true |
2,746,223,503 | fix checking non-trivial input constraints | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143442
A bunch of auto dynamic shape tests would fail non-strict retraceability because when checking input constraints, we'd compare non-trivial expressions, which would require / affect shape env.
```
... is not tracked with proxy ... | true |
2,746,190,350 | Bug-set-submodule-assigns-module-to-new-attribute | mariovas3 | closed | [
"module: nn",
"triaged"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Based on the docstring of `nn.Module.set_submodule` - https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.set_submodule
we have `Set the submodule given by target if it exists, otherwise throw an error.`
This is violated when passing non-dot-delimited strings.... | true |
2,746,177,352 | Locale issues in colab: after tensor(1j).cuda().abs() !commands cannot be executed. | fzimmermann89 | open | [
"triaged",
"module: third_party",
"module: python frontend"
] | 7 | CONTRIBUTOR | ### 🐛 Describe the bug
Running the following in colab (T4 runtime):
```
import torch
a=torch.tensor(1j,device="cuda")
a.abs()
!echo "cake is a lie"
```
results in an `NotImplementedError: A UTF-8 locale is required. Got ANSI_X3.4-1968`
it has to be a) complex b) abs c) on cuda.
otherwise, the final comman... | true |
2,746,174,157 | remove allow-untyped-defs for torch/fx/experimental/debug.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"release notes: fx",
"topic: not user facing",
"fx"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143439
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv | true |
2,746,174,081 | remove allow-untyped-defs for torch/_functorch/batch_norm_replacement.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143438
| true |
2,746,173,937 | remove allow-untyped-defs for torch/nn/parallel/__init__.py | bobrenjc93 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143437
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,746,173,856 | remove allow-untyped-defs for torch/_inductor/test_operators.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143436
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,746,173,783 | remove allow-untyped-defs for torch/_export/passes/remove_runtime_assertions.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"release notes: export"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143439
* #143438
* #143437
* #143436
* __->__ #143435
| true |
2,746,173,145 | Missing nightly 20241217 on x86_64 | Jack-Khuu | open | [
"module: binaries",
"triaged"
] | 1 | CONTRIBUTOR | ### 🐛 Describe the bug
I'm looking at bumping the nightly pin in torchchat to dev20241217, but it looks like the nightly isn't being found.
Was there a wheel failure, or was there an install support change recently (< 1 week)?
Looking at the [download.pytorch.org](https://download.pytorch.org/whl/nightly/torch/) list... | true |
2,746,155,774 | Backout D66648013 | mlazos | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 7 | CONTRIBUTOR | Summary:
backing out https://www.internalfb.com/diff/D66648013 (see comments there for justification)
I will reland and disallow the bfloat16 atomics behavior on A100 because it causes a pretty significant performance regression.
Test Plan: This is a revert
Differential Revision: D67357485
cc @voznesenskym @peng... | true |
2,746,123,852 | Eager style export V0 API. | zhxchen17 | closed | [
"fb-exported",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 10 | CONTRIBUTOR | Summary:
Prototype of an end-to-end export workflow to call a torch.compiled model eagerly and package every single compiled model in the wrapped region of the code.
Code sample:
```
@torch.compile(fullgraph=True)
def f(x, y):
return x + y
# Compile the model and save it on disk
with torch.compiler._f... | true |
2,746,123,503 | aot_eager causes CPU RNG behavior to change | ezyang | closed | [
"triaged",
"oncall: pt2",
"module: aotdispatch",
"module: pt2-dispatcher"
] | 5 | CONTRIBUTOR | ### 🐛 Describe the bug
Repro
```
import torch
def f(image_latent):
B = 2
num_ref = 3
num_tar = 3
x = torch.rand(B, 12)
indices = torch.argsort(torch.rand(*x.shape), dim=-1)[:, :num_ref + num_tar]
return image_latent[torch.arange(B).unsqueeze(-1), indices][:, :num_ref]
torch.man... | true |
2,746,107,833 | [pytorch/et] Allow ET to save additional resources for completing a trace like generated kernels and index tensor data | sanrise | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"ci-no-td"
] | 14 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143430
The resources directory lets the ET observer dump any additional data, like Triton kernels, while capturing the ET.
This allows us to use the ET trace to replay PT2 workloads and get visibility into data like generated kernels and ... | true |
2,746,076,900 | [BE] Update triton repo link | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | CONTRIBUTOR | It should be https://github.com/triton-lang/triton rather than https://github.com/openai/triton shouldn't it?
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauha... | true |
2,746,074,800 | [pytorch/et] Allow ET to save additional resources for completing a trace like generated kernels and index tensor data (#142521) | sanrise | closed | [
"fb-exported"
] | 3 | CONTRIBUTOR | Summary:
The resources directory lets the ET observer dump any additional data, like Triton kernels, while capturing the ET.
This allows us to use the ET trace to replay PT2 workloads and get visibility into data like generated kernels and their usage in a model, index tensor data, etc.
We also added a few ways to enable E... | true |
2,746,068,307 | Implement increment and add_to_set for CompileEventLogger | jamesjwu | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 13 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143427
This diff implements `increment` and `add_to_set`, which are features of MetricsContext, but not ChromiumEventLogger. This allows us to add a bunch of other metricscontext callsites to use CompileEventLogger instead.
Differen... | true |
2,746,054,290 | [reland] Kill capture_pre_autograd_graph API | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"release notes: releng",
"ci-no-td"
] | 4 | CONTRIBUTOR | Summary:
Delete the following API:
- capture_pre_autograd_graph()
- capture_pre_autograd_graph_using_training_ir()
- gm_using_training_ir()
Update XLA pin to include https://github.com/pytorch/xla/pull/8398
There are no more call sites to `capture_pre_autograd_graph`.
Except
1) two test cases in coreml, g... | true |
2,746,035,623 | Dynamo fails to propagate updates to global variable | guilhermeleobas | closed | [
"oncall: pt2",
"module: dynamo",
"dynamo-triage-june2024"
] | 0 | COLLABORATOR | ### 🐛 Describe the bug
I discovered this one while working on https://github.com/pytorch/pytorch/pull/136033. The reproducer without using `@contextmanager` is a bit tricky, but the idea is the same. To reproduce, one needs two files to have different globals.
```python
# main file
import torch
import other_fi... | true |
2,746,026,381 | higher rank convolution | sycamoreoak | open | [
"module: nn",
"triaged"
] | 0 | NONE | ### 🚀 The feature, motivation and pitch
Would it be possible to add official PyTorch support for higher-rank convolution? Thanks!
### Alternatives
_No response_
### Additional context
Working at a higher rank can be useful, depending on the application!
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagaware... | true |
2,746,015,275 | Use Manylinux 2.28 for nightly build and cxx11-abi | atalman | closed | [
"Merged",
"ciflow/binaries",
"topic: not user facing"
] | 4 | CONTRIBUTOR | As per: https://dev-discuss.pytorch.org/t/pytorch-linux-wheels-switching-to-new-wheel-build-platform-manylinux-2-28-on-november-12-2024/2581
Linux builds: CPU, CUDA 11.8, and CUDA 12.4 switched to Manylinux 2.28 and D_GLIBCXX_USE_CXX11_ABI=1 during the week of Dec 16
| true |
2,745,929,853 | cpp_builder.py: Build in -O2 to improve compilation time | benjaminglass1 | closed | [
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 2 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143422
* #143421
* #143223
* #141371
This does not appear to affect performance substantively (benchmarks pending), since we already apply OMP optimizations to loops which should be tightly optimized.
This PR additionally applies... | true |
2,745,929,519 | AOTI fallback ops: remove ops that were never codegen'ed | benjaminglass1 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"module: aotinductor"
] | 9 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144124
* #144123
* #144002
* #143909
* __->__ #143421
* #143223
* #141371
Removes 4 fallback ops that are currently not possible to codegen, which does not break ABI-compatibility.
1. `_cudnn_rnn_backward` and `_histogramdd_bin_edges` both... | true |
2,745,918,870 | Introduce CompileEventLogger, replace usages of metrics_context and chromium_event with it | jamesjwu | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 11 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143427
* __->__ #143420
**Problem statement**: I want to be able to centralize and simplify the process by which people add columns/data to existing spans. We have MetricsContext and ChromiumEventLogger, and there are various choices you can ... | true |
2,745,909,540 | OpenGL interoperability | cajoek | closed | [
"module: cuda"
] | 4 | NONE | ### 🚀 The feature, motivation and pitch
Zero-copy transfer of data between PyTorch and OpenGL on the GPU by including "OpenGL interoperability" from CUDA in PyTorch.
I am working on a real-time machine learning graphics project which uses OpenGL both as an intermediate processing step in the model and to visualize the... | true |
2,745,907,760 | [ODML] Make the ML feature provider thread safe | seanxiaoxiao | closed | [
"oncall: jit",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: jit"
] | 73 | CONTRIBUTOR | Summary:
This PR is generated from a Meta-internal diff, aiming to resolve a crash from a race condition on the dictionary.
Test Plan:
Build and run
Print out the count/name/value of the dictionary and see if the values are get/set/removed correctly.
Observe the print statement on app start within IG
@d... | true |
2,745,906,201 | [compiled autograd] stop specializing on metadata during initial trace | zou3519 | closed | [
"Merged",
"Reverted",
"ciflow/trunk",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo",
"keep-going",
"module: compiled autograd",
"ci-no-td"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* __->__ #143417
* #143405
* #143387
* #143304
* #143296
The previous PRs built up to this. We change compiled autograd's initial
trace to stop baking in metadata.
While tracing, we allocate some weirdly shaped tensors that we can p... | true |
2,745,892,843 | [ROCm] port CK rowwise F8 from fbgemm (#140856) | drisspg | closed | [
"module: rocm",
"fb-exported",
"Stale",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing",
"skip-pr-sanity-checks",
"ciflow/rocm"
] | 11 | CONTRIBUTOR | Summary:
author @jeffdaily
This ports (copies) FBGEMM's implementation from jwfromm.
https://github.com/pytorch/FBGEMM/tree/main/fbgemm_gpu/experimental/gen_ai/src/quantize/ck_extensions/fp8_rowwise
cc sunway513 jithunnair-amd pruthvistony ROCmSupport dllehr-amd jataylo hongxiayang naromero77amd yanbing-j vk... | true |
2,745,859,524 | Fix sample inputs leaked from subtest | soulitzer | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143415
* #143333
| true |
2,745,853,716 | [PassRate] TorchBench training PassRate is less than 100 | IvanKobzarev | open | [
"high priority",
"triaged",
"oncall: pt2"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
Umbrella task for the < 100% TorchBench PassRate
https://hud.pytorch.org/benchmark/compilers

### Versions
master
cc @ezyang @gchanan @zou3519 @kadeng @msaroufi... | true |
2,745,819,535 | don't rethrow guard on data dependent errors | bobrenjc93 | closed | [
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 1 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143413
As discussed offline, this makes errors much easier to read/understand. | true |