| id | title | user | state | labels | comments | author_association | body | is_title |
|---|---|---|---|---|---|---|---|---|
2,743,175,297 | [dynamo, guards] Move SHAPE_ENV guard to C++ | williamwen42 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 1 | MEMBER | Followup to https://github.com/pytorch/pytorch/pull/140063.
> Rewrite the SHAPE_ENV guard into C++ - it is a fairly common guard that results in FrameLocalsMapping needing to convert to a dict
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiay... | true |
2,743,172,941 | [dynamo, guards] Implement FrameLocalsMapping version of check_verbose_nopybind | williamwen42 | open | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | 0 | MEMBER | Follow up to https://github.com/pytorch/pytorch/pull/140063.
> Add FrameLocalsMapping version for check_verbose_nopybind in order to match behavior between check_nopybind and check_verbose_nopybind. This can prevent difficult debugging situations where guards fail (check_nopybind returns false) but no guard error me... | true |
2,743,131,846 | easy: sort dictionary keys for inductor config when publishing | c00w | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 14 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143317
* __->__ #143307
This means we should get consistent logging strings for the same
config on different ranks
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chen... | true |
2,743,128,114 | Add CPU scalar support in addcdiv | EmmettBicker | open | [
"triaged",
"enhancement",
"actionable",
"module: python frontend"
] | 0 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
Continuation of #143264 .
Allow the user to pass a CPU scalar to addcdiv. I can do this as soon as the mentioned PR is merged!
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD | true |
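As a hedged illustration of the op this request extends (not the proposed CUDA-tensor-plus-CPU-scalar path, which is not merged yet), a minimal sketch of what `addcdiv` computes:

```python
import torch

# addcdiv computes: out = input + value * (tensor1 / tensor2)
a = torch.ones(2, 2)
t1 = torch.full((2, 2), 6.0)
t2 = torch.full((2, 2), 3.0)

out = torch.addcdiv(a, t1, t2, value=0.5)
expected = a + 0.5 * (t1 / t2)
print(torch.allclose(out, expected))
```

The feature would allow `t2` to be a 0-dim CPU tensor even when `a` lives on CUDA, mirroring what #143264 proposes for `addcmul`.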
2,743,093,359 | [C10D] Update docs for wait() | wconstab | closed | [
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143305
Clarify that the currently active stream, not the default stream, is the one
that will be blocked by a call to wait(), and also point out that the
CPU is not blocked by the call for CUDA/NCCL collectives. | true |
2,743,085,099 | [compiled autograd] Proxy a node for CopyBackwards into the graph | zou3519 | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 2 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* #143417
* #143405
* #143387
* __->__ #143304
* #143296
CopyBackwards is a manual C++ torch::autograd::Node; we update its
apply_with_saved to proxy a functional version of it into the graph instead
of inlining into it.
Test Plan:
... | true |
2,743,005,800 | update non strict cond tests | avikchaudhuri | closed | [
"fb-exported",
"Stale",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143303
Differential Revision: [D67285992](https://our.internmc.facebook.com/intern/diff/D67285992/) | true |
2,742,990,248 | Triton bump for 3.2 cherry-picks (mmav3 segfault fix, gfx950 support) | bertmaher | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"rocm",
"ciflow/rocm"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143302
* https://github.com/triton-lang/triton/pull/5277
* https://github.com/triton-lang/triton/pull/5084 | true |
2,742,985,940 | Fix a misspelling [ONNX] | xadupre | closed | [
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 7 | COLLABORATOR | null | true |
2,742,956,575 | [BE] Revert "Add conda to Manylinux Docker images (#139903)" | atalman | closed | [
"Merged",
"Reverted",
"Stale",
"topic: not user facing",
"ci-no-td"
] | 7 | CONTRIBUTOR | This reverts commit 56a40d4ebb0bcf733f1ea5f6efde805326a7a565.
Having conda in manylinux builder images is not required. It was added so that manylinux-builder images could be the only images for CD builds after conda-builder is deprecated. However, we decided to start using ``almalinux-builder``.
We are using almalinu... | true |
2,742,950,386 | [FlexAttention] Allow num_warps 8 since when block size >=128 | drisspg | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm",
"module: flex attention"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143103
* #143344
* __->__ #143299
# Summary
Fixes #143290
We already strip bad configs here: https://github.com/pytorch/pytorch/blob/e0e763e33135d2ad25c613007aa5f2fee6d2cc24/torch/_inductor/kernel/flex_attention.py#L2299
So this shouldn... | true |
2,742,949,945 | non strict sequential slicing | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143298
Differential Revision: [D67284841](https://our.internmc.facebook.com/intern/diff/D67284841/) | true |
2,742,933,107 | [FSDP2] Clamp `reduce_dtype` in lazy init | awgu | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: distributed (fsdp2)"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143297
fixes https://github.com/pytorch/pytorch/issues/143277 by moving the clamp of `reduce_dtype` to `None` to lazy init (same place as where `param_dtype` can be clamped to `None`)
cc @H-Huang @kwen2501 @wanchaol @fegin @fdu... | true |
2,742,816,793 | [compiled autograd] Proxy opaque nodes for built-in autograd nodes | zou3519 | closed | [
"oncall: distributed",
"Merged",
"Reverted",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"keep-going",
"module: compiled autograd",
"ci-no-td"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144115
* #143417
* #143405
* #143387
* #143304
* __->__ #143296
This PR is on the way to getting compiled autograd's initial capture to
stop specializing on Tensor metadata.
This PR changes compiled autograd's initial capture to proxy an o... | true |
2,742,801,156 | `torch.Tensor.angle()` produces inconsistent results on CPU, only on Linux | Uncomfy | closed | [] | 3 | NONE | ### 🐛 Describe the bug
Hello!
`torch.Tensor.angle()` produces inconsistent results depending on the order of operations. Specifically:
1. Computing the angle for the entire tensor and then indexing into the result gives different values compared to first indexing the tensor and then computing the angle.
2. Similar... | true |
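The truncated report above can be sketched as follows; the tensor size and slice here are illustrative assumptions, not the reporter's exact case:

```python
import torch

# Compare angle() on the whole tensor vs. angle() on a slice of it.
x = torch.randn(64, dtype=torch.complex64)

full_then_index = x.angle()[10:20]
index_then_full = x[10:20].angle()

# On affected Linux CPU setups these can differ slightly, presumably
# because the two calls take different (vectorized vs. scalar) paths.
print((full_then_index - index_then_full).abs().max())
```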
2,742,673,333 | update aten bmm CK heuristic | bradleyhd | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Summary: updates the heuristic to use new instances based on CK profiling of LLM shapes
Differential Revision: D67280269
| true |
2,742,673,023 | `bias=False` fails in `Transformer` when `batch_first=True` and in eval mode | aamster | open | [
"module: nn",
"triaged"
] | 0 | NONE | ### 🐛 Describe the bug
```
import torch
from torch import nn
transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12, bias=False, batch_first=True)
src = torch.rand((10, 32, 512))
tgt = torch.rand((10, 32, 512))
transformer_model.eval()
out = transformer_model(src, tgt)
```
```
Traceback (m... | true |
2,742,645,100 | [CD] Fix XPU linux CD whl test failure | pytorchbot | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | Follow https://github.com/pytorch/pytorch/pull/142482, refer to the original fix PR https://github.com/pytorch/pytorch/pull/130742 and the new issue in https://github.com/pytorch/pytorch/actions/runs/12323126436/job/34403681230
Works for https://github.com/pytorch/pytorch/issues/114850
| true |
2,742,612,823 | [2/N][Memory Profiling] Record memory allocation/free | mzzchy | closed | [
"fb-exported",
"Stale",
"topic: not user facing"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143291
Design Doc: https://fburl.com/gdoc/47zpuweb
Prototyping: D66469341
In this diff, we implement the logic to record, store, and export the memory trace, which will be used by MTIA hooks later.
* Add RingBuffer<MTIATraceEntry> to... | true |
2,742,566,284 | FlexAttention: BFloat16 training is not working on nightly | ViktorooReps | closed | [
"high priority",
"triage review",
"module: bfloat16",
"oncall: pt2",
"module: flex attention"
] | 6 | NONE | ### 🐛 Describe the bug
Minimal code to reproduce:
```python
import torch
from torch.nn.attention.flex_attention import flex_attention
flex_attention = torch.compile(flex_attention)
x = torch.randn(
(1, 8, 256, 128),
device='cuda',
dtype=torch.float,
requires_grad=True
)
flex_atten... | true |
2,742,536,710 | RFC: Dynamically Quantized 4 bit matmul API and usage | nikhil-arm | open | [
"oncall: quantization"
] | 6 | COLLABORATOR | # 4-Bit Dynamically Quantized Matrix Multiplication in PyTorch
This RFC introduces two new operations to enable efficient 4-bit weight quantization and matrix multiplication in PyTorch. These operations provide a mechanism for low-precision arithmetic to be used for both training and inference, improving performanc... | true |
2,742,426,577 | EXCEPTION : /python3.11/distutils/core.py | tanwarsh | open | [
"needs reproduction",
"triaged",
"module: third_party",
"oncall: pt2"
] | 6 | NONE | ### 🐛 Describe the bug
Facing the same issue with python 3.10 and 3.11 as well with latest torch versions
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
```
from torchvision import datasets
```
```
File "/my_workspace/src/dataloader.py", line 7, in <module>
from torchvision import datasets
File "/l... | true |
2,742,407,567 | Masked self-attention not working as expected when each token is masking also itself | jacksalici | closed | [
"module: autograd",
"module: nn",
"triaged"
] | 1 | NONE | ### 🐛 Describe the bug
I was developing a self-attentive module using `nn.MultiheadAttention` (MHA). My goal was to implement a causal mask that enforces each token to attend only to the tokens before itself, excluding itself, unlike the standard autoregressive causal masks where tokens can attend to themselves.
H... | true |
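A minimal sketch of the failure mode this report likely hits (assumed from the description, not the reporter's exact code): a strict causal mask where token i may attend only to tokens j < i leaves row 0 with every position masked, so its softmax is taken over all -inf and that token's output becomes NaN:

```python
import torch
from torch import nn

embed_dim, num_heads, seq_len = 16, 4, 5
mha = nn.MultiheadAttention(embed_dim, num_heads)

x = torch.randn(seq_len, 1, embed_dim)  # (L, N, E), batch_first=False
i = torch.arange(seq_len)
strict_mask = i[None, :] >= i[:, None]  # True = masked; row 0 is all True

out, _ = mha(x, x, x, attn_mask=strict_mask)
# The fully masked first token comes out NaN; later tokens are fine.
print(torch.isnan(out[0]).all(), torch.isnan(out[-1]).any())
```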
2,742,363,305 | [ROCm] ROCm-specific gemm tuning parameters | jataylo | closed | [
"module: rocm",
"triaged",
"open source",
"release notes: rocm",
"module: inductor",
"ciflow/inductor",
"rocm",
"ciflow/rocm"
] | 11 | COLLABORATOR | Adds tuning options for extra_args in mm_common.py; on the ROCm side we can supply ROCm-specific Triton tuning args such as waves_per_eu, kpack, and matrix_instr_nonkdim. This PR also allows tuning GROUP_M in the Triton GEMM case, and brings in specific tuning for the general ROCm GEMM case.
Dynamo huggingface ... | true |
2,742,103,911 | Add _foreach_clone ops | zeshengzong | open | [
"triaged",
"open source",
"Stale",
"release notes: foreach_frontend"
] | 6 | CONTRIBUTOR | Fixes #142181
Add `_foreach_clone` ops
**Test Result**
```bash
$ pytest test/test_foreach.py -k test_foreach_clone_tensors -v
```

cc @janeyx99
| true |
2,742,070,472 | `set_linter` suggests destructive changes on a new commit | rec | closed | [
"module: lint",
"triaged",
"bug"
] | 4 | COLLABORATOR | ### 🐛 Describe the bug
Reported by @Esquains and discussed [here](https://github.com/pytorch/pytorch/pull/138454#issuecomment-2543369337).
This string
```
print(f"{tempfile.gettempdir()}/memory_snapshot.pickle")
```
gets mistakenly translated into
```
print(f"OrderedSet([tempfile.gettempdir()}/memory... | true |
2,741,992,301 | [Triton commit bump] Upgrade nightly commit to include gfx950 target + LLVM bump | jataylo | closed | [
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 4 | COLLABORATOR | Brings in https://github.com/triton-lang/triton/pull/5417 | true |
2,741,952,682 | [foreach-map] Add tests for backward | mlazos | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Adds tests for unary and binary foreach_map w/ backwards
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,741,641,079 | torch.linalg.qr is significantly slower on GPU compared to CPU and SVD for batched small matrices | h-skibbe | open | [
"module: cuda",
"triaged",
"module: linear algebra"
] | 5 | NONE | ### 🐛 Describe the bug
When performing QR decomposition on batched small matrices, torch.linalg.qr is significantly slower on the GPU compared to the CPU and even slower than torch.linalg.svd on the GPU. This behavior seems unexpected since QR decomposition is typically faster than SVD. Tested with pytorch 2.3 and py... | true |
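A rough CPU-only timing sketch of the comparison described above (the report concerns the GPU; reproducing there would mean moving `A` to `"cuda"` and adding `torch.cuda.synchronize()` around each timing region):

```python
import time
import torch

# Batched small matrices, as in the report.
A = torch.randn(2048, 8, 8)

t0 = time.perf_counter()
Q, R = torch.linalg.qr(A)
qr_time = time.perf_counter() - t0

t0 = time.perf_counter()
U, S, Vh = torch.linalg.svd(A)
svd_time = time.perf_counter() - t0

print(f"qr: {qr_time:.4f}s  svd: {svd_time:.4f}s")
```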
2,741,624,383 | [CPU][Inductor] Diffusers model got NotImplementedError: SliceView on CPU | mengfei25 | closed | [
"oncall: pt2",
"oncall: cpu inductor"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
SliceView NotImplementedError on CPU
```python
# https://github.com/intel/ai-reference-models/blob/05dea0c0554aa1051cd622d06c959eb1dea74213/models_v2/pytorch/LCM/inference/cpu/inference.py
export TORCH_INDUCTOR=1
export TORCHINDUCTOR_FREEZING=1
python ai-reference-models/models_v2/pytorc... | true |
2,741,607,321 | [Inductor] Fix _can_be_inplace function | jiayisunx | closed | [
"open source",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 3 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143279
Summary:
Modify _can_be_inplace function: return False if `_other.data` is an instance of `ir.BaseView`.
Fix https://github.com/pytorch/pytorch/issues/143280.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-C... | true |
2,741,583,374 | Update slow tests | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/slow",
"ci-no-td"
] | 3 | COLLABORATOR | This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests. | true |
2,741,287,656 | [fsdp2] mixed precision reduce dtype is clamped before lazy init | leonardo0lyj | closed | [
"oncall: distributed",
"triaged",
"module: fsdp"
] | 4 | NONE | Hi Andrew @awgu 😊,
As a big fan of fsdp2, I found a potential issue with mixed precision in the context of lazy init:
- ideally, fsdp2 allows the user to change the param dtype after initialization but before forward, hence the lazy init of mixed precision's `param_dtype`
(https://github.com/pytorch/pytorch/blob/d745b2b5... | true |
2,741,269,179 | Flex Attention Trainable Bias Bug on A6000 | joydddd | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
`python test/inductor/test_flex_attention.py -k test_head_specific_gate_batch:2 `
on A6000 GPU commit `625b4ed`
```
======================================================================
FAIL: test_head_specific_gate_batch:2_head:4_seq_len:256_headdim:16_dtype:float32_mode_max-autotune-no... | true |
2,741,265,763 | init | pianpwk | closed | [
"Stale",
"release notes: fx",
"fx",
"module: dynamo",
"ciflow/inductor"
] | 2 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,740,999,512 | torch._logging.set_logs kind of sucks for Jupyter notebooks | ezyang | open | [
"module: logging",
"triaged"
] | 0 | CONTRIBUTOR | ### 🐛 Describe the bug
Problems:
1. You can't override TORCH_TRACE via... anything. Impossible to do.
2. It would be really helpful if the function took the string format that the envvar takes, that format is very convenient and compact!
3. all=INFO is extremely spammy, for some reason
cc @mlazos
### Versio... | true |
2,740,989,791 | remove allow-untyped-defs for utils/data/datapipes/dataframe/structures.py | bobrenjc93 | closed | [
"Merged",
"ciflow/trunk",
"release notes: dataloader",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143272
* __->__ #143273
| true |
2,740,989,766 | remove allow-untyped-defs for _inductor/codegen/rocm/rocm_template_buffer.py | bobrenjc93 | closed | [
"module: rocm",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/rocm"
] | 15 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143272
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jia... | true |
2,740,989,447 | remove allow-untyped-defs for distributed/rpc/_testing/__init__.py | bobrenjc93 | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"release notes: distributed (rpc)",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143153
* #143273
* #143272
* __->__ #143271
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,740,936,298 | [Flex Decoding] split_kv Schedule evening | joydddd | closed | [
"open source",
"Stale",
"module: inductor"
] | 3 | CONTRIBUTOR | `flex_decoding` divide the KV matrix along the sequence length dimension into multiple sub-sequences and assigns to different blocks to improve GPU occupancy and HBM bandwidth utilization.
`num_splits = num_SM / Bsz / Hq`. (each SM is assigned on subsequence for one head)
This assignment happens statically, nam... | true |
2,740,847,345 | [ROCm] Improvements for vectorized elementwise kernels | jerrymannil | closed | [
"module: rocm",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/periodic",
"rocm",
"ciflow/unstable",
"ciflow/rocm",
"ciflow/inductor-rocm"
] | 35 | CONTRIBUTOR | * Compute io_size as the minimum of the input and output sizes, rather than the sum of all sizes
   * e.g., for torch.add() on half dtypes (bfloat16/float16), calc_io_size() returns 6, causing elems_per_thread to be 4
   * But elems_per_thread = 8 works better on half dtypes for AMD GPUs
* Enable *... | true |
2,740,768,393 | [CD] Fix XPU linux CD whl test failure | chuanqi129 | closed | [
"open source",
"Merged",
"ciflow/binaries",
"ciflow/trunk",
"topic: not user facing"
] | 6 | COLLABORATOR | Follow https://github.com/pytorch/pytorch/pull/142482, refer to the original fix PR https://github.com/pytorch/pytorch/pull/130742 and the new issue in https://github.com/pytorch/pytorch/actions/runs/12323126436/job/34403681230
Works for https://github.com/pytorch/pytorch/issues/114850
| true |
2,740,653,377 | Excessive precision discrepancy in torch.abs for complex Tensors with different data types | rookieLiu2018 | closed | [] | 1 | NONE | ### 🐛 Describe the bug
Using `torch.abs` on complex tensors with `dtype=torch.complex32` and `dtype=torch.complex64` leads to an excessively large discrepancy in results
``` python
import torch
complex_tensor = [100 + 150j, 200 + 250j]
x = torch.tensor(complex_tensor, dtype=torch.complex32)
y = torch.tensor(... | true |
2,740,254,369 | [Inductor XPU] Support max-autotune on XPU and reuse the corresponding Inductor UT. | etaf | closed | [
"open source",
"Merged",
"Reverted",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/xpu",
"ci-no-td"
] | 12 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143266
This PR aims to add the functionality support of max-autotune for XPU. The current triton templates and configurations are not well optimized for XPU, so the performance is not ready yet. Also the `mm_plus_mm` template have a... | true |
2,740,203,300 | [audio hash update] update the pinned audio hash | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | COLLABORATOR | This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash. | true |
2,740,174,371 | Add support for CPU scalar in addcmul | EmmettBicker | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend"
] | 9 | CONTRIBUTOR | Step required for performance in #143122
Adds support for CPU scalar for tensor_2 in addcmul. For example:
```
import torch
a = torch.rand(2, 2, device="cuda")
b = torch.tensor(1e-3)
torch.add(a, b)
torch.addcmul(a, a, b) # used to fail, now works
``` | true |
2,740,145,240 | [Easy] Bump CUDA nightly version to 11.8 / 12.4 / 12.6 in nightly pull tool | XuehaiPan | closed | [
"module: cuda",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: devx"
] | 6 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143262
* #141282
* __->__ #143263
cc @ptrblck @msaroufim @eqy @ZainRizvi @kit1980 @huydhn @clee2000 | true |
2,740,145,194 | Set proper `LD_LIBRARY_PATH` on Linux in nightly venv in nightly pull tool | XuehaiPan | closed | [
"open source",
"Merged",
"Stale",
"topic: not user facing",
"no-stale",
"module: devx"
] | 8 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143262
Before this change:
```console
$ make setup-env-cuda PYTHON="${HOMEBREW_PREFIX}/bin/python3.12"
$ source venv/bin/activate
$ python3 -c 'import torch'
Traceback (most recent call last):
File "<string>", line 1, in <... | true |
2,740,125,800 | Add a warning when a tensor with requires_grad=True is converted to a scalar | joshdavham | closed | [
"triaged",
"open source",
"Merged",
"Reverted",
"Stale",
"ciflow/trunk",
"release notes: python_frontend",
"topic: improvements",
"ci-no-td"
] | 49 | CONTRIBUTOR | Fixes #143071
Operations performed on tensors with `requires_grad=True` such as
```python
import torch
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3
```
and
```python
x = torch.tensor(2.0, requires_grad=True)
y = torch.pow(x,3)
```
are valid operations.
While an operation using `numpy` like
... | true |
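Expanding the truncated snippet in the PR description, a small sketch of the behavior that motivates the warning: converting a graph tensor to a Python scalar succeeds but silently detaches it from autograd, while a numpy conversion refuses outright.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

s = float(y)  # works, but gradients cannot flow through the plain float
print(s)      # 8.0

# numpy conversion, by contrast, raises immediately:
raised = False
try:
    y.numpy()
except RuntimeError:
    raised = True
print("numpy() raised:", raised)

y_np = y.detach().numpy()  # the sanctioned escape hatch
```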
2,740,017,317 | Regression: `BlockMask__getitem__` returns a new BlockMask but forgets to change its shape on the Q dimension | w568w | open | [
"triaged",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 0 | NONE | ### 🐛 Describe the bug
## Problem
Before af883262509b80f13a08dd5184d7b9456da38173, slicing a BlockMask along the query dimension would shrink its length on that dimension (and unfortunately round up the KV dimension):
```python
from torch.nn.attention.flex_attention import create_block_mask
block_mask = cre... | true |
2,739,988,920 | s.isIntegral(false) INTERNAL ASSERT FAILED | barbara42 | open | [
"needs reproduction",
"module: autograd",
"triaged"
] | 1 | NONE | ### 🐛 Describe the bug
When training ViT_b_16 (https://pytorch.org/vision/main/models/generated/torchvision.models.vit_b_16.html#torchvision.models.vit_b_16) on CUDA
```
model = helper.train_model(model, dataloaders, criterion, optimizer, scheduler,
File "/home/birdy/meng_thesis/code/master_ifcb_classifie... | true |
2,739,869,869 | Proper support for optionals in TorchScript | bluenote10 | open | [
"oncall: jit"
] | 1 | CONTRIBUTOR | ### 🚀 The feature, motivation and pitch
This came up as part of https://github.com/pytorch/pytorch/pull/142326.
TorchScript should support `Optional[T]` or `T | None` annotations correctly. Currently something basic like the following fails:
```py
import torch
class MyScriptModule(torch.nn.Module):
b... | true |
2,739,767,182 | [4/N] Apply py39 ruff and pyupgrade fixes | cyyever | closed | [
"oncall: distributed",
"oncall: jit",
"open source",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"fx",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend",
"suppress-bc-linter",
"ciflow/s390"
] | 14 | COLLABORATOR | ```torch/fx/passes/annotate_getitem_nodes.py``` was changed to support the new type hinting annotations.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @ezyang @SherlockNoMad @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhu... | true |
2,739,741,733 | Remove all dead type ignores (round 2) | bluenote10 | closed | [
"oncall: distributed",
"oncall: jit",
"module: rocm",
"module: cpu",
"open source",
"module: amp (automated mixed precision)",
"Stale",
"release notes: quantization",
"release notes: distributed (c10d)",
"fx",
"module: inductor",
"module: dynamo",
"ciflow/inductor",
"oncall: distributed ch... | 2 | CONTRIBUTOR | The next follow-up on #142325
This PR removes all dead/unused `# type: ignore` that do not have the code `# type: ignore[import]` (because these may be conditional type ignores, as discussed in https://github.com/pytorch/pytorch/pull/60006#issuecomment-2480604728).
Considering that the amount of dead type ignore... | true |
2,739,615,243 | Remove unnecessary once flag usage | cyyever | closed | [
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)",
"ciflow/s390"
] | 12 | COLLABORATOR | Static local variables in C++11 are guaranteed to be initialized exactly once, as mentioned [here](https://en.cppreference.com/w/cpp/language/storage_duration)
```
If multiple threads attempt to initialize the same static local variable concurrently,
the initialization occurs exactly once
(similar behavior can be obtain... | true |
2,739,506,633 | [TorchGen] Simplify argumenttype_type | cyyever | closed | [
"triaged",
"open source",
"Stale",
"topic: not user facing"
] | 3 | COLLABORATOR | Simplify torchgen code.
| true |
2,739,492,184 | Introduce gc_time_us field for dynamo_compile scuba logging | qiurc | closed | [
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 14 | CONTRIBUTOR | Summary: The newly introduced field will be used by the following diff D67062158 to record the garbage collection time during PT2 compilation
Test Plan:
This diff itself should be no-op.
Test together with D67062158. Please refer to the test plan in D67062158 for the detailed test plan and result.
Differential Revisi... | true |
2,739,480,517 | Remove __ubsan_ignore_undefined__ | cyyever | open | [
"module: cpu",
"triaged",
"open source",
"topic: not user facing"
] | 8 | COLLABORATOR | Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,739,457,137 | Simplify host_softmax | cyyever | closed | [
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | COLLABORATOR | Fixes #ISSUE_NUMBER
| true |
2,739,454,547 | [PyTorch] Add backend aot_eager_decomp_partition_with_mode | silverlakeli | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: AO frontend"
] | 13 | CONTRIBUTOR | Summary:
## Why
To make it possible to run a torch dispatch mode inside compiled modules. This is to enable running MemoryTrackerMode (in the next diff) to collect memory usage of compiled modules.
## What
Add a backend aot_eager_decomp_partition_with_mode.
Add an enable_log to the backend to control the compilation logging... | true |
2,739,424,688 | torch.select could not guard on data-dependent expression error | ydwu4 | closed | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 4 | CONTRIBUTOR | ### 🐛 Describe the bug
File the issue for tracking.
I tried the following code:
```python
import torch
torch._dynamo.config.capture_scalar_outputs = True
def f(x, t):
c = x.item()
torch._check(c >= 0)
torch._check(c < t.size(0))
return torch.select(t, 0, c) + 1
out = torch.compile(f, f... | true |
2,739,411,458 | try root fix for FP8 tensor | mayank31398 | closed | [
"oncall: distributed",
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: distributed (fsdp)"
] | 7 | CONTRIBUTOR | Fixes #143194
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,739,408,997 | [ca] re-enable disabled tests | xmfan | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143247
FIXES https://github.com/pytorch/pytorch/issues/133197
The unspecified floats PR landed while this test was disabled, and it added an analysis restart which counts towards the backend call counter the test is using
cc... | true |
2,739,403,209 | UNSTABLE slow / linux-focal-rocm6.2-py3.10 / test (slow) | huydhn | closed | [
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4 | CONTRIBUTOR | A network issue on ROCm runners is causing all downloads there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,739,401,309 | [audio hash update] update the pinned audio hash | pytorchupdatebot | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor"
] | 3 | COLLABORATOR | This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash. | true |
2,739,400,520 | Exclude py 3.13t triton package from PyTorch 3.13t wheel | pytorchbot | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | Follow up after https://github.com/pytorch/pytorch/pull/143162
Include triton only for 3.13 packages not 3.13t | true |
2,739,400,420 | [CD] Test torch.compile on 3.13 | pytorchbot | closed | [
"open source",
"topic: not user facing"
] | 1 | COLLABORATOR | Follow up after https://github.com/pytorch/pytorch/pull/143162 | true |
2,739,397,070 | ROCm SDPA: Ensure attn_mask has the same dtype with q | xinyazhang | closed | [
"module: rocm",
"open source",
"Merged",
"topic: not user facing",
"ciflow/rocm"
] | 17 | COLLABORATOR | This is required by the current AOTriton backend.
Fixes a NaN when calling the SDPA ME backend with `q.dtype() != attn_mask.dtype()` while training llama2 using transformers+deepspeed+pytorch
Corresponding CUDA check seems to be here:
https://github.com/pytorch/pytorch/blob/708ce3c0082d670d9eaff84bc3c43cad4554a75d/aten/... | true |
2,739,381,740 | [DSD][BE] Rewrite some tests to remove `with_comms` | fegin | closed | [
"oncall: distributed",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 5 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143241
* #143240
Summary:
This saves ~1 minute of test time.
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @LucasLLC @MeetVadakkanchery @mhorowitz @pradeepfn @ekr0 | true |
2,739,381,694 | [BE][CP] Use run_subtests instead of parametrize | fegin | closed | [
"oncall: distributed",
"Merged",
"topic: not user facing"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143241
* __->__ #143240
Summary:
This provides a 15X increase in test speed.
cc @H-Huang @awgu @kwen2501 @wanchaol @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | true |
2,739,381,175 | xpu: torch.nn.DataParallel fails on multi-XPU environment with "module 'torch._C' has no attribute '_scatter'" | dvrogozh | open | [
"oncall: distributed",
"triaged",
"module: xpu"
] | 8 | CONTRIBUTOR | With:
* Nightly PyTorch XPU:
* torch `2.6.0.dev20241209+xpu`
* torchaudio `2.5.0.dev20241209+xpu`
* torchvision `0.20.0.dev20241209+xpu`
* https://github.com/huggingface/transformers/commit/add53e25ffa3d1750a944086d2fbb016aee35406
`torch.nn.DataParallel` fails on multi-XPU environment with: `"AttributeErr... | true |
2,739,377,222 | [torch][cuda] fix race condition in cuda initialization | suo | closed | [
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 4 | MEMBER | The access to lazy init callbacks (`_lazy_seed_tracker` and `_queued_calls`) is not synchronized with the initialization lock.
This exposes us to the following race:
1. start `_lazy_init`
2. take `_initialization_lock`
3. flush `_queued_calls` and run them all
4. another thread comes in and uses `_lazy_call` to ... | true |
2,739,376,533 | [EZ] Remove `--pre` from numpy installation command | malfet | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | null | true |
2,739,376,152 | [AOTI] Relax input alignment assertion | desertfire | closed | [
"Merged",
"ciflow/trunk",
"topic: improvements",
"module: inductor",
"ciflow/inductor",
"release notes: inductor"
] | 8 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143236
Summary: https://github.com/pytorch/pytorch/pull/142136 added a runtime alignment assertion. But the assumption is probably too strict for more flexible use cases of AOTI, e.g. python deployment, see a recent error torchchat r... | true |
2,739,374,550 | [Utilization Log] Concurrently collect aggregate data during the output interval | yangw-dev | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3 | CONTRIBUTOR | # overview
Add a worker to collect metrics in short intervals.
1. Worker: Add a worker to collect usage metrics, by default every 500ms; note this is configurable.
2. Calculate avg and max as data points, by default every 5 seconds.
# Other
Clean up the log format for necessary needs; currently we do not need to t... | true |
2,739,370,026 | [CI/CD] Build torch with numpy 2 and compatible scipy & numba versions | haifeng-jin | closed | [
"open source",
"topic: not user facing"
] | 4 | COLLABORATOR | This is a follow-up for https://github.com/pytorch/pytorch/pull/141925.
The installed versions of SciPy and Numba were not compatible with numpy 2.0.2 while building.
This PR specifies compatible versions of SciPy and Numba to install.
| true |
2,739,351,039 | Network outage on ROCm runners | huydhn | closed | [
"high priority",
"triage review",
"module: rocm",
"ci: sev"
] | 2 | CONTRIBUTOR | ## Current Status
Ongoing
## Mitigation
ROCm jobs have been marked as unstable for the time being:
* https://github.com/pytorch/pytorch/issues/143232
* https://github.com/pytorch/pytorch/issues/143231
* https://github.com/pytorch/pytorch/issues/143230
* https://github.com/pytorch/pytorch/issues/143246
... | true |
2,739,349,517 | UNSTABLE inductor-rocm / rocm6.2-py3.10-inductor / test | huydhn | closed | [
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4 | CONTRIBUTOR | Network issue on ROCM runners is causing all the download there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,739,349,135 | UNSTABLE rocm / linux-focal-rocm6.2-py3.10 / test | huydhn | closed | [
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4 | CONTRIBUTOR | Network issue on ROCM runners is causing all the download there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,739,348,658 | UNSTABLE trunk / linux-focal-rocm6.2-py3.10 / test | huydhn | closed | [
"module: rocm",
"module: ci",
"triaged",
"unstable"
] | 4 | CONTRIBUTOR | Network issue on ROCM runners is causing all the download there to fail
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @seemethere @malfet @pytorch/pytorch-dev-infra | true |
2,739,345,586 | Add typechecking indirection for Config | oulgen | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 4 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143152
* __->__ #143229
When we create a Config[T], we actually dynamically unbox this in the module, so lets have type checker believe that Config[T] creates a T. This enables proper typechecking support.
| true |
2,739,336,881 | Remove deprecated branch after capture_pre_autograd_graph fully migrate to training IR | yushangdi | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export"
] | 8 | CONTRIBUTOR | Summary:
as title
#buildall
Test Plan: CI
Differential Revision: D67222286
| true |
2,739,333,888 | [export] Unify single and multiple return for hops | yiming0416 | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"module: inductor",
"ciflow/inductor",
"release notes: export"
] | 35 | CONTRIBUTOR | Summary: Introduce an `is_hop_single_tensor_return` field in the `Node` class in serialization so that during deserialization, when there is a single return, we know whether it is a tuple containing a single element or a bare single element.
Test Plan:
```
buck2 run @mode/dev-nosan sigmoid/inference/test:e2e_test_cpu -- -r E2ETest... | true |
2,739,333,556 | Expose remaining sharedMem cudaDeviceProps to python | peterbell10 | closed | [
"open source",
"Merged",
"ciflow/trunk",
"release notes: python_frontend",
"topic: new features"
] | 3 | COLLABORATOR | I was a bit too fast with my earlier PR: `sharedMemPerMultiprocessor` includes some memory that is reserved for the system. The amount a kernel can actually use is limited by `sharedMemPerBlockOptin`.
I also expose `sharedMemPerBlock` for completeness.
| true |
2,739,330,194 | No actual change, just remove variable contain Tensors from global scope | albanD | closed | [
"oncall: jit",
"Merged",
"ciflow/trunk",
"release notes: quantization",
"skip-pr-sanity-checks",
"release notes: AO frontend"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #143204
* #143323
* __->__ #143225
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | true |
2,739,301,510 | Kill capture_pre_autograd_graph API | yushangdi | closed | [
"fb-exported",
"Merged",
"Reverted",
"ciflow/trunk",
"release notes: quantization",
"ciflow/inductor",
"release notes: export",
"ci-no-td"
] | 160 | CONTRIBUTOR | Summary:
Delete the following API:
- capture_pre_autograd_graph()
- capture_pre_autograd_graph_using_training_ir()
- gm_using_training_ir()
There's no more call sites to `capture_pre_autograd_graph`.
Except
1) two test cases in coreml, PR to remove: https://github.com/apple/coremltools/pull/2400
2) XLA: one test cas... | true |
2,739,269,358 | cpp_wrapper: Use runtime dispatched fallbacks for complex ops | benjaminglass1 | closed | [
"open source",
"Merged",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 4 | COLLABORATOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #144124
* #144123
* #144002
* #143909
* #143421
* __->__ #143223
* #141371
When calling a fallback op in cpp_wrapper mode, where any of the inputs are complex numbers, utilize the runtime dispatched fallback mode. This properly handles the ... | true |
2,739,253,649 | torch.onnx.export fails with <class 'torch._dynamo.exc.UserError'>: Could not guard on data-dependent expression u1 < 0 (unhinted: u1 < 0). (Size-like symbols: none) | liqunfu | open | [
"module: onnx",
"triaged",
"actionable"
] | 1 | COLLABORATOR | ### 🐛 Describe the bug
```python
import torch
from detectron2.structures import ImageList
batched_inputs = [{"image": torch.randint(0, 256, (3, 1024, 1024), dtype=torch.uint8), "height": 1024, "width": 1024}]
class test_model(torch.nn.Module):
def __init__(self):
super(test_model, self).__init__... | true |
2,739,242,308 | This should fail | malfet | closed | [
"module: cpu",
"ciflow/linux-aarch64"
] | 2 | CONTRIBUTOR | Fixes #ISSUE_NUMBER
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | true |
2,739,234,362 | [logging] Log cudagraphify timings to dynamo_timed | masnesral | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"module: dynamo",
"ciflow/inductor"
] | 7 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143220
Summary: this adds some new dynamo_timed calls in cudagraph_trees, primarily with the aim to add cudagraph-related timing to scuba. Things to note:
* Uses the changes in https://github.com/pytorch/pytorch/pull/141919 to log ... | true |
2,739,229,442 | ROCM 6.2.4 RuntimeError: HIP error: AMD_SERIALIZE_KERNEL=3 Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions. | KEDI103 | closed | [
"module: rocm",
"triaged"
] | 8 | NONE | ### 🐛 Describe the bug
Before the 6.2.4 release reached the main page, I tried torch 2.6.0.dev20241209+rocm6.2.4 and it worked perfectly, but after torch 2.6.0.dev20241213+rocm6.2.4 was released this error appeared
```
HIP error: invalid device function
HIP kernel errors might be asynchronously reported at some other AP... | true |
2,739,213,441 | Exclude py 3.13t triton package from PyTorch 3.13t wheel | atalman | closed | [
"Merged",
"Reverted",
"ciflow/binaries",
"topic: not user facing",
"ci-no-td"
] | 11 | CONTRIBUTOR | Follow up after https://github.com/pytorch/pytorch/pull/143162
Include triton only for 3.13 packages not 3.13t | true |
2,739,181,315 | support slicing with symints in non-strict | avikchaudhuri | closed | [
"fb-exported",
"Merged",
"ciflow/trunk",
"ciflow/inductor",
"release notes: export"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143217
Differential Revision: [D67215745](https://our.internmc.facebook.com/intern/diff/D67215745/) | true |
2,739,171,144 | `torch._refs.tensor` does not accept `[]` | avikchaudhuri | closed | [
"triaged",
"actionable",
"module: primTorch",
"module: decompositions"
] | 3 | CONTRIBUTOR | ### 🐛 Describe the bug
```python
torch._refs.tensor([]) # error
torch.tensor([]) # OK
```
### Versions
trunk
cc @ezyang @mruberry @SherlockNoMad | true |
2,739,143,375 | Runners, torchbench, & the future | janeyx99 | open | [
"module: ci",
"triaged"
] | 13 | CONTRIBUTOR | The purpose of this issue is to centralize discussions regarding the state of our runners and torchbench, in particular what should be expected when they go through transitions. It is a bit of a weird issue as this does not point to any codebase problems with pytorch/pytorch, but the intended discussion group spans be... | true |
2,739,140,405 | Add tests for non divisible inputs for flex decoding | joydddd | closed | [
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor"
] | 6 | CONTRIBUTOR | cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov | true |
2,739,120,886 | Get rid of _lazy_import hack | ezyang | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 6 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143213
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames | true |
2,739,077,043 | [CI] Add Triton 3.13t build | malfet | closed | [
"Merged",
"Reverted",
"topic: not user facing",
"ci-no-td"
] | 13 | CONTRIBUTOR | By just extending the matrix and invoking script with appropriate cpython runtime | true |
2,739,023,586 | [dynamo] disable eval frame callback around most of _TorchDynamoContext wrapper function | williamwen42 | closed | [
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 4 | MEMBER | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #143211
Internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1559636954674510/
If the `_fn` returned by `_TorchDynamoContext.__call__` makes an external function call, dynamo is recursively invoked. This can... | true |
2,738,969,965 | [2/N][Memory Profiling] Record memory allocation/free | mzzchy | closed | [
"Stale"
] | 3 | CONTRIBUTOR | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Design Doc: https://fburl.com/gdoc/47zpuweb
Prototyping: D66469341
In this diff, we implement the logic to record, store, and export the memory trace, which will be invoked by MTIA hooks later.
* Add RingBuffer<MTIATraceEntry> to... | true |