column               type    range / classes
id                   int64   2.74B – 3.05B
title                string  length 1 – 255
user                 string  length 2 – 26
state                string  2 classes
labels               list    length 0 – 24
comments             int64   0 – 206
author_association   string  4 classes
body                 string  length 7 – 62.5k
is_title             bool    1 class
3,040,294,136
[precompile] [easy] Refactor FxGraphCache to add cache_hit_post_compile function
jamesjwu
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152840 * __->__ #152839 * #152836 This PR refactors CompiledFxGraph by adding a new post_compile step that runs only on cache hit. It moves a bunch of code in _lookup_graph into its own function so that we can use it in BundledAOTA...
true
3,040,292,658
[ROCm] Fix SymmetricMemory build error on NAVI arch
pragupta
closed
[ "oncall: distributed", "module: rocm", "open source", "Merged", "ciflow/trunk", "release notes: distributed (c10d)", "ciflow/periodic", "ciflow/rocm", "ciflow/periodic-rocm-mi300" ]
6
CONTRIBUTOR
NAVI arch doesn't support `__builtin_amdgcn_s_memtime()`; use `clock64()` instead, which works for both NAVI and MI archs. Fixes #ISSUE_NUMBER cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @nar...
true
3,040,279,361
[nativert] Move MPMCQueue to torch/nativert.
zhxchen17
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
21
CONTRIBUTOR
Summary: Torch Native Runtime RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed. This diff adds a small library implementing a multi...
true
3,040,243,576
[precompile] Refactor AOTAutogradCacheEntry to be generic
jamesjwu
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152840 * #152839 * __->__ #152836 The purpose of this stack is to create a new BundledAOTAutogradCacheEntry, which is an AOTAutogradCacheEntry that is self contained, i.e. it contains all of the CompiledFxGraph directly in the entry, i...
true
3,040,225,776
[DRAFT] Test nccl
atalman
open
[ "ciflow/binaries" ]
2
CONTRIBUTOR
Fixes #ISSUE_NUMBER
true
3,040,174,808
[c10d] Fix extra CUDA context created by barrier
kwen2501
open
[ "oncall: distributed", "release notes: distributed (c10d)" ]
1
CONTRIBUTOR
Fixes #149119. In ProcessGroup.hpp, we create a dummy tensor for dispatching. This requires a correct device index. This PR uses the `device_id` given by the user when calling `init_process_group`. This PR also uses `torch._C._get_accelerator()` to determine the device type. ghstack-source-id: 96c32b9565794d995c26bd17...
true
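A minimal usage sketch of the fix's premise (assuming a CUDA build and a torchrun launch that sets LOCAL_RANK; `device_id` is a real `init_process_group` keyword named in the PR body, but the internals of the fix are not shown):

```python
import os
import torch
import torch.distributed as dist

# Passing device_id gives c10d a concrete device index for its dummy
# dispatch tensor, so barrier() need not guess and spawn an extra
# CUDA context on device 0.
local_rank = int(os.environ["LOCAL_RANK"])
dist.init_process_group(
    backend="nccl",
    device_id=torch.device("cuda", local_rank),
)
dist.barrier()
dist.destroy_process_group()
```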
3,040,167,991
Document that dampening is skipped in SGD momentum first step
janeyx99
closed
[ "Merged", "ciflow/trunk", "topic: docs", "release notes: optim" ]
3
CONTRIBUTOR
Pointed out by https://x.com/hi_tysam/status/1917318692276174977/photo/2. It would be BC breaking to change this behavior 7 years after it has been decided, so we are documenting it first at the very least. <img width="642" alt="image" src="https://github.com/user-attachments/assets/3febcb07-e0ed-44a1-bd3b-a8e685...
true
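A worked sketch of the now-documented behavior, following the SGD momentum pseudocode (values are illustrative):

```python
import torch

g = torch.tensor(1.0)           # constant gradient, for illustration
momentum, dampening = 0.9, 0.5

buf = g.clone()                              # step 1: buf = g, dampening skipped
buf = momentum * buf + (1 - dampening) * g   # step 2+: dampening applies
print(buf)  # tensor(1.4000); applying dampening at step 1 would give 0.95 instead
```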
3,040,134,501
Allow to set custom PYTHONPATH for torch.inductor
gdippolito
open
[ "triaged", "open source", "oncall: pt2", "module: inductor", "release notes: inductor" ]
4
NONE
When using Bazel, it’s common to encounter issues like [this](https://github.com/bazelbuild/bazel/issues/14640) and [this](https://github.com/bazel-contrib/rules_python/issues/792) where the `PYTHONPATH` environment variable becomes too long and results in an error such as: `OSError: [Errno 7] Argument list too long` ....
true
3,040,131,353
[pytorch][PR][inductor] Fix one instance of launch_enter_hook
devashishshankar
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
5
CONTRIBUTOR
Summary: One usage seems missed in https://github.com/pytorch/pytorch/pull/152457 Test Plan: EMS local benchmark Differential Revision: D74159749 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @...
true
3,040,101,312
[BE]: Improve aten formatter with fmtlib
Skylion007
open
[ "open source" ]
2
COLLABORATOR
Fixes #ISSUE_NUMBER
true
3,040,027,504
Don't hardcoded support for DTensor to_local/from_local/redistribute into dynamo
bdhirsh
open
[ "oncall: distributed", "triaged", "oncall: pt2", "module: dynamo" ]
0
CONTRIBUTOR
There has been a long-standing hack in dynamo around support for DTensor - there are a few primitive functions (listed above) that accept opaque python types (`DTensorSpec/Placement/DeviceMesh`) and therefore cannot go in the dynamo graph, so they have hardcoded support in dynamo. This is bad for several reasons: (1) it...
true
3,040,016,632
[MSVC] Enable updated lambda processor by setting compiler flag /Zc:lambda globally
taras-janea
open
[ "module: build", "module: windows", "module: cpu", "open source", "topic: not user facing", "skip-url-lint" ]
1
COLLABORATOR
Fixes: - https://github.com/pytorch/pytorch/issues/92600 [Enable updated lambda processor](https://learn.microsoft.com/en-us/cpp/build/reference/zc-lambda?view=msvc-170) by setting compiler flag `/Zc:lambda` globally. cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jgong5 @min...
true
3,039,923,454
Pipeline Parallelism Fails when stage input does not produce gradients in all stages.
man2machine
open
[ "oncall: distributed" ]
0
NONE
### 🐛 Describe the bug TLDR: Pipeline parallelism fails if a stage input does not have gradients produced. Consider the case where the output from each pipeline stage is passed to the next stage, but whether the output is used for a particular batch is conditional (based on the code of the model). Hence, i...
true
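A toy sketch of the failure mode, with no actual pipelining (all names here are illustrative): when a stage's output is conditionally unused downstream, backward produces no gradient for the stage input, which a pipeline schedule does not expect.

```python
import torch

x = torch.randn(4, requires_grad=True)   # stands in for a stage input
stage_out = x * 2                         # output handed to the "next stage"
use_it = False                            # condition decided by model code

if use_it:
    loss = stage_out.sum()
else:
    loss = torch.zeros((), requires_grad=True)  # stage output never used

loss.backward()
print(x.grad)  # None: no gradient ever flows back to the stage input
```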
3,039,919,509
Only do shallow clone when checkout nccl
YouJiacheng
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
9
CONTRIBUTOR
Note: `--depth` implies `--single-branch` since git 2.7.6 ```sh git clone https://github.com/NVIDIA/nccl.git Cloning into 'nccl'... remote: Enumerating objects: 4205, done. remote: Counting objects: 100% (238/238), done. remote: Compressing objects: 100% (122/122), done. remote: Total 4205 (delta 144), reused ...
true
3,039,882,069
Use gcc13 in Manylinux 2.28 images
atalman
open
[ "ciflow/binaries", "topic: not user facing" ]
5
CONTRIBUTOR
Related to: https://github.com/pytorch/pytorch/issues/152426
true
3,039,706,050
`mypy` stage of `lintrunner -a` has intermittent but continuing crashes
rec
open
[ "module: crash", "module: lint", "triaged", "module: flaky-tests", "bug" ]
1
COLLABORATOR
### 🐛 Describe the bug Sometimes (5-10% of the time?) when I run `lintrunner init && lintrunner -a` I get a Python traceback in the second step (listed below). Almost always this does not happen again when I rerun the command. I've been sort of ignoring it for a long time but figured I should finally report it! The...
true
3,039,582,622
Performance Regression nightly 03/11→03/12, on nanogpt speedrun
YouJiacheng
open
[ "high priority", "triaged", "oncall: pt2", "upstream triton", "module: higher order operators", "module: pt2-dispatcher", "module: flex attention" ]
9
CONTRIBUTOR
### 🐛 Describe the bug code: https://gist.github.com/YouJiacheng/687efdab59a3c3b4ad89864804bd918a I manually applied changes from #152641 03/10: 1469.0-1470.4s (3 runs) 03/11: 1469.4-1470.5s 03/12: 1486.0-1487.4s (a few runs) 03/15: ≈1487.5s (a single run) FWD diffs (03/10 vs. 03/15): https://www.diffchecker.com/bL...
true
3,039,556,091
TorchRun: Option to specify which GPUs to run on
bjourne
open
[ "oncall: distributed" ]
2
NONE
### 🚀 The feature, motivation and pitch TorchRun has an `--nproc-per-node` option to specify how many processes/gpus to use, but it has no option for specifying *which* gpus to use. So if you run torchrun multiple times, the same gpus will be used. You can get around that as follows: CUDA_VISIBLE_DEVICES="2,4,7" ...
true
3,039,454,175
[Easy][Inductor] Adds safety checks in get_estimated_runtime
Aidyn-A
open
[ "triaged", "open source", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
11
COLLABORATOR
This PR adds checks on `gpu_memory_bandwidth` and `gpu_flops` in `get_estimated_runtime`. This will prevent division by zero and other potential incorrect values: https://github.com/pytorch/pytorch/blob/9210a98b9203c5ff42f39241304a8e38435110b8/torch/_inductor/scheduler.py#L864-L865 https://github.com/pytorch/pytorc...
true
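A hedged sketch of the kind of guard described (`gpu_memory_bandwidth` and `gpu_flops` are the names from the linked scheduler lines; `safe_estimated_runtime` is a hypothetical helper, not the PR's code):

```python
def safe_estimated_runtime(num_bytes: int, flops: int,
                           gpu_memory_bandwidth: float,
                           gpu_flops: float) -> float:
    # Guard against division by zero (or nonsense non-positive values)
    # when hardware specs are misreported.
    if gpu_memory_bandwidth <= 0 or gpu_flops <= 0:
        return 0.0
    # Runtime is bounded by whichever resource is the bottleneck.
    return max(num_bytes / gpu_memory_bandwidth, flops / gpu_flops)
```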
3,039,435,245
[DO NOT MERGE] update build tools version
alinpahontu2912
open
[ "triaged", "open source", "ciflow/binaries_wheel" ]
2
COLLABORATOR
Use the latest MSVC to build PyTorch and check if AVX512 instructions are correctly set
true
3,039,424,780
[TEST][Quantization] Skip test_learnable due to hypothesis
Aidyn-A
open
[ "triaged", "open source", "release notes: quantization", "topic: not user facing" ]
2
COLLABORATOR
As per comment in https://github.com/pytorch/pytorch/issues/111471#issuecomment-1866933243 the tests are failing due to hypothesis. This PR adds a skip to those tests.
true
3,039,320,406
fix: correct typo in randomness/reproducibility documentation
nachodieez
closed
[ "open source", "topic: not user facing" ]
4
NONE
Fixes #152817 by using the correct word in the documentation file.
true
3,039,309,286
Mention of nondeterministic index_add when deterministic implementation is being used
nachodieez
closed
[]
1
NONE
### 📚 The doc issue In [this documentation page](https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms) it is mentioned that the nondeterministic CUDA implementation of `index_add` is being used, when in fact the one that is being used (and giving the error) is the deterministic versio...
true
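A quick probe of which kernel actually runs (a sketch assuming a CUDA build; the point of this report is precisely that the docs misstate which path this hits):

```python
import torch

torch.use_deterministic_algorithms(True)

t = torch.zeros(5, device="cuda")
idx = torch.tensor([0, 1, 1], device="cuda")
src = torch.ones(3, device="cuda")
t.index_add_(0, idx, src)  # per this report, the deterministic path runs here
print(t)                   # tensor([1., 2., 0., 0., 0.], device='cuda:0')
```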
3,039,170,171
Depthwise Separable Convolutions with Large Tensors (> 2**31 Elements) Fail Despite cuDNN 64-bit Indexing Support
lely475
open
[ "module: cudnn", "module: cuda", "module: convolution", "triaged", "module: 64-bit" ]
3
NONE
### 🐛 Describe the bug The forward pass on a 2D convolutional layer using grouped convolutions (e.g., depthwise separable convolutions) fails when operating on tensors with more than 2**31 elements. This limitation persists even when cuDNN v9.7.1 is used, which should theoretically support 64-bit indexing for large t...
true
3,039,108,164
[Cutlass] E2E Tests for EVT
mlazos
open
[ "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152815 * #150907 * #151406 * #150906 * #152733 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhun...
true
3,039,106,703
[TEST][ATen][CUDA] Skip row-wise scaled matrix multiplication tests on sm_120+
Aidyn-A
open
[ "module: cuda", "triaged", "open source", "topic: not user facing" ]
10
COLLABORATOR
The float8 row-wise scaled matmuls are not supported on Blackwell yet. This PR adds skips to those tests to decrease the noise on `sm_120+` machines. cc @ptrblck @msaroufim @eqy @jerryzh168
true
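A hedged sketch of the skip pattern (the decorator actually used in the test suite may differ; `sm120_or_newer` is an illustrative helper):

```python
import unittest
import torch

def sm120_or_newer() -> bool:
    # get_device_capability() returns (major, minor), e.g. (12, 0) on sm_120
    return torch.cuda.is_available() and torch.cuda.get_device_capability() >= (12, 0)

@unittest.skipIf(sm120_or_newer(),
                 "float8 row-wise scaled matmul not yet supported on Blackwell")
def test_rowwise_scaled_mm():
    ...
```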
3,039,101,868
Mismatch in dynamic quantization performance for torchao and torch.quantization
PioneerAlexander
open
[ "oncall: quantization" ]
0
NONE
Hi everyone! Can someone explain why I get different performance when I apply torch.quantization.quantize_dynamic and torchao.quantize_? More specifically, I have an LSTM model with two fully connected layers (in the front and in the back). In order to quantize it with torchao, I reimplemented an LSTM layer (checked...
true
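For reference, the stable half of the comparison (a sketch; torchao's `quantize_` API varies by version, so only the `torch.ao` side is shown):

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8)
)
# Dynamic quantization: int8 weights, activations quantized on the fly.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```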
3,038,948,509
Fix typo on `test_multi_device_context_manager` for XPU
guangyey
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/xpu" ]
11
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152812 # Motivation Align https://github.com/pytorch/pytorch/pull/152474, fix the typo on UT for XPU introduced by https://github.com/pytorch/pytorch/issues/148864
true
3,038,926,403
[Quant][X86] add an op to compute uint8 batch norm 2d
Xia-Weiwen
open
[ "module: cpu", "open source", "release notes: quantization", "intel" ]
1
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152811 * #152411 **Summary** This PR adds a new op, `onednn.qbatch_norm2d`, which accepts uint8 inputs on CPU device (instead of QuantizedCPU). The new op is implemented with AVX512 instructions and provides similar performan...
true
3,038,863,226
Upgrade to NCCL 2.26.5 for CUDA 12
tinglvv
open
[ "open source", "ciflow/binaries", "ciflow/trunk", "topic: not user facing", "ciflow/inductor" ]
19
COLLABORATOR
Upgrade NCCL to latest 2.26.5 cc @atalman @ptrblck @malfet @eqy @nWEIdia
true
3,038,859,346
[xla hash update] update the pinned xla hash
pytorchupdatebot
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/inductor" ]
3
COLLABORATOR
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml). Update the pinned xla hash.
true
3,038,821,202
another try
hl475
open
[ "module: cpu", "fb-exported", "release notes: quantization" ]
2
CONTRIBUTOR
Differential Revision: D74161994 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
true
3,038,800,154
wip
hl475
open
[ "module: cpu", "fb-exported", "release notes: quantization" ]
2
CONTRIBUTOR
Differential Revision: D74161784 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
true
3,038,780,293
[invoke_subgraph] Force the output stride to be same as eager
anijain2305
open
[ "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152806 * #152675 * #152770 * #152772 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,038,763,925
False INTERNAL ASSERT FAILED
noaft
closed
[ "needs reproduction", "oncall: jit" ]
3
NONE
### 🐛 Describe the bug This is my code: import torch # This is the model after conversion quantized_model.eval() # Convert to TorchScript scripted_model = torch.jit.script(quantized_model) # Save with TorchScript scripted_model.save("resnet50_int8_scripted.pt") I want to save my quantized model with jit and have...
true
3,038,592,496
Segmentation fault (core dumped) in torch.nn.functional.max_unpool2d
cx104906
closed
[ "triage review", "module: crash", "topic: fuzzer" ]
3
NONE
### 🐛 Describe the bug reproduce ``` curl -L -o 004-args "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000004-args" curl -L -o 004-kwargs "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000004-kwargs" python cxtest1.py ``` cxtest1.py ``` import torch import pickle print(torch.__version__) mylist = to...
true
3,038,557,505
same test for guard_or_false 2
laithsakka
open
[]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152803 * #152802 * #152784 * #152722 * #148872
true
3,038,556,240
same test for guard_or_false 1
laithsakka
open
[]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152803 * __->__ #152802 * #152784 * #152722 * #148872
true
3,038,539,405
Thread through options so GraphPickler can allow all ops
aorenste
closed
[ "Merged", "ciflow/trunk", "release notes: fx", "topic: not user facing", "fx" ]
6
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152801 Fixes #151904 In #151904 we discussed the feasibility of including all ops in the GraphPickler. This PR changes it so we can filter which ops are allowed and which are blocked. cc @ezyang @SherlockNoMad @EikanWang @jgo...
true
3,038,484,054
Add "#pragma once" to CachingHostAllocator.h
jhapradip
closed
[ "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
6
CONTRIBUTOR
null
true
3,038,430,994
[float16]: Fast path for torch.dot with float16/bfloat16
f2013519
closed
[ "module: cpu", "open source", "Merged", "Reverted", "ciflow/trunk", "release notes: linalg_frontend", "topic: performance", "ci-no-td" ]
21
CONTRIBUTOR
Fixes #152798 Add the fast path for dot with contiguous tensors for float16/bfloat16 types. Performance with patch (see issue for benchmark and current performance): ![Improved dot performance](https://github.com/user-attachments/assets/57f64e90-8191-4710-adb0-f430644827de) **We see up to 10x+ improvement i...
true
3,038,418,140
Poor performance of torch.dot with float16 & bfloat16
f2013519
closed
[ "triaged", "module: bfloat16", "module: half", "module: linear algebra", "topic: performance" ]
0
CONTRIBUTOR
### 🐛 Describe the bug torch.dot is an order of magnitude slower (or more) with float16/bfloat16 versus float32: ```python import torch import timeit import sys import platform import matplotlib.pyplot as plt import numpy as np import warnings import math # --- Configuration --- # Vector sizes (N) - Powers of 10 fro...
true
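A minimal CPU benchmark sketch in the spirit of the issue's script (sizes and iteration counts are illustrative):

```python
import timeit
import torch

n = 1_000_000
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    a = torch.randn(n, dtype=dtype)
    b = torch.randn(n, dtype=dtype)
    t = timeit.timeit(lambda: torch.dot(a, b), number=100)
    print(f"{dtype}: {t * 10:.3f} ms per call")  # t / 100 calls * 1000 ms
```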
3,038,373,933
DISABLED test_comprehensive_fliplr_cuda_float16 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
2
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_fliplr_cuda_float16&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142892). Over the p...
true
3,038,373,932
DISABLED test_comprehensive_rot90_cuda_float32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
5
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_rot90_cuda_float32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142889). Over the pa...
true
3,038,373,442
DISABLED test_comprehensive_unbind_copy_cuda_int32 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
14
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_unbind_copy_cuda_int32&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142889). Over th...
true
3,038,373,416
DISABLED test_comprehensive_slice_scatter_cuda_bool (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "high priority", "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
12
NONE
Platforms: inductor This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_slice_scatter_cuda_bool&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142892). Over t...
true
3,038,373,413
DISABLED test_comprehensive_linalg_pinv_singular_cuda_float64 (__main__.TestInductorOpInfoCUDA)
pytorch-bot[bot]
open
[ "triaged", "module: flaky-tests", "skipped", "oncall: pt2", "module: inductor" ]
1
NONE
Platforms: linux, slow This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_linalg_pinv_singular_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41618142...
true
3,038,371,840
Pass UNINSTALL_DILL to docker build
cyyever
closed
[ "triaged", "open source", "Merged", "ciflow/trunk", "topic: not user facing" ]
6
COLLABORATOR
`UNINSTALL_DILL` was not actually passed to the docker build before.
true
3,038,326,479
Inconsistent export behavior for nonzero+grid_sample between CUDA and CPU/MPS backends
sachin-skyline
open
[ "oncall: pt2", "oncall: export" ]
1
NONE
### 🐛 Describe the bug I am trying to `export` a model that contains a `nonzero` call followed by a `grid_sample` (for use in `aoti_compile_and_package`). When exporting for cpu or mps, no error is thrown, but when using cuda, "torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(2*u0, 0) (unh...
true
3,038,266,053
[CXX11ABI] torch 2.6.0-cu126 and cu124 have different exported symbols
vadimkantorov
open
[ "module: binaries", "module: cuda", "triaged" ]
15
CONTRIBUTOR
### 🐛 Describe the bug The symbol `_ZN3c105ErrorC2ENS_14SourceLocationESs` is exported in cu124's version, but missing in cu126: some `nm` outputs in https://github.com/Dao-AILab/flash-attention/issues/1644 I understand that because of missing symbols, flash_attention has stopped working with torch 2.7. But it was a...
true
3,038,259,121
Fixed rerr computation in lobpcg
ignasa007
open
[ "open source", "release notes: linalg_frontend" ]
15
NONE
Fixes #101075 This PR fixes an issue with the computation of residuals in the LOBPCG algorithm. **Bug**: [Line 788](https://github.com/pytorch/pytorch/blob/8f54e56e62692bcebf218f2e4c1855a3be97baf2/torch/_lobpcg.py#L788) is supposed to compute the denominator in Equation 9 of [Duersch et al., 2018](https://arxiv....
true
3,038,243,246
[MPSInductor] Fix `truncdiv` implementation
malfet
closed
[ "Merged", "topic: bug fixes", "release notes: mps", "ciflow/mps", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152788 For integral dtypes it should be just an alias for division. Fixes `GPUTests.test_div7_mps` cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @cheny...
true
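The fix's semantics in plain PyTorch terms: for integral dtypes, trunc-division rounds toward zero, unlike floor division.

```python
import torch

a = torch.tensor([7, -7])
b = torch.tensor([2, 2])
print(torch.div(a, b, rounding_mode="trunc"))  # tensor([ 3, -3]): toward zero
print(torch.div(a, b, rounding_mode="floor"))  # tensor([ 3, -4]): toward -inf
```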
3,038,209,346
Implement DeviceType.h as header-only
desertfire
open
[ "oncall: jit", "module: cpu", "module: mkldnn", "ciflow/trunk", "release notes: quantization", "ciflow/inductor", "ciflow/linux-aarch64" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152787 Summary: Move c10/core/DeviceType.h to a separate torch/csrc/header_only directory. Still keep a copy of c10/core/DeviceType.h for backward compatibility. More header files will be moved as follow-up. CI to guard "header-only-...
true
3,038,188,163
Update CMakeLists.txt
gisp-cubicon
open
[ "triaged", "open source", "topic: not user facing" ]
2
NONE
Fixes #ISSUE_NUMBER
true
3,038,177,115
Fix negative dim issue in the parallel loss context manager
abhilash1910
open
[ "oncall: distributed", "triaged", "open source", "topic: not user facing" ]
6
NONE
Facing a similar issue as in #152016, and added a fix as per @tianyu-l's solution. Fixes #152016 Tagging @tianyu-l @atalman for review. cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
true
3,038,168,322
test that the guard_or_true change can only make valid results null, but does not change results or make invalid results valid
laithsakka
open
[]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152803 * #152802 * __->__ #152784 * #152722 * #148872
true
3,038,134,742
undefined symbol: __nvJitLinkCreate_12_8, version libnvJitLink.so.12
FurkanGozukara
open
[ "triage review", "module: binaries" ]
3
NONE
I am trying to use Torch 2.7 with CUDA 12.8 on Linux with the Kohya trainer and I am getting this error. Exactly the same installation and setup works on Windows. I tried Torch 2.7 official and the latest Torch 2.8 nightly, all CUDA 12.8, and got the same error ``` ╭───────────────────── Traceback (most recent call last) ────────────────...
true
3,038,076,765
[BE]: Update cudnn to 9.9 for cu128
Skylion007
open
[ "open source", "topic: not user facing", "ciflow/inductor", "ciflow/inductor-cu126" ]
1
COLLABORATOR
Update cudnn to 9.9 for better Blackwell support for cu128
true
3,038,073,282
[MPS] SDPA specialized kernels
Isalia20
closed
[ "triaged", "open source", "Merged", "module: mps", "release notes: mps", "ciflow/mps", "module: sdpa" ]
8
COLLABORATOR
Partially fixes #139668 and #152550 Still work in progress. The following needs to be addressed: - [x] Some tests are failing; need to check why and bugfix - [x] Benchmark the new kernels and add results to this PR for varying sequence lengths and head dimensions (the ones that get dispatched to kernels) - [x] Add tests to co...
true
3,038,054,985
Error with nccl + multiple RTX5090 in ddp training. CUDA error: an illegal memory access was encountered
KohakuBlueleaf
closed
[ "oncall: distributed", "triaged" ]
3
NONE
### 🐛 Describe the bug Related issues: https://github.com/Lightning-AI/pytorch-lightning/issues/20757 When I tried to run DDP training with multiple RTX5090s I encountered this error in nccl. I have seen it in different tasks/projects and different trainer implementations, and eventually reproduced this error with nati...
true
3,038,050,985
[BE]: Update cutlass submodule to 3.9.2
Skylion007
closed
[ "open source", "Merged", "ciflow/trunk", "release notes: cuda", "module: dynamo", "ciflow/inductor" ]
4
COLLABORATOR
A lot of last-minute bugfixes for CUTLASS Blackwell that we should upstream. It's a header-only library and a minor release, so this should strictly improve compiler support and fix some bugs. Needed to update some instruction numbers in torch compile baselines for the new kernels cc @voznesenskym @penguinwu @Eikan...
true
3,038,048,736
[BE]: Update torch core lazy helpers with micro-opts
Skylion007
closed
[ "open source", "better-engineering", "Merged", "ciflow/trunk", "topic: not user facing" ]
5
COLLABORATOR
Some minor nits I noticed. Use `reserve` when possible.
true
3,037,894,802
Segmentation fault (core dumped) in torch.nn.functional.alpha_dropout
cx104906
open
[ "module: crash", "oncall: quantization", "module: error checking", "triaged", "module: empty tensor", "topic: fuzzer" ]
1
NONE
### 🐛 Describe the bug reproduce ``` curl -L -o 003-args "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000003-args" curl -L -o 003-kwargs "https://github.com/cx104906/poc/raw/main/pytorch/id%3A000003-kwargs" python cxtest1.py ``` cxtest1.py ``` import torch import pickle print(torch.__version__) mylist = tor...
true
3,037,832,335
[WIP] Pattern matcher support for mutable ops with view inputs
yf225
open
[ "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152776 * #152775 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,037,808,077
[Inductor] Pattern matcher support for mutable ops with non-view inputs
yf225
open
[ "module: inductor", "ciflow/inductor", "release notes: inductor" ]
2
CONTRIBUTOR
Fixes the non-view input use case in https://github.com/pytorch/pytorch/issues/152441. Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152776 * __->__ #152775 Pull-Request-resolved: https://github.com/pytorch/pytorch/pull/152767 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guob...
true
3,037,780,186
[dynamo][super variable] Fix bug to use correct source
anijain2305
closed
[ "module: rocm", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Fixes #ISSUE_NUMBER cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,037,779,755
RuntimeError: creation_meta == CreationMeta::DEFAULT INTERNAL ASSERT FAILED at "/build/pytorch/torch/csrc/autograd/variable.cpp":224, please report a bug to PyTorch.
ad8e
open
[ "high priority", "triage review", "module: autograd", "triaged" ]
4
CONTRIBUTOR
### 🐛 Describe the bug Reproducer: 1. `git clone https://github.com/crowsonkb/k-diffusion.git` 2. `cd k-diffusion` 3. Use find in files: `q, k = scale_for_cosine_sim(q, k, self.scale[:, None], 1e-6)` (it'll be in image_transformer_v2.py). Comment it out. 4. Run `python train.py --config configs/config_oxford_flowers....
true
3,037,774,797
[fx] Recursive DCE on subgraphs
anijain2305
closed
[ "Merged", "ciflow/trunk", "release notes: fx", "topic: not user facing", "fx", "module: inductor", "module: dynamo", "ciflow/inductor", "ci-no-td", "ciflow/pull" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152806 * #152675 * #152770 * __->__ #152772 cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @cha...
true
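A hedged sketch of what "recursive DCE" means in fx terms (`dce_recursive` is an illustrative helper, not the PR's code): run dead-code elimination on a GraphModule and on every nested GraphModule, e.g. subgraphs created for higher-order ops.

```python
import torch.fx as fx

def dce_recursive(gm: fx.GraphModule) -> None:
    # gm.modules() yields gm itself first, so the top-level graph is included.
    for child in gm.modules():
        if isinstance(child, fx.GraphModule):
            child.graph.eliminate_dead_code()
            child.recompile()
```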
3,037,772,983
[aoti] Add grid_sampler_3d to cshim
MaanasArora
open
[ "triaged", "open source", "module: inductor", "release notes: inductor (aoti)" ]
4
NONE
Fixes #147625. Do we need any tests? This is my first contribution. Thanks! cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @angelayi @desertfire
true
3,037,768,075
[inductor][refactor] Refactor the fetching of subgraph names
anijain2305
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "module: dynamo", "ciflow/inductor", "ciflow/pull" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152806 * #152675 * __->__ #152770 * #152772 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,037,708,655
Set CMake 3.5 as minimum version in pytorch_android
cyyever
closed
[ "open source", "Merged", "ciflow/binaries", "ciflow/trunk", "topic: not user facing", "ciflow/android" ]
9
COLLABORATOR
I saw a pytorch_android failure in docker image builds. This fix attempts to bypass CMake 4 limitations.
true
3,037,694,873
[cudagraphs] Fix issue in collecting static_input_idxs
pytorchbot
closed
[ "open source", "module: inductor", "module: dynamo", "ciflow/inductor", "release notes: AO frontend" ]
1
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152287 related to https://github.com/pytorch/pytorch/issues/152275 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amj...
true
3,037,692,794
[WIP] Pattern matcher support for custom op
yf225
closed
[ "module: inductor", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152767 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,037,690,318
[caffe2] Support building for armv8.1
andrewjcg
closed
[ "module: cpu", "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
5
CONTRIBUTOR
Summary: - Remove explicit `-march=` compiler flags, as they're already implied by the toolchain: https://www.internalfb.com/code/fbsource/[7f85b0565073]/fbcode/tools/build/buck/wrappers/defs.bzl?lines=819 - Gate non-8.1 compliant opcodes with `__ARM_FEATURE_*`. Test Plan: CI Reviewed By: rahulg Differential Revi...
true
3,037,687,461
[c10d] Fix unused `group` input argument in `new_subgroups()`
tsunghsienlee
closed
[ "oncall: distributed", "fb-exported", "Merged", "ciflow/trunk", "release notes: distributed (c10d)" ]
10
CONTRIBUTOR
Summary: This diff fixes an unused input argument [`group`](https://github.com/pytorch/pytorch/blob/8faa22569519b8916dfa0334287cbb849704965f/torch/distributed/distributed_c10d.py#L5341) in the `new_subgroups()` function. Test Plan: contbuild & OSS CI, see Differential Revision: D74132537 cc @H-Huang @awgu @wancha...
true
3,037,685,086
[WIP] fix issue 151198
yf225
closed
[ "module: cpu", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152764 cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @...
true
3,037,676,169
can't build torch on WSL
thot-experiment
closed
[ "module: build" ]
5
NONE
### 🐛 Describe the bug I'm on hour 5 of trying to get a version of torch built that supports sm_70 AND sm_120; for some reason the latest Linux version does not. Everything is working fine for me under Windows, so I know it must be possible to do both somehow, but I'm sort of at wit's end. I've followed the instructions ...
true
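A hedged sketch of one way to get both architectures into a source build, via the documented TORCH_CUDA_ARCH_LIST variable (the driver script itself is hypothetical):

```python
import os
import subprocess

# Build a pytorch checkout with kernels for both Volta (sm_70) and
# Blackwell (sm_120).
env = dict(os.environ, TORCH_CUDA_ARCH_LIST="7.0;12.0")
subprocess.run(["python", "setup.py", "develop"], env=env, check=True)
```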
3,037,564,690
added short integer for repeat_interleave_cpu, Fixes #151311
arjuanwall
open
[ "triaged", "open source", "topic: not user facing" ]
5
NONE
- Fixes #151311 (repeat_interleave_cpu not implemented for "Char") - Allows torch.repeat_interleave on CPU to accept int8, uint8, and int16 repeat‑count tensors - In aten/src/ATen/native/Repeat.cpp, tiny integer dtypes (kChar, kByte, kShort) are up‑cast to kInt before the AT_DISPATCH_INDEX_TYPES macro, so they reach ...
true
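What the change enables, as a sketch (the pre-fix error text is paraphrased from the linked issue):

```python
import torch

x = torch.tensor([1, 2, 3])
repeats = torch.tensor([2, 1, 3], dtype=torch.int8)  # tiny integer dtype
# Before: RuntimeError, repeat_interleave_cpu not implemented for 'Char';
# the workaround was repeats.to(torch.int64). After: works directly.
print(torch.repeat_interleave(x, repeats))  # tensor([1, 1, 2, 3, 3, 3])
```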
3,037,563,460
Performance Regression nightly 02/14→02/15, on nanogpt speedrun
YouJiacheng
closed
[]
4
CONTRIBUTOR
### 🐛 Describe the bug I manually applied changes from #152641 02/09: 1469.8-1470.4s. 03/01: 1471.3-1472.5s. #### Inductor output code 1. (02/09 + patch vs. 03/01 + patch) Bwd diff: https://www.diffchecker.com/p6TsbcIF/ Fwd diff (~no diff): https://www.diffchecker.com/BaZVI86E/ #### Bisection 02/20 Bwd is identic...
true
3,037,539,544
[Easy][BE] update recommended VS Code settings
XuehaiPan
open
[ "open source", "better-engineering", "topic: not user facing" ]
1
COLLABORATOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152760 Remove old invalid settings and replace with new settings.
true
3,037,484,623
Allow ATen ops overloading
goldcoderZ
open
[ "fb-exported" ]
4
CONTRIBUTOR
Summary: Allow ATen ops to be overloaded. Test Plan: contbuild & OSS CI [pending] Differential Revision: D74117257
true
3,037,303,120
[MPS] Migrate div rounding modes
malfet
closed
[ "Merged", "topic: improvements", "release notes: mps", "ciflow/mps", "keep-going" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152788 * __->__ #152758 By implementing `div_floor` and `div_trunc`. Do not mark `div_trunc` as OPMATH, to align the following output with CPU (if division is performed in fp32, then the result will be truncated to 25 ``` import torch print(t...
true
3,037,220,364
wip
bobrenjc93
closed
[ "release notes: fx", "fx", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152757 * #152601 * #152597 * #152596 cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
true
3,037,212,290
Cuda-12.9 removed libnvToolsExt.so.* and is now purely header nvtx3
whitesscott
open
[ "module: cuda", "triaged", "actionable" ]
3
NONE
### 🐛 Describe the bug Nvidia released Cuda-12.9 on 05/01/25. Python 3.12.10 venv, Nvidia Jetson AGX Orin dev kit. Cuda-12.9 removed libnvToolsExt.so.* and is now purely header-based: /usr/local/cuda/include/nvtx3/*. torch/__init__.py attempts to load the now-nonexistent library: "nvtx": "libnvToolsExt.so.*[0-9]", I...
true
3,037,186,867
Inconsistent behavior between CPU and GPU implementations of `torch.Tensor.put_` method
SilentTester73
closed
[]
1
NONE
### 🐛 Describe the bug ## Description I've discovered a discrepancy in the behavior of the `put_` method between CPU and GPU tensors. When executing identical operations, CPU tensors maintain their original values while GPU tensors are incorrectly modified to zero. ## Reproduction Code colab link: [https://colab.re...
true
3,037,173,341
[nativert] move intrusive list to c10/util
dolpm
closed
[ "fb-exported", "Merged", "ciflow/trunk", "topic: not user facing" ]
13
CONTRIBUTOR
Summary: nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed. This diff moves intrusive list to c10/util Test Plan: CI Different...
true
3,037,152,263
Handle less functions than number of segments
JacobHelwig
open
[ "triaged", "open source", "release notes: autograd" ]
9
NONE
Fixes #152752
true
3,037,151,754
Checkpoint sequential doesn't raise clear error when segments is greater than number of functions
JacobHelwig
open
[ "module: activation checkpointing", "triaged" ]
0
NONE
### 🐛 Describe the bug When incorrectly specifying segments to be greater than the number of functions, the error message is not clear: ``` import torch print(torch.__version__) from torch.utils.checkpoint import checkpoint_sequential lin = torch.nn.Linear(10, 10) torch.nn.init.zeros_(lin.weight) torch.nn.init.zeros_(l...
true
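A completed version of the report's truncated repro (a sketch; per the report, this fails with a confusing downstream error rather than a clear one):

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

lin = torch.nn.Linear(10, 10)
model = torch.nn.Sequential(lin)               # one function...
x = torch.randn(2, 10, requires_grad=True)
out = checkpoint_sequential(model, 2, x, use_reentrant=False)  # ...two segments
```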
3,037,106,704
Implement util function compute_global_tensor_shape for 1D device mesh
dharakk
closed
[ "oncall: distributed", "Merged", "ciflow/trunk", "topic: not user facing", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152166 * __->__ #152751 ### Summary Recreating #151990 to mitigate easyCLA failure compute_global_tensor_shape util function takes in local tensor shape, device mesh and placements. We all gather the shapes from the shards and ...
true
3,037,070,633
Error on padding 0-sized tensors
roman-openai
open
[ "triaged", "actionable", "module: python frontend", "module: edge cases" ]
1
NONE
### 🐛 Describe the bug ```python from torch.nn import functional x = torch.ones((0, 1)) y = functional.pad(x, [1, 1, 0, 0]) ``` raises ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[517], line 3 ...
true
3,037,034,349
wip
bobrenjc93
closed
[ "release notes: fx", "fx", "module: dynamo", "ciflow/inductor" ]
1
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152749 * #152670 * #152601 * #152597 * #152596 cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @jiayisunx @chenyang78 @kadeng @chauhang @amjames
true
3,037,027,389
Conditionally support experimental filesystem include in jit_opt_limit
aa6moham
open
[ "oncall: jit", "fb-exported", "ciflow/trunk", "release notes: jit" ]
11
NONE
Summary: some build modes rely on GCC toolchains older than 8.1 (the version where the official std::filesystem library was integrated into the standard library), so to support these older build modes (i.e. arvr/mode/embedded/linux/clang-aarch64-release) let's have a conditional on when to include the experimental filesystem libr...
true
3,037,006,558
torch.compile causes stride mismatch in SDPA with non-contiguous query in torch 2.7
felix-lyx
open
[ "high priority", "triaged", "module: regression", "oncall: pt2" ]
0
NONE
### 🐛 Describe the bug In PyTorch 2.7, when running a compiled attention block with a non-contiguous query input to `F.scaled_dot_product_attention` on CUDA, I got a stride mismatch error. The default mode for `torch.compile` is used. The non-contiguous query comes from transposing the sequence and head dimensions, which should be...
true
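A repro sketch under assumed shapes (the report does not give exact sizes; the key ingredient is the (B, S, H, D) -> (B, H, S, D) transpose that makes the query non-contiguous):

```python
import torch
import torch.nn.functional as F

def attn(q, k, v):
    return F.scaled_dot_product_attention(q, k, v)

q = torch.randn(2, 16, 8, 64, device="cuda").transpose(1, 2)  # non-contiguous
k = torch.randn(2, 8, 16, 64, device="cuda")
v = torch.randn(2, 8, 16, 64, device="cuda")

compiled = torch.compile(attn)   # default mode, as in the report
compiled(q, k, v)                # reported to hit a stride mismatch in 2.7
```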
3,037,003,894
[FSDP2] fully_shard(mesh=(shard, shard)) for intra and inter node all-gathers
weifengpy
open
[ "oncall: distributed", "triaged" ]
3
CONTRIBUTOR
### 🚀 The feature, motivation and pitch current status: `fully_shard(mesh=(shard))` does intra/inter node all-gather together by calling `torch.distributed.all_gather_into_tensor` once. What if we all-gather in 2 stages: do inter-node AG first, then intra-node AG? For recommendation workloads, we can have the following AG...
true
3,037,002,135
[CUDA][cuDNN] Fix handling of `CPU` side input and target length tensors in `CTCLoss`
eqy
closed
[ "module: cudnn", "module: cuda", "open source", "Merged", "ciflow/trunk", "topic: bug fixes", "topic: not user facing" ]
3
COLLABORATOR
https://github.com/pytorch/pytorch/pull/128271 migrated to cuDNN V8 CTCLoss, which expects input and target length tensors to be on `CUDA` rather than `CPU`, without adding the logic to account for the edge case of them being on `CPU`; see also #152421 cc @csarofeen @ptrblck @xwang233 @msaroufim @jerryzh168
true
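A sketch of the edge case being handled (whether the cuDNN kernel is actually selected depends on dtypes and shapes; the point is only that length tensors may legitimately live on CPU while `log_probs` is on CUDA):

```python
import torch
import torch.nn.functional as F

T, N, C, S = 50, 4, 20, 10
log_probs = torch.randn(T, N, C, device="cuda").log_softmax(2)
targets = torch.randint(1, C, (N, S), dtype=torch.int32, device="cuda")
input_lengths = torch.full((N,), T, dtype=torch.int32)    # CPU tensor
target_lengths = torch.full((N,), S, dtype=torch.int32)   # CPU tensor
loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
```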
3,037,001,982
Ensure mxfp8 scaled_mm works w/ max-autotune
drisspg
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: inductor", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152665 * __->__ #152744 cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
true
3,036,995,932
[MPS] Migrate `div` to Metal
malfet
closed
[ "Merged", "topic: not user facing", "release notes: mps", "ciflow/mps", "keep-going" ]
4
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152758 * __->__ #152743 TODOs: - Verify accuracy of `metal::dot` vs `x.x*x.x + y.y*y.y`
true
3,036,993,246
[export][cond] support merging constant ints as unbacked symint
ydwu4
open
[ "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor", "release notes: export" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152742 @pianpwk points out that this will be helpful to address several data dependent issues in huggingface [models](https://github.com/huggingface/diffusers/blob/e23705e5577387872dd55ebf6db81bd59df928f1/src/diffusers/schedulers/...
true
3,036,992,230
[dynamo] Support `delattr` on result of `torch.compile(module)`
StrongerXi
closed
[ "Merged", "ciflow/trunk", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
3
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * __->__ #152741 * #152740 This is essentially a follow-up on #122098, where we added support of `getattr` and `setattr` on result of `torch.compile(module)`, but didn't add support for `delattr`. Fixes #150711. cc @voznesenskym @penguinwu @...
true
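What the PR enables, as a sketch (per the description, `getattr`/`setattr` already forward to the wrapped module since #122098; `delattr` now matches):

```python
import torch

mod = torch.nn.Linear(4, 4)
compiled = torch.compile(mod)

compiled.flag = 1          # setattr forwards to mod (existing behavior)
del compiled.flag          # delattr now forwards too, instead of raising
assert not hasattr(mod, "flag")
```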
3,036,992,201
[dynamo] Avoid running `torch.nn.Module.__call__` twice under `torch.compile(mod)`
StrongerXi
closed
[ "Merged", "topic: not user facing", "module: dynamo", "ciflow/inductor" ]
2
CONTRIBUTOR
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #152741 * __->__ #152740 When we do `torch.compile(mod)`, we eventually end up returning a new module instance, whose `forward` method is the result of `torch.compile(mod.__call__)`, meaning it already captures all the extra logic (e.g., hoo...
true
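A sketch of the symptom implied by the truncated description (the hook-counting harness is illustrative): logic in `Module.__call__`, such as forward hooks, should run once per invocation of the compiled wrapper, not twice.

```python
import torch

mod = torch.nn.Linear(2, 2)
calls = []
mod.register_forward_hook(lambda m, inp, out: calls.append(1))

compiled = torch.compile(mod)
compiled(torch.randn(1, 2))
print(len(calls))  # expected 1; the bug made __call__ logic run twice
```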