| Serial Number (int64, 1–6k) | Issue Number (int64, 75.6k–112k) | Title (string, length 3–357) | Labels (string, length 3–241, nullable ⌀) | Body (string, length 9–74.5k, nullable ⌀) | Comments (int64, 0–867) |
|---|---|---|---|---|---|
1 | 111,756 | userwarning: loky-backed parallel loops cannot be called in a multiprocessing with num_workers=1 but two dataloaders | null | ### 🐛 Describe the bug
Setting num_workers=1 speeds up the dataloader a lot, but it doesn't seem to work when I have more than one dataloader. When I only have one, no warning appears and the enumeration only takes 0.3s. However, when I have 2 dataloaders (train and val), the warning starts to appear on every iteration and i... | 0 |
2 | 111,755 | [dtensor] add device_mesh.device_type to make RNGStateTracker support CUDA-like devices | open source | [dtensor] Add device_mesh.device_type to make RNGStateTracker support CUDA-like devices | 1 |
3 | 111,754 | [dynamo] Better determinism of `ConfigModule` by walking using pytree | null | ### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/pull/111318
Currently, validation only occurs at the root. However, we should walk the pytree of each object to ensure types are respected.
In particular, we can do conversions of unfriendly types
- function objects into `"f{__module_... | 0 |
4 | 111,753 | [dynamo] AutogradFunctionMethodHigherOrderVariable check for new guards is broken | module: dynamo | AutogradFunctionMethodHigherOrderVariable has a check for new guards being added in the following places:
https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/torch/_dynamo/variables/higher_order_ops.py#L1091
https://github.com/pytorch/pytorch/blob/f0cde8613c4c8814e157c0a742187a91aa72a009/... | 0 |
5 | 111,752 | Is it a good time to switch to CXX11_ABI? | null | ### 🚀 The feature, motivation and pitch
Now most CI jobs use g++ >= 9, except the Android jobs, which use g++-8. Given this situation, is it possible to always use CXX11_ABI and get rid of the many checks in build systems?
### Alternatives
_No response_
### Additional context
_No response_ | 0 |
6 | 111,749 | [dynamo] Expand _nonvar_fields names | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111749
This should be a small compile time optimization, since we won't need to
walk these fields in apply().
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ch... | 1 |
7 | 111,748 | Allow to specify specific files for debug info | topic: not user facing | Building with `USE_CUSTOM_DEBINFO=torch/csrc/Module.cpp python setup.py develop` for example will provide debug info only for this file.
This allows to enable debug symbols very fast from a non-debug build by doing a clean then develop (as long as you have ccache) and avoid very large binaries that take a very long ti... | 1 |
8 | 111,748 | New swap function | module: dynamo | This PR proposes a new approach to solving the problem that nn and optim are linked only by Python object identity.
The idea is to have a function that can swap the content of two Tensors t1 and t2 while preserving all the old references.
This would allow us to swap the `model.weight` with a new Tensor (can be any subclass of T... | 3 |
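The object-identity problem this PR targets can be illustrated without torch at all: an optimizer captures references to parameter objects at construction, so replacing a parameter object severs the link, while swapping contents in place preserves every old reference. A minimal pure-Python sketch (the class names are hypothetical stand-ins, not the PR's API):

```python
class Param:
    """Stand-in for a tensor-like parameter holding a list of values."""
    def __init__(self, data):
        self.data = list(data)

class Optimizer:
    """Captures references to parameter objects at construction time."""
    def __init__(self, params):
        self.params = list(params)
    def step(self, lr=0.1):
        for p in self.params:
            p.data = [v - lr for v in p.data]

w = Param([1.0, 2.0])
opt = Optimizer([w])

# Replacing the object severs the link: the optimizer keeps updating the
# old object, and the replacement is never touched.
replacement = Param([5.0, 6.0])
opt.step()
print(replacement.data)  # [5.0, 6.0] -- untouched

# Swapping *contents* in place preserves all old references, so the
# optimizer now effectively updates the swapped-in values.
w.data, replacement.data = replacement.data, w.data
opt.step()
print(w.data)  # the swapped-in values, now updated by the optimizer
```

The same reasoning is why in-place content swapping, rather than rebinding `model.weight`, keeps an already-constructed optimizer working.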
9 | 111,746 | [dynamo] add repro for functorch/fx interop issue (`allow_in_graph`) | open source, topic: not user facing, module: dynamo | Fixes https://github.com/pytorch/pytorch/issues/109025 by adding repro
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
10 | 111,745 | [dynamo]: `nn.Module` recursively set `training` mode via `train` and `eval` | open source, topic: not user facing, module: dynamo, ciflow/inductor | Fixes https://github.com/pytorch/pytorch/issues/109885
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
11 | 111,744 | ninja: build stopped: subcommand failed | oncall: pt2 | ### 🐛 Describe the bug
When I try to build pytorch from source on linux, I face a confusing problem when I run 'python setup.py install'.
Here are the error logs from running 'python setup.py install' the second time.
### Error logs
(pytorch_install) [root@cn0 pytorch-1.7]# python setup.py installBuilding ... | 0 |
12 | 111,743 | WIP Implement channels_last_3d convolution | module: cpu, open source | May contribute to #59168
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | 3 |
13 | 111,742 | Add CSR tensor with non-contiguous values support to CuSparseSpMatCsrDescriptor | module: sparse, open source, release notes: sparse, topic: new features | Fixes https://github.com/pytorch/pytorch/issues/111574
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111742
cc @alexsamardzic @nikitaved @cpuhrsch @amjames @bhosmer | 3 |
14 | 111,741 | [dynamo] `{*}Tensor.__init__` from list of ndarray as `torch.stack(List[FakeTensor])` | open source, module: dynamo, ciflow/inductor | Follow up to https://github.com/pytorch/pytorch/pull/111665
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @lezcano | 2 |
15 | 111,740 | GPU computation is not equivalent | null | ### 🐛 Describe the bug
GPU computation is not equivalent, but it is equivalent on CPU. Why? And how can I avoid this?
```python
import torch
import torch.nn as nn
hidden_states = torch.randn([4, 2048, 512])
v_proj = nn.Linear(512, 128, bias=False)
value_states = v_proj(hidden_states)
h1, h2 = torch.chunk(h... | 0 |
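For context on the row above: divergence between CPU and GPU results for the same linear algebra usually comes from floating-point non-associativity combined with different reduction orders, not from an incorrect kernel. A torch-free sketch of the root cause (the values are contrived for illustration):

```python
# Floating-point addition is not associative, so two mathematically
# identical sums can differ depending on accumulation order -- which is
# exactly what differs between CPU and GPU reduction kernels.
xs = [1e16, 1.0, -1e16]

left_to_right = (xs[0] + xs[1]) + xs[2]  # the 1.0 is absorbed into 1e16
cancel_first = (xs[0] + xs[2]) + xs[1]   # the large terms cancel first

print(left_to_right)  # 0.0
print(cancel_first)   # 1.0
```

The usual mitigations are comparing with `torch.allclose` rather than exact equality, or using deterministic algorithms at a performance cost.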
16 | 111,739 | grad is inf/nan when using torch.amp | null | ### 🐛 Describe the bug
Below is a very simple example of using torch.amp, but the gradients are inf/nan.
```python
import torch
from torch.cuda.amp import GradScaler
from torch import optim
scaler = GradScaler()
a = torch.randn(2, 2, requires_grad=True, device="cuda")
b = torch.randn(2, 2, requires_grad=True, device=... | 0 |
17 | 111,738 | [dynamo] Implement `set.__contains__` for `Tensor` as object match of `FakeTensor` | open source, topic: not user facing, module: dynamo, ciflow/inductor | Fixes https://github.com/pytorch/pytorch/issues/111556
Dynamo implementation of `set.__contains__` previously used `__eq__` match.
But this is wrong when `__eq__` match does not imply `__hash__` match, as is the case for `torch.Tensor`, leading to inconsistent results. See: https://github.com/pytorch/pytorch/issu... | 2 |
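The row above hinges on a general Python rule that can be demonstrated without torch: `set.__contains__` compares hashes before ever calling `__eq__`, so a type with an identity-based `__hash__` but a value-based `__eq__` (the situation for `torch.Tensor`) makes set membership an identity check. A hypothetical stand-in class:

```python
class TensorLike:
    """Mimics torch.Tensor's protocol: value-based __eq__ but
    identity-based __hash__ (defining only __eq__ would disable hashing)."""
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return isinstance(other, TensorLike) and self.value == other.value
    __hash__ = object.__hash__  # identity hash, as on torch.Tensor

a = TensorLike(1)
b = TensorLike(1)

print(a == b)    # True  -- equal by value
print(b in {a})  # False -- hashes differ, so __eq__ is never consulted
print(a in {a})  # True  -- same object, same hash bucket
```

This is why an `__eq__`-based containment check in the tracer diverges from eager behavior whenever equality does not imply hash equality.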
18 | 111,737 | Support calling __torch_function__ attribute access | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111737
* #111731
* #111730
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
19 | 111,736 | Implementation of Lion Optimizer. | null | ### 🚀 The feature, motivation and pitch
Lion Optimizer is becoming a great alternative to the AdamW and Adam optimizers. It is more efficient as it does not use second-order moments and instead uses sign operations to update the weights. This saves memory and decreases training time. In some cases it is bett... | 0 |
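For context, the Lion update rule (from the "Symbolic Discovery of Optimization Algorithms" paper) steps by the sign of an interpolation between the momentum and the current gradient, keeping a single state buffer. A dependency-free sketch of one step on plain Python lists (an illustration of the rule, not PyTorch's implementation):

```python
def sign(x):
    return (x > 0) - (x < 0)

def lion_step(params, grads, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update: step by the sign of an interpolated momentum;
    no second-order moment estimate is kept, unlike Adam/AdamW."""
    new_params, new_momentum = [], []
    for p, g, m in zip(params, grads, momentum):
        update = sign(beta1 * m + (1 - beta1) * g)        # sign of interpolation
        new_params.append(p - lr * (update + wd * p))     # decoupled weight decay
        new_momentum.append(beta2 * m + (1 - beta2) * g)  # EMA of gradients
    return new_params, new_momentum

params, momentum = [0.5, -0.5], [0.0, 0.0]
params, momentum = lion_step(params, [1.0, -2.0], momentum, lr=0.1)
print(params)  # each parameter moved by exactly lr against its gradient's sign
```

Because the step magnitude is always `lr` per coordinate, only the sign of the gradient interpolation matters, which is where the memory and compute savings come from.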
20 | 111,735 | hack hack hack | ciflow/inductor, release notes: AO frontend | Fixes #ISSUE_NUMBER
| 1 |
21 | 111,734 | Is the index_add_ function differentiable? | null | ### 🚀 The feature, motivation and pitch
verts_normals = torch.zeros_like(cornea_vertex)
vertices_faces = cornea_vertex[face_index]
faces_normals = torch.cross(
vertices_faces[:, 2] - vertices_faces[:, 1],
vertices_faces[:, 0] - vertices_faces[:, 1],
... | 0 |
22 | 111,733 | Bug: torch.compile fails to compile torch.func.vmap with reduction functions and raw python numbers | null | ### 🐛 Describe the bug
`torch.compile` fails to compile vmap transformation with reduction functions and native python numbers. This bug was only found when using reduction functions, and there are several workarounds as shown in the following examples:
```python
import torch
torch._dynamo.reset()
torch._dyna... | 0 |
23 | 111,732 | Pass `ignored_params` at the leaf FSDP wrapping class call | open source, release notes: distributed (fsdp) | Fixes #111623 | 2 |
24 | 111,731 | Support tracing base torch_function impl | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111737
* __->__ #111731
* #111730
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
25 | 111,730 | TensorWithTFOverride inheritance from TensorVariable | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111737
* #111731
* __->__ #111730
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
26 | 111,729 | An OOM where there should not be any OOM. | null | ### 🐛 Describe the bug
I see similar types of errors being asked about in quite a few places, with the advice given usually being useless. The suggestion below to muck around with an environment variable is similarly useless.
What is confounding to me is that memory allocation is tiny in comparison to still available s... | 0 |
27 | 111,728 | Not Implemented Issue | null | ### 🚀 The feature, motivation and pitch
NotImplementedError: The operator 'aten::_unique2' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you... | 0 |
28 | 111,727 | [TESTING] Check Triton update after elementwise dedup fix | ciflow/trunk, topic: not user facing, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111727
This PR is patched over the current Triton pin: https://github.com/openai/triton/pull/2512 . | 1 |
29 | 111,726 | [dynamo] Remove VariableTracker.propagate | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111726
* #111725
* #111415
* #111614
* #111717
* #111306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
30 | 111,725 | [dynamo] Remove VariableTracker.add_options | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111726
* __->__ #111725
* #111415
* #111614
* #111717
* #111306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
31 | 111,724 | [torchx] Do not terminate parent process if exit code from child isn't valid | fb-exported | Summary:
There's no reason to terminate the parent process while trying to find the name of the signal received by the child process.
Let's make sure this is handled properly, which will ensure that the parent process can handle child failures.
Test Plan: Unit tests.
Differential Revision: D50516668
| 6 |
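The fix described in the summary above amounts to treating signal-name resolution as fallible. A generic sketch of the pattern using only the standard library (not the torchx code itself):

```python
import signal

def describe_exit(returncode):
    """Describe a subprocess return code; negative means 'killed by signal'.

    An unrecognized signal number must not raise in the parent -- fall
    back to reporting the raw value instead.
    """
    if returncode >= 0:
        return f"exited with code {returncode}"
    try:
        name = signal.Signals(-returncode).name  # e.g. SIGKILL for -9
    except ValueError:  # not a valid signal number on this platform
        name = f"unknown signal {-returncode}"
    return f"killed by {name}"

print(describe_exit(0))     # exited with code 0
print(describe_exit(-9))    # killed by SIGKILL (on POSIX)
print(describe_exit(-999))  # killed by unknown signal 999
```

Wrapping the `signal.Signals(...)` lookup is the key point: the lookup is purely diagnostic, so its failure should degrade the message, not the process.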
32 | 111,722 | Add cudagraph_mark_step_begin in torch.compiler, reference in error message | module: inductor, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111722
cc @chauhang
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler | 1 |
33 | 111,721 | Constrain sdpa to fx strides | module: inductor, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111721
Fix for https://github.com/pytorch/pytorch/issues/109607. sdpa requires last dimension strides to be 1. Add constraint so that we run the op with the strides we observed in tracing.
cc @voznesenskym @penguinwu @EikanWa... | 2 |
34 | 111,719 | add dry run metrics to td strategies | topic: not user facing | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111719
Creates a decorator in order to emit metrics for dry runs on target determination strategies. @ZainRizvi does this seem reasonable?
<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at c90a623</samp>
Add a new ... | 1 |
35 | 111,718 | Wrong way of checking if CustomModule is a subclass of torch.nn.Module | null | ### 🐛 Describe the bug
When I build my custom module and try to add it to a sequential text processing pipeline with "[torchtext.transforms.Sequential](https://pytorch.org/text/stable/transforms.html#torchtext.transforms.Sequential)", it raises an error even though I'm doing the subclassing correctly.
This is a fragment o... | 0 |
36 | 111,717 | [dynamo] Lazily construct symbolic_locals | module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111726
* #111725
* #111415
* #111614
* __->__ #111717
* #111306
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
37 | 111,716 | Cannot pip install torch 2.0.1 | null | ### 🐛 Describe the bug
I was trying to follow the instruction on the [webpage](https://pytorch.org/get-started/previous-versions/) to install torch 2.0.1 using pip.
```
# ROCM 5.4.2 (Linux only)
pip install torch==2.0.1+rocm5.4.2 torchvision==0.15.2+rocm5.4.2 torchaudio==2.0.2 --index-url https://download.pytor... | 0 |
38 | 111,715 | [Export] Don't serialize missing args with default value | fb-exported, topic: not user facing, module: inductor, ciflow/inductor, module: export | Summary: Per https://docs.google.com/document/d/1FzWm-sHYwmRi3x_g036kOxd99KaYquUsA-L5JwOn8ys/edit
I wonder if this would break executorch? @larryliu0820
I see exir/serialize.py using export's GraphModuleSerializer.
Test Plan: Existing CIs
Differential Revision: D50519217
cc @voznesenskym @penguinwu... | 4 |
39 | 111,713 | [dynamo] generic `is_` type shortcut is not appropriately guarded | bug, oncall: pt2, module: dynamo | ### 🐛 Describe the bug
This hack
https://github.com/pytorch/pytorch/blob/5a2f97dee80ca27b732e12b61359d6e475a9c03b/torch/_dynamo/variables/builtin.py#L1310
in https://github.com/pytorch/pytorch/pull/104840
is too strong.
### Use-Cases
Support for tracing `is_` when there's type mismatch: https://github.co... | 0 |
40 | 111,712 | Re-enable some embedded bag tests | topic: not user facing | They were temporary disabled in 2019 by https://github.com/pytorch/pytorch/pull/26599
Maybe it has been fixed already...
<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at 1e49d84</samp>
> _`TestEmbeddingNN`_
> _CUDA tests restored_
> _Bug fixed in autumn breeze_
| 1 |
41 | 111,711 | [aotinductor] 14k models: CppCompileError: C++ compile error | triaged, oncall: pt2 | ```
25 errors like: CppCompileError: C++ compile error (example ./generated/test_krrish94_nerf_pytorch.py:SinThetaByTheta # pytest ./generated/test_krrish94_nerf_pytorch.py -k test_001)
```
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 | 1 |
42 | 111,710 | `fbgemm` update causes failures in `test_embedding.py` | high priority, triage review, module: regression, module: third_party | ### 🐛 Describe the bug
```
% python3 test/nn/test_embedding.py -k test_EmbeddingBag_per_sample_weights_and_new_offsets_cpu_int32_int32_bfloat16
...
AssertionError: Tensor-likes are not close!
Mismatched elements: 4 / 10 (40.0%)
Greatest absolute difference: 9.1875 at index (3, 1) (up to 0.1 allowed)
Greatest ... | 2 |
43 | 111,709 | lintrunner job time keeps growing | triaged, module: devx | For example:
Sep 29 https://hud.pytorch.org/pytorch/pytorch/commit/bc047ec906d8e1730e2ccd8192cef3c3467d75d1 - 18 mins
Oct 06 https://hud.pytorch.org/pytorch/pytorch/commit/65d40a72c4ff3cf5218dffda8b5da60ea2163890 - 22 mins
Today, Oct 20 https://hud.pytorch.org/pytorch/pytorch/commit/303c54dbd9921d78ed01116547c063b45... | 3 |
44 | 111,706 | DISABLED test_meta_outplace_fft_ifft_cpu_uint8 (__main__.TestMetaCPU) | triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2 | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_uint8&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17905842710).
Over the past 3 hours, it has been... | 1 |
45 | 111,704 | Add more flexibility on print / output console | null | ### 🚀 The feature, motivation and pitch
For many debugging uses, printing tensors to the console is clearly useful. The current C++ API output fixes the `std::cout` precision for floats at 4 decimals.
> float can hold up to 7 decimal digits accurately while double can hold up to 15
The only 'user' parameter... | 0 |
46 | 111,695 | Runnings SentenceTransformer encoding step causes Docker containers on Mac (Silicon) to crash with code 139 | null | ### 🐛 Describe the bug
Hi! Hopefully there isn't a similar issue already open. I couldn't find one after a search through the issues list. Feel free to mark as duplicate/close if it already exists.
I've created this repository with a minimal setup to reproduce the error: https://github.com/sabaimran/repro-torch-... | 0 |
47 | 111,694 | [Release/2.1.1][ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935 | module: onnx, open source, release notes: onnx | Original PR: https://github.com/pytorch/pytorch/pull/110956
Fixes https://github.com/pytorch/pytorch/issues/110597
Summary:
* Generic code: The torch._C.Value.node().mustBeNone() is encapsulated into the high-level API JitScalarType.from_value ; _is_none was also extended to allow either None or torch._C.Value... | 1 |
48 | 111,693 | [export] 14k models: AssertionError: graph-captured input # 2, of type <class 'torch.nn.parameter.Parameter'>, is not among original inputs of types | triaged, oncall: pt2, module: export | 167 errors like: AssertionError: graph-captured input # 2, of type <class 'torch.nn.parameter.Parameter'>, is not among original inputs of types: (<class 'torch.Tensor'>) (example ./generated/test_XPixelGroup_BasicSR.py:SPADEResnetBlock # pytest ./generated/test_XPixelGroup_BasicSR.py -k test_030)
cc @ezyang @msaroufi... | 1 |
49 | 111,692 | DISABLED test_sigmoid (__main__.TestQuantizedOps) | oncall: quantization, triaged, module: macos, skipped | Platforms: mac, macos
This test was disabled because it is failing on main branch ([recent examples](http://torch-ci.com/failure/test_quantization.py%3A%3ATestQuantizedOps%3A%3Atest_sigmoid)).
This test is failing on MacOS x86 https://hud.pytorch.org/pytorch/pytorch/commit/ca7d084ff9b67675cfff0d175ea6b96fcedc4950... | 1 |
50 | 111,691 | [aotinductor] 14k models: TypeError: make_boxed_func..g() missing 1 required positional argument: 'args' | triaged, oncall: pt2 | 347 errors like: TypeError: make_boxed_func..g() missing 1 required positional argument: 'args' (example ./generated/test_ludwig_ai_ludwig.py:SequenceReducer # pytest ./generated/test_ludwig_ai_ludwig.py -k test_015)
cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 | 1 |
51 | 111,689 | [Quantization] Add a test for QAT + PTQ selective quantization in | release notes: quantization | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111689
xnnpack quantizer
Summary:
For some workflows you want to quantize some parts of the model via qat
and then continue eager mode training. After training, you want to
export the whole model and perform PTQ on the rest.
Test P... | 3 |
52 | 111,688 | Document torch.from_file and fix UntypedStorage.from_file docs | release notes: python_frontend, topic: docs | Fixes https://github.com/pytorch/pytorch/issues/37439
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111688
cc @albanD | 1 |
53 | 111,687 | [Release/2.1.1][DCP] Remove _shard_tensor() call in load_sharded_optimizer_state_dict in optimizer.py #111096 | null | Cherry pick into 2.1.1
[original PR: #111096](https://github.com/pytorch/pytorch/pull/111096)
_shard_tensor() calls into dist.all_gather_object() and this is causing optimizer state dict loading to be super slow. Workaround: call FSDP._shard_utils._create_chunk_sharded_tensor() to construct ShardedTensor without an... | 1 |
54 | 111,686 | RecursionError for backend='inductor' with a loop | oncall: pt2 | ### 🐛 Describe the bug
Running the following code causes RecursionError.
It's not a very practical example, but it works totally fine in eager mode and with `torch.jit.script`.
``` python
import torch
class Net(torch.nn.Module):
def forward(self, x):
for i in range(1000):
x = 1.0 * ... | 2 |
55 | 111,685 | Disable dynamo when running generated opcheck tests | fb-exported | Summary: Use `TORCHDYNAMO_DISABLE=1` when running generated opcheck tests. Enable some `fbgemm::pack_segments` tests that errored out (with error `RuntimeError: expected int but got s0*s1**2`) because dynamo was being run in the opcheck tests.
Test Plan: `parsh -v --build-flags mode/dev-nosan //deeplearning/fbgemm/fbg... | 5 |
56 | 111,682 | [BE]: ruff apply rule PLW1510 to find silent subprocess errors | open source, better-engineering, NNC, release notes: jit, module: dynamo, ciflow/inductor | Opts in to check=True or check=False to ensure nonzero exit codes are propogated
cc @EikanWang @jgong5 @voznesenskym @penguinwu @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng | 1 |
57 | 111,681 | Make require_stride_order peek into AliasedLayout | module: inductor, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111681
Summary:
`require_stride_order` doesn't know how to handle storage with `AliasedLayout`. It always resorts to a copy even when the view refers to a storage with `FixedLayout`. This causes an unneccessary allocation + copy ... | 2 |
58 | 111,680 | [pytorch-vulkan] Support zero-dim | fb-exported, module: vulkan, release notes: vulkan, ciflow/periodic | Summary:
1. Add zero-dim (Tensor with 1 element) support.
2. New operator `_local_scalar_dense` that map a zero-dim tensor into a Scalar
3. `sum_dim`:
3.1. Add zero-dim support.
3.2. Fix bug in negative indices when handling multi-dim reduction call
3.3. Add unittests to test new coverages
4. Add `aten::sum` support.
5... | 2 |
59 | 111,679 | [Release/2.1.1] [Test][ShardedTensor] Add test for corner case for chunk sharding spec #109626 | topic: not user facing | Cherry pick https://github.com/pytorch/pytorch/pull/109626 into release/2.1.1
This adds a test case to cover the corner case of empty shards when creating ShardedTensor.
Original fix contributed by a user.
https://github.com/pytorch/pytorch/pull/108915
Cherry-pick PR for the fix above: https://github.com/pytorch/... | 1 |
60 | 111,678 | AOT Inductor Does not Work with minifier | ciflow/inductor | ### 🐛 Describe the bug
Because AOT Inductor attaches parameters to the GraphModule, it does not currently work with minifier.
> File "/opt/dlami/nvme/eellison/work/pytorch/torch/_dynamo/repro/after_aot.py", line 444, in repro_common
assert not any(mod.named_parameters())
### Versions
master | 0 |
61 | 111,677 | [2.1.1] Update NCCL to 2.18.6 for upstream bugfix | open source, topic: not user facing | This updates NCCL in PyTorch 2.1 with one tiny bugfix in from this commit: https://github.com/NVIDIA/nccl/commit/4365458757e4107ecbf629b2fd6e0e19a5d237c2 It's a minor bugfix release, otherwise everything is exactly the same as the release currently in PyTorch. We already updated to 2.19 upstream.
| 1 |
62 | 111,676 | [export] self.buffer += 1 raises error | triaged, module: export | ```
import torch
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
self.register_buffer("foo", torch.ones(2, 3))
def forward(self, x: torch.Tensor) -> torch.Tensor:
self.foo += x
return self.foo
torch.export(Mod(), (torch.ones(2, 3),))
```
produc... | 0 |
63 | 111,674 | Dynamo Compile samples should record file/line that raised exception | null | ### 🐛 Describe the bug
@voznesenskym and I were looking at https://fburl.com/scuba/dynamo_compile/7pzz3bi1 and we noticed that "Fail reason" doesn't include the file/line that raised the exception, which would be useful.
cc @yanboliang
### Versions
main | 0 |
64 | 111,673 | [quant][bc-breaking] Remove deprecated QConfigDynamic | release notes: quantization | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111673
Summary: QConfigDynamic was deprecated in PyTorch 1.12. It has
continued to cause confusion to users who wish to use dynamic
quantization. This commit removes this deprecated API and
requires users to use QConfig instead.
BC-... | 3 |
65 | 111,669 | Buffer overflow not prevented on MPS devices | null | ### 🐛 Describe the bug
When indexing using an indexing tensor (or list), it is possible to read or write outside the valid range of the tensor.
Minimal example:
```
import torch
x = torch.arange(4, device=torch.device("mps"))
y = x[:2]
y[torch.tensor([3])] = -1
x[3]
```
This code should raise an Inde... | 0 |
66 | 111,667 | [Release/2.1] Introduce is_big_gpu condition for test_max_autotune | open source, topic: not user facing, module: inductor | Fixes https://github.com/pytorch/pytorch/issues/111527
Other test files that rely on max_autotune mode being enabled already gate the UT suite on this condition (e.g. test_select_algorithm). Proposing to add the same condition for test_max_autotune.
Currently we are observing failures on these UTs on the RO... | 1 |
67 | 111,666 | torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::binary_cross_entropy' to ONNX opset version 14 is not supported. | module: onnx | ### 🚀 The feature, motivation and pitch
Unable to export ONNX model from https://github.com/xue-pai/FuxiCTR/tree/main/model_zoo/AFM. While exporting onnx it throws torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::binary_cross_entropy'
### Alternatives
_No response_
### Additional context
... | 1 |
68 | 111,665 | [dynamo] Fix guard for ndarray calling `torch.as_tensor(None)` | open source, topic: not user facing, module: dynamo, ciflow/inductor | Fixes https://github.com/pytorch/pytorch/issues/111662
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @lezcano | 1 |
69 | 111,663 | [dynamo] Tracking: object identity | null | ### 🚀 The feature, motivation and pitch
This covers many things:
1. Tensor Identity
2. User objects identity - usual objects, enums, builtins??
Use cases
- [ ] https://github.com/pytorch/pytorch/issues/111550
- [ ] https://github.com/pytorch/pytorch/issues/111556
Tensor Aliasing Methods and Obstacles
- [... | 0 |
70 | 111,662 | torch.dynamo (caching?) issues with `Optional[np.ndarray]` arguments | module: numpy, module: dynamo | ### 🐛 Describe the bug
```
$ cat nonz.py
import torch
import numpy as np
def fn(x=None):
if x is None:
x = np.ones(3)
return x**2
opt_fn = torch.compile(fn)
x = np.zeros((2, 2))
print(opt_fn(x))
print(opt_fn())
```
fails with
```
$ python nonz.py
[[0. 0.]
[0. 0.]]
ERROR ... | 1 |
71 | 111,661 | Higher-level custom op API, V3 | null | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111661
* #111660
* #111310
* #111659
* #111380
This PR introduces:
- a FunctionalBaseOp class.
- To define a new custom op, a user subclasses FunctionalBaseOp, adds
their device-specific implementations (by adding static methods to
... | 1 |
72 | 111,660 | torch.library: Create helper function `is_functional_schema` | null | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111661
* __->__ #111660
* #111310
* #111659
* #111380
I will need this again soon.
Test Plan:
- existing tests | 1 |
73 | 111,659 | Change torch.library.impl to accept a device string | null | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #111661
* #111660
* #111310
* __->__ #111659
* #111380
torch.library.impl now accepts a device string (e.g. "cpu", "cuda"). It
still accepts DispatchKey strings, but we no longer document this, because
using arbitrary DispatchKeys is more fo... | 1 |
74 | 111,657 | [aotinductor] Update test utility to use AOTIModelRunner | module: inductor, module: dynamo, ciflow/inductor | Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #111657
Summary: Use AOTIModelRunner provided by libtorch instead of the custom-written RAIIModelContainer for testing. This change also makes running AOTInductor benchmarks on CPU possible.
cc @voznesenskym @penguinwu @EikanWang @jg... | 1 |
75 | 111,656 | WIP Adding 512 to xblock size config | open source, module: inductor, ciflow/inductor | Try to see perf improvements with adding 512 into xblock size:
inductor-A100-perf-nightly :
- https://github.com/pytorch/pytorch/actions/runs/6589467661
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78... | 2 |
76 | 111,654 | Static Linking C++, Op not available at runtime | null | ### 🐛 Describe the bug
When linking with static libtorch and torchvision libraries, I am able to build, but at runtime, I get an error about an `Unknown builtin op: aten::mul`.
I have found references indicating that including <torchvision/vision.h> should cause the operators to be registered so they are linked i... | 0 |
77 | 111,653 | s390x vectorization: implement atanh for complex vectorized data | module: cpu, open source | s390x vectorization: implement atanh for complex vectorized data
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | 2 |
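For reference on the row above, a vectorized complex `atanh` is typically built from the logarithmic identity atanh(z) = ½ · ln((1 + z) / (1 − z)), which reduces to primitives (add, divide, log) that SIMD backends like s390x already provide. A scalar Python sketch of the identity, checked against the standard library:

```python
import cmath

def atanh_via_log(z):
    """atanh(z) = 0.5 * log((1 + z) / (1 - z)) -- the identity a SIMD
    implementation can evaluate with vectorized div/log primitives."""
    return 0.5 * cmath.log((1 + z) / (1 - z))

# Sample points away from the branch cuts (real axis with |x| >= 1).
for z in [0.25 + 0.5j, -0.3 + 0.1j, 1.0j]:
    assert abs(atanh_via_log(z) - cmath.atanh(z)) < 1e-12
print("identity matches cmath.atanh")
```

A production vector implementation also has to match the library's branch-cut conventions near the real axis, which this scalar sketch sidesteps by sampling away from the cuts.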
78 | 111,651 | DISABLED test_meta_outplace_fft_ifft_cpu_int64 (__main__.TestMetaCPU) | triaged, module: flaky-tests, skipped, module: primTorch, oncall: pt2 | Platforms: dynamo
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_meta_outplace_fft_ifft_cpu_int64&suite=TestMetaCPU) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/17893852781).
Over the past 3 hours, it has been... | 1 |
79 | 111,650 | FSDP CPU Offload + fp16 + sharded grad scaler crash / hang | oncall: distributed, triaged, module: fsdp | ### 🐛 Describe the bug
I get the following when running the above combination:
```
ERROR:aiplatform.error_reporting.error_reporting:Exception Found: Could not run 'ate n::_amp_foreach_non_finite_check_and_unscale_' with arguments from the 'CPU' backend . This could be because the operator doesn't exist fo... | 1 |
80 | 111,649 | [dynamo] higher-order ops do not preserve `FakeTensor` for in-place ops | triaged, module: fakeTensor, module: functorch, module: dynamo | ### 🐛 Describe the bug
```python
def fn(z):
x = z.clone()
y = torch.vmap(torch.Tensor.acos_)(x)
# y's fake tensor is not x's fake tensor in terms of pyobjects
return y is x
fn_opt = torch.compile(backend="eager", fullgraph=True, dynamic=True)(fn)
z = torch.ones(4, 1)
self.assertEqual(f... | 1 |
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
- Curated by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
Dataset Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
[More Information Needed]
Who are the source data producers?
[More Information Needed]
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]