CUDA semantics

torch.cuda is used to set up and run CUDA operations. It keeps track of the currently selected GPU, and all CUDA tensors you allocate will by default be created on that device. The selected device can be changed with a torch.cuda.device context manager.

However, once a tensor is allocated, you can perform operations on it regardless of the selected device, and the results will always be placed on the same device as the tensor.

Cross-GPU operations are not allowed by default, with the exception of copy_() and other methods with copy-like functionality such as to() and cuda(). Unless you enable peer-to-peer memory access, any attempt to launch ops on tensors spread across different devices will raise an error.

Below you can find a small example showcasing this:

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)

    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)

    c = a + b
    # c.device is device(type='cuda', index=1)

    z = x + y
    # z.device is device(type='cuda', index=0)

    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
    # d.device, e.device, and f.device are all device(type='cuda', index=2)

TensorFloat-32 (TF32) on Ampere (and later) devices

Starting in PyTorch 1.7, there is a new flag called allow_tf32. This flag defaults to True in PyTorch 1.7 to PyTorch 1.11, and False in PyTorch 1.12 and later. This flag controls whether PyTorch is allowed to use the TensorFloat32 (TF32) tensor cores, available on NVIDIA GPUs since Ampere, internally to compute matmul (matrix multiplies and batched matrix multiplies) and convolutions.

TF32 tensor cores are designed to achieve better performance on matmul and convolutions on torch.float32 tensors by rounding input data to have 10 bits of mantissa, and accumulating results with FP32 precision, maintaining FP32 dynamic range.

Matmuls and convolutions are controlled separately, and their corresponding flags can be accessed at:

# The flag below controls whether to allow TF32 on matmul. This flag defaults to False
# in PyTorch 1.12 and later.
torch.backends.cuda.matmul.allow_tf32 = True

# The flag below controls whether to allow TF32 on cuDNN. This flag defaults to True.
torch.backends.cudnn.allow_tf32 = True

The precision of matmuls can also be set more broadly (not limited to CUDA) via torch.set_float32_matmul_precision().

Note that besides matmuls and convolutions themselves, functions and nn modules that internally use matmuls or convolutions are also affected. These include nn.Linear, nn.Conv*, cdist, tensordot, affine grid and grid sample, adaptive log softmax, GRU, and LSTM.

To get an idea of the precision and speed, see the example code and benchmark data (on A100) below:

a_full = torch.randn(10240, 10240, dtype=torch.double, device='cuda')
b_full = torch.randn(10240, 10240, dtype=torch.double, device='cuda')
ab_full = a_full @ b_full
mean = ab_full.abs().mean()  # 80.7277

a = a_full.float()
b = b_full.float()

# Do matmul at TF32 mode.
torch.backends.cuda.matmul.allow_tf32 = True
ab_tf32 = a @ b  # takes 0.016s on GA100
error = (ab_tf32 - ab_full).abs().max()  # 0.1747
relative_error = error / mean  # 0.0022

# Do matmul with TF32 disabled.
torch.backends.cuda.matmul.allow_tf32 = False
ab_fp32 = a @ b  # takes 0.11s on GA100
error = (ab_fp32 - ab_full).abs().max()  # 0.0031
relative_error = error / mean  # 0.000039

From the above example, we can see that with TF32 enabled, the speed is ~7x faster on A100, and that relative error compared to double precision is approximately 2 orders of magnitude larger. Note that the exact ratio of TF32 to single precision speed depends on the hardware generation, as properties such as the ratio of memory bandwidth to compute as well as the ratio of TF32 to FP32 matmul throughput may vary from generation to generation or model to model. If full FP32 precision is needed, users can disable TF32 by:

torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False

To toggle the TF32 flags off in C++, you can do

at::globalContext().setAllowTF32CuBLAS(false);
at::globalContext().setAllowTF32CuDNN(false);

For more information about TF32, see:

  • TensorFloat-32
  • CUDA 11
  • Ampere architecture

Reduced Precision Reduction in FP16 GEMMs

fp16 GEMMs are potentially done with some intermediate reduced precision reductions (e.g., in fp16 rather than fp32). These selective reductions in precision can allow for higher performance on certain workloads (particularly those with a large k dimension) and GPU architectures at the cost of numerical precision and potential for overflow.

Some example benchmark data on V100:

[--- bench_gemm_transformer --]

  [  m ,  k  ,  n  ]    |  allow_fp16_reduc=True  |  allow_fp16_reduc=False
1 threads: --------
  [4096, 4048, 4096]    |           1634.6        |           1639.8
  [4096, 4056, 4096]    |           1670.8        |           1661.9
  [4096, 4080, 4096]    |           1664.2        |           1658.3
  [4096, 4096, 4096]    |           1639.4        |           1651.0
  [4096, 4104, 4096]    |           1677.4        |           1674.9
  [4096, 4128, 4096]    |           1655.7        |           1646.0
  [4096, 4144, 4096]    |           1796.8        |           2519.6
  [4096, 5096, 4096]    |           2094.6        |           3190.0
  [4096, 5104, 4096]    |           2144.0        |           2663.5
  [4096, 5112, 4096]    |           2149.1        |           2766.9
  [4096, 5120, 4096]    |           2142.8        |           2631.0
  [4096, 9728, 4096]    |           3875.1        |           5779.8
  [4096, 16384, 4096]   |           6182.9        |           9656.5
(times in microseconds).

If full precision reductions are needed, users can disable reduced precision reductions in fp16 GEMMs with:

torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False

To toggle the reduced precision reduction flags in C++, one can do

at::globalContext().setAllowFP16ReductionCuBLAS(false);

Reduced Precision Reduction in BF16 GEMMs

A similar flag (as above) exists for BFloat16 GEMMs. Note that this switch defaults to True for BF16; if you observe numerical instability in your workload, you may wish to set it to False.

If reduced precision reductions are not desired, users can disable reduced precision reductions in bf16 GEMMs with:

torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False

To toggle the reduced precision reduction flags in C++, one can do

at::globalContext().setAllowBF16ReductionCuBLAS(true);

Asynchronous execution

By default, GPU operations are asynchronous. When you call a function that uses the GPU, the operations are enqueued to the particular device, but not necessarily executed until later. This allows us to execute more computations in parallel, including operations on CPU or other GPUs.

In general, the effect of asynchronous computation is invisible to the caller, because (1) each device executes operations in the order they are queued, and (2) PyTorch automatically performs necessary synchronization when copying data between CPU and GPU or between two GPUs. Hence, computation will proceed as if every operation was executed synchronously.

You can force synchronous computation by setting the environment variable CUDA_LAUNCH_BLOCKING=1. This can be handy when an error occurs on the GPU. (With asynchronous execution, such an error isn't reported until after the operation is actually executed, so the stack trace does not show where it was requested.)

A consequence of the asynchronous computation is that time measurements without synchronizations are not accurate. To get precise measurements, one should either call torch.cuda.synchronize() before measuring, or use torch.cuda.Event to record times as follows:
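A minimal timing sketch with torch.cuda.Event (the workload between the two record calls is a placeholder):

start_event = torch.cuda.Event(enable_timing=True)
end_event = torch.cuda.Event(enable_timing=True)
start_event.record()

# run the GPU work you want to time here

end_event.record()
torch.cuda.synchronize()  # wait for the events to be recorded
elapsed_time_ms = start_event.elapsed_time(end_event)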


As an exception, several functions such as to() and copy_() admit an explicit non_blocking argument, which lets the caller bypass synchronization when it is unnecessary. Another exception is CUDA streams, explained below.

CUDA streams

A CUDA stream is a linear sequence of execution that belongs to a specific device. You normally do not need to create one explicitly: by default, each device uses its own "default" stream.

Operations inside each stream are serialized in the order they are created, but operations from different streams can execute concurrently in any relative order, unless explicit synchronization functions (such as synchronize() or wait_stream()) are used. For example, the following code is incorrect:
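A sketch of the broken pattern (the tensor A, the result B, and the stream s are illustrative names):

cuda = torch.device('cuda')
s = torch.cuda.Stream()  # Create a new stream.
A = torch.empty((100, 100), device=cuda).normal_(0.0, 1.0)
with torch.cuda.stream(s):
    # sum() may start execution before normal_() finishes!
    B = torch.sum(A)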


When the “current stream” is the default stream, PyTorch automatically performs necessary synchronization when data is moved around, as explained above. However, when using non-default streams, it is the user’s responsibility to ensure proper synchronization. The fixed version of this example is:
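A sketch continuing the example above; the wait_stream and record_stream lines are the additions:

s = torch.cuda.Stream()  # Create a new stream.
A = torch.empty((100, 100), device=cuda).normal_(0.0, 1.0)
s.wait_stream(torch.cuda.default_stream(cuda))  # NEW!
with torch.cuda.stream(s):
    B = torch.sum(A)
A.record_stream(s)  # NEW!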


There are two new additions. The s.wait_stream(torch.cuda.default_stream(cuda)) call ensures that the normal_() execution has finished before we start running sum(A) on the side stream. The A.record_stream(s) call (see torch.Tensor.record_stream() for more details) ensures that we do not deallocate A before sum(A) has completed. You can also manually wait on the stream at some later point in time with torch.cuda.default_stream(cuda).wait_stream(s) (note that it is pointless to wait immediately, since that would prevent the stream execution from running in parallel with other work on the default stream). See the documentation for torch.Tensor.record_stream() for more details on when to use one or the other.

Note that this synchronization is necessary even when there is no read dependency, e.g., as seen in this example:
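A sketch of such a case, where the side stream only writes A but a sync is still required:

cuda = torch.device('cuda')
s = torch.cuda.Stream()  # Create a new stream.
A = torch.empty((100, 100), device=cuda)
s.wait_stream(torch.cuda.default_stream(cuda))  # STILL REQUIRED!
with torch.cuda.stream(s):
    A.normal_(0.0, 1.0)
    A.record_stream(s)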


Despite the computation on s not reading the contents of A, and there being no other uses of A, it is still necessary to synchronize, because A may correspond to memory reallocated by the CUDA caching allocator, with pending operations from the old (deallocated) memory.

Stream semantics of backward passes

Each backward CUDA op runs on the same stream that was used for its corresponding forward op. If your forward pass runs independent ops in parallel on different streams, this helps the backward pass exploit that same parallelism.

The stream semantics of a backward call with respect to surrounding ops are the same as for any other call. The backward pass inserts internal syncs to ensure this even when backward ops run on multiple streams as described in the previous paragraph. More concretely, when calling autograd.backward(), autograd.grad(), or Tensor.backward(), and optionally supplying CUDA tensor(s) as the initial gradient(s) (e.g., autograd.backward(..., grad_tensors=initial_grads), autograd.grad(..., grad_outputs=initial_grads), or tensor.backward(gradient=initial_grad)), the acts of

  1. optionally populating initial gradient(s),
  2. invoking the backward pass, and
  3. using the gradients

have the same stream-semantics relationship as any group of ops:
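A condensed sketch of the safe and unsafe patterns; loss.backward() and the "use grads" comments stand in for your backward call and whatever consumes the gradients:

s = torch.cuda.Stream()

# Safe: backward() and the grad consumer run in the same stream context.
with torch.cuda.stream(s):
    loss.backward()
    # ... use grads here ...

# Unsafe: backward() runs on s, but the grads are consumed on the
# current stream without any synchronization.
with torch.cuda.stream(s):
    loss.backward()
# ... use grads here ...

# Safe: sync the consumer stream with s before using the grads.
with torch.cuda.stream(s):
    loss.backward()
torch.cuda.current_stream().wait_stream(s)
# ... use grads here ...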


BC note: Using grads on the default stream

In prior versions of PyTorch (1.9 and earlier), the autograd engine always synced the default stream with all backward ops, so the following pattern:
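(Sketched here with loss.backward() standing in for the backward call and a comment standing in for the code that uses the grads.)

with torch.cuda.stream(s):
    loss.backward()
# ... use grads on the default stream ...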


was safe as long as the code that uses the grads ran on the default stream. In present PyTorch, that pattern is no longer safe. If backward() and the code that uses the grads are in different stream contexts, you must sync the streams:
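(Sketch; the wait_stream call is the required sync.)

with torch.cuda.stream(s):
    loss.backward()
torch.cuda.current_stream().wait_stream(s)
# ... use grads ...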


even if the code that uses the grads runs on the default stream.

Memory management

PyTorch uses a caching memory allocator to speed up memory allocations. This allows fast memory deallocation without device synchronizations. However, the unused memory managed by the allocator will still show as if used in nvidia-smi. You can use memory_allocated() and max_memory_allocated() to monitor memory occupied by tensors, and use memory_reserved() and max_memory_reserved() to monitor the total amount of memory managed by the caching allocator. Calling empty_cache() releases all unused cached memory from PyTorch so that it can be used by other GPU applications. However, GPU memory occupied by tensors will not be freed, so this cannot increase the amount of GPU memory available to PyTorch.
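A small sketch of these monitoring calls (the reported numbers will vary by device and workload):

x = torch.empty(1024, 1024, device='cuda')
print(torch.cuda.memory_allocated())   # bytes currently occupied by live tensors
print(torch.cuda.memory_reserved())    # bytes held by the caching allocator
del x
torch.cuda.empty_cache()               # release unused cached blocks back to the driver
print(torch.cuda.memory_reserved())    # typically smaller now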

To better understand how CUDA memory is being used over time, the "Understanding CUDA Memory Usage" section of the PyTorch documentation describes tools for capturing and visualizing traces of memory use.

For more advanced users, we offer more comprehensive memory benchmarking via memory_stats(). We also offer the capability to capture a complete snapshot of the memory allocator state via memory_snapshot(), which can help you understand the underlying allocation patterns produced by your code.

Environment variables

Use of a caching allocator can interfere with memory checking tools such as cuda-memcheck. To debug memory errors using cuda-memcheck, set PYTORCH_NO_CUDA_MEMORY_CACHING=1 in your environment to disable caching.

The behavior of the caching allocator can be controlled via the environment variable PYTORCH_CUDA_ALLOC_CONF. The format is PYTORCH_CUDA_ALLOC_CONF=<option>:<value>,<option2>:<value2>... (a combined example appears after the option list below). Available options:

  • backend allows selecting the underlying allocator implementation. Currently, valid options are native, which uses PyTorch's native implementation, and cudaMallocAsync, which uses CUDA's built-in asynchronous allocator. cudaMallocAsync requires CUDA 11.4 or newer. The default is native.
  • backend applies to all devices used by the process, and can't be specified on a per-device basis.
  • max_split_size_mb prevents the native allocator from splitting blocks larger than this size (in MB). This can reduce fragmentation and may allow some borderline workloads to complete without running out of memory. The performance cost can range from 'zero' to 'substantial' depending on allocation patterns. The default value is unlimited, i.e., all blocks can be split. The memory_stats() and memory_summary() methods are useful for tuning. This option should be used as a last resort for a workload that is aborting due to 'out of memory' and showing a large amount of inactive split blocks. max_split_size_mb is only meaningful with backend:native. With backend:cudaMallocAsync, max_split_size_mb is ignored.
  • roundup_power2_divisions helps with rounding the requested allocation size to the nearest power-of-2 division and making better use of the blocks. In the native CUDACachingAllocator, sizes are rounded up in multiples of a 512-byte block size, which works fine for smaller sizes. However, this can be inefficient for large nearby allocations, as each will go to a different block size and reuse of those blocks is minimized. This can create many unused blocks and waste GPU memory capacity. This option enables rounding the allocation size to the nearest power-of-2 division. For example, if we need to round up a size of 1200 and the number of divisions is 4, the size 1200 lies between 1024 and 2048, and with 4 divisions between them the values are 1024, 1280, 1536, and 1792. So an allocation size of 1200 will be rounded to 1280 as the nearest ceiling of a power-of-2 division. Specify a single value to apply to all allocation sizes, or specify an array of key-value pairs to set the power-of-2 division individually for each power-of-two interval. For example, to set 1 division for all allocations under 256MB, 2 divisions for allocations between 256MB and 512MB, 4 divisions for allocations between 512MB and 1GB, and 8 divisions for any larger allocations, set the knob value to: [256:1,512:2,1024:4,>:8]. roundup_power2_divisions is only meaningful with backend:native. With backend:cudaMallocAsync, roundup_power2_divisions is ignored.
  • garbage_collection_threshold helps actively reclaim unused GPU memory to avoid triggering an expensive sync-and-reclaim-all operation (release_cached_blocks), which can be unfavorable to latency-critical GPU applications (e.g., servers). Upon setting this threshold (e.g., 0.8), the allocator will start reclaiming GPU memory blocks if GPU memory capacity usage exceeds the threshold (i.e., 80% of the total memory allocated to the GPU application). The algorithm prefers to free old and unused blocks first, to avoid freeing blocks that are actively being reused. The threshold value should be greater than 0.0 and less than 1.0. garbage_collection_threshold is only meaningful with backend:native. With backend:cudaMallocAsync, garbage_collection_threshold is ignored.
  • expandable_segments (experimental, default: False) If set to True, this setting instructs the allocator to create CUDA allocations that can later be expanded, to better handle cases where a job changes allocation sizes frequently, such as a changing batch size. Normally, for large (>2MB) allocations, the allocator calls cudaMalloc to get allocations that are the same size as what the user requests. In the future, parts of these allocations can be reused for other requests if they are free. This works well when the program makes many requests of exactly the same size or of sizes that are even multiples of that size. Many deep learning models follow this behavior. However, one common exception is when the batch size changes slightly from one iteration to the next, e.g., in batched inference. When the program runs initially with batch size N, it will make allocations appropriate for that size. If it later runs at size N - 1, the existing allocations will still be big enough. However, if it runs at size N + 1, then it will have to make new allocations that are slightly larger. Not all the tensors are the same size. Some might be (N + 1)*A and others (N + 1)*A*B, where A and B are some non-batch dimensions in the model. Because the allocator reuses existing allocations when they are big enough, some number of (N + 1)*A allocations will actually fit in the already existing N*B*A segments, though not perfectly. As the model runs, it will partially fill up all of these segments, leaving unusable free slices of memory at their ends. At some point the allocator will need to cudaMalloc a new (N + 1)*A*B segment. If there is not enough memory, there is now no way to recover the slices of memory that are free at the end of the existing segments. With models 50+ layers deep, this pattern might repeat 50+ times, creating many slivers. expandable_segments allows the allocator to create a segment initially and then expand its size later when more memory is needed. Instead of making one segment per allocation, it tries to make one segment (per stream) that grows as necessary. Now when the N + 1 case runs, the allocations will tile nicely into the one large segment until it fills up. Then more memory is requested and appended to the end of the segment. This process does not create as many slivers of unusable memory, so it is more likely to succeed at finding this memory.
  • pinned_use_cuda_host_register is a boolean flag that determines whether to use the CUDA API's cudaHostRegister function for allocating pinned memory instead of the default cudaHostAlloc. When set to True, the memory is allocated using regular malloc and then pages are mapped to the memory before calling cudaHostRegister. This pre-mapping of pages helps reduce the lock time during the execution of cudaHostRegister.
  • pinned_num_register_threads is only valid when pinned_use_cuda_host_register is set to True. By default, one thread is used to map the pages. This option allows using more threads to parallelize the page-mapping operations to reduce the overall allocation time of pinned memory. A good value for this option is 8, based on benchmarking results.
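As a sketch, the options above are combined into a single comma-separated string; the particular values here are illustrative, not recommendations:

import os
# Must be set before CUDA is initialized in this process.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128,garbage_collection_threshold:0.8"
import torch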

Note

Some stats reported by the CUDA memory management API are specific to backend:native, and are not meaningful with backend:cudaMallocAsync. See each function's docstring for details.

Using custom memory allocators for CUDA

It is possible to define allocators as simple functions in C/C++ and compile them as a shared library; for example, a basic allocator could simply trace all memory operations by printing a line on every allocation and free while delegating to cudaMalloc and cudaFree.


Such a library can be used in Python through torch.cuda.memory.CUDAPluggableAllocator. The user is responsible for supplying the path to the .so file and the names of the alloc/free functions that match the signatures the pluggable-allocator API expects.
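A sketch of the Python side, assuming the shared library alloc.so exports functions named my_malloc and my_free (the library path and function names are illustrative):

import torch

new_alloc = torch.cuda.memory.CUDAPluggableAllocator(
    'alloc.so', 'my_malloc', 'my_free')
# Swap in the custom allocator; this must happen before any CUDA memory is allocated.
torch.cuda.memory.change_current_allocator(new_alloc)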



cuBLAS workspaces

For each combination of cuBLAS handle and CUDA stream, a cuBLAS workspace will be allocated if that handle and stream combination executes a cuBLAS kernel that requires a workspace. In order to avoid repeatedly allocating workspaces, these workspaces are not deallocated unless torch._C._cuda_clearCublasWorkspaces() is called. The workspace size per allocation can be specified via the environment variable CUBLAS_WORKSPACE_CONFIG with the format :[SIZE]:[COUNT]. As an example, the default workspace size per allocation is CUBLAS_WORKSPACE_CONFIG=:4096:2:16:8, which specifies a total size of 2 * 4096 + 8 * 16 KiB. To force cuBLAS to avoid using workspaces, set CUBLAS_WORKSPACE_CONFIG=:0:0.

cuFFT plan cache

For each CUDA device, an LRU cache of cuFFT plans is used to speed up repeatedly running FFT methods (e.g., torch.fft.fft()) on CUDA tensors of the same geometry with the same configuration. Because some cuFFT plans may allocate GPU memory, these caches have a maximum capacity.

You may control and query the properties of the cache of current device with the following APIs:

  • torch.backends.cuda.cufft_plan_cache.max_size gives the capacity of the cache (default is 4096 on CUDA 10 and newer, and 1023 on older CUDA versions). Setting this value directly modifies the capacity.
  • torch.backends.cuda.cufft_plan_cache.size gives the number of plans currently residing in the cache.
  • torch.backends.cuda.cufft_plan_cache.clear() clears the cache.

To control and query plan caches of a non-default device, you can index the torch.backends.cuda.cufft_plan_cache object with either a torch.device object or a device index, and access one of the above attributes. E.g., to set the capacity of the cache for device 1, one can write torch.backends.cuda.cufft_plan_cache[1].max_size = 10.

Just-in-Time Compilation

PyTorch just-in-time compiles some operations, like torch.special.zeta, when performed on CUDA tensors. This compilation can be time consuming (up to a few seconds depending on your hardware and software) and may occur multiple times for a single operator since many PyTorch operators actually select from a variety of kernels, each of which must be compiled once, depending on their input. This compilation occurs once per process, or just once if a kernel cache is used.

By default, PyTorch creates a kernel cache in $XDG_CACHE_HOME/torch/kernels if XDG_CACHE_HOME is defined and $HOME/.cache/torch/kernels if it’s not (except on Windows, where the kernel cache is not yet supported). The caching behavior can be directly controlled with two environment variables. If USE_PYTORCH_KERNEL_CACHE is set to 0 then no cache will be used, and if PYTORCH_KERNEL_CACHE_PATH is set then that path will be used as a kernel cache instead of the default location.

Best practices

Device-agnostic code

Due to the structure of PyTorch, you may need to explicitly write device-agnostic (CPU or GPU) code; an example may be creating a new tensor as the initial hidden state of a recurrent neural network.

The first step is to determine whether the GPU should be used or not. A common pattern is to use Python's argparse module to read in user arguments and have a flag that can be used to disable CUDA, in combination with torch.cuda.is_available(). In the following, args.device results in a torch.device object that can be used to move tensors to CPU or CUDA.
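A sketch of that pattern (the flag name and script structure are illustrative):

import argparse
import torch

parser = argparse.ArgumentParser(description='PyTorch Example')
parser.add_argument('--disable-cuda', action='store_true', help='Disable CUDA')
args = parser.parse_args()
if not args.disable_cuda and torch.cuda.is_available():
    args.device = torch.device('cuda')
else:
    args.device = torch.device('cpu')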


Note

When assessing the availability of CUDA in a given environment (torch.cuda.is_available()), PyTorch's default behavior is to call the CUDA Runtime API method cudaGetDeviceCount. Because this call in turn initializes the CUDA Driver API (via cuInit) if it is not already initialized, subsequent forks of a process that has run torch.cuda.is_available() will fail with a CUDA initialization error.

One can set PYTORCH_NVML_BASED_CUDA_CHECK=1 in your environment before importing PyTorch modules that execute torch.cuda.is_available() (or before executing it directly) in order to direct is_available() to attempt an NVML-based assessment (nvmlDeviceGetCount_v2). If the NVML-based assessment is successful (i.e., NVML discovery/initialization does not fail), is_available() calls will not poison subsequent forks.

If NVML discovery/initialization fails, is_available() will fall back to the standard CUDA Runtime API assessment and the aforementioned fork constraint will apply.

Note that the above NVML-based CUDA availability assessment provides a weaker guarantee than the default CUDA Runtime API approach (which requires CUDA initialization to succeed). In some circumstances, the NVML-based check may succeed while later CUDA initialization fails.

Now that we have args.device, we can use it to create a tensor on the desired device.
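For example (the tensor shape is illustrative, and Network stands in for your own nn.Module):

x = torch.empty((8, 42), device=args.device)
net = Network().to(device=args.device)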


This can be used in a number of cases to produce device agnostic code. Below is an example when using a dataloader:
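A sketch with a dataloader (cuda0 and train_loader are illustrative names):

cuda0 = torch.device('cuda:0')  # CUDA GPU 0
for i, x in enumerate(train_loader):
    x = x.to(cuda0)
    # ... do something with x on GPU 0 ...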


When working with multiple GPUs on a system, you can use the CUDA_VISIBLE_DEVICES environment flag to manage which GPUs are available to PyTorch. As mentioned above, to manually control which GPU a tensor is created on, the best practice is to use a torch.cuda.device context manager.
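For example (the device indices are illustrative):

print("Outside device is 0")  # On device 0 (default in most scenarios)
with torch.cuda.device(1):
    print("Inside device is 1")  # On device 1
print("Outside device is still 0")  # On device 0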


If you have a tensor and would like to create a new tensor of the same type on the same device, then you can use a torch.Tensor.new_* method (see torch.Tensor). Whilst the previously mentioned torch.* factory functions (Creation Ops) depend on the current GPU context and the attribute arguments you pass in, torch.Tensor.new_* methods preserve the device and other attributes of the tensor.

This is the recommended practice when creating modules in which new tensors need to be created internally during the forward pass.
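A sketch of the new_* pattern (shapes and fill values are illustrative):

cuda = torch.device('cuda')
x_cpu = torch.empty(2)
x_gpu = torch.empty(2, device=cuda)
x_cpu_long = torch.empty(2, dtype=torch.int64)

y_cpu = x_cpu.new_full([3, 2], fill_value=0.3)    # stays a float CPU tensor
y_gpu = x_gpu.new_full([3, 2], fill_value=-5)     # stays on the GPU
y_cpu_long = x_cpu_long.new_tensor([[1, 2, 3]])   # stays int64 on the CPU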


If you want to create a tensor of the same type and size as another tensor, and fill it with either ones or zeros, ones_like() or zeros_like() are provided as convenient helper functions (which also preserve the torch.device and torch.dtype of a tensor).
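For example (shapes are illustrative):

x_cpu = torch.empty(2, 3)
x_gpu = torch.empty(2, 3, device='cuda')

y_cpu = torch.ones_like(x_cpu)    # CPU tensor of ones, same shape and dtype
y_gpu = torch.zeros_like(x_gpu)   # CUDA tensor of zeros, same shape and dtype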


Use pinned memory buffers

Warning

This is an advanced tip. If you overuse pinned memory, it can cause serious problems when running low on RAM, and you should be aware that pinning is often an expensive operation.

Host to GPU copies are much faster when they originate from pinned (page-locked) memory. CPU tensors and storages expose a pin_memory() method that returns a copy of the object with its data put in a pinned region.

Also, once you pin a tensor or storage, you can use asynchronous GPU copies. Just pass an additional non_blocking=True argument to a to() or a cuda() call. This can be used to overlap data transfers with computation.
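A sketch of an asynchronous copy from pinned memory (the shape is illustrative):

x = torch.empty(1024, 1024).pin_memory()   # page-locked CPU tensor
y = x.to('cuda', non_blocking=True)        # the copy can overlap with other CPU work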

You can make the DataLoader return batches placed in pinned memory by passing pin_memory=True to its constructor.

Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel

Most use cases involving batched inputs and multiple GPUs should default to using DistributedDataParallel to utilize more than one GPU.

There are significant caveats to using CUDA models with multiprocessing; unless care is taken to meet the data handling requirements exactly, it is likely that your program will have incorrect or undefined behavior.

It is recommended to use DistributedDataParallel instead of DataParallel to do multi-GPU training, even if there is only a single node.

The difference between DistributedDataParallel and DataParallel is that DistributedDataParallel uses multiprocessing, where a process is created for each GPU, while DataParallel uses multithreading. By using multiprocessing, each GPU has its own dedicated process; this avoids the performance overhead caused by the GIL of the Python interpreter.

If you use DistributedDataParallel, you can use the torch.distributed.launch utility to launch your program; see the torch.distributed documentation for details.

CUDA Graphs

A CUDA graph is a record of the work (mostly kernels and their arguments) that a CUDA stream and its dependent streams perform. For general principles and details on the underlying CUDA API, see Getting Started with CUDA Graphs and the Graphs section of the CUDA C Programming Guide.

PyTorch supports the construction of CUDA graphs using stream capture, which puts a CUDA stream in capture mode. CUDA work issued to a capturing stream doesn't actually run on the GPU. Instead, the work is recorded in a graph.

After capture, the graph can be launched to run the GPU work as many times as needed. Each replay runs the same kernels with the same arguments. For pointer arguments this means the same memory addresses are used. By filling input memory with new data (e.g., from a new batch) before each replay, you can rerun the same work on new data.

Why CUDA Graphs?

Replaying a graph sacrifices the dynamic flexibility of typical eager execution in exchange for greatly reduced CPU overhead. A graph's arguments and kernels are fixed, so a graph replay skips all layers of argument setup and kernel dispatch, including Python, C++, and CUDA driver overheads. Under the hood, a replay submits the entire graph's work to the GPU with a single call to cudaGraphLaunch. Kernels in a replay also execute slightly faster on the GPU, but eliding CPU overhead is the main benefit.

You should try CUDA graphs if all or part of your network is graph-safe (usually this means static shapes and static control flow, but see the other constraints) and you suspect its runtime is at least somewhat CPU-limited.

PyTorch API

Warning

This API is in beta and may change in future releases.

PyTorch exposes graphs via a raw torch.cuda.CUDAGraph class and two convenience wrappers, torch.cuda.graph and torch.cuda.make_graphed_callables.

torch.cuda.graph is a simple, versatile context manager that captures CUDA work in its context. Before capture, warm up the workload to be captured by running a few eager iterations. Warmup must occur on a side stream. Because the graph reads from and writes to the same memory addresses in every replay, you must maintain long-lived references to tensors that hold input and output data during capture. To run the graph on new input data, copy new data to the capture's input tensor(s), replay the graph, then read the new output from the capture's output tensor(s). Example:
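A sketch of this flow (the captured computation, tensor shapes, and warmup count are illustrative):

g = torch.cuda.CUDAGraph()

# Placeholder input used for capture; keep a long-lived reference to it.
static_input = torch.empty((5,), device='cuda')

# Warmup on a side stream before capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        static_output = static_input * 2
torch.cuda.current_stream().wait_stream(s)

# Capture the graph; torch.cuda.graph sets a side stream for you.
with torch.cuda.graph(g):
    static_output = static_input * 2

# Fill the graph's input memory with new data, then replay.
static_input.copy_(torch.full((5,), 3.0, device='cuda'))
g.replay()
# static_output now holds the result of computing on the new data (all 6s here).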


See Whole-network capture, Usage with torch.cuda.amp, and Usage with multiple streams for realistic and advanced patterns.

torch.cuda.make_graphed_callables is more sophisticated. It accepts Python functions and torch.nn.Modules. For each passed function or Module, it creates separate graphs of the forward-pass and backward-pass work. See Partial-network capture.

Constraints

A set of ops is capturable if it doesn’t violate any of the following constraints.

Constraints apply to all work in a torch.cuda.graph context and all work in the forward and backward passes of any callable you pass to torch.cuda.make_graphed_callables().

Violating any of these will likely cause a runtime error:

  • Capture must occur on a non-default stream. (This is only a concern if you use the raw CUDAGraph.capture_begin and CUDAGraph.capture_end calls. torch.cuda.graph and torch.cuda.make_graphed_callables() set a side stream for you.)
  • Ops that synchronize the CPU with the GPU (e.g., .item() calls) are prohibited.
  • CUDA RNG ops are allowed, but must use default generators. For example, explicitly constructing a new torch.Generator instance and passing it as the generator argument to an RNG function is prohibited.

Violating any of these will likely cause silent numerical errors or undefined behavior:

  • Within a process, only one capture may be underway at a time.
  • No non-captured CUDA work may run in this process (on any thread) while capture is underway.
  • CPU work is not captured. If the captured ops include CPU work, that work will be elided during replay.
  • Every replay reads from and writes to the same (virtual) memory addresses.
  • Dynamic control flow (based on CPU or GPU data) is prohibited.
  • Dynamic shapes are prohibited. The graph assumes every tensor in the captured op sequence has the same size and layout in every replay.
  • Using multiple streams in a capture is allowed, but there are restrictions (see Usage with multiple streams).

Non-constraints

  • Once captured, the graph may be replayed on any stream.

Whole-network capture

If your entire network is capturable, you can capture and replay an entire iteration:
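A condensed sketch of a fully captured training iteration (the model, loss, shapes, and data are illustrative; in a real setting the warmup should use real batches because it includes optimizer.step()):

N, D_in, H, D_out = 64, 512, 256, 128
model = torch.nn.Sequential(torch.nn.Linear(D_in, H), torch.nn.ReLU(),
                            torch.nn.Linear(H, D_out)).cuda()
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Long-lived placeholders used for capture.
static_input = torch.randn(N, D_in, device='cuda')
static_target = torch.randn(N, D_out, device='cuda')

# Warmup on a side stream.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        optimizer.zero_grad(set_to_none=True)
        loss = loss_fn(model(static_input), static_target)
        loss.backward()
        optimizer.step()
torch.cuda.current_stream().wait_stream(s)

# Capture forward, loss, backward, and the optimizer step.
g = torch.cuda.CUDAGraph()
# Sets grads to None before capture, so backward() creates .grad tensors
# with allocations from the graph's private pool.
optimizer.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    static_loss = loss_fn(model(static_input), static_target)
    static_loss.backward()
    optimizer.step()

real_inputs = [torch.rand_like(static_input) for _ in range(10)]
real_targets = [torch.rand_like(static_target) for _ in range(10)]

for data, target in zip(real_inputs, real_targets):
    # Fill the graph's input memory with new data to compute on.
    static_input.copy_(data)
    static_target.copy_(target)
    # replay() includes forward, backward, and step. The captured backward
    # refills the static .grad tensors in place, so no zero_grad is needed here.
    g.replay()
    # Params have been updated; static_loss holds this iteration's loss.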


Partial-network capture

If some of your network is unsafe to capture (e.g., due to dynamic control flow, dynamic shapes, CPU syncs, or essential CPU-side logic), you can run the unsafe part(s) eagerly and use torch.cuda.make_graphed_callables() to graph only the capture-safe part(s).

By default, callables returned by make_graphed_callables() are autograd-aware, and can be used in the training loop as direct replacements for the functions or torch.nn.Modules you passed.

make_graphed_callables() internally creates CUDAGraph objects, runs warmup iterations, and maintains static inputs and outputs as needed. Therefore (unlike with torch.cuda.graph) you don't need to handle those manually.

In the following example, data-dependent dynamic control flow means the network isn't capturable end-to-end, but make_graphed_callables() lets us capture and run graph-safe sections as graphs regardless:
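A condensed sketch (module definitions, shapes, the data, and the data-dependent branch are illustrative):

N, D = 64, 128
module1 = torch.nn.Linear(D, D).cuda()
module2 = torch.nn.Linear(D, D).cuda()
module3 = torch.nn.Linear(D, D).cuda()

loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(list(module1.parameters()) +
                            list(module2.parameters()) +
                            list(module3.parameters()), lr=0.1)

# Sample args used by make_graphed_callables for warmup and capture;
# their requires_grad state must match the state seen in the live workload.
x = torch.randn(N, D, device='cuda')
h = torch.randn(N, D, device='cuda', requires_grad=True)

module1 = torch.cuda.make_graphed_callables(module1, (x,))
module2 = torch.cuda.make_graphed_callables(module2, (h,))
module3 = torch.cuda.make_graphed_callables(module3, (h,))

real_inputs = [torch.rand_like(x) for _ in range(10)]
real_targets = [torch.randn(N, D, device='cuda') for _ in range(10)]

for data, target in zip(real_inputs, real_targets):
    optimizer.zero_grad(set_to_none=True)
    tmp = module1(data)            # runs as a graph
    if tmp.sum().item() > 0:       # data-dependent control flow runs eagerly
        tmp = module2(tmp)         # runs as a graph
    else:
        tmp = module3(tmp)         # runs as a graph
    loss = loss_fn(tmp, target)
    loss.backward()                # backward of the graphed sections also replays graphs
    optimizer.step()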


Usage with torch.cuda.amp

For typical optimizers, GradScaler.step() syncs the CPU with the GPU, which is prohibited during capture. To avoid errors, either use partial-network capture, or (if forward, loss, and backward are capture-safe) capture forward, loss, and backward but not the optimizer step:
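A condensed sketch, assuming model, loss_fn, optimizer, scaler, static_input, static_target, and the real data are set up (and warmed up on a side stream) as in the whole-network sketch above:

g = torch.cuda.CUDAGraph()
optimizer.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    # cache_enabled=False because the autocast weight cache is not capture-safe.
    with torch.cuda.amp.autocast(cache_enabled=False):
        static_y_pred = model(static_input)
        static_loss = loss_fn(static_y_pred, static_target)
    scaler.scale(static_loss).backward()
    # scaler.step() and scaler.update() are NOT captured.

for data, target in zip(real_inputs, real_targets):
    static_input.copy_(data)
    static_target.copy_(target)
    g.replay()  # replays forward, loss, and backward; grads are refilled in place
    # Run scaler.step and scaler.update eagerly (they sync the CPU with the GPU).
    scaler.step(optimizer)
    scaler.update()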


Usage with multiple streams

Capture mode automatically propagates to any streams that sync with a capturing stream. Within capture, you may expose parallelism by issuing calls to different streams, but the overall stream dependency DAG must branch out from the initial capturing stream after capture begins and rejoin the initial stream before capture ends:
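A sketch of the constraint (g, s, and cuda_work() are illustrative placeholders):

with torch.cuda.graph(g):
    # at context manager entrance, torch.cuda.current_stream()
    # is the initial capturing stream

    # INCORRECT (does not branch out from or rejoin the initial stream)
    with torch.cuda.stream(s):
        cuda_work()

    # CORRECT:
    # branches out from the initial stream
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        cuda_work()
    # rejoins the initial stream before capture ends
    torch.cuda.current_stream().wait_stream(s)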


Note

To avoid confusion for power users looking at replays in nsight systems or nvprof: Unlike eager execution, the graph interprets a nontrivial stream DAG in capture as a hint, not a command. During replay, the graph may reorganize independent ops onto different streams or enqueue them in a different order (while respecting your original DAG’s overall dependencies).

Usage with DistributedDataParallel

NCCL < 2.9.6

NCCL versions earlier than 2.9.6 don't allow collectives to be captured. You must use partial-network capture, which defers allreduces to happen outside graphed sections of backward.

Call make_graphed_callables() on graphable network sections before wrapping the network with DDP.

NCCL >= 2.9.6

NCCL versions 2.9.6 or later allow collectives in the graph. Approaches that capture an entire backward pass are a viable option, but need three setup steps.

  1. Disable DDP's internal async error handling, e.g., by setting the relevant NCCL async error handling environment variable to 0 before initializing the process group.
  2. Before full-backward capture, DDP must be constructed in a side-stream context (i.e., inside a with torch.cuda.stream(s): block).
  3. Your warmup must run at least 11 DDP-enabled eager iterations before capture.

Graph memory management

A captured graph acts on the same virtual addresses every time it replays. If PyTorch frees the memory, a later replay can hit an illegal memory access. If PyTorch reassigns the memory to new tensors, the replay can corrupt the values seen by those tensors. Therefore, the virtual addresses used by the graph must be reserved for the graph across replays. The PyTorch caching allocator achieves this by detecting when capture is underway and satisfying the capture's allocations from a graph-private memory pool. The private pool stays alive until its CUDAGraph object and all tensors created during capture go out of scope.

Private pools are maintained automatically. By default, the allocator creates a separate private pool for each capture. If you capture multiple graphs, this conservative approach ensures graph replays never corrupt each other’s values, but sometimes needlessly wastes memory.

Sharing memory across captures

To economize the memory stashed in private pools, torch.cuda.graph and torch.cuda.make_graphed_callables() optionally allow different captures to share the same private pool. It's safe for a set of graphs to share a private pool if you know they'll always be replayed in the same order they were captured, and never be replayed concurrently.

torch.cuda.graph's pool argument is a hint to use a particular private pool, and can be used to share memory across graphs as shown:
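A sketch of sharing a pool across two captures (g1_workload, g2_workload, and the static tensors are illustrative placeholders, and each workload would be warmed up as in the earlier sketches):

g1 = torch.cuda.CUDAGraph()
g2 = torch.cuda.CUDAGraph()

# (create static inputs for g1 and g2, and run warmups of their workloads)

# Capture g1.
with torch.cuda.graph(g1):
    static_out_1 = g1_workload(static_in_1)

# Capture g2, hinting that g2 may share g1's memory pool.
with torch.cuda.graph(g2, pool=g1.pool()):
    static_out_2 = g2_workload(static_in_2)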


With torch.cuda.make_graphed_callables(), if you want to graph several callables and you know they'll always run in the same order (and never concurrently), pass them as a tuple in the same order they'll run in the live workload, and make_graphed_callables() will capture their graphs using a shared private pool.

If, in the live workload, your callables will run in an order that occasionally changes, or if they'll run concurrently, passing them as a tuple to a single invocation of make_graphed_callables() is not allowed. Instead, you must call make_graphed_callables() separately for each one.