torch.nn

These are the basic building blocks for graphs.

Parameter

A kind of Tensor that is to be considered a module parameter.

Containers

Module

Base class for all neural network modules.

Sequential

A sequential container.

ModuleList

Holds submodules in a list.

ModuleDict

Holds submodules in a dictionary.

ParameterList

Holds parameters in a list.

ParameterDict

Holds parameters in a dictionary.
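
To show how these containers fit together, here is a minimal sketch; the SmallNet class, its layer sizes, and the two output heads are invented for illustration only.

```python
import torch
from torch import nn

class SmallNet(nn.Module):
    """Toy module combining Sequential, ModuleList, and Parameter."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(      # layers applied in order
            nn.Linear(16, 32),
            nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(32, 4), nn.Linear(32, 2)])
        self.scale = nn.Parameter(torch.ones(1))   # registered as a learnable parameter

    def forward(self, x):
        h = self.features(x) * self.scale
        return [head(h) for head in self.heads]

net = SmallNet()
outputs = net(torch.randn(8, 16))   # list of tensors with shapes (8, 4) and (8, 2)
```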

Convolution Layers

nn.Conv1d

Applies a 1D convolution over an input signal composed of several input planes.

nn.Conv2d

Applies a 2D convolution over an input signal composed of several input planes.

nn.Conv3d

Applies a 3D convolution over an input signal composed of several input planes.

nn.ConvTranspose1d

Applies a 1D transposed convolution operator over an input image composed of several input planes.

nn.ConvTranspose2d

Applies a 2D transposed convolution operator over an input image composed of several input planes.

nn.ConvTranspose3d

Applies a 3D transposed convolution operator over an input image composed of several input planes.

nn.Unfold

Extracts sliding local blocks from a batched input tensor.

nn.Fold

Combines an array of sliding local blocks into a large containing tensor.
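
A usage sketch for the convolution layers above; all shapes and hyperparameters are illustrative, not prescribed by the reference.

```python
import torch
from torch import nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=3, kernel_size=2, stride=2)

x = torch.randn(4, 3, 32, 32)   # hypothetical batch of 4 RGB images, 32x32
y = conv(x)                     # (4, 16, 32, 32): padding=1 preserves the spatial size
z = deconv(y)                   # (4, 3, 64, 64): the stride-2 transposed conv doubles H and W
```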

Pooling layers

nn.MaxPool1d

Applies a 1D max pooling over an input signal composed of several input planes.

nn.MaxPool2d

Applies a 2D max pooling over an input signal composed of several input planes.

nn.MaxPool3d

Applies a 3D max pooling over an input signal composed of several input planes.

nn.MaxUnpool1d

Computes a partial inverse of MaxPool1d.

nn.MaxUnpool2d

Computes a partial inverse of MaxPool2d.

nn.MaxUnpool3d

Computes a partial inverse of MaxPool3d.

nn.AvgPool1d

Applies a 1D average pooling over an input signal composed of several input planes.

nn.AvgPool2d

Applies a 2D average pooling over an input signal composed of several input planes.

nn.AvgPool3d

Applies a 3D average pooling over an input signal composed of several input planes.

nn.FractionalMaxPool2d

Applies a 2D fractional max pooling over an input signal composed of several input planes.

nn.LPPool1d

Applies a 1D power-average pooling over an input signal composed of several input planes.

nn.LPPool2d

Applies a 2D power-average pooling over an input signal composed of several input planes.

nn.AdaptiveMaxPool1d

Applies a 1D adaptive max pooling over an input signal composed of several input planes.

nn.AdaptiveMaxPool2d

Applies a 2D adaptive max pooling over an input signal composed of several input planes.

nn.AdaptiveMaxPool3d

Applies a 3D adaptive max pooling over an input signal composed of several input planes.

nn.AdaptiveAvgPool1d

Applies a 1D adaptive average pooling over an input signal composed of several input planes.

nn.AdaptiveAvgPool2d

Applies a 2D adaptive average pooling over an input signal composed of several input planes.

nn.AdaptiveAvgPool3d

Applies a 3D adaptive average pooling over an input signal composed of several input planes.
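
A short sketch of typical pooling usage (illustrative NCHW shapes):

```python
import torch
from torch import nn

x = torch.randn(4, 16, 32, 32)

maxpool = nn.MaxPool2d(kernel_size=2)   # halves H and W
gap = nn.AdaptiveAvgPool2d(1)           # always produces a 1x1 output, whatever the input size

y = maxpool(x)   # (4, 16, 16, 16)
g = gap(x)       # (4, 16, 1, 1), e.g. as a global-average-pooling head
```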

Padding Layers

nn.ReflectionPad1d

Pads the input tensor using the reflection of the input boundary.

nn.ReflectionPad2d

Pads the input tensor using the reflection of the input boundary.

nn.ReplicationPad1d

Pads the input tensor using replication of the input boundary.

nn.ReplicationPad2d

Pads the input tensor using replication of the input boundary.

nn.ReplicationPad3d

Pads the input tensor using replication of the input boundary.

nn.ZeroPad2d

Pads the input tensor boundaries with zero.

nn.ConstantPad1d

Pads the input tensor boundaries with a constant value.

nn.ConstantPad2d

Pads the input tensor boundaries with a constant value.

nn.ConstantPad3d

Pads the input tensor boundaries with a constant value.
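
A quick comparison of two padding modes on a made-up 3x3 input:

```python
import torch
from torch import nn

x = torch.randn(1, 1, 3, 3)

reflect = nn.ReflectionPad2d(1)       # mirrors the values at the border
constant = nn.ConstantPad2d(1, 0.5)   # pads every side with the constant 0.5

print(reflect(x).shape)    # torch.Size([1, 1, 5, 5])
print(constant(x).shape)   # torch.Size([1, 1, 5, 5])
```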

Non-linear Activations (weighted sum, nonlinearity)

nn.ELU

Applies the Exponential Linear Unit (ELU) function, element-wise.

nn.Hardshrink

Applies the hard shrinkage function element-wise.

nn.Hardsigmoid

Applies the Hardsigmoid function element-wise.

nn.Hardtanh

Applies the HardTanh function element-wise.

nn.Hardswish

Applies the Hardswish function, element-wise, as described in the paper Searching for MobileNetV3.

nn.LeakyReLU

Applies the LeakyReLU function element-wise.

nn.LogSigmoid

Applies the LogSigmoid function element-wise.

nn.MultiheadAttention

Allows the model to jointly attend to information from different representation subspaces.

nn.PReLU

Applies the PReLU (parametric ReLU) function element-wise.

nn.ReLU

Applies the rectified linear unit function element-wise.

nn.ReLU6

Applies the ReLU6 function element-wise.

nn.RReLU

Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper Empirical Evaluation of Rectified Activations in Convolutional Network.

nn.SELU

Applies the SELU function element-wise.

nn.CELU

Applies the CELU function element-wise.

nn.GELU

Applies the Gaussian Error Linear Units (GELU) function.

nn.SiLU

Applies the Sigmoid Linear Unit (SiLU) function, element-wise.

nn.Sigmoid

Applies the Sigmoid function element-wise.

nn.Softplus

Applies the Softplus function element-wise.

nn.Softshrink

Applies the soft shrinkage function element-wise.

nn.Softsign

Applies the Softsign function element-wise.

nn.Tanh

Applies the Tanh function element-wise.

nn.Tanhshrink

Applies the Tanhshrink function element-wise.

nn.Threshold

Thresholds each element of the input Tensor.
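
Most of these modules have no parameters and simply transform each element; a minimal sketch with arbitrary shapes:

```python
import torch
from torch import nn

x = torch.randn(4, 8)

relu = nn.ReLU()
gelu = nn.GELU()
leaky = nn.LeakyReLU(negative_slope=0.01)

print(bool((relu(x) >= 0).all()))   # True: negatives are clamped to zero
print(gelu(x).shape)                # (4, 8): same shape as the input
print(leaky(x).shape)               # (4, 8): negatives are scaled by 0.01 instead of zeroed
```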

Non-linear Activations (other)

nn.Softmin

Applies the Softmin function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

nn.Softmax

Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.

nn.Softmax2d

Applies SoftMax over features to each spatial location.

nn.LogSoftmax

Applies the log(Softmax(x)) function to an n-dimensional input Tensor.

nn.AdaptiveLogSoftmaxWithLoss

Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou.
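
A sketch showing the usual dim argument; the class count and shapes are arbitrary:

```python
import torch
from torch import nn

logits = torch.randn(4, 10)      # e.g. scores for 10 classes

softmax = nn.Softmax(dim=1)
log_softmax = nn.LogSoftmax(dim=1)

probs = softmax(logits)
print(probs.sum(dim=1))          # each row sums to 1 (up to floating-point error)
print(torch.allclose(log_softmax(logits), probs.log()))   # True
```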

Normalization Layers

nn.BatchNorm1d

Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

nn.BatchNorm2d

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

nn.BatchNorm3d

Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

nn.GroupNorm

Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.

nn.SyncBatchNorm

Applies Batch Normalization over an N-dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

nn.InstanceNorm1d

Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.

nn.InstanceNorm2d

Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.

nn.InstanceNorm3d

Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.

nn.LayerNorm

Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization.

nn.LocalResponseNorm

Applies local response normalization over an input signal composed of several input planes, where channels occupy the second dimension.
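
A sketch of the three most common choices; the feature sizes are invented:

```python
import torch
from torch import nn

x_img = torch.randn(8, 16, 32, 32)   # NCHW feature map
x_seq = torch.randn(8, 20, 64)       # (batch, sequence, features)

bn = nn.BatchNorm2d(num_features=16)      # normalizes over N, H, W for each channel
gn = nn.GroupNorm(num_groups=4, num_channels=16)
ln = nn.LayerNorm(normalized_shape=64)    # normalizes over the last dimension

print(bn(x_img).shape, gn(x_img).shape)   # both (8, 16, 32, 32)
print(ln(x_seq).shape)                    # (8, 20, 64)
```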

Recurrent Layers

nn.RNNBase

nn.RNN

Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.

nn.LSTM

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

nn.GRU

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

nn.RNNCell

An Elman RNN cell with tanh or ReLU non-linearity.

nn.LSTMCell

A long short-term memory (LSTM) cell.

nn.GRUCell

A gated recurrent unit (GRU) cell.
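
A minimal LSTM sketch; the sizes are illustrative:

```python
import torch
from torch import nn

lstm = nn.LSTM(input_size=32, hidden_size=64, num_layers=2, batch_first=True)

x = torch.randn(8, 15, 32)        # (batch, seq_len, input_size)
output, (h_n, c_n) = lstm(x)

print(output.shape)   # (8, 15, 64): last-layer hidden state at every time step
print(h_n.shape)      # (2, 8, 64): final hidden state for each layer
```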

Transformer Layers

nn.Transformer

A transformer model.

nn.TransformerEncoder

TransformerEncoder is a stack of N encoder layers.

nn.TransformerDecoder

TransformerDecoder is a stack of N decoder layers.

nn.TransformerEncoderLayer

TransformerEncoderLayer is made up of self-attn and feedforward network.

nn.TransformerDecoderLayer

TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.
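
A small encoder-only sketch; model width, head count, and depth are arbitrary. These layers take sequence-first input by default.

```python
import torch
from torch import nn

encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=3)

src = torch.randn(10, 8, 64)    # (seq_len, batch, d_model)
memory = encoder(src)
print(memory.shape)             # (10, 8, 64)
```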

Linear Layers

nn.Identity

A placeholder identity operator that is argument-insensitive.

nn.Linear

Applies a linear transformation to the incoming data: y = xA^T + b

nn.Bilinear

Applies a bilinear transformation to the incoming data: y = x1^T A x2 + b
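
A quick sketch of both layers (feature sizes invented):

```python
import torch
from torch import nn

linear = nn.Linear(in_features=20, out_features=5)
bilinear = nn.Bilinear(in1_features=20, in2_features=30, out_features=5)

x1 = torch.randn(8, 20)
x2 = torch.randn(8, 30)

print(linear(x1).shape)         # (8, 5): y = x A^T + b
print(bilinear(x1, x2).shape)   # (8, 5): y = x1^T A x2 + b, applied per batch element
```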

Dropout Layers

nn.Dropout

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution.

nn.Dropout2d

Randomly zeroes out entire channels (a channel is a 2D feature map; e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor input[i, j]).

nn.Dropout3d

Randomly zeroes out entire channels (a channel is a 3D feature map; e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]).

nn.AlphaDropout

Applies Alpha Dropout over the input.
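
Dropout behaves differently in training and evaluation mode, as the sketch below illustrates:

```python
import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(4, 10)

drop.train()
print(drop(x))                    # roughly half the entries zeroed, survivors scaled by 1/(1-p) = 2

drop.eval()
print(torch.equal(drop(x), x))    # True: dropout is a no-op in eval mode
```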

Sparse Layers

nn.Embedding

A simple lookup table that stores embeddings of a fixed dictionary and size.

nn.EmbeddingBag

Computes sums or means of ‘bags’ of embeddings, without instantiating the intermediate embeddings.
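
A minimal Embedding lookup; the vocabulary size and dimension are placeholders:

```python
import torch
from torch import nn

embedding = nn.Embedding(num_embeddings=1000, embedding_dim=16)

token_ids = torch.tensor([[1, 5, 7], [2, 2, 9]])   # (batch, seq_len) integer indices
vectors = embedding(token_ids)
print(vectors.shape)   # (2, 3, 16): one 16-d vector per index
```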

Distance Functions

nn.CosineSimilarity

Returns cosine similarity between x1 and x2, computed along dim.

nn.PairwiseDistance

Computes the batchwise pairwise distance between vectors v1 and v2 using the p-norm.
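
Both modules reduce a pair of batched vectors to one value per row (shapes illustrative):

```python
import torch
from torch import nn

x1 = torch.randn(8, 32)
x2 = torch.randn(8, 32)

cos = nn.CosineSimilarity(dim=1)
dist = nn.PairwiseDistance(p=2)

print(cos(x1, x2).shape)    # (8,): one similarity in [-1, 1] per row pair
print(dist(x1, x2).shape)   # (8,): one Euclidean distance per row pair
```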

Loss Functions

nn.L1Loss

Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.

nn.MSELoss

Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y.

nn.CrossEntropyLoss

This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.

nn.CTCLoss

The Connectionist Temporal Classification loss.

nn.NLLLoss

The negative log likelihood loss.

nn.PoissonNLLLoss

Negative log likelihood loss with Poisson distribution of target.

nn.KLDivLoss

The Kullback-Leibler divergence loss.

nn.BCELoss

Creates a criterion that measures the Binary Cross Entropy between the target and the output.

nn.BCEWithLogitsLoss

This loss combines a Sigmoid layer and the BCELoss in one single class.

nn.MarginRankingLoss

Creates a criterion that measures the loss given inputs x1 and x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).

nn.HingeEmbeddingLoss

Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1).

nn.MultiLabelMarginLoss

Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices).

nn.SmoothL1Loss

Creates a criterion that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise.

nn.SoftMarginLoss

Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).

nn.MultiLabelSoftMarginLoss

Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C).

nn.CosineEmbeddingLoss

Creates a criterion that measures the loss given input tensors x1, x2 and a Tensor label y with values 1 or -1.

nn.MultiMarginLoss

Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 ≤ y ≤ x.size(1)-1).

nn.TripletMarginLoss

Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0.
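
A sketch of the two most common classification losses; the batch size, class count, and tensor names are made up:

```python
import torch
from torch import nn

# Multi-class: CrossEntropyLoss takes raw logits and integer class targets.
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))

ce = nn.CrossEntropyLoss()
loss = ce(logits, targets)
loss.backward()                 # gradients flow back into `logits`

# Binary: BCEWithLogitsLoss fuses the Sigmoid with BCELoss for numerical stability.
bin_logits = torch.randn(8, 1)
bin_targets = torch.randint(0, 2, (8, 1)).float()
print(nn.BCEWithLogitsLoss()(bin_logits, bin_targets))
```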

Vision Layers

nn.PixelShuffle

Rearranges elements in a tensor of shape (*, C × r^2, H, W) to a tensor of shape (*, C, H × r, W × r).

nn.Upsample

Upsamples given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.

nn.UpsamplingNearest2d

Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.

nn.UpsamplingBilinear2d

Applies a 2D bilinear upsampling to an input signal composed of several input channels.
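
PixelShuffle trades channels for resolution, while Upsample interpolates; a sketch with invented shapes:

```python
import torch
from torch import nn

x = torch.randn(1, 16, 8, 8)

shuffle = nn.PixelShuffle(upscale_factor=2)
upsample = nn.Upsample(scale_factor=2, mode='nearest')

print(shuffle(x).shape)    # (1, 4, 16, 16): C / r^2 channels, H*r x W*r spatial size
print(upsample(x).shape)   # (1, 16, 16, 16): same channels, doubled spatial size
```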

DataParallel Layers (multi-GPU, distributed)

nn.DataParallel

Implements data parallelism at the module level.

nn.parallel.DistributedDataParallel

Implements distributed data parallelism based on the torch.distributed package at the module level.
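
A rough single-process sketch of wrapping a model with DataParallel; it only has an effect when more than one GPU is visible, and DistributedDataParallel (which needs a process group) is generally preferred for serious multi-GPU training.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)      # splits each input batch across the visible GPUs
if torch.cuda.is_available():
    model = model.cuda()

x = torch.randn(16, 32).to(next(model.parameters()).device)
print(model(x).shape)   # (16, 10)
```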

Utilities

From the torch.nn.utils module

clip_grad_norm_

Clips gradient norm of an iterable of parameters.

clip_grad_value_

Clips gradient of an iterable of parameters at specified value.

parameters_to_vector

Converts parameters to one vector.

vector_to_parameters

Converts one vector to the parameters.

prune.BasePruningMethod

Abstract base class for creation of new pruning techniques.

prune.PruningContainer

Container holding a sequence of pruning methods for iterative pruning.

prune.Identity

Utility pruning method that does not prune any units but generates the pruning parametrization with a mask of ones.

prune.RandomUnstructured

Prune (currently unpruned) units in a tensor at random.

prune.L1Unstructured

Prune (currently unpruned) units in a tensor by zeroing out the ones with the lowest L1-norm.

prune.RandomStructured

Prune entire (currently unpruned) channels in a tensor at random.

prune.LnStructured

Prune entire (currently unpruned) channels in a tensor based on their Ln-norm.

prune.CustomFromMask

Prune a tensor using a user-provided mask.

prune.identity

Applies pruning reparametrization to the tensor corresponding to the parameter called name in module without actually pruning any units.

prune.random_unstructured

Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) units selected at random.

prune.l1_unstructured

Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) units with the lowest L1-norm.

prune.random_structured

Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim selected at random.

prune.ln_structured

Prunes tensor corresponding to parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim with the lowest Ln-norm.

prune.global_unstructured

Globally prunes tensors corresponding to all parameters in parameters by applying the specified pruning_method.

prune.custom_from_mask

Prunes tensor corresponding to parameter called name in module by applying the pre-computed mask in mask.

prune.remove

Removes the pruning reparameterization from a module and the pruning method from the forward hook.

prune.is_pruned

Checks whether module is pruned by looking for forward_pre_hooks in its modules that inherit from the BasePruningMethod.

weight_norm

Applies weight normalization to a parameter in the given module.

remove_weight_norm

Removes the weight normalization reparameterization from a module.

spectral_norm

Applies spectral normalization to a parameter in the given module.

remove_spectral_norm

Removes the spectral normalization reparameterization from a module.
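
A combined sketch of gradient clipping, pruning, and weight normalization on a throwaway model (the layer sizes and the 30% pruning amount are arbitrary):

```python
import torch
from torch import nn
from torch.nn.utils import clip_grad_norm_, weight_norm
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Gradient clipping: call after backward(), before the optimizer step.
model(torch.randn(8, 16)).sum().backward()
total_norm = clip_grad_norm_(model.parameters(), max_norm=1.0)
print(float(total_norm))

# Unstructured L1 pruning of 30% of the first Linear layer's weights.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
print(float((model[0].weight == 0).float().mean()))   # roughly 0.3
prune.remove(model[0], "weight")                       # make the pruning permanent

# Weight normalization reparameterizes `weight` into `weight_g` and `weight_v`.
wn_layer = weight_norm(nn.Linear(32, 4))
print(hasattr(wn_layer, "weight_g"), hasattr(wn_layer, "weight_v"))   # True True
```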

Utility functions in other modules

nn.utils.rnn.PackedSequence

Holds the data and list of batch_sizes of a packed sequence.

nn.utils.rnn.pack_padded_sequence

Packs a Tensor containing padded sequences of variable length.

nn.utils.rnn.pad_packed_sequence

Pads a packed batch of variable length sequences.

nn.utils.rnn.pad_sequence

Pads a list of variable-length Tensors with padding_value.

nn.utils.rnn.pack_sequence

Packs a list of variable-length Tensors.

nn.Flatten

Flattens a contiguous range of dims into a tensor.
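
The packing utilities above usually appear together; a sketch with made-up sequence lengths (note that pack_padded_sequence expects lengths sorted in decreasing order unless enforce_sorted=False):

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

seqs = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(2, 8)]   # variable-length, 8-d features
lengths = torch.tensor([5, 3, 2])

padded = pad_sequence(seqs, batch_first=True)                      # (3, 5, 8), zero-padded
packed = pack_padded_sequence(padded, lengths, batch_first=True)

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
packed_out, _ = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape, out_lengths)   # torch.Size([3, 5, 16]) tensor([5, 3, 2])
```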

Quantized Functions

Quantization refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. PyTorch supports both per-tensor and per-channel asymmetric linear quantization. To learn more about how to use quantized functions in PyTorch, please refer to the Quantization documentation.
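
As a rough illustration of the workflow, the sketch below uses dynamic quantization of Linear layers only; see the Quantization documentation for the full static and quantization-aware-training flows.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# Dynamic quantization: weights of the listed module types are stored in int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized(torch.randn(1, 64)).shape)   # (1, 10)
```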
