
torch.svd

torch.svd(input, some=True, compute_uv=True, out=None) -> (Tensor, Tensor, Tensor)

This function returns a namedtuple (U, S, V) which is the singular value decomposition of an input real matrix or a batch of real matrices input, such that input = U \times diag(S) \times V^T.

If some is True (default), the method returns the reduced singular value decomposition, i.e., if the last two dimensions of input are m and n, then the returned U and V matrices will contain only min(n, m) orthonormal columns.

If compute_uv is False, the returned U and V matrices will be zero matrices of shape (m \times m) and (n \times n) respectively. some will be ignored here.
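
As a quick sketch of how some and compute_uv affect the returned shapes, here for a single 5 x 3 input (so min(m, n) = 3):

>>> import torch
>>> a = torch.randn(5, 3)
>>> u, s, v = torch.svd(a)                    # some=True (default): reduced SVD
>>> u.shape, s.shape, v.shape
(torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))
>>> u, s, v = torch.svd(a, some=False)        # full SVD
>>> u.shape, s.shape, v.shape
(torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))
>>> u, s, v = torch.svd(a, compute_uv=False)  # U and V come back as zero matrices
>>> u.shape, v.shape
(torch.Size([5, 5]), torch.Size([3, 3]))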

Note

The singular values are returned in descending order. If input is a batch of matrices, then the singular values of each matrix in the batch are returned in descending order.
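
For example, a short sketch checking the ordering for a batch of matrices (the batch shape here is chosen only for illustration):

>>> import torch
>>> _, s, _ = torch.svd(torch.randn(4, 5, 3))
>>> s.shape
torch.Size([4, 3])
>>> torch.all(s[..., :-1] >= s[..., 1:])
tensor(True)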

Note

The implementation of SVD on CPU uses the LAPACK routine ?gesdd (a divide-and-conquer algorithm) instead of ?gesvd for speed. Analogously, the SVD on GPU uses the MAGMA routine gesdd.

Note

Irrespective of the original strides, the returned matrix U will be transposed, i.e., with strides U.contiguous().transpose(-2, -1).stride().
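
As a sketch of what this means in practice, for a square input (the exact layout may vary across backends and PyTorch versions, so treat the result as illustrative):

>>> import torch
>>> u, _, _ = torch.svd(torch.randn(3, 3))
>>> u.stride() == u.contiguous().transpose(-2, -1).stride()
True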

Note

Extra care needs to be taken when backpropagating through U and V. Such an operation is only stable when input is full rank with all distinct singular values. Otherwise, NaN can appear, as the gradients are not properly defined. Also, note that double backward will usually perform an additional backward through U and V even if the original backward is only on S.
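
A minimal sketch of a stable case: a generic random matrix is full rank with distinct singular values (with probability one), so the gradients come out finite:

>>> import torch
>>> a = torch.randn(5, 3, dtype=torch.double, requires_grad=True)
>>> u, s, v = torch.svd(a)
>>> (u.sum() + s.sum() + v.sum()).backward()
>>> torch.isfinite(a.grad).all()
tensor(True)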

Note

When some = False, the gradients on U[..., :, min(m, n):] and V[..., :, min(m, n):] will be ignored in backward as those vectors can be arbitrary bases of the subspaces.

Note

When compute_uv = False, backward cannot be performed since U and V from the forward pass are required for the backward operation.
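
A sketch of the failure mode (the exact error type and message may differ between PyTorch versions):

>>> import torch
>>> a = torch.randn(5, 3, requires_grad=True)
>>> _, s, _ = torch.svd(a, compute_uv=False)
>>> s.sum().backward()   # expected to raise a RuntimeError because U and V were not computed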

Parameters
  • input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of m \times n matrices.

  • some (bool, optional) – controls the shape of the returned U and V. Default: True

  • compute_uv (bool, optional) – controls whether to compute U and V. Default: True

  • out (tuple, optional) – the output tuple of tensors

Example:

>>> a = torch.randn(5, 3)
>>> a
tensor([[ 0.2364, -0.7752,  0.6372],
        [ 1.7201,  0.7394, -0.0504],
        [-0.3371, -1.0584,  0.5296],
        [ 0.3550, -0.4022,  1.5569],
        [ 0.2445, -0.0158,  1.1414]])
>>> u, s, v = torch.svd(a)
>>> u
tensor([[ 0.4027,  0.0287,  0.5434],
        [-0.1946,  0.8833,  0.3679],
        [ 0.4296, -0.2890,  0.5261],
        [ 0.6604,  0.2717, -0.2618],
        [ 0.4234,  0.2481, -0.4733]])
>>> s
tensor([2.3289, 2.0315, 0.7806])
>>> v
tensor([[-0.0199,  0.8766,  0.4809],
        [-0.5080,  0.4054, -0.7600],
        [ 0.8611,  0.2594, -0.4373]])
>>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t()))
tensor(8.6531e-07)
>>> a_big = torch.randn(7, 5, 3)
>>> u, s, v = torch.svd(a_big)
>>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.transpose(-2, -1)))
tensor(2.6503e-06)
