torch.qr
torch.qr(input, some=True, out=None) -> (Tensor, Tensor)

Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input = Q R, with Q being an orthogonal matrix (or batch of orthogonal matrices) and R being an upper triangular matrix (or batch of upper triangular matrices).

If some is True, then this function returns the thin (reduced) QR factorization. Otherwise, if some is False, this function returns the complete QR factorization.

Note
Precision may be lost if the magnitudes of the elements of input are large.

Note
While it should always give you a valid decomposition, it may not give you the same one across platforms; the result depends on your LAPACK implementation.
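If a reproducible factorization matters, one way to work around this platform dependence is to resolve the sign ambiguity yourself: forcing the diagonal of R to be non-negative picks a canonical (Q, R) pair. A minimal sketch (the canonical_qr helper is not part of torch; it is illustrative only):

```python
import torch

def canonical_qr(a):
    # Any QR pair can be made canonical by flipping the sign of each
    # column of Q and the matching row of R wherever diag(R) is negative.
    q, r = torch.qr(a)
    d = torch.sign(torch.diagonal(r, dim1=-2, dim2=-1))
    d[d == 0] = 1.0              # leave exact-zero diagonal entries alone
    q = q * d.unsqueeze(-2)      # scale the columns of Q
    r = r * d.unsqueeze(-1)      # scale the rows of R
    return q, r

a = torch.tensor([[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]])
q, r = canonical_qr(a)
assert torch.all(torch.diagonal(r) >= 0)    # R diagonal now non-negative
assert torch.allclose(q @ r, a, atol=1e-4)  # still a valid factorization
```

Since each sign flip is applied to both Q and R, the product Q R is unchanged, so the result is still a valid decomposition of the input.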
- Parameters
- input (Tensor) – the input tensor of size (*, m, n) where * is zero or more batch dimensions consisting of matrices of dimension m × n.
- some (bool, optional) – Set to True for reduced QR decomposition and False for complete QR decomposition.
- out (tuple, optional) – tuple of Q and R tensors satisfying input = torch.matmul(Q, R). The dimensions of Q and R are (*, m, k) and (*, k, n) respectively, where k = min(m, n) if some is True and k = m otherwise.
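The shape rules above can be sketched concretely (a quick illustration, assuming a PyTorch build where torch.qr is still available; newer releases point to torch.linalg.qr instead):

```python
import torch

a = torch.randn(5, 3)  # tall matrix: m = 5, n = 3

# some=True: thin factorization, k = min(m, n) = 3
q_thin, r_thin = torch.qr(a, some=True)
assert q_thin.shape == (5, 3)  # (*, m, k)
assert r_thin.shape == (3, 3)  # (*, k, n)

# some=False: complete factorization, k = m = 5
q_full, r_full = torch.qr(a, some=False)
assert q_full.shape == (5, 5)
assert r_full.shape == (5, 3)

# Either way, Q @ R reconstructs the input
assert torch.allclose(torch.matmul(q_thin, r_thin), a, atol=1e-6)
assert torch.allclose(torch.matmul(q_full, r_full), a, atol=1e-6)
```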
Example:
>>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])
>>> q, r = torch.qr(a)
>>> q
tensor([[-0.8571,  0.3943,  0.3314],
        [-0.4286, -0.9029, -0.0343],
        [ 0.2857, -0.1714,  0.9429]])
>>> r
tensor([[ -14.0000,  -21.0000,   14.0000],
        [   0.0000, -175.0000,   70.0000],
        [   0.0000,    0.0000,  -35.0000]])
>>> torch.mm(q, r).round()
tensor([[  12.,  -51.,    4.],
        [   6.,  167.,  -68.],
        [  -4.,   24.,  -41.]])
>>> torch.mm(q.t(), q).round()
tensor([[ 1.,  0.,  0.],
        [ 0.,  1., -0.],
        [ 0., -0.,  1.]])
>>> a = torch.randn(3, 4, 5)
>>> q, r = torch.qr(a, some=False)
>>> torch.allclose(torch.matmul(q, r), a)
True
>>> torch.allclose(torch.matmul(q.transpose(-2, -1), q), torch.eye(4))
True
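The out= variant from the signature can also be used with preallocated tensors; a minimal sketch (torch resizes mismatched out tensors as needed, so empty tensors suffice):

```python
import torch

a = torch.randn(4, 4)
Q, R = torch.empty(0), torch.empty(0)
torch.qr(a, some=True, out=(Q, R))  # factors are written into Q and R
assert torch.allclose(torch.matmul(Q, R), a, atol=1e-6)
```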