
Bilinear

class torch.nn.Bilinear(in1_features: int, in2_features: int, out_features: int, bias: bool = True)[source]

Applies a bilinear transformation to the incoming data: y = x_1^T A x_2 + b

Parameters
  • in1_features – size of each first input sample

  • in2_features – size of each second input sample

  • out_features – size of each output sample

  • bias – If set to False, the layer will not learn an additive bias. Default: True

Shape:
  • Input1: (N, *, H_{in1}) where H_{in1} = in1_features and * means any number of additional dimensions. All but the last dimension of the inputs should be the same (see the sketch after this list).

  • Input2: (N, *, H_{in2}) where H_{in2} = in2_features.

  • Output: (N, *, H_{out}) where H_{out} = out_features and all but the last dimension are the same shape as the input.
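
For instance, inputs with extra leading dimensions are accepted as long as all but the last dimension match. A minimal sketch (the extra dimension of size 5 is purely illustrative):

>>> m = nn.Bilinear(20, 30, 40)
>>> x1 = torch.randn(128, 5, 20)
>>> x2 = torch.randn(128, 5, 30)
>>> m(x1, x2).size()
torch.Size([128, 5, 40])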

Variables
  • weight – the learnable weights of the module, of shape (out_features, in1_features, in2_features). The values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = 1/in1_features.

  • bias – the learnable bias of the module, of shape (out_features). If bias is True, the values are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k = 1/in1_features.

Examples:

>>> import torch
>>> from torch import nn
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
torch.Size([128, 40])
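
The output above matches evaluating the bilinear form y = x_1^T A x_2 + b directly with the module's weight and bias. A minimal sketch using torch.einsum, continuing the example (the atol is just a reasonable float32 tolerance, not a prescribed value):

>>> manual = torch.einsum('bi,oij,bj->bo', input1, m.weight, input2) + m.bias
>>> torch.allclose(output, manual, atol=1e-5)
True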
