jactorch.nn.embedding#

Classes

LearnedPositionalEmbedding

This module learns positional embeddings up to a fixed maximum size.

Functions

make_positions(tensor, padding_idx, left_pad)

Replace non-padding symbols with their position numbers.

Class LearnedPositionalEmbedding

class LearnedPositionalEmbedding[source]#

Bases: Embedding

This module learns positional embeddings up to a fixed maximum size. Padding symbols are ignored, but it is necessary to specify whether padding is added on the left side (left_pad=True) or right side (left_pad=False).

Adapted from: pytorch/fairseq.
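
A minimal construction sketch (the argument values, and the import path inferred from this page's module name, are illustrative assumptions; the signature follows __init__ below):

>>> import torch
>>> from jactorch.nn.embedding import LearnedPositionalEmbedding
>>> # a table for up to 128 positions, 16-dim vectors, padding on the right
>>> pos_emb = LearnedPositionalEmbedding(128, 16, padding_idx=0, left_pad=False)
>>> isinstance(pos_emb, torch.nn.Embedding)
True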

__init__(num_embeddings, embedding_dim, padding_idx=0, left_pad=False)[source]#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

extra_repr()#

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Return type:

str
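
For instance, a custom module can report its configuration through extra_repr (a generic torch.nn.Module sketch, not specific to this class):

>>> import torch
>>> import torch.nn as nn
>>> class MyLinear(nn.Module):
...     def __init__(self, in_features, out_features):
...         super().__init__()
...         self.in_features, self.out_features = in_features, out_features
...         self.weight = nn.Parameter(torch.randn(out_features, in_features))
...     def extra_repr(self):
...         # this string is spliced into the module's printed repr
...         return f'in_features={self.in_features}, out_features={self.out_features}'
...
>>> print(MyLinear(3, 4))
MyLinear(in_features=3, out_features=4)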

forward(input, incremental_state=None)[source]#

Input is expected to be of size [bsz x seqlen].
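
A sketch of a forward call, continuing the pos_emb instance constructed above (the token values are illustrative; 0 is the padding symbol here):

>>> tokens = torch.LongTensor([[5, 6, 7, 0],
...                            [5, 6, 0, 0]])  # [bsz=2, seqlen=4], right-padded
>>> pos_emb(tokens).shape  # one learned vector per position
torch.Size([2, 4, 16])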

classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)#

Create Embedding instance from given 2-dimensional FloatTensor.

Parameters:
  • embeddings (Tensor) – FloatTensor containing weights for the Embedding. The first dimension is passed to Embedding as num_embeddings, the second as embedding_dim.

  • freeze (bool, optional) – If True, the tensor does not get updated in the learning process. Equivalent to embedding.weight.requires_grad = False. Default: True

  • padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”.

  • max_norm (float, optional) – See module initialization documentation.

  • norm_type (float, optional) – See module initialization documentation. Default: 2.

  • scale_grad_by_freq (bool, optional) – See module initialization documentation. Default: False.

  • sparse (bool, optional) – See module initialization documentation.

Examples:

>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embedding = nn.Embedding.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([1])
>>> embedding(input)
tensor([[ 4.0000,  5.1000,  6.3000]])

max_positions()[source]#

Maximum number of supported positions.
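
In the fairseq code this module is adapted from, the maximum is derived from the constructor arguments as num_embeddings - padding_idx - 1; a sketch under that assumption:

>>> pos_emb = LearnedPositionalEmbedding(128, 16, padding_idx=0)
>>> pos_emb.max_positions()
127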

reset_parameters()#

Return type:

None

embedding_dim: int#
freeze: bool#
max_norm: float | None#
norm_type: float#
num_embeddings: int#
padding_idx: int | None#
scale_grad_by_freq: bool#
sparse: bool#
weight: Tensor#

Functions

make_positions(tensor, padding_idx, left_pad)[source]#

Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored, but it is necessary to specify whether padding is added on the left side (left_pad=True) or right side (left_pad=False).
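
A worked sketch of the expected behavior (the input values are illustrative; with padding_idx=1, position numbers begin at 2 and padding positions keep the value padding_idx):

>>> import torch
>>> from jactorch.nn.embedding import make_positions
>>> t = torch.LongTensor([[5, 6, 7],
...                       [8, 9, 1]])  # 1 is the padding symbol, right-padded
>>> make_positions(t, padding_idx=1, left_pad=False)
tensor([[2, 3, 4],
        [2, 3, 1]])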