encoder

class BoWEncoder(emb_dim)[source]
Bases: paddle.fluid.dygraph.layers.Layer

A BoWEncoder takes as input a sequence of vectors and returns a single vector, which simply sums the embeddings of a sequence across the time dimension. The input to this encoder is of shape (batch_size, num_tokens, emb_dim), and the output is of shape (batch_size, emb_dim).

Parameters
emb_dim (int) – The dimension of each vector in the input sequence.
Examples
import paddle
import paddle.nn as nn
import paddlenlp as nlp

class BoWModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 hidden_size=128,
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            vocab_size, emb_dim, padding_idx=padding_idx)
        self.bow_encoder = nlp.seq2vec.BoWEncoder(emb_dim)
        self.fc1 = nn.Linear(self.bow_encoder.get_output_dim(), hidden_size)
        self.fc2 = nn.Linear(hidden_size, fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        # Shape: (batch_size, embedding_dim)
        summed = self.bow_encoder(embedded_text)
        encoded_text = paddle.tanh(summed)
        # Shape: (batch_size, hidden_size)
        fc1_out = paddle.tanh(self.fc1(encoded_text))
        # Shape: (batch_size, fc_hidden_size)
        fc2_out = paddle.tanh(self.fc2(fc1_out))
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc2_out)
        return logits

model = BoWModel(vocab_size=100, num_classes=2)

text = paddle.randint(low=1, high=10, shape=[1, 10], dtype='int32')
logits = model(text)

get_input_dim()[source]
Returns the dimension of the vector input for each element in the sequence input to a BoWEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim()[source]
Returns the dimension of the final vector output by this BoWEncoder. This is not the shape of the returned tensor, but the last element of that shape.

forward(inputs, mask=None)[source]
It simply sums the embeddings of a sequence across the time dimension.

Parameters
inputs (Tensor) – Shape as (batch_size, num_tokens, emb_dim) and dtype as float32 or float64. Tensor containing the features of the input sequence.
mask (Tensor, optional) – Shape same as inputs. Each element identifies whether the corresponding input token is padding or not: True means not a padding token, False means a padding token. Defaults to None.

Returns
Shape as (batch_size, emb_dim), and dtype is the same as inputs. The result vector of BagOfEmbedding.

Return type
Tensor
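
A minimal sketch of calling the encoder directly, with toy shapes; the float mask is an assumption made so it can multiply inputs elementwise (the docs above only specify the True/False semantics):

import paddle
import paddlenlp as nlp

encoder = nlp.seq2vec.BoWEncoder(emb_dim=4)
# Toy batch: 2 sequences of 5 tokens, each token a 4-dim vector.
inputs = paddle.rand([2, 5, 4])
# Same shape as inputs; 1.0 marks real tokens, 0.0 would mark padding
# (a float mask is assumed here so it can scale inputs directly).
mask = paddle.ones([2, 5, 4], dtype='float32')
summed = encoder(inputs, mask=mask)  # sums over the time dimension
print(summed.shape)                  # [2, 4] == (batch_size, emb_dim)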

class CNNEncoder(emb_dim, num_filter, ngram_filter_sizes=(2, 3, 4, 5), conv_layer_activation=Tanh(), output_dim=None, **kwargs)[source]
Bases: paddle.fluid.dygraph.layers.Layer

A CNNEncoder takes as input a sequence of vectors and returns a single vector, a combination of multiple convolution layers and max pooling layers. The input to this encoder is of shape (batch_size, num_tokens, emb_dim), and the output is of shape (batch_size, output_dim) or (batch_size, len(ngram_filter_sizes) * num_filter).

The CNN has one convolution layer for each ngram filter size. Each convolution operation gives out a vector of size num_filter. The number of times a convolution layer will be used is num_tokens - ngram_size + 1. The corresponding max pooling layer aggregates all these outputs from the convolution layer and outputs the max.

This operation is repeated for every ngram size passed, and consequently the dimensionality of the output after max pooling is len(ngram_filter_sizes) * num_filter. This then gets (optionally) projected down to a lower-dimensional output via a fully connected layer, specified by output_dim. For more details, refer to A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification, Zhang and Wallace 2016, particularly Figure 1.
Parameters
emb_dim (int) – The dimension of each vector in the input sequence.
num_filter (int) – This is the output dim for each convolutional layer, which is the number of "filters" learned by that layer.
ngram_filter_sizes (Tuple[int], optional) – This specifies both the number of convolutional layers we will create and their sizes. The default of (2, 3, 4, 5) will have four convolutional layers, corresponding to encoding ngrams of size 2 to 5 with some number of filters.
conv_layer_activation (Layer, optional) – Activation to use after the convolution layers. Defaults to paddle.nn.Tanh().
output_dim (int, optional) – After doing convolutions and pooling, we'll project the collected features into a vector of this size. If this value is None, we will just return the result of the max pooling, giving an output of shape len(ngram_filter_sizes) * num_filter. Defaults to None.
Examples
import paddle
import paddle.nn as nn
import paddlenlp as nlp

class CNNModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 num_filter=128,
                 ngram_filter_sizes=(3, ),
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            vocab_size, emb_dim, padding_idx=padding_idx)
        self.encoder = nlp.seq2vec.CNNEncoder(
            emb_dim=emb_dim,
            num_filter=num_filter,
            ngram_filter_sizes=ngram_filter_sizes)
        self.fc = nn.Linear(self.encoder.get_output_dim(), fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        # Shape: (batch_size, len(ngram_filter_sizes) * num_filter)
        encoder_out = self.encoder(embedded_text)
        encoder_out = paddle.tanh(encoder_out)
        # Shape: (batch_size, fc_hidden_size)
        fc_out = self.fc(encoder_out)
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        return logits

model = CNNModel(vocab_size=100, num_classes=2)

text = paddle.randint(low=1, high=10, shape=[1, 10], dtype='int32')
logits = model(text)

get_input_dim()[source]
Returns the dimension of the vector input for each element in the sequence input to a CNNEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim()[source]
Returns the dimension of the final vector output by this CNNEncoder. This is not the shape of the returned tensor, but the last element of that shape.

forward(inputs, mask=None)[source]
The combination of multiple convolution layers and max pooling layers.

Parameters
inputs (Tensor) – Shape as (batch_size, num_tokens, emb_dim) and dtype as float32 or float64. Tensor containing the features of the input sequence.
mask (Tensor, optional) – Shape should be the same as inputs and dtype as int32, int64, float32 or float64. Each element identifies whether the corresponding input token is padding or not: True means not a padding token, False means a padding token. Defaults to None.

Returns
If output_dim is None, the result shape is (batch_size, len(ngram_filter_sizes) * num_filter); otherwise it is (batch_size, output_dim). dtype is float.

Return type
Tensor
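
A quick sketch, with toy dimensions, of the output-dimension rule stated above (len(ngram_filter_sizes) * num_filter when output_dim is None):

import paddle
import paddlenlp as nlp

encoder = nlp.seq2vec.CNNEncoder(
    emb_dim=8, num_filter=16, ngram_filter_sizes=(2, 3))
print(encoder.get_output_dim())    # 32 == len((2, 3)) * 16
inputs = paddle.rand([4, 10, 8])   # (batch_size, num_tokens, emb_dim)
print(encoder(inputs).shape)       # [4, 32]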

class GRUEncoder(input_size, hidden_size, num_layers=1, direction='forward', dropout=0.0, pooling_type=None, **kwargs)[source]
Bases: paddle.fluid.dygraph.layers.Layer

A GRUEncoder takes as input a sequence of vectors and returns a single vector, built on paddle.nn.GRU. The input to this encoder is of shape (batch_size, num_tokens, input_size). The output is of shape (batch_size, hidden_size * 2) if the GRU is bidirectional; if not, the output is of shape (batch_size, hidden_size).

Paddle's GRU has two outputs: the hidden state of every time step at the last layer, and the hidden state at the last time step for every layer. If pooling_type is not None, we perform pooling on the hidden states of every time step at the last layer to create a single vector. If None, we use the hidden state of the last time step at the last layer as a single output (shape of (batch_size, hidden_size)); and if the direction is bidirectional, we concatenate the hidden states of the last forward and backward GRU layers to create a single vector (shape of (batch_size, hidden_size * 2)).

Parameters
input_size (int) – The number of expected features in the input (the last dimension).
hidden_size (int) – The number of features in the hidden state.
num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in the outputs of the first GRU and computing the final results. Defaults to 1.
direction (str, optional) – The direction of the network. It can be "forward" or "bidirect" (which means a bidirectional network). If "bidirect", it is a bidirectional GRU, and returns the concatenated output from both directions. Defaults to "forward".
dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Defaults to 0.0.
pooling_type (str, optional) – If pooling_type is None, the GRUEncoder will return the hidden state of the last time step at the last layer as a single vector. If pooling_type is not None, it must be one of "sum", "max" and "mean"; the GRU output (the hidden state of every time step at the last layer) is then pooled to create a single vector. Defaults to None.
Examples
import paddle
import paddle.nn as nn
import paddlenlp as nlp

class GRUModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 gru_hidden_size=198,
                 direction='forward',
                 gru_layers=1,
                 dropout_rate=0.0,
                 pooling_type=None,
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            num_embeddings=vocab_size,
            embedding_dim=emb_dim,
            padding_idx=padding_idx)
        self.gru_encoder = nlp.seq2vec.GRUEncoder(
            emb_dim,
            gru_hidden_size,
            num_layers=gru_layers,
            direction=direction,
            dropout=dropout_rate,
            pooling_type=pooling_type)
        self.fc = nn.Linear(self.gru_encoder.get_output_dim(), fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        # Shape: (batch_size, num_directions * gru_hidden_size)
        # num_directions = 2 if direction is 'bidirect'; otherwise 1
        text_repr = self.gru_encoder(embedded_text, sequence_length=seq_len)
        # Shape: (batch_size, fc_hidden_size)
        fc_out = paddle.tanh(self.fc(text_repr))
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        return logits

model = GRUModel(vocab_size=100, num_classes=2)

text = paddle.randint(low=1, high=10, shape=[1, 10], dtype='int32')
seq_len = paddle.to_tensor([10])
logits = model(text, seq_len)

get_input_dim()[source]
Returns the dimension of the vector input for each element in the sequence input to a GRUEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim()[source]
Returns the dimension of the final vector output by this GRUEncoder. This is not the shape of the returned tensor, but the last element of that shape.

forward(inputs, sequence_length)[source]
GRUEncoder takes a sequence of vectors and returns a single vector, which is a combination of multiple GRU layers. The input to this encoder is of shape (batch_size, num_tokens, input_size). The output is of shape (batch_size, hidden_size * 2) if the GRU is bidirectional; if not, the output is of shape (batch_size, hidden_size).

Parameters
inputs (Tensor) – Shape as (batch_size, num_tokens, input_size). Tensor containing the features of the input sequence.
sequence_length (Tensor) – Shape as (batch_size). The sequence length of the input sequence.

Returns
Shape as (batch_size, hidden_size) and dtype as float; shape as (batch_size, hidden_size * 2) if the GRU is bidirectional. The hidden state at the last time step at the last layer, or the pooled output if pooling_type is set.

Return type
Tensor
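
A minimal sketch, with toy dimensions assumed, contrasting the default last-time-step output with a pooled output:

import paddle
import paddlenlp as nlp

inputs = paddle.rand([2, 6, 8])      # (batch_size, num_tokens, input_size)
seq_len = paddle.to_tensor([6, 4])   # actual length of each sequence

# Default: hidden state of the last time step at the last layer.
encoder = nlp.seq2vec.GRUEncoder(input_size=8, hidden_size=16)
print(encoder(inputs, sequence_length=seq_len).shape)  # [2, 16]

# Mean-pool the hidden states of every time step instead.
pooled = nlp.seq2vec.GRUEncoder(input_size=8, hidden_size=16,
                                pooling_type='mean')
print(pooled(inputs, sequence_length=seq_len).shape)   # [2, 16]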

class LSTMEncoder(input_size, hidden_size, num_layers=1, direction='forward', dropout=0.0, pooling_type=None, **kwargs)[source]
Bases: paddle.fluid.dygraph.layers.Layer

An LSTMEncoder takes as input a sequence of vectors and returns a single vector, built on paddle.nn.LSTM. The input to this encoder is of shape (batch_size, num_tokens, input_size). The output is of shape (batch_size, hidden_size * 2) if the LSTM is bidirectional; if not, the output is of shape (batch_size, hidden_size).

Paddle's LSTM has two outputs: the hidden state of every time step at the last layer, and the hidden state and cell state at the last time step for every layer. If pooling_type is not None, we perform pooling on the hidden states of every time step at the last layer to create a single vector. If None, we use the hidden state of the last time step at the last layer as a single output (shape of (batch_size, hidden_size)); and if the direction is bidirectional, we concatenate the hidden states of the last forward and backward LSTM layers to create a single vector (shape of (batch_size, hidden_size * 2)).

Parameters
input_size (int) – The number of expected features in the input (the last dimension).
hidden_size (int) – The number of features in the hidden state.
num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in the outputs of the first LSTM and computing the final results. Defaults to 1.
direction (str, optional) – The direction of the network. It can be "forward" or "bidirect" (which means a bidirectional network). If "bidirect", it is a bidirectional LSTM, and returns the concatenated output from both directions. Defaults to "forward".
dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Defaults to 0.0.
pooling_type (str, optional) – If pooling_type is None, the LSTMEncoder will return the hidden state of the last time step at the last layer as a single vector. If pooling_type is not None, it must be one of "sum", "max" and "mean"; the LSTM output (the hidden state of every time step at the last layer) is then pooled to create a single vector. Defaults to None.
Examples
import paddle
import paddle.nn as nn
import paddlenlp as nlp

class LSTMModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 lstm_hidden_size=198,
                 direction='forward',
                 lstm_layers=1,
                 dropout_rate=0.0,
                 pooling_type=None,
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            num_embeddings=vocab_size,
            embedding_dim=emb_dim,
            padding_idx=padding_idx)
        self.lstm_encoder = nlp.seq2vec.LSTMEncoder(
            emb_dim,
            lstm_hidden_size,
            num_layers=lstm_layers,
            direction=direction,
            dropout=dropout_rate,
            pooling_type=pooling_type)
        self.fc = nn.Linear(self.lstm_encoder.get_output_dim(), fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        # Shape: (batch_size, num_directions * lstm_hidden_size)
        # num_directions = 2 if direction is 'bidirect'; otherwise 1
        text_repr = self.lstm_encoder(embedded_text, sequence_length=seq_len)
        # Shape: (batch_size, fc_hidden_size)
        fc_out = paddle.tanh(self.fc(text_repr))
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        return logits

model = LSTMModel(vocab_size=100, num_classes=2)

text = paddle.randint(low=1, high=10, shape=[1, 10], dtype='int32')
seq_len = paddle.to_tensor([10])
logits = model(text, seq_len)

get_input_dim()[source]
Returns the dimension of the vector input for each element in the sequence input to an LSTMEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim()[source]
Returns the dimension of the final vector output by this LSTMEncoder. This is not the shape of the returned tensor, but the last element of that shape.

forward(inputs, sequence_length)[source]
LSTMEncoder takes a sequence of vectors and returns a single vector, which is a combination of multiple LSTM layers. The input to this encoder is of shape (batch_size, num_tokens, input_size). The output is of shape (batch_size, hidden_size * 2) if the LSTM is bidirectional; if not, the output is of shape (batch_size, hidden_size).

Parameters
inputs (Tensor) – Shape as (batch_size, num_tokens, input_size). Tensor containing the features of the input sequence.
sequence_length (Tensor) – Shape as (batch_size). The sequence length of the input sequence.

Returns
Shape as (batch_size, hidden_size) and dtype as float; shape as (batch_size, hidden_size * 2) if the LSTM is bidirectional. The hidden state at the last time step at the last layer, or the pooled output if pooling_type is set.

Return type
Tensor
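
A short sketch, with toy dimensions assumed, showing how the bidirectional setting doubles the output dimension:

import paddle
import paddlenlp as nlp

inputs = paddle.rand([2, 6, 8])      # (batch_size, num_tokens, input_size)
seq_len = paddle.to_tensor([6, 5])

encoder = nlp.seq2vec.LSTMEncoder(input_size=8, hidden_size=16,
                                  direction='bidirect')
print(encoder.get_output_dim())                        # 32 == hidden_size * 2
print(encoder(inputs, sequence_length=seq_len).shape)  # [2, 32]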

class RNNEncoder(input_size, hidden_size, num_layers=1, direction='forward', dropout=0.0, pooling_type=None, **kwargs)[source]
Bases: paddle.fluid.dygraph.layers.Layer

An RNNEncoder takes as input a sequence of vectors and returns a single vector, built on paddle.nn.RNN. The input to this encoder is of shape (batch_size, num_tokens, input_size). The output is of shape (batch_size, hidden_size * 2) if the RNN is bidirectional; if not, the output is of shape (batch_size, hidden_size).

Paddle's RNN has two outputs: the hidden state of every time step at the last layer, and the hidden state at the last time step for every layer. If pooling_type is not None, we perform pooling on the hidden states of every time step at the last layer to create a single vector. If None, we use the hidden state of the last time step at the last layer as a single output (shape of (batch_size, hidden_size)); and if the direction is bidirectional, we concatenate the hidden states of the last forward and backward RNN layers to create a single vector (shape of (batch_size, hidden_size * 2)).

Parameters
input_size (int) – The number of expected features in the input (the last dimension).
hidden_size (int) – The number of features in the hidden state.
num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in the outputs of the first RNN and computing the final results. Defaults to 1.
direction (str, optional) – The direction of the network. It can be "forward" or "bidirect" (which means a bidirectional network). If "bidirect", it is a bidirectional RNN, and returns the concatenated output from both directions. Defaults to "forward".
dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Defaults to 0.0.
pooling_type (str, optional) – If pooling_type is None, the RNNEncoder will return the hidden state of the last time step at the last layer as a single vector. If pooling_type is not None, it must be one of "sum", "max" and "mean"; the RNN output (the hidden state of every time step at the last layer) is then pooled to create a single vector. Defaults to None.
Examples
import paddle
import paddle.nn as nn
import paddlenlp as nlp

class RNNModel(nn.Layer):
    def __init__(self,
                 vocab_size,
                 num_classes,
                 emb_dim=128,
                 padding_idx=0,
                 rnn_hidden_size=198,
                 direction='forward',
                 rnn_layers=1,
                 dropout_rate=0.0,
                 pooling_type=None,
                 fc_hidden_size=96):
        super().__init__()
        self.embedder = nn.Embedding(
            num_embeddings=vocab_size,
            embedding_dim=emb_dim,
            padding_idx=padding_idx)
        self.rnn_encoder = nlp.seq2vec.RNNEncoder(
            emb_dim,
            rnn_hidden_size,
            num_layers=rnn_layers,
            direction=direction,
            dropout=dropout_rate,
            pooling_type=pooling_type)
        self.fc = nn.Linear(self.rnn_encoder.get_output_dim(), fc_hidden_size)
        self.output_layer = nn.Linear(fc_hidden_size, num_classes)

    def forward(self, text, seq_len):
        # Shape: (batch_size, num_tokens, embedding_dim)
        embedded_text = self.embedder(text)
        # Shape: (batch_size, num_directions * rnn_hidden_size)
        # num_directions = 2 if direction is 'bidirect'; otherwise 1
        text_repr = self.rnn_encoder(embedded_text, sequence_length=seq_len)
        # Shape: (batch_size, fc_hidden_size)
        fc_out = paddle.tanh(self.fc(text_repr))
        # Shape: (batch_size, num_classes)
        logits = self.output_layer(fc_out)
        return logits

model = RNNModel(vocab_size=100, num_classes=2)

text = paddle.randint(low=1, high=10, shape=[1, 10], dtype='int32')
seq_len = paddle.to_tensor([10])
logits = model(text, seq_len)

get_input_dim()[source]
Returns the dimension of the vector input for each element in the sequence input to an RNNEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim()[source]
Returns the dimension of the final vector output by this RNNEncoder. This is not the shape of the returned tensor, but the last element of that shape.

forward(inputs, sequence_length)[source]
RNNEncoder takes a sequence of vectors and returns a single vector, which is a combination of multiple RNN layers. The input to this encoder is of shape (batch_size, num_tokens, input_size). The output is of shape (batch_size, hidden_size * 2) if the RNN is bidirectional; if not, the output is of shape (batch_size, hidden_size).

Parameters
inputs (Tensor) – Shape as (batch_size, num_tokens, input_size). Tensor containing the features of the input sequence.
sequence_length (Tensor) – Shape as (batch_size). The sequence length of the input sequence.

Returns
Shape as (batch_size, hidden_size) and dtype as float; shape as (batch_size, hidden_size * 2) if the RNN is bidirectional. The hidden state at the last time step at the last layer, or the pooled output if pooling_type is set.

Return type
last_hidden (Tensor)
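
A minimal sketch, with toy dimensions assumed, of pooling the per-step hidden states instead of taking the last one:

import paddle
import paddlenlp as nlp

inputs = paddle.rand([2, 6, 8])      # (batch_size, num_tokens, input_size)
seq_len = paddle.to_tensor([6, 3])

# Max-pool the per-step hidden states of the last layer into one vector.
encoder = nlp.seq2vec.RNNEncoder(input_size=8, hidden_size=16,
                                 pooling_type='max')
print(encoder(inputs, sequence_length=seq_len).shape)  # [2, 16]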

class TCNEncoder(input_size, num_channels, kernel_size=2, dropout=0.2)[source]
Bases: paddle.fluid.dygraph.layers.Layer

A TCNEncoder takes as input a sequence of vectors and returns a single vector, which is the last time step of the feature map. The input to this encoder is of shape (batch_size, num_tokens, input_size), and the output is of shape (batch_size, num_channels[-1]) with a receptive field:

\[
receptive\ field = 2 \sum_{i=0}^{len(num\_channels)-1} 2^i (kernel\_size - 1)
\]

The Temporal Convolutional Network (TCN) is a simple convolutional architecture. It outperforms canonical recurrent networks such as LSTMs in many tasks. See https://arxiv.org/pdf/1803.01271.pdf for more details.
Parameters
input_size (int) – The number of expected features in the input (the last dimension).
num_channels (list) – The number of channels in each layer.
kernel_size (int) – The kernel size. Defaults to 2.
dropout (float) – The dropout probability. Defaults to 0.2.
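
Examples
A minimal usage sketch, with toy dimensions assumed; the receptive-field arithmetic in the comment follows the formula above:

import paddle
import paddlenlp as nlp

# num_channels=[4, 8] stacks two temporal blocks; with kernel_size=2 the
# receptive field is 2 * (2**0 + 2**1) * (2 - 1) = 6 time steps.
encoder = nlp.seq2vec.TCNEncoder(input_size=16, num_channels=[4, 8],
                                 kernel_size=2)
inputs = paddle.rand([2, 10, 16])   # (batch_size, num_tokens, input_size)
print(encoder(inputs).shape)        # [2, 8] == (batch_size, num_channels[-1])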

get_input_dim()[source]
Returns the dimension of the vector input for each element in the sequence input to a TCNEncoder. This is not the shape of the input tensor, but the last element of that shape.

get_output_dim()[source]
Returns the dimension of the final vector output by this TCNEncoder. This is not the shape of the returned tensor, but the last element of that shape.

forward(inputs)[source]
TCNEncoder takes as input a sequence of vectors and returns a single vector, which is the last time step of the feature map. The input to this encoder is of shape (batch_size, num_tokens, input_size), and the output is of shape (batch_size, num_channels[-1]) with a receptive field:

\[
receptive\ field = 2 \sum_{i=0}^{len(num\_channels)-1} 2^i (kernel\_size - 1)
\]

Parameters
inputs (paddle.Tensor) – The input tensor with shape [batch_size, num_tokens, input_size].

Returns
The output tensor with shape [batch_size, num_channels[-1]].

Return type
output (paddle.Tensor)