# layers¶

## control_flow¶

### split_lod_tensor¶

paddle.fluid.layers.split_lod_tensor(input, mask, level=0)

split_lod_tensor

This function takes an input that contains complete LoD information, together with a mask that marks parts of the input. The output is a pair of tensors, the true branch and the false branch, obtained by applying the mask to the input at the given LoD level.

Parameters:

• input (tuple|list|None) – The input tensor that contains complete lod information needed to construct the output.
• mask (list) – A bool column vector which masks the input.
• level (int) – The specific lod level to rank.

Returns: A pair of Variables: the true branch and the false branch of the tensor, as per the mask applied to the input.
Return type: tuple of Variable

Examples

x = layers.data(name='x', shape=[1])
x.persistable = True

y = layers.data(name='y', shape=[1])
y.persistable = True

out_true, out_false = layers.split_lod_tensor(
    input=x, mask=y, level=0)


### merge_lod_tensor¶

paddle.fluid.layers.merge_lod_tensor(in_true, in_false, x, mask, level=0)

merge_lod_tensor

This function takes in an input $x$, the True branch, the False branch, and a binary $mask$. Using this information, it merges the True and False branches of the tensor into a single output at the lod level indicated by $level$.

Parameters:

• in_true (tuple|list|None) – The True branch to be merged.
• in_false (tuple|list|None) – The False branch to be merged.
• x (tuple|list|None) – The input tensor that contains complete lod information needed to construct the output.
• mask (list) – A bool column vector which masks the input.
• level (int) – The specific lod level to rank.

Returns: The merged output tensor.
Return type: Variable

Examples

x = layers.data(
    name='x', shape=[1])
y = layers.data(
    name='y', shape=[1])

level = 0

out_true, out_false = layers.split_lod_tensor(
    input=x, mask=y, level=level)
out = layers.merge_lod_tensor(
    in_true=out_true, in_false=out_false, mask=y, x=x, level=level)


### BlockGuard¶

class paddle.fluid.layers.BlockGuard(main_program)

BlockGuard class.

BlockGuard class is used to create a sub-block in a program by using the Python with keyword.

### BlockGuardWithCompletion¶

class paddle.fluid.layers.BlockGuardWithCompletion(rnn)

BlockGuardWithCompletion class.

BlockGuardWithCompletion class is used to create an op with a block in a program.

### WhileGuard¶

class paddle.fluid.layers.WhileGuard(while_op)

### While¶

class paddle.fluid.layers.While(cond, name=None)

### Switch¶

class paddle.fluid.layers.Switch(name=None)
case(condition)

create a new block for this condition

default()

create a default case for this switch

### lod_rank_table¶

paddle.fluid.layers.lod_rank_table(x, level=0)

LoD Rank Table Operator. Given an input variable x and a LoD level, this layer creates a LoDRankTable object. A LoDRankTable object contains a list of two-element tuples, each consisting of an index and a length, both of int type. Referring to the specified level of LoD, the index is the sequence index number and the length represents the sequence length. Please note that the list is ranked in descending order by length. The following is an example:

x is a LoDTensor:
    x.lod  = [[0, 2, 3],
              [0, 5, 6, 7]]
    x.data = [a, b, c, d, e, f, g]

1. set level to 0:
Create lod rank table:
lod_rank_table_obj = lod_rank_table(x, level=0)

Get:
lod_rank_table_obj.items() = [(0, 2), (1, 1)]

2. set level to 1:
Create lod rank table:
lod_rank_table_obj = lod_rank_table(x, level=1)

Get:
lod_rank_table_obj.items() = [(0, 5), (1, 1), (2, 1)]

Parameters:

• x (Variable) – Input variable, a LoDTensor, based on which to create the lod rank table.
• level (int) – Specify the LoD level on which to create the lod rank table.

Returns: The created LoDRankTable object.
Return type: Variable

Examples

x = fluid.layers.data(name='x', shape=[10],
                      dtype='float32', lod_level=1)
out = layers.lod_rank_table(x=x, level=0)
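The ranking shown above can be reproduced in plain Python. The helper below is an illustrative sketch of the operator's logic, not the Paddle API; it takes the offset-based LoD directly:

```python
def lod_rank_table(lod_offsets, level=0):
    """Sketch of LoDRankTable construction (not the Paddle API)."""
    offsets = lod_offsets[level]
    # sequence lengths at the given level, from the offset representation
    lengths = [offsets[i + 1] - offsets[i] for i in range(len(offsets) - 1)]
    # rank (index, length) pairs by length, descending; Python's sort is
    # stable, so ties keep their original sequence order
    return sorted(enumerate(lengths), key=lambda item: -item[1])

lod = [[0, 2, 3], [0, 5, 6, 7]]
print(lod_rank_table(lod, level=0))  # [(0, 2), (1, 1)]
print(lod_rank_table(lod, level=1))  # [(0, 5), (1, 1), (2, 1)]
```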


### max_sequence_len¶

paddle.fluid.layers.max_sequence_len(rank_table)

Max Sequence Len Operator. Given a LoDRankTable object, this layer returns the max length of a batch of sequences. In fact, a LoDRankTable object contains a list of (sequence index, sequence length) tuples already sorted by sequence length in descending order, so the operator simply returns the length stored in the first tuple.

Parameters:

• rank_table (Variable) – Input variable which is a LoDRankTable object.

Returns: The max length of sequence.
Return type: Variable

Examples

x = fluid.layers.data(name='x', shape=[10],
                      dtype='float32', lod_level=1)
rank_table = layers.lod_rank_table(x=x, level=0)
max_seq_len = layers.max_sequence_len(rank_table)
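Because the rank table is already sorted by length in descending order, the selection logic is trivial; a plain-Python sketch (not the Paddle API):

```python
def max_sequence_len(rank_table_items):
    # items are (index, length) pairs sorted by length in descending
    # order, so the maximum is the length stored in the first tuple
    return rank_table_items[0][1]

print(max_sequence_len([(0, 5), (1, 1), (2, 1)]))  # 5
```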


### lod_tensor_to_array¶

paddle.fluid.layers.lod_tensor_to_array(x, table)

Convert a LOD_TENSOR to an LOD_TENSOR_ARRAY.

Parameters:

• x (Variable|list) – The LoD tensor to be converted to a LoD tensor array.
• table (ParamAttr|list) – The variable that stores the level of lod, which is ordered by sequence length in descending order.

Returns: The variable of type array that has been converted from a tensor.
Return type: Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)


### array_to_lod_tensor¶

paddle.fluid.layers.array_to_lod_tensor(x, table)

Convert a LoDTensorArray to an LoDTensor.

Parameters:

• x (Variable|list) – The lod tensor array to be converted to a tensor.
• table (ParamAttr|list) – The variable that stores the level of lod, which is ordered by sequence length in descending order.

Returns: The variable of type tensor that has been converted from an array.
Return type: Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)
lod_tensor = fluid.layers.array_to_lod_tensor(array, table)


### increment¶

paddle.fluid.layers.increment(x, value=1.0, in_place=True)

This function increments each value in the input $x$ by $value$. The operation is performed in-place by default.

Parameters:

• x (Variable|list) – The tensor that has the input values.
• value (float) – The amount by which the values should be incremented.
• in_place (bool) – Whether the increment should be performed in-place.

Returns: The tensor variable storing the element-wise increment of each value in the input.
Return type: Variable

Examples

data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
data = fluid.layers.increment(x=data, value=3.0, in_place=True)


### array_write¶

paddle.fluid.layers.array_write(x, i, array=None)

This function writes the given input variable to the position indicated by the array index in an output LOD_TENSOR_ARRAY. If the output LOD_TENSOR_ARRAY is not given (None), a new one will be created and returned.

Parameters:

• x (Variable|list) – The input tensor from which the data will be read.
• i (Variable|list) – The index of the output LOD_TENSOR_ARRAY, pointing to the position to which the input tensor will be written.
• array (Variable|list) – The output LOD_TENSOR_ARRAY to which the input tensor will be written. If None, a new LOD_TENSOR_ARRAY will be created and returned.

Returns: The output LOD_TENSOR_ARRAY where the input tensor is written.
Return type: Variable

Examples
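A plain-Python sketch of the write semantics on an ordinary list; this is illustrative only, not the Paddle API:

```python
def array_write(x, i, array=None):
    """Sketch of the tensor-array write semantics (not the Paddle API)."""
    if array is None:       # no array given: create a new one
        array = []
    if i == len(array):     # writing one slot past the end appends
        array.append(x)
    else:                   # otherwise overwrite the existing slot
        array[i] = x
    return array

arr = array_write([1.0, 2.0], 0)    # a new array is created
arr = array_write([3.0], 1, arr)    # appended at position 1
arr = array_write([9.0], 0, arr)    # position 0 overwritten
print(arr)  # [[9.0], [3.0]]
```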

### create_array¶

paddle.fluid.layers.create_array(dtype)

This function creates an array of type $LOD_TENSOR_ARRAY$ using the LayerHelper.

Parameters:

• dtype (int|float) – The data type of the elements in the array.

Returns: The tensor variable storing the elements of the given data type.
Return type: Variable

Examples

data = fluid.layers.create_array(dtype='float32')


### less_than¶

paddle.fluid.layers.less_than(x, y, force_cpu=True, cond=None, **ignored)

Less than

This layer returns the truth value of $x < y$ elementwise.

Parameters:

• x (Variable) – First operand of less_than.
• y (Variable) – Second operand of less_than.
• force_cpu (Bool|True) – The output data will be on CPU if set true.
• cond (Variable|None) – Optional output variable to store the result of less_than.

Returns: The tensor variable storing the output of less_than.
Return type: Variable

Examples

less = fluid.layers.less_than(x=label, y=limit)


### equal¶

paddle.fluid.layers.equal(x, y, cond=None, **ignored)

equal

This layer returns the truth value of $x == y$ elementwise.

Parameters:

• x (Variable) – First operand of equal.
• y (Variable) – Second operand of equal.
• cond (Variable|None) – Optional output variable to store the result of equal.

Returns: The tensor variable storing the output of equal.
Return type: Variable

Examples

cond = fluid.layers.equal(x=label, y=limit)


### array_read¶

paddle.fluid.layers.array_read(array, i)

This function reads data at the given index from the input LOD_TENSOR_ARRAY.

Parameters:

• array (Variable|list) – The input tensor array from which the data will be read.
• i (Variable|list) – The index in the tensor array that points to the position from which the data will be read.

Returns: The tensor variable holding the data read from the array.
Return type: Variable

Examples

### shrink_memory¶

paddle.fluid.layers.shrink_memory(x, i, table)

This function creates an operator (shrink_rnn_memory) that shrinks the RNN memory according to the RankTable given in the input parameters.

### array_length¶

paddle.fluid.layers.array_length(array)

This function performs the operation to find the length of the input LOD_TENSOR_ARRAY.

Parameters:

• array (LOD_TENSOR_ARRAY) – The input array whose length will be computed.

Returns: The length of the input LoDTensorArray.
Return type: Variable

Examples

### IfElse¶

class paddle.fluid.layers.IfElse(cond, name=None)

### DynamicRNN¶

class paddle.fluid.layers.DynamicRNN(name=None)

### ConditionalBlock¶

class paddle.fluid.layers.ConditionalBlock(inputs, is_scalar_condition=False, name=None)

### StaticRNN¶

class paddle.fluid.layers.StaticRNN(name=None)

StaticRNN class.

StaticRNN class is used to create a StaticRNN. The RNN will have its own parameters like inputs, outputs, memories, status and length.

memory(init=None, shape=None, batch_ref=None, init_value=0.0, init_batch_dim_idx=0, ref_batch_dim_idx=1)
Parameters:

• init – Boot memory. If not set, shape and batch_ref must be provided.
• shape – The shape of the boot memory.
• batch_ref – The batch size reference variable.
• init_value – The initial value of the boot memory.
• init_batch_dim_idx – The index of the batch-size dimension in init.
• ref_batch_dim_idx – The index of the batch-size dimension in batch_ref.

### reorder_lod_tensor_by_rank¶

paddle.fluid.layers.reorder_lod_tensor_by_rank(x, rank_table)

ReorderLoDTensorByRankTable operator.

Input(X) is a batch of sequences. Input(RankTable) stores new orders of the input sequence batch. The reorder_lod_tensor_by_rank operator reorders the Input(X) according to the information provided by Input(RankTable).

For example:

If the indices stored in the Input(RankTable) are [3, 0, 2, 1], Input(X) will be reordered so that the fourth sequence in Input(X) becomes the first one, followed by the original first, third, and second ones.

This is: X = [Seq0, Seq1, Seq2, Seq3]. The indices in RankTable are [3, 0, 2, 1]. Out = [Seq3, Seq0, Seq2, Seq1] with a new LoD information.

If the LoD information of Input(X) is empty, this means Input(X) is not sequence data. This is also identical to a batch of sequences where each sequence has a fixed length 1. In this case, the reorder_lod_tensor_by_rank operator reorders each slice of Input(X) along the first axis according to Input(RankTable).

This is: X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty. The indices in RankTable are [3, 0, 2, 1]. Out = [Slice3, Slice0, Slice2, Slice1], and no LoD information is appended.
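In both cases the reordering is plain index gathering; a minimal Python sketch of the semantics (not the Paddle API):

```python
def reorder_by_rank(slices, rank_indices):
    """Output position k takes the slice whose original index is
    rank_indices[k] (illustrative sketch, not the Paddle API)."""
    return [slices[i] for i in rank_indices]

print(reorder_by_rank(['Seq0', 'Seq1', 'Seq2', 'Seq3'], [3, 0, 2, 1]))
# ['Seq3', 'Seq0', 'Seq2', 'Seq1']
```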

NOTE: This operator sorts Input(X) according to a given LoDRankTable which does not need to be calculated according to Input(X). It can be calculated according to another different sequence, and then this operator sorts Input(X) according to the given LoDRankTable.

Parameters:

• x (LoDTensor) – The input lod tensor to be reordered according to Input(RankTable).
• rank_table (LoDRankTable) – The rank table according to which Input(X) is reordered.

Returns: The reordered lod tensor.
Return type: Variable

### ParallelDo¶

class paddle.fluid.layers.ParallelDo(places, use_nccl=False, name=None)

ParallelDo class.

ParallelDo class is used to create a ParallelDo.

### Print¶

paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=-1, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')

Print operator

This creates a print op that will print when a tensor is accessed.

It wraps the tensor passed in, so that whenever the tensor is accessed, the given message is printed along with the current value of the tensor.

Parameters:

• input (Variable) – A Tensor to print.
• summarize (int) – Print this number of elements in the tensor; will print all if negative.
• message (str) – A string message to print as a prefix.
• first_n (int) – Only log first_n number of times.
• print_tensor_name (bool) – Print the tensor name.
• print_tensor_type (bool) – Print the tensor type.
• print_tensor_shape (bool) – Print the tensor shape.
• print_tensor_lod (bool) – Print the tensor lod.
• print_phase (str) – Which phase to display, one of ‘forward’, ‘backward’ and ‘both’. If set to ‘backward’ or ‘both’, will print the gradients of the input tensor.

Returns: Output tensor, same data as the input tensor.
Return type: Variable

Examples




value = some_layer(...)
Print(value, summarize=10,
      message="The content of some_layer: ")

## device¶

### get_places¶

paddle.fluid.layers.get_places(device_count=None, device_type=None)

Returns a list of places based on flags. The list will be used for parallel execution.

Parameters:

• device_count (INT) – The device count.
• device_type (STRING) – The device type.

Returns: A vector of Place.

## io¶

### data¶

paddle.fluid.layers.data(name, shape, append_batch_size=True, dtype='float32', lod_level=0, type=VarType.LOD_TENSOR, stop_gradient=True)

Data Layer

This function takes in the input and, based on whether the data has to be returned as a minibatch, creates the global variable using the helper functions. The global variable can be accessed by all the following operators in the graph.

All the input variables of this function are passed in as local variables to the LayerHelper constructor.

Parameters:

• name (str) – The name/alias of the variable.
• shape (list) – A list declaring the shape.
• append_batch_size (bool) – Whether or not to append a batch dimension to the shape.
• dtype (int|float) – The type of the data: float32, float_16, int, etc.
• type (VarType) – The output type. By default it is LOD_TENSOR.
• lod_level (int) – The LoD level. 0 means the input data is not a sequence.
• stop_gradient (bool) – A boolean that mentions whether gradient should flow.

Returns: The global variable that gives access to the data.
Return type: Variable

Examples

data = fluid.layers.data(name='x', shape=[784], dtype='float32')


### BlockGuardServ¶

class paddle.fluid.layers.BlockGuardServ(server)

BlockGuardServ class.

BlockGuardServ class is used to create an op with a block in a program.

### ListenAndServ¶

class paddle.fluid.layers.ListenAndServ(endpoint, inputs, fan_in=1, optimizer_mode=True)

ListenAndServ class.

ListenAndServ class is used to wrap listen_and_serv op to create a server which can receive variables from clients and run a block.

### Send¶

paddle.fluid.layers.Send(endpoints, send_vars, get_vars=None)

Send layer

Parameters:

• endpoints – Comma-separated IP:PORT pairs, in the order of send_vars to send.
• send_vars – Variables to send.
• get_vars – Variables to get from the server after the send completes.

Send variables to the server side, and get variables back from the server side when the server has finished running the server-side program.

### open_recordio_file¶

paddle.fluid.layers.open_recordio_file(filename, shapes, lod_levels, dtypes, pass_num=1, for_parallel=True)

Open a RecordIO file

This layer takes a RecordIO file to read from and returns a Reader Variable. Via the Reader Variable, we can get data from the given RecordIO file.

Parameters:

• filename (str) – The RecordIO file’s name.
• shapes (list) – List of tuples declaring the data shapes.
• lod_levels (list) – List of ints declaring the data lod_level.
• dtypes (list) – List of strs declaring the data types.
• pass_num (int) – Number of passes to run.
• for_parallel (Bool) – Set it to True if you are going to run subsequent operators in parallel.

Returns: A Reader Variable via which we can get the RecordIO file data.
Return type: Variable

Examples

reader = fluid.layers.io.open_recordio_file(
    filename='./data.recordio',
    shapes=[(3, 224, 224), (1,)],
    lod_levels=[0, 0],
    dtypes=['float32', 'int64'])

# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.io.read_file(reader)


### open_files¶

paddle.fluid.layers.open_files(filenames, shapes, lod_levels, dtypes, thread_num, buffer_size=None, pass_num=1, for_parallel=True)

Open files

This layer takes a list of files to read from and returns a Reader Variable. Via the Reader Variable, we can get data from the given files. All files must have name suffixes indicating their formats, e.g., ‘*.recordio’.

Parameters:

• filenames (list) – The list of file names.
• shapes (list) – List of tuples declaring the data shapes.
• lod_levels (list) – List of ints declaring the data lod_level.
• dtypes (list) – List of strs declaring the data types.
• thread_num (int) – The maximal number of concurrent prefetch threads.
• buffer_size (int) – The size of the prefetch buffer.
• pass_num (int) – Number of passes to run.
• for_parallel (Bool) – Set it to True if you are going to run subsequent operators in parallel.

Returns: A Reader Variable via which we can get the file data.
Return type: Variable

Examples

reader = fluid.layers.io.open_files(
    filenames=['./data1.recordio', './data2.recordio'],
    shapes=[(3, 224, 224), (1,)],
    lod_levels=[0, 0],
    dtypes=['float32', 'int64'],
    buffer_size=2)

# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.io.read_file(reader)


paddle.fluid.layers.read_file(file_obj)

### shuffle¶

paddle.fluid.layers.shuffle(reader, buffer_size)

### batch¶

paddle.fluid.layers.batch(reader, batch_size)

### double_buffer¶

paddle.fluid.layers.double_buffer(reader, place=None, name=None)

## nn¶

### fc¶

paddle.fluid.layers.fc(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, use_cudnn=False, use_mkldnn=False, act=None, is_test=False, name=None)

Fully Connected Layer

The fully connected layer can take multiple tensors as its inputs. It creates a variable called weights for each input tensor, which represents a fully connected weight matrix from each input unit to each output unit. The fully connected layer multiplies each input tensor by its corresponding weight to produce an output Tensor. If multiple input tensors are given, the results of the multiplications are summed up. If bias_attr is not None, a bias variable will be created and added to the output. Finally, if activation is not None, it will be applied to the output as well.

This process can be formulated as follows:

$Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})$

In the above equation:

• $N$: Number of input tensors.
• $X_i$: The input tensor.
• $W$: The weights created by this layer.
• $b$: The bias parameter created by this layer (if needed).
• $Act$: The activation function.
• $Out$: The output tensor.
Parameters:

• input (Variable|list of Variable) – The input tensor(s) of this layer; the dimension of the input tensor(s) is at least 2.
• size (int) – The number of output units in this layer.
• num_flatten_dims (int, default 1) – The fc layer can accept an input tensor with more than two dimensions. If this happens, the multidimensional tensor will first be flattened into a 2-dimensional matrix. The parameter num_flatten_dims determines how the input tensor is flattened: the first num_flatten_dims (inclusive, index starts from 1) dimensions are flattened to form the first dimension of the final matrix (the height of the matrix), and the rest rank(X) - num_flatten_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). For example, suppose X is a 5-dimensional tensor with shape [2, 3, 4, 5, 6] and num_flatten_dims = 3. Then the flattened matrix will have shape [2 x 3 x 4, 5 x 6] = [24, 30].
• param_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for the learnable parameters/weights of this layer.
• bias_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for the bias of this layer. If set to None, no bias will be added to the output units.
• act (str, default None) – Activation to be applied to the output of this layer.
• is_test (bool) – A flag indicating whether execution is in test phase.
• use_mkldnn (bool) – Whether to use the mkldnn kernel; valid only when the mkldnn library is installed. Default: False.
• name (str, default None) – The name of this layer.

Returns: A tensor variable storing the transformation result.
Raises: ValueError – If the rank of the input tensor is less than 2.
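The num_flatten_dims rule can be checked with a small plain-Python helper; the function below is an illustrative sketch, not part of the Paddle API:

```python
from functools import reduce
import operator

def flattened_shape(shape, num_flatten_dims):
    # the first num_flatten_dims dims form the matrix height,
    # the remaining dims form its width
    height = reduce(operator.mul, shape[:num_flatten_dims], 1)
    width = reduce(operator.mul, shape[num_flatten_dims:], 1)
    return [height, width]

print(flattened_shape([2, 3, 4, 5, 6], 3))  # [24, 30]
```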

Examples

data = fluid.layers.data(
name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=data, size=1000, act="tanh")


### embedding¶

paddle.fluid.layers.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')

Embedding Layer

This layer is used to lookup embeddings of IDs, provided by input, in a lookup table. The result of this lookup is the embedding of each ID in the input.

All the input variables are passed in as local variables to the LayerHelper constructor.

Parameters:

• input (Variable) – The tensor variable containing the IDs.
• size (tuple|list) – The shape of the lookup table parameter. It should have two elements: the size of the dictionary of embeddings and the size of each embedding vector, respectively.
• is_sparse (bool) – The flag indicating whether to use sparse update.
• padding_idx (int|long|None) – If None, it has no effect on the lookup. Otherwise, the given padding_idx indicates padding the output with zeros whenever the lookup encounters it in input. If $padding\_idx < 0$, the padding_idx used in the lookup is $size[0] + padding\_idx$.
• param_attr (ParamAttr) – Parameters for this layer.
• dtype (np.dtype|core.VarDesc.VarType|str) – The type of the data: float32, float_16, int, etc.

Returns: The tensor variable storing the embeddings of the supplied inputs.
Return type: Variable

Examples

dict_size = len(dataset.ids)
data = fluid.layers.data(name='ids', shape=[32, 32], dtype='int64')
fc = fluid.layers.embedding(input=data, size=[dict_size, 16])
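The lookup and padding_idx semantics can be sketched in plain NumPy; this is an illustrative helper, not the Paddle API:

```python
import numpy as np

def embedding_lookup(ids, table, padding_idx=None):
    """Sketch of the embedding-lookup semantics (not the Paddle API)."""
    out = table[ids].copy()          # row ids index into the lookup table
    if padding_idx is not None:
        if padding_idx < 0:          # negative indices wrap: size[0] + padding_idx
            padding_idx = table.shape[0] + padding_idx
        out[ids == padding_idx] = 0.0  # rows hit by padding_idx become zeros
    return out

table = np.arange(12, dtype=np.float32).reshape(4, 3)  # dict of 4, dim 3
ids = np.array([1, 0, 3])
print(embedding_lookup(ids, table, padding_idx=0)[1])  # [0. 0. 0.]
```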


### dynamic_lstm¶

paddle.fluid.layers.dynamic_lstm(input, size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None)

Dynamic LSTM Layer

The default implementation uses diagonal/peephole connections (https://arxiv.org/pdf/1402.1128.pdf); the formula is as follows:

\begin{align}\begin{aligned}i_t & = \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i)\\f_t & = \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + W_{fc}c_{t-1} + b_f)\\\tilde{c_t} & = act_g(W_{cx}x_t + W_{ch}h_{t-1} + b_c)\\o_t & = \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + W_{oc}c_t + b_o)\\c_t & = f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t & = o_t \odot act_h(c_t)\end{aligned}\end{align}

where the $W$ terms denote weight matrices (e.g. $W_{xi}$ is the matrix of weights from the input gate to the input), and $W_{ic}, W_{fc}, W_{oc}$ are diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices. The $b$ terms denote bias vectors ($b_i$ is the input gate bias vector), $\sigma$ is the non-linear activation, such as the logistic sigmoid function, and $i, f, o$ and $c$ are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector $h$.

The $\odot$ is the element-wise product of the vectors. $act_g$ and $act_h$ are the cell input and cell output activation functions and tanh is usually used for them. $\tilde{c_t}$ is also called candidate hidden state, which is computed based on the current input and the previous hidden state.

Set use_peepholes to False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that the $W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}$ operations on the input $x_{t}$ are NOT included in this operator. Users can choose to use a fully-connected layer before the LSTM layer.

Parameters:

• input (Variable) – The input of the dynamic_lstm layer, which supports variable-length input sequences. The underlying tensor in this Variable is a matrix with shape (T x 4D), where T is the total number of time steps in this mini-batch and D is the hidden size.
• size (int) – 4 * hidden size.
• param_attr (ParamAttr|None) – The parameter attribute for the learnable hidden-hidden weights. Weights = {$W_{ch}, W_{ih}, W_{fh}, W_{oh}$}. The shape is (D x 4D), where D is the hidden size.
• bias_attr (ParamAttr|None) – The bias attribute for the learnable bias weights, which contains two parts: input-hidden bias weights and, if use_peepholes is True, peephole connection weights. If use_peepholes = False: Biases = {$b_c, b_i, b_f, b_o$}, shape (1 x 4D). If use_peepholes = True: Biases = {$b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}$}, shape (1 x 7D).
• use_peepholes (bool) – Whether to enable diagonal/peephole connections, default True.
• is_reverse (bool) – Whether to compute the reversed LSTM, default False.
• gate_activation (str) – The activation for the input gate, forget gate and output gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
• cell_activation (str) – The activation for the cell output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
• candidate_activation (str) – The activation for the candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
• dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
• name (str|None) – A name for this layer (optional). If None, the layer will be named automatically.

Returns: The hidden state and cell state of the LSTM. The shape of both is (T x D), and the lod is the same as the input.
Return type: tuple

Examples

hidden_dim = 512
forward_proj = fluid.layers.fc(input=input_seq, size=hidden_dim * 4,
                               act=None, bias_attr=None)
forward, _ = fluid.layers.dynamic_lstm(
    input=forward_proj, size=hidden_dim * 4, use_peepholes=False)
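One step of the peephole-LSTM recurrence above can be sketched in plain NumPy. All names below are illustrative, and the four input projections are assumed precomputed (as the note above explains, they are not part of the operator); this is not the Paddle implementation:

```python
import numpy as np

def lstm_step(x_proj, h_prev, c_prev, W, b, peep):
    """Sketch of one peephole-LSTM step (not the Paddle implementation).
    x_proj holds the precomputed input projections
    W_{ix}x_t, W_{fx}x_t, W_{cx}x_t, W_{ox}x_t."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    W_ih, W_fh, W_ch, W_oh = W     # hidden-to-hidden weight matrices
    b_i, b_f, b_c, b_o = b         # bias vectors
    W_ic, W_fc, W_oc = peep        # diagonal peephole weights, as vectors
    i = sigmoid(x_proj[0] + W_ih @ h_prev + W_ic * c_prev + b_i)   # input gate
    f = sigmoid(x_proj[1] + W_fh @ h_prev + W_fc * c_prev + b_f)   # forget gate
    c_tilde = np.tanh(x_proj[2] + W_ch @ h_prev + b_c)             # candidate
    c = f * c_prev + i * c_tilde                                   # cell state
    o = sigmoid(x_proj[3] + W_oh @ h_prev + W_oc * c + b_o)        # output gate
    h = o * np.tanh(c)                                             # hidden state
    return h, c

D = 3  # hidden size
rng = np.random.RandomState(0)
h, c = lstm_step(rng.randn(4, D), np.zeros(D), np.zeros(D),
                 [rng.randn(D, D) for _ in range(4)],
                 [np.zeros(D)] * 4, [np.zeros(D)] * 3)
print(h.shape, c.shape)  # (3,) (3,)
```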


### dynamic_lstmp¶

paddle.fluid.layers.dynamic_lstmp(input, size, proj_size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', proj_activation='tanh', dtype='float32', name=None)

Dynamic LSTMP Layer

LSTMP (LSTM with recurrent projection) layer has a separate projection layer after the LSTM layer, projecting the original hidden state to a lower-dimensional one. This is proposed to reduce the total number of parameters and the computational complexity of the LSTM, especially when the size of the output units is relatively large (https://research.google.com/pubs/archive/43905.pdf).

The formula is as follows:

\begin{align}\begin{aligned}i_t & = \sigma(W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i)\\f_t & = \sigma(W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f)\\\tilde{c_t} & = act_g(W_{cx}x_t + W_{cr}r_{t-1} + b_c)\\o_t & = \sigma(W_{ox}x_{t} + W_{or}r_{t-1} + W_{oc}c_t + b_o)\\c_t & = f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t & = o_t \odot act_h(c_t)\\r_t & = \overline{act_h}(W_{rh}h_t)\end{aligned}\end{align}

In the above formula:

• $W$: Denotes weight matrices (e.g. $W_{xi}$ is the matrix of weights from the input gate to the input).
• $W_{ic}$, $W_{fc}$, $W_{oc}$: Diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.
• $b$: Denotes bias vectors (e.g. $b_i$ is the input gate bias vector).
• $\sigma$: The activation, such as logistic sigmoid function.
• $i, f, o$ and $c$: The input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector $h$.
• $h$: The hidden state.
• $r$: The recurrent projection of the hidden state.
• $\tilde{c_t}$: The candidate hidden state, whose computation is based on the current input and previous hidden state.
• $\odot$: The element-wise product of the vectors.
• $act_g$ and $act_h$: The cell input and cell output activation functions and tanh is usually used for them.
• $\overline{act_h}$: The activation function for the projection output, usually using identity or same as $act_h$.

Set use_peepholes to False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that the $W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}$ operations on the input $x_{t}$ are NOT included in this operator. Users can choose to use a fully-connected layer before the LSTMP layer.

Parameters:

• input (Variable) – The input of the dynamic_lstmp layer, which supports variable-length input sequences. The underlying tensor in this Variable is a matrix with shape (T x 4D), where T is the total number of time steps in this mini-batch and D is the hidden size.
• size (int) – 4 * hidden size.
• proj_size (int) – The size of the projection output.
• param_attr (ParamAttr|None) – The parameter attribute for the learnable hidden-hidden weight and projection weight. Hidden-hidden weight = {$W_{ch}, W_{ih}, W_{fh}, W_{oh}$}, with shape (P x 4D), where P is the projection size and D the hidden size. Projection weight = {$W_{rh}$}, with shape (D x P).
• bias_attr (ParamAttr|None) – The bias attribute for the learnable bias weights, which contains two parts: input-hidden bias weights and, if use_peepholes is True, peephole connection weights. If use_peepholes = False: Biases = {$b_c, b_i, b_f, b_o$}, shape (1 x 4D). If use_peepholes = True: Biases = {$b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}$}, shape (1 x 7D).
• use_peepholes (bool) – Whether to enable diagonal/peephole connections, default True.
• is_reverse (bool) – Whether to compute the reversed LSTM, default False.
• gate_activation (str) – The activation for the input gate, forget gate and output gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
• cell_activation (str) – The activation for the cell output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
• candidate_activation (str) – The activation for the candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
• proj_activation (str) – The activation for the projection output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
• dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
• name (str|None) – A name for this layer (optional). If None, the layer will be named automatically.

Returns: The projection of the hidden state and the cell state of the LSTMP. The shape of the projection is (T x P), that of the cell state is (T x D), and the LoD of both is the same as the input.
Return type: tuple

Examples

hidden_dim, proj_dim = 512, 256
fc_out = fluid.layers.fc(input=input_seq, size=hidden_dim * 4,
                         act=None, bias_attr=None)
proj_out, _ = fluid.layers.dynamic_lstmp(input=fc_out,
                                         size=hidden_dim * 4,
                                         proj_size=proj_dim,
                                         use_peepholes=False,
                                         is_reverse=True,
                                         cell_activation="tanh",
                                         proj_activation="tanh")


### dynamic_gru¶

paddle.fluid.layers.dynamic_gru(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None)

Dynamic GRU Layer

The formula is as follows:

\begin{align}\begin{aligned}u_t & = act_g(W_{ux}x_{t} + W_{uh}h_{t-1} + b_u)\\r_t & = act_g(W_{rx}x_{t} + W_{rh}h_{t-1} + b_r)\\\tilde{h_t} & = act_c(W_{cx}x_{t} + W_{ch}(r_t \odot h_{t-1}) + b_c)\\h_t & = (1-u_t) \odot h_{t-1} + u_t \odot \tilde{h_t}\end{aligned}\end{align}

The $\odot$ is the element-wise product of the vectors. $act_g$ is the update gate and reset gate activation function and $sigmoid$ is usually used for it. $act_c$ is the activation function for candidate hidden state and $tanh$ is usually used for it.

Note that these $W_{ux}x_{t}, W_{rx}x_{t}, W_{cx}x_{t}$ operations on the input $x_{t}$ are NOT included in this operator. Users can apply a fully-connected layer before the GRU layer to perform them.
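The fc-then-GRU split can be seen in a minimal NumPy sketch of one recurrence step. The weight and bias names (`W_uh`, `b_u`, etc.) mirror the formula above and are illustrative, not Fluid API names:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_proj, h_prev, W_uh, W_rh, W_ch, b_u, b_r, b_c):
    """One step of the GRU recurrence above. x_proj already contains
    W_ux x_t, W_rx x_t and W_cx x_t concatenated, mirroring the
    fully-connected layer applied before dynamic_gru."""
    xu, xr, xc = np.split(x_proj, 3)
    u = sigmoid(xu + W_uh @ h_prev + b_u)        # update gate u_t
    r = sigmoid(xr + W_rh @ h_prev + b_r)        # reset gate r_t
    c = np.tanh(xc + W_ch @ (r * h_prev) + b_c)  # candidate hidden state
    return (1.0 - u) * h_prev + u * c            # h_t
```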

Parameters: input (Variable) – The input of dynamic_gru layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape $(T \times 3D)$, where $T$ is the total time steps in this mini-batch, $D$ is the hidden size. size (int) – The dimension of the gru cell. param_attr (ParamAttr|None) – The parameter attribute for the learnable hidden-hidden weight matrix. Note: The shape of the weight matrix is $(D \times 3D)$, where $D$ is the hidden size. All elements in the weight matrix can be divided into two parts. The first part are weights of the update gate and reset gate with shape $(D \times 2D)$, and the second part are weights for the candidate hidden state with shape $(D \times D)$. bias_attr (ParamAttr) – The parameter attribute for the learnable hidden-hidden bias. is_reverse (bool) – Whether to compute reversed GRU, default False. gate_activation (str) – The activation for update gate and reset gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”. candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”. The hidden state of GRU. The shape is $(T \times D)$, and lod is the same with the input. Variable

Examples

hidden_dim = 512
x = fluid.layers.fc(input=data, size=hidden_dim * 3)
hidden = fluid.layers.dynamic_gru(input=x, size=hidden_dim)


### gru_unit¶

paddle.fluid.layers.gru_unit(input, hidden, size, param_attr=None, bias_attr=None, activation='tanh', gate_activation='sigmoid')

GRU unit layer. The equation of a gru step is:

\begin{align}\begin{aligned}u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)\\r_t & = actGate(xr_{t} + W_r h_{t-1} + b_r)\\m_t & = actNode(xm_t + W_c dot(r_t, h_{t-1}) + b_m)\\h_t & = dot((1-u_t), m_t) + dot(u_t, h_{t-1})\end{aligned}\end{align}

The inputs of gru unit includes $z_t$, $h_{t-1}$. In terms of the equation above, the $z_t$ is split into 3 parts - $xu_t$, $xr_t$ and $xm_t$. This means that in order to implement a full GRU unit operator for an input, a fully connected layer has to be applied, such that $z_t = W_{fc}x_t$.

The terms $u_t$ and $r_t$ represent the update and reset gates of the GRU cell. Unlike LSTM, GRU has one fewer gate. However, there is an intermediate candidate hidden output, which is denoted by $m_t$. This layer has three outputs: $h_t$, $dot(r_t, h_{t-1})$ and the concatenation of $u_t$, $r_t$ and $m_t$.

Parameters: input (Variable) – The fc transformed input value of current step. hidden (Variable) – The hidden value of lstm unit from previous step. size (integer) – The input dimension value. param_attr (ParamAttr) – The weight parameters for gru unit. Default: None bias_attr (ParamAttr) – The bias parameters for gru unit. Default: None activation (string) – The activation type for cell (actNode). Default: ‘tanh’ gate_activation (string) – The activation type for gates (actGate). Default: ‘sigmoid’ The hidden value, reset-hidden value and gate values. tuple

Examples

# assuming we have x_t_data and prev_hidden of size=10
x_t = fluid.layers.fc(input=x_t_data, size=30)
hidden_val, r_h_val, gate_val = fluid.layers.gru_unit(input=x_t,
                                                      hidden=prev_hidden,
                                                      size=30)


### linear_chain_crf¶

paddle.fluid.layers.linear_chain_crf(input, label, param_attr=None)

### crf_decoding¶

paddle.fluid.layers.crf_decoding(input, param_attr, label=None)

### cos_sim¶

paddle.fluid.layers.cos_sim(X, Y)

This function computes the cosine similarity between two tensors X and Y and returns the result as the output.
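As a reference, the row-wise cosine similarity the layer computes can be sketched in NumPy (the shapes and broadcasting here are illustrative assumptions, not the operator's exact implementation):

```python
import numpy as np

def cos_sim(x, y):
    """Row-wise cosine similarity between two [N, D] arrays,
    returning an [N, 1] array."""
    num = (x * y).sum(axis=1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1)
    return (num / den).reshape(-1, 1)
```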

### cross_entropy¶

paddle.fluid.layers.cross_entropy(input, label, soft_label=False)

Cross Entropy Layer

This layer computes the cross entropy between input and label. It supports both standard cross-entropy and soft-label cross-entropy loss computation.

1. One-hot cross-entropy:

soft_label = False, Label[i, 0] indicates the class index for sample i:

$Y[i] = -\log(X[i, Label[i]])$
2. Soft-label cross-entropy:

soft_label = True, Label[i, j] indicates the soft label of class j for sample i:

$Y[i] = -\sum_j{Label[i, j] * \log(X[i, j])}$

Please make sure that in this case the summation of each row of label equals one.

3. One-hot cross-entropy with vectorized label:

As a special case of 2), when each row of ‘label’ has only one non-zero element which is equal to 1, soft-label cross-entropy degenerates to a one-hot cross-entropy with one-hot label representation.
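The cases above can be checked with a small NumPy reference (a sketch of the math only, not the Fluid operator):

```python
import numpy as np

def cross_entropy(x, label, soft_label=False):
    """Reference computation for the two modes described above.
    x: [N, D] probabilities; label: [N, 1] int indices, or [N, D] soft labels."""
    if soft_label:
        # Y[i] = -sum_j label[i, j] * log(x[i, j])
        return -(label * np.log(x)).sum(axis=1, keepdims=True)
    # Y[i] = -log(x[i, label[i]])
    idx = label.reshape(-1)
    return -np.log(x[np.arange(x.shape[0]), idx]).reshape(-1, 1)
```

With a one-hot soft label, the soft-label path degenerates to the one-hot result, matching case 3 above.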

Parameters: input (Variable|list) – a 2-D tensor with shape [N x D], where N is the batch size and D is the number of classes. This input is a probability computed by the previous operator, which is almost always the result of a softmax operator. label (Variable|list) – the ground truth which is a 2-D tensor. When soft_label is set to False, label is a tensor with shape [N x 1]. When soft_label is set to True, label is a tensor with shape [N x D]. soft_label (bool) – a flag indicating whether to interpret the given labels as soft labels, default False. Returns: A 2-D tensor with shape [N x 1], the cross entropy loss. Raises: ValueError – 1) when the 1st dimensions of input and label are not equal; 2) when soft_label == True and the 2nd dimensions of input and label are not equal; 3) when soft_label == False and the 2nd dimension of label is not 1.

Examples

predict = fluid.layers.fc(input=net, size=classdim, act='softmax')
cost = fluid.layers.cross_entropy(input=predict, label=label)


### square_error_cost¶

paddle.fluid.layers.square_error_cost(input, label)

Square error cost layer

This layer accepts input predictions and target label and returns the squared error cost.

For predictions, $X$, and target labels, $Y$, the equation is:

$Out = (X - Y)^2$

In the above equation:

• $X$: Input predictions, a tensor.
• $Y$: Input labels, a tensor.
• $Out$: Output value, same shape with $X$.
Parameters: input (Variable) – Input tensor, has predictions. label (Variable) – Label tensor, has target labels. The tensor variable storing the element-wise squared error difference of input and label. Variable

Examples

y = layers.data(name='y', shape=[1], dtype='float32')
y_predict = layers.data(name='y_predict', shape=[1], dtype='float32')
cost = layers.square_error_cost(input=y_predict, label=y)


### chunk_eval¶

paddle.fluid.layers.chunk_eval(input, label, chunk_scheme, num_chunk_types, excluded_chunk_types=None)

This function computes and outputs the precision, recall and F1-score of chunk detection.

### sequence_conv¶

paddle.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=None, bias_attr=None, param_attr=None, act=None)

This function creates the op for sequence_conv, using the inputs and other convolutional configurations for the filters and stride as given in the input parameters to the function.

### conv2d¶

paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, use_mkldnn=False, act=None, name=None)

Convolution2D Layer

The convolution2D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input(Input) and Output(Output) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. For details of the convolution layer, please refer to UFLDL’s convolution tutorial. If the bias attribute and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input $X$, the equation is:

$Out = \sigma (W \ast X + b)$

In the above equation:

• $X$: Input value, a tensor with NCHW format.
• $W$: Filter value, a tensor with MCHW format.
• $\ast$: Convolution operation.
• $b$: Bias value, a 2-D tensor with shape [M, 1].
• $\sigma$: Activation function.
• $Out$: Output value, the shape of $Out$ and $X$ may be different.

Example

• Input:

Input shape: $(N, C_{in}, H_{in}, W_{in})$

Filter shape: $(C_{out}, C_{in}, H_f, W_f)$

• Output: Output shape: $(N, C_{out}, H_{out}, W_{out})$

Where



$\begin{split}H_{out} &= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ W_{out} &= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}$
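A small helper makes the output-size formula concrete (a sketch; integer division reflects the assumption that the op floors the quotient):

```python
def conv2d_out_size(in_size, filter_size, padding=0, stride=1, dilation=1):
    """Output spatial size (H_out or W_out) from the formula above."""
    return (in_size + 2 * padding
            - (dilation * (filter_size - 1) + 1)) // stride + 1

# e.g. a 32x32 input with a 3x3 filter, no padding, stride 1 -> 30x30
```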

Parameters: input (Variable) – The input image with [N, C, H, W] format. num_filters (int) – The number of filter. It is as same as the output image channel. filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square. stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1. padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0. dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1. groups (int) – The groups number of the Conv2d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1 param_attr (ParamAttr) – The parameters to the Conv2d Layer. Default: None bias_attr (ParamAttr) – Bias parameter for the Conv2d layer. Default: None use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True act (str) – Activation type. Default: None name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The tensor variable storing the convolution and non-linearity activation result. Variable ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(
    name='data', shape=[3, 32, 32], dtype='float32')
conv2d = fluid.layers.conv2d(
    input=data, num_filters=2, filter_size=3, act="relu")


### sequence_pool¶

paddle.fluid.layers.sequence_pool(input, pool_type)

This function adds the operator for sequence pooling. It pools features of all time-steps of each instance, and is applied on top of the input using the pool_type mentioned in the parameters.

It supports six pool_type:

• average: $Out[i] = \frac{\sum_jX_{ij}}{len(X_i)}$
• sum: $Out[i] = \sum_jX_{ij}$
• sqrt: $Out[i] = \frac{\sum_jX_{ij}}{\sqrt{len(X_i)}}$
• max: $Out[i] = max(X_i)$
• last: $Out[i]$ holds the last time step of $X_i$
• first: $Out[i]$ holds the first time step of $X_i$
x is a 1-level LoDTensor:
x.lod = [[0, 2, 5, 7]]
x.data = [1, 3, 2, 4, 6, 5, 1]
x.dims = [7, 1]

then output is a Tensor:
out.dim = [3, 1]
with condition len(x.lod[-1]) - 1 == out.dims[0]

for different pool_type:
average: out.data = [2, 4, 3], where 2=(1+3)/2, 4=(2+4+6)/3, 3=(5+1)/2
sum    : out.data = [4, 12, 6], where 4=1+3, 12=2+4+6, 6=5+1
sqrt   : out.data = [2.82, 6.93, 4.24], where 2.82=(1+3)/sqrt(2),
         6.93=(2+4+6)/sqrt(3), 4.24=(5+1)/sqrt(2)
max    : out.data = [3, 6, 5], where 3=max(1,3), 6=max(2,4,6), 5=max(5,1)
last   : out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
first  : out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)

Parameters: input (variable) – The input variable which is a LoDTensor. pool_type (string) – The pooling type of sequence_pool. It supports average, sum, sqrt, max, last and first. The sequence pooling variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                      dtype='float32', lod_level=1)
avg_x = fluid.layers.sequence_pool(input=x, pool_type='average')
sum_x = fluid.layers.sequence_pool(input=x, pool_type='sum')
sqrt_x = fluid.layers.sequence_pool(input=x, pool_type='sqrt')
max_x = fluid.layers.sequence_pool(input=x, pool_type='max')
last_x = fluid.layers.sequence_pool(input=x, pool_type='last')
first_x = fluid.layers.sequence_pool(input=x, pool_type='first')


### sequence_softmax¶

paddle.fluid.layers.sequence_softmax(input, param_attr=None, bias_attr=None, use_cudnn=True)

### softmax¶

paddle.fluid.layers.softmax(input, param_attr=None, bias_attr=None, use_cudnn=True, name=None)

### pool2d¶

paddle.fluid.layers.pool2d(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, use_mkldnn=False, name=None)

This function adds the operator for pooling in 2 dimensions, using the pooling configurations mentioned in input parameters.
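A naive NumPy sketch shows what 2-D max pooling computes on NCHW input (illustrative only; the real operator also handles padding, ceil_mode, global pooling and average pooling):

```python
import numpy as np

def max_pool2d(x, pool_size=2, stride=2):
    """Naive NCHW max pooling: slide a pool_size x pool_size window
    with the given stride and take the max of each window."""
    n, c, h, w = x.shape
    oh = (h - pool_size) // stride + 1
    ow = (w - pool_size) // stride + 1
    out = np.empty((n, c, oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            hs, ws = i * stride, j * stride
            window = x[:, :, hs:hs + pool_size, ws:ws + pool_size]
            out[:, :, i, j] = window.max(axis=(2, 3))
    return out
```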

### batch_norm¶

paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, use_mkldnn=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False)

This function helps create an operator to implement the BatchNorm layer using the configurations from the input parameters.

### beam_search_decode¶

paddle.fluid.layers.beam_search_decode(ids, scores, name=None)

### conv2d_transpose¶

paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

Convolution2D transpose layer

The convolution2D transpose layer calculates the output based on the input, filter, and dilations, strides, paddings. Input(Input) and output(Output) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The parameters dilations, strides and paddings each have two elements, which represent height and width, respectively. For details of the convolution transpose layer, please refer to the following explanation and the references therein.

For each input $X$, the equation is:

$Out = W \ast X$

In the above equation:

• $X$: Input value, a tensor with NCHW format.
• $W$: Filter value, a tensor with MCHW format.
• $\ast$ : Convolution transpose operation.
• $Out$: Output value, the shape of $Out$ and $X$ may be different.

Example

• Input:

Input shape: $(N, C_{in}, H_{in}, W_{in})$

Filter shape: $(C_{in}, C_{out}, H_f, W_f)$

• Output:

Output shape: $(N, C_{out}, H_{out}, W_{out})$

Where

$\begin{split}H_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\ W_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1\end{split}$
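The transpose output-size formula can be checked with a small helper (a sketch of the arithmetic above, nothing more):

```python
def conv2d_transpose_out_size(in_size, filter_size, padding=0, stride=1,
                              dilation=1):
    """Output spatial size (H_out or W_out) from the formula above."""
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (filter_size - 1) + 1)

# e.g. a 32x32 input with a 3x3 filter, no padding, stride 1 -> 34x34
```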
Parameters: input (Variable) – The input image with [N, C, H, W] format. num_filters (int) – The number of the filter. It is as same as the output image channel. output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain two integers, (image_H, image_W). This parameter only works when filter_size is None. filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square. None if use output size to calculate filter_size. padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0. stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1. dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1. groups (int) – The groups number of the Conv2d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1 param_attr (ParamAttr) – The parameters to the Conv2d_transpose Layer. Default: None bias_attr (ParamAttr) – Bias parameter for the Conv2d layer. Default: None use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True act (str) – Activation type. Default: None name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The tensor variable storing the convolution transpose result. 
Return type: Variable. Raises: ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(
    name='data', shape=[3, 32, 32], dtype='float32')
conv2d_transpose = fluid.layers.conv2d_transpose(
    input=data, num_filters=2, filter_size=3)


### sequence_expand¶

paddle.fluid.layers.sequence_expand(x, y, ref_level=-1, name=None)

Sequence Expand Layer. This layer will expand the input variable x according to specified level lod of y. Please note that lod level of x is at most 1 and rank of x is at least 2. When rank of x is greater than 2, then it would be viewed as a 2-D tensor. Following examples will explain how sequence_expand works:

* Case 1
x is a LoDTensor:
x.lod  = [[0,   2,        4]]
x.data = [[a], [b], [c], [d]]
x.dims = [4, 1]

y is a LoDTensor:
y.lod = [[0,    2,    4],
         [0, 3, 6, 7, 8]]

ref_level: 0

then output is a 1-level LoDTensor:
out.lod =  [[0,   2,        4,        6,        8]]
out.data = [[a], [b], [a], [b], [c], [d], [c], [d]]
out.dims = [8, 1]

* Case 2
x is a Tensor:
x.data = [[a], [b], [c]]
x.dims = [3, 1]

y is a LoDTensor:
y.lod = [[0, 2, 2, 5]]

ref_level: -1

then output is a Tensor:
out.data = [[a], [a], [c], [c], [c]]
out.dims = [5, 1]
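Case 2 above (a plain-tensor x expanded by y's LoD) can be reproduced with NumPy; `sequence_expand_tensor` is a hypothetical helper name for illustration, not a Fluid API:

```python
import numpy as np

def sequence_expand_tensor(x, y_lod):
    """Case 2 above: x is a plain tensor with one row per sequence in y;
    each row is repeated by the corresponding sequence length in y's LoD."""
    lengths = np.diff(y_lod)           # e.g. [0, 2, 2, 5] -> [2, 0, 3]
    return np.repeat(x, lengths, axis=0)
```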

Parameters: x (Variable) – The input variable which is a Tensor or LoDTensor. y (Variable) – The input variable which is a LoDTensor. ref_level (int) – Lod level of y to be referred by x. If set to -1, refer the last level of lod. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The expanded variable which is a LoDTensor. Variable

Examples

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
                      dtype='float32', lod_level=1)
out = fluid.layers.sequence_expand(x=x, y=y, ref_level=0)


### lstm_unit¶

paddle.fluid.layers.lstm_unit(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)

Lstm unit layer. The equation of a lstm step is:

\begin{align}\begin{aligned}i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t & = f_tc_{t-1} + i_t \tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t & = o_t \tanh(c_t)\end{aligned}\end{align}

The inputs of lstm unit include $x_t$, $h_{t-1}$ and $c_{t-1}$. The 2nd dimensions of $h_{t-1}$ and $c_{t-1}$ should be same. The implementation separates the linear transformation and non-linear transformation apart. Here, we take $i_t$ as an example. The linear transformation is applied by calling a fc layer and the equation is:

$L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i$

The non-linear transformation is applied by calling lstm_unit_op and the equation is:

$i_t = \sigma(L_{i_t})$

This layer has two outputs: $h_t$ and $c_t$.
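The non-linear part of the step can be sketched in NumPy; the `l_i` .. `l_o` arguments stand for the linear transformations $L_{i_t}$, $L_{f_t}$, $L_{c_t}$, $L_{o_t}$ computed beforehand (the names are illustrative, not the operator's internals):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(l_i, l_f, l_c, l_o, c_prev, forget_bias=0.0):
    """Apply the gate non-linearities of the equations above to the
    pre-computed linear transformations and the previous cell state."""
    i = sigmoid(l_i)                    # input gate
    f = sigmoid(l_f + forget_bias)      # forget gate
    o = sigmoid(l_o)                    # output gate
    c = f * c_prev + i * np.tanh(l_c)   # new cell state c_t
    h = o * np.tanh(c)                  # new hidden state h_t
    return h, c
```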

Parameters: x_t (Variable) – The input value of current step, a 2-D tensor with shape M x N, M for batch size and N for input size. hidden_t_prev (Variable) – The hidden value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit. cell_t_prev (Variable) – The cell value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit. forget_bias (float) – The forget bias of lstm unit. param_attr (ParamAttr) – The attributes of parameter weights, used to set initializer, name etc. bias_attr (ParamAttr) – The attributes of bias weights, if not False, bias weights will be created and be set to default value. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Returns: The hidden value and cell value of the lstm unit, as a tuple. Raises: ValueError – If the ranks of x_t, hidden_t_prev and cell_t_prev are not 2, or if the 1st dimensions of x_t, hidden_t_prev and cell_t_prev are not the same, or if the 2nd dimensions of hidden_t_prev and cell_t_prev are not the same.

Examples

x_t = fluid.layers.fc(input=x_t_data, size=10)
prev_hidden = fluid.layers.fc(input=prev_hidden_data, size=30)
prev_cell = fluid.layers.fc(input=prev_cell_data, size=30)
hidden_value, cell_value = fluid.layers.lstm_unit(x_t=x_t,
                                                  hidden_t_prev=prev_hidden,
                                                  cell_t_prev=prev_cell)


### reduce_sum¶

paddle.fluid.layers.reduce_sum(input, dim=None, keep_dim=False, name=None)

Computes the sum of tensor elements over the given dimension.

Parameters: input (Variable) – The input variable which is a Tensor or LoDTensor. dim (list|int|None) – The dimensions along which the sum is performed. If None, sum all elements of input and return a Tensor variable with a single element, otherwise must be in the range $[-rank(input), rank(input))$. If $dim[i] < 0$, the dimension to reduce is $rank + dim[i]$. keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The reduced Tensor variable. Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x)  # [3.5]
fluid.layers.reduce_sum(x, dim=0)  # [0.3, 0.5, 1.1, 1.6]
fluid.layers.reduce_sum(x, dim=-1)  # [1.9, 1.6]
fluid.layers.reduce_sum(x, dim=1, keep_dim=True)  # [[1.9], [1.6]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1, 2], [3, 4]],
#      [[5, 6], [7, 8]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x, dim=[1, 2]) # [10, 26]
fluid.layers.reduce_sum(x, dim=[0, 1]) # [16, 20]


### reduce_mean¶

paddle.fluid.layers.reduce_mean(input, dim=None, keep_dim=False, name=None)

Computes the mean of tensor elements over the given dimension.

Parameters: input (Variable) – The input variable which is a Tensor or LoDTensor. dim (list|int|None) – The dimensions along which the mean is computed. If None, compute the mean over all elements of input and return a Tensor variable with a single element, otherwise must be in the range $[-rank(input), rank(input))$. If $dim[i] < 0$, the dimension to reduce is $rank + dim[i]$. keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The reduced Tensor variable. Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x)  # [0.4375]
fluid.layers.reduce_mean(x, dim=0)  # [0.15, 0.25, 0.55, 0.8]
fluid.layers.reduce_mean(x, dim=-1)  # [0.475, 0.4]
fluid.layers.reduce_mean(
    x, dim=1, keep_dim=True)  # [[0.475], [0.4]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x, dim=[1, 2]) # [2.5, 6.5]
fluid.layers.reduce_mean(x, dim=[0, 1]) # [4.0, 5.0]


### reduce_max¶

paddle.fluid.layers.reduce_max(input, dim=None, keep_dim=False, name=None)

Computes the maximum of tensor elements over the given dimension.

Parameters: input (Variable) – The input variable which is a Tensor or LoDTensor. dim (list|int|None) – The dimension along which the maximum is computed. If None, compute the maximum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range $[-rank(input), rank(input))$. If $dim[i] < 0$, the dimension to reduce is $rank + dim[i]$. keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The reduced Tensor variable. Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x)  # [0.9]
fluid.layers.reduce_max(x, dim=0)  # [0.2, 0.3, 0.6, 0.9]
fluid.layers.reduce_max(x, dim=-1)  # [0.9, 0.7]
fluid.layers.reduce_max(x, dim=1, keep_dim=True)  # [[0.9], [0.7]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x, dim=[1, 2]) # [4.0, 8.0]
fluid.layers.reduce_max(x, dim=[0, 1]) # [7.0, 8.0]


### reduce_min¶

paddle.fluid.layers.reduce_min(input, dim=None, keep_dim=False, name=None)

Computes the minimum of tensor elements over the given dimension.

Parameters: input (Variable) – The input variable which is a Tensor or LoDTensor. dim (list|int|None) – The dimensions along which the minimum is computed. If None, compute the minimum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range $[-rank(input), rank(input))$. If $dim[i] < 0$, the dimension to reduce is $rank + dim[i]$. keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The reduced Tensor variable. Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x)  # [0.1]
fluid.layers.reduce_min(x, dim=0)  # [0.1, 0.2, 0.5, 0.7]
fluid.layers.reduce_min(x, dim=-1)  # [0.2, 0.1]
fluid.layers.reduce_min(x, dim=1, keep_dim=True)  # [[0.2], [0.1]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x, dim=[1, 2]) # [1.0, 5.0]
fluid.layers.reduce_min(x, dim=[0, 1]) # [1.0, 2.0]


### reduce_prod¶

paddle.fluid.layers.reduce_prod(input, dim=None, keep_dim=False, name=None)

Computes the product of tensor elements over the given dimension.

Parameters: input (Variable) – The input variable which is a Tensor or LoDTensor. dim (list|int|None) – The dimensions along which the product is performed. If None, multipy all elements of input and return a Tensor variable with a single element, otherwise must be in the range $[-rank(input), rank(input))$. If $dim[i] < 0$, the dimension to reduce is $rank + dim[i]$. keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The reduced Tensor variable. Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_prod(x)  # [0.0002268]
fluid.layers.reduce_prod(x, dim=0)  # [0.02, 0.06, 0.3, 0.63]
fluid.layers.reduce_prod(x, dim=-1)  # [0.027, 0.0084]
fluid.layers.reduce_prod(x, dim=1,
                         keep_dim=True)  # [[0.027], [0.0084]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_prod(x, dim=[1, 2]) # [24.0, 1680.0]
fluid.layers.reduce_prod(x, dim=[0, 1]) # [105.0, 384.0]


### sequence_first_step¶

paddle.fluid.layers.sequence_first_step(input)

This function gets the first step of a sequence.

x is a 1-level LoDTensor:
x.lod = [[0, 2, 5, 7]]
x.data = [1, 3, 2, 4, 6, 5, 1]
x.dims = [7, 1]

then output is a Tensor:
out.dim = [3, 1]
with condition len(x.lod[-1]) - 1 == out.dims[0]
out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)

Parameters: input (variable) – The input variable which is a LoDTensor. The sequence’s first step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
dtype='float32', lod_level=1)
x_first_step = fluid.layers.sequence_first_step(input=x)


### sequence_last_step¶

paddle.fluid.layers.sequence_last_step(input)

This function gets the last step of a sequence.

x is a 1-level LoDTensor:
x.lod = [[0, 2, 5, 7]]
x.data = [1, 3, 2, 4, 6, 5, 1]
x.dims = [7, 1]

then output is a Tensor:
out.dim = [3, 1]
with condition len(x.lod[-1]) - 1 == out.dims[0]
out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)

Parameters: input (variable) – The input variable which is a LoDTensor. The sequence’s last step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
dtype='float32', lod_level=1)
x_last_step = fluid.layers.sequence_last_step(input=x)


### dropout¶

paddle.fluid.layers.dropout(x, dropout_prob, is_test=False, seed=None, name=None)

Computes dropout.

Drop or keep each element of x independently. Dropout is a regularization technique for reducing overfitting by preventing neuron co-adaptation during training. The dropout operator randomly sets the outputs of some units to zero (according to the given dropout probability), while the others remain unchanged.

Parameters: x (variable) – The input tensor. dropout_prob (float) – Probability of setting units to zero. is_test (bool) – A flag indicating whether it is in the test phase or not. seed (int) – A Python integer used to create random seeds. If this parameter is set to None, a random seed is used. NOTE: If an integer seed is given, the same output units will always be dropped. DO NOT use a fixed seed in training. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. A tensor variable. Variable

Examples

x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
dropped = fluid.layers.dropout(x, dropout_prob=0.5)


### split¶

paddle.fluid.layers.split(input, num_or_sections, dim=-1, name=None)

Split the input tensor into multiple sub-tensors.

Parameters: input (Variable) – The input variable which is a Tensor or LoDTensor. num_or_sections (int|list) – If num_or_sections is an integer, then the integer indicates the number of equal sized sub-tensors that the tensor will be divided into. If num_or_sections is a list of integers, the length of list indicates the number of sub-tensors and the integers indicate the sizes of sub-tensors’ dim dimension orderly. dim (int) – The dimension along which to split. If $dim < 0$, the dimension to split along is $rank(input) + dim$. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The list of segmented tensor variables. List

Examples

# x is a Tensor variable with shape [3, 9, 5]:
x0, x1, x2 = fluid.layers.split(x, num_or_sections=3, dim=1)
x0.shape  # [3, 3, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 3, 5]
x0, x1, x2 = fluid.layers.split(
    x, num_or_sections=[2, 3, 4], dim=1)
x0.shape  # [3, 2, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 4, 5]


### ctc_greedy_decoder¶

paddle.fluid.layers.ctc_greedy_decoder(input, blank, name=None)

This op is used to decode sequences with a greedy policy in two steps:

1. Get the index of the maximum value for each row of input, i.e. numpy.argmax(input, axis=1).
2. For each sequence in the result of step 1, merge repeated tokens between two blanks and delete all blanks.

A simple example as below:

Given:

input.data = [[0.6, 0.1, 0.3, 0.1],
[0.3, 0.2, 0.4, 0.1],
[0.1, 0.5, 0.1, 0.3],
[0.5, 0.1, 0.3, 0.1],

[0.5, 0.1, 0.3, 0.1],
[0.2, 0.2, 0.2, 0.4],
[0.2, 0.2, 0.1, 0.5],
[0.5, 0.1, 0.3, 0.1]]

input.lod = [[0, 4, 8]]

Then:

output.data = [[2],
[1],
[3]]

output.lod = [[0, 2, 3]]
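The two decoding steps can be reproduced with a short plain-Python sketch (names here are illustrative, not part of the fluid API). Running it on the data above yields the sequences [2, 1] and [3], matching output.data grouped by output.lod:

```python
def ctc_greedy_decode(probs, lod, blank=0):
    """Greedy CTC decoding: per-row argmax, then merge repeats and drop blanks."""
    offsets = lod[0]
    results = []
    for start, end in zip(offsets, offsets[1:]):
        # Step 1: index of the max value in each row.
        tokens = [max(range(len(r)), key=r.__getitem__) for r in probs[start:end]]
        # Step 2: merge repeated tokens, then delete blanks.
        merged = [t for i, t in enumerate(tokens) if i == 0 or t != tokens[i - 1]]
        results.append([t for t in merged if t != blank])
    return results

data = [[0.6, 0.1, 0.3, 0.1], [0.3, 0.2, 0.4, 0.1],
        [0.1, 0.5, 0.1, 0.3], [0.5, 0.1, 0.3, 0.1],
        [0.5, 0.1, 0.3, 0.1], [0.2, 0.2, 0.2, 0.4],
        [0.2, 0.2, 0.1, 0.5], [0.5, 0.1, 0.3, 0.1]]
decoded = ctc_greedy_decode(data, [[0, 4, 8]], blank=0)  # [[2, 1], [3]]
```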

Parameters: input (Variable) – (LoDTensor), the probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label). blank (int) – the blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1). CTC greedy decode result. If all the sequences in the result were empty, the result LoDTensor will be [-1] with LoD [[0]] and dims [1, 1]. Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')

cost = fluid.layers.ctc_greedy_decoder(input=x, blank=0)


### edit_distance¶

paddle.fluid.layers.edit_distance(input, label, normalized=True, ignored_tokens=None, name=None)

EditDistance operator computes the edit distances between a batch of hypothesis strings and their references. Edit distance, also called Levenshtein distance, measures how dissimilar two strings are by counting the minimum number of operations required to transform one string into another. Here the operations include insertion, deletion, and substitution.

For example, given hypothesis string A = “kitten” and reference B = “sitting”, the edit distance is 3, since transforming A into B takes at least two substitutions and one insertion:

“kitten” -> “sitten” -> “sittin” -> “sitting”
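The Levenshtein distance above can be sketched with the classic dynamic-programming recurrence in plain Python (illustrative only; the operator itself works on index tensors, not strings):

```python
def edit_distance(hyp, ref):
    """Minimum number of insertions, deletions, and substitutions (each cost 1)."""
    m, n = len(hyp), len(ref)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i  # delete all of hyp[:i]
    for j in range(n + 1):
        dist[0][j] = j  # insert all of ref[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[m][n]

d = edit_distance("kitten", "sitting")  # 3
```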

Input(Hyps) is a LoDTensor consisting of all the hypothesis strings with the total number denoted by batch_size, and the separation is specified by the LoD information. And the batch_size reference strings are arranged in order in the same way in the LoDTensor Input(Refs).

Output(Out) contains the batch_size results and each stands for the edit distance for a pair of strings respectively. If Attr(normalized) is true, the edit distance will be divided by the length of reference string.

Parameters: input (Variable) – The indices for hypothesis strings. label (Variable) – The indices for reference strings. normalized (bool) – Indicates whether to normalize the edit distance by the length of the reference string. ignored_tokens (list of int) – Tokens that should be removed before calculating the edit distance. Sequence-to-sequence edit distance in shape [batch_size, 1]. Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')
y = fluid.layers.data(name='y', shape=[7], dtype='float32')

cost = fluid.layers.edit_distance(input=x, label=y)


### l2_normalize¶

paddle.fluid.layers.l2_normalize(x, axis, epsilon=1e-12, name=None)

L2 normalize Layer

The l2 normalize layer normalizes x along dimension axis using an L2 norm. For a 1-D tensor (dim is fixed to 0), this layer computes

output = x / sqrt(max(sum(x**2), epsilon))

For x with more dimensions, this layer independently normalizes each 1-D slice along dimension axis.
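For the 1-D case, the computation is short enough to sketch directly (illustrative helper, not the fluid API):

```python
import math

def l2_normalize_1d(x, epsilon=1e-12):
    """Divide x by max(||x||_2, sqrt(epsilon))."""
    norm = math.sqrt(max(sum(v * v for v in x), epsilon))
    return [v / norm for v in x]

unit = l2_normalize_1d([3.0, 4.0])  # [0.6, 0.8]
```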

Parameters: x (Variable|list) – The input tensor to l2_normalize layer. axis (int) – Dimension along which to normalize the input. epsilon (float) – A lower bound value for x‘s l2 norm. sqrt(epsilon) will be used as the divisor if the l2 norm of x is less than sqrt(epsilon). name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The output tensor variable. Variable

Examples

data = fluid.layers.data(name="data",
shape=(3, 17, 13),
dtype="float32")
normed = fluid.layers.l2_normalize(x=data, axis=1)


### matmul¶

paddle.fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False, name=None)

Applies matrix multiplication to two tensors.

Currently, the input tensors may have arbitrary rank, but when the rank of either input is larger than 3, the two inputs must have equal rank.

The actual behavior depends on the shapes of $x$, $y$ and the flag values of transpose_x, transpose_y. Specifically:

• If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is rank-1 of shape $[D]$, then for $x$ it is treated as $[1, D]$ in nontransposed form and as $[D, 1]$ in transposed form, whereas for $y$ it is the opposite: It is treated as $[D, 1]$ in nontransposed form and as $[1, D]$ in transposed form.
• After transpose, the two tensors are 2-D or n-D and matrix multiplication performs in the following way.
• If both are 2-D, they are multiplied like conventional matrices.
• If either is n-D, it is treated as a stack of matrices residing in the last two dimensions and a batched matrix multiply supporting broadcast applies on the two tensors.

Also note that if the raw tensor $x$ or $y$ is rank-1 and nontransposed, the prepended or appended dimension $1$ will be removed after matrix multiplication.

Parameters: x (Variable) – The input variable which is a Tensor or LoDTensor. y (Variable) – The input variable which is a Tensor or LoDTensor. transpose_x (bool) – Whether to transpose $x$ before multiplication. transpose_y (bool) – Whether to transpose $y$ before multiplication. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The product Tensor variable. Variable

Examples

# Examples to clarify shapes of the inputs and output
# x: [B, ..., M, K], y: [B, ..., K, N]
fluid.layers.matmul(x, y)  # out: [B, ..., M, N]

# x: [B, M, K], y: [B, K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]

# x: [B, M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]

# x: [M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [M, N]

# x: [B, M, K], y: [K]
fluid.layers.matmul(x, y)  # out: [B, M]

# x: [K], y: [K]
fluid.layers.matmul(x, y)  # out: [1]

# x: [M], y: [N]
fluid.layers.matmul(x, y, True, True)  # out: [M, N]


### topk¶

paddle.fluid.layers.topk(input, k, name=None)

This operator is used to find values and indices of the k largest entries for the last dimension.

If the input is a vector (rank=1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

If the input is a Tensor with higher rank, this operator computes the top k entries along the last dimension.

Parameters: input (Variable) – The input variable which can be a vector or a Tensor with higher rank. k (int) – An integer value to specify the top k largest elements. name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically. values (Variable): The k largest elements along each last dimensional slice. indices (Variable): The indices of the values within the last dimension of input.

Examples

top5_values, top5_indices = fluid.layers.topk(input, k=5)
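For a single 1-D slice, the values/indices relationship described above can be sketched in plain Python (illustrative only; tie-breaking here follows the stable sort, which the operator does not guarantee):

```python
def topk_1d(row, k):
    """Values and indices of the k largest entries of a 1-D list."""
    indices = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
    values = [row[i] for i in indices]
    return values, indices

vals, idxs = topk_1d([1, 9, 3, 7], 2)  # ([9, 7], [1, 3])
```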


### warpctc¶

paddle.fluid.layers.warpctc(input, label, blank=0, norm_by_times=False)

An operator integrating the open source Warp-CTC library (https://github.com/baidu-research/warp-ctc) to compute Connectionist Temporal Classification (CTC) loss. It can be aliased as softmax with CTC, since a native softmax activation is integrated into the Warp-CTC library to normalize the values for each row of the input tensor.

Parameters: input (Variable) – (LoDTensor, default: LoDTensor), the unscaled probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label). label (Variable) – (LoDTensor, default: LoDTensor), the ground truth of variable-length sequences, which is a 2-D Tensor with LoD information. It is of the shape [Lg, 1], where Lg is the sum of all labels’ lengths. blank (int, default: 0) – the blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1). norm_by_times (bool, default: False) – whether to normalize the gradients by the number of time-steps, which is also the sequence’s length. There is no need to normalize the gradients if the warpctc layer is followed by a mean_op. The Connectionist Temporal Classification (CTC) loss, which is a 2-D Tensor of the shape [batch_size, 1]. Variable

Examples

### sequence_reshape¶

paddle.fluid.layers.sequence_reshape(input, new_dim)

Sequence Reshape Layer

This layer will rearrange the input sequences. The new dimension is set by the user. The length of each sequence is computed from the original length, the original dimension and the new dimension. The following example will help to illustrate the function of this layer:

x is a LoDTensor:
x.lod  = [[0, 2, 6]]
x.data = [[1, 2], [3, 4],
[5, 6], [7, 8], [9, 10], [11, 12]]
x.dims = [6, 2]

set new_dim = 4

then out is a LoDTensor:
out.lod  = [[0, 1, 3]]
out.data = [[1, 2, 3, 4],
[5, 6, 7, 8], [9, 10, 11, 12]]
out.dims = [3, 4]


Currently, only 1-level LoDTensor is supported and please make sure (original length * original dimension) can be divided by new dimension with no remainder for each sequence.
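The reflow above can be sketched in plain Python (the helper name is illustrative; data is shown as nested lists rather than a real LoDTensor):

```python
def sequence_reshape(data, lod, new_dim):
    """Reflow a 1-level LoDTensor's rows into rows of width new_dim."""
    old_dim = len(data[0])
    flat = [v for row in data for v in row]
    out_data = [flat[i:i + new_dim] for i in range(0, len(flat), new_dim)]
    # Each LoD offset scales by old_dim / new_dim (assumed to divide evenly).
    out_lod = [[off * old_dim // new_dim for off in lod[0]]]
    return out_data, out_lod

out_data, out_lod = sequence_reshape(
    [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]], [[0, 2, 6]], 4)
```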

Parameters: input (Variable) – (LodTensor, default: LoDTensor), a 2-D LoDTensor with shape being [N, M] where M for dimension. new_dim (int) – New dimension which the input LoDTensor is reshaped to. Reshaped LoDTensor according to new dimension. Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 20],
dtype='float32', lod_level=1)
x_reshaped = fluid.layers.sequence_reshape(input=x, new_dim=10)


### transpose¶

paddle.fluid.layers.transpose(x, perm, name=None)

transpose Layer

Permute the dimensions of input according to perm.

The i-th dimension of the returned tensor will correspond to the perm[i]-th dimension of input.

Parameters: x (Variable) – (Tensor), a Tensor. perm (list) – A permutation of the dimensions of x. A transposed Tensor. Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 10, 15], dtype='float32')
x_transposed = fluid.layers.transpose(x, perm=[1, 0, 2])


### im2sequence¶

paddle.fluid.layers.im2sequence(input, filter_size=1, stride=1, padding=0, name=None)

Extracts image patches from the input tensor to form a tensor of shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}, which is similar to im2col. This op uses a filter/kernel to scan images and convert them to sequences. After expanding, the number of time steps for an image is output_height * output_width, in which output_height and output_width are calculated by the equation below:

$output\_size = 1 + (2 * padding + img\_size - block\_size + stride - 1) / stride$

And the dimension of each time step is block_y * block_x * input.channels.
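The output-size arithmetic can be checked with a tiny helper (illustrative only); for the worked example below, each 3x3 image with a 2x2 filter and stride 1 yields 2x2 = 4 patches, so two images give 8 output rows:

```python
def output_size(img_size, block_size, stride, padding):
    """Number of patch positions along one spatial axis."""
    return 1 + (2 * padding + img_size - block_size + stride - 1) // stride

oh = output_size(3, 2, 1, 0)  # 2
ow = output_size(3, 2, 1, 0)  # 2
rows = 2 * oh * ow            # 8 rows for a batch of 2 images
```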

Examples:

As an example:

Given:

x = [[[[ 6.  2.  1.]
[ 8.  3.  5.]
[ 0.  2.  6.]]

[[ 2.  4.  4.]
[ 6.  3.  0.]
[ 6.  4.  7.]]]

[[[ 6.  7.  1.]
[ 5.  7.  9.]
[ 2.  4.  8.]]

[[ 1.  2.  1.]
[ 1.  3.  5.]
[ 9.  0.  8.]]]]

x.dims = {2, 2, 3, 3}

And:

filter = [2, 2]
stride = [1, 1]

Then:

output.data = [[ 6.  2.  8.  3.  2.  4.  6.  3.]
[ 2.  1.  3.  5.  4.  4.  3.  0.]
[ 8.  3.  0.  2.  6.  3.  6.  4.]
[ 3.  5.  2.  6.  3.  0.  4.  7.]
[ 6.  7.  5.  7.  1.  2.  1.  3.]
[ 7.  1.  7.  9.  2.  1.  3.  5.]
[ 5.  7.  2.  4.  1.  3.  9.  0.]
[ 7.  9.  4.  8.  3.  5.  0.  8.]]

output.dims = {8, 8}

output.lod = [[0, 4, 8]]


The simple usage is:

output = fluid.layers.im2sequence(
input=layer, stride=[1, 1], filter_size=[2, 2])


### nce¶

paddle.fluid.layers.nce(input, label, num_total_classes, sample_weight=None, param_attr=None, bias_attr=None, num_neg_samples=None)

Compute and return the noise-contrastive estimation training loss. See [Noise-contrastive estimation: A new estimation principle for unnormalized statistical models](http://www.jmlr.org/proceedings/papers/v9/gutmann10a/gutmann10a.pdf). By default this operator uses a uniform distribution for sampling.

Parameters: input – (Tensor) A tensor of shape [batch_size, dim]. Duplicable: False Optional: False label – (Tensor) A tensor of shape [batch_size, num_true_class]. ‘num_true_class’ is the number of target classes in each sample. The number of target classes per sample should be the same; if you have a variable number of target classes, you can pad them out to a constant number by either repeating them or by padding with an otherwise unused class. Duplicable: False Optional: False weight – (Tensor) A tensor of shape [num_class, dim]. ‘num_class’ is the total number of classes. Duplicable: False Optional: False bias – (Tensor) A tensor of shape [num_class, 1]. ‘num_class’ is the total number of classes. It is a dispensable input. Duplicable: False Optional: True sample_weight – (Tensor) A tensor of shape [batch_size, 1] storing a weight for each sample, and it is a dispensable input. The default weight of each sample is 1. Duplicable: False Optional: True num_total_classes (INT) – Total number of classes in all samples. num_neg_samples (INT) – The number of negative classes. The default value is 10. custom_neg_classes (INTS) – This attribute is only used in unit tests. Classes in this list will be used as negative classes for every sample. Under normal conditions, users should avoid setting this attribute. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) A tensor of shape [batch_size, 1]. Cost of samples.

### row_conv¶

paddle.fluid.layers.row_conv(input, future_context_size, param_attr=None, act=None)

Row Conv Operator. This layer will apply lookahead convolution to input. The input variable should be a 2D LoDTensor with shape [T, D]. Parameters with shape [future_context_size + 1, D] will be created. The math equation of row convolution is as follows:

$Out_{i} = \sum_{j = i} ^ {i + \tau} X_{j} \odot W_{i - j}$

In the above equation:

• $Out_{i}$: The i-th row of output variable with shape [1, D].
• $\tau$: Future context size.
• $X_{j}$: The j-th row of input variable with shape [1, D].
• $W_{i-j}$: The (i-j)-th row of parameters with shape [1, D].
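The equation above can be sketched in plain Python (an illustrative helper over nested lists, not the fluid API); rows past the end of the input simply contribute nothing:

```python
def row_conv(x, w):
    """Lookahead convolution: Out[i] = sum_{j=i}^{i+tau} x[j] * w[j-i] (elementwise)."""
    T, D = len(x), len(x[0])
    tau = len(w) - 1  # future context size
    out = [[0.0] * D for _ in range(T)]
    for i in range(T):
        for j in range(i, min(i + tau, T - 1) + 1):
            for d in range(D):
                out[i][d] += x[j][d] * w[j - i][d]
    return out

# T=3, D=1, future_context_size=1 with all-ones weights: each row adds its successor.
res = row_conv([[1.0], [2.0], [3.0]], [[1.0], [1.0]])  # [[3.0], [5.0], [3.0]]
```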

Parameters: input (Variable) – Input variable, a 2D LoDTensor with shape [T, D]. future_context_size (int) – Future context size. Please note, the shape of convolution kernel is [future_context_size + 1, D]. param_attr (ParamAttr) – Attributes of parameters, including name, initializer etc. act (str) – Non-linear activation to be applied to output variable. The output tensor with same shape as input tensor. Variable

Examples

x = fluid.layers.data(name='x', shape=[16],
dtype='float32', lod_level=1)
out = fluid.layers.row_conv(input=x, future_context_size=2)


### multiplex¶

paddle.fluid.layers.multiplex(inputs, index)

Multiplex Layer

Referring to the given index variable, this layer selects rows from the input variables to construct a multiplex variable. Assume there are $m$ input variables, with $I_i$ representing the i-th input variable, where $i$ is in [0, $m$). All input variables are tensors with the same shape [$d_0$, $d_1$, ..., $d_R$]. Please note that the rank of each input tensor should be at least 2. Each input variable will be treated as a 2-D matrix with shape [$M$, $N$], where $M$ equals $d_0$ and $N$ equals $d_1$ * $d_2$ * ... * $d_R$. Let $I_i[j]$ be the j-th row of the i-th input variable. The given index variable should be a 2-D tensor with shape [$M$, 1]. Let $ID[i]$ be the i-th index value of the index variable. Then the output variable will be a tensor with shape [$d_0$, $d_1$, ..., $d_R$]. If we treat the output tensor as a 2-D matrix with shape [$M$, $N$] and let $O[i]$ be the i-th row of the matrix, then $O[i]$ is equal to $I_{ID[i]}[i]$.
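The row-selection rule $O[i] = I_{ID[i]}[i]$ is a one-liner in plain Python (illustrative, with inputs as nested lists):

```python
def multiplex(inputs, index):
    """Row i of the output comes from row i of inputs[index[i]]."""
    return [inputs[idx[0]][i] for i, idx in enumerate(index)]

inputs = [[[1, 2], [3, 4]],   # I_0
          [[5, 6], [7, 8]]]   # I_1
out = multiplex(inputs, [[1], [0]])  # [[5, 6], [3, 4]]
```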

Parameters: inputs (list) – A list of variables to gather from. All variables have the same shape and the rank is at least 2. index (Variable) – Tensor, index variable which is a 2-D tensor with shape [M, 1] where M is the batch size. Multiplex variable gathered from input variables. Variable

Examples

x1 = fluid.layers.data(name='x1', shape=[4], dtype='float32')
x2 = fluid.layers.data(name='x2', shape=[4], dtype='float32')
index = fluid.layers.data(name='index', shape=[1], dtype='int32')
out = fluid.layers.multiplex(inputs=[x1, x2], index=index)


### layer_norm¶

paddle.fluid.layers.layer_norm(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None)

Layer Normalization

Assume feature vectors exist on dimensions begin_norm_axis ... rank(input) and calculate the moment statistics along these dimensions for each feature vector $a$ with size $H$, then normalize each feature vector using the corresponding statistics. After that, apply learnable gain and bias on the normalized tensor to scale and shift if scale and shift are set.

Refer to Layer Normalization

The formula is as follows:

$\mu = \frac{1}{H}\sum_{i=1}^{H} a_i$

$\sigma = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(a_i - \mu)^2}$

$h = f(\frac{g}{\sigma}(a - \mu) + b)$
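For a single feature vector, the statistics and normalization can be sketched in plain Python (illustrative helper; here epsilon is assumed to be added to the variance, and scalar gain/bias stand in for the learnable per-element parameters):

```python
import math

def layer_norm_1d(a, g=1.0, b=0.0, epsilon=1e-5):
    """Normalize one feature vector by its own mean and std, then scale and shift."""
    H = len(a)
    mu = sum(a) / H
    sigma = math.sqrt(sum((v - mu) ** 2 for v in a) / H + epsilon)
    return [g * (v - mu) / sigma + b for v in a]

h = layer_norm_1d([1.0, 2.0, 3.0], epsilon=0.0)  # zero mean, unit variance
```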
Parameters: input (Variable) – The input tensor variable. scale (bool) – Whether to learn the adaptive gain $g$ after normalization. shift (bool) – Whether to learn the adaptive bias $b$ after normalization. begin_norm_axis (int) – The normalization will be performed along dimensions from begin_norm_axis to rank(input). epsilon (float) – The small value added to the variance to prevent division by zero. param_attr (ParamAttr|None) – The parameter attribute for the learnable gain $g$. bias_attr (ParamAttr|None) – The parameter attribute for the learnable bias $b$. act (str) – Activation to be applied to the output of layer normalization. A tensor variable with the same shape as the input. Variable

Examples

data = fluid.layers.data(
name='data', shape=[3, 32, 32], dtype='float32')
x = fluid.layers.layer_norm(input=data, begin_norm_axis=1)


### softmax_with_cross_entropy¶

paddle.fluid.layers.softmax_with_cross_entropy(logits, label, soft_label=False)

Softmax With Cross Entropy Operator.

Cross entropy loss with softmax is used as the output layer extensively. This operator computes the softmax normalized values for each row of the input tensor, after which cross-entropy loss is computed. This provides a more numerically stable gradient.

Because this operator performs a softmax on logits internally, it expects unscaled logits. This operator should not be used with the output of softmax operator since that would produce incorrect results.

When the attribute soft_label is set to false, this operator expects mutually exclusive hard labels: each sample in a batch is in exactly one class with a probability of 1.0, and each sample in the batch will have a single label.

The equation is as follows:

1. Hard label (one-hot label, so every sample has exactly one class)
$loss_j = -\text{logit}_{label_j} + \log\left(\sum_{i=0}^{K}\exp(\text{logit}_i)\right), j = 1,..., K$
1. Soft label (each sample can have a distribution over all classes)
$loss_j = -\sum_{i=0}^{K}\text{label}_i \left(\text{logit}_i - \log\left(\sum_{i=0}^{K} \exp(\text{logit}_i)\right)\right), j = 1,...,K$
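The hard-label formula can be evaluated directly in plain Python (an illustrative sketch for one sample, using the standard max-shift for numerical stability):

```python
import math

def softmax_cross_entropy_hard(logits, label):
    """loss = -logit[label] + log(sum_i exp(logit_i)), computed stably."""
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(v - m) for v in logits))
    return -logits[label] + log_sum_exp

# Two equal logits: the model assigns probability 1/2, so the loss is log(2).
loss = softmax_cross_entropy_hard([0.0, 0.0], 0)
```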
Parameters: logits (Variable) – The unscaled log probabilities, which is a 2-D tensor with shape [N x K]. N is the batch_size, and K is the class number. label (Variable) – The ground truth which is a 2-D tensor. If soft_label is set to false, Label is a Tensor with shape [N x 1]. If soft_label is set to true, Label is a Tensor with shape [N x K], where each row is a probability distribution over the K classes. soft_label (bool) – A flag to indicate whether to interpret the given labels as soft labels. By default, soft_label is set to False. The cross entropy loss is a 2-D tensor with shape [N x 1]. Variable

Examples

data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.softmax_with_cross_entropy(
logits=fc, label=label)


### smooth_l1¶

paddle.fluid.layers.smooth_l1(x, y, inside_weight=None, outside_weight=None, sigma=None)

Smooth L1 Loss Operator.

This operator computes the smooth L1 loss for X and Y. The operator takes the first dimension of X and Y as batch size. For each instance, it computes the smooth L1 loss element by element first and then sums all the losses. So the shape of Out is [batch_size, 1].
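A plain-Python sketch of the per-instance computation (illustrative only; it assumes the common smooth L1 definition, quadratic for small differences and linear otherwise, with the stated default sigma = 1.0):

```python
def smooth_l1(x_row, y_row, sigma=1.0):
    """Elementwise smooth L1 between one instance of x and y, summed over the row."""
    s2 = sigma * sigma
    total = 0.0
    for xv, yv in zip(x_row, y_row):
        d = abs(xv - yv)
        # Quadratic near zero, linear in the tails.
        total += 0.5 * s2 * d * d if d < 1.0 / s2 else d - 0.5 / s2
    return total

small = smooth_l1([0.5], [0.0])  # 0.5 * 0.5^2 = 0.125
large = smooth_l1([2.0], [0.0])  # 2.0 - 0.5 = 1.5
```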

Parameters: x (Variable) – A tensor with rank at least 2. The input value of smooth L1 loss op with shape [batch_size, dim1, ..., dimN]. y (Variable) – A tensor with rank at least 2. The target value of smooth L1 loss op with same shape as x. inside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have same shape with x. If provided, the result of (x - y) will be multiplied by this tensor element by element. outside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have same shape with x. If provided, the out smooth L1 loss will be multiplied by this tensor element by element. sigma (float|None) – Hyper parameter of smooth L1 loss op. A float scalar with default value 1.0. A tensor with rank be 2. The output smooth L1 loss with shape [batch_size, 1]. Variable

Examples

data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(
name='label', shape=[100], dtype='float32')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.smooth_l1(x=fc, y=label)


### one_hot¶

paddle.fluid.layers.one_hot(input, depth)

One Hot Operator. This operator creates the one-hot representations for input index values. The following example will help to explain the function of this operator.

Parameters: input (Variable) – A Tensor/LoDTensor of indices, whose last dimension must be 1. depth (scalar) – An integer defining the depth of the one-hot dimension. The one-hot tensor or LoDTensor, same as the input.

Examples



X is a LoDTensor:

X.lod   = [[0, 1, 4]]
X.shape = [4, 1]
X.data  = [[1], [1], [3], [0]]

set depth = 4

then Out is a LoDTensor:

Out.lod   = [[0, 1, 4]]
Out.shape = [4, 4]
Out.data  = [[0., 1., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 0., 1.],
[1., 0., 0., 0.]]
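The expansion can be sketched in plain Python and checked against the example above (illustrative helper over nested lists):

```python
def one_hot(indices, depth):
    """indices has shape [N, 1]; output has shape [N, depth]."""
    return [[1.0 if j == row[0] else 0.0 for j in range(depth)]
            for row in indices]

out = one_hot([[1], [1], [3], [0]], 4)
```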

### autoincreased_step_counter¶

paddle.fluid.layers.autoincreased_step_counter(counter_name=None, begin=1, step=1)

NOTE: The counter will be automatically increased by 1 every mini-batch. Return the run counter of the main program, which starts from 1.

Parameters: counter_name (str) – The counter name, default is ‘@STEP_COUNTER@’. begin (int) – The first value of this counter. step (int) – The increment step between each execution.

Returns(Variable): The global run counter.

### reshape¶

paddle.fluid.layers.reshape(x, shape, actual_shape=None, act=None, inplace=True, name=None)

Gives a new shape to the input Tensor without changing its data.

The target shape can be given by shape or actual_shape. shape is a list of integers, while actual_shape is a tensor variable. actual_shape has a higher priority than shape if it is provided, while shape still should be set correctly to guarantee shape inference at compile time.

Some tricks exist when specifying the target shape.

1. -1 means the value of this dimension is inferred from the total element number of x and remaining dimensions. Thus one and only one dimension can be set -1.

2. 0 means the actual dimension value is going to be copied from the corresponding dimension of x. The indices of 0s in shape can not exceed Rank(x).

Here are some examples to explain it.

1. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [6, 8], the reshape operator will transform x into a 2-D tensor with shape [6, 8] and leaving x’s data unchanged.

2. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape specified is [2, 3, -1, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 3, 4, 2] and leaving x’s data unchanged. In this case, one dimension of the target shape is set to -1, the value of this dimension is inferred from the total element number of x and remaining dimensions.

3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 4, 3, 2] and leaving x’s data unchanged. In this case, besides -1, 0 means the actual dimension value is going to be copied from the corresponding dimension of x.
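The three examples above follow from a small shape-inference rule, sketched here in plain Python (the helper name is illustrative):

```python
from functools import reduce

def infer_reshape(x_shape, target):
    """Resolve 0 (copy the corresponding dim) and -1 (infer) in a target shape."""
    out = [x_shape[i] if d == 0 else d for i, d in enumerate(target)]
    total = reduce(lambda a, b: a * b, x_shape, 1)
    if -1 in out:
        known = reduce(lambda a, b: a * b, (d for d in out if d != -1), 1)
        out[out.index(-1)] = total // known
    return out

shape = infer_reshape([2, 4, 6], [-1, 0, 3, 2])  # [2, 4, 3, 2]
```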

Parameters: x (Variable) – The input tensor. shape (list) – The new shape. At most one dimension of the new shape can be -1. actual_shape (Variable) – An optional input. If provided, reshape according to this given shape rather than the shape specified in shape. That is to say, actual_shape has a higher priority than shape. act (str) – The non-linear activation to be applied to the output variable. inplace (bool) – If this flag is set True, the output shares data with the input without copying; otherwise a new output tensor is created whose data is copied from the input x.

Returns(Variable): The output tensor.

Examples

data = fluid.layers.data(
name='data', shape=[2, 4, 6], dtype='float32')
reshaped = fluid.layers.reshape(
x=data, shape=[-1, 0, 3, 2], act='tanh', inplace=True)


### lod_reset¶

paddle.fluid.layers.lod_reset(x, y=None, target_lod=None)

LoD Reset Operator. Set the LoD of x to a new one specified by y or target_lod. When y is provided, y.lod is considered as the target LoD first; otherwise y.data is considered as the target LoD. If y is not provided, the target LoD should be specified by target_lod. If the target LoD is specified by y.data or target_lod, only one-level LoD is supported.

* Example 1:

Given a 1-level LoDTensor x:
x.lod =  [[0, 2, 5, 6]]
x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
x.dims = [6, 1]

target_lod: [0, 4, 6]

then we get a 1-level LoDTensor:
out.lod =  [[0, 4, 6]]
out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
out.dims = [6, 1]

* Example 2:

Given a 1-level LoDTensor x:
x.lod =  [[0, 2, 5, 6]]
x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
x.dims = [6, 1]

y is a Tensor:
y.data = [[0, 2, 6]]
y.dims = [1, 3]

then we get a 1-level LoDTensor:
out.lod =  [[0, 2, 6]]
out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
out.dims = [6, 1]

* Example 3:

Given a 1-level LoDTensor x:
x.lod =  [[0, 2, 5, 6]]
x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
x.dims = [6, 1]

y is a 2-level LoDTensor:
y.lod =  [[0, 2, 4], [0, 2, 5, 6]]
y.data = [[1.1], [2.1], [3.1], [4.1], [5.1], [6.1]]
y.dims = [6, 1]

then we get a 2-level LoDTensor:
out.lod =  [[0, 2, 4], [0, 2, 5, 6]]
out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
out.dims = [6, 1]

Parameters: x (Variable) – Input variable which could be a Tensor or LoDTensor. y (Variable|None) – If provided, output’s LoD would be derived from y. target_lod (list|tuple|None) – One-level LoD which should be considered as the target LoD when y is not provided. Output variable with LoD specified by this operator. Variable Raises: ValueError – If y and target_lod are both None.

Examples

x = layers.data(name='x', shape=[10])
y = layers.data(name='y', shape=[10, 20], lod_level=2)
out = layers.lod_reset(x=x, y=y)


### lrn¶

paddle.fluid.layers.lrn(input, n=5, k=1.0, alpha=0.0001, beta=0.75, name=None)

Local Response Normalization Layer. This layer performs a type of “lateral inhibition” by normalizing over local input regions.

The formula is as follows:

$Output(i, x, y) = Input(i, x, y) / \left( k + \alpha \sum\limits^{\min(C, c + n/2)}_{j = \max(0, c - n/2)} (Input(j, x, y))^2 \right)^{\beta}$

In the above equation:

• $n$: The number of channels to sum over.
• $k$: The offset (avoid being divided by 0).
• $\alpha$: The scaling parameter.
• $\beta$: The exponent parameter.
Parameters: input (Variable) – The input tensor of this layer; the rank of the input tensor must be 4. n (int, default 5) – The number of channels to sum over. k (float, default 1.0) – An offset (usually positive to avoid dividing by 0). alpha (float, default 1e-4) – The scaling parameter. beta (float, default 0.75) – The exponent. name (str, default None) – A name for this operation. A tensor variable storing the transformation result. Variable ValueError – If the rank of the input tensor is not 4.

Examples

data = fluid.layers.data(
name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)
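For a single spatial position, the cross-channel normalization above reduces to a short plain-Python sketch (illustrative helper; x is the list of channel values at one (h, w) position):

```python
def lrn_1x1(x, n=5, k=1.0, alpha=1e-4, beta=0.75):
    """LRN across channels for one spatial position."""
    C = len(x)
    out = []
    for c in range(C):
        # Window of up to n channels centered on c, clipped to [0, C-1].
        lo, hi = max(0, c - n // 2), min(C - 1, c + n // 2)
        s = sum(x[j] ** 2 for j in range(lo, hi + 1))
        out.append(x[c] / (k + alpha * s) ** beta)
    return out

y = lrn_1x1([2.0], n=1, k=1.0, alpha=1.0, beta=1.0)  # [2.0 / (1 + 4)] = [0.4]
```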


### pad¶

paddle.fluid.layers.pad(x, paddings, pad_value=0.0, name=None)

Pads a tensor with a constant value given by pad_value, and the padded width is specified by paddings.

Specifically, the number of values padded before the contents of x in dimension i is indicated by paddings[2*i], and the number of values padded after the contents of x in dimension i is indicated by paddings[2*i+1].

See below for an example.

Given:
x = [[1, 2], [3, 4]]

paddings = [0, 1, 1, 2]

Return:

out = [[0, 1, 2, 0, 0]
[0, 3, 4, 0, 0]
[0, 0, 0, 0, 0]]
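The rank-2 case above can be reproduced in plain Python (an illustrative helper, not the fluid API):

```python
def pad2d(x, paddings, pad_value=0.0):
    """paddings = [before_0, after_0, before_1, after_1] for a rank-2 input."""
    b0, a0, b1, a1 = paddings
    width = b1 + len(x[0]) + a1
    rows = [[pad_value] * b1 + list(row) + [pad_value] * a1 for row in x]
    blank = [pad_value] * width
    return ([list(blank) for _ in range(b0)] + rows +
            [list(blank) for _ in range(a0)])

out = pad2d([[1, 2], [3, 4]], [0, 1, 1, 2])  # the 3x5 result shown above
```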

Parameters: x (Variable) – The input tensor variable. paddings (list) – A list of integers. Its elements specify the padded width before and after for each dimension in turn. The length of :attr:paddings must be $rank(x) \times 2$. pad_value (float) – The constant value used to pad. name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. The padded tensor variable. Variable

Examples

# x is a rank 2 tensor variable.
out = fluid.layers.pad(x=x, paddings=[0, 1, 1, 2], pad_value=0.)


### label_smooth¶

paddle.fluid.layers.label_smooth(label, prior_dist=None, epsilon=0.1, dtype='float32', name=None)

Label smoothing is a mechanism to regularize the classifier layer and is called label-smoothing regularization (LSR).

Label smoothing is proposed to encourage the model to be less confident, since optimizing the log-likelihood of the correct label directly may cause overfitting and reduce the ability of the model to adapt. Label smoothing replaces the ground-truth label $y$ with the weighted sum of itself and some fixed distribution $\mu$. For class $k$, i.e.

$\tilde{y_k} = (1 - \epsilon) * y_k + \epsilon * \mu_k,$

where $1 - \epsilon$ and $\epsilon$ are the weights respectively, and $\tilde{y}_k$ is the smoothed label. Usually uniform distribution is used for $\mu$.
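The weighted sum is a one-line computation; a plain-Python sketch with the default uniform $\mu$ (illustrative helper over a single one-hot row):

```python
def label_smooth(y, epsilon=0.1, mu=None):
    """y_smooth = (1 - epsilon) * y + epsilon * mu; mu defaults to uniform."""
    K = len(y)
    if mu is None:
        mu = [1.0 / K] * K
    return [(1.0 - epsilon) * yk + epsilon * mk for yk, mk in zip(y, mu)]

smoothed = label_smooth([1.0, 0.0], epsilon=0.1)  # roughly [0.95, 0.05]
```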

See more details about label smoothing in https://arxiv.org/abs/1512.00567.

Parameters: label (Variable) – The input variable containing the label data. The label data should use one-hot representation. prior_dist (Variable) – The prior distribution to be used to smooth labels. If not provided, a uniform distribution is used. The shape of prior_dist should be $(1, class\_num)$. epsilon (float) – The weight used to mix up the original ground-truth distribution and the fixed distribution. dtype (np.dtype|core.VarDesc.VarType|str) – The type of data: float32, float64, int etc. name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically. The tensor variable containing the smoothed labels. Variable

Examples

label = layers.data(name="label", shape=[1], dtype="float32")
one_hot_label = layers.one_hot(input=label, depth=10)
smooth_label = layers.label_smooth(
label=one_hot_label, epsilon=0.1, dtype="float32")


### roi_pool¶

paddle.fluid.layers.roi_pool(input, rois, pooled_height=1, pooled_width=1, spatial_scale=1.0)

Region of interest pooling (also known as RoI pooling) performs max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).

The operator has three steps:

1. Dividing each region proposal into equal-sized sections with the pooled_width and pooled_height
2. Finding the largest value in each section
3. Copying these max values to the output buffer

Parameters: input (Variable) – The input for ROI pooling. rois (Variable) – ROIs (Regions of Interest) to pool over. It should be a 2-D one-level LoDTensor of shape [num_rois, 4]. The layout is [x1, y1, x2, y2], where (x1, y1) is the top-left coordinate and (x2, y2) is the bottom-right coordinate. num_rois is the total number of ROIs in this batch of data. pooled_height (integer) – The pooled output height. Default: 1 pooled_width (integer) – The pooled output width. Default: 1 spatial_scale (float) – Multiplicative spatial scale factor used to translate ROI coordinates from their input scale to the scale used when pooling. Default: 1.0 The output is a 4-D tensor of shape (num_rois, channels, pooled_h, pooled_w). pool_out (Variable)

Examples

pool_out = fluid.layers.roi_pool(input=x, rois=rois, pooled_height=7, pooled_width=7, spatial_scale=1.0)
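
The three steps can be traced with a plain-Python sketch on a single-channel 2-D feature map (an illustration of the pooling arithmetic, not the fluid kernel; the integer section boundaries here are one plausible rounding choice):

```python
def roi_max_pool(feature, roi, pooled_h, pooled_w, spatial_scale=1.0):
    """Max-pool one RoI of a 2-D feature map into a pooled_h x pooled_w grid."""
    x1, y1, x2, y2 = [int(round(c * spatial_scale)) for c in roi]
    roi_h = max(y2 - y1 + 1, 1)
    roi_w = max(x2 - x1 + 1, 1)
    out = [[0.0] * pooled_w for _ in range(pooled_h)]
    for ph in range(pooled_h):
        for pw in range(pooled_w):
            # Step 1: divide the RoI into roughly equal-sized sections.
            hs = y1 + ph * roi_h // pooled_h
            he = y1 + (ph + 1) * roi_h // pooled_h
            ws = x1 + pw * roi_w // pooled_w
            we = x1 + (pw + 1) * roi_w // pooled_w
            # Steps 2-3: take the max of each section and copy it to the output.
            cells = [feature[i][j] for i in range(hs, max(he, hs + 1))
                                   for j in range(ws, max(we, ws + 1))]
            out[ph][pw] = max(cells)
    return out

# A 4x4 feature map holding 0..15, pooled over the whole map into a 2x2 grid.
feature = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
pooled = roi_max_pool(feature, roi=(0, 0, 3, 3), pooled_h=2, pooled_w=2)
```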


## ops¶

### mean¶

paddle.fluid.layers.mean(*args, **kwargs)

Mean Operator.

Out is a scalar which is the mean of all elements in X.

Parameters: x – The input of mean op Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of mean op

### mul¶

paddle.fluid.layers.mul(*args, **kwargs)

Mul Operator.

This operator is used to perform matrix multiplication for input $X$ and $Y$.

The equation is:

$$Out = X * Y$$

Both the input $X$ and $Y$ can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of mul op. Duplicable: False Optional: False y – (Tensor), The second input tensor of mul op. Duplicable: False Optional: False x_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input $X$ is a tensor with more than two dimensions, $X$ will be flattened into a two-dimensional matrix first. The flattening rule is: the first num_col_dims dimensions will be flattened to form the first dimension of the final matrix (the height of the matrix), and the rest rank(X) - num_col_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). As a result, the height of the flattened matrix is equal to the product of $X$’s first x_num_col_dims dimensions’ sizes, and the width of the flattened matrix is equal to the product of $X$’s last rank(x) - num_col_dims dimensions’ sizes. For example, suppose $X$ is a 5-dimensional tensor with the shape [2, 3, 4, 5, 6], and x_num_col_dims = 3. Then the flattened matrix will have the shape [2 x 3 x 4, 5 x 6] = [24, 30]. y_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input $Y$ is a tensor with more than two dimensions, $Y$ will be flattened into a two-dimensional matrix first. The attribute y_num_col_dims determines how $Y$ is flattened. See the comments of x_num_col_dims for more details. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor), The output tensor of mul op.
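
The flattening rule reduces to two products over the shape; a small helper makes the worked example easy to verify (plain Python, not part of the fluid API):

```python
from functools import reduce
from operator import mul as _mul

def flattened_shape(shape, num_col_dims):
    """Shape of the 2-D matrix mul_op sees after flattening an n-D tensor."""
    height = reduce(_mul, shape[:num_col_dims], 1)  # product of leading dims
    width = reduce(_mul, shape[num_col_dims:], 1)   # product of trailing dims
    return (height, width)

# The example from the docstring: shape [2, 3, 4, 5, 6] with x_num_col_dims=3
# flattens to [2 x 3 x 4, 5 x 6] = [24, 30].
assert flattened_shape([2, 3, 4, 5, 6], 3) == (24, 30)
```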

### scale¶

paddle.fluid.layers.scale(*args, **kwargs)

Scale operator

$$Out = scale*X$$

Parameters: x – (Tensor) Input tensor of scale operator. Duplicable: False Optional: False scale (FLOAT) – (float, default 1.0)The scaling factor of the scale operator. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) Output tensor of scale operator.

### sigmoid_cross_entropy_with_logits¶

paddle.fluid.layers.sigmoid_cross_entropy_with_logits(*args, **kwargs)

SigmoidCrossEntropyWithLogits Operator.

This measures the element-wise probability error in classification tasks in which each class is independent. This can be thought of as predicting labels for a data-point, where labels are not mutually exclusive. For example, a news article can be about politics, technology or sports at the same time or none of these.

The logistic loss is given as follows:

$$loss = -Labels * \log(\sigma(X)) - (1 - Labels) * \log(1 - \sigma(X))$$

We know that $\sigma(X) = \frac{1}{1 + e^{-X}}$. By substituting this we get:

$$loss = X - X * Labels + \log(1 + e^{-X})$$

For stability and to prevent overflow of $e^{-X}$ when X < 0, we reformulate the loss as follows:

$$loss = \max(X, 0) - X * Labels + \log(1 + e^{-|X|})$$

Both the input X and Labels can carry the LoD (Level of Details) information. However the output only shares the LoD with input X.

Parameters: x – (Tensor, default Tensor), a 2-D tensor with shape N x D, where N is the batch size and D is the number of classes. This input is a tensor of logits computed by the previous operator. Logits are unscaled log probabilities given as log(p/(1-p)). Duplicable: False Optional: False label – (Tensor, default Tensor), a 2-D tensor of the same type and shape as X. This input is a tensor of probabilistic labels for each logit. Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor, default Tensor), a 2-D tensor with shape N x D of elementwise logistic losses.
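
The equivalence of the naive and the numerically stable forms can be checked in plain Python (a sketch of the math, not the operator itself):

```python
import math

def sigmoid_ce_with_logits(x, label):
    """Stable form: max(x, 0) - x*label + log(1 + exp(-|x|))."""
    return max(x, 0.0) - x * label + math.log1p(math.exp(-abs(x)))

def naive_loss(x, label):
    """Direct form: -label*log(sigma(x)) - (1-label)*log(1-sigma(x))."""
    s = 1.0 / (1.0 + math.exp(-x))
    return -label * math.log(s) - (1.0 - label) * math.log(1.0 - s)

# Both forms agree where the naive one is well-behaved...
assert abs(sigmoid_ce_with_logits(2.0, 1.0) - naive_loss(2.0, 1.0)) < 1e-9
# ...but only the stable form survives a very large negative logit, where the
# naive form would overflow in exp(-x).
loss = sigmoid_ce_with_logits(-1000.0, 0.0)
```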

### elementwise_add¶

paddle.fluid.layers.elementwise_add(*args, **kwargs)

Limited Elementwise Add Operator.

The equation is:

$$Out = X + Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.
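
The axis resolution and the trailing-1 subsequence check described above can be sketched as follows (plain-Python illustration; `broadcast_axis` is a hypothetical helper, not a fluid function):

```python
def broadcast_axis(x_shape, y_shape, axis=-1):
    """Resolve axis=-1 to rank(X) - rank(Y) and check Y fits into X there."""
    if axis == -1:
        axis = len(x_shape) - len(y_shape)
    # Trailing dimensions of size 1 in Y are ignored for the subsequence check.
    trimmed = list(y_shape)
    while trimmed and trimmed[-1] == 1:
        trimmed.pop()
    assert x_shape[axis:axis + len(trimmed)] == tuple(trimmed)
    return axis

# The cases listed above:
assert broadcast_axis((2, 3, 4, 5), (5,)) == 3          # axis inferred as 3
assert broadcast_axis((2, 3, 4, 5), (4, 5)) == 2        # axis inferred as 2
assert broadcast_axis((2, 3, 4, 5), (3, 4), axis=1) == 1
assert broadcast_axis((2, 3, 4, 5), (2, 1), axis=0) == 0  # trailing 1 ignored
```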

### elementwise_div¶

paddle.fluid.layers.elementwise_div(*args, **kwargs)

Limited Elementwise Div Operator.

The equation is:

$$Out = X / Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.

### elementwise_sub¶

paddle.fluid.layers.elementwise_sub(*args, **kwargs)

Limited Elementwise Sub Operator.

The equation is:

$$Out = X - Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.

### elementwise_mul¶

paddle.fluid.layers.elementwise_mul(*args, **kwargs)

Limited Elementwise Mul Operator.

The equation is:

$$Out = X \odot Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.

### elementwise_max¶

paddle.fluid.layers.elementwise_max(*args, **kwargs)

Limited Elementwise Max Operator.

The equation is:

$$Out = max(X, Y)$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.

### elementwise_min¶

paddle.fluid.layers.elementwise_min(*args, **kwargs)

Limited Elementwise Min Operator.

The equation is:

$$Out = min(X, Y)$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.

### elementwise_pow¶

paddle.fluid.layers.elementwise_pow(*args, **kwargs)

Limited Elementwise Pow Operator.

The equation is:

$$Out = X ^ Y$$

$X$ is a tensor of any dimension and the dimensions of tensor $Y$ must be smaller than or equal to the dimensions of $X$.

There are two cases for this operator:

1. The shape of $Y$ is the same as that of $X$;
2. The shape of $Y$ is a contiguous subsequence of $X$; trailing dimensions of size 1 in $Y$ are ignored for the subsequence check.

For case 2:

$Y$ will be broadcasted to match the shape of $X$ and axis should be set to index of the start dimension to broadcast $Y$ onto $X$.

If axis is -1, it is treated as axis=rank(X)-rank(Y).

For example
shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5)
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0


Either, both, or neither of the inputs $X$ and $Y$ can carry the LoD (Level of Details) information. However, the output only shares the LoD information with input $X$.

Parameters: x – (Tensor), The first input tensor of elementwise op. Duplicable: False Optional: False y – (Tensor), The second input tensor of elementwise op. Duplicable: False Optional: False axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of elementwise op.

### clip¶

paddle.fluid.layers.clip(*args, **kwargs)

Clip Operator.

The clip operator limits the value of given input within an interval. The interval is specified with arguments ‘min’ and ‘max’:

$$Out = \min(\max(X, min), max)$$

Parameters: x – (Tensor) The input of clip op. The number of dimensions must be between [1, 9]. Duplicable: False Optional: False min (FLOAT) – (float) Minimum value, under which an element is replaced by min. max (FLOAT) – (float) Maximum value, above which an element is replaced by max. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) The output of clip op with the same shape as input(X)
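
The clipping formula is a one-liner in plain Python (illustrative scalar sketch, applied elementwise by the operator):

```python
def clip(x, lo, hi):
    """Limit x to the interval [lo, hi]: Out = min(max(x, lo), hi)."""
    return min(max(x, lo), hi)

# Values below the interval snap to lo, above it to hi, inside pass through.
assert [clip(v, -1.0, 1.0) for v in (-5.0, 0.3, 2.0)] == [-1.0, 0.3, 1.0]
```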

### clip_by_norm¶

paddle.fluid.layers.clip_by_norm(*args, **kwargs)

ClipByNorm Operator.

This operator limits the L2 norm of the input $X$ within $max_norm$. If the L2 norm of $X$ is less than or equal to $max_norm$, $Out$ will be the same as $X$. If the L2 norm of $X$ is greater than $max_norm$, $X$ will be linearly scaled to make the L2 norm of $Out$ equal to $max_norm$, as shown in the following formula:

$$Out = \frac{max\_norm * X}{norm(X)},$$

where $norm(X)$ represents the L2 norm of $X$.

Parameters: x – (Tensor) The input of clip_by_norm op. The number of dimensions must be between [1, 9]. Duplicable: False Optional: False max_norm (FLOAT) – (float) The maximum norm value. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) The output of clip_by_norm op with the same shape as input(X)
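
A plain-Python sketch of the norm-clipping formula (illustrative, not the fluid op):

```python
import math

def clip_by_norm(xs, max_norm):
    """Scale xs so its L2 norm does not exceed max_norm."""
    norm = math.sqrt(sum(v * v for v in xs))
    if norm <= max_norm:
        return list(xs)          # already within the limit: pass through
    return [max_norm * v / norm for v in xs]  # Out = max_norm * X / norm(X)

clipped = clip_by_norm([3.0, 4.0], max_norm=1.0)  # norm(X) = 5, so scale by 1/5
```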

### logical_and¶

paddle.fluid.layers.logical_and(*args, **kwargs)

logical_and Operator

It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X && Y$$

Parameters: x – (LoDTensor) Left hand operand of logical_and operator Duplicable: False Optional: False y – (LoDTensor) Right hand operand of logical_and operator Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (LoDTensor) n-dim bool tensor. Each element is $$Out = X && Y$$

### logical_or¶

paddle.fluid.layers.logical_or(*args, **kwargs)

logical_or Operator

It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X || Y$$

Parameters: x – (LoDTensor) Left hand operand of logical_or operator Duplicable: False Optional: False y – (LoDTensor) Right hand operand of logical_or operator Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (LoDTensor) n-dim bool tensor. Each element is $$Out = X || Y$$

### logical_xor¶

paddle.fluid.layers.logical_xor(*args, **kwargs)

logical_xor Operator

It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = (X || Y) && !(X && Y)$$

Parameters: x – (LoDTensor) Left hand operand of logical_xor operator Duplicable: False Optional: False y – (LoDTensor) Right hand operand of logical_xor operator Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (LoDTensor) n-dim bool tensor. Each element is $$Out = (X || Y) && !(X && Y)$$
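
The formula (X || Y) && !(X && Y) is exactly exclusive-or; a quick truth-table check in plain Python:

```python
import itertools

# Verify over all four input combinations that the formula equals xor (x != y).
for x, y in itertools.product([False, True], repeat=2):
    assert ((x or y) and not (x and y)) == (x != y)
```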

### logical_not¶

paddle.fluid.layers.logical_not(*args, **kwargs)

logical_not Operator

It operates element-wise on X, and returns the Out. X and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = !X$$

Parameters: x – (LoDTensor) Operand of logical_not operator Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (LoDTensor) n-dim bool tensor. Each element is $$Out = !X$$

### uniform_random¶

paddle.fluid.layers.uniform_random(*args, **kwargs)

Uniform random operator.

This operator initializes a tensor with random values sampled from a uniform distribution.

Parameters: shape (INTS) – (vector) The shape of the output tensor min (FLOAT) – (float, default -1.0) Minimum value of uniform random max (FLOAT) – (float, default 1.0) Maximum value of uniform random seed (INT) – (int, default 0) Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time. dtype (INT) – (int, default 5(FP32)) Output tensor data type op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) The output tensor of uniform random op

### uniform_random_batch_size_like¶

paddle.fluid.layers.uniform_random_batch_size_like(*args, **kwargs)

Uniform random operator

This operator initializes a tensor with the same batch_size as the input tensor, with random values sampled from a uniform distribution.

Parameters: input – (Tensor) Tensor whose input_dim_idx’th dimension specifies the batch_size Duplicable: False Optional: False shape (INTS) – (vector) The shape of the output input_dim_idx (INT) – (int, default 0) The index of input’s batch size dimension output_dim_idx (INT) – (int, default 0) The index of output’s batch size dimension min (FLOAT) – (float, default -1.0) Minimum value of uniform random max (FLOAT) – (float, default 1.0) Maximum value of uniform random seed (INT) – (int, default 0) Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time. dtype (INT) – (int, default 5(FP32)) Output tensor data type op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) Tensor of specified shape will be filled with the specified value

### gaussian_random¶

paddle.fluid.layers.gaussian_random(*args, **kwargs)

GaussianRandom Operator.

Used to initialize tensors with gaussian random generator.

Parameters: shape (INTS) – (vector) The dimension of random tensor. mean (FLOAT) – (float, default 0.0) mean of random tensor. std (FLOAT) – (float, default 1.0) std of random tensor. seed (INT) – (int, default 0) Random seed of generator. 0 means use a system-wide seed. Note that if seed is not 0, this operator will always generate the same random numbers every time. dtype (INT) – (int, default 5(FP32)) Output data type. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output matrix of gaussian random op

### gaussian_random_batch_size_like¶

paddle.fluid.layers.gaussian_random_batch_size_like(*args, **kwargs)

GaussianRandom Operator.

Used to initialize tensors with gaussian random generator.

Parameters: input – (Tensor) Tensor whose input_dim_idx’th dimension specifies the batch_size Duplicable: False Optional: False shape (INTS) – (vector) The shape of the output input_dim_idx (INT) – (int, default 0) The index of input’s batch size dimension output_dim_idx (INT) – (int, default 0) The index of output’s batch size dimension mean (FLOAT) – (float, default 0.0) mean of random tensor. std (FLOAT) – (float, default 1.0) std of random tensor. seed (INT) – (int, default 0) Random seed of generator. 0 means use a system-wide seed. Note that if seed is not 0, this operator will always generate the same random numbers every time. dtype (INT) – (int, default 5(FP32)) Output data type. op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) Tensor of specified shape will be filled with the specified value

### cumsum¶

paddle.fluid.layers.cumsum(*args, **kwargs)

The cumulative sum of the elements along a given axis. By default, the first element of the result is the same as the first element of the input. If exclusive is true, the first element of the result is 0.

Parameters: x – Input of Cumsum operator Duplicable: False Optional: False axis (INT) – (int, default -1). The dimension to accumulate along. -1 means the last dimension exclusive (BOOLEAN) – (bool, default false). Whether to perform exclusive cumsum reverse (BOOLEAN) – (bool, default false). If true, the cumsum is performed in the reversed direction op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Cumsum operator
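
The interaction of the exclusive and reverse flags can be sketched for the 1-D case in plain Python (illustrative, not the fluid kernel):

```python
def cumsum(xs, exclusive=False, reverse=False):
    """1-D cumulative sum with the exclusive and reverse flags."""
    seq = list(reversed(xs)) if reverse else list(xs)
    out, running = [], 0
    for v in seq:
        if exclusive:
            out.append(running)   # exclusive: emit the sum *before* v
            running += v
        else:
            running += v          # inclusive: emit the sum *including* v
            out.append(running)
    return list(reversed(out)) if reverse else out

assert cumsum([1, 2, 3, 4]) == [1, 3, 6, 10]                  # default
assert cumsum([1, 2, 3, 4], exclusive=True) == [0, 1, 3, 6]   # first element 0
assert cumsum([1, 2, 3, 4], reverse=True) == [10, 9, 7, 4]    # accumulate from the end
```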

### scatter¶

paddle.fluid.layers.scatter(*args, **kwargs)

Scatter Operator.

This operator obtains output by updating the input on selected indices on the first axis:

$$Out = X, \quad Out[Ids] = X[Ids] + Updates$$

Parameters: x – The source input of scatter op Duplicable: False Optional: False ids – The index input of scatter op where X will be updated Duplicable: False Optional: False updates – The updated values of scatter op Duplicable: False Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable The output of scatter op
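
A plain-Python sketch that follows the formula above literally (illustrative only; the fluid op works on tensors, and this `scatter` helper is hypothetical):

```python
def scatter(x, ids, updates):
    """Out = X, then Out[Ids] = X[Ids] + Updates, per the formula above."""
    out = list(x)                 # start from a copy of X
    for i, upd in zip(ids, updates):
        out[i] = x[i] + upd       # update only the selected indices
    return out

# Indices 1 and 3 are updated; everything else is passed through unchanged.
assert scatter([10, 20, 30, 40], ids=[1, 3], updates=[5, 7]) == [10, 25, 30, 47]
```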

### sum¶

paddle.fluid.layers.sum(*args, **kwargs)

Sum operator.

This operator sums the input tensors. All the inputs can carry the LoD (Level of Details) information. However, the output only shares the LoD information with the first input.

Parameters: x – (vector) The input tensors of sum operator. Duplicable: True Optional: False op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable (Tensor) The output tensor of sum operator.

### sigmoid¶

paddle.fluid.layers.sigmoid(*args, **kwargs)

Sigmoid Activation Operator.

Parameters: x – Input of Sigmoid operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Sigmoid operator

### logsigmoid¶

paddle.fluid.layers.logsigmoid(*args, **kwargs)

LogSigmoid Activation Operator.

Parameters: x – Input of LogSigmoid operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of LogSigmoid operator

### exp¶

paddle.fluid.layers.exp(*args, **kwargs)

Exp Activation Operator.

Parameters: x – Input of Exp operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Exp operator

### relu¶

paddle.fluid.layers.relu(*args, **kwargs)

Relu Activation Operator.

Parameters: x – Input of Relu operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Relu operator

### tanh¶

paddle.fluid.layers.tanh(*args, **kwargs)

Tanh Activation Operator.

Parameters: x – Input of Tanh operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Tanh operator

### tanh_shrink¶

paddle.fluid.layers.tanh_shrink(*args, **kwargs)

TanhShrink Activation Operator.

Parameters: x – Input of TanhShrink operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of TanhShrink operator

### softshrink¶

paddle.fluid.layers.softshrink(*args, **kwargs)

Softshrink Activation Operator.

$$out = \begin{cases} x - \lambda, & \text{if } x > \lambda \\ x + \lambda, & \text{if } x < -\lambda \\ 0, & \text{otherwise} \end{cases}$$

Parameters: x – Input of Softshrink operator Duplicable: False Optional: False lambda (FLOAT) – non-negative offset op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Softshrink operator
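
The piecewise definition in plain Python (illustrative scalar sketch):

```python
def softshrink(x, lam):
    """Shrink x toward zero by lam; values inside [-lam, lam] become 0."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Each of the three cases of the piecewise formula:
assert [softshrink(v, 0.5) for v in (2.0, 0.3, -2.0)] == [1.5, 0.0, -1.5]
```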

### sqrt¶

paddle.fluid.layers.sqrt(*args, **kwargs)

Sqrt Activation Operator.

Parameters: x – Input of Sqrt operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Sqrt operator

### abs¶

paddle.fluid.layers.abs(*args, **kwargs)

Abs Activation Operator.

Parameters: x – Input of Abs operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Abs operator

### ceil¶

paddle.fluid.layers.ceil(*args, **kwargs)

Ceil Activation Operator.

Parameters: x – Input of Ceil operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Ceil operator

### floor¶

paddle.fluid.layers.floor(*args, **kwargs)

Floor Activation Operator.

Parameters: x – Input of Floor operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Floor operator

### cos¶

paddle.fluid.layers.cos(*args, **kwargs)

Cos Activation Operator.

Parameters: x – Input of Cos operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Cos operator

### sin¶

paddle.fluid.layers.sin(*args, **kwargs)

Sin Activation Operator.

Parameters: x – Input of Sin operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Sin operator

### round¶

paddle.fluid.layers.round(*args, **kwargs)

Round Activation Operator.

Parameters: x – Input of Round operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Round operator

### reciprocal¶

paddle.fluid.layers.reciprocal(*args, **kwargs)

Reciprocal Activation Operator.

Parameters: x – Input of Reciprocal operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Reciprocal operator

### log¶

paddle.fluid.layers.log(*args, **kwargs)

Log Activation Operator.

Parameters: x – Input of Log operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Log operator

### square¶

paddle.fluid.layers.square(*args, **kwargs)

Square Activation Operator.

Parameters: x – Input of Square operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Square operator

### softplus¶

paddle.fluid.layers.softplus(*args, **kwargs)

Softplus Activation Operator.

Parameters: x – Input of Softplus operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Softplus operator

### softsign¶

paddle.fluid.layers.softsign(*args, **kwargs)

Softsign Activation Operator.

Parameters: x – Input of Softsign operator Duplicable: False Optional: False use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Output of Softsign operator

### brelu¶

paddle.fluid.layers.brelu(*args, **kwargs)

BRelu Activation Operator.

$out = \min(\max(x, t_{min}), t_{max})$

Parameters: x – Input of BRelu operator (Duplicable: False, Optional: False) t_min (FLOAT) – The min marginal value of BRelu t_max (FLOAT) – The max marginal value of BRelu op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of BRelu operator
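The clipping rule above can be illustrated with a small NumPy sketch (a reference for the math only, not the Fluid operator itself; the `t_min`/`t_max` values below are illustrative):

```python
import numpy as np

def brelu(x, t_min, t_max):
    # Clip every element into the [t_min, t_max] interval.
    return np.minimum(np.maximum(x, t_min), t_max)

out = brelu(np.array([-2.0, 5.0, 30.0]), t_min=1.0, t_max=24.0)
# elements below t_min become t_min; elements above t_max become t_max
```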

### leaky_relu¶

paddle.fluid.layers.leaky_relu(*args, **kwargs)

LeakyRelu Activation Operator.

$out = max(x, alpha * x)$

Parameters: x – Input of LeakyRelu operator (Duplicable: False, Optional: False) alpha (FLOAT) – The small negative slope op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of LeakyRelu operator

### soft_relu¶

paddle.fluid.layers.soft_relu(*args, **kwargs)

SoftRelu Activation Operator.

$out = \ln(1 + \exp(\max(\min(x, threshold), -threshold)))$

Parameters: x – Input of SoftRelu operator (Duplicable: False, Optional: False) threshold (FLOAT) – The threshold value of SoftRelu op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of SoftRelu operator
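A NumPy sketch of the computation (a reference for the formula only, assuming the input is first clipped into `[-threshold, threshold]` as above):

```python
import numpy as np

def soft_relu(x, threshold=40.0):
    # Clip x into [-threshold, threshold], then apply ln(1 + exp(.)).
    clipped = np.maximum(np.minimum(x, threshold), -threshold)
    return np.log1p(np.exp(clipped))

out = soft_relu(np.array([0.0]))  # ln(1 + e^0) = ln(2)
```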

### elu¶

paddle.fluid.layers.elu(*args, **kwargs)

ELU Activation Operator.

Applies the following element-wise computation on the input according to https://arxiv.org/abs/1511.07289.

$out = max(0, x) + min(0, alpha * (e^x - 1))$

Parameters: x – Input of ELU operator (Duplicable: False, Optional: False) alpha (FLOAT) – The alpha value of ELU op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of ELU operator
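The element-wise formula above can be sketched directly in NumPy (a reference implementation of the math, not the Fluid operator):

```python
import numpy as np

def elu(x, alpha=1.0):
    # Identity for x > 0; alpha * (e^x - 1) for x <= 0.
    return np.maximum(0.0, x) + np.minimum(0.0, alpha * (np.exp(x) - 1.0))

out = elu(np.array([1.0, -1.0]))  # [1.0, e^-1 - 1]
```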

### relu6¶

paddle.fluid.layers.relu6(*args, **kwargs)

Relu6 Activation Operator.

$out = min(max(0, x), 6)$

Parameters: x – Input of Relu6 operator (Duplicable: False, Optional: False) threshold (FLOAT) – The threshold value of Relu6 op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of Relu6 operator

### pow¶

paddle.fluid.layers.pow(*args, **kwargs)

Pow Activation Operator.

$out = x^{factor}$

Parameters: x – Input of Pow operator (Duplicable: False, Optional: False) factor (FLOAT) – The exponential factor of Pow op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of Pow operator

### stanh¶

paddle.fluid.layers.stanh(*args, **kwargs)

STanh Activation Operator.

$$out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$

Parameters: x – Input of STanh operator (Duplicable: False, Optional: False) scale_a (FLOAT) – The scale parameter of a for the input scale_b (FLOAT) – The scale parameter of b for the input op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of STanh operator

### hard_shrink¶

paddle.fluid.layers.hard_shrink(*args, **kwargs)

HardShrink Activation Operator.

$$out = \begin{cases} x, & \text{if } x > \lambda \\ x, & \text{if } x < -\lambda \\ 0, & \text{otherwise} \end{cases}$$

Parameters: x – Input of HardShrink operator (Duplicable: False, Optional: False) threshold (FLOAT) – The value of threshold for HardShrink op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of HardShrink operator
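The case analysis above keeps elements whose magnitude exceeds the threshold and zeroes out the rest; a NumPy sketch of the math (not the Fluid operator):

```python
import numpy as np

def hard_shrink(x, threshold=0.5):
    # Pass through values with |x| > threshold; zero everything else.
    return np.where(np.abs(x) > threshold, x, 0.0)

out = hard_shrink(np.array([-1.0, 0.2, 0.7]))  # [-1.0, 0.0, 0.7]
```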

### thresholded_relu¶

paddle.fluid.layers.thresholded_relu(*args, **kwargs)

ThresholdedRelu Activation Operator.

$$out = \begin{cases} x, & \text{if } x > threshold \\ 0, & \text{otherwise} \end{cases}$$

Parameters: x – Input of ThresholdedRelu operator (Duplicable: False, Optional: False) threshold (FLOAT) – The threshold location of activation op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of ThresholdedRelu operator

### hard_sigmoid¶

paddle.fluid.layers.hard_sigmoid(*args, **kwargs)

HardSigmoid Activation Operator.

Segment-wise linear approximation of sigmoid (https://arxiv.org/abs/1603.00391), which is much faster than sigmoid.

$out = max(0, min(1, slope * x + shift))$

The slope should be positive. The offset can be either positive or negative. The default slope and shift are set according to the above reference. It is recommended to use the defaults for this activation.

Parameters: x – Input of HardSigmoid operator (Duplicable: False, Optional: False) slope (FLOAT) – Slope for linear approximation of sigmoid offset (FLOAT) – Offset for linear approximation of sigmoid op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of HardSigmoid operator
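A NumPy sketch of the clipped linear approximation (a reference for the formula only; the default slope 0.2 and offset 0.5 follow the reference cited above):

```python
import numpy as np

def hard_sigmoid(x, slope=0.2, offset=0.5):
    # Linear ramp slope * x + offset, clipped into [0, 1].
    return np.clip(slope * x + offset, 0.0, 1.0)

out = hard_sigmoid(np.array([-10.0, 0.0, 10.0]))  # [0.0, 0.5, 1.0]
```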

### swish¶

paddle.fluid.layers.swish(*args, **kwargs)

Swish Activation Operator.

$$out = \frac{x}{1 + e^{-\beta x}}$$

Parameters: x – Input of Swish operator (Duplicable: False, Optional: False) beta (FLOAT) – Constant beta of swish operator op_role (INT) – The role of this operator op_role_var (STRINGS) – Optimized for variable Returns: Output of Swish operator
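The formula above is x times the sigmoid of beta * x; a NumPy sketch of the math (not the Fluid operator):

```python
import numpy as np

def swish(x, beta=1.0):
    # x scaled by sigmoid(beta * x); approaches x for large positive inputs.
    return x / (1.0 + np.exp(-beta * x))

out = swish(np.array([0.0, 100.0]))  # swish(0) = 0; swish(x) -> x as x grows
```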

## tensor¶

### create_tensor¶

paddle.fluid.layers.create_tensor(dtype, name=None, persistable=False)

### create_parameter¶

paddle.fluid.layers.create_parameter(shape, dtype, name=None, attr=None, is_bias=False, default_initializer=None)

Create a parameter. :param shape: shape of the parameter :type shape: list[int] :param dtype: element type of the parameter :type dtype: string :param attr: attributes of the parameter :type attr: ParamAttr :param is_bias: This can affect which default initializer is chosen when default_initializer is None. If is_bias is True, initializer.Constant(0.0) will be used; otherwise, Xavier() will be used. :type is_bias: bool

Parameters: default_initializer (Initializer) – initializer for the parameter Returns: the created parameter (Parameter)

### create_global_var¶

paddle.fluid.layers.create_global_var(shape, value, dtype, persistable=False, force_cpu=False, name=None)

Create a global variable, such as global_step. :param shape: shape of the variable :type shape: list[int] :param value: the value of the variable :type value: float :param dtype: element type of the parameter :type dtype: string :param persistable: if this variable is persistable :type persistable: bool :param force_cpu: force this variable to be on CPU :type force_cpu: bool

Returns: the created Variable (Variable)

### cast¶

paddle.fluid.layers.cast(x, dtype)

This function takes the input x and casts it to the given dtype, returning the result as the output.

### concat¶

paddle.fluid.layers.concat(input, axis=0, name=None)

Concat

This function concatenates the input along the axis mentioned and returns that as the output.

Parameters: input (list) – List of tensors to be concatenated axis (int) – Integer axis along which the tensors will be concatenated name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically. Returns: Output variable of the concatenation (Variable)

Examples
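The example body was lost in extraction; the concatenation semantics match NumPy's `concatenate` (a sketch of the behavior, not the Fluid API itself):

```python
import numpy as np

# Tensors must agree on every axis except the concatenation axis.
a = np.ones((2, 3))
b = np.zeros((2, 2))
out = np.concatenate([a, b], axis=1)  # result shape: (2, 5)
```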

### sums¶

paddle.fluid.layers.sums(input, out=None)

This function performs the sum operation on the input and returns the result as the output.

Parameters: input (Variable|list) – The input tensor that has the elements that need to be summed up. out (Variable|None) – Optional output variable to write the sum to. Returns: The tensor type variable that has the sum of input written to it. (Variable)

Examples
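The example body was lost in extraction; the operation is an element-wise sum over a list of same-shaped tensors, sketched here in NumPy (not the Fluid API itself):

```python
import numpy as np

xs = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
out = sum(xs)  # element-wise sum over the list: [9.0, 12.0]
```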

### assign¶

paddle.fluid.layers.assign(input, output)

Assign

This function copies the input Variable to the output Variable.

Parameters: input (Variable|numpy.ndarray) – The source variable output (Variable) – The destination variable Returns: The destination variable that was supplied as the output. (Variable)

Examples

### fill_constant_batch_size_like¶

paddle.fluid.layers.fill_constant_batch_size_like(input, shape, dtype, value, input_dim_idx=0, output_dim_idx=0)

fill_constant_batch_size_like

This function creates a tensor of specified shape, dtype and batch size, and initializes this with a constant supplied in value. The batch size is obtained from the input tensor.

It also sets stop_gradient to True.

Parameters: input (Variable) – Tensor whose dimensions will be used to get batch size shape (tuple|list|None) – Shape of output tensor dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor value (float) – Constant value to initialize the output tensor input_dim_idx (int) – Index of input’s batch size dimension output_dim_idx (int) – Index of output’s batch size dimension Returns: The tensor variable storing the output (Variable)

Examples

data = fluid.layers.fill_constant_batch_size_like(
input=like, shape=[1], value=0, dtype='int64')


### fill_constant¶

paddle.fluid.layers.fill_constant(shape, dtype, value, force_cpu=False, out=None)

fill_constant

This function creates a tensor with the specified shape and dtype, and initializes it with a constant specified by value.

The attribute stop_gradient of the created tensor is set to True.

Parameters: shape (tuple|list|None) – Shape of the output tensor. dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output tensor. value (float) – The constant value used to initialize the output tensor. out (Variable) – The output tensor. force_cpu (True|False) – data should be on CPU if set true. Returns: The tensor variable storing the output. (Variable)

Examples

data = fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')


### ones¶

paddle.fluid.layers.ones(shape, dtype, force_cpu=False)

ones

This function creates a tensor of specified shape and dtype, and initializes this with 1.

It also sets stop_gradient to True.

Parameters: shape (tuple|list|None) – Shape of output tensor dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor force_cpu (bool) – Whether to force the output tensor onto the CPU Returns: The tensor variable storing the output (Variable)

Examples

data = fluid.layers.ones(shape=[1], dtype='int64')


### zeros¶

paddle.fluid.layers.zeros(shape, dtype, force_cpu=False)

zeros

This function creates a tensor of specified shape and dtype, and initializes this with 0.

It also sets stop_gradient to True.

Parameters: shape (tuple|list|None) – Shape of output tensor dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor force_cpu (bool) – Whether to force the output tensor onto the CPU Returns: The tensor variable storing the output (Variable)

Examples

data = fluid.layers.zeros(shape=[1], dtype='int64')


### topk¶

paddle.fluid.layers.topk(input, k, name=None)

This operator is used to find values and indices of the k largest entries for the last dimension.

If the input is a vector (rank=1), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

If the input is a Tensor with higher rank, this operator computes the top k entries along the last dimension.

Parameters: input (Variable) – The input variable which can be a vector or Tensor with higher rank. k (int) – An integer value to specify the top k largest elements. name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically. Returns: values (Variable) – The k largest elements along each last-dimensional slice. indices (Variable) – The indices of the values within the last dimension of input.

Examples

top5_values, top5_indices = layers.topk(input, k=5)
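The semantics can be sketched in NumPy (a reference for the behavior only, not the Fluid operator; ties may be broken differently by the real kernel):

```python
import numpy as np

def topk(x, k):
    # Indices of the k largest entries along the last dimension, descending.
    idx = np.argsort(-x, axis=-1)[..., :k]
    vals = np.take_along_axis(x, idx, axis=-1)
    return vals, idx

values, indices = topk(np.array([1.0, 5.0, 3.0, 2.0]), k=2)
# values -> [5.0, 3.0], indices -> [1, 2]
```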


### dice_loss¶

paddle.fluid.layers.dice_loss(input, label, epsilon=1e-05)

Dice loss layer. Dice loss compares the similarity of two batches of data and is usually used for binary image segmentation, i.e. when labels are binary. The dice loss can be defined by the equation below:

$\begin{split}dice\_loss &= 1 - \frac{2 * intersection\_area}{total\_area} \\ &= \frac{(total\_area - intersection\_area) - intersection\_area}{total\_area} \\ &= \frac{(union\_area - intersection\_area)}{total\_area}\end{split}$
Parameters: input (Variable) – The predictions with rank >= 2. The first dimension is batch size, and the last dimension is class number. label (Variable) – The ground truth with the same rank as input. The first dimension is batch size, and the last dimension is 1. epsilon (float) – The epsilon will be added to the numerator and denominator. If both input and label are empty, it makes sure dice is 1. Default: 0.00001 Returns: The dice loss with shape [1]. dice_loss (Variable)

Examples

predictions = fluid.layers.softmax(x)
loss = fluid.layers.dice_loss(input=predictions, label=label)
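The equation above can be sketched in NumPy (a reference for the math only, assuming epsilon is added to both numerator and denominator as the parameter description states):

```python
import numpy as np

def dice_loss(probs, labels, epsilon=1e-5):
    # probs: predicted probabilities; labels: binary ground truth, same shape.
    intersection = np.sum(probs * labels)
    total = np.sum(probs) + np.sum(labels)
    return 1.0 - (2.0 * intersection + epsilon) / (total + epsilon)

# A perfect prediction yields a loss of 0.
loss = dice_loss(np.array([1.0, 1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0, 0.0]))
```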


### upsampling_bilinear2d¶

paddle.fluid.layers.upsampling_bilinear2d(input, out_shape=None, scale=None, name=None)

upsampling_bilinear2d performs bilinear interpolation, an extension of linear interpolation to functions of two variables (e.g. the H direction and W direction in this layer) on a rectilinear 2D grid.

For details, please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation

Parameters: input (Variable) – The input tensor of bilinear interpolation. This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w). out_shape (list|tuple|None) – Output shape of the bilinear interpolation layer, of the form (out_h, out_w). Default: None scale (int|None) – The multiplier for the input height or width. At least one of out_shape or scale must be set, and out_shape has higher priority than scale. Default: None name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically. Returns: The output, a 4-D tensor of the shape (num_batches, channels, out_h, out_w). out (Variable)

Examples

out = fluid.layers.upsampling_bilinear2d(input, out_shape=[12, 12])