layers

control_flow

split_lod_tensor

paddle.fluid.layers.split_lod_tensor(input, mask, level=0)

This function takes an input that contains complete LoD information and a mask that selects parts of the input. The output is a true branch and a false branch, produced by applying the mask to the input at the given LoD level. It is mainly used in IfElse to split data into two parts.

Parameters:
  • input (tuple|list|None) – The input tensor that contains complete lod information needed to construct the output.
  • mask (list) – A bool column vector which masks the input.
  • level (int) – The specific lod level to split.
Returns:

The true branch of tensor as per the mask applied to input.

The false branch of tensor as per the mask applied to input.

Return type:

tuple(Variable, Variable)

Examples

import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[1])
x.persistable = True

y = fluid.layers.data(name='y', shape=[1])
y.persistable = True

level = 0
out_true, out_false = fluid.layers.split_lod_tensor(
      input=x, mask=y, level=level)

merge_lod_tensor

paddle.fluid.layers.merge_lod_tensor(in_true, in_false, x, mask, level=0)

merge_lod_tensor

This function takes an input \(x\), a True branch, a False branch and a binary \(mask\). It merges the True and False branches into a single output tensor at the LoD level given by \(level\). It is used in IfElse to merge the outputs of the True block and the False block.

Parameters:
  • in_true (tuple|list|None) – The True branch to be merged.
  • in_false (tuple|list|None) – The False branch to be merged.
  • x (tuple|list|None) – The input tensor that contains complete lod information needed to construct the output.
  • mask (list) – A bool column vector which masks the input.
  • level (int) – The specific lod level to merge.
Returns:

The merged output tensor.

Return type:

Variable

Examples

import paddle.fluid as fluid

x = fluid.layers.data(
      name='x', shape=[1], dtype='float32', stop_gradient=False)
y = fluid.layers.data(
      name='y', shape=[1], dtype='bool', stop_gradient=False)

level = 0

out_true, out_false = fluid.layers.split_lod_tensor(
      input=x, mask=y, level=level)
out = fluid.layers.merge_lod_tensor(
      in_true=out_true, in_false=out_false, mask=y, x=x, level=level)

BlockGuard

class paddle.fluid.layers.BlockGuard(main_program)

BlockGuard class.

BlockGuard class is used to create a sub-block in a program by using the Python with keyword.

BlockGuardWithCompletion

class paddle.fluid.layers.BlockGuardWithCompletion(rnn)

BlockGuardWithCompletion class.

BlockGuardWithCompletion class is used to create an op with a block in a program.

WhileGuard

class paddle.fluid.layers.WhileGuard(while_op)

While

class paddle.fluid.layers.While(cond, name=None)

While loop control flow.

Parameters:
  • cond (Variable) – A boolean condition Variable controlling whether the loop continues.
  • name (str) – The name of this layer.

Examples

import paddle.fluid as fluid
import paddle.fluid.layers as layers

d0 = layers.data("d0", shape=[10], dtype='float32')
i = layers.zeros(shape=[1], dtype='int64')
data_array = layers.array_write(x=d0, i=i)
array_len = layers.fill_constant(shape=[1], dtype='int64', value=3)

cond = layers.less_than(x=i, y=array_len)
while_op = layers.While(cond=cond)
with while_op.block():
    d = layers.array_read(array=data_array, i=i)
    i = layers.increment(x=i, in_place=True)
    layers.array_write(d, i=i, array=data_array)
    layers.less_than(x=i, y=array_len, cond=cond)

Switch

class paddle.fluid.layers.Switch(name=None)

The Switch class works like an if-elif-else chain. It can be used, for example, in a learning rate scheduler to modify the learning rate.

The Semantics:

  1. A switch control-flow checks cases one-by-one.
  2. The condition of each case is a boolean value, which is a scalar Variable.
  3. It runs the first matched case, or the default case if there is one.
  4. Once it matches a case, it runs the corresponding branch and only that branch.

Examples

import paddle.fluid as fluid

# global_step is assumed to be a float32 scalar Variable (e.g. a step counter).
lr = fluid.layers.tensor.create_global_var(
    shape=[1],
    value=0.0,
    dtype='float32',
    persistable=True,
    name="learning_rate")
zero_var = fluid.layers.tensor.fill_constant(
    shape=[1], dtype='float32', value=0.0)
one_var = fluid.layers.tensor.fill_constant(
    shape=[1], dtype='float32', value=1.0)
two_var = fluid.layers.tensor.fill_constant(
    shape=[1], dtype='float32', value=2.0)

with fluid.layers.control_flow.Switch() as switch:
    with switch.case(global_step == zero_var):
        fluid.layers.tensor.assign(input=one_var, output=lr)
    with switch.default():
        fluid.layers.tensor.assign(input=two_var, output=lr)
case(condition)

Create a new block for this condition.

default()

Create a default case for this switch.

lod_rank_table

paddle.fluid.layers.lod_rank_table(x, level=0)

LoD Rank Table Operator. Given an input variable x and a LoD level, this layer creates a LoDRankTable object. A LoDRankTable object contains a list of two-element tuples. Each tuple consists of an index and a length, both of int type. Referring to the specified level of LoD, the index is the sequence index number and the length represents the sequence length. Note that the list is ranked in descending order by length. The following is an example:

x is a LoDTensor:
    x.lod = [[2,                1],
             [5,             1, 1]]
    x.data = [a, b, c, d, e, f, g]

1. set level to 0:
    Create lod rank table:
        lod_rank_table_obj = lod_rank_table(x, level=0)

    Get:
        lod_rank_table_obj.items() = [(0, 2), (1, 1)]

2. set level to 1:
    Create lod rank table:
        lod_rank_table_obj = lod_rank_table(x, level=1)

    Get:
        lod_rank_table_obj.items() = [(0, 5), (1, 1), (2, 1)]
Parameters:
  • x (Variable) – Input variable, a LoDTensor based which to create the lod rank table.
  • level (int) – Specify the LoD level, on which to create the lod rank table.
Returns:

The created LoDRankTable object.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10],
                      dtype='float32', lod_level=1)
out = fluid.layers.lod_rank_table(x=x, level=0)

max_sequence_len

paddle.fluid.layers.max_sequence_len(rank_table)

Given a LoDRankTable object, this layer returns the maximum length of a batch of sequences. In fact, a LoDRankTable object contains a list of tuples (<sequence index, sequence length>) that is already sorted by sequence length in descending order, so the operator just returns the sequence length of the first tuple.

>>> import paddle.fluid as fluid
>>> x = fluid.layers.data(name='x', shape=[10], dtype='float32',
>>>                       lod_level=1)
>>> rank_table = fluid.layers.lod_rank_table(x=x, level=0)
>>> max_seq_len = fluid.layers.max_sequence_len(rank_table)
Parameters:rank_table (Variable) – Input variable which is a LoDRankTable object.
Returns:The max sequence length.

lod_tensor_to_array

paddle.fluid.layers.lod_tensor_to_array(x, table)

Convert a LoDTensor to a LoDTensorArray.

This function splits a LoDTensor into a LoDTensorArray according to its LoD information. LoDTensorArray is an alias of C++ std::vector<LoDTensor> in PaddlePaddle. The generated LoDTensorArray can be further read or written by the read_from_array() and write_to_array() operators. However, this function is generally an internal component of PaddlePaddle DynamicRNN. Users should not use it directly.

Parameters:
  • x (Variable|list) – The LoDTensor to be converted to a LoDTensorArray.
  • table (ParamAttr|list) – The variable that stores the LoD rank table, which is ordered by sequence length in descending order. It is generally generated by the layers.lod_rank_table() API.
Returns:

The LoDTensorArray that has been converted from the input tensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)

array_to_lod_tensor

paddle.fluid.layers.array_to_lod_tensor(x, table)

Convert a LoDTensorArray to a LoDTensor.

Parameters:
  • x (Variable|list) – The lod tensor array to be converted to a tensor.
  • table (ParamAttr|list) – The variable that stores the LoD rank table, which is ordered by sequence length in descending order.
Returns:

The variable of type tensor that has been converted from an array.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10])
table = fluid.layers.lod_rank_table(x, level=0)
array = fluid.layers.lod_tensor_to_array(x, table)
lod_tensor = fluid.layers.array_to_lod_tensor(array, table)

increment

paddle.fluid.layers.increment(x, value=1.0, in_place=True)

This function increments each value in the input \(x\) by the given \(value\). The operation is performed in-place by default.

Parameters:
  • x (Variable|list) – The tensor that has the input values.
  • value (float) – The amount by which the values should be incremented.
  • in_place (bool) – If the increment should be performed in-place.
Returns:

The elementwise-incremented object.

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
data = fluid.layers.increment(x=data, value=3.0, in_place=True)

array_write

paddle.fluid.layers.array_write(x, i, array=None)

This function writes the given input variable to the position indicated by the array index in an output LOD_TENSOR_ARRAY. If the output LOD_TENSOR_ARRAY is not given (None), a new one will be created and returned.

Parameters:
  • x (Variable|list) – The input tensor from which the data will be read.
  • i (Variable|list) – The index of the output LOD_TENSOR_ARRAY, pointing to the position to which the input tensor will be written.
  • array (Variable|list) – The output LOD_TENSOR_ARRAY to which the input tensor will be written. If this parameter is NONE, a new LOD_TENSOR_ARRAY will be created and returned.
Returns:

The output LOD_TENSOR_ARRAY where the input tensor is written.

Return type:

Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)

create_array

paddle.fluid.layers.create_array(dtype)

Create LoDTensorArray

This function creates a variable of type LOD_TENSOR_ARRAY. It is mainly used to implement RNNs together with array_write, array_read and While.

Parameters:dtype (int|float) – The data type of the elements in the lod_tensor_array.
Returns:The lod_tensor_array variable storing elements of the given data type.
Return type:Variable

Examples

data = fluid.layers.create_array(dtype='float32')

less_than

paddle.fluid.layers.less_than(x, y, force_cpu=None, cond=None, **ignored)

It operates element-wise on X and Y and returns Out. Each of them is an N-dim tensor. X and Y could be any type. Each element of the Out tensor is calculated as \(Out = X < Y\).

>>> import paddle.fluid as fluid
>>> less = fluid.layers.less_than(x=label, y=limit)
Parameters:
  • x (Variable) – the left hand operand of less_than operator.
  • y (Variable) – the right hand operand of less_than operator.
  • force_cpu (BOOLEAN) – Force fill output variable to cpu memory. Otherwise, fill output variable to the running device [default true].
  • cond (Variable|None) – Optional output variable to store the result of less_than
Returns:

n-dim bool tensor. Each element is Out = X < Y.

equal

paddle.fluid.layers.equal(x, y, cond=None, **ignored)

equal

This layer returns the truth value of \(x == y\) elementwise.

Parameters:
  • x (Variable) – First operand of equal
  • y (Variable) – Second operand of equal
  • cond (Variable|None) – Optional output variable to store the result of equal
Returns:

The tensor variable storing the output of equal.

Return type:

Variable

Examples

result = fluid.layers.equal(x=label, y=limit)

array_read

paddle.fluid.layers.array_read(array, i)

This function reads the data at a given index from an input LOD_TENSOR_ARRAY.

Given:

array = [0.6, 0.1, 0.3, 0.1]

And:

i = 2

Then:

output = 0.3
Parameters:
  • array (Variable|list) – The input LOD_TENSOR_ARRAY that stores the data to be read.
  • i (Variable|list) – The index of the data to be read from the input array.
Returns:

The tensor variable storing the data read from the array at the given index.

Return type:

Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
item = fluid.layers.array_read(array=arr, i=i)

shrink_memory

paddle.fluid.layers.shrink_memory(x, i, table)

This function creates an operator to shrink the RNN memory according to the RankTable given as input.

NOTE: This is a very low-level API. It is used by DynamicRNN only.

Because DynamicRNN implements the RNN without padding, the sequences are sorted by length and the amount of valid memory is shrunk after each time step.

Parameters:
  • x (Variable) – The memory object in the previous time step.
  • i (Variable) – The step count variable. A int scalar as LoDTensor.
  • table (Variable) – The RNNRankTable object.
Returns:

the memory variable after shrink.

Examples

Since this is a very low-level API, no example is provided. Please refer to the implementation of the DynamicRNN class for detailed usage.

array_length

paddle.fluid.layers.array_length(array)

Get the Length of Input LoDTensorArray

This function returns the length of the input LOD_TENSOR_ARRAY.

Related API: array_read, array_write, While.

Parameters:array (LOD_TENSOR_ARRAY) – The input array that will be used to compute the length.
Returns:The length of the input LoDTensorArray.
Return type:Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
arr_len = fluid.layers.array_length(arr)

IfElse

class paddle.fluid.layers.IfElse(cond, name=None)

If-else control flow.

Parameters:
  • cond (Variable) – A boolean condition Variable that selects the branch.
  • name (str, default None) – The name of this layer.

Examples

image = fluid.layers.data(name='image', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant_batch_size_like(
    input=label, dtype='int64', shape=[1], value=5)
cond = fluid.layers.less_than(x=label, y=limit)
ie = fluid.layers.IfElse(cond)
with ie.true_block():
    true_image = ie.input(image)
    hidden = fluid.layers.fc(input=true_image, size=100, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)

with ie.false_block():
    false_image = ie.input(image)
    hidden = fluid.layers.fc(
        input=false_image, size=200, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)
prob = ie()

DynamicRNN

class paddle.fluid.layers.DynamicRNN(name=None)

The dynamic RNN can process a batch of sequence data. The length of each sample sequence can be different. This API automatically processes them in batch.

The input LoD must be set. Please refer to lod_tensor.

>>> import paddle.fluid as fluid
>>> data = fluid.layers.data(name='sentence', dtype='int64', lod_level=1)
>>> embedding = fluid.layers.embedding(input=data, size=[65535, 32],
>>>                                    is_sparse=True)
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(embedding)
>>>     prev = drnn.memory(shape=[200])
>>>     hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
>>>     drnn.update_memory(prev, hidden)  # set prev to hidden
>>>     drnn.output(hidden)
>>>
>>> # last is the last time step of rnn. It is the encoding result.
>>> last = fluid.layers.sequence_last_step(drnn())

The dynamic RNN will unfold the sequence into time steps. Users need to define how to process each time step inside the with block.

The memory is used for staging data across time steps. The initial value of the memory can be zero or another variable.

The dynamic RNN can mark multiple variables as its outputs. Use drnn() to get the output sequence.

step_input(x)

Mark a sequence as a dynamic RNN input.

Parameters:x (Variable) – The input sequence.
Returns:The current time step in the input sequence.
static_input(x)

Mark a variable as an RNN input. The input will not be scattered into time steps.

Parameters:x (Variable) – The input variable.
Returns:The input variable that can be accessed within the RNN block.
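The following is a minimal, hedged sketch of static_input; the variable names (sentence, encoder_out) and sizes are assumptions, not part of the API:

sentence = fluid.layers.data(name='sentence', shape=[1], dtype='int64', lod_level=1)
encoder_out = fluid.layers.data(name='encoder_out', shape=[200], dtype='float32')
emb = fluid.layers.embedding(input=sentence, size=[65535, 32], is_sparse=True)

drnn = fluid.layers.DynamicRNN()
with drnn.block():
    word = drnn.step_input(emb)            # scattered into time steps
    enc = drnn.static_input(encoder_out)   # the same (reordered) value at every step
    hidden = fluid.layers.fc(input=[word, enc], size=200, act='tanh')
    drnn.output(hidden)
rnn_out = drnn()
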
block(*args, **kwds)

The block in which the user defines the operators of the RNN. See the class docstring for more details.

memory(init=None, shape=None, value=0.0, need_reorder=False, dtype='float32')

Create a memory variable for dynamic rnn.

If init is not None, the memory will be initialized by this variable. need_reorder is used to reorder the memory to match the order of the input variable; it should be set to True when the initial memory depends on the input sample.

For example,

>>> import paddle.fluid as fluid
>>> sentence = fluid.layers.data(
>>>                 name='sentence', dtype='float32', shape=[32])
>>> boot_memory = fluid.layers.data(
>>>                 name='boot', dtype='float32', shape=[10])
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(sentence)
>>>     memory = drnn.memory(init=boot_memory, need_reorder=True)
>>>     hidden = fluid.layers.fc(
>>>                 input=[word, memory], size=10, act='tanh')
>>>     drnn.update_memory(ex_mem=memory, new_mem=hidden)
>>>     drnn.output(hidden)
>>> rnn_output = drnn()

Otherwise, if shape, value, dtype are set, the memory will be initialized by this value.

For example,

>>> import paddle.fluid as fluid
>>> sentence = fluid.layers.data(
>>>                 name='sentence', dtype='float32', shape=[32])
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(sentence)
>>>     memory = drnn.memory(shape=[10], dtype='float32', value=0)
>>>     hidden = fluid.layers.fc(
>>>             input=[word, memory], size=10, act='tanh')
>>>     drnn.update_memory(ex_mem=memory, new_mem=hidden)
>>>     drnn.output(hidden)
>>> rnn_output = drnn()
Parameters:
  • init (Variable|None) – The initializing variable.
  • shape (list|tuple) – The memory shape. NOTE: the shape does not contain batch_size.
  • value (float) – The initialized value.
  • need_reorder (bool) – True if the initialized memory depends on the input sample.
  • dtype (str|numpy.dtype) – The data type of the initialized memory.
Returns:

the memory variable.

update_memory(ex_mem, new_mem)

Update the memory from ex_mem to new_mem. NOTE that the shape and data type of ex_mem and new_mem must be the same.

Parameters:
  • ex_mem (Variable) – The memory variable.
  • new_mem (Variable) – The plain variable generated in the RNN block.

Returns:None
output(*outputs)

Mark the RNN output variables.

Parameters:outputs – The output variables.
Returns:None

ConditionalBlock

class paddle.fluid.layers.ConditionalBlock(inputs, is_scalar_condition=False, name=None)

ConditionalBlock

ConditionalBlock is an operator that binds a block to a specific condition; if the condition is met, the corresponding block is executed.

Parameters:
  • inputs (Variable) – bool conditions.
  • is_scalar_condition (bool) – whether the branch is controlled by a scalar condition.
  • name (str) – name of this ConditionalBlock.

Examples

import paddle.fluid.layers as layers

# label, limit and image are assumed to be previously defined Variables.
cond = layers.less_than(x=label, y=limit)
true_image, false_image = layers.split_lod_tensor(
    input=image, mask=cond)
true_cond = layers.ConditionalBlock([true_image])
false_cond = layers.ConditionalBlock([false_image])

with true_cond.block():
    ...
with false_cond.block():
    ...

StaticRNN

class paddle.fluid.layers.StaticRNN(name=None)

StaticRNN class.

StaticRNN class is used to create a StaticRNN. The RNN will have its own parameters such as inputs, outputs, memories, states and lengths.

memory(init=None, shape=None, batch_ref=None, init_value=0.0, init_batch_dim_idx=0, ref_batch_dim_idx=1)
Parameters:
  • init – The boot memory. If not set, shape and batch_ref must be provided.
  • shape – The shape of the boot memory.
  • batch_ref – The batch size reference variable.
  • init_value – The initial value of the boot memory.
  • init_batch_dim_idx – The index of the batch size in init's dimensions.
  • ref_batch_dim_idx – The index of the batch size in batch_ref's dimensions.
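
Examples

A minimal, hedged usage sketch. The step(), step_input(), memory(), update_memory() and step_output() calls follow the usual StaticRNN interface; x is assumed to be a [seq_len x batch_size x dim] Variable prepared beforehand.

hidden_size = 200
rnn = fluid.layers.StaticRNN()
with rnn.step():
    word = rnn.step_input(x)                       # one time-step slice of x
    prev = rnn.memory(shape=[hidden_size], batch_ref=word)
    hidden = fluid.layers.fc(input=[word, prev], size=hidden_size, act='tanh')
    rnn.update_memory(prev, hidden)                # set prev to hidden
    rnn.step_output(hidden)
rnn_out = rnn()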

reorder_lod_tensor_by_rank

paddle.fluid.layers.reorder_lod_tensor_by_rank(x, rank_table)

ReorderLoDTensorByRankTable operator.

Input(X) is a batch of sequences. Input(RankTable) stores new orders of the input sequence batch. The reorder_lod_tensor_by_rank operator reorders the Input(X) according to the information provided by Input(RankTable).

For example:

If the indices stored in Input(RankTable) are [3, 0, 2, 1], Input(X) will be reordered so that the fourth sequence in Input(X) becomes the first one, followed by the original first, third, and second ones.

This is: X = [Seq0, Seq1, Seq2, Seq3]. The indices in RankTable are [3, 0, 2, 1]. Out = [Seq3, Seq0, Seq2, Seq1] with a new LoD information.

If the LoD information of Input(X) is empty, this means Input(X) is not sequence data. This is also identical to a batch of sequences where each sequence has a fixed length 1. In this case, the reorder_lod_tensor_by_rank operator reorders each slice of Input(X) along the first axis according to Input(RankTable).

This is: X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty. The indices in RankTable are [3, 0, 2, 1]. Out = [Slice3, Slice0, Slice2, Slice1] with no LoD information appended.

NOTE: This operator sorts Input(X) according to a given LoDRankTable which does not need to be calculated according to Input(X). It can be calculated according to another different sequence, and then this operator sorts Input(X) according to the given LoDRankTable.

Parameters:
  • x – (LoDTensor), the input lod tensor to be reordered according to Input(RankTable).
  • rank_table – (LoDRankTable), the rank table according to which Input(X) is reordered.
Returns:

(LoDTensor), the reordered lod tensor.
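
Examples

A hedged sketch: x is reordered to follow the descending-length order of y, whose rank table is built first.

x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
y = fluid.layers.data(name='y', shape=[10], dtype='float32', lod_level=1)
table = fluid.layers.lod_rank_table(x=y, level=0)
new_x = fluid.layers.reorder_lod_tensor_by_rank(x=x, rank_table=table)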

ParallelDo

class paddle.fluid.layers.ParallelDo(places, use_nccl=False, name=None)

ParallelDo is used to represent multi-thread data parallel processing.

Its vanilla implementation can be shown as the following (\(|\) means single thread and \(||||\) means multiple threads)

In the forward pass
  |      Split input onto different devices
  |      Copy parameter onto different devices
  ||||   Compute forward pass in parallel
  |      Merge output from different devices

In the backward pass
  |      Split output@grad onto different devices
  ||||   Compute backward pass in parallel
  |      accumulate param@grad from different devices to the first device
  |      Merge input@grad from different devices
  |      Copy param@grad to the place of parallel_do_op

Examples:

# DTYPE, thread_num and cnn_model are assumed to be defined elsewhere.
images = fluid.layers.data(name='pixel', shape=[1, 28, 28], dtype=DTYPE)
label = fluid.layers.data(name='label', shape=[1], dtype='int64')

# ParallelDo version & Single-thread version
if thread_num > 1:
    places = fluid.layers.get_places(thread_num)
    pd = fluid.layers.ParallelDo(places)
    with pd.do():
        images = pd.read_input(images)
        label = pd.read_input(label)
        predict = cnn_model(images)
        cost = fluid.layers.cross_entropy(input=predict, label=label)

        avg_cost = fluid.layers.mean(x=cost)
        pd.write_output(avg_cost)

    avg_cost = pd()
    avg_cost = fluid.layers.mean(avg_cost)
else:
    predict = cnn_model(images)
    cost = fluid.layers.cross_entropy(input=predict, label=label)
    avg_cost = fluid.layers.mean(x=cost)

Warning

It will be soon deprecated, please use ParallelExecutor instead.

Print

paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=-1, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')

Print operator

This creates a print op that prints when a tensor is accessed.

It wraps the input tensor so that whenever the tensor is accessed, the given message is printed, along with the current value of the tensor.

Parameters:
  • input (Variable) – A Tensor to print.
  • summarize (int) – Print this number of elements in the tensor; all elements are printed if the value is negative.
  • message (str) – A string message to print as a prefix.
  • first_n (int) – Only log first_n number of times.
  • print_tensor_name (bool) – Print the tensor name.
  • print_tensor_type (bool) – Print the tensor type.
  • print_tensor_shape (bool) – Print the tensor shape.
  • print_tensor_lod (bool) – Print the tensor lod.
  • print_phase (str) – Which phase to display, one of ‘forward’, ‘backward’ and ‘both’. If set to ‘backward’ or ‘both’, the gradients of the input tensor are also printed.
Returns:

Output tensor, same data with input tensor.

Return type:

Variable

Examples

value = some_layer(...)
fluid.layers.Print(value, summarize=10,
                   message="The content of some_layer: ")

is_empty

paddle.fluid.layers.is_empty(x, cond=None, **ignored)

Test whether a Variable is empty.

Parameters:
  • x (Variable) – The Variable to be tested.
  • cond (Variable|None) – Output parameter. Returns the test result of given ‘x’. Default: None
Returns:

A bool scalar. True if ‘x’ is an empty Variable.

Return type:

Variable

Raises:

TypeError – If input cond is not a variable, or cond’s dtype is not bool.

Examples

res = fluid.layers.is_empty(x=input)
# or:
fluid.layers.is_empty(x=input, cond=res)

device

get_places

paddle.fluid.layers.get_places(device_count=None, device_type=None)

Returns a list of places based on arguments. The list will be used for parallel execution.

Parameters:
  • device_count (int) – The number of devices.
  • device_type (str) – The device type.
Returns:

vector of Place
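
Examples

A hedged sketch (the accepted device_type strings are assumed to include 'CPU'):

places = fluid.layers.get_places(device_count=4, device_type='CPU')
pd = fluid.layers.ParallelDo(places)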

io

data

paddle.fluid.layers.data(name, shape, append_batch_size=True, dtype='float32', lod_level=0, type=VarType.LOD_TENSOR, stop_gradient=True)

Data Layer

Based on whether the data has to be returned as a minibatch, this function creates a global variable using helper functions. The global variable can be accessed by all the operators that follow it in the graph.

All the input variables of this function are passed in as local variables to the LayerHelper constructor.

Parameters:
  • name (str) – The name/alias of the data variable.
  • shape (list) – List of integers declaring the shape.
  • append_batch_size (bool) – Whether to prepend a variable batch-size dimension (-1) to the given shape.
  • dtype (int|float) – The type of the data: float32, float_16, int, etc.
  • type (VarType) – The output type. By default it is LOD_TENSOR.
  • lod_level (int) – The LoD level. 0 means the input data is not a sequence.
  • stop_gradient (bool) – Whether to stop gradient propagation at this variable.
Returns:

The global variable that gives access to the data.

Return type:

Variable

Examples

data = fluid.layers.data(name='x', shape=[784], dtype='float32')

BlockGuardServ

class paddle.fluid.layers.BlockGuardServ(server)

BlockGuardServ class.

BlockGuardServ class is used to create an op with a block in a program.

ListenAndServ

class paddle.fluid.layers.ListenAndServ(endpoint, inputs, fan_in=1, optimizer_mode=True)

ListenAndServ Layer

ListenAndServ is used to create an RPC server that binds to and listens on a specific TCP port; this server runs the sub-block when it receives variables from clients.

Parameters:
  • endpoint (string) – IP:port string which the server will listen on.
  • inputs (list) – a list of variables that the server will get from clients.
  • fan_in (int) – how many client are expected to report to this server, default: 1.
  • optimizer_mode (bool) – whether to run the server as a parameter server, default: True.

Examples

import paddle.fluid as fluid
import paddle.fluid.layers as layers

main = fluid.Program()
with fluid.program_guard(main):
    serv = layers.ListenAndServ(
        "127.0.0.1:6170", ["X"], optimizer_mode=False)
    with serv.do():
        out_var = main.global_block().create_var(
            name="Out", dtype='float32', shape=[32, 32])
        x = layers.data(
            shape=[32, 32],
            dtype='float32',
            name="X",
            append_batch_size=False)
        fluid.initializer.Constant(value=1.0)(x, main.global_block())
        layers.scale(x=x, scale=10.0, out=out_var)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(main)

Send

paddle.fluid.layers.Send(endpoints, send_vars, sync=True)

Send variables to the server side, and get variables back from the server side once the server has finished running the server-side program.

Parameters:
  • endpoints (str) – comma-separated IP:PORT pairs, in the order of send_vars.
  • send_vars (list) – variables to send to the server.
  • sync (bool) – whether to wait for the request to finish.
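
Examples

A hedged sketch: the endpoint is assumed to host a running server (see the ListenAndServ example above) that accepts a variable named 'X'.

x = fluid.layers.data(name='X', shape=[32, 32], dtype='float32',
                      append_batch_size=False)
fluid.layers.Send("127.0.0.1:6170", [x])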

Recv

paddle.fluid.layers.Recv(endpoints, get_vars, sync=True)

Receive variables from the server side.

Parameters:
  • endpoints (str) – comma-separated IP:PORT pairs, in the order of get_vars.
  • get_vars (list) – variables to get from the server after Send completes.
  • sync (bool) – whether to wait for the request to finish.
Returns:

list of received variables

Return type:

list
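
Examples

A hedged sketch: the endpoint is assumed to host a server that provides a variable named 'Out'.

out = fluid.layers.data(name='Out', shape=[32, 32], dtype='float32',
                        append_batch_size=False)
received = fluid.layers.Recv("127.0.0.1:6170", [out])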

open_recordio_file

paddle.fluid.layers.open_recordio_file(filename, shapes, lod_levels, dtypes, pass_num=1, for_parallel=True)

Open a RecordIO file and return the reader object. The returned reader object is thread-safe.

NOTE: This is a very low-level API. It is used for debugging data files or training. Please use open_files instead of this API for production usage.

Parameters:
  • filename (STRING) – The filename of the RecordIO file. This file will be given to the reader.
  • shapes (list) – List of tuples declaring the data shapes.
  • lod_levels (INTS) – The LoD levels of each data.
  • dtypes (list) – List of strings declaring the data types.
  • pass_num (int) – Number of passes to run.
  • for_parallel (Bool) – Set it as True if you are going to run subsequent operators in parallel.
Returns:

The created reader.

Return type:

(ReaderHolder)

Examples

>>> import paddle.fluid as fluid
>>> reader = fluid.layers.io.open_recordio_file(
>>>                               filename='./data.recordio',
>>>                               shapes=[(3,224,224), (1,)],
>>>                               lod_levels=[0, 0],
>>>                               dtypes=['float32', 'int64'])
>>> # Via the reader, we can use 'read_file' layer to get data:
>>> image, label = fluid.layers.io.read_file(reader)

open_files

paddle.fluid.layers.open_files(filenames, shapes, lod_levels, dtypes, thread_num=1, buffer_size=None, pass_num=1, for_parallel=True)

Open files

This layer takes a list of files to read from and returns a Reader Variable. Via the Reader Variable, we can get data from the given files. All files must have name suffixes that indicate their formats, e.g., ‘*.recordio’.

Parameters:
  • filenames (list) – The list of file names.
  • shapes (list) – List of tuples declaring the data shapes.
  • lod_levels (list) – List of ints declaring the data lod_level.
  • dtypes (list) – List of strings declaring the data types.
  • thread_num (int) – The maximal concurrent prefetch thread number.
  • buffer_size (int) – The size of prefetch buffer.
  • pass_num (int) – Number of passes to run.
  • for_parallel (Bool) – Set it as True if you are going to run subsequent operators in parallel.
Returns:

A Reader Variable via which we can get file data.

Return type:

Variable

Examples

reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                            './data2.recordio'],
                                    shapes=[(3,224,224), (1,)],
                                    lod_levels=[0, 0],
                                    dtypes=['float32', 'int64'],
                                    thread_num=2,
                                    buffer_size=2)

# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.io.read_file(reader)

read_file

paddle.fluid.layers.read_file(reader)

Execute the given reader and get data via it.

A reader is also a Variable. It can be a raw reader generated by fluid.layers.open_files() or a decorated one generated by fluid.layers.double_buffer() and so on.

Parameters:reader (Variable) – The reader to execute.
Returns:Data read via the given reader.
Return type:Tuple[Variable]

Examples

data_file = fluid.layers.open_files(
     filenames=['mnist.recordio'],
     shapes=[(-1, 784), (-1, 1)],
     lod_levels=[0, 0],
     dtypes=["float32", "int64"])
data_file = fluid.layers.double_buffer(
    fluid.layers.batch(data_file, batch_size=64))
input, label = fluid.layers.read_file(data_file)

shuffle

paddle.fluid.layers.shuffle(reader, buffer_size)

Shuffle the reader. This is a reader decorator: it buffers buffer_size instances read from the underlying reader and yields them in shuffled order.
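
Examples

A hedged sketch: wrapping an open_files reader with a shuffle buffer of 128 instances.

reader = fluid.layers.io.open_files(
    filenames=['./data1.recordio', './data2.recordio'],
    shapes=[(3, 224, 224), (1,)],
    lod_levels=[0, 0],
    dtypes=['float32', 'int64'])
shuffled_reader = fluid.layers.shuffle(reader=reader, buffer_size=128)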

batch

paddle.fluid.layers.batch(reader, batch_size)

This layer is a reader decorator. It takes a reader and adds ‘batching’ decoration to it. When reading from the resulting decorated reader, the output data will be automatically organized into batches.

Parameters:
  • reader (Variable) – The reader to be decorated with ‘batching’.
  • batch_size (int) – The batch size.
Returns:

The reader which has been decorated with ‘batching’.

Return type:

Variable

Examples

raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                               './data2.recordio'],
                                        shapes=[(3,224,224), (1,)],
                                        lod_levels=[0, 0],
                                        dtypes=['float32', 'int64'],
                                        thread_num=2,
                                        buffer_size=2)
batch_reader = fluid.layers.batch(reader=raw_reader, batch_size=5)

# If we read data with the raw_reader:
#     data = fluid.layers.read_file(raw_reader)
# We can only get data instance by instance.
#
# However, if we read data with the batch_reader:
#     data = fluid.layers.read_file(batch_reader)
# Each 5 adjacent instances will be automatically combined together
# to become a batch. So what we get('data') is a batch data instead
# of an instance.

double_buffer

paddle.fluid.layers.double_buffer(reader, place=None, name=None)

Wrap a reader with a double buffer. The data will be copied to the target place through a double-buffer queue. If the target place is None, the place on which the executor runs will be used.

Parameters:
  • reader (Variable) – The reader variable to be wrapped.
  • place (Place) – The place of the target data. Default is the place on which the executor runs.
  • name (str) – Variable name. None if the user does not care.
Returns:

wrapped reader with double buffer.

Examples

>>> reader = fluid.layers.open_files(filenames=['somefile'],
>>>                                  shapes=[[-1, 784], [-1, 1]],
>>>                                  dtypes=['float32', 'int64'])
>>> reader = fluid.layers.double_buffer(reader)
>>> img, label = fluid.layers.read_file(reader)

random_data_generator

paddle.fluid.layers.random_data_generator(low, high, shapes, lod_levels, for_parallel=True)

Create a uniform random data generator

This layer returns a Reader Variable. Instead of opening a file and reading data from it, this Reader Variable generates uniformly distributed random float data by itself. It can be used as a dummy reader to test a network without opening a real file.

Parameters:
  • low (float) – The lower bound of data’s uniform distribution.
  • high (float) – The upper bound of data’s uniform distribution.
  • shapes (list) – List of tuples declaring the data shapes.
  • lod_levels (list) – List of ints declaring the data lod_level.
  • for_parallel (Bool) – Set it as True if you are going to run subsequent operators in parallel.
Returns:

A Reader Variable from which we can get random data.

Return type:

Variable

Examples

reader = fluid.layers.random_data_generator(
                                 low=0.0,
                                 high=1.0,
                                 shapes=[[3,224,224], [1]],
                                 lod_levels=[0, 0])
# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.read_file(reader)

Preprocessor

class paddle.fluid.layers.Preprocessor(reader, name=None)

A block for data pre-processing in reader.

Parameters:
  • reader (Variable) – A reader variable.
  • name (str, default None) – The name of the reader.

Examples

preprocessor = fluid.layers.io.Preprocessor(reader=reader)
with preprocessor.block():
    img, lbl = preprocessor.inputs()
    img_out = img / 2
    lbl_out = lbl + 1
    preprocessor.outputs(img_out, lbl_out)

data_file = fluid.layers.io.double_buffer(preprocessor())

load

paddle.fluid.layers.load(out, file_path, load_as_fp16=None)

The Load operator loads a tensor variable from a disk file.

>>> import paddle.fluid as fluid
>>> tmp_tensor = fluid.layers.create_tensor(dtype='float32')
>>> fluid.layers.load(tmp_tensor, "./tmp_tensor.bin")
Parameters:
  • out (Variable) – The variable into which the tensor will be loaded.
  • file_path (STRING) – Variable will be loaded from “file_path”.
  • load_as_fp16 (BOOLEAN) – If true, the tensor will be first loaded and then converted to float16 data type. Otherwise, the tensor will be directly loaded without data type conversion. Default is false.
Returns:

None

nn

fc

paddle.fluid.layers.fc(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, use_mkldnn=False, act=None, is_test=False, name=None)

Fully Connected Layer

This function creates a fully connected layer in the network. It can take multiple tensors as its inputs. It creates a variable called weights for each input tensor, which represents a fully connected weight matrix from each input unit to each output unit. The fully connected layer multiplies each input tensor by its corresponding weight to produce an output tensor. If multiple input tensors are given, the results of the multiplications are summed up. If bias_attr is not None, a bias variable will be created and added to the output. Finally, if activation is not None, it will be applied to the output as well.

This process can be formulated as follows:

\[Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})\]

In the above equation:

  • \(N\): Number of the input.
  • \(X_i\): The input tensor.
  • \(W\): The weights created by this layer.
  • \(b\): The bias parameter created by this layer (if needed).
  • \(Act\): The activation function.
  • \(Out\): The output tensor.
Parameters:
  • input (Variable|list of Variable) – The input tensor(s) of this layer, and the dimension of the input tensor(s) is at least 2.
  • size (int) – The number of output units in this layer.
  • num_flatten_dims (int, default 1) – The fc layer can accept an input tensor with more than two dimensions. If this happens, the multidimensional tensor will first be flattened into a 2-dimensional matrix. The parameter num_flatten_dims determines how the input tensor is flattened: the first num_flatten_dims (inclusive, index starts from 1) dimensions will be flattened to form the first dimension of the final matrix (height of the matrix), and the rest rank(X) - num_flatten_dims dimensions are flattened to form the second dimension of the final matrix (width of the matrix). For example, suppose X is a 5-dimensional tensor with a shape [2, 3, 4, 5, 6], and num_flatten_dims = 3. Then, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
  • param_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for learnable parameters/weights of this layer.
  • bias_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for the bias of this layer. If it is set to None, no bias will be added to the output units.
  • act (str, default None) – Activation to be applied to the output of this layer.
  • is_test (bool) – A flag indicating whether execution is in test phase.
  • use_mkldnn (bool) – Use mkldnn kernel or not, it is valid only when the mkldnn library is installed. Default: False
  • name (str, default None) – The name of this layer.
Returns:

The transformation result.

Return type:

Variable

Raises:

ValueError – If rank of the input tensor is less than 2.

Examples

data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=data, size=1000, act="tanh")

embedding

paddle.fluid.layers.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')

Embedding Layer

This layer is used to lookup embeddings of IDs, provided by input, in a lookup table. The result of this lookup is the embedding of each ID in the input.

All the input variables are passed in as local variables to the LayerHelper constructor.

Parameters:
  • input (Variable) – The tensor variable containing the IDs.
  • size (tuple|list) – The shape of the look up table parameter. It should have two elements which indicate the size of the dictionary of embeddings and the size of each embedding vector respectively.
  • is_sparse (bool) – The flag indicating whether to use sparse update.
  • is_distributed (bool) – Whether to run lookup table from remote parameter server.
  • padding_idx (int|long|None) – If None, it has no effect on the lookup. Otherwise, the given padding_idx indicates padding the output with zeros whenever lookup encounters it in the input. If \(padding\_idx < 0\), the padding_idx used in the lookup is \(size[0] + padding\_idx\).
  • param_attr (ParamAttr) – Parameters for this layer
  • dtype (np.dtype|core.VarDesc.VarType|str) – The type of data : float32, float_16, int etc
Returns:

The tensor variable storing the embeddings of the supplied inputs.

Return type:

Variable

Examples

dict_size = len(dataset.ids)
data = fluid.layers.data(name='ids', shape=[32, 32], dtype='int64')
emb = fluid.layers.embedding(input=data, size=[dict_size, 16])

dynamic_lstm

paddle.fluid.layers.dynamic_lstm(input, size, h_0=None, c_0=None, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None)

Long-Short Term Memory (LSTM) Operator.

The default implementation uses the diagonal/peephole connection (https://arxiv.org/pdf/1402.1128.pdf); the formula is as follows:

$$ i_t = \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i) $$

$$ f_t = \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + W_{fc}c_{t-1} + b_f) $$

$$ \tilde{c_t} = act_g(W_{cx}x_t + W_{ch}h_{t-1} + b_c) $$

$$ o_t = \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + W_{oc}c_t + b_o) $$

$$ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c_t} $$

$$ h_t = o_t \odot act_h(c_t) $$

  • W terms denote weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input gate to the input), and \(W_{ic}, W_{fc}, W_{oc}\) are diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.
  • The b terms denote bias vectors (\(b_i\) is the input gate bias vector).
  • \(\sigma\) is the non-linear activation, such as the logistic sigmoid function.
  • \(i, f, o\) and \(c\) are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).
  • \(\odot\) is the element-wise product of the vectors.
  • \(act_g\) and \(act_h\) are the cell input and cell output activation functions; tanh is usually used for them.
  • \(\tilde{c_t}\) is also called the candidate hidden state, which is computed based on the current input and the previous hidden state.

Set use_peepholes False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use fully-connect operator before LSTM operator.

Parameters:
  • input (Variable) – (LoDTensor) the first input is a LodTensor, which support variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch, D is the hidden size
  • size (int) – 4 * hidden size.
  • h_0 (Variable) – The initial hidden state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size and D is the hidden size.
  • c_0 (Variable) – The initial cell state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size. h_0 and c_0 can be NULL but only at the same time.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weights.

    • Weights = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}
    • The shape is (D x 4D), where D is the hidden size.
  • bias_attr (ParamAttr|None) –

    The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.

    1. use_peepholes = False
    • Biases = {\(b_c, b_i, b_f, b_o\)}.
    • The shape is (1 x 4D).
    2. use_peepholes = True
    • Biases = {\(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}.
    • The shape is (1 x 7D).
  • use_peepholes (bool) – (bool, default: True) whether to enable diagonal/peephole connections.
  • is_reverse (bool) – (bool, default: False) whether to compute the reversed LSTM.
  • gate_activation (str) – (string, default: sigmoid) The activation for the input gate, forget gate and output gate; sigmoid by default.
  • cell_activation (str) – (string, default: tanh) The activation for the cell output; tanh by default.
  • candidate_activation (str) – (string, default: tanh) The activation for the candidate hidden state; tanh by default.
  • dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The hidden state and cell state of the LSTM. The shape of both is (T x D), and the LoD is the same as that of the input.

Return type:

tuple

Examples

hidden_dim = 512
forward_proj = fluid.layers.fc(input=input_seq, size=hidden_dim * 4,
                               act=None, bias_attr=None)
forward, _ = fluid.layers.dynamic_lstm(
    input=forward_proj, size=hidden_dim * 4, use_peepholes=False)

dynamic_lstmp

paddle.fluid.layers.dynamic_lstmp(input, size, proj_size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', proj_activation='tanh', dtype='float32', name=None)

Dynamic LSTMP Layer

LSTMP (LSTM with recurrent projection) layer has a separate projection layer after the LSTM layer, projecting the original hidden state to a lower-dimensional one, which is proposed to reduce the number of total parameters and, furthermore, the computational complexity of the LSTM, especially when the size of the output units is relatively large (https://research.google.com/pubs/archive/43905.pdf).

The formula is as follows:

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i)\\f_t & = \sigma(W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f)\\\tilde{c_t} & = act_g(W_{cx}x_t + W_{cr}r_{t-1} + b_c)\\o_t & = \sigma(W_{ox}x_{t} + W_{or}r_{t-1} + W_{oc}c_t + b_o)\\c_t & = f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t & = o_t \odot act_h(c_t)\\r_t & = \overline{act_h}(W_{rh}h_t)\end{aligned}\end{align} \]

In the above formula:

  • \(W\): Denotes weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input gate to the input).
  • \(W_{ic}\), \(W_{fc}\), \(W_{oc}\): Diagonal weight matrices for peephole connections. In our implementation, we use vectors to reprenset these diagonal weight matrices.
  • \(b\): Denotes bias vectors (e.g. \(b_i\) is the input gate bias vector).
  • \(\sigma\): The activation, such as logistic sigmoid function.
  • \(i, f, o\) and \(c\): The input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).
  • \(h\): The hidden state.
  • \(r\): The recurrent projection of the hidden state.
  • \(\tilde{c_t}\): The candidate hidden state, whose computation is based on the current input and previous hidden state.
  • \(\odot\): The element-wise product of the vectors.
  • \(act_g\) and \(act_h\): The cell input and cell output activation functions and tanh is usually used for them.
  • \(\overline{act_h}\): The activation function for the projection output, usually using identity or same as \(act_h\).

Set use_peepholes to False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use fully-connected layer before LSTMP layer.

Parameters:
  • input (Variable) – The input of dynamic_lstmp layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch, D is the hidden size.
  • size (int) – 4 * hidden size.
  • proj_size (int) – The size of projection output.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weight and projection weight.

    • Hidden-hidden weight = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}.
    • The shape of hidden-hidden weight is (P x 4D), where P is the projection size and D the hidden size.
    • Projection weight = {\(W_{rh}\)}.
    • The shape of projection weight is (D x P).
  • bias_attr (ParamAttr|None) –

    The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.

    1. use_peepholes = False
    • Biases = {\(b_c, b_i, b_f, b_o\)}.
    • The shape is (1 x 4D).
    2. use_peepholes = True
    • Biases = { \(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}.
    • The shape is (1 x 7D).
  • use_peepholes (bool) – Whether to enable diagonal/peephole connections, default True.
  • is_reverse (bool) – Whether to compute reversed LSTM, default False.
  • gate_activation (str) – The activation for input gate, forget gate and output gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
  • cell_activation (str) – The activation for cell output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • proj_activation (str) – The activation for projection output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

A tuple of two output variables: the projection of the hidden state and the cell state of the LSTMP. The shape of the projection is (T x P), the shape of the cell state is (T x D), and the LoD of both is the same as that of the input.

Return type:

tuple

Examples

dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1],
                         dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
hidden_dim, proj_dim = 512, 256
fc_out = fluid.layers.fc(input=emb, size=hidden_dim * 4,
                         act=None, bias_attr=None)
proj_out, _ = fluid.layers.dynamic_lstmp(input=fc_out,
                                         size=hidden_dim * 4,
                                         proj_size=proj_dim,
                                         use_peepholes=False,
                                         is_reverse=True,
                                         cell_activation="tanh",
                                         proj_activation="tanh")

dynamic_gru

paddle.fluid.layers.dynamic_gru(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None)

Gated Recurrent Unit (GRU) Layer

Refer to Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling .

The formula is as follows:

\[ \begin{align}\begin{aligned}u_t & = act_g(W_{ux}x_{t} + W_{uh}h_{t-1} + b_u)\\r_t & = act_g(W_{rx}x_{t} + W_{rh}h_{t-1} + b_r)\\\tilde{h_t} & = act_c(W_{cx}x_{t} + W_{ch}(r_t \odot h_{t-1}) + b_c)\\h_t & = (1-u_t) \odot h_{t-1} + u_t \odot \tilde{h_t}\end{aligned}\end{align} \]

The \(\odot\) is the element-wise product of the vectors. \(act_g\) is the update gate and reset gate activation function and \(sigmoid\) is usually used for it. \(act_c\) is the activation function for candidate hidden state and \(tanh\) is usually used for it.

Note that these \(W_{ux}x_{t}, W_{rx}x_{t}, W_{cx}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use fully-connect layer before GRU layer.

Parameters:
  • input (Variable) – The input of dynamic_gru layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape \((T \times 3D)\), where \(T\) is the total time steps in this mini-batch, \(D\) is the hidden size.
  • size (int) – The dimension of the gru cell.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weight matrix. Note:

    • The shape of the weight matrix is \((D \times 3D)\), where \(D\) is the hidden size.
    • All elements in the weight matrix can be divided into two parts. The first part are weights of the update gate and reset gate with shape \((D \times 2D)\), and the second part are weights for candidate hidden state with shape \((D \times D)\).
  • bias_attr (ParamAttr) – The parameter attribute for learnable the hidden-hidden bias.
  • is_reverse (bool) – Whether to compute reversed GRU, default False.
  • gate_activation (str) – The activation for update gate and reset gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
  • candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • h_0 (Variable) – The initial hidden state. If not set, the default is zero. This is a tensor with shape (N x D), where N is the batch size and D is the hidden size.
Returns:

The hidden state of GRU. The shape is \((T \times D)\), and sequence length is the same with the input.

Return type:

Variable

Examples

dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1],
                         dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
hidden_dim = 512
x = fluid.layers.fc(input=emb, size=hidden_dim * 3)
hidden = fluid.layers.dynamic_gru(input=x, size=hidden_dim)

gru_unit

paddle.fluid.layers.gru_unit(input, hidden, size, param_attr=None, bias_attr=None, activation='tanh', gate_activation='sigmoid')

GRU unit layer. The equation of a gru step is:

\[ \begin{align}\begin{aligned}u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)\\r_t & = actGate(xr_{t} + W_r h_{t-1} + b_r)\\m_t & = actNode(xm_t + W_c dot(r_t, h_{t-1}) + b_m)\\h_t & = dot((1-u_t), m_t) + dot(u_t, h_{t-1})\end{aligned}\end{align} \]

The inputs of the GRU unit include \(z_t\) and \(h_{t-1}\). In terms of the equation above, \(z_t\) is split into 3 parts: \(xu_t\), \(xr_t\) and \(xm_t\). This means that in order to implement a full GRU unit operator for an input, a fully connected layer has to be applied first, such that \(z_t = W_{fc}x_t\).

The terms \(u_t\) and \(r_t\) represent the update and reset gates of the GRU cell. Unlike LSTM, GRU has one fewer gate. However, there is an intermediate candidate hidden output, which is denoted by \(m_t\). This layer has three outputs: \(h_t\), \(dot(r_t, h_{t-1})\), and the concatenation of \(u_t\), \(r_t\) and \(m_t\).

Parameters:
  • input (Variable) – The fc transformed input value of current step.
  • hidden (Variable) – The hidden value of lstm unit from previous step.
  • size (integer) – The input dimension value.
  • param_attr (ParamAttr) – The weight parameters for gru unit. Default: None
  • bias_attr (ParamAttr) – The bias parameters for gru unit. Default: None
  • activation (string) – The activation type for cell (actNode). Default: ‘tanh’
  • gate_activation (string) – The activation type for gates (actGate). Default: ‘sigmoid’
Returns:

The hidden value, reset-hidden value and gate values.

Return type:

tuple

Examples

# assuming we have x_t_data and prev_hidden of size=10
x_t = fluid.layers.fc(input=x_t_data, size=30)
hidden_val, r_h_val, gate_val = fluid.layers.gru_unit(
    input=x_t, hidden=prev_hidden, size=30)

linear_chain_crf

paddle.fluid.layers.linear_chain_crf(input, label, param_attr=None)

Linear Chain CRF.

Conditional Random Field defines an undirected probabilistic graph with nodes denoting random variables and edges denoting dependencies between these variables. CRF learns the conditional probability \(P(Y|X)\), where \(X = (x_1, x_2, ... , x_n)\) are structured inputs and \(Y = (y_1, y_2, ... , y_n)\) are labels for the inputs.

Linear chain CRF is a special case of CRF that is useful for sequence labeling task. Sequence labeling tasks do not assume a lot of conditional independences among inputs. The only constraint they impose is that the input and output must be linear sequences. Thus, the graph of such a CRF is a simple chain or a line, which results in the linear chain CRF.

This operator implements the Forward-Backward algorithm for the linear chain CRF. Please refer to http://www.cs.columbia.edu/~mcollins/fb.pdf and http://cseweb.ucsd.edu/~elkan/250Bwinter2012/loglinearCRFs.pdf for details.

Equation:

  1. Denote Input(Emission) to this operator as \(x\) here.
  2. The first D values of Input(Transition) to this operator are for starting weights, denoted as \(a\) here.
  3. The next D values of Input(Transition) of this operator are for ending weights, denoted as \(b\) here.
  4. The remaining values of Input(Transition) are for transition weights, denoted as \(w\) here.
  5. Denote Input(Label) as \(s\) here.

The probability of a sequence \(s\) of length \(L\) is defined as:

\[P(s) = \frac{1}{Z} \exp\left(a_{s_1} + b_{s_L} + \sum_{l=1}^{L} x_{s_l} + \sum_{l=2}^{L} w_{s_{l-1},s_l}\right)\]

where \(Z\) is a normalization value so that the sum of \(P(s)\) over all possible sequences is 1, and \(x\) is the emission feature weight to the linear chain CRF.

Finally, the linear chain CRF operator outputs the logarithm of the conditional likelihood of each training sample in a mini-batch.

NOTE:

  1. The feature function for a CRF is made up of the emission features and the transition features. The emission feature weights are NOT computed in this operator. They MUST be computed first before this operator is called.
  2. Because this operator performs global normalization over all possible sequences internally, it expects UNSCALED emission feature weights. Please do not call this op with the emission feature being output of any nonlinear activation.
  3. The 2nd dimension of Input(Emission) MUST be equal to the tag number.
Parameters:
  • input (Variable) – (LoDTensor, default LoDTensor<float>) A 2-D LoDTensor with shape [N x D], where N is the size of the mini-batch and D is the total tag number. The unscaled emission weight matrix for the linear chain CRF.
  • input – (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The learnable parameter for the linear_chain_crf operator. See more details in the operator’s comments
  • label (Variable) – (LoDTensor, default LoDTensor<int64_t>) A LoDTensor with shape [N x 1], where N is the total element number in a mini-batch. The ground truth
  • param_attr (ParamAttr) – The attribute of the learnable parameter.
Returns:

(Tensor, default Tensor<float>) A 2-D Tensor with shape [N x D]. The exponentials of Input(Emission). This is an intermediate result of the forward computation, and will be reused in the backward computation.

output(Variable): (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The exponentials of Input(Transition). This is an intermediate result of the forward computation, and will be reused in the backward computation.

output(Variable): (Tensor, default Tensor<float>) The logarithm of the conditional likelihood of each training sample in a mini-batch. This is a 2-D tensor with shape [S x 1], where S is the number of sequences in the mini-batch. The output is no longer a LoDTensor.

Return type:

output(Variable)
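
Examples

A minimal, hedged sketch of a typical sequence-labeling setup; the names word, label, dict_dim, emb_dim and label_dict_len below are illustrative placeholders, not part of the API.

word = fluid.layers.data(name='word', shape=[1], dtype='int64', lod_level=1)
label = fluid.layers.data(name='label', shape=[1], dtype='int64', lod_level=1)
emb = fluid.layers.embedding(input=word, size=[dict_dim, emb_dim])
# Emission weights must be unscaled, so no nonlinear activation is applied here.
emission = fluid.layers.fc(input=emb, size=label_dict_len, act=None)
crf_cost = fluid.layers.linear_chain_crf(
    input=emission, label=label, param_attr=fluid.ParamAttr(name="crfw"))
avg_cost = fluid.layers.mean(crf_cost)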

crf_decoding

paddle.fluid.layers.crf_decoding(input, param_attr, label=None)

The crf_decoding operator reads the emission feature weights and the transition feature weights learned by the linear_chain_crf operator. It implements the Viterbi algorithm which is a dynamic programming algorithm for finding the most likely sequence of hidden states, called the Viterbi path, that results in a sequence of observed tags.

The output of this operator changes according to whether Input(Label) is given:

  1. Input(Label) is given: This happens in training. This operator is used to work together with the chunk_eval operator. When Input(Label) is given, the crf_decoding operator returns a row vector with shape [N x 1] whose values are either 0, indicating an incorrect prediction, or 1, indicating a correct prediction. Such an output is the input to the chunk_eval operator.
  2. Input(Label) is not given: This is the standard decoding process.

The crf_decoding operator returns a row vector with shape [N x 1] whose values range from 0 to the maximum tag number - 1; each element indicates the index of a predicted tag.

Parameters:
  • input (Variable) – (LoDTensor, default: LoDTensor<float>). A LoDTensor with shape [N x D] where N is the size of the mini-batch and D is the total tag number. This input is the unscaled emission weight matrix of the linear_chain_crf operator
  • param_attr (ParamAttr) – The parameter attribute for training.
  • label (Variable) – (LoDTensor, LoDTensor<int64_t>). The ground truth with shape [N x 1]. This input is optional. See more details in the operator’s comments
Returns:

(LoDTensor, LoDTensor<int64_t>). The decoding results. What to return changes depending on whether the Input(Label) (the ground truth) is given. See more details in the operator’s comment

Return type:

Variable

Examples

crf_decode = layers.crf_decoding(
     input=hidden, param_attr=ParamAttr(name="crfw"))

cos_sim

paddle.fluid.layers.cos_sim(X, Y)

Cosine Similarity Operator

\(Out = \frac{X^T * Y}{(\sqrt{X^T * X} * \sqrt{Y^T * Y})}\)

The input X and Y must have the same shape, except that the 1st dimension of input Y could be just 1 (different from input X), which will be broadcasted to match the shape of input X before computing their cosine similarity.

Both the input X and Y can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input X.

Parameters:
  • X (Variable) – The 1st input of cos_sim op.
  • Y (Variable) – The 2nd input of cos_sim op.
Returns:

the output of cosine(X, Y).

Return type:

Variable
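
Examples

A minimal sketch, assuming two float32 inputs with the same feature size (the names and shapes are illustrative):

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10], dtype='float32')
# out holds the row-wise cosine similarity between x and y
out = fluid.layers.cos_sim(X=x, Y=y)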

cross_entropy

paddle.fluid.layers.cross_entropy(input, label, soft_label=False)

Cross Entropy Layer

This layer computes the cross entropy between input and label. It supports both standard cross-entropy and soft-label cross-entropy loss computation.

  1. One-hot cross-entropy:

    soft_label = False, Label[i, 0] indicates the class index for sample i:

    \[Y[i] = -\log(X[i, Label[i]])\]
  2. Soft-label cross-entropy:

    soft_label = True, Label[i, j] indicates the soft label of class j for sample i:

    \[Y[i] = \sum_j{-Label[i, j] * log(X[i, j])}\]

    Please make sure that in this case the summation of each row of label equals one.

  3. One-hot cross-entropy with vectorized label:

    As a special case of 2), when each row of ‘label’ has only one non-zero element which is equal to 1, soft-label cross-entropy degenerates to a one-hot cross-entropy with one-hot label representation.
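
To make the two formulas concrete, the following NumPy sketch (not part of the fluid API) reproduces both cases for a single sample with D = 3 classes:

import numpy as np

x = np.array([0.1, 0.2, 0.7])          # probabilities for one sample (e.g. softmax output)
# One-hot case: the label is the class index 2, so Y = -log(X[2])
hard_loss = -np.log(x[2])              # ~0.357
# Soft-label case: the label row sums to one
soft_label = np.array([0.0, 0.3, 0.7])
soft_loss = -np.sum(soft_label * np.log(x))   # 0.3*(-log 0.2) + 0.7*(-log 0.7)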

Parameters:
  • input (Variable|list) – a 2-D tensor with shape [N x D], where N is the batch size and D is the number of classes. This input is a probability computed by the previous operator, which is almost always the result of a softmax operator.
  • label (Variable|list) – the ground truth which is a 2-D tensor. When soft_label is set to False, label is a tensor<int64> with shape [N x 1]. When soft_label is set to True, label is a tensor<float/double> with shape [N x D].
  • soft_label (bool) – a flag indicating whether to interpret the given labels as soft labels, default False.
Returns:

A 2-D tensor with shape [N x 1], the cross entropy loss.

Raises:

ValueError – 1) the 1st dimensions of input and label are not equal; 2) when soft_label == True and the 2nd dimensions of input and label are not equal; 3) when soft_label == False and the 2nd dimension of label is not 1.

Examples

predict = fluid.layers.fc(input=net, size=classdim, act='softmax')
cost = fluid.layers.cross_entropy(input=predict, label=label)

square_error_cost

paddle.fluid.layers.square_error_cost(input, label)

Square error cost layer

This layer accepts input predictions and target label and returns the squared error cost.

For predictions, \(X\), and target labels, \(Y\), the equation is:

\[Out = (X - Y)^2\]

In the above equation:

  • \(X\): Input predictions, a tensor.
  • \(Y\): Input labels, a tensor.
  • \(Out\): Output value, same shape with \(X\).
Parameters:
  • input (Variable) – Input tensor, has predictions.
  • label (Variable) – Label tensor, has target labels.
Returns:

The tensor variable storing the element-wise squared error difference of input and label.

Return type:

Variable

Examples

y = layers.data(name='y', shape=[1], dtype='float32')
y_predict = layers.data(name='y_predict', shape=[1], dtype='float32')
cost = layers.square_error_cost(input=y_predict, label=y)

chunk_eval

paddle.fluid.layers.chunk_eval(input, label, chunk_scheme, num_chunk_types, excluded_chunk_types=None)

Chunk Evaluator

This function computes and outputs the precision, recall and F1-score of chunk detection.

For some basics of chunking, please refer to Chunking with Support Vector Machines (https://aclanthology.info/pdf/N/N01/N01-1025.pdf).

ChunkEvalOp computes the precision, recall, and F1-score of chunk detection, and supports IOB, IOE, IOBES and IO (also known as plain) tagging schemes. Here is a NER example of labeling for these tagging schemes:

====== ====== ======  =====  ==  ============   =====  ===== =====  ==  =========
       Li     Ming    works  at  Agricultural   Bank   of    China  in  Beijing.
====== ====== ======  =====  ==  ============   =====  ===== =====  ==  =========
IO     I-PER  I-PER   O      O   I-ORG          I-ORG  I-ORG I-ORG  O   I-LOC
IOB    B-PER  I-PER   O      O   B-ORG          I-ORG  I-ORG I-ORG  O   B-LOC
IOE    I-PER  E-PER   O      O   I-ORG          I-ORG  I-ORG E-ORG  O   E-LOC
IOBES  B-PER  E-PER   O      O   I-ORG          I-ORG  I-ORG E-ORG  O   S-LOC
====== ====== ======  =====  ==  ============   =====  ===== =====  ==  =========

There are three chunk types (named entity types) including PER (person), ORG (organization) and LOC (location), and we can see that the labels have the form <tag type>-<chunk type>.

Since the calculations actually use label ids rather than labels, extra attention should be paid when mapping labels to ids to make the chunk_eval operator work correctly. The key point is that the label ids must satisfy the following equations:

tag_type = label % num_tag_type
chunk_type = label / num_tag_type

where num_tag_type is the number of tag types in the tagging scheme, num_chunk_type is the number of chunk types, and tag_type gets its value from the following table.

Scheme Begin Inside End   Single
 plain   0     -      -     -
 IOB     0     1      -     -
 IOE     -     0      1     -
 IOBES   0     1      2     3

Still use NER as example, assuming the tagging scheme is IOB while chunk types are ORG, PER and LOC. To satisfy the above equations, the label map can be like this:

B-ORG  0
I-ORG  1
B-PER  2
I-PER  3
B-LOC  4
I-LOC  5
O      6

It is not hard to verify the equations, noting that the number of chunk types is 3 and the number of tag types in the IOB scheme is 2. For example, the label id of I-LOC is 5, the tag type id of I-LOC is 1, and the chunk type id of I-LOC is 2, which is consistent with the results from the equations.
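
The following plain-Python sketch (illustrative, not part of the API) checks the label map above against the two equations:

num_tag_types = 2       # IOB scheme has two tag types: Begin and Inside
label_map = {'B-ORG': 0, 'I-ORG': 1, 'B-PER': 2, 'I-PER': 3,
             'B-LOC': 4, 'I-LOC': 5, 'O': 6}

label = label_map['I-LOC']             # 5
tag_type = label % num_tag_types       # 1, i.e. Inside
chunk_type = label // num_tag_types    # 2, i.e. LOC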

Parameters:
  • input (Variable) – prediction output of the network.
  • label (Variable) – label of the test data set.
  • chunk_scheme (str) – The labeling scheme indicating how to encode the chunks. Must be IOB, IOE, IOBES or plain. See the description above for details.
  • num_chunk_types (int) – The number of chunk types. See the description above for details.
  • excluded_chunk_types (list) – A list of chunk type ids indicating chunk types that should not be counted. See the description above for details.
Returns:

tuple containing: precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks

Return type:

tuple

Examples

crf = fluid.layers.linear_chain_crf(
    input=hidden, label=label, param_attr=ParamAttr(name="crfw"))
crf_decode = fluid.layers.crf_decoding(
    input=hidden, param_attr=ParamAttr(name="crfw"))
fluid.layers.chunk_eval(
    input=crf_decode,
    label=label,
    chunk_scheme="IOB",
    num_chunk_types=(label_dict_len - 1) / 2)

sequence_conv

paddle.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=None, bias_attr=None, param_attr=None, act=None)

This function creates the op for sequence_conv, applying a convolution over the input sequence with the filter and stride configurations given in the parameters.

Parameters:
  • input (Variable) – (LoDTensor) the input(X) is a LodTensor, which supports variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T, N), where T is the total time steps in this mini-batch and N is the input_hidden_size
  • num_filters (int) – number of filters.
  • filter_size (int) – the filter size (H and W).
  • filter_stride (int) – stride of the filter.
  • padding (bool) – if True, add paddings.
  • bias_attr (ParamAttr|None) – attributes for bias
  • param_attr (ParamAttr|None) – attributes for parameter
  • act (str) – the activation type
Returns:

output of sequence_conv

Return type:

Variable
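
Examples

A minimal sketch, assuming a 1-level LoDTensor of per-step features (the name and sizes are illustrative):

seq = fluid.layers.data(name='seq', shape=[128], dtype='float32', lod_level=1)
conv_out = fluid.layers.sequence_conv(input=seq, num_filters=64,
                                      filter_size=3, act='relu')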

conv2d

paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, use_mkldnn=False, act=None, name=None)

The convolution2D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input and Output are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. Filter is in MCHW format, where M is the number of output image channels, C is the number of input image channels, H is the height of the filter, and W is the width of the filter. If groups is greater than 1, C will equal the number of input image channels divided by the groups. Please refer to UFLDL’s convolution for more details. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

Where:

  • \(X\): Input value, a tensor with NCHW format.
  • \(W\): Filter value, a tensor with MCHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{out}, C_{in}, H_f, W_f)\)

  • Output:

    Output shape: \((N, C_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, H, W] format.
  • num_filters (int) – The number of filter. It is as same as the output image channel.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv2d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr) – The parameters to the Conv2d Layer. Default: None
  • bias_attr (ParamAttr) – Bias parameter for the Conv2d layer. Default: None
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • use_mkldnn (bool) – Use mkldnn kernels or not, it is valid only when compiled with mkldnn library. Default: False
  • act (str) – Activation type. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable storing the convolution and non-linearity activation result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
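
Applying the output-size formula above to this example with the defaults stride = 1, padding = 0 and dilation = 1 gives \(H_{out} = (32 + 0 - (1 * (3 - 1) + 1)) / 1 + 1 = 30\) and likewise \(W_{out} = 30\), so conv2d has shape \((N, 2, 30, 30)\).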

conv3d

paddle.fluid.layers.conv3d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, use_mkldnn=False, act=None, name=None)

Convolution3D Layer

The convolution3D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input(Input) and Output(Output) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. Convolution3D is similar to Convolution2D but adds one dimension (depth). If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

  • \(X\): Input value, a tensor with NCDHW format.
  • \(W\): Filter value, a tensor with MCDHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{out}, C_{in}, D_f, H_f, W_f)\)

  • Output: Output shape: \((N, C_{out}, D_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}D_{out}&= \frac{(D_{in} + 2 * paddings[0] - (dilations[0] * (D_f - 1) + 1))}{strides[0]} + 1 \\ H_{out}&= \frac{(H_{in} + 2 * paddings[1] - (dilations[1] * (H_f - 1) + 1))}{strides[1]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[2] - (dilations[2] * (W_f - 1) + 1))}{strides[2]} + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, D, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the output image channel.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain three integers, (filter_size_D, filter_size_H, filter_size_W). Otherwise, the filter will be a cube.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain three integers, (stride_D, stride_H, stride_W). Otherwise, the stride_D = stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain three integers, (padding_D, padding_H, padding_W). Otherwise, the padding_D = padding_H = padding_W = padding. Default: padding = 0.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the dilation_D = dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv3d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr) – The parameters to the Conv3d Layer. Default: None
  • bias_attr (ParamAttr) – Bias parameter for the Conv3d layer. Default: None
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • use_mkldnn (bool) – Use mkldnn kernels or not.
  • act (str) – Activation type. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable storing the convolution and non-linearity activation result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d = fluid.layers.conv3d(input=data, num_filters=2, filter_size=3, act="relu")

sequence_pool

paddle.fluid.layers.sequence_pool(input, pool_type)

This function adds the operator for sequence pooling. It pools features of all time-steps of each instance, and is applied on top of the input using the pool_type mentioned in the parameters.

It supports four pool_type:

  • average: \(Out[i] = \frac{\sum_i X_i}{N}\)
  • sum: \(Out[i] = \sum_jX_{ij}\)
  • sqrt: \(Out[i] = \frac{\sum_jX_{ij}}{\sqrt{len(X_i)}}\)
  • max: \(Out[i] = max(X_i)\)
x is a 1-level LoDTensor:
  x.lod = [[2, 3, 2]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) == out.dims[0]

for different pool_type:
  average: out.data = [2, 4, 3], where 2=(1+3)/2, 4=(2+4+6)/3, 3=(5+1)/2
  sum    : out.data = [4, 12, 6], where 4=1+3, 12=2+4+6, 6=5+1
  sqrt   : out.data = [2.82, 6.93, 4.24], where 2.82=(1+3)/sqrt(2),
             6.93=(2+4+6)/sqrt(3), 4.24=(5+1)/sqrt(2)
  max    : out.data = [3, 6, 5], where 3=max(1,3), 6=max(2,4,6), 5=max(5,1)
  last   : out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
  first  : out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)
Parameters:
  • input (variable) – The input variable which is a LoDTensor.
  • pool_type (string) – The pooling type of sequence_pool. It supports average, sum, sqrt and max.
Returns:

The sequence pooling variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
avg_x = fluid.layers.sequence_pool(input=x, pool_type='average')
sum_x = fluid.layers.sequence_pool(input=x, pool_type='sum')
sqrt_x = fluid.layers.sequence_pool(input=x, pool_type='sqrt')
max_x = fluid.layers.sequence_pool(input=x, pool_type='max')
last_x = fluid.layers.sequence_pool(input=x, pool_type='last')
first_x = fluid.layers.sequence_pool(input=x, pool_type='first')

sequence_softmax

paddle.fluid.layers.sequence_softmax(input, param_attr=None, bias_attr=None, use_cudnn=True)

This function computes the softmax activation among all time-steps for each sequence. The dimension of each time-step should be 1. Thus, the shape of input Tensor can be either \([N, 1]\) or \([N]\), where \(N\) is the sum of the length of all sequences.

For i-th sequence in a mini-batch:

\[Out(X[lod[i]:lod[i+1]], :) = \frac{\exp(X[lod[i]:lod[i+1], :])}{\sum(\exp(X[lod[i]:lod[i+1], :]))}\]

For example, for a mini-batch of 3 sequences with variable-length, each containing 2, 3, 2 time-steps, the lod of which is [0, 2, 5, 7], then softmax will be computed among \(X[0:2, :]\), \(X[2:5, :]\), \(X[5:7, :]\), and \(N\) turns out to be 7.

Parameters:
  • input (Variable) – The input variable which is a LoDTensor.
  • bias_attr (ParamAttr|None) – attributes for bias
  • param_attr (ParamAttr|None) – attributes for parameter
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
Returns:

output of sequence_softmax

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_sequence_softmax = fluid.layers.sequence_softmax(input=x)

softmax

paddle.fluid.layers.softmax(input, param_attr=None, bias_attr=None, use_cudnn=True, name=None)

The input of the softmax layer is a 2-D tensor with shape N x K (N is the batch_size, K is the dimension of input feature). The output tensor has the same shape as the input tensor.

For each row of the input tensor, the softmax operator squashes the K-dimensional vector of arbitrary real values to a K-dimensional vector of real values in the range [0, 1] that add up to 1.

It computes the exponential of each element of the K-dimensional input vector and the sum of these exponentials. The output of the softmax operator is the ratio of each element’s exponential to that sum.

For each row \(i\) and each column \(j\) in Input(X), we have:

\[Out[i, j] = \frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\]
Parameters:
  • input (Variable) – The input variable.
  • bias_attr (ParamAttr) – attributes for bias
  • param_attr (ParamAttr) – attributes for parameter
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed.
Returns:

output of softmax

Return type:

Variable

Examples

fc = fluid.layers.fc(input=x, size=10)
softmax = fluid.layers.softmax(input=fc)
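
A small NumPy sketch (not the fluid kernel) of the row-wise computation described above:

import numpy as np

def row_softmax(x):
    # subtract the row-wise max for numerical stability; the result is unchanged
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
print(row_softmax(x))   # every row is non-negative and sums to 1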

pool2d

paddle.fluid.layers.pool2d(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, use_mkldnn=False, name=None)

The pooling2d operation calculates the output based on the input, pooling_type and the ksize, strides, paddings parameters. Input(X) and output(Out) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The parameters ksize, strides and paddings each contain two elements, which represent height and width, respectively. The input(X) size and output(Out) size may be different.

Example:

Input:

X shape: \((N, C, H_{in}, W_{in})\)

Output:

Out shape: \((N, C, H_{out}, W_{out})\)

For ceil_mode = false:

\[\begin{split}H_{out} &= \frac{(H_{in} - ksize[0] + 2 * paddings[0])}{strides[0]} + 1 \\ W_{out} &= \frac{(W_{in} - ksize[1] + 2 * paddings[1])}{strides[1]} + 1\end{split}\]

For ceil_mode = true:

\[\begin{split}H_{out} &= \frac{(H_{in} - ksize[0] + 2 * paddings[0] + strides[0] - 1)}{strides[0]} + 1 \\ W_{out} &= \frac{(W_{in} - ksize[1] + 2 * paddings[1] + strides[1] - 1)}{strides[1]} + 1\end{split}\]

Parameters:
  • input (Variable) – The input tensor of pooling operator. The format of input tensor is NCHW, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature.
  • pool_size (int) – The side length of pooling windows. All pooling windows are squares with pool_size on a side.
  • pool_type – (string), pooling type, can be “max” for max-pooling and “avg” for average-pooling
  • pool_stride (int) – stride of the pooling layer.
  • pool_padding (int) – padding size.
  • global_pooling (bool) – Whether to use global pooling. If global_pooling = true, ksize and paddings will be ignored. Default: False
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • ceil_mode (bool) – Whether to use the ceil function to calculate output height and width. If it is set to False, the floor function will be used. Default: False
  • use_mkldnn (bool) – Use mkldnn kernels or not, it is valid only when compiled with mkldnn library. Default: False
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The pooling result.

Return type:

Variable

Raises:
  • ValueError – If ‘pool_type’ is not “max” nor “avg”
  • ValueError – If ‘global_pooling’ is False and ‘pool_size’ is -1
  • ValueError – If ‘use_cudnn’ is not a bool value.

Examples

data = fluid.layers.data(
    name='data', shape=[3, 32, 32], dtype='float32')
pool2d = fluid.layers.pool2d(
                  input=data,
                  pool_size=2,
                  pool_type='max',
                  pool_stride=1,
                  global_pooling=False)
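
Applying the ceil_mode = false formula above to this example (pool_size = 2, pool_stride = 1, pool_padding = 0) gives \(H_{out} = (32 - 2 + 0) / 1 + 1 = 31\) and likewise \(W_{out} = 31\), so the pooled output has shape \((N, 3, 31, 31)\).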

pool3d

paddle.fluid.layers.pool3d(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, use_mkldnn=False, name=None)

This function adds the operator for pooling in 3-dimensions, using the pooling configurations mentioned in input parameters.

Parameters:
  • input (Variable) – The input tensor of the pooling operator. The format of the input tensor is NCDHW, where N is batch size, C is the number of channels, and D, H and W are the depth, height and width of the feature.
  • pool_size (int) – The side length of the pooling windows.
  • pool_type (str) – The pooling type, can be “max” for max-pooling and “avg” for average-pooling.
  • pool_stride (int) – stride of the pooling layer.
  • pool_padding (int) – padding size.
  • global_pooling (bool) – Whether to use global pooling. If global_pooling = true, pool_size and pool_padding will be ignored.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed.
  • ceil_mode (bool) – Whether to use the ceil function to calculate output depth, height and width.
  • use_mkldnn (bool) – Use mkldnn kernels or not.
  • name (str) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

output of pool3d layer.

Return type:

Variable
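
Examples

A minimal sketch, mirroring the pool2d example above (the name and shape are illustrative):

data = fluid.layers.data(
    name='data', shape=[3, 12, 32, 32], dtype='float32')
pool3d = fluid.layers.pool3d(
    input=data,
    pool_size=2,
    pool_type='max',
    pool_stride=1,
    global_pooling=False)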

batch_norm

paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, use_mkldnn=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False)

Batch Normalization Layer

Can be used as a normalizer function for conv2d and fully_connected operations. The required data format for this layer is one of the following:

  1. NHWC [batch, in_height, in_width, in_channels]
  2. NCHW [batch, in_channels, in_height, in_width]

Refer to Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift for more details.

\(input\) is the input features over a mini-batch.

\[\begin{split}\mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i \qquad &//\ \ mini-batch\ mean \\ \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \ \mu_{\beta})^2 \qquad &//\ mini-batch\ variance \\ \hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \qquad &//\ normalize \\ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{split}\]
Parameters:
  • input (variable) – The input variable which is a LoDTensor.
  • act (string, Default None) – Activation type, linear|relu|prelu|...
  • is_test (bool, Default False) – Used for training or testing.
  • momentum (float, Default 0.9) –
  • epsilon (float, Default 1e-05) –
  • param_attr (ParamAttr) – The parameter attribute for Parameter scale.
  • bias_attr (ParamAttr) – The parameter attribute for Parameter bias.
  • data_layout (string, default NCHW) – NCHW|NHWC
  • in_place (bool, Default False) – Make the input and output of batch norm reuse memory.
  • use_mkldnn (bool, Default False) – Use mkldnn kernels or not, it is valid only when compiled with mkldnn library.
  • name (string, Default None) – A name for this layer(optional). If set None, the layer will be named automatically.
  • moving_mean_name (string, Default None) – The name of moving_mean which store the global Mean.
  • moving_variance_name (string, Default None) – The name of the moving_variance which store the global Variance.
  • do_model_average_for_mean_and_var (bool, Default False) – Do model average for mean and variance or not.
Returns:

A tensor variable which is the result after applying batch normalization on the input.

Return type:

Variable

Examples

hidden1 = fluid.layers.fc(input=x, size=200, param_attr='fc1.w')
hidden2 = fluid.layers.batch_norm(input=hidden1)
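
A NumPy sketch (not the fluid kernel) of the training-time formula above, for a 2-D input whose statistics are taken per feature over the mini-batch; gamma and beta stand in for the learnable scale and shift:

import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # mini-batch mean
    var = x.var(axis=0)                    # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # scale and shift

x = np.random.rand(4, 3).astype('float32')
y = batch_norm_train(x, gamma=np.ones(3), beta=np.zeros(3))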

beam_search_decode

paddle.fluid.layers.beam_search_decode(ids, scores, name=None)

Beam Search Decode

This layer packs the output of the beam search layer into sentences with their associated scores. It is usually called after the beam search layer. Typically, the output of the beam search layer is a tensor of selected ids, together with a tensor of the score of each id. The beam search layer’s output ids, however, are generated directly during the tree search and are stacked by each level of the search tree. Thus we need to reorganize them into sentences based on the score of each id. This layer takes the output of the beam search layer as input and repacks it into sentences.

Parameters:
  • ids (Variable) – The selected ids, output of beam search layer.
  • scores (Variable) – The associated scores of the ids, output of beam search layer.
  • name (str) – The name of this layer. It is optional.
Returns:

a tuple of two output tensors: sentence_ids, sentence_scores. sentence_ids is a tensor with shape [size, length], where size is the beam size of beam search, and length is the length of each sentence. Note that the length of sentences may vary. sentence_scores is a tensor with the same shape as sentence_ids.

Return type:

tuple(Variable)

Examples

ids, scores = fluid.layers.beam_search(
    pre_ids, ids, scores, beam_size, end_id)
sentence_ids, sentence_scores = fluid.layers.beam_search_decode(
    ids, scores)

conv2d_transpose

paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

Convolution2D transpose layer

The convolution2D transpose layer calculates the output based on the input, filter, and the dilations, strides, paddings parameters. Input(Input) and output(Output) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The parameters dilations, strides and paddings each contain two elements, which represent height and width, respectively. For details of the convolution transpose layer, please refer to the following explanation and the references therein. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

Where:

  • \(X\): Input value, a tensor with NCHW format.
  • \(W\): Filter value, a tensor with MCHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{in}, C_{out}, H_f, W_f)\)

  • Output:

    Output shape: \((N, C_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}H_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\ W_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, H, W] format.
  • num_filters (int) – The number of the filter. It is as same as the output image channel.
  • output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain two integers, (image_H, image_W). This parameter only works when filter_size is None.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square. None if use output size to calculate filter_size.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv2d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr) – The parameters to the Conv2d_transpose Layer. Default: None
  • bias_attr (ParamAttr) – Bias parameter for the Conv2d layer. Default: None
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • act (str) – Activation type. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable storing the convolution transpose result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)
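
Applying the output-size formula above to this example with the defaults stride = 1, padding = 0 and dilation = 1 gives \(H_{out} = (32 - 1) * 1 - 0 + 1 * (3 - 1) + 1 = 34\) and likewise \(W_{out} = 34\), so conv2d_transpose has shape \((N, 2, 34, 34)\).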

conv3d_transpose

paddle.fluid.layers.conv3d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

Convolution3D transpose layer

The convolution3D transpose layer calculates the output based on the input, filter, and the dilations, strides, paddings parameters. Input(Input) and output(Output) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. The parameters dilations, strides and paddings each contain three elements, which represent depth, height and width, respectively. For details of the convolution transpose layer, please refer to the following explanation and the references therein. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

  • \(X\): Input value, a tensor with NCDHW format.
  • \(W\): Filter value, a tensor with MCDHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{in}, C_{out}, D_f, H_f, W_f)\)

  • Output:

    Output shape: \((N, C_{out}, D_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}D_{out} &= (D_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (D_f - 1) + 1 \\ H_{out} &= (H_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (H_f - 1) + 1 \\ W_{out} &= (W_{in} - 1) * strides[2] - 2 * paddings[2] + dilations[2] * (W_f - 1) + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, D, H, W] format.
  • num_filters (int) – The number of the filter. It is as same as the output image channel.
  • output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain three integers, (image_D, image_H, image_W). This parameter only works when filter_size is None.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain three integers, (filter_size_D, filter_size_H, filter_size_W). Otherwise, the filter will be a square. None if use output size to calculate filter_size.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain three integers, (padding_D, padding_H, padding_W). Otherwise, the padding_D = padding_H = padding_W = padding. Default: padding = 0.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain three integers, (stride_D, stride_H, stride_W). Otherwise, the stride_D = stride_H = stride_W = stride. Default: stride = 1.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the dilation_D = dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv3d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr) – The parameters to the Conv3d_transpose Layer. Default: None
  • bias_attr (ParamAttr) – Bias parameter for the Conv3d layer. Default: None
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • act (str) – Activation type. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable storing the convolution transpose result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d_transpose = fluid.layers.conv3d_transpose(input=data, num_filters=2, filter_size=3)

sequence_expand

paddle.fluid.layers.sequence_expand(x, y, ref_level=-1, name=None)

Sequence Expand Layer. This layer will expand the input variable x according to specified level lod of y. Please note that lod level of x is at most 1 and rank of x is at least 2. When rank of x is greater than 2, then it would be viewed as a 2-D tensor. Following examples will explain how sequence_expand works:

* Case 1
    x is a LoDTensor:
        x.lod  = [[2,        2]]
        x.data = [[a], [b], [c], [d]]
        x.dims = [4, 1]

    y is a LoDTensor:
        y.lod = [[2,    2],
                 [3, 3, 1, 1]]

    ref_level: 0

    then output is a 1-level LoDTensor:
        out.lod =  [[2,        2,        2,        2]]
        out.data = [[a], [b], [a], [b], [c], [d], [c], [d]]
        out.dims = [8, 1]

* Case 2
    x is a Tensor:
        x.data = [[a], [b], [c]]
        x.dims = [3, 1]

    y is a LoDTensor:
        y.lod = [[2, 0, 3]]

    ref_level: -1

    then output is a Tensor:
        out.data = [[a], [a], [c], [c], [c]]
        out.dims = [5, 1]
Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a LoDTensor.
  • ref_level (int) – Lod level of y to be referred by x. If set to -1, refer the last level of lod.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The expanded variable which is a LoDTensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
                 dtype='float32', lod_level=1)
out = fluid.layers.sequence_expand(x=x, y=y, ref_level=0)

lstm_unit

paddle.fluid.layers.lstm_unit(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)

Lstm unit layer. The equation of a lstm step is:

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t & = f_tc_{t-1} + i_t tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t & = o_t tanh(c_t)\end{aligned}\end{align} \]

The inputs of lstm unit include \(x_t\), \(h_{t-1}\) and \(c_{t-1}\). The 2nd dimensions of \(h_{t-1}\) and \(c_{t-1}\) should be the same. The implementation separates the linear transformation from the non-linear transformation. Here, we take \(i_t\) as an example. The linear transformation is applied by calling a fc layer and the equation is:

\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]

The non-linear transformation is applied by calling lstm_unit_op and the equation is:

\[i_t = \sigma(L_{i_t})\]

This layer has two outputs: \(h_t\) and \(c_t\).

Parameters:
  • x_t (Variable) – The input value of current step, a 2-D tensor with shape M x N, M for batch size and N for input size.
  • hidden_t_prev (Variable) – The hidden value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
  • cell_t_prev (Variable) – The cell value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
  • forget_bias (float) – The forget bias of lstm unit.
  • param_attr (ParamAttr) – The attributes of parameter weights, used to set initializer, name etc.
  • bias_attr (ParamAttr) – The attributes of bias weights, if not False, bias weights will be created and be set to default value.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The hidden value and cell value of lstm unit.

Return type:

tuple

Raises:

ValueError – The ranks of x_t, hidden_t_prev and cell_t_prev are not 2, or the 1st dimensions of x_t, hidden_t_prev and cell_t_prev are not the same, or the 2nd dimensions of hidden_t_prev and cell_t_prev are not the same.

Examples

x_t = fluid.layers.fc(input=x_t_data, size=10)
prev_hidden = fluid.layers.fc(input=prev_hidden_data, size=30)
prev_cell = fluid.layers.fc(input=prev_cell_data, size=30)
hidden_value, cell_value = fluid.layers.lstm_unit(x_t=x_t,
                                       hidden_t_prev=prev_hidden,
                                       cell_t_prev=prev_cell)

reduce_sum

paddle.fluid.layers.reduce_sum(input, dim=None, keep_dim=False, name=None)

Computes the sum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimensions along which the sum is performed. If None, sum all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x)  # [3.5]
fluid.layers.reduce_sum(x, dim=0)  # [0.3, 0.5, 1.1, 1.6]
fluid.layers.reduce_sum(x, dim=-1)  # [1.9, 1.6]
fluid.layers.reduce_sum(x, dim=1, keep_dim=True)  # [[1.9], [1.6]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1, 2], [3, 4]],
#      [[5, 6], [7, 8]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x, dim=[1, 2]) # [10, 26]
fluid.layers.reduce_sum(x, dim=[0, 1]) # [16, 20]

reduce_mean

paddle.fluid.layers.reduce_mean(input, dim=None, keep_dim=False, name=None)

Computes the mean of the input tensor’s elements along the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimension along which the mean is computed. If None, compute the mean over all elements of input and return a variable with a single element, otherwise it must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank(input) + dim[i]\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced mean Variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x)  # [0.4375]
fluid.layers.reduce_mean(x, dim=0)  # [0.15, 0.25, 0.55, 0.8]
fluid.layers.reduce_mean(x, dim=-1)  # [0.475, 0.4]
fluid.layers.reduce_mean(
    x, dim=1, keep_dim=True)  # [[0.475], [0.4]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x, dim=[1, 2]) # [2.5, 6.5]
fluid.layers.reduce_mean(x, dim=[0, 1]) # [4.0, 5.0]

reduce_max

paddle.fluid.layers.reduce_max(input, dim=None, keep_dim=False, name=None)

Computes the maximum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimension along which the maximum is computed. If None, compute the maximum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x)  # [0.9]
fluid.layers.reduce_max(x, dim=0)  # [0.2, 0.3, 0.6, 0.9]
fluid.layers.reduce_max(x, dim=-1)  # [0.9, 0.7]
fluid.layers.reduce_max(x, dim=1, keep_dim=True)  # [[0.9], [0.7]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x, dim=[1, 2]) # [4.0, 8.0]
fluid.layers.reduce_max(x, dim=[0, 1]) # [7.0, 8.0]

reduce_min

paddle.fluid.layers.reduce_min(input, dim=None, keep_dim=False, name=None)

Computes the minimum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimensions along which the minimum is computed. If None, compute the minimum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x)  # [0.1]
fluid.layers.reduce_min(x, dim=0)  # [0.1, 0.2, 0.5, 0.7]
fluid.layers.reduce_min(x, dim=-1)  # [0.2, 0.1]
fluid.layers.reduce_min(x, dim=1, keep_dim=True)  # [[0.2], [0.1]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x, dim=[1, 2]) # [1.0, 5.0]
fluid.layers.reduce_min(x, dim=[0, 1]) # [1.0, 2.0]

reduce_prod

paddle.fluid.layers.reduce_prod(input, dim=None, keep_dim=False, name=None)

Computes the product of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimensions along which the product is performed. If None, multipy all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_prod(x)  # [0.0002268]
fluid.layers.reduce_prod(x, dim=0)  # [0.02, 0.06, 0.3, 0.63]
fluid.layers.reduce_prod(x, dim=-1)  # [0.027, 0.0084]
fluid.layers.reduce_prod(x, dim=1,
                         keep_dim=True)  # [[0.027], [0.0084]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_prod(x, dim=[1, 2]) # [24.0, 1680.0]
fluid.layers.reduce_prod(x, dim=[0, 1]) # [105.0, 384.0]

sequence_first_step

paddle.fluid.layers.sequence_first_step(input)

This function gets the first step of sequence.

x is a 1-level LoDTensor:
  x.lod = [[2, 3, 2]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) == out.dims[0]
  out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)
Parameters:input (variable) – The input variable which is a LoDTensor.
Returns:The sequence’s first step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_first_step = fluid.layers.sequence_first_step(input=x)

sequence_last_step

paddle.fluid.layers.sequence_last_step(input)

This function gets the last step of sequence.

x is a 1-level LoDTensor:
  x.lod = [[2, 3, 2]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) == out.dims[0]
  out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
Parameters:input (variable) – The input variable which is a LoDTensor.
Returns:The sequence’s last step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_last_step = fluid.layers.sequence_last_step(input=x)

dropout

paddle.fluid.layers.dropout(x, dropout_prob, is_test=False, seed=None, name=None)

Computes dropout.

Drop or keep each element of x independently. Dropout is a regularization technique for reducing overfitting by preventing neuron co-adaptation during training. The dropout operator randomly sets (according to the given dropout probability) the outputs of some units to zero, while the others remain unchanged.

Parameters:
  • x (Variable) – The input tensor variable.
  • dropout_prob (float) – Probability of setting units to zero.
  • is_test (bool) – A flag indicating whether it is in the test phase or not.
  • seed (int) – A Python integer used to create random seeds. If this parameter is set to None, a random seed is used. NOTE: If an integer seed is given, always the same output units will be dropped. DO NOT use a fixed seed in training.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

A tensor variable with the same shape as x.

Return type:

Variable

Examples

x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
dropped = fluid.layers.dropout(x, dropout_prob=0.5)

split

paddle.fluid.layers.split(input, num_or_sections, dim=-1, name=None)

Split the input tensor into multiple sub-tensors.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • num_or_sections (int|list) – If num_or_sections is an integer, then the integer indicates the number of equal sized sub-tensors that the tensor will be divided into. If num_or_sections is a list of integers, the length of list indicates the number of sub-tensors and the integers indicate the sizes of sub-tensors’ dim dimension orderly.
  • dim (int) – The dimension along which to split. If \(dim < 0\), the dimension to split along is \(rank(input) + dim\).
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The list of segmented tensor variables.

Return type:

list(Variable)

Examples

# x is a Tensor variable with shape [3, 9, 5]:
x0, x1, x2 = fluid.layers.split(x, num_or_sections=3, dim=1)
x0.shape  # [3, 3, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 3, 5]
x0, x1, x2 = fluid.layers.split(
    x, num_or_sections=[2, 3, 4], dim=1)
x0.shape  # [3, 2, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 4, 5]

ctc_greedy_decoder

paddle.fluid.layers.ctc_greedy_decoder(input, blank, name=None)

This op is used to decode sequences by greedy policy by below steps:

  1. Get the index of the max value for each row of input, i.e. numpy.argmax(input, axis=1).
  2. For each sequence in result of step1, merge repeated tokens between two blanks and delete all blanks.

A simple example as below:

Given:

input.data = [[0.6, 0.1, 0.3, 0.1],
              [0.3, 0.2, 0.4, 0.1],
              [0.1, 0.5, 0.1, 0.3],
              [0.5, 0.1, 0.3, 0.1],

              [0.5, 0.1, 0.3, 0.1],
              [0.2, 0.2, 0.2, 0.4],
              [0.2, 0.2, 0.1, 0.5],
              [0.5, 0.1, 0.3, 0.1]]

input.lod = [[4, 4]]

Then:

output.data = [[2],
               [1],
               [3]]

output.lod = [[2, 1]]
Parameters:
  • input (Variable) – (LoDTensor<float>), the probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label).
  • blank (int) – the blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1).
  • name (str) – The name of this layer. It is optional.
Returns:

CTC greedy decode result. If all the sequences in result were empty, the result LoDTensor will be [-1] with LoD [[]] and dims [1, 1].

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')

cost = fluid.layers.ctc_greedy_decoder(input=x, blank=0)

edit_distance

paddle.fluid.layers.edit_distance(input, label, normalized=True, ignored_tokens=None)

EditDistance operator computes the edit distances between a batch of hypothesis strings and their references. Edit distance, also called Levenshtein distance, measures how dissimilar two strings are by counting the minimum number of operations required to transform one string into another. Here the operations include insertion, deletion, and substitution.

For example, given hypothesis string A = “kitten” and reference B = “sitting”, the edit distance is 3, since A can be transformed into B by two substitutions and one insertion:

“kitten” -> “sitten” -> “sittin” -> “sitting”

The input is a LoDTensor consisting of all the hypothesis strings with the total number denoted by batch_size, and the separation is specified by the LoD information. And the batch_size reference strings are arranged in order in the same way in the input LoDTensor.

The output contains the batch_size results, each of which is the edit distance of a pair of strings. If Attr(normalized) is true, the edit distance will be divided by the length of the reference string.

Parameters:
  • input (Variable) – The indices for hypothesis strings.
  • label (Variable) – The indices for reference strings.
  • normalized (bool, default True) – Indicates whether to normalize the edit distance by the length of the reference string.
  • ignored_tokens (list<int>, default None) – Tokens that should be removed before calculating edit distance.
  • name (str) – The name of this layer. It is optional.
Returns:

sequence-to-sequence edit distance in shape [batch_size, 1].

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')
y = fluid.layers.data(name='y', shape=[7], dtype='float32')
cost = fluid.layers.edit_distance(input=x,label=y)

l2_normalize

paddle.fluid.layers.l2_normalize(x, axis, epsilon=1e-12, name=None)

L2 normalize Layer

The l2 normalize layer normalizes x along dimension axis using an L2 norm. For a 1-D tensor (dim is fixed to 0), this layer computes

\[y = \frac{x}{\sqrt{\sum x^2 + \epsilon}}\]

For x with more dimensions, this layer independently normalizes each 1-D slice along dimension axis.

Parameters:
  • x (Variable|list) – The input tensor to l2_normalize layer.
  • axis (int) – The axis on which to apply normalization. If axis < 0, the dimension to normalize is rank(X) + axis. -1 is the last dimension.
  • epsilon (float) – The epsilon value used to avoid division by zero; the default value is 1e-12.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The output tensor variable is the same shape with x.

Return type:

Variable

Examples

data = fluid.layers.data(name="data",
                         shape=(3, 17, 13),
                         dtype="float32")
normed = fluid.layers.l2_normalize(x=data, axis=1)

matmul

paddle.fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False, name=None)

Applies matrix multiplication to two tensors.

Currently, the input tensors can have any rank, but when the rank of either input is larger than 3, the two inputs must have the same rank.

The actual behavior depends on the shapes of \(x\), \(y\) and the flag values of transpose_x, transpose_y. Specifically:

  • If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is rank-1 of shape \([D]\), then for \(x\) it is treated as \([1, D]\) in nontransposed form and as \([D, 1]\) in transposed form, whereas for \(y\) it is the opposite: It is treated as \([D, 1]\) in nontransposed form and as \([1, D]\) in transposed form.
  • After transpose, the two tensors are 2-D or n-D and matrix multiplication performs in the following way.
    • If both are 2-D, they are multiplied like conventional matrices.
    • If either is n-D, it is treated as a stack of matrices residing in the last two dimensions and a batched matrix multiply supporting broadcast applies on the two tensors.

Also note that if the raw tensor \(x\) or \(y\) is rank-1 and nontransposed, the prepended or appended dimension \(1\) will be removed after matrix multiplication.

Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a Tensor or LoDTensor.
  • transpose_x (bool) – Whether to transpose \(x\) before multiplication.
  • transpose_y (bool) – Whether to transpose \(y\) before multiplication.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The product Tensor variable.

Return type:

Variable

Examples

# Examples to clarify shapes of the inputs and output
# x: [B, ..., M, K], y: [B, ..., K, N]
fluid.layers.matmul(x, y)  # out: [B, ..., M, N]

# x: [B, M, K], y: [B, K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]

# x: [B, M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]

# x: [M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [M, N]

# x: [B, M, K], y: [K]
fluid.layers.matmul(x, y)  # out: [B, M]

# x: [K], y: [K]
fluid.layers.matmul(x, y)  # out: [1]

# x: [M], y: [N]
fluid.layers.matmul(x, y, True, True)  # out: [M, N]

topk

paddle.fluid.layers.topk(input, k, name=None)

This operator is used to find values and indices of the k largest entries for the last dimension.

If the input is a vector (1-D Tensor), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

If the input is a Tensor with higher rank, this operator computes the top k entries along the last dimension.

For example:

If:
    input = [[5, 4, 2, 3],
             [9, 7, 10, 25],
             [6, 2, 10, 1]]
    k = 2

Then:
    The first output:
    values = [[5, 4],
              [10, 25],
              [6, 10]]

    The second output:
    indices = [[0, 1],
               [2, 3],
               [0, 2]]
Parameters:
  • input (Variable) – The input variable which can be a vector or Tensor with higher rank.
  • k (int) – The number of top elements to look for along the last dimension of input.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None
Returns:

A tuple with two elements. Each element is a Variable. The first one is k largest elements along each last dimensional slice. The second one is indices of values within the last dimension of input.

Return type:

Tuple[Variable]

Raises:

ValueError – If k < 1 or k is not less than the last dimension of input

Examples

input = fluid.layers.data(name='input', shape=[13, 11], dtype='float32')
top5_values, top5_indices = fluid.layers.topk(input, k=5)

warpctc

paddle.fluid.layers.warpctc(input, label, blank=0, norm_by_times=False)

An operator integrating the open source Warp-CTC library (https://github.com/baidu-research/warp-ctc) to compute Connectionist Temporal Classification (CTC) loss. It can be aliased as softmax with CTC, since a native softmax activation is integrated into the Warp-CTC library to normalize the values in each row of the input tensor.

Parameters:
  • input (Variable) – The unscaled probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label).
  • label (Variable) – The ground truth of variable-length sequences, which is a 2-D Tensor with LoD information. It is of the shape [Lg, 1], where Lg is the sum of all labels’ lengths.
  • blank (int, default 0) – The blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1).
  • norm_by_times (bool, default false) – Whether to normalize the gradients by the number of time-steps, which is also the sequence’s length. There is no need to normalize the gradients if the warpctc layer is followed by a mean_op.
Returns:

The Connectionist Temporal Classification (CTC) loss, which is a 2-D Tensor of the shape [batch_size, 1].

Return type:

Variable

Examples

label = fluid.layers.data(name='label', shape=[11, 8],
                          dtype='float32', lod_level=1)
predict = fluid.layers.data(name='predict', shape=[11, 1], dtype='float32')
cost = fluid.layers.warpctc(input=predict, label=label)

sequence_reshape

paddle.fluid.layers.sequence_reshape(input, new_dim)

Sequence Reshape Layer

This layer will rearrange the input sequences. The new dimension is set by user. Length of each sequence is computed according to original length, original dimension and new dimension. The following example will help to illustrate the function of this layer:

x is a LoDTensor:
    x.lod  = [[0, 2, 6]]
    x.data = [[1,  2], [3,  4],
              [5,  6], [7,  8],
              [9, 10], [11, 12]]
    x.dims = [6, 2]

set new_dim = 4

then out is a LoDTensor:

    out.lod  = [[0, 1, 3]]

    out.data = [[1,  2,  3,  4],
                [5,  6,  7,  8],
                [9, 10, 11, 12]]
    out.dims = [3, 4]

Currently, only 1-level LoDTensor is supported. Please make sure that (original length * original dimension) is divisible by the new dimension for each sequence.

Parameters:
  • input (Variable) – A 2-D LoDTensor with shape [N, M], where M is the dimension.
  • new_dim (int) – New dimension that the input LoDTensor is reshaped to.
Returns:

Reshaped LoDTensor according to new dimension.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 20], dtype='float32', lod_level=1)
x_reshaped = fluid.layers.sequence_reshape(input=x, new_dim=10)

transpose

paddle.fluid.layers.transpose(x, perm, name=None)

Permute the dimensions of input according to perm.

The i-th dimension of the returned tensor will correspond to the perm[i]-th dimension of input.

Parameters:
  • x (Variable) – The input Tensor.
  • perm (list) – A permutation of the dimensions of input.
  • name (str) – The name of this layer. It is optional.
Returns:

A transposed Tensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 10, 15], dtype='float32')
x_transposed = fluid.layers.transpose(x, perm=[1, 0, 2])

im2sequence

paddle.fluid.layers.im2sequence(input, filter_size=1, stride=1, padding=0, name=None)

Extracts image patches from the input tensor to form a tensor of shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}, which is similar to im2col. This op uses a filter / kernel to scan images and converts them to sequences. After expanding, the number of time steps for an image is output_height * output_width, where output_height and output_width are calculated by the equation below:

\[output\_size = 1 + (2 * padding + img\_size - block\_size + stride - 1) / stride\]

And the dimension of each time step is block_y * block_x * input.channels.

Parameters:
  • input (Variable) – The input should be a tensor in NCHW format.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it can contain two integers like (padding_H, padding_W) which means padding_up = padding_down = padding_H and padding_left = padding_right = padding_W. Or it can use (padding_up, padding_left, padding_down, padding_right) to indicate paddings of four direction. Otherwise, a scalar padding means padding_up = padding_down = padding_left = padding_right = padding Default: padding = 0.
  • name (int) – The name of this layer. It is optional.
Returns:

The output is a LoDTensor with shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}. If we regard output as a matrix, each row of this matrix is a step of a sequence.

Return type:

output

Examples

   Given:

   x = [[[[ 6.  2.  1.]
          [ 8.  3.  5.]
          [ 0.  2.  6.]]

         [[ 2.  4.  4.]
          [ 6.  3.  0.]
          [ 6.  4.  7.]]]

        [[[ 6.  7.  1.]
          [ 5.  7.  9.]
          [ 2.  4.  8.]]

         [[ 1.  2.  1.]
          [ 1.  3.  5.]
          [ 9.  0.  8.]]]]

   x.dims = {2, 2, 3, 3}

   And:

   filter = [2, 2]
   stride = [1, 1]
   padding = [0, 0]

   Then:

   output.data = [[ 6.  2.  8.  3.  2.  4.  6.  3.]
                  [ 2.  1.  3.  5.  4.  4.  3.  0.]
                  [ 8.  3.  0.  2.  6.  3.  6.  4.]
                  [ 3.  5.  2.  6.  3.  0.  4.  7.]
                  [ 6.  7.  5.  7.  1.  2.  1.  3.]
                  [ 7.  1.  7.  9.  2.  1.  3.  5.]
                  [ 5.  7.  2.  4.  1.  3.  9.  0.]
                  [ 7.  9.  4.  8.  3.  5.  0.  8.]]

   output.dims = {8, 9}

   output.lod = [[4, 4]]

Examples

output = fluid.layers.im2sequence(
    input=layer, stride=[1, 1], filter_size=[2, 2])

nce

paddle.fluid.layers.nce(input, label, num_total_classes, sample_weight=None, param_attr=None, bias_attr=None, num_neg_samples=None)

Compute and return the noise-contrastive estimation training loss. See Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. By default this operator uses a uniform distribution for sampling.

Parameters:
  • input (Variable) – input variable.
  • label (Variable) – label.
  • num_total_classes (int) – Total number of classes in all samples
  • sample_weight (Variable|None) – A Variable of shape [batch_size, 1] storing a weight for each sample. The default weight for each sample is 1.0.
  • param_attr (ParamAttr|None) – attributes for parameter
  • bias_attr (ParamAttr|None) – attributes for bias
  • num_neg_samples (int) – The number of negative classes. The default value is 10
Returns:

The output nce loss.

Return type:

Variable

Examples

window_size = 5
words = []
for i in xrange(window_size):
    words.append(layers.data(
        name='word_{0}'.format(i), shape=[1], dtype='int64'))

dict_size = 10000
label_word = int(window_size / 2) + 1

embs = []
for i in xrange(window_size):
    if i == label_word:
        continue

    emb = layers.embedding(input=words[i], size=[dict_size, 32],
                           param_attr='emb.w', is_sparse=True)
    embs.append(emb)

embs = layers.concat(input=embs, axis=1)
loss = layers.nce(input=embs, label=words[label_word],
              num_total_classes=dict_size, param_attr='nce.w',
              bias_attr='nce.b')

row_conv

paddle.fluid.layers.row_conv(input, future_context_size, param_attr=None, act=None)

Row-convolution operator

The row convolution is called lookahead convolution. This operator was introduced in the following paper for DeepSpeech2: http://www.cs.cmu.edu/~dyogatam/papers/wang+etal.iclrworkshop2016.pdf

The main motivation is that a bidirectional RNN, useful in DeepSpeech like speech models, learns representation for a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online and low-latency setting. The lookahead convolution incorporates information from future subsequences in a computationally efficient manner to improve unidirectional recurrent neural networks. The row convolution operator is different from the 1D sequence convolution, and is computed as follows:

Given an input sequence \(in\) of length \(t\) and input dimension \(d\), and a filter (\(W\)) of size \(context \times d\), the output sequence is convolved as:

$$ out_{i, :} = \sum_{j=i}^{i + context} in_{j,:} \cdot W_{i-j, :} $$

In the above equation:

  • \(out_{i,:}\): The i-th row of the output variable with shape [1, D].
  • \(context\): Future context size.
  • \(in_{j,:}\): The j-th row of the input variable with shape [1, D].
  • \(W_{i-j,:}\): The (i-j)-th row of the parameters with shape [1, D].

More details about row_conv please refer to the design document https://github.com/PaddlePaddle/Paddle/issues/2228#issuecomment-303903645 .

Parameters:
  • input (Variable) – the input(X) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LoDTensor is a matrix with shape (T x N), where T is the total time steps in this mini-batch and N is the input data dimension.
  • future_context_size (int) – Future context size. Please note, the shape of convolution kernel is [future_context_size + 1, D].
  • param_attr (ParamAttr) – Attributes of parameters, including name, initializer etc.
  • act (str) – Non-linear activation to be applied to output variable.
Returns:

the output(Out) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LodTensor is a matrix with shape T x N, i.e., the same shape as X.

Examples

>>> import paddle.fluid as fluid
>>> x = fluid.layers.data(name='x', shape=[16],
>>>                        dtype='float32', lod_level=1)
>>> out = fluid.layers.row_conv(input=x, future_context_size=2)

multiplex

paddle.fluid.layers.multiplex(inputs, index)

Referring to the given index variable, this layer selects rows from the input variables to construct a multiplex variable. Assuming that there are \(m\) input variables and \(I_i\) represents the i-th input variable and \(i\) is in [0, \(m\)). All input variables are tensors with same shape [\(d_0\), \(d_1\), ..., \(d_R\)]. Please note that rank of the input tensor should be at least 2. Each input variable will be treated as a 2-D matrix with shape [\(M\), \(N\)] where \(M\) for \(d_0\) and \(N\) for \(d_1\) * \(d_2\) * ... * \(d_R\). Let \(I_i[j]\) be the j-th row of the i-th input variable. The given index variable should be a 2-D tensor with shape [\(M\), 1]. Let ID[i] be the i-th index value of the index variable. Then the output variable will be a tensor with shape [\(d_0\), \(d_1\), ..., \(d_R\)]. If we treat the output tensor as a 2-D matrix with shape [\(M\), \(N\)] and let \(O[i]\) be the i-th row of the matrix, then O[i] is equal to \(I_{ID[i]}[i]\).

  • Ids: the index tensor.
  • X[0 : N - 1]: the candidate tensors for output (N >= 2).
  • For each index i from 0 to batchSize - 1, the output is the i-th row of the (Ids[i])-th tensor.

For i-th row of the output tensor:

$$ y[i] = x_{k}[i] $$

where \(y\) is the output tensor, \(x_{k}\) is the k-th input tensor, and \(k = Ids[i]\).

>>> import paddle.fluid as fluid
>>> x1 = fluid.layers.data(name='x1', shape=[4], dtype='float32')
>>> x2 = fluid.layers.data(name='x2', shape=[4], dtype='float32')
>>> index = fluid.layers.data(name='index', shape=[1], dtype='int32')
>>> out = fluid.layers.multiplex(inputs=[x1, x2], index=index)
Parameters:
  • inputs (list) – A list of variables to gather from. All variables have the same shape and the rank is at least 2.
  • index (Variable) – Tensor<int32>, index variable which is a 2-D tensor with shape [M, 1] where M is the batch size.
Returns:

The output tensor of multiplex operator.

layer_norm

paddle.fluid.layers.layer_norm(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None)

Assume feature vectors exist on dimensions begin_norm_axis ... rank(input) and calculate the moment statistics along these dimensions for each feature vector \(a\) with size \(H\), then normalize each feature vector using the corresponding statistics. After that, apply learnable gain and bias on the normalized tensor to scale and shift if scale and shift are set.

Refer to Layer Normalization

The formula is as follows:

\[ \begin{align}\begin{aligned}\mu & = \frac{1}{H}\sum_{i=1}^{H} a_i\\\sigma & = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(a_i - \mu)^2}\\h & = f(\frac{g}{\sigma}(a - \mu) + b)\end{aligned}\end{align} \]
  • \(a\): the vector representation of the summed inputs to the neurons in that layer.

  • \(H\): the number of hidden units in a layer
  • \(g\): the trainable scale parameter.
  • \(b\): the trainable bias parameter.
Parameters:
  • input (Variable) – The input tensor variable.
  • scale (bool) – Whether to learn the adaptive gain \(g\) after normalization.
  • shift (bool) – Whether to learn the adaptive bias \(b\) after normalization.
  • begin_norm_axis (int) – The normalization will be performed along dimensions from begin_norm_axis to rank(input).
  • epsilon (float) – The small value added to the variance to prevent division by zero.
  • param_attr (ParamAttr|None) – The parameter attribute for the learnable gain \(g\).
  • bias_attr (ParamAttr|None) – The parameter attribute for the learnable bias \(b\).
  • act (str) – Activation to be applied to the output of layer normalization.
  • name (str) – The name of this layer. It is optional.
Returns:

Result after normalization

Examples

>>> data = fluid.layers.data(name='data', shape=[3, 32, 32],
>>>                          dtype='float32')
>>> x = fluid.layers.layer_norm(input=data, begin_norm_axis=1)

softmax_with_cross_entropy

paddle.fluid.layers.softmax_with_cross_entropy(logits, label, soft_label=False)

Softmax With Cross Entropy Operator.

Cross entropy loss with softmax is used as the output layer extensively. This operator computes the softmax normalized values for each row of the input tensor, after which cross-entropy loss is computed. This provides a more numerically stable gradient.

Because this operator performs a softmax on logits internally, it expects unscaled logits. This operator should not be used with the output of softmax operator since that would produce incorrect results.

When the attribute soft_label is set false, this operator expects mutually exclusive hard labels: each sample in a batch is in exactly one class with a probability of 1.0. Each sample in the batch will have a single label.

The equation is as follows:

  1. Hard label (one-hot label, so every sample has exactly one class)
\[loss_j = -\text{logit}_{label_j} + \log\left(\sum_{i=0}^{K}\exp(\text{logit}_i)\right), j = 1,..., K\]
  2. Soft label (each sample can have a distribution over all classes)
\[loss_j = -\sum_{i=0}^{K}\text{label}_i \left(\text{logit}_i - \log\left(\sum_{i=0}^{K} \exp(\text{logit}_i)\right)\right), j = 1,...,K\]
Parameters:
  • logits (Variable) – The unscaled log probabilities, which is a 2-D tensor with shape [N x K]. N is the batch_size, and K is the class number.
  • label (Variable) – The ground truth which is a 2-D tensor. If soft_label is set to false, Label is a Tensor<int64> with shape [N x 1]. If soft_label is set to true, Label is a Tensor<float/double> with shape [N x K].
  • soft_label (bool) – A flag to indicate whether to interpretate the given labels as soft labels. By default, soft_label is set to False.
Returns:

The cross entropy loss is a 2-D tensor with shape [N x 1].

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.softmax_with_cross_entropy(
    logits=fc, label=label)

smooth_l1

paddle.fluid.layers.smooth_l1(x, y, inside_weight=None, outside_weight=None, sigma=None)

This layer computes the smooth L1 loss for Variable x and y. It takes the first dimension of x and y as batch size. For each instance, it computes the smooth L1 loss element by element first and then sums all the losses. So the shape of the output Variable is [batch_size, 1].

Parameters:
  • x (Variable) – A tensor with rank at least 2. The input value of smooth L1 loss op with shape [batch_size, dim1, ..., dimN].
  • y (Variable) – A tensor with rank at least 2. The target value of smooth L1 loss op with same shape as x.
  • inside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have same shape with x. If provided, the result of (x - y) will be multiplied by this tensor element by element.
  • outside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have same shape with x. If provided, the out smooth L1 loss will be multiplied by this tensor element by element.
  • sigma (float|None) – Hyper parameter of smooth L1 loss layer. A float scalar with default value 1.0.
Returns:

The output smooth L1 loss with shape [batch_size, 1].

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(
    name='label', shape=[100], dtype='float32')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.smooth_l1(x=fc, y=label)

one_hot

paddle.fluid.layers.one_hot(input, depth)

This layer creates the one-hot representations for input indices.

Parameters:
  • input (Variable) – Input indices, last dimension must be 1.
  • depth (scalar) – An integer defining the depth of the one-hot dimension.
Returns:

The one-hot representations of input.

Return type:

Variable

Examples

label = layers.data(name="label", shape=[1], dtype="float32")
one_hot_label = layers.one_hot(input=label, depth=10)

autoincreased_step_counter

paddle.fluid.layers.autoincreased_step_counter(counter_name=None, begin=1, step=1)

Create an auto-increasing variable which is increased by 1 for every mini-batch, and return the run counter of the main program. By default the counter starts from 1.

Parameters:
  • counter_name (str) – The counter name, default is ‘@STEP_COUNTER@’.
  • begin (int) – The first value of this counter.
  • step (int) – The increment step between each execution.
Returns:

The global run counter.

Return type:

Variable

Examples

global_step = fluid.layers.autoincreased_step_counter(
    counter_name='@LR_DECAY_COUNTER@', begin=1, step=1)

reshape

paddle.fluid.layers.reshape(x, shape, actual_shape=None, act=None, inplace=True, name=None)

Gives a new shape to the input Tensor without changing its data.

The target shape can be given by shape or actual_shape. shape is a list of integers while actual_shape is a tensor variable. actual_shape has a higher priority than shape if it is provided, but shape should still be set correctly to guarantee shape inference at compile time.

Some tricks exist when specifying the target shape.

1. -1 means the value of this dimension is inferred from the total element number of x and the remaining dimensions. Thus one and only one dimension can be set to -1.

2. 0 means the actual dimension value is going to be copied from the corresponding dimension of x. The indices of 0s in shape cannot exceed Rank(X).

Here are some examples to explain it.

1. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [6, 8], the reshape operator will transform x into a 2-D tensor with shape [6, 8] and leaving x’s data unchanged.

2. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape specified is [2, 3, -1, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 3, 4, 2] and leaving x’s data unchanged. In this case, one dimension of the target shape is set to -1, the value of this dimension is inferred from the total element number of x and remaining dimensions.

3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 4, 3, 2] and leaving x’s data unchanged. In this case, besides -1, 0 means the actual dimension value is going to be copied from the corresponding dimension of x.

Parameters:
  • x (variable) – The input tensor.
  • shape (list) – The new shape. At most one dimension of the new shape can be -1.
  • actual_shape (Variable) – An optional input. If provided, reshape according to this shape rather than the one given by shape. That is to say, actual_shape has a higher priority than shape.
  • act (str) – The non-linear activation to be applied to output variable.
  • inplace (bool) – If this flag is set true, the output shares data with the input x without copying; otherwise a new output tensor is created whose data is copied from the input x.
  • name (str) – The name of this layer. It is optional.
Returns:

The output tensor.

Return type:

Variable

Examples

data = fluid.layers.data(
    name='data', shape=[2, 4, 6], dtype='float32')
reshaped = fluid.layers.reshape(
    x=data, shape=[-1, 0, 3, 2], act='tanh', inplace=True)

lod_reset

paddle.fluid.layers.lod_reset(x, y=None, target_lod=None)

Set the LoD of x to a new one specified by y or target_lod. When y is provided, y.lod is considered as the target LoD first; otherwise y.data is considered as the target LoD. If y is not provided, the target LoD should be specified by target_lod. If the target LoD is specified by y.data or target_lod, only one-level LoD is supported.

* Example 1:

    Given a 1-level LoDTensor x:
        x.lod =  [[ 2,           3,                   1 ]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]

    target_lod: [4, 2]

    then we get a 1-level LoDTensor:
        out.lod =  [[4,                          2]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]

* Example 2:

    Given a 1-level LoDTensor x:
        x.lod =  [[2,            3,                   1]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]

    y is a Tensor:
        y.data = [[2, 4]]
        y.dims = [1, 2]

    then we get a 1-level LoDTensor:
        out.lod =  [[2,            4]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]

* Example 3:

    Given a 1-level LoDTensor x:
        x.lod =  [[2,            3,                   1]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]

    y is a 2-level LoDTensor:
        y.lod =  [[2, 2], [2, 2, 1, 1]]
        y.data = [[1.1], [2.1], [3.1], [4.1], [5.1], [6.1]]
        y.dims = [6, 1]

    then we get a 2-level LoDTensor:
        out.lod =  [[2, 2], [2, 2, 1, 1]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]
Parameters:
  • x (Variable) – Input variable which could be a Tensor or LodTensor.
  • y (Variable|None) – If provided, output’s LoD would be derived from y.
  • target_lod (list|tuple|None) – One level LoD which should be considered as target LoD when y not provided.
Returns:

Output variable with LoD specified by this layer.

Return type:

Variable

Raises:

ValueError – If y and target_lod are both None.

Examples

x = layers.data(name='x', shape=[10])
y = layers.data(name='y', shape=[10, 20], lod_level=2)
out = layers.lod_reset(x=x, y=y)

lrn

paddle.fluid.layers.lrn(input, n=5, k=1.0, alpha=0.0001, beta=0.75, name=None)

Local Response Normalization Layer. This layer performs a type of “lateral inhibition” by normalizing over local input regions.

The formula is as follows:

\[Output(i, x, y) = Input(i, x, y) / \left(k + \alpha \sum\limits^{\min(C, c + n/2)}_{j = \max(0, c - n/2)}(Input(j, x, y))^2\right)^{\beta}\]

In the above equation:

  • \(n\): The number of channels to sum over.
  • \(k\): The offset (to avoid division by zero).
  • \(alpha\): The scaling parameter.
  • \(beta\): The exponent parameter.

Refer to ImageNet Classification with Deep Convolutional Neural Networks

Parameters:
  • input (Variable) – The input tensor of this layer, and the dimension of input tensor must be 4.
  • n (int, default 5) – The number of channels to sum over.
  • k (float, default 1.0) – An offset (usually positive to avoid dividing by 0).
  • alpha (float, default 1e-4) – The scaling parameter.
  • beta (float, default 0.75) – The exponent.
  • name (str, default None) – A name for this operation.
Raises:

ValueError – If rank of the input tensor is not 4.

Returns:

A tensor variable storing the transformation result.

Examples

data = fluid.layers.data(
    name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)

pad

paddle.fluid.layers.pad(x, paddings, pad_value=0.0, name=None)

Pads a tensor with a constant value given by pad_value, and the padded width is specified by paddings.

Specifically, the number of values padded before the contents of x in dimension i is indicated by paddings[2*i], and the number of values padded after the contents of x in dimension i is indicated by paddings[2*i+1].

See below for an example.

Given:
    x = [[1, 2], [3, 4]]

    paddings = [0, 1, 1, 2]

    pad_value = 0

Return:

    out = [[0, 1, 2, 0, 0]
           [0, 3, 4, 0, 0]
           [0, 0, 0, 0, 0]]
Parameters:
  • x (Variable) – The input tensor variable.
  • paddings (list) – A list of integers. Its elements specify the padded width before and after for each dimension in turn. The length of :attr:paddings must be \(rank(x) \times 2\).
  • pad_value (float) – The constant value used to pad.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The padded tensor variable.

Return type:

Variable

Examples

# x is a rank 2 tensor variable.
out = fluid.layers.pad(
    x=x, paddings=[0, 1, 1, 2], pad_value=0.)

label_smooth

paddle.fluid.layers.label_smooth(label, prior_dist=None, epsilon=0.1, dtype='float32', name=None)

Label smoothing is a mechanism to regularize the classifier layer and is called label-smoothing regularization (LSR).

Label smoothing is proposed to encourage the model to be less confident, since optimizing the log-likelihood of the correct label directly may cause overfitting and reduce the ability of the model to adapt. Label smoothing replaces the ground-truth label \(y\) with the weighted sum of itself and some fixed distribution \(\mu\). For class \(k\), i.e.

\[\tilde{y_k} = (1 - \epsilon) * y_k + \epsilon * \mu_k,\]

where \(1 - \epsilon\) and \(\epsilon\) are the weights respectively, and \(\tilde{y}_k\) is the smoothed label. Usually uniform distribution is used for \(\mu\).

See more details about label smoothing in https://arxiv.org/abs/1512.00567.

Parameters:
  • label (Variable) – The input variable containing the label data. The label data should use one-hot representation.
  • prior_dist (Variable) – The prior distribution to be used to smooth labels. If not provided, a uniform distribution is used. The shape of prior_dist should be \((1, class\_num)\).
  • epsilon (float) – The weight used to mix up the original ground-truth distribution and the fixed distribution.
  • dtype (np.dtype|core.VarDesc.VarType|str) – The data type: float32, float64, int, etc.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable containing the smoothed labels.

Return type:

Variable

Examples

label = layers.data(name="label", shape=[1], dtype="float32")
one_hot_label = layers.one_hot(input=label, depth=10)
smooth_label = layers.label_smooth(
    label=one_hot_label, epsilon=0.1, dtype="float32")

roi_pool

paddle.fluid.layers.roi_pool(input, rois, pooled_height=1, pooled_width=1, spatial_scale=1.0)

ROIPool Operator

Region of interest pooling (also known as RoI pooling) performs max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).

The operator has three steps:

  1. Dividing each region proposal into equal-sized sections with the pooled_width and pooled_height
  2. Finding the largest value in each section
  3. Copying these max values to the output buffer

ROI Pooling for Faster-RCNN. The link below is a further introduction: https://stackoverflow.com/questions/43430056/what-is-roi-layer-in-fast-rcnn

Parameters:
  • input (Variable) – (Tensor), the input of ROIPoolOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature
  • rois (Variable) – ROIs (Regions of Interest) to pool over.
  • pooled_height (integer) – (int, default 1), The pooled output height Default: 1
  • pooled_width (integer) – (int, default 1), The pooled output width Default: 1
  • spatial_scale (float) – (float, default 1.0), Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling Default: 1.0
Returns:

(Tensor), The output of ROIPoolOp is a 4-D tensor with shape (num_rois, channels, pooled_h, pooled_w).

Return type:

Variable

Examples

pool_out = fluid.layers.roi_pool(input=x, rois=rois, pooled_height=7,
                                 pooled_width=7, spatial_scale=1.0)

dice_loss

paddle.fluid.layers.dice_loss(input, label, epsilon=1e-05)

Dice loss for comparing the similarity between two batches of data, usually used for binary image segmentation, i.e. labels are binary. The dice loss can be defined as the equation below:

\[\begin{split}dice\_loss &= 1 - \frac{2 * intersection\_area}{total\_area} \\ &= \frac{(total\_area - intersection\_area) - intersection\_area}{total\_area} \\ &= \frac{(union\_area - intersection\_area)}{total\_area}\end{split}\]
Parameters:
  • input (Variable) – The predictions with rank>=2. The first dimension is batch size, and the last dimension is class number.
  • label (Variable) – The ground truth with the same rank as input. The first dimension is batch size, and the last dimension is 1.
  • epsilon (float) – The epsilon will be added to the numerator and denominator. If both input and label are empty, it makes sure dice is 1. Default: 0.00001
Returns:

The dice loss with shape [1].

Return type:

dice_loss (Variable)

Examples

predictions = fluid.layers.softmax(x)
loss = fluid.layers.dice_loss(input=predictions, label=label)

image_resize

paddle.fluid.layers.image_resize(input, out_shape=None, scale=None, name=None, resample='BILINEAR')

Resize a Batch of Images

The input must be a tensor of the shape (num_batches, channels, in_h, in_w), and the resizing only applies to the last two dimensions (height and width).

Supporting resample methods:

‘BILINEAR’ : Bilinear interpolation
Parameters:
  • input (Variable) – The input tensor of image resize layer, This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w).
  • out_shape (list|tuple|Variable|None) – Output shape of image resize layer, the shape is (out_h, out_w). Default: None
  • scale (float|None) – The multiplier for the input height or width. At least one of out_shape or scale must be set. And out_shape has a higher priority than scale. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
  • resample (str) – The resample method. It can only be ‘BILINEAR’ currently. Default: ‘BILINEAR’
Returns:

The output is a 4-D tensor of the shape (num_batches, channels, out_h, out_w).

Return type:

Variable

Examples

out = fluid.layers.image_resize(input, out_shape=[12, 12])

image_resize_short

paddle.fluid.layers.image_resize_short(input, out_short_len, resample='BILINEAR')

Resize a batch of images. The short edge of the input images will be resized to the given ‘out_short_len’. The long edge of the input images will be resized proportionally so that the aspect ratio is preserved.

Parameters:
  • input (Variable) – The input tensor of image resize layer, This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w).
  • out_short_len (int) – The length of output images’ short edge.
  • resample (str) – resample method, default: BILINEAR.
Returns:

The output is a 4-D tensor of the shape (num_batches, channels, out_h, out_w).

Return type:

Variable
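
Examples

A minimal usage sketch (the variable name and shape below are illustrative, not part of the API):

img = fluid.layers.data(name='img', shape=[3, 256, 256], dtype='float32')
# resize so that the shorter edge becomes 224 pixels; the longer edge is scaled proportionally
resized = fluid.layers.image_resize_short(input=img, out_short_len=224)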

resize_bilinear

paddle.fluid.layers.resize_bilinear(input, out_shape=None, scale=None, name=None)

Bilinear interpolation is an extension of linear interpolation for interpolating functions of two variables (e.g. H-direction and W-direction in this op) on a rectilinear 2D grid.

The key idea is to perform linear interpolation first in one direction, and then again in the other direction.

For details, please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation

Parameters:
  • input (Variable) – The input tensor of bilinear interpolation, This is a 4-D tensor with shape of (N x C x h x w).
  • out_shape (Variable) – This is a 1-D tensor with two numbers. The first number is the height and the second number is the width.
  • scale (float|None) – The multiplier for the input height or width. At least one of out_shape or scale must be set. And out_shape has a higher priority than scale. Default: None.
  • name (str|None) – The output variable name.
Returns:

The dimension of output is (N x C x out_h x out_w).
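
Examples

A minimal sketch, assuming a 4-D NCHW input; the scale value is illustrative:

img = fluid.layers.data(name='img', shape=[3, 32, 32], dtype='float32')
# double the height and width via bilinear interpolation
out = fluid.layers.resize_bilinear(input=img, scale=2.0)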

gather

paddle.fluid.layers.gather(input, index)

Gather Layer

Output is obtained by gathering entries of the outer-most dimension of X indexed by index and concatenating them together.

\[Out = X[Index]\]
Given:

X = [[1, 2],
     [3, 4],
     [5, 6]]

Index = [1, 2]

Then:

Out = [[3, 4],
       [5, 6]]
Parameters:
  • input (Variable) – The source input with rank>=1.
  • index (Variable) – The index input with rank=1.
Returns:

The output is a tensor with the same rank as input.

Return type:

output (Variable)

Examples

output = fluid.layers.gather(x, index)

random_crop

paddle.fluid.layers.random_crop(x, shape, seed=None)

This operator takes a batch of instances and does random cropping on each instance. The cropping position differs for each instance and is determined by a uniform random generator. All cropped instances have the same shape, which is determined by the operator’s attribute ‘shape’.

Parameters:
  • x (Variable) – A batch of instances to random crop
  • shape (INTS) – The shape of a cropped instance
  • seed (int|Variable|None) – The random seed. By default, the seed is obtained from random.randint(-65536, 65535).
Returns:

The cropped instance batch

Examples

>>> img = fluid.layers.data("img", [3, 256, 256])
>>> cropped_img = fluid.layers.random_crop(img, shape=[3, 224, 224])

mean_iou

paddle.fluid.layers.mean_iou(input, label, num_classes)

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows:

\[IOU = \frac{true\_positive}{true\_positive + false\_positive + false\_negative}.\]

The predictions are accumulated in a confusion matrix and mean-IOU is then calculated from it.

Parameters:
  • input (Variable) – A Tensor of prediction results for semantic labels with type int32 or int64.
  • label (Variable) – A Tensor of ground truth labels with type int32 or int64. Its shape should be the same as input.
  • num_classes (int) – The possible number of labels.
Returns:

A Tensor representing the mean intersection-over-union with shape [1].
out_wrong (Variable): A Tensor with shape [num_classes], the wrong count of each class.
out_correct (Variable): A Tensor with shape [num_classes], the correct count of each class.

Return type:

mean_iou (Variable)

Examples

iou, wrongs, corrects = fluid.layers.mean_iou(predict, label, num_classes)

ops

mean

paddle.fluid.layers.mean(*args, **kwargs)

Mean Operator calculates the mean of all elements in X.

Parameters:x – (Tensor) The input of mean op
Returns:(Tensor) The output of mean op
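
Examples

A typical use is reducing a per-sample cost to a scalar loss; the cost variable below is assumed to come from an earlier loss layer:

avg_loss = fluid.layers.mean(x=cost)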

mul

paddle.fluid.layers.mul(*args, **kwargs)

Mul Operator.

This operator is used to perform matrix multiplication for input \(X\) and \(Y\).

The equation is:

$$Out = X * Y$$

Both the input \(X\) and \(Y\) can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of mul op.
  • y – (Tensor), The second input tensor of mul op.
  • x_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input \(X\) is a tensor with more than two dimensions, \(X\) will be flattened into a two-dimensional matrix first. The flattening rule is: the first x_num_col_dims dimensions will be flattened to form the first dimension of the final matrix (the height of the matrix), and the remaining rank(X) - x_num_col_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). As a result, the height of the flattened matrix is equal to the product of \(X\)'s first x_num_col_dims dimensions' sizes, and the width of the flattened matrix is equal to the product of \(X\)'s last rank(X) - x_num_col_dims dimensions' sizes. For example, suppose \(X\) is a 5-dimensional tensor with the shape [2, 3, 4, 5, 6] and x_num_col_dims = 3. Then the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
  • y_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input \(Y\) is a tensor with more than two dimensions, \(Y\) will be flattened into a two-dimensional matrix first. The attribute y_num_col_dims determines how \(Y\) is flattened. See the comments of x_num_col_dims for more details.
Returns:

(Tensor), The output tensor of mul op.
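
Examples

A minimal sketch of the 2-D case; create_parameter is used here only to obtain a weight tensor of a known shape, and the shapes are illustrative:

x = fluid.layers.data(name='x', shape=[4], dtype='float32')        # runtime shape: [batch, 4]
w = fluid.layers.create_parameter(shape=[4, 5], dtype='float32')   # weight matrix: [4, 5]
out = fluid.layers.mul(x=x, y=w)                                   # out: [batch, 5]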

scale

paddle.fluid.layers.scale(*args, **kwargs)

Scale operator

Multiply the input tensor with a float scalar to scale the input tensor.

$$Out = scale*X$$

Parameters:
  • x – (Tensor) Input tensor of scale operator.
  • scale (FLOAT) – The scaling factor of the scale operator.
Returns:

(Tensor) Output tensor of scale operator.
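
Examples

A minimal sketch; the scaling factor is illustrative:

data = fluid.layers.data(name='data', shape=[32], dtype='float32')
scaled = fluid.layers.scale(x=data, scale=2.0)  # every element is multiplied by 2.0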

sigmoid_cross_entropy_with_logits

paddle.fluid.layers.sigmoid_cross_entropy_with_logits(*args, **kwargs)

SigmoidCrossEntropyWithLogits Operator.

This measures the element-wise probability error in classification tasks in which each class is independent. This can be thought of as predicting labels for a data-point, where labels are not mutually exclusive. For example, a news article can be about politics, technology or sports at the same time or none of these.

The logistic loss is given as follows:

$$loss = -Labels * log(sigma(X)) - (1 - Labels) * log(1 - sigma(X))$$

We know that $$sigma(X) = \frac{1}{1 + exp(-X)}$$. By substituting this we get:

$$loss = X - X * Labels + log(1 + exp(-X))$$

For stability and to prevent overflow of $$exp(-X)$$ when X < 0, we reformulate the loss as follows:

$$loss = max(X, 0) - X * Labels + log(1 + exp(-|X|))$$

Both the input X and Labels can carry the LoD (Level of Details) information. However the output only shares the LoD with input X.

Parameters:
  • x – (Tensor, default Tensor<float>), a 2-D tensor with shape N x D, where N is the batch size and D is the number of classes. This input is a tensor of logits computed by the previous operator. Logits are unscaled log probabilities given as log(p/(1-p)).
  • label – (Tensor, default Tensor<float>), a 2-D tensor of the same type and shape as X. This input is a tensor of probabilistic labels for each logit.
Returns:

(Tensor, default Tensor<float>), a 2-D tensor with shape N x D of elementwise logistic losses.
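
Examples

A minimal sketch with illustrative shapes; both inputs are [N x D] as described above, and label holds per-class probabilities in [0, 1]:

logits = fluid.layers.data(name='logits', shape=[10], dtype='float32')
labels = fluid.layers.data(name='labels', shape=[10], dtype='float32')
loss = fluid.layers.sigmoid_cross_entropy_with_logits(x=logits, label=labels)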

elementwise_add

paddle.fluid.layers.elementwise_add(*args, **kwargs)

Limited Elementwise Add Operator

The equation is:

$$Out = X + Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.
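
Examples

A minimal sketch of the two broadcasting cases above; create_parameter is used only to obtain a second tensor without a batch dimension, and the shapes are illustrative:

# case 1: x and y have the same shape
x = fluid.layers.data(name='x', shape=[3, 4], dtype='float32')
y = fluid.layers.data(name='y', shape=[3, 4], dtype='float32')
out1 = fluid.layers.elementwise_add(x=x, y=y)

# case 2: b's shape is a continuous subsequence of x's shape, broadcast starting at axis=1
b = fluid.layers.create_parameter(shape=[3, 4], dtype='float32')
out2 = fluid.layers.elementwise_add(x=x, y=b, axis=1)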

elementwise_div

paddle.fluid.layers.elementwise_div(*args, **kwargs)

Limited Elementwise Div Operator

The equation is:

$$Out = X / Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_sub

paddle.fluid.layers.elementwise_sub(*args, **kwargs)

Limited Elementwise Sub Operator

The equation is:

$$Out = X - Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_mul

paddle.fluid.layers.elementwise_mul(*args, **kwargs)

Limited Elementwise Mul Operator

The equation is:

$$Out = X \odot Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_max

paddle.fluid.layers.elementwise_max(*args, **kwargs)

Limited Elementwise Max Operator

The equation is:

$$Out = max(X, Y)$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_min

paddle.fluid.layers.elementwise_min(*args, **kwargs)

Limited Elementwise Min Operator

The equation is:

$$Out = min(X, Y)$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same as the shape of \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry different LoD information, but the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

elementwise_pow

paddle.fluid.layers.elementwise_pow(*args, **kwargs)

Limited Elementwise Pow Operator

The equation is:

$$Out = X ^ Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same as the shape of \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry different LoD information, but the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
Returns:

The output of elementwise op.

clip

paddle.fluid.layers.clip(*args, **kwargs)

Clip Operator.

The clip operator limits the value of given input within an interval. The interval is specified with arguments ‘min’ and ‘max’:

$$ Out = \min(\max(X, \text{min}), \text{max}) $$

Parameters:
  • x – (Tensor) The input of clip op. The number of dimensions must be between [1, 9].
  • min (FLOAT) – (float) Minimum value, under which the element is replaced by min.
  • max (FLOAT) – (float) Maximum value, above which the element is replaced by max.
Returns:

(Tensor) The output of clip op with the same shape as input(X)
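A minimal usage sketch (not part of the original reference), assuming a float32 input created with fluid.layers.data:

data = fluid.layers.data(name='data', shape=[1], dtype='float32')
# Clamp every element of `data` into the interval [-0.5, 0.5].
clipped = fluid.layers.clip(x=data, min=-0.5, max=0.5)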

clip_by_norm

paddle.fluid.layers.clip_by_norm(*args, **kwargs)

ClipByNorm Operator.

This operator limits the L2 norm of the input \(X\) within \(max\_norm\). If the L2 norm of \(X\) is less than or equal to \(max\_norm\), \(Out\) will be the same as \(X\). If the L2 norm of \(X\) is greater than \(max\_norm\), \(X\) will be linearly scaled to make the L2 norm of \(Out\) equal to \(max\_norm\), as shown in the following formula:

$$ Out = \frac{max\_norm * X}{norm(X)}, $$

where \(norm(X)\) represents the L2 norm of \(X\).

Examples

data = fluid.layers.data(
    name='data', shape=[2, 4, 6], dtype='float32')
clipped = fluid.layers.clip_by_norm(
    x=data, max_norm=0.5)
Parameters:
  • x – (Tensor) The input of clip_by_norm op. The number of dimensions must be between [1, 9].
  • max_norm (FLOAT) – (float) The maximum norm value.
Returns:

(Tensor) The output of clip_by_norm op with the same shape as input(X)

logical_and

paddle.fluid.layers.logical_and(*args, **kwargs)

logical_and Operator

It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X \land Y$$

Parameters:
  • x – (LoDTensor) Left hand operand of logical_and operator
  • y – (LoDTensor) Right hand operand of logical_and operator
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = X \land Y$$

logical_or

paddle.fluid.layers.logical_or(*args, **kwargs)

logical_or Operator

It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X \lor Y$$

Parameters:
  • x – (LoDTensor) Left hand operand of logical_or operator
  • y – (LoDTensor) Right hand operand of logical_or operator
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = X \lor Y$$

logical_xor

paddle.fluid.layers.logical_xor(*args, **kwargs)

logical_xor Operator

It operates element-wise on X and Y, and returns Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = (X \lor Y) \land \lnot(X \land Y)$$

Parameters:
  • x – (LoDTensor) Left hand operand of logical_xor operator
  • y – (LoDTensor) Right hand operand of logical_xor operator
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = (X \lor Y) \land \lnot(X \land Y)$$

logical_not

paddle.fluid.layers.logical_not(*args, **kwargs)

logical_not Operator

It operates element-wise on X, and returns Out. X and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = \lnot X$$

Parameters:x – (LoDTensor) Operand of logical_not operator
Returns:(LoDTensor) n-dim bool tensor. Each element is $$Out = \lnot X$$
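A minimal usage sketch covering the four logical operators above (not part of the original reference); the boolean inputs are assumed to come from a comparison such as less_than:

limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=3)
j = fluid.layers.fill_constant(shape=[1], dtype='int64', value=7)
a = fluid.layers.less_than(x=i, y=limit)   # bool tensor
b = fluid.layers.less_than(x=j, y=limit)   # bool tensor
both = fluid.layers.logical_and(x=a, y=b)
either = fluid.layers.logical_or(x=a, y=b)
neither = fluid.layers.logical_not(x=a)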

uniform_random_batch_size_like

paddle.fluid.layers.uniform_random_batch_size_like(*args, **kwargs)

UniformRandomBatchSizeLike operator.

This operator initializes a tensor with the same batch size as the input tensor, filling it with random values sampled from a uniform distribution.

Parameters:
  • input – Tensor whose input_dim_idx’th dimension specifies the batch_size
  • shape (INTS) – The shape of the output
  • input_dim_idx (INT) – default 0. The index of input’s batch size dimension
  • output_dim_idx (INT) – default 0. The index of output’s batch size dimension
  • min (FLOAT) – (float, default -1.0) Minimum value of uniform random
  • max (FLOAT) – (float, default 1.0) Maximum value of uniform random
  • seed (INT) – (int, default 0) Random seed used for generating samples. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time.
  • dtype (INT) – (int, default 5(FP32)) Output tensor data type
Returns:

Tensor of the specified shape filled with uniform random values
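A minimal usage sketch (not part of the original reference), assuming dimension 0 of the output is the batch-size dimension (the default input_dim_idx/output_dim_idx):

like = fluid.layers.data(name='like', shape=[13], dtype='float32')
# Dimension 0 of `shape` is overwritten with the batch size of `like`;
# values are drawn uniformly from [min, max].
noise = fluid.layers.uniform_random_batch_size_like(
    input=like, shape=[1, 13], min=-1.0, max=1.0)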

gaussian_random

paddle.fluid.layers.gaussian_random(*args, **kwargs)

GaussianRandom Operator.

Used to initialize tensors with a Gaussian random generator.

Parameters:
  • shape (INTS) – (vector<int>) The dimension of random tensor.
  • mean (FLOAT) – (float, default 0.0) mean of random tensor.
  • std (FLOAT) – (float, default 1.0) std of random tensor.
  • seed (INT) – (int, default 0) Random seed of generator. 0 means use the system-wide seed. Note that if seed is not 0, this operator will always generate the same random numbers every time.
  • dtype (INT) – (int, default 5(FP32)) Output data type.
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output matrix of gaussian random op
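A minimal usage sketch (not part of the original reference):

# Draw a 2 x 3 tensor from a normal distribution with mean 0 and std 2.
noise = fluid.layers.gaussian_random(shape=[2, 3], mean=0.0, std=2.0)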

gaussian_random_batch_size_like

paddle.fluid.layers.gaussian_random_batch_size_like(*args, **kwargs)

Used to initialize tensors with a Gaussian random generator. The default mean of the distribution is 0.0 and the default standard deviation (std) is 1.0. Users can set mean and std via the input arguments.

Parameters:
  • input – Tensor whose input_dim_idx’th dimension specifies the batch_size
  • shape (INTS) – The shape of the output
  • input_dim_idx (INT) – default 0. The index of input’s batch size dimension
  • output_dim_idx (INT) – default 0. The index of output’s batch size dimension
  • mean (FLOAT) – (float, default 0.0) The mean (or center) of the gaussian distribution.
  • std (FLOAT) – (float, default 1.0) The standard deviation (std, or spread) of the gaussian distribution.
  • seed (INT) – (int, default 0) Random seed of generator. 0 means use the system-wide seed. Note that if seed is not 0, this operator will always generate the same random numbers every time.
  • dtype (INT) – (int, default 5(FP32)) Output data type.
Returns:

Tensor of the specified shape filled with Gaussian random values

scatter

paddle.fluid.layers.scatter(*args, **kwargs)

Scatter Operator.

This operator obtains output by updating the input on selected indices on the first axis:

$$ Out = X, \quad Out[Ids] = X[Ids] + Updates $$

Parameters:
  • x – The source input of scatter op
  • ids – The index input of scatter op where X will be updated
  • updates – The update values of scatter op
Returns:

The output of scatter op
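A minimal usage sketch (not part of the original reference), using the parameter names documented above:

x = fluid.layers.data(name='x', shape=[3, 2], append_batch_size=False, dtype='float32')
ids = fluid.layers.data(name='ids', shape=[2], append_batch_size=False, dtype='int32')
updates = fluid.layers.data(name='updates', shape=[2, 2], append_batch_size=False, dtype='float32')
# Rows of x selected by ids are updated along the first axis.
out = fluid.layers.scatter(x=x, ids=ids, updates=updates)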

sum

paddle.fluid.layers.sum(*args, **kwargs)

Sum operator.

This operator sums the input tensors. All the inputs can carry LoD (Level of Details) information. However, the output only shares the LoD information with the first input.

Parameters:
  • x – (vector<Tensor>) The input tensors of sum operator. Duplicatable.
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

(Tensor) The output tensor of sum operator.

slice

paddle.fluid.layers.slice(*args, **kwargs)

Slice Operator.

Produces a slice of the input tensor along multiple axes, similar to numpy: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html Slice uses the axes, starts and ends attributes to specify the start and end index for each axis in the list of axes; it uses this information to slice the input data tensor. If a negative value is passed for any of the start or end indices, it represents the number of elements before the end of that dimension. If the value passed to start or end is larger than n (the number of elements in this dimension), it represents n. For slicing to the end of a dimension with unknown size, it is recommended to pass in INT_MAX. If axes are omitted, they are set to [0, ..., ndim-1]. The following examples explain how slice works:

Case 1:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        axes = [0, 1]
        starts = [1, 0]
        ends = [2, 3]
    Then:
        result = [ [5, 6, 7], ]

Case 2:
    Given:
        data = [ [1, 2, 3, 4], [5, 6, 7, 8], ]
        starts = [0, 1]
        ends = [-1, 1000]
    Then:
        result = [ [2, 3, 4], ]
Parameters:
  • input – Tensor of data to extract slices from.
  • axes (INTS) – (list<int>) Axes that starts and ends apply to. It’s optional.If not present, will be treated as [0, 1, ..., len(starts) - 1].
  • starts (INTS) – (list<int>) Starting indices of corresponding axis in axes
  • ends (INTS) – (list<int>) Ending indices of corresponding axis in axes.
Returns:

Sliced data tensor.
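A minimal usage sketch of Case 1 above (not part of the original reference):

data = fluid.layers.data(name='data', shape=[2, 4], append_batch_size=False, dtype='float32')
# Equivalent to data[1:2, 0:3]: keep row 1 and columns 0..2.
sliced = fluid.layers.slice(input=data, axes=[0, 1], starts=[1, 0], ends=[2, 3])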

polygon_box_transform

paddle.fluid.layers.polygon_box_transform(*args, **kwargs)

PolygonBoxTransform Operator.

PolygonBoxTransform Operator is used to transform the coordinate shift to the real coordinate.

The input is the final geometry output in detection network. We use 2*n numbers to denote the coordinate shift from n corner vertices of the polygon_box to the pixel location. As each distance offset contains two numbers (xi, yi), the geometry output contains 2*n channels.

Parameters:input – The input with shape [batch_size, geometry_channels, height, width]
Returns:The output with the same shape as input

shape

paddle.fluid.layers.shape(*args, **kwargs)

Shape Operator

Get the shape of the input tensor. Only CPU input Tensors are supported at present.

Parameters:input – (Tensor), The input tensor.
Returns:(Tensor), The shape of input tensor, the data type of the shape is int64_t, will be on the same device with the input Tensor.
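A minimal usage sketch (not part of the original reference):

data = fluid.layers.data(name='data', shape=[3, 100, 100], dtype='float32')
# Returns an int64 tensor holding the runtime shape of `data`.
out = fluid.layers.shape(input=data)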

maxout

paddle.fluid.layers.maxout(*args, **kwargs)

MaxOut Operator.

Assumed the input shape is (N, Ci, H, W). The output shape is (N, Co, H, W). Then \(Co = Ci / groups\) and the operator formula is as follows:

$$ y_{si+j} = \max_k x_{gsi + sk + j} \\ g = groups \\ s = \frac{input.size}{num\_channels} \\ 0 \le i < \frac{num\_channels}{groups} \\ 0 \le j < s \\ 0 \le k < groups $$

Please refer to Paper:
Parameters:
  • x – (Tensor) The input tensor of maxout operator. The format of input tensor is NCHW. Where N is batch size, C is the number of channels, H and W is the height and width of feature.
  • groups (INT) – Specifies how many groups the input tensor will be split into in the channel dimension. The number of output channels is the number of input channels divided by groups.
Returns:

(Tensor) The output tensor of maxout operator. The format of the output tensor is also NCHW, where N is batch size, C is the number of channels, and H and W are the height and width of the feature.
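A minimal usage sketch (not part of the original reference), assuming an NCHW feature map whose channel count is divisible by groups:

data = fluid.layers.data(name='data', shape=[256, 32, 32], dtype='float32')
# 256 input channels with groups=2 gives Co = 256 / 2 = 128 output channels.
out = fluid.layers.maxout(x=data, groups=2)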

sigmoid

paddle.fluid.layers.sigmoid(*args, **kwargs)

Sigmoid Activation Operator.

Parameters:
  • x – Input of Sigmoid operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Sigmoid operator

logsigmoid

paddle.fluid.layers.logsigmoid(*args, **kwargs)

LogSigmoid Activation Operator.

Parameters:
  • x – Input of LogSigmoid operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of LogSigmoid operator

exp

paddle.fluid.layers.exp(*args, **kwargs)

Exp Activation Operator.

Parameters:
  • x – Input of Exp operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Exp operator

relu

paddle.fluid.layers.relu(input)

Relu takes one input data (Tensor) and produces one output data (Tensor) where the rectified linear function, y = max(0, input), is applied to the tensor elementwise.

\[Out = \max(0, input)\]
Parameters:input (Variable) – The input tensor.
Returns:The output tensor with the same shape as input.
Return type:Variable

Examples

output = fluid.layers.relu(input)

tanh

paddle.fluid.layers.tanh(*args, **kwargs)

Tanh Activation Operator.

Parameters:
  • x – Input of Tanh operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Tanh operator

tanh_shrink

paddle.fluid.layers.tanh_shrink(*args, **kwargs)

TanhShrink Activation Operator.

Parameters:
  • x – Input of TanhShrink operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of TanhShrink operator

softshrink

paddle.fluid.layers.softshrink(*args, **kwargs)

Softshrink Activation Operator

\[\begin{split}out = \begin{cases} x - \lambda, \text{if } x > \lambda \\ x + \lambda, \text{if } x < -\lambda \\ 0, \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • x – Input of Softshrink operator
  • lambda (FLOAT) – non-negative offset
Returns:

Output of Softshrink operator

sqrt

paddle.fluid.layers.sqrt(*args, **kwargs)

Sqrt Activation Operator.

Parameters:
  • x – Input of Sqrt operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Sqrt operator

abs

paddle.fluid.layers.abs(*args, **kwargs)

Abs Activation Operator.

Parameters:
  • x – Input of Abs operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Abs operator

ceil

paddle.fluid.layers.ceil(*args, **kwargs)

Ceil Activation Operator.

Parameters:
  • x – Input of Ceil operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Ceil operator

floor

paddle.fluid.layers.floor(*args, **kwargs)

Floor Activation Operator.

Parameters:
  • x – Input of Floor operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Floor operator

cos

paddle.fluid.layers.cos(*args, **kwargs)

Cos Activation Operator.

Parameters:
  • x – Input of Cos operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Cos operator

sin

paddle.fluid.layers.sin(*args, **kwargs)

Sin Activation Operator.

Parameters:
  • x – Input of Sin operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Sin operator

round

paddle.fluid.layers.round(*args, **kwargs)

Round Activation Operator.

Parameters:
  • x – Input of Round operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Round operator

reciprocal

paddle.fluid.layers.reciprocal(*args, **kwargs)

Reciprocal Activation Operator.

Parameters:
  • x – Input of Reciprocal operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Reciprocal operator

log

paddle.fluid.layers.log(input)

Calculates the natural log of the given input tensor, element-wise.

\[Out = \ln(input)\]
Parameters:input (Variable) – Input tensor.
Returns:The natural log of the input tensor computed element-wise.
Return type:Variable

Examples

output = fluid.layers.log(input)

square

paddle.fluid.layers.square(*args, **kwargs)

Square Activation Operator.

Parameters:
  • x – Input of Square operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Square operator

softplus

paddle.fluid.layers.softplus(*args, **kwargs)

Softplus Activation Operator.

Parameters:
  • x – Input of Softplus operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Softplus operator

softsign

paddle.fluid.layers.softsign(*args, **kwargs)

Softsign Activation Operator.

Parameters:
  • x – Input of Softsign operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel
Returns:

Output of Softsign operator

brelu

paddle.fluid.layers.brelu(*args, **kwargs)

BRelu Activation Operator.

\(out = \min(\max(x, t_{min}), t_{max})\)

Parameters:
  • x – Input of BRelu operator
  • t_min (FLOAT) – The min marginal value of BRelu
  • t_max (FLOAT) – The max marginal value of BRelu
Returns:

Output of BRelu operator

leaky_relu

paddle.fluid.layers.leaky_relu(*args, **kwargs)

LeakyRelu Activation Operator.

\(out = \max(x, \alpha * x)\)

Parameters:
  • x – Input of LeakyRelu operator
  • alpha (FLOAT) – The small negative slope
Returns:

Output of LeakyRelu operator
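A minimal usage sketch (not part of the original reference):

data = fluid.layers.data(name='data', shape=[1], dtype='float32')
# out = max(x, 0.02 * x)
out = fluid.layers.leaky_relu(x=data, alpha=0.02)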

soft_relu

paddle.fluid.layers.soft_relu(*args, **kwargs)

SoftRelu Activation Operator.

\(out = \ln(1 + \exp(\max(\min(x, threshold), -threshold)))\)

Parameters:
  • x – Input of SoftRelu operator
  • threshold (FLOAT) – The threshold value of SoftRelu
Returns:

Output of SoftRelu operator

elu

paddle.fluid.layers.elu(*args, **kwargs)

ELU Activation Operator.

Applies the following element-wise computation on the input according to https://arxiv.org/abs/1511.07289.

\(out = \max(0, x) + \min(0, \alpha * (e^x - 1))\)

Parameters:
  • x – Input of ELU operator
  • alpha (FLOAT) – The alpha value of ELU
Returns:

Output of ELU operator

relu6

paddle.fluid.layers.relu6(*args, **kwargs)

Relu6 Activation Operator.

\(out = \min(\max(0, x), 6)\)

Parameters:
  • x – Input of Relu6 operator
  • threshold (FLOAT) – The threshold value of Relu6
Returns:

Output of Relu6 operator

pow

paddle.fluid.layers.pow(*args, **kwargs)

Pow Activation Operator.

\(out = x^{factor}\)

Parameters:
  • x – Input of Pow operator
  • factor (FLOAT) – The exponential factor of Pow
Returns:

Output of Pow operator

stanh

paddle.fluid.layers.stanh(*args, **kwargs)

STanh Activation Operator.

$$out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$

Parameters:
  • x – Input of STanh operator
  • scale_a (FLOAT) – The scale parameter \(a\) applied to the input
  • scale_b (FLOAT) – The scale parameter \(b\) applied to the output
Returns:

Output of STanh operator

hard_sigmoid

paddle.fluid.layers.hard_sigmoid(*args, **kwargs)

HardSigmoid Activation Operator.

Segment-wise linear approximation of sigmoid (https://arxiv.org/abs/1603.00391), which is much faster than sigmoid.

\(out = \max(0, \min(1, slope * x + offset))\)

The slope should be positive. The offset can be either positive or negative. The default slope and shift are set according to the above reference. It is recommended to use the defaults for this activation.

Parameters:
  • x – Input of HardSigmoid operator
  • slope (FLOAT) – Slope for linear approximation of sigmoid
  • offset (FLOAT) – Offset for linear approximation of sigmoid
Returns:

Output of HardSigmoid operator

swish

paddle.fluid.layers.swish(*args, **kwargs)

Swish Activation Operator.

$$out = \frac{x}{1 + e^{-\beta x}}$$

Parameters:
  • x – Input of Swish operator
  • beta (FLOAT) – Constant beta of swish operator
Returns:

Output of Swish operator

uniform_random

paddle.fluid.layers.uniform_random(shape, dtype=None, min=None, max=None, seed=None)

This operator initializes a tensor with random values sampled from a uniform distribution. The random values lie in the range [min, max].

Parameters:
  • shape (INTS) – The shape of the output tensor
  • min (FLOAT) – Minimum value of uniform random. [default -1.0].
  • max (FLOAT) – Maximum value of uniform random. [default 1.0].
  • seed (INT) – Random seed used for generating samples. 0 means use a seed generated by the system.Note that if seed is not 0, this operator will always generate the same random numbers every time. [default 0].
  • dtype (INT) – Output tensor data type. [default 5(FP32)].
Returns:

The output tensor of uniform random op

Examples

>>> result = fluid.layers.uniform_random(shape=[32, 784])

hard_shrink

paddle.fluid.layers.hard_shrink(x, threshold=None)

HardShrink activation operator

\[\begin{split}out = \begin{cases} x, \text{if } x > \lambda \\ x, \text{if } x < -\lambda \\ 0, \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • x – Input of HardShrink operator
  • threshold (FLOAT) – The value of threshold for HardShrink. [default: 0.5]
Returns:

Output of HardShrink operator

Examples

>>> data = fluid.layers.data(name="input", shape=[784])
>>> result = fluid.layers.hard_shrink(x=data, threshold=0.3)

cumsum

paddle.fluid.layers.cumsum(x, axis=None, exclusive=None, reverse=None)

The cumulative sum of the elements along a given axis. By default, the first element of the result is the same as the first element of the input. If exclusive is true, the first element of the result is 0.

Parameters:
  • x – Input of cumsum operator
  • axis (INT) – The dimension to accumulate along. -1 means the last dimension [default -1].
  • exclusive (BOOLEAN) – Whether to perform exclusive cumsum. [default false].
  • reverse (BOOLEAN) – If true, the cumsum is performed in the reversed direction. [default false].
Returns:

Output of cumsum operator

Examples

>>> data = fluid.layers.data(name="input", shape=[32, 784])
>>> result = fluid.layers.cumsum(data, axis=0)

thresholded_relu

paddle.fluid.layers.thresholded_relu(x, threshold=None)

ThresholdedRelu activation operator

\[\begin{split}out = \begin{cases} x, \text{if } x > threshold \\ 0, \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • x – Input of ThresholdedRelu operator
  • threshold (FLOAT) – The threshold location of activation. [default 1.0].
Returns:

Output of ThresholdedRelu operator

Examples

>>> data = fluid.layers.data(name="input", shape=[1])
>>> result = fluid.layers.thresholded_relu(data, threshold=0.4)

tensor

create_tensor

paddle.fluid.layers.create_tensor(dtype, name=None, persistable=False)

Create a variable, which will hold a LoDTensor with data type dtype.

Parameters:
  • dtype (string) – ‘float32’|’int32’|..., the data type of the created tensor.
  • name (string) – The name of the created tensor, if not set, the name will be a random unique one.
  • persistable (bool) – Set the persistable flag of the created tensor.
Returns:

The tensor variable storing the created tensor.

Return type:

Variable

Examples

tensor = fluid.layers.create_tensor(dtype='float32')

create_parameter

paddle.fluid.layers.create_parameter(shape, dtype, name=None, attr=None, is_bias=False, default_initializer=None)

Create a parameter. The parameter is a learnable variable, which can have gradient, and can be optimized.

NOTE: this is a very low-level API. It is useful when you create an operator yourself, instead of using layers.

Parameters:
  • shape (list[int]) – shape of the parameter
  • dtype (string) – element type of the parameter
  • attr (ParamAttr) – attributes of the parameter
  • is_bias (bool) – This can affect which default initializer is chosen when default_initializer is None. If is_bias, initializer.Constant(0.0) will be used. Otherwise, Xavier() will be used.
  • default_initializer (Initializer) – initializer for the parameter
Returns:

the created parameter.

Examples

>>> W = fluid.layers.create_parameter(shape=[784, 200], dtype='float32')
>>> data = fluid.layers.data(name="img", shape=[64, 784], append_batch_size=False)
>>> hidden = fluid.layers.matmul(x=data, y=W)

create_global_var

paddle.fluid.layers.create_global_var(shape, value, dtype, persistable=False, force_cpu=False, name=None)

Create a new variable in the global block (block 0).

Parameters:
  • shape (list[int]) – shape of the variable
  • value (float) – the value of the variable. The new created variable will be filled with it.
  • dtype (string) – data type of the variable
  • persistable (bool) – if this variable is persistable. Default: False
  • force_cpu (bool) – force this variable to be on CPU. Default: False
  • name (str|None) – The name of the variable. If set to None the variable name will be generated automatically. Default: None
Returns:

the created Variable

Return type:

Variable

Examples

var = fluid.layers.create_global_var(shape=[2,3], value=1.0, dtype='float32',
                     persistable=True, force_cpu=True, name='new_var')

cast

paddle.fluid.layers.cast(x, dtype)

This layer takes in the Variable x (with data type x.dtype) and casts it to an output Variable with the specified dtype.

Parameters:
  • x (Variable) – The input Variable for casting.
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output Variable.
Returns:

The output Variable after casting.

Return type:

Variable

Examples

data = fluid.layers.data(name='x', shape=[13], dtype='float32')
result = fluid.layers.cast(x=data, dtype='float64')

concat

paddle.fluid.layers.concat(input, axis=0, name=None)

Concat

This function concatenates the input along the axis mentioned and returns that as the output.

Parameters:
  • input (list) – List of tensors to be concatenated
  • axis (int) – Integer axis along which the tensors will be concatenated
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output variable of the concatenation

Return type:

Variable

Examples

out = fluid.layers.concat(input=[Efirst, Esecond, Ethird, Efourth])

sums

paddle.fluid.layers.sums(input, out=None)

This function performs the sum operation on the input and returns the result as the output.

Parameters:
  • input (Variable|list) – The input tensor that has the elements that need to be summed up.
  • out (Variable|None) – Output parameter. The sum result. Default: None
Returns:

the sum of input. The same as the argument ‘out’

Return type:

Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
a0 = fluid.layers.array_read(array=tmp, i=i)
i = fluid.layers.increment(x=i)
a1 = fluid.layers.array_read(array=tmp, i=i)
mean_a0 = fluid.layers.mean(a0)
mean_a1 = fluid.layers.mean(a1)
a_sum = fluid.layers.sums(input=[mean_a0, mean_a1])

assign

paddle.fluid.layers.assign(input, output)

Assign

This function copies the input Variable to the output Variable.

Parameters:
  • input (Variable|numpy.ndarray) – The source variable
  • output (Variable) – The destination variable
Returns:

The destination variable that was supplied as the output.

Return type:

Variable

Examples

out = fluid.layers.create_tensor(dtype='float32')
hidden = fluid.layers.fc(input=data, size=10)
fluid.layers.assign(hidden, out)

fill_constant_batch_size_like

paddle.fluid.layers.fill_constant_batch_size_like(input, shape, dtype, value, input_dim_idx=0, output_dim_idx=0)

This function creates a tensor of specified shape, dtype and batch size, and initializes this with a constant supplied in value. The batch size is obtained from the input tensor.

It also sets stop_gradient to True.

>>> data = fluid.layers.fill_constant_batch_size_like(
>>>             input=like, shape=[1], value=0, dtype='int64')
Parameters:
  • input (Variable) – Tensor whose input_dim_idx’th dimension specifies the batch_size.
  • shape (INTS) – The shape of the output.
  • dtype (INT) – It could be numpy.dtype. Output data type. Default is float32.
  • value (FLOAT) – default 0. The value to be filled.
  • input_dim_idx (INT) – default 0. The index of input’s batch size dimension.
  • output_dim_idx (INT) – default 0. The index of output’s batch size dimension.
Returns:

Tensor of specified shape will be filled with the specified value.

fill_constant

paddle.fluid.layers.fill_constant(shape, dtype, value, force_cpu=False, out=None)

fill_constant

This function creates a tensor with specified shape and dtype, and initializes it with a constant specified by value.

The attribute stop_gradient of the created tensor is set to True.

Parameters:
  • shape (tuple|list|None) – Shape of the output tensor.
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output tensor.
  • value (float) – The constant value used to initialize the output tensor.
  • out (Variable) – The output tensor.
  • force_cpu (True|False) – data should be on CPU if set true.
Returns:

The tensor variable storing the output.

Return type:

Variable

Examples

data = fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')

argmin

paddle.fluid.layers.argmin(x, axis=0)

argmin

This function computes the indices of the minimum elements of the input tensor along the provided axis.

Parameters:
  • x (Variable) – The input to compute the indices of the min elements.
  • axis (int) – Axis to compute indices along.
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[3, 4], dtype='float32')
out = fluid.layers.argmin(x=x, axis=0)
out = fluid.layers.argmin(x=x, axis=-1)

argmax

paddle.fluid.layers.argmax(x, axis=0)

argmax

This function computes the indices of the maximum elements of the input tensor along the provided axis.

Parameters:
  • x (Variable) – The input to compute the indices of the max elements.
  • axis (int) – Axis to compute indices along.
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[3, 4], dtype='float32')
out = fluid.layers.argmax(x=x, axis=0)
out = fluid.layers.argmax(x=x, axis=-1)

ones

paddle.fluid.layers.ones(shape, dtype, force_cpu=False)

ones

This function creates a tensor of specified shape and dtype, and initializes this with 1.

It also sets stop_gradient to True.

Parameters:
  • shape (tuple|list|None) – Shape of output tensor
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.ones(shape=[1], dtype='int64')

zeros

paddle.fluid.layers.zeros(shape, dtype, force_cpu=False)

zeros

This function creates a tensor of specified shape and dtype, and initializes this with 0.

It also sets stop_gradient to True.

Parameters:
  • shape (tuple|list|None) – Shape of output tensor.
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor.
  • force_cpu (bool, default False) – Whether to make output stay on CPU.
Returns:

The tensor variable storing the output.

Return type:

Variable

Examples

data = fluid.layers.zeros(shape=[1], dtype='int64')

detection

prior_box

paddle.fluid.layers.prior_box(input, image, min_sizes, max_sizes=None, aspect_ratios=[1.0], variance=[0.1, 0.1, 0.2, 0.2], flip=False, clip=False, steps=[0.0, 0.0], offset=0.5, name=None)

Prior Box Operator

Generate prior boxes for the SSD (Single Shot MultiBox Detector) algorithm. Each position of the input produces N prior boxes, where N is determined by the count of min_sizes, max_sizes and aspect_ratios. The size of each box lies in the interval (min_size, max_size), and the boxes are generated in sequence according to the aspect_ratios.

Parameters:
  • input (Variable) – The Input Variables, the format is NCHW.
  • image (Variable) – The input image data of PriorBoxOp, the layout is NCHW.
  • min_sizes (list|tuple|float value) – min sizes of generated prior boxes.
  • max_sizes (list|tuple|None) – max sizes of generated prior boxes. Default: None.
  • aspect_ratios (list|tuple|float value) – the aspect ratios of generated prior boxes. Default: [1.].
  • variance (list|tuple) – the variances to be encoded in prior boxes. Default:[0.1, 0.1, 0.2, 0.2].
  • flip (bool) – Whether to flip aspect ratios. Default:False.
  • clip (bool) – Whether to clip out-of-boundary boxes. Default: False.
  • steps (list|tuple) – Prior boxes step across width and height. If steps[0] == 0.0 or steps[1] == 0.0, the prior boxes step across height/width of the input will be automatically calculated. Default: [0., 0.]
  • offset (float) – Prior boxes center offset. Default: 0.5
  • name (str) – Name of the prior box op. Default: None.
Returns:

A tuple with two Variable (boxes, variances)

boxes: the output prior boxes of PriorBox. The layout is [H, W, num_priors, 4]. H is the height of input, W is the width of input, num_priors is the total box count of each position of input.

variances: the expanded variances of PriorBox. The layout is [H, W, num_priors, 4]. H is the height of input, W is the width of input num_priors is the total box count of each position of input

Return type:

tuple

Examples

box, var = fluid.layers.prior_box(
    input=conv1,
    image=images,
    min_sizes=[100.],
    flip=True,
    clip=True)

multi_box_head

paddle.fluid.layers.multi_box_head(inputs, image, base_size, num_classes, aspect_ratios, min_ratio=None, max_ratio=None, min_sizes=None, max_sizes=None, steps=None, step_w=None, step_h=None, offset=0.5, variance=[0.1, 0.1, 0.2, 0.2], flip=True, clip=False, kernel_size=1, pad=0, stride=1, name=None)

Generate prior boxes for the SSD (Single Shot MultiBox Detector) algorithm. For details of this algorithm, please refer to section 2.2 of the SSD paper SSD: Single Shot MultiBox Detector.

Parameters:
  • inputs (list|tuple) – The list of input Variables, the format of all Variables is NCHW.
  • image (Variable) – The input image data of PriorBoxOp, the layout is NCHW.
  • base_size (int) – the base_size is used to get min_size and max_size according to min_ratio and max_ratio.
  • num_classes (int) – The number of classes.
  • aspect_ratios (list|tuple) – the aspect ratios of generated prior boxes. The length of input and aspect_ratios must be equal.
  • min_ratio (int) – the min ratio of generated prior boxes.
  • max_ratio (int) – the max ratio of generated prior boxes.
  • min_sizes (list|tuple|None) – If len(inputs) <=2, min_sizes must be set up, and the length of min_sizes should equal to the length of inputs. Default: None.
  • max_sizes (list|tuple|None) – If len(inputs) <= 2, max_sizes must be set up, and the length of max_sizes should equal the length of inputs. Default: None.
  • steps (list|tuple) – If step_w and step_h are the same, step_w and step_h can be replaced by steps.
  • step_w (list|tuple) – Prior boxes step across width. If step_w[i] == 0.0, the prior boxes step across width of the inputs[i] will be automatically calculated. Default: None.
  • step_h (list|tuple) – Prior boxes step across height, If step_h[i] == 0.0, the prior boxes step across height of the inputs[i] will be automatically calculated. Default: None.
  • offset (float) – Prior boxes center offset. Default: 0.5
  • variance (list|tuple) – the variances to be encoded in prior boxes. Default:[0.1, 0.1, 0.2, 0.2].
  • flip (bool) – Whether to flip aspect ratios. Default: True.
  • clip (bool) – Whether to clip out-of-boundary boxes. Default: False.
  • kernel_size (int) – The kernel size of conv2d. Default: 1.
  • pad (int|list|tuple) – The padding of conv2d. Default:0.
  • stride (int|list|tuple) – The stride of conv2d. Default:1,
  • name (str) – Name of the prior box layer. Default: None.
Returns:

A tuple with four Variables. (mbox_loc, mbox_conf, boxes, variances)

mbox_loc: The predicted boxes’ location of the inputs. The layout is [N, H*W*Priors, 4]. where Priors is the number of predicted boxes each position of each input.

mbox_conf: The predicted boxes’ confidence of the inputs. The layout is [N, H*W*Priors, C]. where Priors is the number of predicted boxes each position of each input and C is the number of Classes.

boxes: the output prior boxes of PriorBox. The layout is [num_priors, 4]. num_priors is the total box count of each position of inputs.

variances: the expanded variances of PriorBox. The layout is [num_priors, 4]. num_priors is the total box count of each position of inputs

Return type:

tuple

Examples

mbox_locs, mbox_confs, box, var = fluid.layers.multi_box_head(
  inputs=[conv1, conv2, conv3, conv4, conv5, conv5],
  image=images,
  num_classes=21,
  min_ratio=20,
  max_ratio=90,
  aspect_ratios=[[2.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]],
  base_size=300,
  offset=0.5,
  flip=True,
  clip=True)

bipartite_match

paddle.fluid.layers.bipartite_match(dist_matrix, match_type=None, dist_threshold=None, name=None)

This operator implements a greedy bipartite matching algorithm, which is used to obtain the matching with the maximum distance based on the input distance matrix. For an input 2D matrix, the bipartite matching algorithm can find the matched column for each row (matched means the largest distance), and can also find the matched row for each column. This operator only calculates matched indices from column to row. For each instance, the number of matched indices is the column number of the input distance matrix.

There are two outputs: matched indices and distance. Briefly, this algorithm matches the best (maximum-distance) row entity to each column entity, and the matched indices are not duplicated in each row of ColToRowMatchIndices. If a column entity is not matched to any row entity, -1 is set in ColToRowMatchIndices.

NOTE: the input DistMat can be LoDTensor (with LoD) or Tensor. If LoDTensor with LoD, the height of ColToRowMatchIndices is batch size. If Tensor, the height of ColToRowMatchIndices is 1.

NOTE: This API is a very low level API. It is used by the ssd_loss layer. Please consider using ssd_loss instead.

Parameters:
  • dist_matrix (Variable) –

    This input is a 2-D LoDTensor with shape [K, M]. It is pair-wise distance matrix between the entities represented by each row and each column. For example, assumed one entity is A with shape [K], another entity is B with shape [M]. The dist_matrix[i][j] is the distance between A[i] and B[j]. The bigger the distance is, the better matching the pairs are.

    NOTE: This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities.

  • match_type (string|None) – The type of matching method, should be ‘bipartite’ or ‘per_prediction’. [default ‘bipartite’].
  • dist_threshold (float|None) – If match_type is ‘per_prediction’, this threshold is to determine the extra matching bboxes based on the maximum distance, 0.5 by default.
Returns:

a tuple with two elements is returned. The first is matched_indices, the second is matched_distance.

The matched_indices is a 2-D Tensor with shape [N, M] in int type. N is the batch size. If match_indices[i][j] is -1, it means B[j] does not match any entity in i-th instance. Otherwise, it means B[j] is matched to row match_indices[i][j] in i-th instance. The row number of i-th instance is saved in match_indices[i][j].

The matched_distance is a 2-D Tensor with shape [N, M] in float type . N is batch size. If match_indices[i][j] is -1, match_distance[i][j] is also -1.0. Otherwise, assumed match_distance[i][j] = d, and the row offsets of each instance are called LoD. Then match_distance[i][j] = dist_matrix[d+LoD[i]][j].

Return type:

tuple

Examples

>>> x = fluid.layers.data(name='x', shape=[4], dtype='float32')
>>> y = fluid.layers.data(name='y', shape=[4], dtype='float32')
>>> iou = fluid.layers.iou_similarity(x=x, y=y)
>>> matched_indices, matched_dist = fluid.layers.bipartite_match(iou)

target_assign

paddle.fluid.layers.target_assign(input, matched_indices, negative_indices=None, mismatch_value=None, name=None)

This operator assigns classification and regression targets to each prediction for the given target bounding boxes or labels, as well as weights to each prediction. The weights are used to specify which predictions do not contribute to the training loss.

For each instance, the outputs out and out_weight are assigned based on match_indices and negative_indices. Assuming that the row offset for each instance in input is called lod, this operator assigns classification/regression targets by performing the following steps:

  1. Assigning all outputs based on match_indices:
If id = match_indices[i][j] > 0,

    out[i][j][0 : K] = X[lod[i] + id][j % P][0 : K]
    out_weight[i][j] = 1.

Otherwise,

    out[i][j][0 : K] = {mismatch_value, mismatch_value, ...}
    out_weight[i][j] = 0.
  2. Assigning out_weight based on neg_indices if neg_indices is provided:

Assumed that the row offset for each instance in neg_indices is called neg_lod, for i-th instance and each id of neg_indices in this instance:

out[i][id][0 : K] = {mismatch_value, mismatch_value, ...}
out_weight[i][id] = 1.0
Parameters:
  • input (Variable) – This input is a 3D LoDTensor with shape [M, P, K].
  • matched_indices (Variable) – (Tensor<int>) The input matched indices is a 2-D Tensor<int32> with shape [N, P]. If MatchIndices[i][j] is -1, the j-th entity of column is not matched to any entity of row in the i-th instance.
  • negative_indices (Variable) – The input negative example indices are an optional input with shape [Neg, 1] and int32 type, where Neg is the total number of negative example indices.
  • mismatch_value (float32) – Fill this value to the mismatched location.
Returns:

A tuple(out, out_weight) is returned. out is a 3D Tensor with shape [N, P, K], N and P is the same as they are in neg_indices, K is the same as it in input of X. If match_indices[i][j]. out_weight is the weight for output with the shape of [N, P, 1].

Return type:

tuple

Examples

matched_indices, matched_dist = fluid.layers.bipartite_match(iou)
gt = layers.data(
            name='gt', shape=[1, 1], dtype='int32', lod_level=1)
trg, trg_weight = layers.target_assign(
                gt, matched_indices, mismatch_value=0)

detection_output

paddle.fluid.layers.detection_output(loc, scores, prior_box, prior_box_var, background_label=0, nms_threshold=0.3, nms_top_k=400, keep_top_k=200, score_threshold=0.01, nms_eta=1.0)

Detection Output Layer for Single Shot Multibox Detector (SSD).

This operation is to get the detection results by performing following two steps:

  1. Decode input bounding box predictions according to the prior boxes.
  2. Get the final detection results by applying multi-class non maximum suppression (NMS).

Please note, this operation doesn’t clip the final output bounding boxes to the image window.

Parameters:
  • loc (Variable) – A 3-D Tensor with shape [N, M, 4] represents the predicted locations of M bounding bboxes. N is the batch size, and each bounding box has four coordinate values and the layout is [xmin, ymin, xmax, ymax].
  • scores (Variable) – A 3-D Tensor with shape [N, M, C] represents the predicted confidence predictions. N is the batch size, C is the class number, M is the number of bounding boxes. For each category there are in total M scores corresponding to the M bounding boxes.
  • prior_box (Variable) – A 2-D Tensor with shape [M, 4] holds M boxes, each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the anchor box, if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the anchor box.
  • prior_box_var (Variable) – A 2-D Tensor with shape [M, 4] holds M group of variance.
  • background_label (float) – The index of background label, the background label will be ignored. If set to -1, then all categories will be considered.
  • nms_threshold (float) – The threshold to be used in NMS.
  • nms_top_k (int) – Maximum number of detections to be kept according to the confidences after filtering detections based on score_threshold.
  • keep_top_k (int) – Number of total bboxes to be kept per image after NMS step. -1 means keeping all bboxes after NMS step.
  • score_threshold (float) – Threshold to filter out bounding boxes with low confidence score. If not provided, consider all boxes.
  • nms_eta (float) – The parameter for adaptive NMS.
Returns:

The detection outputs is a LoDTensor with shape [No, 6]. Each row has six values: [label, confidence, xmin, ymin, xmax, ymax]. No is the total number of detections in this mini-batch. For each instance, the offsets in first dimension are called LoD, the offset number is N + 1, N is the batch size. The i-th image has LoD[i + 1] - LoD[i] detected results, if it is 0, the i-th image has no detected results. If all images have not detected results, all the elements in LoD are 0, and output tensor only contains one value, which is -1.

Return type:

Variable

Examples

pb = layers.data(name='prior_box', shape=[10, 4],
             append_batch_size=False, dtype='float32')
pbv = layers.data(name='prior_box_var', shape=[10, 4],
              append_batch_size=False, dtype='float32')
loc = layers.data(name='target_box', shape=[2, 21, 4],
              append_batch_size=False, dtype='float32')
scores = layers.data(name='scores', shape=[2, 21, 10],
              append_batch_size=False, dtype='float32')
nmsed_outs = fluid.layers.detection_output(scores=scores,
                           loc=loc,
                           prior_box=pb,
                           prior_box_var=pbv)

ssd_loss

paddle.fluid.layers.ssd_loss(location, confidence, gt_box, gt_label, prior_box, prior_box_var=None, background_label=0, overlap_threshold=0.5, neg_pos_ratio=3.0, neg_overlap=0.5, loc_loss_weight=1.0, conf_loss_weight=1.0, match_type='per_prediction', mining_type='max_negative', normalize=True, sample_size=None)

Multi-box loss layer for object detection algorithm of SSD

This layer computes the detection loss for SSD given the location offset predictions, confidence predictions, prior boxes, ground-truth bounding boxes and labels, and the type of hard example mining. The returned loss is a weighted sum of the localization loss (or regression loss) and the confidence loss (or classification loss), computed by performing the following steps:

  1. Find matched bounding boxes by the bipartite matching algorithm.

1.1 Compute IOU similarity between ground-truth boxes and prior boxes.

1.2 Compute matched bounding boxes by the bipartite matching algorithm.

  2. Compute confidence for mining hard examples.

2.1 Get the target label based on matched indices.

2.2 Compute confidence loss.

  3. Apply hard example mining to get the negative example indices and update the matched indices.

  4. Assign classification and regression targets.

4.1 Encode bboxes according to the prior boxes.

4.2 Assign regression targets.

4.3 Assign classification targets.

  5. Compute the overall objective loss.

5.1 Compute confidence loss.

5.2 Compute localization loss.

5.3 Compute the overall weighted loss.

Parameters:
  • location (Variable) – The location predictions are a 3D Tensor with shape [N, Np, 4], N is the batch size, Np is total number of predictions for each instance. 4 is the number of coordinate values, the layout is [xmin, ymin, xmax, ymax].
  • confidence (Variable) – The confidence predictions are a 3D Tensor with shape [N, Np, C], N and Np are the same as they are in location, C is the class number.
  • gt_box (Variable) – The ground-truth boudding boxes (bboxes) are a 2D LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth bboxes of mini-batch input.
  • gt_label (Variable) – The ground-truth labels are a 2D LoDTensor with shape [Ng, 1].
  • prior_box (Variable) – The prior boxes are a 2D Tensor with shape [Np, 4].
  • prior_box_var (Variable) – The variance of prior boxes are a 2D Tensor with shape [Np, 4].
  • background_label (int) – The index of background label, 0 by default.
  • overlap_threshold (float) – If match_type is ‘per_prediction’, use overlap_threshold to determine the extra matching bboxes when finding matched boxes. 0.5 by default.
  • neg_pos_ratio (float) – The ratio of the negative boxes to the positive boxes, used only when mining_type is ‘max_negative’, 3.0 by default.
  • neg_overlap (float) – The negative overlap upper bound for the unmatched predictions. Use only when mining_type is ‘max_negative’, 0.5 by default.
  • loc_loss_weight (float) – Weight for localization loss, 1.0 by default.
  • conf_loss_weight (float) – Weight for confidence loss, 1.0 by default.
  • match_type (str) – The type of matching method during training, should be ‘bipartite’ or ‘per_prediction’, ‘per_prediction’ by default.
  • mining_type (str) – The hard example mining type, should be ‘hard_example’ or ‘max_negative’, now only support max_negative.
  • normalize (bool) – Whether to normalize the SSD loss by the total number of output locations, True by default.
  • sample_size (int) – The max sample size of negative box, used only when mining_type is ‘hard_example’.
Returns:

The weighted sum of the localization loss and confidence loss, with shape [N * Np, 1], N and Np are the same as they are in location.

Raises:

ValueError – If mining_type is ‘hard_example’; currently only the ‘max_negative’ mining type is supported.

Examples

>>> pb = fluid.layers.data(
>>>                   name='prior_box',
>>>                   shape=[10, 4],
>>>                   append_batch_size=False,
>>>                   dtype='float32')
>>> pbv = fluid.layers.data(
>>>                   name='prior_box_var',
>>>                   shape=[10, 4],
>>>                   append_batch_size=False,
>>>                   dtype='float32')
>>> loc = fluid.layers.data(name='target_box', shape=[10, 4], dtype='float32')
>>> scores = fluid.layers.data(name='scores', shape=[10, 21], dtype='float32')
>>> gt_box = fluid.layers.data(
>>>         name='gt_box', shape=[4], lod_level=1, dtype='float32')
>>> gt_label = fluid.layers.data(
>>>         name='gt_label', shape=[1], lod_level=1, dtype='float32')
>>> loss = fluid.layers.ssd_loss(loc, scores, gt_box, gt_label, pb, pbv)

detection_map

paddle.fluid.layers.detection_map(detect_res, label, class_num, background_label=0, overlap_threshold=0.3, evaluate_difficult=True, has_state=None, input_states=None, out_states=None, ap_version='integral')

Detection mAP evaluate operator. The general steps are as follows: first, calculate the true positives and false positives according to the input detections and labels; then, calculate the mAP evaluation value. Both the ‘11 point’ and ‘integral’ mAP algorithms are supported. Please get more information from the following articles: https://sanchom.wordpress.com/tag/average-precision/ https://arxiv.org/abs/1512.02325

Parameters:
  • detect_res – (LoDTensor) A 2-D LoDTensor with shape [M, 6] represents the detections. Each row has 6 values: [label, confidence, xmin, ymin, xmax, ymax], M is the total number of detection results in this mini-batch. For each instance, the offsets in the first dimension are called LoD, the number of offsets is N + 1; if LoD[i + 1] - LoD[i] == 0, it means there is no detected data.
  • label – (LoDTensor) A 2-D LoDTensor represents the labeled ground-truth data. Each row has 6 values: [label, xmin, ymin, xmax, ymax, is_difficult] or 5 values: [label, xmin, ymin, xmax, ymax], where N is the total number of ground-truth data in this mini-batch. For each instance, the offsets in the first dimension are called LoD, the number of offsets is N + 1; if LoD[i + 1] - LoD[i] == 0, it means there is no ground-truth data.
  • class_num – (int) The class number
  • background_label – (int, default: 0) The index of background label; the background label will be ignored. If set to -1, then all categories will be considered.
  • overlap_threshold – (float) The lower bound jaccard overlap threshold of detection output and ground-truth data
  • evaluate_difficult – (bool, default true) Switch to control whether the difficult data is evaluated
  • has_state – (Tensor<int>) A tensor with shape [1], 0 means ignoring input states, which including PosCount, TruePos, FalsePos
  • input_states – If not None, It contains 3 elements: 1. pos_count (Tensor) A tensor with shape [Ncls, 1], store the input positive example count of each class, Ncls is the count of input classification. This input is used to pass the AccumPosCount generated by the previous mini-batch when the multi mini-batches cumulative calculation carried out. When the input(PosCount) is empty, the cumulative calculation is not carried out, and only the results of the current mini-batch are calculated. 2. true_pos (LoDTensor) A 2-D LoDTensor with shape [Ntp, 2], store the input true positive example of each class.This input is used to pass the AccumTruePos generated by the previous mini-batch when the multi mini-batches cumulative calculation carried out. . 3. false_pos (LoDTensor) A 2-D LoDTensor with shape [Nfp, 2], store the input false positive example of each class.This input is used to pass the AccumFalsePos generated by the previous mini-batch when the multi mini-batches cumulative calculation carried out. .
  • out_states – If not None, it contains 3 elements. 1. accum_pos_count (Tensor) A tensor with shape [Ncls, 1], store the positive example count of each class. It combines the input input(PosCount) and the positive example count computed from input(Detection) and input(Label). 2. accum_true_pos (LoDTensor) A LoDTensor with shape [Ntp’, 2], store the true positive example of each class. It combines the input(TruePos) and the true positive examples computed from input(Detection) and input(Label). 3. accum_false_pos (LoDTensor) A LoDTensor with shape [Nfp’, 2], store the false positive example of each class. It combines the input(FalsePos) and the false positive examples computed from input(Detection) and input(Label).
  • ap_version – (string, default ‘integral’) The AP algorithm type, ‘integral’ or ‘11point’
Returns:

(Tensor) A tensor with shape [1], store the mAP evaluate result of the detection

Examples

detect_res = fluid.layers.data(
    name='detect_res',
    shape=[10, 6],
    append_batch_size=False,
    dtype='float32')
label = fluid.layers.data(
    name='label',
    shape=[10, 6],
    append_batch_size=False,
    dtype='float32')

map_out = fluid.layers.detection_map(detect_res, label, 21)

iou_similarity

paddle.fluid.layers.iou_similarity(*args, **kwargs)

IOU Similarity Operator

Computes intersection-over-union (IOU) between two box lists. Box list ‘X’ should be a LoDTensor and ‘Y’ is a common Tensor; boxes in ‘Y’ are shared by all instances of the batched inputs of X. Given two boxes A and B, the calculation of IOU is as follows:

$$ IOU(A, B) = \frac{area(A\cap B)}{area(A)+area(B)-area(A\cap B)} $$

Parameters:
  • x – (LoDTensor, default LoDTensor<float>) Box list X is a 2-D LoDTensor with shape [N, 4] holds N boxes, each box is represented as [xmin, ymin, xmax, ymax], the shape of X is [N, 4]. [xmin, ymin] is the left top coordinate of the box if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the box. This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities.
  • y – (Tensor, default Tensor<float>) Box list Y holds M boxes, each box is represented as [xmin, ymin, xmax, ymax], the shape of Y is [M, 4]. [xmin, ymin] is the left top coordinate of the box if the input is an image feature map, and [xmax, ymax] is the right bottom coordinate of the box.
Returns:

(LoDTensor, the lod is same as input X) The output of iou_similarity op, a tensor with shape [N, M] representing pairwise iou scores.

box_coder

paddle.fluid.layers.box_coder(*args, **kwargs)

Bounding Box Coder.

Encode/Decode the target bounding box with the priorbox information.

The Encoding schema described below:

ox = (tx - px) / pw / pxv

oy = (ty - py) / ph / pyv

ow = log(abs(tw / pw)) / pwv

oh = log(abs(th / ph)) / phv

The Decoding schema described below:

ox = (pw * pxv * tx + px) - tw / 2

oy = (ph * pyv * ty + py) - th / 2

ow = exp(pwv * tw) * pw + tw / 2

oh = exp(phv * th) * ph + th / 2

where tx, ty, tw, th denote the target box’s center coordinates, width and height respectively. Similarly, px, py, pw, ph denote the priorbox’s (anchor) center coordinates, width and height. pxv, pyv, pwv, phv denote the variance of the priorbox and ox, oy, ow, oh denote the encoded/decoded coordinates, width and height.
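
A minimal NumPy sketch of the encoding schema above, for a single target box and a single prior box (corner-form [xmin, ymin, xmax, ymax] inputs and explicit variances are assumed; the actual operator additionally handles batching and the box_normalized flag):

import numpy as np

def encode_center_size(target_box, prior_box, prior_var=(1.0, 1.0, 1.0, 1.0)):
    # Convert both boxes from corner form to center/size form.
    pw = prior_box[2] - prior_box[0]
    ph = prior_box[3] - prior_box[1]
    px = prior_box[0] + pw / 2.0
    py = prior_box[1] + ph / 2.0
    tw = target_box[2] - target_box[0]
    th = target_box[3] - target_box[1]
    tx = target_box[0] + tw / 2.0
    ty = target_box[1] + th / 2.0
    pxv, pyv, pwv, phv = prior_var
    # Apply the encoding formulas listed above.
    ox = (tx - px) / pw / pxv
    oy = (ty - py) / ph / pyv
    ow = np.log(np.abs(tw / pw)) / pwv
    oh = np.log(np.abs(th / ph)) / phv
    return np.array([ox, oy, ow, oh])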

Parameters:
  • prior_box – (Tensor, default Tensor<float>) Box list PriorBox is a 2-D Tensor with shape [M, 4] that holds M boxes, each represented as [xmin, ymin, xmax, ymax]. [xmin, ymin] is the left top coordinate of the anchor box; if the input is an image feature map, it is close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the anchor box.
  • prior_box_var – (Tensor, default Tensor<float>, optional) PriorBoxVar is a 2-D Tensor with shape [M, 4] that holds M groups of variance. If not provided, all elements of PriorBoxVar default to 1.
  • target_box – (LoDTensor or Tensor) This input can be a 2-D LoDTensor with shape [N, 4] when code_type is ‘encode_center_size’, or a 3-D Tensor with shape [N, M, 4] when code_type is ‘decode_center_size’. Each box is represented as [xmin, ymin, xmax, ymax]. [xmin, ymin] is the left top coordinate of the box; if the input is an image feature map, it is close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the box. This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain a different number of entities.
  • code_type (STRING) – (string, default encode_center_size) The code type used with the target box.
  • box_normalized (BOOLEAN) – (bool, default true) Whether to treat the prior box as a normalized box.
Returns:

(LoDTensor or Tensor) When code_type is ‘encode_center_size’, the output tensor of the box_coder op has shape [N, M, 4], representing the result of N target boxes encoded with M prior boxes and variances. When code_type is ‘decode_center_size’, N represents the batch size and M represents the number of decoded boxes.
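
A minimal usage sketch in the style of the other examples in this document (keyword names follow the parameter names above; the data-layer names, shapes, and the number of prior boxes are illustrative):

prior_box = fluid.layers.data(
    name='prior_box', shape=[10, 4],
    append_batch_size=False, dtype='float32')
prior_box_var = fluid.layers.data(
    name='prior_box_var', shape=[10, 4],
    append_batch_size=False, dtype='float32')
target_box = fluid.layers.data(
    name='target_box', shape=[4], dtype='float32', lod_level=1)
encoded = fluid.layers.box_coder(
    prior_box=prior_box,
    prior_box_var=prior_box_var,
    target_box=target_box,
    code_type='encode_center_size')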

learning_rate_scheduler

exponential_decay

paddle.fluid.layers.exponential_decay(learning_rate, decay_steps, decay_rate, staircase=False)

Applies exponential decay to the learning rate.

When training a model, it is often recommended to lower the learning rate as the training progresses. By using this function, the learning rate will be decayed by ‘decay_rate’ every ‘decay_steps’ steps.

>>> if staircase == True:
>>>     decayed_learning_rate = learning_rate * decay_rate ^ floor(global_step / decay_steps)
>>> else:
>>>     decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
Parameters:
  • learning_rate (Variable|float) – The initial learning rate.
  • decay_steps (int) – See the decay computation above.
  • decay_rate (float) – The decay rate. See the decay computation above.
  • staircase (Boolean) – If True, decay the learning rate at discrete intervals. Default: False
Returns:

The decayed learning rate

Return type:

Variable

Examples

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.exponential_decay(
          learning_rate=base_lr,
          decay_steps=10000,
          decay_rate=0.5,
          staircase=True))
sgd_optimizer.minimize(avg_cost)

natural_exp_decay

paddle.fluid.layers.natural_exp_decay(learning_rate, decay_steps, decay_rate, staircase=False)

Applies natural exponential decay to the initial learning rate.

>>> if not staircase:
>>>     decayed_learning_rate = learning_rate * exp(- decay_rate * (global_step / decay_steps))
>>> else:
>>>     decayed_learning_rate = learning_rate * exp(- decay_rate * floor(global_step / decay_steps))
Parameters:
  • learning_rate – A scalar float32 value or a Variable. This will be the initial learning rate during training
  • decay_steps – A Python int32 number.
  • decay_rate – A Python float number.
  • staircase – Boolean. If True, decay the learning rate at discrete intervals. Default: False.
Returns:

The decayed learning rate
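
A minimal usage sketch, mirroring the exponential_decay example above (avg_cost is assumed to be the loss Variable of the network being trained):

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.natural_exp_decay(
          learning_rate=base_lr,
          decay_steps=10000,
          decay_rate=0.5,
          staircase=True))
sgd_optimizer.minimize(avg_cost)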

inverse_time_decay

paddle.fluid.layers.inverse_time_decay(learning_rate, decay_steps, decay_rate, staircase=False)

Applies inverse time decay to the initial learning rate.

When training a model, it is often recommended to lower the learning rate as the training progresses. By using this function, an inverse decay function will be applied to the initial learning rate.

>>> if staircase == True:
>>>     decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_step))
>>> else:
>>>     decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_step)
Parameters:
  • learning_rate (Variable|float) – The initial learning rate.
  • decay_steps (int) – See the decay computation above.
  • decay_rate (float) – The decay rate. See the decay computation above.
  • staircase (Boolean) – If True, decay the learning rate at discrete intervals. Default: False
Returns:

The decayed learning rate

Return type:

Variable

Examples

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.inverse_time_decay(
          learning_rate=base_lr,
          decay_steps=10000,
          decay_rate=0.5,
          staircase=True))
sgd_optimizer.minimize(avg_cost)

polynomial_decay

paddle.fluid.layers.polynomial_decay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False)

Applies polynomial decay to the initial learning rate.

if cycle:
  decay_steps = decay_steps * ceil(global_step / decay_steps)
else:
  global_step = min(global_step, decay_steps)

decayed_learning_rate = (learning_rate - end_learning_rate) *
     (1 - global_step / decay_steps) ^ power + end_learning_rate
Parameters:
  • learning_rate (Variable|float32) – A scalar float32 value or a Variable. This will be the initial learning rate during training.
  • decay_steps (int32) – A Python int32 number.
  • end_learning_rate (float) – A Python float number.
  • power (float) – A Python float number.
  • cycle (bool) – If True, apply the decay cyclically by extending decay_steps (see the computation above) instead of holding the learning rate at end_learning_rate once decay_steps is reached.
Returns:

The decayed learning rate

Return type:

Variable
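
A minimal usage sketch in the style of the other decay examples (avg_cost is assumed to be the loss Variable of the network being trained; the numeric values are illustrative):

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.polynomial_decay(
          learning_rate=base_lr,
          decay_steps=10000,
          end_learning_rate=0.001,
          power=1.0,
          cycle=False))
sgd_optimizer.minimize(avg_cost)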

piecewise_decay

paddle.fluid.layers.piecewise_decay(boundaries, values)

Applies piecewise decay to the initial learning rate.

The algorithm can be described as the code below.

boundaries = [10000, 20000]
values = [1.0, 0.5, 0.1]
if step < 10000:
    learning_rate = 1.0
elif 10000 <= step < 20000:
    learning_rate = 0.5
else:
    learning_rate = 0.1
Parameters:
  • boundaries – A list of global step numbers at which the learning rate changes.
  • values – A list of learning rate values that will be picked between the step boundaries; it must contain one more element than boundaries.
Returns:

The decayed learning rate.
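
A minimal usage sketch in the style of the other decay examples, using the boundaries and values from the pseudocode above (avg_cost is assumed to be the loss Variable of the network being trained):

sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.piecewise_decay(
          boundaries=[10000, 20000],
          values=[1.0, 0.5, 0.1]))
sgd_optimizer.minimize(avg_cost)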

noam_decay

paddle.fluid.layers.noam_decay(d_model, warmup_steps)

Noam decay method. The numpy implementation of noam decay is as follows.

>>> import numpy as np
>>> lr_value = np.power(d_model, -0.5) * np.min([
>>>                         np.power(current_steps, -0.5),
>>>                         np.power(warmup_steps, -1.5) * current_steps])

Please refer to Attention Is All You Need.

Parameters:
  • d_model (Variable) – The dimensionality of the input and output of the model.
  • warmup_steps (Variable) – The number of warmup steps, a hyperparameter.
Returns:

The decayed learning rate.
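
A minimal usage sketch in the style of the other decay examples (the numeric values are illustrative, taken from the Transformer setting; the documentation above types d_model and warmup_steps as Variables, so they may need to be created as such; avg_cost is assumed to be the loss Variable of the network being trained):

sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.noam_decay(
          d_model=512,
          warmup_steps=4000))
sgd_optimizer.minimize(avg_cost)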

metric

accuracy

paddle.fluid.layers.accuracy(input, label, k=1, correct=None, total=None)

accuracy layer. Refer to https://en.wikipedia.org/wiki/Precision_and_recall

This function computes the accuracy using the input and label. If the correct label occurs in the top k predictions, then correct is incremented by one. Note: the dtype of accuracy is determined by input; the input and label dtypes can be different.

Parameters:
  • input (Variable) – The input of the accuracy layer, which is the predictions of the network. Carrying LoD information is supported.
  • label (Variable) – The label of dataset.
  • k (int) – The top k predictions for each class will be checked.
  • correct (Variable) – The correct predictions count.
  • total (Variable) – The total entries count.
Returns:

The correct rate.

Return type:

Variable

Examples

data = fluid.layers.data(name="data", shape=[-1, 32, 32], dtype="float32")
label = fluid.layers.data(name="label", shape=[-1, 1], dtype="int32")
predict = fluid.layers.fc(input=data, size=10)
acc = fluid.layers.accuracy(input=predict, label=label, k=5)

auc

paddle.fluid.layers.auc(input, label, curve='ROC', num_thresholds=200)

Area Under the Curve (AUC) Layer

This implementation computes the AUC according to forward output and label. It is used very widely in binary classification evaluation.

Note: If the input label contains values other than 0 and 1, it will be cast to bool.

There are two types of possible curves:

  1. ROC: Receiver operating characteristic;
  2. PR: Precision Recall
Parameters:
  • input (Variable) – A floating-point 2D Variable, values are in the range [0, 1]. Each row is sorted in descending order. This input should be the output of topk. Typically, this Variable indicates the probability of each label.
  • label (Variable) – A 2D int Variable indicating the label of the training data. The height is batch size and width is always 1.
  • curve (str) – Curve type, can be ‘ROC’ or ‘PR’. Default ‘ROC’.
  • num_thresholds (int) – The number of thresholds to use when discretizing the roc curve. Default 200.
Returns:

A scalar representing the current AUC.

Return type:

Variable

Examples

# network is a binary classification model and label is the ground truth
prediction = network(image, is_infer=True)
auc_out = fluid.layers.auc(input=prediction, label=label)