fluid.layers

control_flow

While

class paddle.fluid.layers.While(cond, is_test=False, name=None)

while loop control flow.

Parameters:
  • cond (Variable) – A boolean tensor Variable used as the loop condition; the loop continues while it holds true.
  • is_test (bool) – A flag indicating whether execution is in test phase.
  • name (str) – The name of this layer.

Examples

i = layers.zeros(shape=[1], dtype='int64')  # loop counter
d0 = layers.data("d0", shape=[10], dtype='float32')
data_array = layers.array_write(x=d0, i=i)
array_len = layers.fill_constant(shape=[1], dtype='int64', value=3)

cond = layers.less_than(x=i, y=array_len)
while_op = layers.While(cond=cond)
with while_op.block():
    d = layers.array_read(array=data_array, i=i)
    i = layers.increment(x=i, in_place=True)
    layers.array_write(d, i=i, array=data_array)
    layers.less_than(x=i, y=array_len, cond=cond)  # update the loop condition

Switch

class paddle.fluid.layers.Switch(name=None)

The Switch class works like an if-elif-else chain. It can be used, for example, in a learning rate scheduler to modify the learning rate.

The Semantics:

  1. A switch control-flow checks cases one-by-one.
  2. The condition of each case is a boolean value, which is a scalar Variable.
  3. It runs the first matched case, or the default case if there is one.
  4. Once it matches a case, it runs the corresponding branch and only that branch.

Examples

lr = fluid.layers.tensor.create_global_var(
    shape=[1],
    value=0.0,
    dtype='float32',
    persistable=True,
    name="learning_rate")
zero_var = fluid.layers.tensor.fill_constant(
    shape=[1], dtype='float32', value=0.0)
one_var = fluid.layers.tensor.fill_constant(
    shape=[1], dtype='float32', value=1.0)
two_var = fluid.layers.tensor.fill_constant(
    shape=[1], dtype='float32', value=2.0)

# global_step is assumed to be an existing counter Variable
with fluid.layers.control_flow.Switch() as switch:
    with switch.case(global_step == zero_var):
        fluid.layers.tensor.assign(input=one_var, output=lr)
    with switch.default():
        fluid.layers.tensor.assign(input=two_var, output=lr)

case(condition)

Create a new block for this condition.

default()

Create a default case for this switch.

increment

paddle.fluid.layers.increment(x, value=1.0, in_place=True)

This function increments each value in the input \(x\) by the given \(value\). By default, the operation is performed in-place.

Parameters:
  • x (Variable|list) – The tensor that has the input values.
  • value (float) – The amount by which the values should be incremented.
  • in_place (bool) – If the increment should be performed in-place.
Returns:

The elementwise-incremented object.

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
data = fluid.layers.increment(x=data, value=3.0, in_place=True)

array_write

paddle.fluid.layers.array_write(x, i, array=None)

This function writes the given input variable to the position indicated by the array index \(i\) in an output LOD_TENSOR_ARRAY. If the output LOD_TENSOR_ARRAY is not given (None), a new one will be created and returned.

Parameters:
  • x (Variable|list) – The input tensor from which the data will be read.
  • i (Variable|list) – The index of the output LOD_TENSOR_ARRAY, pointing to the position to which the input tensor will be written.
  • array (Variable|list) – The output LOD_TENSOR_ARRAY to which the input tensor will be written. If this parameter is NONE, a new LOD_TENSOR_ARRAY will be created and returned.
Returns:

The output LOD_TENSOR_ARRAY where the input tensor is written.

Return type:

Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)

create_array

paddle.fluid.layers.create_array(dtype)

Create LoDTensorArray

This function creates a LOD_TENSOR_ARRAY. It is mainly used to implement RNNs together with array_write, array_read and While.

Parameters:dtype (str) – The data type of the elements in the lod_tensor_array, e.g. 'float32'.
Returns:The lod_tensor_array variable storing the elements of data type.
Return type:Variable

Examples

data = fluid.layers.create_array(dtype='float32')

less_than

paddle.fluid.layers.less_than(x, y, force_cpu=None, cond=None, **ignored)

This layer compares X and Y element-wise and returns Out. Each of them is an N-dim tensor. X and Y can be of any type. Each element of the Out tensor is computed as \(Out = X < Y\).

>>> import paddle.fluid as fluid
>>> less = fluid.layers.less_than(x=label, y=limit)
Parameters:
  • x (Variable) – the left hand operand of less_than operator.
  • y (Variable) – the right hand operand of less_than operator.
  • force_cpu (bool) – If True, force the output variable into CPU memory. Otherwise, place the output variable on the running device. [default: True].
  • cond (Variable|None) – Optional output variable to store the result of less_than
Returns:

n-dim bool tensor. Each element is Out = X < Y.

equal

paddle.fluid.layers.equal(x, y, cond=None, **ignored)

equal

This layer returns the truth value of \(x == y\) elementwise.

Parameters:
  • x (Variable) – First operand of equal
  • y (Variable) – Second operand of equal
  • cond (Variable|None) – Optional output variable to store the result of equal
Returns:

The tensor variable storing the output of equal.

Return type:

Variable

Examples

result = fluid.layers.equal(x=label, y=limit)

array_read

paddle.fluid.layers.array_read(array, i)

This function reads the data at the given position \(i\) from an LOD_TENSOR_ARRAY.

Given:

array = [0.6, 0.1, 0.3, 0.1]

And:

i = 2

Then:

output = 0.3
Parameters:
  • array (Variable|list) – The input array that stores the data to be read.
  • i (Variable|list) – The index of the data to be read from input array.
Returns:

The tensor variable storing the data read from the array.

Return type:

Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
item = fluid.layers.array_read(arr, i=i)

array_length

paddle.fluid.layers.array_length(array)

Get the Length of Input LoDTensorArray

This function performs the operation to find the length of the input LOD_TENSOR_ARRAY.

Related API: array_read, array_write, While.

Parameters:array (LOD_TENSOR_ARRAY) – The input array that will be used to compute the length.
Returns:The length of the input LoDTensorArray.
Return type:Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
arr = fluid.layers.array_write(tmp, i=i)
arr_len = fluid.layers.array_length(arr)

IfElse

class paddle.fluid.layers.IfElse(cond, name=None)

if-else control flow.

Parameters:
  • cond (Variable) – condition used to compare.
  • name (str, default None) – The name of this layer.

Examples

limit = fluid.layers.fill_constant_batch_size_like(
    input=label, dtype='int64', shape=[1], value=5.0)
cond = fluid.layers.less_than(x=label, y=limit)
ie = fluid.layers.IfElse(cond)
with ie.true_block():
    true_image = ie.input(image)
    hidden = fluid.layers.fc(input=true_image, size=100, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)

with ie.false_block():
    false_image = ie.input(image)
    hidden = fluid.layers.fc(
        input=false_image, size=200, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)
prob = ie()

DynamicRNN

class paddle.fluid.layers.DynamicRNN(name=None)

The dynamic RNN can process a batch of sequence data, where the length of each sample sequence can differ. This API automatically processes them in a batch.

The input LoD must be set. Please refer to lod_tensor.

>>> import paddle.fluid as fluid
>>> data = fluid.layers.data(name='sentence', shape=[1], dtype='int64', lod_level=1)
>>> embedding = fluid.layers.embedding(input=data, size=[65535, 32],
>>>                                    is_sparse=True)
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(embedding)
>>>     prev = drnn.memory(shape=[200])
>>>     hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
>>>     drnn.update_memory(prev, hidden)  # set prev to hidden
>>>     drnn.output(hidden)
>>>
>>> # last is the last time step of rnn. It is the encoding result.
>>> last = fluid.layers.sequence_last_step(drnn())

The dynamic RNN unfolds the sequence into time steps. Users need to define how to process each time step inside the with block.

The memory is used to stage data across time steps. The initial value of the memory can be zero or another variable.

The dynamic RNN can mark multiple variables as its output. Use drnn() to get the output sequence.

step_input(x)

Mark a sequence as a dynamic RNN input.

Parameters:x (Variable) – The input sequence.
Returns:The current time step in the input sequence.
static_input(x)

Mark a variable as an RNN input. The input will not be scattered into time steps.

Parameters:x (Variable) – The input variable.
Returns:The input variable that can be accessed in the RNN.
block(*args, **kwds)

The block for user to define operators in RNN. See the class docstring for more details.

memory(init=None, shape=None, value=0.0, need_reorder=False, dtype='float32')

Create a memory variable for dynamic rnn.

If init is not None, the memory will be initialized by this variable. need_reorder is used to reorder the memory so that it matches the order of the input samples; it should be set to True when the initial memory depends on the input sample.

For example,

>>> import paddle.fluid as fluid
>>> sentence = fluid.layers.data(
>>>                 name='sentence', dtype='float32', shape=[32])
>>> boot_memory = fluid.layers.data(
>>>                 name='boot', dtype='float32', shape=[10])
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(sentence)
>>>     memory = drnn.memory(init=boot_memory, need_reorder=True)
>>>     hidden = fluid.layers.fc(
>>>                 input=[word, memory], size=10, act='tanh')
>>>     drnn.update_memory(ex_mem=memory, new_mem=hidden)
>>>     drnn.output(hidden)
>>> rnn_output = drnn()

Otherwise, if shape, value, dtype are set, the memory will be initialized by this value.

For example,

>>> import paddle.fluid as fluid
>>> sentence = fluid.layers.data(
>>>                 name='sentence', dtype='float32', shape=[32])
>>>
>>> drnn = fluid.layers.DynamicRNN()
>>> with drnn.block():
>>>     word = drnn.step_input(sentence)
>>>     memory = drnn.memory(shape=[10], dtype='float32', value=0)
>>>     hidden = fluid.layers.fc(
>>>             input=[word, memory], size=10, act='tanh')
>>>     drnn.update_memory(ex_mem=memory, new_mem=hidden)
>>>     drnn.output(hidden)
>>> rnn_output = drnn()
Parameters:
  • init (Variable|None) – The initialized variable.
  • shape (list|tuple) – The memory shape. NOTE: the shape does not contain batch_size.
  • value (float) – The initialized value.
  • need_reorder (bool) – True if the initialized memory depends on the input sample.
  • dtype (str|numpy.dtype) – The data type of the initialized memory.
Returns:

the memory variable.

update_memory(ex_mem, new_mem)

Update the memory from ex_mem to new_mem. Note that the shape and data type of ex_mem and new_mem must be the same.

Parameters:
  • ex_mem (Variable) – the memory variable.
  • new_mem (Variable) – the plain variable generated in the RNN block.

Returns:None
output(*outputs)

Mark the RNN output variables.

Parameters:outputs – The output variables.
Returns:None

StaticRNN

class paddle.fluid.layers.StaticRNN(name=None)

StaticRNN class.

The StaticRNN class is used to create a StaticRNN. The RNN has its own parameters, such as inputs, outputs, memories, states and length.

memory(init=None, shape=None, batch_ref=None, init_value=0.0, init_batch_dim_idx=0, ref_batch_dim_idx=1)
Parameters:
  • init – the boot memory; if not set, shape and batch_ref must be provided
  • shape – shape of the boot memory
  • batch_ref – batch size reference variable
  • init_value – the init value of boot memory
  • init_batch_dim_idx – the index of batch size in init’s dimension
  • ref_batch_dim_idx – the index of batch size in batch_ref’s dimension
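
Examples

The original docstring gives no usage example; the sketch below is illustrative only. It assumes StaticRNN exposes a step() context together with step_input(), memory(), update_memory() and step_output(), and that x_emb is a pre-built tensor of shape [seq_len, batch_size, 200], so each slice along the first axis is one time step.

rnn = fluid.layers.StaticRNN()
with rnn.step():
    # one time step of the input, shape [batch_size, 200]
    word = rnn.step_input(x_emb)
    # zero-initialized memory whose batch size follows `word`
    prev = rnn.memory(shape=[-1, 200], batch_ref=word)
    hidden = fluid.layers.fc(input=[word, prev], size=200, act='relu')
    rnn.update_memory(prev, hidden)  # carry hidden to the next step
    rnn.step_output(hidden)
out = rnn()  # the stacked outputs of all time steps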

reorder_lod_tensor_by_rank

paddle.fluid.layers.reorder_lod_tensor_by_rank(x, rank_table)

ReorderLoDTensorByRankTable operator.

Input(X) is a batch of sequences. Input(RankTable) stores new orders of the input sequence batch. The reorder_lod_tensor_by_rank operator reorders the Input(X) according to the information provided by Input(RankTable).

For example:

If the indices stored in the Input(RankTable) are [3, 0, 2, 1], the Input(X) will be reordered so that the fourth sequence in Input(X) becomes the first one, followed by the original first, third, and second ones.

This is: X = [Seq0, Seq1, Seq2, Seq3]. The indices in RankTable are [3, 0, 2, 1]. Out = [Seq3, Seq0, Seq2, Seq1] with a new LoD information.

If the LoD information of Input(X) is empty, this means Input(X) is not sequence data. This is also identical to a batch of sequences where each sequence has a fixed length 1. In this case, the reorder_lod_tensor_by_rank operator reorders each slice of Input(X) along the first axis according to Input(RankTable).

This is: X = [Slice0, Slice1, Slice2, Slice3] and its LoD information is empty. The indices in RankTable are [3, 0, 2, 1]. Out = [Slice3, Slice0, Slice2, Slice1], and no LoD information is appended.

NOTE: This operator sorts Input(X) according to a given LoDRankTable which does not need to be calculated according to Input(X). It can be calculated according to another different sequence, and then this operator sorts Input(X) according to the given LoDRankTable.

Parameters:
  • x – (LoDTensor), the input lod tensor to be reordered according to Input(RankTable).
  • rank_table – (LoDRankTable), the rank table according to which Input(X) is reordered.
Returns:

(LoDTensor), the reordered lod tensor.
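
Examples

The docstring has no example; the snippet below is a minimal sketch. It assumes fluid.layers.lod_rank_table is available to build the LoDRankTable, and it uses a second sequence rank_data purely to define the new order, as the NOTE above describes.

x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
rank_data = fluid.layers.data(name='rank_data', shape=[10], dtype='float32', lod_level=1)
# the rank table is computed from rank_data, then used to reorder x
table = fluid.layers.lod_rank_table(rank_data, level=0)
new_x = fluid.layers.reorder_lod_tensor_by_rank(x=x, rank_table=table)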

Print

paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=-1, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')

Print operator

This creates a print op that will print when a tensor is accessed.

Wraps the tensor passed in so that whenever the tensor is accessed, the given message is printed, along with the current value of the tensor.

Parameters:
  • input (Variable) – A Tensor to print.
  • summarize (int) – Print this number of elements in the tensor; prints all elements if negative.
  • message (str) – A string message to print as a prefix.
  • first_n (int) – Only log first_n number of times.
  • print_tensor_name (bool) – Print the tensor name.
  • print_tensor_type (bool) – Print the tensor type.
  • print_tensor_shape (bool) – Print the tensor shape.
  • print_tensor_lod (bool) – Print the tensor lod.
  • print_phase (str) – Which phase to display, including ‘forward’, ‘backward’ and ‘both’. If set to ‘backward’ or ‘both’, will print the gradients of the input tensor.
Returns:

Output tensor, same data with input tensor.

Return type:

Variable

Examples

value = some_layer(...)
fluid.layers.Print(value, summarize=10,
                   message="The content of some_layer: ")

is_empty

paddle.fluid.layers.is_empty(x, cond=None, **ignored)

Test whether a Variable is empty.

Parameters:
  • x (Variable) – The Variable to be tested.
  • cond (Variable|None) – Output parameter. Returns the test result of given ‘x’. Default: None
Returns:

A bool scalar. True if ‘x’ is an empty Variable.

Return type:

Variable

Raises:

TypeError – If input cond is not a variable, or cond’s dtype is not bool.

Examples

res = fluid.layers.is_empty(x=input)
# or:
fluid.layers.is_empty(x=input, cond=res)

device

io

data

paddle.fluid.layers.data(name, shape, append_batch_size=True, dtype='float32', lod_level=0, type=VarType.LOD_TENSOR, stop_gradient=True)

Data Layer

This function takes in the input and, depending on whether the data has to be returned as a minibatch, creates a global variable using the helper functions. The global variable can then be accessed by all the following operators in the graph.

All the input arguments of this function are passed in as local variables to the LayerHelper constructor.

Parameters:
  • name (str) – The name/alias of the function
  • shape (list|tuple) – The shape of the data, declared as a list or tuple.
  • append_batch_size (bool) –
    1. If true, it prepends -1 to the shape.
    For example, if shape=[1], the resulting shape is [-1, 1].
    2. If shape contains -1, such as shape=[1, -1],
    append_batch_size will be enforced to be False (ineffective).
  • dtype (str) – The data type: 'float32', 'float16', 'int64', etc.
  • type (VarType) – The output type. By default it is LOD_TENSOR.
  • lod_level (int) – The LoD Level. 0 means the input data is not a sequence.
  • stop_gradient (bool) – A boolean that mentions whether gradient should flow.
Returns:

The global variable that gives access to the data.

Return type:

Variable

Examples

data = fluid.layers.data(name='x', shape=[784], dtype='float32')

open_files

paddle.fluid.layers.open_files(filenames, shapes, lod_levels, dtypes, thread_num=None, buffer_size=None, pass_num=1, is_test=None)

Open files

This layer takes a list of files to read from and returns a Reader Variable. Via the Reader Variable, we can get data from the given files. All files must have name suffixes to indicate their formats, e.g., ‘*.recordio’.

Parameters:
  • filenames (list) – The list of file names.
  • shapes (list) – List of tuples declaring the data shapes.
  • lod_levels (list) – List of ints declaring the data lod_level.
  • dtypes (list) – List of strs declaring the data types.
  • thread_num (int|None) – The number of threads used to read the files. Default: min(len(filenames), cpu_number).
  • buffer_size (int|None) – The buffer size of the reader. Default: 3 * thread_num.
  • pass_num (int) – Number of passes to run.
  • is_test (bool|None) – Whether open_files is used for testing. If it is used for testing, the order of the generated data is the same as the file order. Otherwise, the order of the data is not guaranteed to be the same between epochs. [Default: False].
Returns:

A Reader Variable via which we can get file data.

Return type:

Variable

Examples

reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                               './data2.recordio'],
                                    shapes=[(3, 224, 224), (1,)],
                                    lod_levels=[0, 0],
                                    dtypes=['float32', 'int64'])

# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.io.read_file(reader)

read_file

paddle.fluid.layers.read_file(reader)

Execute the given reader and get data via it.

A reader is also a Variable. It can be a raw reader generated by fluid.layers.open_files() or a decorated one generated by fluid.layers.double_buffer() and so on.

Parameters:reader (Variable) – The reader to execute.
Returns:Data read via the given reader.
Return type:Tuple[Variable]

Examples

data_file = fluid.layers.open_files(
    filenames=['mnist.recordio'],
    shapes=[(-1, 784), (-1, 1)],
    lod_levels=[0, 0],
    dtypes=["float32", "int64"])
data_file = fluid.layers.double_buffer(
    fluid.layers.batch(data_file, batch_size=64))
input, label = fluid.layers.read_file(data_file)

shuffle

paddle.fluid.layers.shuffle(reader, buffer_size)

Shuffle the reader.

Parameters:
  • reader (Variable) – The reader to be decorated with ‘shuffling’.
  • buffer_size (int) – The pre-read number of data in reader.
Returns:

The reader which has been decorated with ‘shuffling’.

Return type:

Variable
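
Examples

The docstring has no example; the following is an illustrative sketch that decorates a file reader with shuffling (the file names and shapes are placeholders).

raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                                   './data2.recordio'],
                                        shapes=[(3, 224, 224), (1,)],
                                        lod_levels=[0, 0],
                                        dtypes=['float32', 'int64'])
# pre-read up to 64 instances and yield them in random order
shuffled_reader = fluid.layers.shuffle(reader=raw_reader, buffer_size=64)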

batch

paddle.fluid.layers.batch(reader, batch_size)

This layer is a reader decorator. It takes a reader and adds ‘batching’ decoration to it. When data is read through the decorated reader, the output is automatically organized into batches.

Parameters:
  • reader (Variable) – The reader to be decorated with ‘batching’.
  • batch_size (int) – The batch size.
Returns:

The reader which has been decorated with ‘batching’.

Return type:

Variable

Examples

raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
                                                   './data2.recordio'],
                                        shapes=[(3, 224, 224), (1,)],
                                        lod_levels=[0, 0],
                                        dtypes=['float32', 'int64'],
                                        thread_num=2,
                                        buffer_size=2)
batch_reader = fluid.layers.batch(reader=raw_reader, batch_size=5)

# If we read data with the raw_reader:
#     data = fluid.layers.read_file(raw_reader)
# We can only get data instance by instance.
#
# However, if we read data with the batch_reader:
#     data = fluid.layers.read_file(batch_reader)
# Each 5 adjacent instances will be automatically combined together
# to become a batch. So what we get('data') is a batch data instead
# of an instance.

double_buffer

paddle.fluid.layers.double_buffer(reader, place=None, name=None)

Wrap a reader with double buffering. The data will be copied to the target place through a double buffer queue. If the target place is None, the place the executor runs on will be used.

Parameters:
  • reader (Variable) – the reader variable need to be wrapped.
  • place (Place) – the place of the target data. Default is the same place the executor runs on.
  • name (str) – Variable name. None if the user does not care.
Returns:

wrapped reader with double buffer.

Examples

>>> reader = fluid.layers.open_files(filenames=['somefile'],
>>>                                  shapes=[[-1, 784], [-1, 1]],
>>>                                  dtypes=['float32', 'int64'])
>>> reader = fluid.layers.double_buffer(reader)
>>> img, label = fluid.layers.read_file(reader)

random_data_generator

paddle.fluid.layers.random_data_generator(low, high, shapes, lod_levels, for_parallel=True)

Create a uniform random data generator

This layer returns a Reader Variable. Instead of opening a file and reading data from it, this Reader Variable generates float uniform random data by itself. It can be used as a dummy reader to test a network without opening a real file.

Parameters:
  • low (float) – The lower bound of data’s uniform distribution.
  • high (float) – The upper bound of data’s uniform distribution.
  • shapes (list) – List of tuples declaring the data shapes.
  • lod_levels (list) – List of ints declaring the data lod_level.
  • for_parallel (Bool) – Set it as True if you are going to run subsequent operators in parallel.
Returns:

A Reader Variable from which we can get random data.

Return type:

Variable

Examples

reader = fluid.layers.random_data_generator(
                                 low=0.0,
                                 high=1.0,
                                 shapes=[[3,224,224], [1]],
                                 lod_levels=[0, 0])
# Via the reader, we can use 'read_file' layer to get data:
image, label = fluid.layers.read_file(reader)

py_reader

paddle.fluid.layers.py_reader(capacity, shapes, dtypes, lod_levels=None, name=None, use_double_buffer=True)

Create a Python reader for data feeding in Python

This layer returns a Reader Variable. The Reader provides decorate_paddle_reader() and decorate_tensor_provider() to set a Python generator as the data source in Python side. When Executor::Run() is invoked in C++ side, the data from the generator would be read automatically. Unlike DataFeeder.feed(), the data reading process and Executor::Run() process can run in parallel using py_reader. The start() method of the Reader should be called when each pass begins, while the reset() method should be called when the pass ends and fluid.core.EOFException raises. Note that Program.clone() method cannot clone py_reader.

Parameters:
  • capacity (int) – The buffer capacity maintained by py_reader.
  • shapes (list|tuple) – List of tuples declaring the data shapes.
  • dtypes (list|tuple) – List of strs declaring the data types.
  • lod_levels (list|tuple) – List of ints declaring the data lod_level.
  • name (basestring) – The prefix Python queue name and Reader name. None will be generated automatically.
  • use_double_buffer (bool) – Whether use double buffer or not.
Returns:

A Reader from which we can get feeding data.

Return type:

Variable

Examples

  1. The basic usage of py_reader is as follows:
>>> import paddle.v2
>>> import paddle.fluid as fluid
>>> import paddle.dataset.mnist as mnist
>>>
>>> reader = fluid.layers.py_reader(capacity=64,
>>>                                 shapes=[(-1,3,224,224), (-1,1)],
>>>                                 dtypes=['float32', 'int64'])
>>> reader.decorate_paddle_reader(
>>>     paddle.v2.reader.shuffle(paddle.batch(mnist.train(), batch_size=32),
>>>                              buf_size=500))
>>>
>>> img, label = fluid.layers.read_file(reader)
>>> loss = network(img, label) # some network definition
>>>
>>> fluid.Executor(fluid.CUDAPlace(0)).run(fluid.default_startup_program())
>>>
>>> exe = fluid.ParallelExecutor(use_cuda=True, loss_name=loss.name)
>>> for epoch_id in range(10):
>>>     reader.start()
>>>     try:
>>>         while True:
>>>             exe.run(fetch_list=[loss.name])
>>>     except fluid.core.EOFException:
>>>         reader.reset()

2. When training and testing are both performed, two different py_readers should be created with different names, e.g.:

>>> import paddle.v2
>>> import paddle.fluid as fluid
>>> import paddle.dataset.mnist as mnist
>>>
>>> def network(reader):
>>>     img, label = fluid.layers.read_file(reader)
>>>     # Here, we omitted the network definition
>>>     return loss
>>>
>>> train_reader = fluid.layers.py_reader(capacity=64,
>>>                                       shapes=[(-1,3,224,224), (-1,1)],
>>>                                       dtypes=['float32', 'int64'],
>>>                                       name='train_reader')
>>> train_reader.decorate_paddle_reader(
>>>     paddle.v2.reader.shuffle(paddle.batch(mnist.train(), batch_size=32),
>>>                              buf_size=500))
>>>
>>> test_reader = fluid.layers.py_reader(capacity=32,
>>>                                      shapes=[(-1,3,224,224), (-1,1)],
>>>                                      dtypes=['float32', 'int64'],
>>>                                      name='test_reader')
>>> test_reader.decorate_paddle_reader(paddle.batch(mnist.test(), 512))
>>>
>>> # Create train_main_prog and train_startup_prog
>>> train_main_prog = fluid.Program()
>>> train_startup_prog = fluid.Program()
>>> with fluid.program_guard(train_main_prog, train_startup_prog):
>>>     # Use fluid.unique_name.guard() to share parameters with test program
>>>     with fluid.unique_name.guard():
>>>         train_loss = network(train_reader) # some network definition
>>>         adam = fluid.optimizer.Adam(learning_rate=0.01)
>>>         adam.minimize(train_loss)
>>>
>>> # Create test_main_prog and test_startup_prog
>>> test_main_prog = fluid.Program()
>>> test_startup_prog = fluid.Program()
>>> with fluid.program_guard(test_main_prog, test_startup_prog):
>>>     # Use fluid.unique_name.guard() to share parameters with train program
>>>     with fluid.unique_name.guard():
>>>         test_loss = network(test_reader)
>>>
>>> fluid.Executor(fluid.CUDAPlace(0)).run(train_startup_prog)
>>> fluid.Executor(fluid.CUDAPlace(0)).run(test_startup_prog)
>>>
>>> train_exe = fluid.ParallelExecutor(use_cuda=True,
>>>                 loss_name=train_loss.name, main_program=train_main_prog)
>>> test_exe = fluid.ParallelExecutor(use_cuda=True,
>>>                 loss_name=test_loss.name, main_program=test_main_prog)
>>> for epoch_id in range(10):
>>>     train_reader.start()
>>>     try:
>>>         while True:
>>>             train_exe.run(fetch_list=[train_loss.name])
>>>     except fluid.core.EOFException:
>>>         train_reader.reset()
>>>
>>>     test_reader.start()
>>>     try:
>>>         while True:
>>>             test_exe.run(fetch_list=[test_loss.name])
>>>     except fluid.core.EOFException:
>>>         test_reader.reset()

Preprocessor

class paddle.fluid.layers.Preprocessor(reader, name=None)

A block for data pre-processing in reader.

Parameters:
  • reader (Variable) – A reader variable.
  • name (str, default None) – The name of the reader.

Examples

preprocessor = fluid.layers.io.Preprocessor(reader=reader)
with preprocessor.block():
    img, lbl = preprocessor.inputs()
    img_out = img / 2
    lbl_out = lbl + 1
    preprocessor.outputs(img_out, lbl_out)

data_file = fluid.layers.io.double_buffer(preprocessor())

load

paddle.fluid.layers.load(out, file_path, load_as_fp16=None)

The Load operator loads a LoDTensor / SelectedRows variable from a disk file.

>>> import paddle.fluid as fluid
>>> tmp_tensor = fluid.layers.create_tensor(dtype='float32')
>>> fluid.layers.load(tmp_tensor, "./tmp_tensor.bin")
Parameters:
  • out (Variable) – The LoDTensor / SelectedRows variable to be loaded.
  • file_path (str) – The variable will be loaded from “file_path”.
  • load_as_fp16 (bool) – If true, the tensor will be first loaded and then converted to float16 data type. Otherwise, the tensor will be directly loaded without data type conversion. Default is false.
Returns:

None

nn

fc

paddle.fluid.layers.fc(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, is_test=False, name=None)

Fully Connected Layer

This function creates a fully connected layer in the network. It can take multiple tensors as its inputs. It creates a variable called weights for each input tensor, which represents a fully connected weight matrix from each input unit to each output unit. The fully connected layer multiplies each input tensor with its corresponding weight to produce an output Tensor. If multiple input tensors are given, the results of the multiplications are summed up. If bias_attr is not None, a bias variable will be created and added to the output. Finally, if activation is not None, it will be applied to the output as well.

This process can be formulated as follows:

\[Out = Act({\sum_{i=0}^{N-1}X_iW_i + b})\]

In the above equation:

  • \(N\): Number of the input.
  • \(X_i\): The input tensor.
  • \(W\): The weights created by this layer.
  • \(b\): The bias parameter created by this layer (if needed).
  • \(Act\): The activation function.
  • \(Out\): The output tensor.
Parameters:
  • input (Variable|list of Variable) – The input tensor(s) of this layer, and the dimension of the input tensor(s) is at least 2.
  • size (int) – The number of output units in this layer.
  • num_flatten_dims (int, default 1) – The fc layer can accept an input tensor with more than two dimensions. If this happens, the multidimensional tensor will first be flattened into a 2-dimensional matrix. The parameter num_flatten_dims determines how the input tensor is flattened: the first num_flatten_dims (inclusive, index starts from 1) dimensions will be flattened to form the first dimension of the final matrix (height of the matrix), and the rest rank(X) - num_flatten_dims dimensions are flattened to form the second dimension of the final matrix (width of the matrix). For example, suppose X is a 5-dimensional tensor with a shape [2, 3, 4, 5, 6], and num_flatten_dims = 3. Then, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
  • param_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for learnable parameters/weights of this layer.
  • bias_attr (ParamAttr|list of ParamAttr, default None) – The parameter attribute for the bias of this layer. If it is set to False, no bias will be added to the output units. If it is set to None, the bias is initialized zero. Default: None.
  • act (str, default None) – Activation to be applied to the output of this layer.
  • is_test (bool) – A flag indicating whether execution is in test phase.
  • name (str, default None) – The name of this layer.
Returns:

The transformation result.

Return type:

Variable

Raises:

ValueError – If rank of the input tensor is less than 2.

Examples

data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=data, size=1000, act="tanh")

embedding

paddle.fluid.layers.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')

Embedding Layer

This layer is used to lookup embeddings of IDs, provided by input, in a lookup table. The result of this lookup is the embedding of each ID in the input.

All the input variables are passed in as local variables to the LayerHelper constructor.

Parameters:
  • input (Variable) – The tensor variable containing the IDs.
  • size (tuple|list) – The shape of the look up table parameter. It should have two elements which indicate the size of the dictionary of embeddings and the size of each embedding vector respectively.
  • is_sparse (bool) – The flag indicating whether to use sparse update.
  • is_distributed (bool) – Whether to run lookup table from remote parameter server.
  • padding_idx (int|long|None) – If None, it has no effect on the lookup. Otherwise, the given padding_idx indicates padding the output with zeros whenever lookup encounters it in input. If \(padding_idx < 0\), the padding_idx used in the lookup is \(size[0] + padding_idx\).
  • param_attr (ParamAttr) – Parameters for this layer
  • dtype (np.dtype|core.VarDesc.VarType|str) – The type of data : float32, float_16, int etc
Returns:

The tensor variable storing the embeddings of the supplied inputs.

Return type:

Variable

Examples

dict_size = len(dataset.ids)
data = fluid.layers.data(name='ids', shape=[32, 32], dtype='int64')
emb = fluid.layers.embedding(input=data, size=[dict_size, 16])

dynamic_lstm

paddle.fluid.layers.dynamic_lstm(input, size, h_0=None, c_0=None, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None)

Long-Short Term Memory (LSTM) Operator.

The default implementation is the diagonal/peephole connection (https://arxiv.org/pdf/1402.1128.pdf), and the formula is as follows:

$$ i_t = \sigma(W_{ix}x_{t} + W_{ih}h_{t-1} + W_{ic}c_{t-1} + b_i) $$

$$ f_t = \sigma(W_{fx}x_{t} + W_{fh}h_{t-1} + W_{fc}c_{t-1} + b_f) $$

$$ \tilde{c_t} = act_g(W_{cx}x_t + W_{ch}h_{t-1} + b_c) $$

$$ o_t = \sigma(W_{ox}x_{t} + W_{oh}h_{t-1} + W_{oc}c_t + b_o) $$

$$ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c_t} $$

$$ h_t = o_t \odot act_h(c_t) $$

  • W terms denote weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input gate to the input), and \(W_{ic}, W_{fc}, W_{oc}\) are diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.
  • The b terms denote bias vectors (\(b_i\) is the input gate bias vector).
  • \(\sigma\) is the non-linear activation, such as the logistic sigmoid function.
  • \(i, f, o\) and \(c\) are the input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).
  • \(\odot\) is the element-wise product of the vectors.
  • \(act_g\) and \(act_h\) are the cell input and cell output activation functions; tanh is usually used for them.
  • \(\tilde{c_t}\) is also called the candidate hidden state, which is computed based on the current input and the previous hidden state.

Set use_peepholes False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully connected layer before the LSTM operator.

Parameters:
  • input (Variable) – (LoDTensor) the first input is a LodTensor, which support variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch, D is the hidden size
  • size (int) – 4 * hidden size.
  • h_0 (Variable) – The initial hidden state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size and D is the hidden size.
  • c_0 (Variable) – The initial cell state is an optional input, default is zero. This is a tensor with shape (N x D), where N is the batch size. h_0 and c_0 can both be None, but only together.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weights.

    • Weights = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}
    • The shape is (D x 4D), where D is the hidden size.

    If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.

  • bias_attr (ParamAttr|None) –

    The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.

    1. use_peepholes = False - Biases = {\(b_c, b_i, b_f, b_o\)}. - The shape is (1 x 4D).
    2. use_peepholes = True - Biases = { \(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}. - The shape is (1 x 7D).

    If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.

  • use_peepholes (bool) – (bool, default: True) Whether to enable diagonal/peephole connections.
  • is_reverse (bool) – (bool, default: False) Whether to compute a reversed LSTM.
  • gate_activation (str) – (string, default: sigmoid) The activation for the input gate, forget gate and output gate; sigmoid by default.
  • cell_activation (str) – (string, default: tanh) The activation for the cell output; tanh by default.
  • candidate_activation (str) – (string, default: tanh) The activation for the candidate hidden state; tanh by default.
  • dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The hidden state and cell state of the LSTM. The shape of both is (T x D), and the LoD is the same as that of the input.

Return type:

tuple

Examples

hidden_dim = 512
forward_proj = fluid.layers.fc(input=input_seq, size=hidden_dim * 4,
                               bias_attr=False)
forward, _ = fluid.layers.dynamic_lstm(
    input=forward_proj, size=hidden_dim * 4, use_peepholes=False)

dynamic_lstmp

paddle.fluid.layers.dynamic_lstmp(input, size, proj_size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', proj_activation='tanh', dtype='float32', name=None)

Dynamic LSTMP Layer

LSTMP (LSTM with recurrent projection) layer has a separate projection layer after the LSTM layer, projecting the original hidden state to a lower-dimensional one, which is proposed to reduce the number of total parameters and furthermore the computational complexity of the LSTM, especially for the case that the size of the output units is relatively large (https://research.google.com/pubs/archive/43905.pdf).

The formula is as follows:

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{ix}x_{t} + W_{ir}r_{t-1} + W_{ic}c_{t-1} + b_i)\\f_t & = \sigma(W_{fx}x_{t} + W_{fr}r_{t-1} + W_{fc}c_{t-1} + b_f)\\\tilde{c_t} & = act_g(W_{cx}x_t + W_{cr}r_{t-1} + b_c)\\o_t & = \sigma(W_{ox}x_{t} + W_{or}r_{t-1} + W_{oc}c_t + b_o)\\c_t & = f_t \odot c_{t-1} + i_t \odot \tilde{c_t}\\h_t & = o_t \odot act_h(c_t)\\r_t & = \overline{act_h}(W_{rh}h_t)\end{aligned}\end{align} \]

In the above formula:

  • \(W\): Denotes weight matrices (e.g. \(W_{xi}\) is the matrix of weights from the input gate to the input).
  • \(W_{ic}\), \(W_{fc}\), \(W_{oc}\): Diagonal weight matrices for peephole connections. In our implementation, we use vectors to represent these diagonal weight matrices.
  • \(b\): Denotes bias vectors (e.g. \(b_i\) is the input gate bias vector).
  • \(\sigma\): The activation, such as logistic sigmoid function.
  • \(i, f, o\) and \(c\): The input gate, forget gate, output gate, and cell activation vectors, respectively, all of which have the same size as the cell output activation vector \(h\).
  • \(h\): The hidden state.
  • \(r\): The recurrent projection of the hidden state.
  • \(\tilde{c_t}\): The candidate hidden state, whose computation is based on the current input and previous hidden state.
  • \(\odot\): The element-wise product of the vectors.
  • \(act_g\) and \(act_h\): The cell input and cell output activation functions and tanh is usually used for them.
  • \(\overline{act_h}\): The activation function for the projection output, usually using identity or same as \(act_h\).

Set use_peepholes to False to disable peephole connection. The formula is omitted here, please refer to the paper http://www.bioinf.jku.at/publications/older/2604.pdf for details.

Note that these \(W_{xi}x_{t}, W_{xf}x_{t}, W_{xc}x_{t}, W_{xo}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully connected layer before the LSTMP layer.

Parameters:
  • input (Variable) – The input of dynamic_lstmp layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape (T X 4D), where T is the total time steps in this mini-batch, D is the hidden size.
  • size (int) – 4 * hidden size.
  • proj_size (int) – The size of projection output.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weight and projection weight.

    • Hidden-hidden weight = {\(W_{ch}, W_{ih}, W_{fh}, W_{oh}\)}.
    • The shape of hidden-hidden weight is (P x 4D), where P is the projection size and D the hidden size.
    • Projection weight = {\(W_{rh}\)}.
    • The shape of projection weight is (D x P).

    If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.

  • bias_attr (ParamAttr|None) –

    The bias attribute for the learnable bias weights, which contains two parts, input-hidden bias weights and peephole connections weights if setting use_peepholes to True.

    1. use_peepholes = False
    • Biases = {\(b_c, b_i, b_f, b_o\)}.
    • The shape is (1 x 4D).
    2. use_peepholes = True
    • Biases = { \(b_c, b_i, b_f, b_o, W_{ic}, W_{fc}, W_{oc}\)}.
    • The shape is (1 x 7D).

    If it is set to None or one attribute of ParamAttr, dynamic_lstm will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.

  • use_peepholes (bool) – Whether to enable diagonal/peephole connections, default True.
  • is_reverse (bool) – Whether to compute reversed LSTM, default False.
  • gate_activation (str) – The activation for input gate, forget gate and output gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
  • cell_activation (str) – The activation for cell output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • proj_activation (str) – The activation for projection output. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • dtype (str) – Data type. Choices = [“float32”, “float64”], default “float32”.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

A tuple of two output variables: the projection of the hidden state and the cell state of LSTMP. The shape of the projection is (T x P), the shape of the cell state is (T x D), and the LoD of both is the same as that of the input.

Return type:

tuple

Examples

dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1],
                         dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
hidden_dim, proj_dim = 512, 256
fc_out = fluid.layers.fc(input=emb, size=hidden_dim * 4,
                         act=None, bias_attr=None)
proj_out, _ = fluid.layers.dynamic_lstmp(input=fc_out,
                                         size=hidden_dim * 4,
                                         proj_size=proj_dim,
                                         use_peepholes=False,
                                         is_reverse=True,
                                         cell_activation="tanh",
                                         proj_activation="tanh")

dynamic_gru

paddle.fluid.layers.dynamic_gru(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None)

Gated Recurrent Unit (GRU) Layer

Refer to Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling .

The formula is as follows:

\[ \begin{align}\begin{aligned}u_t & = act_g(W_{ux}x_{t} + W_{uh}h_{t-1} + b_u)\\r_t & = act_g(W_{rx}x_{t} + W_{rh}h_{t-1} + b_r)\\\tilde{h_t} & = act_c(W_{cx}x_{t} + W_{ch}(r_t \odot h_{t-1}) + b_c)\\h_t & = (1-u_t) \odot h_{t-1} + u_t \odot \tilde{h_t}\end{aligned}\end{align} \]

The \(\odot\) is the element-wise product of the vectors. \(act_g\) is the update gate and reset gate activation function and \(sigmoid\) is usually used for it. \(act_c\) is the activation function for candidate hidden state and \(tanh\) is usually used for it.

Note that these \(W_{ux}x_{t}, W_{rx}x_{t}, W_{cx}x_{t}\) operations on the input \(x_{t}\) are NOT included in this operator. Users can choose to use a fully connected layer before the GRU layer.

Parameters:
  • input (Variable) – The input of dynamic_gru layer, which supports variable-time length input sequence. The underlying tensor in this Variable is a matrix with shape \((T \times 3D)\), where \(T\) is the total time steps in this mini-batch, \(D\) is the hidden size.
  • size (int) – The dimension of the gru cell.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weight matrix. Note:

    • The shape of the weight matrix is \((D \times 3D)\), where \(D\) is the hidden size.
    • All elements in the weight matrix can be divided into two parts. The first part are weights of the update gate and reset gate with shape \((D \times 2D)\), and the second part are weights for candidate hidden state with shape \((D \times D)\).

    If it is set to None or one attribute of ParamAttr, dynamic_gru will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.

  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of GRU. Note that the bias with \((1 \times 3D)\) concatenates the bias in the update gate, reset gate and candidate calculations. If it is set to False, no bias will be applied to the update gate, reset gate and candidate calculations. If it is set to None or one attribute of ParamAttr, dynamic_gru will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • is_reverse (bool) – Whether to compute reversed GRU, default False.
  • gate_activation (str) – The activation for update gate and reset gate. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “sigmoid”.
  • candidate_activation (str) – The activation for candidate hidden state. Choices = [“sigmoid”, “tanh”, “relu”, “identity”], default “tanh”.
  • h_0 (Variable) – This is initial hidden state. If not set, default is zero. This is a tensor with shape (N x D), where N is the number of total time steps of input mini-batch feature and D is the hidden size.
Returns:

The hidden state of GRU. The shape is \((T \times D)\), and sequence length is the same with the input.

Return type:

Variable

Examples

dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1],
                         dtype='int32', lod_level=1)
emb = fluid.layers.embedding(input=data, size=[dict_dim, emb_dim])
hidden_dim = 512
x = fluid.layers.fc(input=emb, size=hidden_dim * 3)
hidden = fluid.layers.dynamic_gru(input=x, size=hidden_dim)

gru_unit

paddle.fluid.layers.gru_unit(input, hidden, size, param_attr=None, bias_attr=None, activation='tanh', gate_activation='sigmoid')

GRU unit layer. The equation of a gru step is:

\[ \begin{align}\begin{aligned}u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)\\r_t & = actGate(xr_{t} + W_r h_{t-1} + b_r)\\m_t & = actNode(xm_t + W_c dot(r_t, h_{t-1}) + b_m)\\h_t & = dot((1-u_t), m_t) + dot(u_t, h_{t-1})\end{aligned}\end{align} \]

The inputs of gru unit includes \(z_t\), \(h_{t-1}\). In terms of the equation above, the \(z_t\) is split into 3 parts - \(xu_t\), \(xr_t\) and \(xm_t\). This means that in order to implement a full GRU unit operator for an input, a fully connected layer has to be applied, such that \(z_t = W_{fc}x_t\).

The terms \(u_t\) and \(r_t\) represent the update and reset gates of the GRU cell. Unlike LSTM, GRU has one fewer gate. However, there is an intermediate candidate hidden output, which is denoted by \(m_t\). This layer has three outputs: \(h_t\), \(dot(r_t, h_{t-1})\), and the concatenation of \(u_t\), \(r_t\) and \(m_t\).

Parameters:
  • input (Variable) – The fc transformed input value of current step.
  • hidden (Variable) – The hidden value of gru unit from previous step.
  • size (integer) – The input dimension value.
  • param_attr (ParamAttr|None) –

    The parameter attribute for the learnable hidden-hidden weight matrix. Note:

    • The shape of the weight matrix is \((D \times 3D)\), where \(D\) is the hidden size.
    • All elements in the weight matrix can be divided into two parts. The first part are weights of the update gate and reset gate with shape \((D \times 2D)\), and the second part are weights for candidate hidden state with shape \((D \times D)\).

    If it is set to None or one attribute of ParamAttr, gru_unit will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.

  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of GRU. Note that the bias with \((1 \times 3D)\) concatenates the bias in the update gate, reset gate and candidate calculations. If it is set to False, no bias will be applied to the update gate, reset gate and candidate calculations. If it is set to None or one attribute of ParamAttr, gru_unit will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • activation (string) – The activation type for cell (actNode). Default: ‘tanh’
  • gate_activation (string) – The activation type for gates (actGate). Default: ‘sigmoid’
Returns:

The hidden value, reset-hidden value and gate values.

Return type:

tuple

Examples

# assuming we have x_t_data and prev_hidden of size=10
x_t = fluid.layers.fc(input=x_t_data, size=30)
hidden_val, r_h_val, gate_val = fluid.layers.gru_unit(input=x_t,
                                                      hidden=prev_hidden,
                                                      size=30)

linear_chain_crf

paddle.fluid.layers.linear_chain_crf(input, label, param_attr=None)

Linear Chain CRF.

Conditional Random Field defines an undirected probabilistic graph with nodes denoting random variables and edges denoting dependencies between these variables. CRF learns the conditional probability \(P(Y|X)\), where \(X = (x_1, x_2, ... , x_n)\) are structured inputs and \(Y = (y_1, y_2, ... , y_n)\) are labels for the inputs.

Linear chain CRF is a special case of CRF that is useful for sequence labeling task. Sequence labeling tasks do not assume a lot of conditional independences among inputs. The only constraint they impose is that the input and output must be linear sequences. Thus, the graph of such a CRF is a simple chain or a line, which results in the linear chain CRF.

This operator implements the Forward-Backward algorithm for the linear chain CRF. Please refer to http://www.cs.columbia.edu/~mcollins/fb.pdf and http://cseweb.ucsd.edu/~elkan/250Bwinter2012/loglinearCRFs.pdf for details.

Equation:

  1. Denote Input(Emission) to this operator as \(x\) here.
  2. The first D values of Input(Transition) to this operator are for starting weights, denoted as \(a\) here.
  3. The next D values of Input(Transition) of this operator are for ending weights, denoted as \(b\) here.
  4. The remaining values of Input(Transition) are for transition weights, denoted as \(w\) here.
  5. Denote Input(Label) as \(s\) here.

The probability of a sequence \(s\) of length \(L\) is defined as: $$P(s) = (1/Z) \exp(a_{s_1} + b_{s_L} + \sum_{l=1}^L x_{s_l} + \sum_{l=2}^L w_{s_{l-1},s_l})$$

where \(Z\) is a normalization value so that the sum of \(P(s)\) over all possible sequences is 1, and \(x\) is the emission feature weight to the linear chain CRF.

Finally, the linear chain CRF operator outputs the logarithm of the conditional likelihood of each training sample in a mini-batch.

NOTE:

  1. The feature function for a CRF is made up of the emission features and the transition features. The emission feature weights are NOT computed in this operator. They MUST be computed first before this operator is called.
  2. Because this operator performs global normalization over all possible sequences internally, it expects UNSCALED emission feature weights. Please do not call this op with the emission feature being output of any nonlinear activation.
  3. The 2nd dimension of Input(Emission) MUST be equal to the tag number.
Parameters:
  • input (Variable) – (LoDTensor, default LoDTensor<float>) A 2-D LoDTensor with shape [N x D], where N is the size of the mini-batch and D is the total tag number. The unscaled emission weight matrix for the linear chain CRF.
  • input – (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The learnable parameter for the linear_chain_crf operator. See more details in the operator’s comments
  • label (Variable) – (LoDTensor, default LoDTensor<int64_t>) A LoDTensor with shape [N x 1], where N is the total element number in a mini-batch. The ground truth
  • param_attr (ParamAttr) – The attribute of the learnable parameter.
Returns:

(Tensor, default Tensor<float>) A 2-D Tensor with shape [N x D]. The exponentials of Input(Emission). This is an intermediate computational result in forward computation, and will be reused in backward computation

output(Variable): (Tensor, default Tensor<float>) A 2-D Tensor with shape [(D + 2) x D]. The exponentials of Input(Transition). This is an intermediate computational result in forward computation, and will be reused in backward computation

output(Variable): (Tensor, default Tensor<float>) The logarithm of the conditional likelihood of each training sample in a mini-batch. This is a 2-D tensor with shape [S x 1], where S is the sequence number in a mini-batch. Note: S is equal to the sequence number in a mini-batch. The output is no longer a LoDTensor

Return type:

output(Variable)
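
Examples

The docstring above has no example; the following sketch shows the typical call pattern. It assumes word_features is an existing feature tensor, num_tags is the tag number, and label is an int64 LoDTensor holding the ground-truth tags.

# unscaled emission weights; size must equal the tag number
emission = fluid.layers.fc(input=word_features, size=num_tags)
crf_cost = fluid.layers.linear_chain_crf(
    input=emission, label=label,
    param_attr=fluid.ParamAttr(name='crfw'))
avg_cost = fluid.layers.mean(crf_cost)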

crf_decoding

paddle.fluid.layers.crf_decoding(input, param_attr, label=None)

The crf_decoding operator reads the emission feature weights and the transition feature weights learned by the linear_chain_crf operator. It implements the Viterbi algorithm which is a dynamic programming algorithm for finding the most likely sequence of hidden states, called the Viterbi path, that results in a sequence of observed tags.

The output of this operator changes according to whether Input(Label) is given:

  1. Input(Label) is given: This happens in training. This operator is used to co-work with the chunk_eval operator. When Input(Label) is given, the crf_decoding operator returns a row vector with shape [N x 1] whose values are fixed to be 0, indicating an incorrect prediction, or 1 indicating a tag is correctly predicted. Such an output is the input to chunk_eval operator.
  2. Input(Label) is not given: This is the standard decoding process.

The crf_decoding operator returns a row vector with shape [N x 1], whose values range from 0 to the maximum tag number - 1. Each element indicates the index of a predicted tag.

Parameters:
  • input (Variable) – (LoDTensor, default: LoDTensor<float>). A LoDTensor with shape [N x D] where N is the size of the mini-batch and D is the total tag number. This input is the unscaled emission weight matrix of the linear_chain_crf operator
  • param_attr (ParamAttr) – The parameter attribute for training.
  • label (Variable) – (LoDTensor, LoDTensor<int64_t>). The ground truth with shape [N x 1]. This input is optional. See more details in the operator’s comments
Returns:

(LoDTensor, LoDTensor<int64_t>). The decoding results. What to return changes depending on whether the Input(Label) (the ground truth) is given. See more details in the operator’s comment

Return type:

Variable

Examples

crf_decode = layers.crf_decoding(
     input=hidden, param_attr=ParamAttr(name="crfw"))

cos_sim

paddle.fluid.layers.cos_sim(X, Y)

Cosine Similarity Operator

\(Out = \frac{X^T * Y}{(\sqrt{X^T * X} * \sqrt{Y^T * Y})}\)

The input X and Y must have the same shape, except that the 1st dimension of input Y could be just 1 (different from input X), which will be broadcasted to match the shape of input X before computing their cosine similarity.

Both the input X and Y can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input X.

Parameters:
  • X (Variable) – The 1st input of cos_sim op.
  • Y (Variable) – The 2nd input of cos_sim op.
Returns:

the output of cosine(X, Y).

Return type:

Variable
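
Examples

A minimal usage sketch with hypothetical inputs x and y of the same shape:

x = fluid.layers.data(name='x', shape=[7], dtype='float32')
y = fluid.layers.data(name='y', shape=[7], dtype='float32')
out = fluid.layers.cos_sim(X=x, Y=y)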

cross_entropy

paddle.fluid.layers.cross_entropy(input, label, soft_label=False, ignore_index=-100)

Cross Entropy Layer

This layer computes the cross entropy between input and label. It supports both standard cross-entropy and soft-label cross-entropy loss computation.

  1. One-hot cross-entropy:

    soft_label = False, Label[i, 0] indicates the class index for sample i:

    \[Y[i] = -\log(X[i, Label[i]])\]
  2. Soft-label cross-entropy:

    soft_label = True, Label[i, j] indicates the soft label of class j for sample i:

    \[Y[i] = \sum_j{-Label[i, j] * \log(X[i, j])}\]

    Please make sure that in this case the summation of each row of label equals one.

  3. One-hot cross-entropy with vectorized label:

    As a special case of 2), when each row of ‘label’ has only one non-zero element which is equal to 1, soft-label cross-entropy degenerates to a one-hot cross-entropy with one-hot label representation.

Parameters:
  • input (Variable|list) – a 2-D tensor with shape [N x D], where N is the batch size and D is the number of classes. This input is a probability computed by the previous operator, which is almost always the result of a softmax operator.
  • label (Variable|list) – the ground truth which is a 2-D tensor. When soft_label is set to False, label is a tensor<int64> with shape [N x 1]. When soft_label is set to True, label is a tensor<float/double> with shape [N x D].
  • soft_label (bool) – a flag indicating whether to interpret the given labels as soft labels. Default: False.
  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Only valid if soft_label is set to False. Default: -100
Returns:

A 2-D tensor with shape [N x 1], the cross entropy loss.

Raises:

ValueError – 1) when the 1st dimensions of input and label are not equal; 2) when soft_label == True and the 2nd dimensions of input and label are not equal; 3) when soft_label == False and the 2nd dimension of label is not 1.

Examples

predict = fluid.layers.fc(input=net, size=classdim, act='softmax')
cost = fluid.layers.cross_entropy(input=predict, label=label)
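
For the soft-label case described above, a minimal sketch, assuming soft_target is a hypothetical [N x classdim] float tensor whose rows each sum to one:

predict = fluid.layers.fc(input=net, size=classdim, act='softmax')
soft_target = fluid.layers.data(name='soft_target', shape=[classdim], dtype='float32')
soft_cost = fluid.layers.cross_entropy(input=predict, label=soft_target, soft_label=True)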

square_error_cost

paddle.fluid.layers.square_error_cost(input, label)

Square error cost layer

This layer accepts input predictions and target label and returns the squared error cost.

For predictions, \(X\), and target labels, \(Y\), the equation is:

\[Out = (X - Y)^2\]

In the above equation:

  • \(X\): Input predictions, a tensor.
  • \(Y\): Input labels, a tensor.
  • \(Out\): Output value, same shape with \(X\).
Parameters:
  • input (Variable) – Input tensor, has predictions.
  • label (Variable) – Label tensor, has target labels.
Returns:

The tensor variable storing the element-wise squared error difference of input and label.

Return type:

Variable

Examples

y = layers.data(name='y', shape=[1], dtype='float32')
y_predict = layers.data(name='y_predict', shape=[1], dtype='float32')
cost = layers.square_error_cost(input=y_predict, label=y)

chunk_eval

paddle.fluid.layers.chunk_eval(input, label, chunk_scheme, num_chunk_types, excluded_chunk_types=None)

Chunk Evaluator

This function computes and outputs the precision, recall and F1-score of chunk detection.

For some basics of chunking, please refer to Chunking with Support Vector Machines (https://aclanthology.info/pdf/N/N01/N01-1025.pdf).

ChunkEvalOp computes the precision, recall, and F1-score of chunk detection, and supports IOB, IOE, IOBES and IO (also known as plain) tagging schemes. Here is a NER example of labeling for these tagging schemes:

====== ====== ======  =====  ==  ============   =====  ===== =====  ==  =========
       Li     Ming    works  at  Agricultural   Bank   of    China  in  Beijing.
====== ====== ======  =====  ==  ============   =====  ===== =====  ==  =========
IO     I-PER  I-PER   O      O   I-ORG          I-ORG  I-ORG I-ORG  O   I-LOC
IOB    B-PER  I-PER   O      O   B-ORG          I-ORG  I-ORG I-ORG  O   B-LOC
IOE    I-PER  E-PER   O      O   I-ORG          I-ORG  I-ORG E-ORG  O   E-LOC
IOBES  B-PER  E-PER   O      O   I-ORG          I-ORG  I-ORG E-ORG  O   S-LOC
====== ====== ======  =====  ==  ============   =====  ===== =====  ==  =========

There are three chunk types (named entity types): PER (person), ORG (organization) and LOC (location), and we can see that the labels have the form <tag type>-<chunk type>.

Since the calculations actually use label ids rather than labels, extra attention should be paid when mapping labels to ids to make ChunkEvalOp work. The key point is that the listed equations are satisfied by the ids.

tag_type = label % num_tag_type
chunk_type = label / num_tag_type

where num_tag_type is the number of tag types in the tagging scheme, num_chunk_type is the number of chunk types, and tag_type gets its value from the following table.

Scheme Begin Inside End   Single
 plain   0     -      -     -
 IOB     0     1      -     -
 IOE     -     0      1     -
 IOBES   0     1      2     3

Still use NER as example, assuming the tagging scheme is IOB while chunk types are ORG, PER and LOC. To satisfy the above equations, the label map can be like this:

B-ORG  0
I-ORG  1
B-PER  2
I-PER  3
B-LOC  4
I-LOC  5
O      6

It's not hard to verify the equations, noting that the number of chunk types is 3 and the number of tag types in the IOB scheme is 2. For example, the label id of I-LOC is 5, the tag type id of I-LOC is 1, and the chunk type id of I-LOC is 2, which is consistent with the results from the equations.
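
As a quick sanity check of this mapping in plain Python (IOB scheme, so num_tag_type is 2):

num_tag_type = 2   # IOB: Begin and Inside
label_id = 5       # I-LOC in the label map above
tag_type = label_id % num_tag_type     # -> 1, i.e. Inside
chunk_type = label_id // num_tag_type  # -> 2, i.e. LOC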

Parameters:
  • input (Variable) – prediction output of the network.
  • label (Variable) – label of the test data set.
  • chunk_scheme (str) – The labeling scheme indicating how to encode the chunks. Must be IOB, IOE, IOBES or plain. See the description above for details.
  • num_chunk_types (int) – The number of chunk types. See the description above for details.
  • excluded_chunk_types (list) – A list including chunk type ids indicating chunk types that are not counted. See the description for details
Returns:

tuple containing: precision, recall, f1_score, num_infer_chunks, num_label_chunks, num_correct_chunks

Return type:

tuple

Examples

crf = fluid.layers.linear_chain_crf(
    input=hidden, label=label, param_attr=ParamAttr(name="crfw"))
crf_decode = fluid.layers.crf_decoding(
    input=hidden, param_attr=ParamAttr(name="crfw"))
fluid.layers.chunk_eval(
    input=crf_decode,
    label=label,
    chunk_scheme="IOB",
    num_chunk_types=(label_dict_len - 1) / 2)

sequence_conv

paddle.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=None, bias_attr=None, param_attr=None, act=None, name=None)

This function creates the sequence_conv op, which convolves over the time dimension of the input sequences, using the filter and stride configurations given in the input parameters.

Parameters:
  • input (Variable) – (LoDTensor) the input(X) is a LodTensor, which supports variable-time length input sequence. The underlying tensor in this LoDTensor is a matrix with shape (T, N), where T is the total time steps in this mini-batch and N is the input_hidden_size
  • num_filters (int) – number of filters.
  • filter_size (int) – the filter size (H and W).
  • filter_stride (int) – stride of the filter.
  • padding (bool) – if True, add paddings.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of sequence_conv. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, sequence_conv will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of sequence_conv. If it is set to None or one attribute of ParamAttr, sequence_conv will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

output of sequence_conv

Return type:

Variable
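
Examples

A minimal usage sketch, assuming a hypothetical LoD-level-1 input whose hidden size is 128:

seq_data = fluid.layers.data(name='seq_data', shape=[128],
                             dtype='float32', lod_level=1)
seq_conv = fluid.layers.sequence_conv(input=seq_data, num_filters=64,
                                      filter_size=3, act='relu')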

conv2d

paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

The convolution2D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input and Output are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. Filter is in MCHW format, where M is the number of output image channels, C is the number of input image channels, H is the height of the filter, and W is the width of the filter. If groups is greater than 1, C will equal the number of input image channels divided by groups. Please refer to UFLDL's convolution for more details. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

Where:

  • \(X\): Input value, a tensor with NCHW format.
  • \(W\): Filter value, a tensor with MCHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{out}, C_{in}, H_f, W_f)\)

  • Output:

    Output shape: \((N, C_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}H_{out}&= \frac{(H_{in} + 2 * paddings[0] - (dilations[0] * (H_f - 1) + 1))}{strides[0]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[1] - (dilations[1] * (W_f - 1) + 1))}{strides[1]} + 1\end{split}\]
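
For instance, with a 32 x 32 input, a 3 x 3 filter, stride 1, padding 0 and dilation 1 (as in the example further below), this gives \(H_{out} = W_{out} = (32 + 0 - 3) / 1 + 1 = 30\).
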
Parameters:
  • input (Variable) – The input image with [N, C, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the number of output image channels.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv2d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1.
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv2d. If it is set to None or one attribute of ParamAttr, conv2d will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with \(Normal(0.0, std)\), where \(std\) is \((\frac{2.0 }{filter\_elem\_num})^{0.5}\). Default: None.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv2d. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv2d will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • act (str) – Activation type, if it is set to None, activation is not appended. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None
Returns:

The tensor variable storing the convolution and non-linearity activation result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")

conv3d

paddle.fluid.layers.conv3d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

Convolution3D Layer

The convolution3D layer calculates the output based on the input, filter and strides, paddings, dilations, groups parameters. Input(Input) and Output(Output) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. Convolution3D is similar to Convolution2D but adds one dimension (depth). If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

  • \(X\): Input value, a tensor with NCDHW format.
  • \(W\): Filter value, a tensor with MCDHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{out}, C_{in}, D_f, H_f, W_f)\)

  • Output: Output shape: \((N, C_{out}, D_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}D_{out}&= \frac{(D_{in} + 2 * paddings[0] - (dilations[0] * (D_f - 1) + 1))}{strides[0]} + 1 \\ H_{out}&= \frac{(H_{in} + 2 * paddings[1] - (dilations[1] * (H_f - 1) + 1))}{strides[1]} + 1 \\ W_{out}&= \frac{(W_{in} + 2 * paddings[2] - (dilations[2] * (W_f - 1) + 1))}{strides[2]} + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, D, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the number of output image channels.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain three integers, (filter_size_D, filter_size_H, filter_size_W). Otherwise, the filter will be a square.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain three integers, (stride_D, stride_H, stride_W). Otherwise, the stride_D = stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain three integers, (padding_D, padding_H, padding_W). Otherwise, the padding_D = padding_H = padding_W = padding. Default: padding = 0.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the dilation_D = dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv3d Layer. According to grouped convolution in Alex Krizhevsky’s Deep CNN paper: when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv3d. If it is set to None or one attribute of ParamAttr, conv3d will create ParamAttr as param_attr. If it is set to None, the parameter is initialized with \(Normal(0.0, std)\), and the \(std\) is \((\frac{2.0 }{filter\_elem\_num})^{0.5}\). Default: None.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv3d. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv3d will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

The tensor variable storing the convolution and non-linearity activation result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d = fluid.layers.conv3d(input=data, num_filters=2, filter_size=3, act="relu")

sequence_pool

paddle.fluid.layers.sequence_pool(input, pool_type)

This function adds the operator for sequence pooling. It pools features of all time-steps of each instance, and is applied on top of the input using the pool_type mentioned in the parameters.

It supports four pool_type:

  • average: \(Out[i] = \frac{\sum_i X_i}{N}\)
  • sum: \(Out[i] = \sum_jX_{ij}\)
  • sqrt: \(Out[i] = \frac{\sum_jX_{ij}}{\sqrt{len(X_i)}}\)
  • max: \(Out[i] = max(X_i)\)
x is a 1-level LoDTensor:
  x.lod = [[2, 3, 2]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) == out.dims[0]

for different pool_type:
  average: out.data = [2, 4, 3], where 2=(1+3)/2, 4=(2+4+6)/3, 3=(5+1)/2
  sum    : out.data = [4, 12, 6], where 4=1+3, 12=2+4+6, 6=5+1
  sqrt   : out.data = [2.82, 6.93, 4.24], where 2.82=(1+3)/sqrt(2),
             6.93=(2+4+6)/sqrt(3), 4.24=(5+1)/sqrt(2)
  max    : out.data = [3, 6, 5], where 3=max(1,3), 6=max(2,4,6), 5=max(5,1)
  last   : out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
  first  : out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)
Parameters:
  • input (variable) – The input variable which is a LoDTensor.
  • pool_type (string) – The pooling type of sequence_pool. It supports average, sum, sqrt and max.
Returns:

The sequence pooling variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
avg_x = fluid.layers.sequence_pool(input=x, pool_type='average')
sum_x = fluid.layers.sequence_pool(input=x, pool_type='sum')
sqrt_x = fluid.layers.sequence_pool(input=x, pool_type='sqrt')
max_x = fluid.layers.sequence_pool(input=x, pool_type='max')
last_x = fluid.layers.sequence_pool(input=x, pool_type='last')
first_x = fluid.layers.sequence_pool(input=x, pool_type='first')

sequence_softmax

paddle.fluid.layers.sequence_softmax(input, use_cudnn=False, name=None)

This function computes the softmax activation among all time-steps for each sequence. The dimension of each time-step should be 1. Thus, the shape of input Tensor can be either \([N, 1]\) or \([N]\), where \(N\) is the sum of the length of all sequences.

For i-th sequence in a mini-batch:

\[Out(X[lod[i]:lod[i+1]], :) = \frac{\exp(X[lod[i]:lod[i+1], :])}{\sum(\exp(X[lod[i]:lod[i+1], :]))}\]

For example, for a mini-batch of 3 variable-length sequences containing 2, 3 and 2 time-steps respectively, the lod of which is [0, 2, 5, 7], softmax will be computed among \(X[0:2, :]\), \(X[2:5, :]\) and \(X[5:7, :]\), and \(N\) turns out to be 7.

Parameters:
  • input (Variable) – The input variable which is a LoDTensor.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: False.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

output of sequence_softmax

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_sequence_softmax = fluid.layers.sequence_softmax(input=x)

softmax

paddle.fluid.layers.softmax(input, use_cudnn=True, name=None)

The input of the softmax operator is a tensor of any rank. The output tensor has the same shape as the input.

The input tensor will first be logically flattened to a 2-D matrix. The matrix’s second dimension(row length) is as same as the last dimension of the input tensor, and the first dimension(column length) is the product of all other dimensions of the input tensor. For each row of the matrix, the softmax operator squashes the K-dimensional(K is the width of the matrix, which is also the size of the input tensor’s last dimension) vector of arbitrary real values to a K-dimensional vector of real values in the range [0, 1] that add up to 1.

For each element, it computes the exponential of that element and the sum of the exponentials of all elements in the same row. The output of the softmax operator is then the ratio of the element's exponential to that sum.

For each row \(i\) and each column \(j\) in the matrix, we have:

\[Out[i, j] = \frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])}\]
Parameters:
  • input (Variable) – The input variable.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

output of softmax

Return type:

Variable

Examples

fc = fluid.layers.fc(input=x, size=10)
softmax = fluid.layers.softmax(input=fc)

pool2d

paddle.fluid.layers.pool2d(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, name=None)

The pooling2d operation calculates the output based on the input, pooling_type and the ksize, strides, paddings parameters. Input(X) and output(Out) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The ksize, strides and paddings parameters each contain two elements, which represent height and width, respectively. The input(X) size and output(Out) size may be different.

Example:

Input:

X shape: \((N, C, H_{in}, W_{in})\)

Output:

Out shape: \((N, C, H_{out}, W_{out})\)

For ceil_mode = false:

\[\begin{split}H_{out} &= \frac{(H_{in} - ksize[0] + 2 * paddings[0])}{strides[0]} + 1 \\ W_{out} &= \frac{(W_{in} - ksize[1] + 2 * paddings[1])}{strides[1]} + 1\end{split}\]

For ceil_mode = true:

\[\begin{split}H_{out} &= \frac{(H_{in} - ksize[0] + 2 * paddings[0] + strides[0] - 1)}{strides[0]} + 1 \\ W_{out} &= \frac{(W_{in} - ksize[1] + 2 * paddings[1] + strides[1] - 1)}{strides[1]} + 1\end{split}\]

Parameters:
  • input (Variable) – The input tensor of pooling operator. The format of input tensor is NCHW, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature.
  • pool_size (int) – The side length of pooling windows. All pooling windows are squares with pool_size on a side.
  • pool_type – (string), pooling type, can be “max” for max-pooling and “avg” for average-pooling
  • pool_stride (int) – stride of the pooling layer.
  • pool_padding (int) – padding size.
  • global_pooling – (bool, default false) Whether to use the global pooling. If global_pooling = true, ksize and paddings will be ignored
  • use_cudnn – (bool, default true) Use the cudnn kernel or not; it is valid only when the cudnn library is installed.
  • ceil_mode – (bool, default false) Whether to use the ceil function to calculate output height and width. False is the default. If it is set to False, the floor function will be used.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The pooling result.

Return type:

Variable

Raises:
  • ValueError – If ‘pool_type’ is not “max” nor “avg”
  • ValueError – If ‘global_pooling’ is False and ‘pool_size’ is -1
  • ValueError – If ‘use_cudnn’ is not a bool value.

Examples

data = fluid.layers.data(
    name='data', shape=[3, 32, 32], dtype='float32')
pool2d = fluid.layers.pool2d(
                  input=data,
                  pool_size=2,
                  pool_type='max',
                  pool_stride=1,
                  global_pooling=False)

pool3d

paddle.fluid.layers.pool3d(input, pool_size=-1, pool_type='max', pool_stride=1, pool_padding=0, global_pooling=False, use_cudnn=True, ceil_mode=False, name=None)

This function adds the operator for pooling in 3-dimensions, using the pooling configurations mentioned in input parameters.

Parameters:
  • input (Variable) – The input tensor of the pooling operator, in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature.
  • pool_size (int) – The side length of the pooling windows. All pooling windows are cubes with pool_size on a side.
  • pool_type (str) – The pooling type, can be “max” for max-pooling and “avg” for average-pooling.
  • pool_stride (int) – stride of the pooling layer.
  • pool_padding (int) – padding size.
  • global_pooling (bool) – Whether to use global pooling. If global_pooling = true, pool_size and pool_padding will be ignored. Default: False.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True.
  • ceil_mode (bool) – Whether to use the ceil function to calculate output height and width. If it is set to False, the floor function will be used. Default: False.
  • name (str) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

output of pool3d layer.

Return type:

Variable
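
Examples

A minimal usage sketch, assuming a hypothetical NCDHW feature map:

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
pool3d = fluid.layers.pool3d(input=data,
                             pool_size=2,
                             pool_type='max',
                             pool_stride=1,
                             global_pooling=False)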

batch_norm

paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, fuse_with_relu=False)

Batch Normalization Layer

Can be used as a normalizer function for conv2d and fully_connected operations. The required data format for this layer is one of the following:

  1. NHWC [batch, in_height, in_width, in_channels]
  2. NCHW [batch, in_channels, in_height, in_width]

Refer to Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift for more details.

\(input\) is the input features over a mini-batch.

\[\begin{split}\mu_{\beta} &\gets \frac{1}{m} \sum_{i=1}^{m} x_i \qquad &//\ \ mini-batch\ mean \\ \sigma_{\beta}^{2} &\gets \frac{1}{m} \sum_{i=1}^{m}(x_i - \ \mu_{\beta})^2 \qquad &//\ mini-batch\ variance \\ \hat{x_i} &\gets \frac{x_i - \mu_\beta} {\sqrt{\ \sigma_{\beta}^{2} + \epsilon}} \qquad &//\ normalize \\ y_i &\gets \gamma \hat{x_i} + \beta \qquad &//\ scale\ and\ shift\end{split}\]
Parameters:
  • input (variable) – The input variable which is a LoDTensor.
  • act (string, Default None) – Activation type, linear|relu|prelu|...
  • is_test (bool, Default False) – A flag indicating whether the layer is being used for testing (inference) rather than training.
  • momentum (float, Default 0.9) –
  • epsilon (float, Default 1e-05) –
  • param_attr (ParamAttr|None) – The parameter attribute for Parameter scale of batch_norm. If it is set to None or one attribute of ParamAttr, batch_norm will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • bias_attr (ParamAttr|None) – The parameter attribute for the bias of batch_norm. If it is set to None or one attribute of ParamAttr, batch_norm will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • data_layout (string, default NCHW) – NCHW|NHWC
  • in_place (bool, Default False) – Make the input and output of batch norm reuse memory.
  • name (string, Default None) – A name for this layer(optional). If set None, the layer will be named automatically.
  • moving_mean_name (string, Default None) – The name of the moving_mean which stores the global Mean.
  • moving_variance_name (string, Default None) – The name of the moving_variance which stores the global Variance.
  • do_model_average_for_mean_and_var (bool, Default False) – Do model average for mean and variance or not.
  • fuse_with_relu (bool) – if True, this OP performs relu after batch norm.
Returns:

A tensor variable which is the result after applying batch normalization on the input.

Return type:

Variable

Examples

hidden1 = fluid.layers.fc(input=x, size=200, param_attr='fc1.w')
hidden2 = fluid.layers.batch_norm(input=hidden1)

beam_search_decode

paddle.fluid.layers.beam_search_decode(ids, scores, beam_size, end_id, name=None)

Beam Search Decode Layer. This layer constructs the full hypotheses for each source sentence by walking back along the LoDTensorArray ids, whose lods can be used to restore the path in the beam search tree. Please see the following demo for a full beam search usage example:

fluid/tests/book/test_machine_translation.py
Parameters:
  • ids (Variable) – The LodTensorArray variable containing the selected ids of all steps.
  • scores (Variable) – The LodTensorArray variable containing the selected scores of all steps.
  • beam_size (int) – The beam width used in beam search.
  • end_id (int) – The id of end token.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The LodTensor pair containing the generated id sequences and the corresponding scores. The shapes and lods of the two LodTensor are same. The lod level is 2 and the two levels separately indicate how many hypotheses each source sentence has and how many ids each hypothesis has.

Return type:

Variable

Examples
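
A minimal sketch with illustrative beam_size and end_id values, assuming ids and scores are the LoDTensorArray variables accumulated step by step by a beam_search decoding loop (see the test referenced above for a complete program):

finished_ids, finished_scores = fluid.layers.beam_search_decode(
    ids=ids, scores=scores, beam_size=5, end_id=0)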

conv2d_transpose

paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

Convolution2D transpose layer

The convolution2D transpose layer calculates the output based on the input, filter, and the dilations, strides, paddings parameters. Input(Input) and output(Output) are in NCHW format, where N is batch size, C is the number of channels, H is the height of the feature, and W is the width of the feature. The dilations, strides and paddings parameters each contain two elements, which represent height and width, respectively. For details of the convolution transpose layer, please refer to the following explanation and the references therein. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

Where:

  • \(X\): Input value, a tensor with NCHW format.
  • \(W\): Filter value, a tensor with MCHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{in}, C_{out}, H_f, W_f)\)

  • Output:

    Output shape: \((N, C_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}H^\prime_{out} &= (H_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (H_f - 1) + 1 \\ W^\prime_{out} &= (W_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (W_f - 1) + 1 \\ H_{out} \in [ H^\prime_{out}, H^\prime_{out} + strides[0] ) \\ W_{out} \in [ W^\prime_{out}, W^\prime_{out} + strides[1] )\end{split}\]
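
For instance, with a 32 x 32 input, a 3 x 3 filter, stride 1, padding 0 and dilation 1 (as in the example further below), \(H^\prime_{out} = W^\prime_{out} = (32 - 1) * 1 - 0 + 1 * (3 - 1) + 1 = 34\), so the default output size is 34 x 34.
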
Parameters:
  • input (Variable) – The input image with [N, C, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the number of output image channels.
  • output_size (int|tuple|None) – The output image size. If output_size is a tuple, it must contain two integers, (image_H, image_W). If it is None, filter_size, padding and stride are used to calculate output_size. If output_size and filter_size are specified at the same time, they should follow the formula above.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square. If it is None, the output size is used to calculate filter_size.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain two integers, (padding_H, padding_W). Otherwise, the padding_H = padding_W = padding. Default: padding = 0.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain two integers, (dilation_H, dilation_W). Otherwise, the dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv2d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups = 1.
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv2d_transpose. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv2d_transpose. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv2d_transpose will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True.
  • act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

The tensor variable storing the convolution transpose result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)

conv3d_transpose

paddle.fluid.layers.conv3d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None)

Convolution3D transpose layer

The convolution3D transpose layer calculates the output based on the input, filter, and the dilations, strides, paddings parameters. Input(Input) and output(Output) are in NCDHW format, where N is batch size, C is the number of channels, D is the depth of the feature, H is the height of the feature, and W is the width of the feature. The dilations, strides and paddings parameters each contain three elements, which represent depth, height and width, respectively. For details of the convolution transpose layer, please refer to the following explanation and the references therein. If bias attribution and activation type are provided, bias is added to the output of the convolution, and the corresponding activation function is applied to the final result.

For each input \(X\), the equation is:

\[Out = \sigma (W \ast X + b)\]

In the above equation:

  • \(X\): Input value, a tensor with NCDHW format.
  • \(W\): Filter value, a tensor with MCDHW format.
  • \(\ast\): Convolution operation.
  • \(b\): Bias value, a 2-D tensor with shape [M, 1].
  • \(\sigma\): Activation function.
  • \(Out\): Output value, the shape of \(Out\) and \(X\) may be different.

Example

  • Input:

    Input shape: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)

    Filter shape: \((C_{in}, C_{out}, D_f, H_f, W_f)\)

  • Output:

    Output shape: \((N, C_{out}, D_{out}, H_{out}, W_{out})\)

Where

\[\begin{split}D_{out} &= (D_{in} - 1) * strides[0] - 2 * paddings[0] + dilations[0] * (D_f - 1) + 1 \\ H_{out} &= (H_{in} - 1) * strides[1] - 2 * paddings[1] + dilations[1] * (H_f - 1) + 1 \\ W_{out} &= (W_{in} - 1) * strides[2] - 2 * paddings[2] + dilations[2] * (W_f - 1) + 1\end{split}\]
Parameters:
  • input (Variable) – The input image with [N, C, D, H, W] format.
  • num_filters (int) – The number of filters. It is the same as the number of output image channels.
  • output_size (int|tuple|None) – The output image size. If output size is a tuple, it must contain three integers, (image_D, image_H, image_W). This parameter only works when filter_size is None.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain three integers, (filter_size_D, filter_size_H, filter_size_W). Otherwise, the filter will be a square. If it is None, the output size is used to calculate filter_size.
  • padding (int|tuple) – The padding size. If padding is a tuple, it must contain three integers, (padding_D, padding_H, padding_W). Otherwise, the padding_D = padding_H = padding_W = padding. Default: padding = 0.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain three integers, (stride_D, stride_H, stride_W). Otherwise, the stride_D = stride_H = stride_W = stride. Default: stride = 1.
  • dilation (int|tuple) – The dilation size. If dilation is a tuple, it must contain three integers, (dilation_D, dilation_H, dilation_W). Otherwise, the dilation_D = dilation_H = dilation_W = dilation. Default: dilation = 1.
  • groups (int) – The groups number of the Conv3d transpose layer. Inspired by grouped convolution in Alex Krizhevsky’s Deep CNN paper, in which when group=2, the first half of the filters is only connected to the first half of the input channels, while the second half of the filters is only connected to the second half of the input channels. Default: groups=1
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of conv3d_transpose. If it is set to None or one attribute of ParamAttr, conv3d_transpose will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of conv3d_transpose. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, conv3d_transpose will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • use_cudnn (bool) – Use cudnn kernel or not, it is valid only when the cudnn library is installed. Default: True
  • act (str) – Activation type, if it is set to None, activation is not appended. Default: None.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable storing the convolution transpose result.

Return type:

Variable

Raises:

ValueError – If the shapes of input, filter_size, stride, padding and groups mismatch.

Examples

data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d_transpose = fluid.layers.conv3d_transpose(input=data, num_filters=2, filter_size=3)

sequence_expand

paddle.fluid.layers.sequence_expand(x, y, ref_level=-1, name=None)

Sequence Expand Layer. This layer will expand the input variable x according to specified level lod of y. Please note that lod level of x is at most 1 and rank of x is at least 2. When rank of x is greater than 2, then it would be viewed as a 2-D tensor. Following examples will explain how sequence_expand works:

* Case 1
    x is a LoDTensor:
        x.lod  = [[2,        2]]
        x.data = [[a], [b], [c], [d]]
        x.dims = [4, 1]

    y is a LoDTensor:
        y.lod = [[2,    2],
                 [3, 3, 1, 1]]

    ref_level: 0

    then output is a 1-level LoDTensor:
        out.lod =  [[2,        2,        2,        2]]
        out.data = [[a], [b], [a], [b], [c], [d], [c], [d]]
        out.dims = [8, 1]

* Case 2
    x is a Tensor:
        x.data = [[a], [b], [c]]
        x.dims = [3, 1]

    y is a LoDTensor:
        y.lod = [[2, 0, 3]]

    ref_level: -1

    then output is a Tensor:
        out.data = [[a], [a], [c], [c], [c]]
        out.dims = [5, 1]
Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a LoDTensor.
  • ref_level (int) – Lod level of y to be referred by x. If set to -1, refer the last level of lod.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The expanded variable which is a LoDTensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
                 dtype='float32', lod_level=1)
out = layers.sequence_expand(x=x, y=y, ref_level=0)

sequence_expand_as

paddle.fluid.layers.sequence_expand_as(x, y, name=None)

Sequence Expand As Layer. This layer will expand the input variable x according to the zeroth level lod of y. The current implementation requires that the lod level of Input(Y) be 1, and that the first dimension of Input(X) be equal to the size of Input(Y)’s zeroth level lod; the lod of Input(X) is not considered.

Following examples will explain how sequence_expand_as works:

* Case 1:

    Given a 1-level LoDTensor input(X)
        X.data = [[a], [b], [c], [d]]
        X.dims = [4, 1]
    and input(Y)
        Y.lod = [[0, 3, 6, 7, 8]]
    ref_level: 0
    then we get 1-level LoDTensor
        Out.lod =  [[0,            3,              6,  7,  8]]
        Out.data = [[a], [a], [a], [b], [b], [b], [c], [d]]
        Out.dims = [8, 1]

* Case 2:

    Given a common Tensor input(X)
        X.data = [[a, b], [c, d], [e, f]]
        X.dims = [3, 2]
    and input(Y)
        Y.lod = [[0, 2, 3, 6]]
    ref_level: 0
    then we get a common LoDTensor
        Out.lod =  [[0,             2,     3,                    6]]
        Out.data = [[a, b], [a, b], [c, d], [e, f], [e, f], [e, f]]
        Out.dims = [6, 2]
Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a LoDTensor.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The expanded variable which is a LoDTensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
                 dtype='float32', lod_level=1)
out = layers.sequence_expand_as(x=x, y=y)

sequence_pad

paddle.fluid.layers.sequence_pad(x, pad_value, maxlen=None)

Sequence Pad Operator

This operator pads sequences in a same batch to a consistent length. The length is specified by attribute ‘padded_length’. New elements, whose values are specified by input ‘PadValue’, will be appended to the end of each sequence, to make their final lengths consistent.

Following are cases to better explain how this works:

Case 1:

    Given a 1-level LoDTensor input(X):
        X.lod  = [[0, 2, 5]]
        X.data = [a, b, c, d, e]
    and Input(PadValue):
        PadValue.data = [0]
    and attribute 'padded_length' = 4,
    then we get LoDTensor:
        Out.data    = [[a, b, 0, 0], [c, d, e, 0]]
        Length.data = [[2], [3]]

Case 2:

    Given a 1-level LoDTensor input(X):
        X.lod  = [[0, 2, 5]]
        X.data = [[a1, a2], [b1, b2], [c1, c2], [d1, d2], [e1, e2]]
    and Input(PadValue):
        PadValue.data = [0]
    and attribute 'padded_length' = -1, which means using the length of the longest input sequence (3 in this case),
    then we get LoDTensor:
        Out.data    = [[[a1, a2], [b1, b2], [0, 0]], [[c1, c2], [d1, d2], [e1, e2]]]
        Length.data = [[2], [3]]

Case 3:

    Given a 1-level LoDTensor input(X):
        X.lod  = [[0, 2, 5]]
        X.data = [[a1, a2], [b1, b2], [c1, c2], [d1, d2], [e1, e2]]
    and Input(PadValue):
        PadValue.data = [p1, p2]
    and attribute 'padded_length' = -1, which means using the length of the longest input sequence (3 in this case),
    then we get LoDTensor:
        Out.data    = [[[a1, a2], [b1, b2], [p1, p2]], [[c1, c2], [d1, d2], [e1, e2]]]
        Length.data = [[2], [3]]

Parameters:
  • x (Variable) – Input variable which should contain lod information.
  • pad_value (Variable) – The Variable that holds the values that will be filled into the padded steps. It can be a scalar or a tensor whose shape equals the shape of a time step in the sequences. If it is a scalar, it will be automatically broadcasted to the shape of a time step.
  • maxlen (int, default None) – The length of padded sequences. It can be None or any positive int. When it is None, all sequences will be padded up to the length of the longest one among them; when it is a certain positive value, it must be greater than the length of the longest original sequence.
Returns:

The padded sequence batch and the original lengths before padding. All the padded sequences have the same length.

Return type:

Variable

Examples

import numpy

x = fluid.layers.data(name='y', shape=[10, 5],
                 dtype='float32', lod_level=1)
pad_value = fluid.layers.assign(input=numpy.array([0]))
out = fluid.layers.sequence_pad(x=x, pad_value=pad_value)

lstm_unit

paddle.fluid.layers.lstm_unit(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)

Lstm unit layer. The equation of a lstm step is:

\[ \begin{align}\begin{aligned}i_t & = \sigma(W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i)\\f_t & = \sigma(W_{x_f}x_{t} + W_{h_f}h_{t-1} + b_f)\\c_t & = f_tc_{t-1} + i_t tanh (W_{x_c}x_t + W_{h_c}h_{t-1} + b_c)\\o_t & = \sigma(W_{x_o}x_{t} + W_{h_o}h_{t-1} + b_o)\\h_t & = o_t tanh(c_t)\end{aligned}\end{align} \]

The inputs of the lstm unit include \(x_t\), \(h_{t-1}\) and \(c_{t-1}\). The 2nd dimensions of \(h_{t-1}\) and \(c_{t-1}\) should be the same. The implementation separates the linear transformation from the non-linear transformation. Here, we take \(i_t\) as an example. The linear transformation is applied by calling an fc layer and the equation is:

\[L_{i_t} = W_{x_i}x_{t} + W_{h_i}h_{t-1} + b_i\]

The non-linear transformation is applied by calling lstm_unit_op and the equation is:

\[i_t = \sigma(L_{i_t})\]

This layer has two outputs: \(h_t\) and \(c_t\).

Parameters:
  • x_t (Variable) – The input value of current step, a 2-D tensor with shape M x N, M for batch size and N for input size.
  • hidden_t_prev (Variable) – The hidden value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
  • cell_t_prev (Variable) – The cell value of lstm unit, a 2-D tensor with shape M x S, M for batch size and S for size of lstm unit.
  • forget_bias (float) – The forget bias of lstm unit.
  • param_attr (ParamAttr|None) – The parameter attribute for the learnable hidden-hidden weights. If it is set to None or one attribute of ParamAttr, lstm_unit will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • bias_attr (ParamAttr|None) – The bias attribute for the learnable bias weights. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, lstm_unit will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The hidden value and cell value of lstm unit.

Return type:

tuple

Raises:

ValueError – Raised when the ranks of x_t, hidden_t_prev and cell_t_prev are not 2, when the 1st dimensions of x_t, hidden_t_prev and cell_t_prev are not the same, or when the 2nd dimensions of hidden_t_prev and cell_t_prev are not the same.

Examples

x_t = fluid.layers.fc(input=x_t_data, size=10)
prev_hidden = fluid.layers.fc(input=prev_hidden_data, size=30)
prev_cell = fluid.layers.fc(input=prev_cell_data, size=30)
hidden_value, cell_value = fluid.layers.lstm_unit(x_t=x_t,
                                       hidden_t_prev=prev_hidden,
                                       cell_t_prev=prev_cell)

reduce_sum

paddle.fluid.layers.reduce_sum(input, dim=None, keep_dim=False, name=None)

Computes the sum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimensions along which the sum is performed. If None, sum all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x)  # [3.5]
fluid.layers.reduce_sum(x, dim=0)  # [0.3, 0.5, 1.1, 1.6]
fluid.layers.reduce_sum(x, dim=-1)  # [1.9, 1.6]
fluid.layers.reduce_sum(x, dim=1, keep_dim=True)  # [[1.9], [1.6]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1, 2], [3, 4]],
#      [[5, 6], [7, 8]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_sum(x, dim=[1, 2]) # [10, 26]
fluid.layers.reduce_sum(x, dim=[0, 1]) # [16, 20]

reduce_mean

paddle.fluid.layers.reduce_mean(input, dim=None, keep_dim=False, name=None)

Computes the mean of the input tensor’s elements along the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimension along which the mean is computed. If None, compute the mean over all elements of input and return a variable with a single element, otherwise it must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank(input) + dim[i]\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced mean Variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x)  # [0.4375]
fluid.layers.reduce_mean(x, dim=0)  # [0.15, 0.25, 0.55, 0.8]
fluid.layers.reduce_mean(x, dim=-1)  # [0.475, 0.4]
fluid.layers.reduce_mean(
    x, dim=1, keep_dim=True)  # [[0.475], [0.4]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_mean(x, dim=[1, 2]) # [2.5, 6.5]
fluid.layers.reduce_mean(x, dim=[0, 1]) # [4.0, 5.0]

reduce_max

paddle.fluid.layers.reduce_max(input, dim=None, keep_dim=False, name=None)

Computes the maximum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimension along which the maximum is computed. If None, compute the maximum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x)  # [0.9]
fluid.layers.reduce_max(x, dim=0)  # [0.2, 0.3, 0.6, 0.9]
fluid.layers.reduce_max(x, dim=-1)  # [0.9, 0.7]
fluid.layers.reduce_max(x, dim=1, keep_dim=True)  # [[0.9], [0.7]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_max(x, dim=[1, 2]) # [4.0, 8.0]
fluid.layers.reduce_max(x, dim=[0, 1]) # [7.0, 8.0]

reduce_min

paddle.fluid.layers.reduce_min(input, dim=None, keep_dim=False, name=None)

Computes the minimum of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimensions along which the minimum is computed. If None, compute the minimum over all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x)  # [0.1]
fluid.layers.reduce_min(x, dim=0)  # [0.1, 0.2, 0.5, 0.7]
fluid.layers.reduce_min(x, dim=-1)  # [0.2, 0.1]
fluid.layers.reduce_min(x, dim=1, keep_dim=True)  # [[0.2], [0.1]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_min(x, dim=[1, 2]) # [1.0, 5.0]
fluid.layers.reduce_min(x, dim=[0, 1]) # [1.0, 2.0]

reduce_prod

paddle.fluid.layers.reduce_prod(input, dim=None, keep_dim=False, name=None)

Computes the product of tensor elements over the given dimension.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • dim (list|int|None) – The dimensions along which the product is performed. If None, multiply all elements of input and return a Tensor variable with a single element, otherwise must be in the range \([-rank(input), rank(input))\). If \(dim[i] < 0\), the dimension to reduce is \(rank + dim[i]\).
  • keep_dim (bool|False) – Whether to reserve the reduced dimension in the output Tensor. The result tensor will have one fewer dimension than the input unless keep_dim is true.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The reduced Tensor variable.

Return type:

Variable

Examples

# x is a Tensor variable with following elements:
#    [[0.2, 0.3, 0.5, 0.9]
#     [0.1, 0.2, 0.6, 0.7]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_prod(x)  # [0.0002268]
fluid.layers.reduce_prod(x, dim=0)  # [0.02, 0.06, 0.3, 0.63]
fluid.layers.reduce_prod(x, dim=-1)  # [0.027, 0.0084]
fluid.layers.reduce_prod(x, dim=1,
                         keep_dim=True)  # [[0.027], [0.0084]]

# x is a Tensor variable with shape [2, 2, 2] and elements as below:
#      [[[1.0, 2.0], [3.0, 4.0]],
#      [[5.0, 6.0], [7.0, 8.0]]]
# Each example is followed by the corresponding output tensor.
fluid.layers.reduce_prod(x, dim=[1, 2]) # [24.0, 1680.0]
fluid.layers.reduce_prod(x, dim=[0, 1]) # [105.0, 384.0]

sequence_first_step

paddle.fluid.layers.sequence_first_step(input)

This function gets the first step of each sequence in the input.

x is a 1-level LoDTensor:
  x.lod = [[2, 3, 2]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) == out.dims[0]
  out.data = [1, 2, 5], where 1=first(1,3), 2=first(2,4,6), 5=first(5,1)
Parameters: input (Variable) – The input variable which is a LoDTensor.
Returns: The sequence’s first step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_first_step = fluid.layers.sequence_first_step(input=x)

sequence_last_step

paddle.fluid.layers.sequence_last_step(input)

This function gets the last step of each sequence in the input.

x is a 1-level LoDTensor:
  x.lod = [[2, 3, 2]]
  x.data = [1, 3, 2, 4, 6, 5, 1]
  x.dims = [7, 1]

then output is a Tensor:
  out.dim = [3, 1]
  with condition len(x.lod[-1]) == out.dims[0]
  out.data = [3, 6, 1], where 3=last(1,3), 6=last(2,4,6), 1=last(5,1)
Parameters: input (Variable) – The input variable which is a LoDTensor.
Returns: The sequence’s last step variable which is a Tensor.

Examples

x = fluid.layers.data(name='x', shape=[7, 1],
                 dtype='float32', lod_level=1)
x_last_step = fluid.layers.sequence_last_step(input=x)

dropout

paddle.fluid.layers.dropout(x, dropout_prob, is_test=False, seed=None, name=None)

Computes dropout.

Drop or keep each element of x independently. Dropout is a regularization technique for reducing overfitting by preventing neuron co-adaptation during training. The dropout operator randomly sets (according to the given dropout probability) the outputs of some units to zero, while others remain unchanged.

Parameters:
  • x (Variable) – The input tensor variable.
  • dropout_prob (float) – Probability of setting units to zero.
  • is_test (bool) – A flag indicating whether it is in test phase or not.
  • seed (int) – A Python integer used to create random seeds. If this parameter is set to None, a random seed is used. NOTE: If an integer seed is given, always the same output units will be dropped. DO NOT use a fixed seed in training.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

A tensor variable with the same shape as x.

Return type:

Variable

Examples

x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
droped = fluid.layers.dropout(x, dropout_prob=0.5)

split

paddle.fluid.layers.split(input, num_or_sections, dim=-1, name=None)

Split the input tensor into multiple sub-tensors.

Parameters:
  • input (Variable) – The input variable which is a Tensor or LoDTensor.
  • num_or_sections (int|list) – If num_or_sections is an integer, then the integer indicates the number of equal sized sub-tensors that the tensor will be divided into. If num_or_sections is a list of integers, the length of list indicates the number of sub-tensors and the integers indicate the sizes of sub-tensors’ dim dimension orderly.
  • dim (int) – The dimension along which to split. If \(dim < 0\), the dimension to split along is \(rank(input) + dim\).
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The list of segmented tensor variables.

Return type:

list(Variable)

Examples

# x is a Tensor variable with shape [3, 9, 5]:
x0, x1, x2 = fluid.layers.split(x, num_or_sections=3, dim=1)
x0.shape  # [3, 3, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 3, 5]
x0, x1, x2 = fluid.layers.split(
    x, num_or_sections=[2, 3, 4], dim=1)
x0.shape  # [3, 2, 5]
x1.shape  # [3, 3, 5]
x2.shape  # [3, 4, 5]

ctc_greedy_decoder

paddle.fluid.layers.ctc_greedy_decoder(input, blank, name=None)

This op is used to decode sequences by a greedy policy in the following steps:

  1. Get the indexes of the max values for each row in input, i.e. numpy.argmax(input, axis=1).
  2. For each sequence in result of step1, merge repeated tokens between two blanks and delete all blanks.

A simple example as below:

Given:

input.data = [[0.6, 0.1, 0.3, 0.1],
              [0.3, 0.2, 0.4, 0.1],
              [0.1, 0.5, 0.1, 0.3],
              [0.5, 0.1, 0.3, 0.1],

              [0.5, 0.1, 0.3, 0.1],
              [0.2, 0.2, 0.2, 0.4],
              [0.2, 0.2, 0.1, 0.5],
              [0.5, 0.1, 0.3, 0.1]]

input.lod = [[4, 4]]

Then:

output.data = [[2],
               [1],
               [3]]

output.lod = [[2, 1]]
Parameters:
  • input (Variable) – (LoDTensor<float>), the probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label).
  • blank (int) – the blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-open interval [0, num_classes + 1).
  • name (str) – The name of this layer. It is optional.
Returns:

CTC greedy decode result. If all the sequences in result were empty, the result LoDTensor will be [-1] with LoD [[]] and dims [1, 1].

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')

cost = fluid.layers.ctc_greedy_decoder(input=x, blank=0)

edit_distance

paddle.fluid.layers.edit_distance(input, label, normalized=True, ignored_tokens=None)

EditDistance operator computes the edit distances between a batch of hypothesis strings and their references. Edit distance, also called Levenshtein distance, measures how dissimilar two strings are by counting the minimum number of operations required to transform one string into another. Here the operations include insertion, deletion, and substitution.

For example, given hypothesis string A = “kitten” and reference B = “sitting”, the edit distance is 3, since A can be transformed into B with at least two substitutions and one insertion:

“kitten” -> “sitten” -> “sittin” -> “sitting”

The input is a LoDTensor consisting of all the hypothesis strings with the total number denoted by batch_size, and the separation is specified by the LoD information. And the batch_size reference strings are arranged in order in the same way in the input LoDTensor.

The output contains the batch_size results and each stands for the edit distance for a pair of strings respectively. If Attr(normalized) is true, the edit distance will be divided by the length of reference string.

Parameters:
  • input (Variable) – The indices for hypothesis strings.
  • label (Variable) – The indices for reference strings.
  • normalized (bool, default True) – Indicates whether to normalize the edit distance by the length of the reference string.
  • ignored_tokens (list<int>, default None) – Tokens that should be removed before calculating edit distance.
  • name (str) – The name of this layer. It is optional.
Returns:

sequence-to-sequence edit distance in shape [batch_size, 1].

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[8], dtype='float32')
y = fluid.layers.data(name='y', shape=[7], dtype='float32')
cost = fluid.layers.edit_distance(input=x, label=y)

l2_normalize

paddle.fluid.layers.l2_normalize(x, axis, epsilon=1e-12, name=None)

L2 normalize Layer

The l2 normalize layer normalizes x along dimension axis using an L2 norm. For a 1-D tensor (dim is fixed to 0), this layer computes

\[y = \frac{x}{ \sqrt{\sum {x^2} + epsilon }}\]

For x with more dimensions, this layer independently normalizes each 1-D slice along dimension axis.

Parameters:
  • x (Variable|list) – The input tensor to l2_normalize layer.
  • axis (int) – The axis on which to apply normalization. If axis < 0, the dimension to normalization is rank(X) + axis. -1 is the last dimension.
  • epsilon (float) – The epsilon value used to avoid division by zero; the default value is 1e-12.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The output tensor variable with the same shape as x.

Return type:

Variable

Examples

data = fluid.layers.data(name="data",
                         shape=(3, 17, 13),
                         dtype="float32")
normed = fluid.layers.l2_normalize(x=data, axis=1)

matmul

paddle.fluid.layers.matmul(x, y, transpose_x=False, transpose_y=False, alpha=1.0, name=None)

Applies matrix multiplication to two tensors.

Currently, the input tensors can be of any rank, but when the rank of either input is larger than 3, the ranks of the two inputs must be equal.

The actual behavior depends on the shapes of \(x\), \(y\) and the flag values of transpose_x, transpose_y. Specifically:

  • If a transpose flag is specified, the last two dimensions of the tensor are transposed. If the tensor is rank-1 of shape \([D]\), then for \(x\) it is treated as \([1, D]\) in nontransposed form and as \([D, 1]\) in transposed form, whereas for \(y\) it is the opposite: It is treated as \([D, 1]\) in nontransposed form and as \([1, D]\) in transposed form.
  • After transpose, the two tensors are 2-D or n-D and matrix multiplication performs in the following way.
    • If both are 2-D, they are multiplied like conventional matrices.
    • If either is n-D, it is treated as a stack of matrices residing in the last two dimensions and a batched matrix multiply supporting broadcast applies on the two tensors.

Also note that if the raw tensor \(x\) or \(y\) is rank-1 and nontransposed, the prepended or appended dimension \(1\) will be removed after matrix multiplication.

Parameters:
  • x (Variable) – The input variable which is a Tensor or LoDTensor.
  • y (Variable) – The input variable which is a Tensor or LoDTensor.
  • transpose_x (bool) – Whether to transpose \(x\) before multiplication.
  • transpose_y (bool) – Whether to transpose \(y\) before multiplication.
  • alpha (float) – The scale of output. Default 1.0.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The product Tensor variable.

Return type:

Variable

Examples

# Examples to clarify shapes of the inputs and output
# x: [B, ..., M, K], y: [B, ..., K, N]
fluid.layers.matmul(x, y)  # out: [B, ..., M, N]

# x: [B, M, K], y: [B, K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]

# x: [B, M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [B, M, N]

# x: [M, K], y: [K, N]
fluid.layers.matmul(x, y)  # out: [M, N]

# x: [B, M, K], y: [K]
fluid.layers.matmul(x, y)  # out: [B, M]

# x: [K], y: [K]
fluid.layers.matmul(x, y)  # out: [1]

# x: [M], y: [N]
fluid.layers.matmul(x, y, True, True)  # out: [M, N]

topk

paddle.fluid.layers.topk(input, k, name=None)

This operator is used to find values and indices of the k largest entries for the last dimension.

If the input is a vector (1-D Tensor), finds the k largest entries in the vector and outputs their values and indices as vectors. Thus values[j] is the j-th largest entry in input, and its index is indices[j].

If the input is a Tensor with higher rank, this operator computes the top k entries along the last dimension.

For example:

If:
    input = [[5, 4, 2, 3],
             [9, 7, 10, 25],
             [6, 2, 10, 1]]
    k = 2

Then:
    The first output:
    values = [[5, 4],
              [10, 25],
              [6, 10]]

    The second output:
    indices = [[0, 1],
               [2, 3],
               [0, 2]]
Parameters:
  • input (Variable) – The input variable which can be a vector or Tensor with higher rank.
  • k (int) – The number of top elements to look for along the last dimension of input.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None
Returns:

A tuple with two elements. Each element is a Variable. The first one is k largest elements along each last dimensional slice. The second one is indices of values within the last dimension of input.

Return type:

Tuple[Variable]

Raises:

ValueError – If k < 1 or k is not less than the last dimension of input

Examples

top5_values, top5_indices = layers.topk(input, k=5)

warpctc

paddle.fluid.layers.warpctc(input, label, blank=0, norm_by_times=False)

An operator integrating the open source Warp-CTC library (https://github.com/baidu-research/warp-ctc) to compute Connectionist Temporal Classification (CTC) loss. It can be aliased as softmax with CTC, since a native softmax activation is integrated into the Warp-CTC library to normalize values for each row of the input tensor.

Parameters:
  • input (Variable) – The unscaled probabilities of variable-length sequences, which is a 2-D Tensor with LoD information. Its shape is [Lp, num_classes + 1], where Lp is the sum of all input sequences’ lengths and num_classes is the true number of classes (not including the blank label).
  • label (Variable) – The ground truth of variable-length sequence, which is a 2-D Tensor with LoD information. It is of the shape [Lg, 1], where Lg is the sum of all labels’ lengths.
  • blank (int, default 0) – The blank label index of Connectionist Temporal Classification (CTC) loss, which is in the half-opened interval [0, num_classes + 1).
  • norm_by_times (bool, default false) – Whether to normalize the gradients by the number of time-steps, which is also the sequence’s length. There is no need to normalize the gradients if the warpctc layer is followed by a mean_op.
Returns:

The Connectionist Temporal Classification (CTC) loss, which is a 2-D Tensor of the shape [batch_size, 1].

Return type:

Variable

Examples

label = fluid.layers.data(name='label', shape=[11, 1],
                          dtype='int32', lod_level=1)
predict = fluid.layers.data(name='predict', shape=[11, 8],
                            dtype='float32', lod_level=1)
cost = fluid.layers.warpctc(input=predict, label=label)

sequence_reshape

paddle.fluid.layers.sequence_reshape(input, new_dim)

Sequence Reshape Layer

This layer will rearrange the input sequences. The new dimension is set by the user. The length of each sequence is computed according to the original length, the original dimension and the new dimension. The following example will help to illustrate the function of this layer:

x is a LoDTensor:
    x.lod  = [[0, 2, 6]]
    x.data = [[1,  2], [3,  4],
              [5,  6], [7,  8],
              [9, 10], [11, 12]]
    x.dims = [6, 2]

set new_dim = 4

then out is a LoDTensor:

    out.lod  = [[0, 1, 3]]

    out.data = [[1,  2,  3,  4],
                [5,  6,  7,  8],
                [9, 10, 11, 12]]
    out.dims = [3, 4]

Currently, only 1-level LoDTensors are supported. Please make sure that (original length * original dimension) can be divided by the new dimension with no remainder for each sequence.

Parameters:
  • input (Variable) – A 2-D LoDTensor with shape [N, M], where M is the dimension.
  • new_dim (int) – New dimension that the input LoDTensor is reshaped to.
Returns:

Reshaped LoDTensor according to new dimension.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 20], dtype='float32', lod_level=1)
x_reshaped = fluid.layers.sequence_reshape(input=x, new_dim=10)

transpose

paddle.fluid.layers.transpose(x, perm, name=None)

Permute the dimensions of input according to perm.

The i-th dimension of the returned tensor will correspond to the perm[i]-th dimension of input.

Parameters:
  • x (Variable) – The input Tensor.
  • perm (list) – A permutation of the dimensions of input.
  • name (str) – The name of this layer. It is optional.
Returns:

A transposed Tensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[5, 10, 15], dtype='float32')
x_transposed = fluid.layers.transpose(x, perm=[1, 0, 2])

im2sequence

paddle.fluid.layers.im2sequence(input, filter_size=1, stride=1, padding=0, input_image_size=None, out_stride=1, name=None)

Extracts image patches from the input tensor to form a tensor of shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}, which is similar to im2col. This op uses a filter / kernel to scan images and converts these images to sequences. After expanding, the number of time steps is output_height * output_width for an image, in which output_height and output_width are calculated by the equation below:

\[output\_size = 1 + (2 * padding + img\_size - block\_size + stride - 1) / stride\]

And the dimension of each time step is block_y * block_x * input.channels.

Parameters:
  • input (Variable) – The input should be a tensor in NCHW format.
  • filter_size (int|tuple|None) – The filter size. If filter_size is a tuple, it must contain two integers, (filter_size_H, filter_size_W). Otherwise, the filter will be a square.
  • stride (int|tuple) – The stride size. If stride is a tuple, it must contain two integers, (stride_H, stride_W). Otherwise, the stride_H = stride_W = stride. Default: stride = 1.
  • padding (int|tuple) – The padding size. If padding is a tuple, it can contain two integers like (padding_H, padding_W) which means padding_up = padding_down = padding_H and padding_left = padding_right = padding_W. Or it can use (padding_up, padding_left, padding_down, padding_right) to indicate paddings of four direction. Otherwise, a scalar padding means padding_up = padding_down = padding_left = padding_right = padding Default: padding = 0.
  • input_image_size (Variable) – The real sizes of the input images. Its dimension is [batch_size, 2]. It is optional and used only for batch inference.
  • out_stride (int|tuple) – The scaling of the image through the CNN. It is optional and valid only when input_image_size is not null. If out_stride is a tuple, it must contain two integers, (out_stride_H, out_stride_W). Otherwise, out_stride_H = out_stride_W = out_stride.
  • name (str) – The name of this layer. It is optional.
Returns:

The output is a LoDTensor with shape {input.batch_size * output_height * output_width, filter_size_H * filter_size_W * input.channels}. If we regard output as a matrix, each row of this matrix is a step of a sequence.

Return type:

output

Examples

   Given:

   x = [[[[ 6.  2.  1.]
          [ 8.  3.  5.]
          [ 0.  2.  6.]]

         [[ 2.  4.  4.]
          [ 6.  3.  0.]
          [ 6.  4.  7.]]]

        [[[ 6.  7.  1.]
          [ 5.  7.  9.]
          [ 2.  4.  8.]]

         [[ 1.  2.  1.]
          [ 1.  3.  5.]
          [ 9.  0.  8.]]]]

   x.dims = {2, 2, 3, 3}

   And:

   filter = [2, 2]
   stride = [1, 1]
   padding = [0, 0]

   Then:

   output.data = [[ 6.  2.  8.  3.  2.  4.  6.  3.]
                  [ 2.  1.  3.  5.  4.  4.  3.  0.]
                  [ 8.  3.  0.  2.  6.  3.  6.  4.]
                  [ 3.  5.  2.  6.  3.  0.  4.  7.]
                  [ 6.  7.  5.  7.  1.  2.  1.  3.]
                  [ 7.  1.  7.  9.  2.  1.  3.  5.]
                  [ 5.  7.  2.  4.  1.  3.  9.  0.]
                  [ 7.  9.  4.  8.  3.  5.  0.  8.]]

   output.dims = {8, 8}

   output.lod = [[4, 4]]

   output = fluid.layers.im2sequence(
       input=layer, stride=[1, 1], filter_size=[2, 2])

nce

paddle.fluid.layers.nce(input, label, num_total_classes, sample_weight=None, param_attr=None, bias_attr=None, num_neg_samples=None, name=None)

Compute and return the noise-contrastive estimation training loss. See Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. By default this operator uses a uniform distribution for sampling.

Parameters:
  • input (Variable) – input variable.
  • label (Variable) – label.
  • num_total_classes (int) – Total number of classes in all samples
  • sample_weight (Variable|None) – A Variable of shape [batch_size, 1] storing a weight for each sample. The default weight for each sample is 1.0.
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of nce. If it is set to None or one attribute of ParamAttr, nce will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of nce. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, nce will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • num_neg_samples (int) – The number of negative classes. The default value is 10
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

The output nce loss.

Return type:

Variable

Examples

window_size = 5
words = []
for i in xrange(window_size):
    words.append(layers.data(
        name='word_{0}'.format(i), shape=[1], dtype='int64'))

dict_size = 10000
label_word = int(window_size / 2) + 1

embs = []
for i in xrange(window_size):
    if i == label_word:
        continue

    emb = layers.embedding(input=words[i], size=[dict_size, 32],
                           param_attr='emb.w', is_sparse=True)
    embs.append(emb)

embs = layers.concat(input=embs, axis=1)
loss = layers.nce(input=embs, label=words[label_word],
              num_total_classes=dict_size, param_attr='nce.w',
              bias_attr='nce.b')

hsigmoid

paddle.fluid.layers.hsigmoid(input, label, num_classes, param_attr=None, bias_attr=None, name=None)

The hierarchical sigmoid operator is used to accelerate the training process of language models. This operator organizes the classes into a complete binary tree; each leaf node represents a class (a word) and each internal node acts as a binary classifier. For each word there is a unique path from the root to its leaf node; hsigmoid calculates the cost for each internal node on the path and sums them to get the total cost. hsigmoid can achieve an acceleration from \(O(N)\) to \(O(logN)\), where \(N\) represents the size of the word dict.

Refer to Hierarchical Probabilistic Neural Network Language Model

Parameters:
  • input (Variable) – The input tensor variable with shape \([N \times D]\), where \(N\) is the size of mini-batch, and \(D\) is the feature size.
  • label (Variable) – The tensor variable containing the labels of training data. It is a tensor with shape \([N \times 1]\).
  • num_classes (int) – The number of classes, which must not be less than 2.
  • param_attr (ParamAttr|None) – The parameter attribute for learnable parameters/weights of hsigmoid. If it is set to None or one attribute of ParamAttr, hsigmoid will create ParamAttr as param_attr. If the Initializer of the param_attr is not set, the parameter is initialized with Xavier. Default: None.
  • bias_attr (ParamAttr|bool|None) – The parameter attribute for the bias of hsigmoid. If it is set to False, no bias will be added to the output units. If it is set to None or one attribute of ParamAttr, hsigmoid will create ParamAttr as bias_attr. If the Initializer of the bias_attr is not set, the bias is initialized zero. Default: None.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically. Default: None.
Returns:

(Tensor) The cost of the hierarchical sigmoid operator. The shape is [N, 1].

Return type:

Out

Examples

x = fluid.layers.data(name='x', shape=[2], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='int64')
out = fluid.layers.hsigmoid(input=x, label=y, num_classes=6)

row_conv

paddle.fluid.layers.row_conv(input, future_context_size, param_attr=None, act=None)

Row-convolution operator

The row convolution is called lookahead convolution. This operator was introduced in the following paper for DeepSpeech2: http://www.cs.cmu.edu/~dyogatam/papers/wang+etal.iclrworkshop2016.pdf

The main motivation is that a bidirectional RNN, useful in DeepSpeech-like speech models, learns representations for a sequence by performing a forward and a backward pass through the entire sequence. However, unlike unidirectional RNNs, bidirectional RNNs are challenging to deploy in an online and low-latency setting. The lookahead convolution incorporates information from future subsequences in a computationally efficient manner to improve unidirectional recurrent neural networks. The row convolution operator is different from the 1D sequence convolution, and is computed as follows:

Given an input sequence \(X\) of length \(t\) and input dimension \(D\), and a filter \(W\) of size \(context \times D\), the output sequence is convolved as:

\[Out_{i, :} = \sum_{j=i}^{i + context} X_{j,:} \cdot W_{j-i, :}\]

In the above equation:

  • \(Out_{i}\): The i-th row of the output variable with shape [1, D].
  • \(context\): The future context size.
  • \(X_{j}\): The j-th row of the input variable with shape [1, D].
  • \(W_{j-i}\): The (j-i)-th row of the parameters with shape [1, D].

More details about row_conv please refer to the design document https://github.com/PaddlePaddle/Paddle/issues/2228#issuecomment-303903645 .

Parameters:
  • input (Variable) – the input(X) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LoDTensor is a matrix with shape (T x N), where T is the total time steps in this mini-batch and N is the input data dimension.
  • future_context_size (int) – Future context size. Please note, the shape of convolution kernel is [future_context_size + 1, D].
  • param_attr (ParamAttr) – Attributes of parameters, including name, initializer etc.
  • act (str) – Non-linear activation to be applied to output variable.
Returns:

the output(Out) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LodTensor is a matrix with shape T x N, i.e., the same shape as X.

Examples

>>> import paddle.fluid as fluid
>>> x = fluid.layers.data(name='x', shape=[16],
>>>                        dtype='float32', lod_level=1)
>>> out = fluid.layers.row_conv(input=x, future_context_size=2)

multiplex

paddle.fluid.layers.multiplex(inputs, index)

Referring to the given index variable, this layer selects rows from the input variables to construct a multiplex variable. Assuming that there are \(m\) input variables and \(I_i\) represents the i-th input variable and \(i\) is in [0, \(m\)). All input variables are tensors with same shape [\(d_0\), \(d_1\), ..., \(d_R\)]. Please note that rank of the input tensor should be at least 2. Each input variable will be treated as a 2-D matrix with shape [\(M\), \(N\)] where \(M\) for \(d_0\) and \(N\) for \(d_1\) * \(d_2\) * ... * \(d_R\). Let \(I_i[j]\) be the j-th row of the i-th input variable. The given index variable should be a 2-D tensor with shape [\(M\), 1]. Let ID[i] be the i-th index value of the index variable. Then the output variable will be a tensor with shape [\(d_0\), \(d_1\), ..., \(d_R\)]. If we treat the output tensor as a 2-D matrix with shape [\(M\), \(N\)] and let \(O[i]\) be the i-th row of the matrix, then O[i] is equal to \(I_{ID[i]}[i]\).

  • Ids: the index tensor.
  • X[0 : N - 1]: the candidate tensors for output (N >= 2).
  • For each index i from 0 to batchSize - 1, the output is the i-th row of the (Ids[i])-th tensor.

For i-th row of the output tensor:

$$ y[i] = x_{k}[i] $$

where \(y\) is the output tensor, \(x_{k}\) is the k-th input tensor, and \(k = Ids[i]\).

>>> import paddle.fluid as fluid
>>> x1 = fluid.layers.data(name='x1', shape=[4], dtype='float32')
>>> x2 = fluid.layers.data(name='x2', shape=[4], dtype='float32')
>>> index = fluid.layers.data(name='index', shape=[1], dtype='int32')
>>> out = fluid.layers.multiplex(inputs=[x1, x2], index=index)
Parameters:
  • inputs (list) – A list of variables to gather from. All variables have the same shape and the rank is at least 2.
  • index (Variable) – Tensor<int32>, index variable which is a 2-D tensor with shape [M, 1] where M is the batch size.
Returns:

The output tensor of multiplex operator.

layer_norm

paddle.fluid.layers.layer_norm(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None)

Assume feature vectors exist on dimensions begin_norm_axis ... rank(input) and calculate the moment statistics along these dimensions for each feature vector \(a\) with size \(H\), then normalize each feature vector using the corresponding statistics. After that, apply learnable gain and bias on the normalized tensor to scale and shift if scale and shift are set.

Refer to Layer Normalization

The formula is as follows:

\[ \begin{align}\begin{aligned}\mu & = \frac{1}{H}\sum_{i=1}^{H} a_i\\\sigma & = \sqrt{\frac{1}{H}\sum_{i=1}^{H}(a_i - \mu)^2}\\h & = f(\frac{g}{\sigma}(a - \mu) + b)\end{aligned}\end{align} \]
  • \(a\): the vector representation of the summed inputs to the neurons in that layer.
  • \(H\): the number of hidden units in a layer.
  • \(g\): the trainable scale parameter.
  • \(b\): the trainable bias parameter.
Parameters:
  • input (Variable) – The input tensor variable.
  • scale (bool) – Whether to learn the adaptive gain \(g\) after normalization. Default True.
  • shift (bool) – Whether to learn the adaptive bias \(b\) after normalization. Default True.
  • begin_norm_axis (int) – The normalization will be performed along dimensions from begin_norm_axis to rank(input). Default 1.
  • epsilon (float) – The small value added to the variance to prevent division by zero. Default 1e-05.
  • param_attr (ParamAttr|None) – The parameter attribute for the learnable gain \(g\). If scale is False, param_attr is omitted. If scale is True and param_attr is None, a default ParamAttr would be added as scale. The param_attr is initialized as 1 if it is added. Default None.
  • bias_attr (ParamAttr|None) – The parameter attribute for the learnable bias \(b\). If shift is False, bias_attr is omitted. If shift is True and bias_attr is None, a default ParamAttr would be added as bias. The bias_attr is initialized as 0 if it is added. Default None.
  • act (str) – Activation to be applied to the output of layer normalization. Default None.
  • name (str) – The name of this layer. It is optional. Default None, and a unique name would be generated automatically.
Returns:

Result after normalization

Examples

>>> data = fluid.layers.data(name='data', shape=[3, 32, 32],
>>>                          dtype='float32')
>>> x = fluid.layers.layer_norm(input=data, begin_norm_axis=1)

softmax_with_cross_entropy

paddle.fluid.layers.softmax_with_cross_entropy(logits, label, soft_label=False, ignore_index=-100)

Softmax With Cross Entropy Operator.

Cross entropy loss with softmax is used as the output layer extensively. This operator computes the softmax normalized values for each row of the input tensor, after which cross-entropy loss is computed. This provides a more numerically stable gradient.

Because this operator performs a softmax on logits internally, it expects unscaled logits. This operator should not be used with the output of softmax operator since that would produce incorrect results.

When the attribute soft_label is set false, this operator expects mutually exclusive hard labels: each sample in a batch is in exactly one class with a probability of 1.0. Each sample in the batch will have a single label.

The equation is as follows:

  1. Hard label (one-hot label, so every sample has exactly one class)
\[loss_j = -\text{logit}_{label_j} + \log\left(\sum_{i=0}^{K}\exp(\text{logit}_i)\right), j = 1,..., K\]
  2. Soft label (each sample can have a distribution over all classes)
\[loss_j = -\sum_{i=0}^{K}\text{label}_i \left(\text{logit}_i - \log\left(\sum_{i=0}^{K} \exp(\text{logit}_i)\right)\right), j = 1,...,K\]
Parameters:
  • logits (Variable) – The unscaled log probabilities, which is a 2-D tensor with shape [N x K]. N is the batch_size, and K is the class number.
  • label (Variable) – The ground truth which is a 2-D tensor. If soft_label is set to false, Label is a Tensor<int64> with shape [N x 1]. If soft_label is set to true, Label is a Tensor<float/double> with shape [N x K].
  • soft_label (bool) – A flag to indicate whether to interpretate the given labels as soft labels. By default, soft_label is set to False.
  • ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Only valid if soft_label is set to False. Default: -100
Returns:

The cross entropy loss is a 2-D tensor with shape [N x 1].

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.softmax_with_cross_entropy(
    logits=fc, label=label)

smooth_l1

paddle.fluid.layers.smooth_l1(x, y, inside_weight=None, outside_weight=None, sigma=None)

This layer computes the smooth L1 loss for Variable x and y. It takes the first dimension of x and y as batch size. For each instance, it computes the smooth L1 loss element by element first and then sums all the losses. So the shape of the output Variable is [batch_size, 1].

Parameters:
  • x (Variable) – A tensor with rank at least 2. The input value of smooth L1 loss op with shape [batch_size, dim1, ..., dimN].
  • y (Variable) – A tensor with rank at least 2. The target value of smooth L1 loss op with same shape as x.
  • inside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have same shape with x. If provided, the result of (x - y) will be multiplied by this tensor element by element.
  • outside_weight (Variable|None) – A tensor with rank at least 2. This input is optional and should have same shape with x. If provided, the out smooth L1 loss will be multiplied by this tensor element by element.
  • sigma (float|None) – Hyper parameter of smooth L1 loss layer. A float scalar with default value 1.0.
Returns:

The output smooth L1 loss with shape [batch_size, 1].

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(
    name='label', shape=[100], dtype='float32')
fc = fluid.layers.fc(input=data, size=100)
out = fluid.layers.smooth_l1(x=fc, y=label)

one_hot

paddle.fluid.layers.one_hot(input, depth)

This layer creates the one-hot representations for input indices.

Parameters:
  • input (Variable) – Input indices, last dimension must be 1.
  • depth (scalar) – An integer defining the depth of the one-hot dimension.
Returns:

The one-hot representations of input.

Return type:

Variable

Examples

label = layers.data(name="label", shape=[1], dtype="float32")
one_hot_label = layers.one_hot(input=label, depth=10)

autoincreased_step_counter

paddle.fluid.layers.autoincreased_step_counter(counter_name=None, begin=1, step=1)

Create an auto-increasing variable which is automatically increased by step every mini-batch, and return the run counter of the main program, which starts from begin (1 by default).

Parameters:
  • counter_name (str) – The counter name, default is ‘@STEP_COUNTER@’.
  • begin (int) – The first value of this counter.
  • step (int) – The increment step between each execution.
Returns:

The global run counter.

Return type:

Variable

Examples

global_step = fluid.layers.autoincreased_step_counter(
    counter_name='@LR_DECAY_COUNTER@', begin=1, step=1)

reshape

paddle.fluid.layers.reshape(x, shape, actual_shape=None, act=None, inplace=True, name=None)

Gives a new shape to the input Tensor without changing its data.

The target shape can be given by shape or actual_shape. shape is a list of integers while actual_shape is a tensor variable. actual_shape has a higher priority than shape if it is provided, while shape still should be set correctly to guarantee shape inference in compile-time.

Some tricks exist when specifying the target shape.

1. -1 means the value of this dimension is inferred from the total element number of x and the remaining dimensions. Thus one and only one dimension can be set to -1.

2. 0 means the actual dimension value is going to be copied from the corresponding dimension of x. The indices of 0s in shape cannot exceed the rank of x.

Here are some examples to explain it.

1. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [6, 8], the reshape operator will transform x into a 2-D tensor with shape [6, 8], leaving x’s data unchanged.

2. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape specified is [2, 3, -1, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 3, 4, 2], leaving x’s data unchanged. In this case, one dimension of the target shape is set to -1, and the value of this dimension is inferred from the total element number of x and the remaining dimensions.

3. Given a 3-D tensor x with a shape [2, 4, 6], and the target shape is [-1, 0, 3, 2], the reshape operator will transform x into a 4-D tensor with shape [2, 4, 3, 2], leaving x’s data unchanged. In this case, besides -1, 0 means the actual dimension value is going to be copied from the corresponding dimension of x.

Parameters:
  • x (variable) – The input tensor.
  • shape (list) – The new shape. At most one dimension of the new shape can be -1.
  • actual_shape (variable) – An optional input. If provided, reshape according to this given shape rather than the shape specified by the shape argument. That is to say, actual_shape has a higher priority than shape.
  • act (str) – The non-linear activation to be applied to output variable.
  • inplace (bool) – If this flag is set true, the output shares data with input without copying, otherwise a new output tensor is created whose data is copied from input x.
  • name (str) – The name of this layer. It is optional.
Returns:

The output tensor.

Return type:

Variable

Raises:

TypeError – if actual_shape is neither Variable nor None.

Examples

data = fluid.layers.data(
    name='data', shape=[2, 4, 6], dtype='float32')
reshaped = fluid.layers.reshape(
    x=data, shape=[-1, 0, 3, 2], act='tanh', inplace=True)

squeeze

paddle.fluid.layers.squeeze(input, axes, name=None)

Remove single-dimensional entries from the shape of a tensor. Takes a parameter axes with a list of axes to squeeze. If axes is not provided, all the single dimensions will be removed from the shape. If an axis is selected with shape entry not equal to one, an error is raised.

Examples:

Case 1:
    Given
        X.shape = (1, 3, 1, 5)
    and
        axes = [0]
    we get:
        Out.shape = (3, 1, 5)

Case 2:
    Given
        X.shape = (1, 3, 1, 5)
    and
        axes = []
    we get:
        Out.shape = (3, 5)
Parameters:
  • input (Variable) – The input variable to be squeezed.
  • axes (list) – List of integers, indicating the dimensions to be squeezed.
  • name (str|None) – Name for this layer.
Returns:

Output squeezed variable.

Return type:

Variable

Examples

x = layers.data(name='x', shape=[5, 1, 10])
y = layers.squeeze(input=x, axes=[1])

unsqueeze

paddle.fluid.layers.unsqueeze(input, axes, name=None)

Insert single-dimensional entries to the shape of a tensor. Takes one required argument axes, a list of dimensions that will be inserted. Dimension indices in axes are as seen in the output tensor.

For example:
Given a tensor with shape [3, 4, 5], the unsqueezed tensor with axes=[0, 4] has shape [1, 3, 4, 5, 1].
Parameters:
  • input (Variable) – The input variable to be unsqueezed.
  • axes (list) – List of integers, indicating the dimensions to be inserted.
  • name (str|None) – Name for this layer.
Returns:

Output unsqueezed variable.

Return type:

Variable

Examples

x = layers.data(name='x', shape=[5, 10])
y = layers.unsqueeze(input=x, axes=[1])

lod_reset

paddle.fluid.layers.lod_reset(x, y=None, target_lod=None)

Set the LoD of x to a new one specified by y or target_lod. When y is provided, y.lod is considered as the target LoD first; otherwise y.data is considered as the target LoD. If y is not provided, the target LoD should be specified by target_lod. If the target LoD is specified by y.data or target_lod, only one-level LoD is supported.

* Example 1:

    Given a 1-level LoDTensor x:
        x.lod =  [[ 2,           3,                   1 ]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]

    target_lod: [4, 2]

    then we get a 1-level LoDTensor:
        out.lod =  [[4,                          2]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]

* Example 2:

    Given a 1-level LoDTensor x:
        x.lod =  [[2,            3,                   1]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]

    y is a Tensor:
        y.data = [[2, 4]]
        y.dims = [1, 2]

    then we get a 1-level LoDTensor:
        out.lod =  [[2,            4]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]

* Example 3:

    Given a 1-level LoDTensor x:
        x.lod =  [[2,            3,                   1]]
        x.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        x.dims = [6, 1]

    y is a 2-level LoDTensor:
        y.lod =  [[2, 2], [2, 2, 1, 1]]
        y.data = [[1.1], [2.1], [3.1], [4.1], [5.1], [6.1]]
        y.dims = [6, 1]

    then we get a 2-level LoDTensor:
        out.lod =  [[2, 2], [2, 2, 1, 1]]
        out.data = [[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]]
        out.dims = [6, 1]
Parameters:
  • x (Variable) – Input variable which could be a Tensor or LodTensor.
  • y (Variable|None) – If provided, output’s LoD would be derived from y.
  • target_lod (list|tuple|None) – One level LoD which should be considered as target LoD when y not provided.
Returns:

Output variable with LoD specified by this layer.

Return type:

Variable

Raises:

ValueError – If y and target_lod are both None.

Examples

x = layers.data(name='x', shape=[10])
y = layers.data(name='y', shape=[10, 20], lod_level=2)
out = layers.lod_reset(x=x, y=y)

lrn

paddle.fluid.layers.lrn(input, n=5, k=1.0, alpha=0.0001, beta=0.75, name=None)

Local Response Normalization Layer. This layer performs a type of “lateral inhibition” by normalizing over local input regions.

The formula is as follows:

\[Output(i, x, y) = Input(i, x, y) / \left(k + \alpha \sum\limits^{\min(C, i + n/2)}_{j = \max(0, i - n/2)}(Input(j, x, y))^2\right)^{\beta}\]

In the above equation:

  • \(n\): The number of channels to sum over.
  • \(k\): The offset (avoid being divided by 0).
  • \(\alpha\): The scaling parameter.
  • \(\beta\): The exponent parameter.

Refer to ImageNet Classification with Deep Convolutional Neural Networks

Parameters:
  • input (Variable) – The input tensor of this layer, and the dimension of input tensor must be 4.
  • n (int, default 5) – The number of channels to sum over.
  • k (float, default 1.0) – An offset (usually positive to avoid dividing by 0).
  • alpha (float, default 1e-4) – The scaling parameter.
  • beta (float, default 0.75) – The exponent.
  • name (str, default None) – A name for this operation.
Raises:

ValueError – If rank of the input tensor is not 4.

Returns:

A tensor variable storing the transformation result.

Examples

data = fluid.layers.data(
    name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)

pad

paddle.fluid.layers.pad(x, paddings, pad_value=0.0, name=None)

Pads a tensor with a constant value given by pad_value, and the padded width is specified by paddings.

Specifically, the number of values padded before the contents of x in dimension i is indicated by paddings[2*i], and the number of values padded after the contents of x in dimension i is indicated by paddings[2*i+1].

See below for an example.

Given:
    x = [[1, 2], [3, 4]]

    paddings = [0, 1, 1, 2]

    pad_value = 0

Return:

    out = [[0, 1, 2, 0, 0]
           [0, 3, 4, 0, 0]
           [0, 0, 0, 0, 0]]
Parameters:
  • x (Variable) – The input tensor variable.
  • paddings (list) – A list of integers. Its elements specify the padded width before and after for each dimension in turn. The length of paddings must be \(rank(x) \times 2\).
  • pad_value (float) – The constant value used to pad.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The padded tensor variable.

Return type:

Variable

Examples

# x is a rank 2 tensor variable.
out = fluid.layers.pad(
    x=x, paddings=[0, 1, 1, 2], pad_value=0.)

pad_constant_like

paddle.fluid.layers.pad_constant_like(x, y, pad_value=0.0, name=None)

Pad input (Y) with pad_value. The number of values padded to the edges of each axis is specified by the difference between the shapes of X and Y: ((0, shape_x_0 - shape_y_0), ..., (0, shape_x_n - shape_y_n)) gives the unique pad widths for each axis. The input should be a k-D tensor (0 < k < 7).

See below for an example.

Given:
    X = [[[[ 0,  1,  2],
           [ 3,  4,  5]],
          [[ 6,  7,  8],
           [ 9, 10, 11]],
          [[12, 13, 14],
           [15, 16, 17]]],
         [[[18, 19, 20],
           [21, 22, 23]],
          [[24, 25, 26],
           [27, 28, 29]],
          [[30, 31, 32],
           [33, 34, 35]]]]
    X.shape = (2, 3, 2, 3)

    Y = [[[[35, 36, 37]],
          [[38, 39, 40]],
          [[41, 42, 43]]]]
    Y.shape = (1, 3, 1, 3)
And
    pad_value = -1.

Return:

    Out = [[[[35, 36, 37],
             [-1, -1, -1]],
            [[38, 39, 40],
             [-1, -1, -1]],
            [[41, 42, 43],
             [-1, -1, -1]]],
           [[[-1, -1, -1],
             [-1, -1, -1]],
            [[-1, -1, -1],
             [-1, -1, -1]],
            [[-1, -1, -1],
             [-1, -1, -1]]]]

    Out.shape = (2, 3, 2, 3)

Parameters:
  • x (Variable) – The input tensor variable.
  • y (Variable) – The input tensor variable.
  • pad_value (float) – The constant value used to pad.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The padded tensor variable.

Return type:

Variable

Examples

# x is a rank 4 tensor variable, x.shape = (2, 3, 2, 3)
# y is a rank 4 tensor variable, y.shape = (1, 3, 1, 3)
out = fluid.layers.pad_constant_like(x=x, y=y, pad_value=0.)
# out is a rank 4 tensor variable, and out.shape = [2, 3, 2, 3]

label_smooth

paddle.fluid.layers.label_smooth(label, prior_dist=None, epsilon=0.1, dtype='float32', name=None)

Label smoothing is a mechanism to regularize the classifier layer and is called label-smoothing regularization (LSR).

Label smoothing is proposed to encourage the model to be less confident, since optimizing the log-likelihood of the correct label directly may cause overfitting and reduce the ability of the model to adapt. Label smoothing replaces the ground-truth label \(y\) with the weighted sum of itself and some fixed distribution \(\mu\). For class \(k\), i.e.

\[\tilde{y_k} = (1 - \epsilon) * y_k + \epsilon * \mu_k,\]

where \(1 - \epsilon\) and \(\epsilon\) are the weights respectively, and \(\tilde{y}_k\) is the smoothed label. Usually uniform distribution is used for \(\mu\).

See more details about label smoothing in https://arxiv.org/abs/1512.00567.

Parameters:
  • label (Variable) – The input variable containing the label data. The label data should use one-hot representation.
  • prior_dist (Variable) – The prior distribution to be used to smooth labels. If not provided, a uniform distribution is used. The shape of prior_dist should be \((1, class\_num)\).
  • epsilon (float) – The weight used to mix up the original ground-truth distribution and the fixed distribution.
  • dtype (np.dtype|core.VarDesc.VarType|str) – The type of data: float32, float64, int, etc.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable containing the smoothed labels.

Return type:

Variable

Examples

label = layers.data(name="label", shape=[1], dtype="float32")
one_hot_label = layers.one_hot(input=label, depth=10)
smooth_label = layers.label_smooth(
    label=one_hot_label, epsilon=0.1, dtype="float32")

roi_pool

paddle.fluid.layers.roi_pool(input, rois, pooled_height=1, pooled_width=1, spatial_scale=1.0)

ROIPool Operator

Region of interest pooling (also known as RoI pooling) performs max pooling on inputs of nonuniform sizes to obtain fixed-size feature maps (e.g. 7*7).

The operator has three steps:

  1. Dividing each region proposal into equal-sized sections with the pooled_width and pooled_height
  2. Finding the largest value in each section
  3. Copying these max values to the output buffer

ROI Pooling for Faster-RCNN. The link below is a further introduction: https://stackoverflow.com/questions/43430056/what-is-roi-layer-in-fast-rcnn

Parameters:
  • input (Variable) – (Tensor), the input of ROIPoolOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature
  • rois (Variable) – ROIs (Regions of Interest) to pool over.
  • pooled_height (integer) – (int, default 1), The pooled output height Default: 1
  • pooled_width (integer) – (int, default 1), The pooled output width Default: 1
  • spatial_scale (float) – (float, default 1.0), Multiplicative spatial scale factor to translate ROI coords from their input scale to the scale used when pooling Default: 1.0
Returns:

(Tensor), The output of ROIPoolOp is a 4-D tensor with shape (num_rois, channels, pooled_h, pooled_w).

Return type:

Variable

Examples

pool_out = fluid.layers.roi_pool(input=x, rois=rois, pooled_height=7, pooled_width=7, spatial_scale=1.0)

dice_loss

paddle.fluid.layers.dice_loss(input, label, epsilon=1e-05)

Dice loss for comparing the similarity of two batches of data, usually used for binary image segmentation, i.e. labels are binary. The dice loss can be defined as the equation below:

\[\begin{split}dice\_loss &= 1 - \frac{2 * intersection\_area}{total\_area} \\ &= \frac{(total\_area - intersection\_area) - intersection\_area}{total\_area} \\ &= \frac{(union\_area - intersection\_area)}{total\_area}\end{split}\]
Parameters:
  • input (Variable) – The predictions with rank>=2. The first dimension is batch size, and the last dimension is class number.
  • label (Variable) – The ground truth with the same rank as input. The first dimension is batch size, and the last dimension is 1.
  • epsilon (float) – The epsilon will be added to the numerator and denominator. If both input and label are empty, it makes sure dice is 1. Default: 0.00001
Returns:

The dice loss with shape [1].

Return type:

dice_loss (Variable)

Examples

predictions = fluid.layers.softmax(x)
loss = fluid.layers.dice_loss(input=predictions, label=label)

image_resize

paddle.fluid.layers.image_resize(input, out_shape=None, scale=None, name=None, resample='BILINEAR')

Resize a Batch of Images

The input must be a tensor of the shape (num_batches, channels, in_h, in_w), and the resizing only applies on the last two dimensions (height and width).

Supporting resample methods:

‘BILINEAR’ : Bilinear interpolation
Parameters:
  • input (Variable) – The input tensor of the image resize layer. This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w).
  • out_shape (list|tuple|Variable|None) – Output shape of image resize layer, the shape is (out_h, out_w). Default: None
  • scale (float|None) – The multiplier for the input height or width. At least one of out_shape or scale must be set. And out_shape has a higher priority than scale. Default: None
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
  • resample (str) – The resample method. It can only be ‘BILINEAR’ currently. Default: ‘BILINEAR’
Returns:

The output is a 4-D tensor of the shape (num_batches, channels, out_h, out_w).

Return type:

Variable

Examples

out = fluid.layers.image_resize(input, out_shape=[12, 12])

image_resize_short

paddle.fluid.layers.image_resize_short(input, out_short_len, resample='BILINEAR')

Resize a batch of images. The short edge of input images will be resized to the given ‘out_short_len’. The long edge of input images will be resized proportionately to make images’ length-width ratio constant.

Parameters:
  • input (Variable) – The input tensor of the image resize layer. This is a 4-D tensor of the shape (num_batches, channels, in_h, in_w).
  • out_short_len (int) – The length of output images’ short edge.
  • resample (str) – resample method, default: BILINEAR.
Returns:

The output is a 4-D tensor of the shape (num_batches, channels, out_h, out_w).

Return type:

Variable
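
Examples

A minimal usage sketch; the layer name "image" and the input shape below are illustrative assumptions:

import paddle.fluid as fluid

# Assumed 4-D NCHW input; the short edge (256) will be resized to 224,
# and the long edge scaled proportionally.
data = fluid.layers.data(name="image", shape=[3, 256, 256], dtype="float32")
resized = fluid.layers.image_resize_short(input=data, out_short_len=224)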

resize_bilinear

paddle.fluid.layers.resize_bilinear(input, out_shape=None, scale=None, name=None)

Bilinear interpolation is an extension of linear interpolation for interpolating functions of two variables (e.g. H-direction and W-direction in this op) on a rectilinear 2D grid.

The key idea is to perform linear interpolation first in one direction, and then again in the other direction.

For details, please refer to Wikipedia: https://en.wikipedia.org/wiki/Bilinear_interpolation

Parameters:
  • input (Variable) – The input tensor of bilinear interpolation. This is a 4-D tensor with shape (N x C x h x w).
  • out_shape (Variable) – This is a 1-D tensor with two numbers. The first number is the height and the second number is the width.
  • scale (float|None) – The multiplier for the input height or width. At least one of out_shape or scale must be set. And out_shape has a higher priority than scale. Default: None.
  • name (str|None) – The output variable name.
Returns:

The dimension of output is (N x C x out_h x out_w).
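
Examples

A minimal usage sketch, assuming out_shape may also be given as a Python list as in image_resize above; the layer name and shapes are illustrative:

import paddle.fluid as fluid

# Assumed 4-D NCHW input; the spatial dimensions are upsampled from
# 32 x 32 to 64 x 64 by bilinear interpolation.
data = fluid.layers.data(name="image", shape=[3, 32, 32], dtype="float32")
out = fluid.layers.resize_bilinear(input=data, out_shape=[64, 64])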

gather

paddle.fluid.layers.gather(input, index)

Gather Layer

Output is obtained by gathering entries of the outer-most dimension of X indexed by index and concatenating them together.

\[Out = X[Index]\]
Given:

X = [[1, 2],
     [3, 4],
     [5, 6]]

Index = [1, 2]

Then:

Out = [[3, 4],
       [5, 6]]
Parameters:
  • input (Variable) – The source input with rank>=1.
  • index (Variable) – The index input with rank=1.
Returns:

The output is a tensor with the same rank as input.

Return type:

output (Variable)

Examples

output = fluid.layers.gather(x, index)

scatter

paddle.fluid.layers.scatter(input, index, updates, name=None)

Scatter Layer

Output is obtained by updating the input on selected indices on the first axis.

\[\begin{split}Out &= X \\ Out[Ids] &= Updates\end{split}\]
Parameters:
  • input (Variable) – The source input with rank>=1.
  • index (Variable) – The index input with rank=1. Its dtype should be int32 or int64 as it is used as indexes.
  • updates (Variable) – The updated value of scatter op.
  • name (str|None) – The output variable name. Default None.
Returns:

The output is a tensor with the same shape as input.

Return type:

output (Variable)

Examples

output = fluid.layers.scatter(input, index, updates)

sequence_scatter

paddle.fluid.layers.sequence_scatter(input, index, updates, name=None)

Sequence Scatter Layer

This operator scatters the Updates tensor to the input X. It uses the LoD information of Ids to select the rows to update, and uses the values in Ids as the columns to update in each row of X.

Here is an example.

Given the following input:

    input.data = [[1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
                  [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]]
    input.dims = [3, 6]

    index.data = [[0], [1], [2], [5], [4], [3], [2], [1], [3], [2], [5], [4]]
    index.lod =  [[0, 3, 8, 12]]

    updates.data = [[0.3], [0.3], [0.4], [0.1], [0.2], [0.3], [0.4], [0.0], [0.2], [0.3], [0.1], [0.4]]
    updates.lod =  [[0, 3, 8, 12]]

Then we have the output:

    out.data = [[1.3, 1.3, 1.4, 1.0, 1.0, 1.0],
                [1.0, 1.0, 1.4, 1.3, 1.2, 1.1],
                [1.0, 1.0, 1.3, 1.2, 1.4, 1.1]]
    out.dims = X.dims = [3, 6]

Parameters:
  • input (Variable) – The source input with rank>=1.
  • index (Variable) – A LoD Tensor. The index input of sequence scatter op where input will be updated. The index input with rank=1. Its dtype should be int32 or int64 as it is used as indexes.
  • updates (Variable) – A LoD Tensor. The values to scatter to the input tensor X, must be a LoDTensor with the same LoD information as index.
  • name (str|None) – The output variable name. Default None.
Returns:

The output is a tensor with the same shape as input.

Return type:

output (Variable)

Examples

output = fluid.layers.sequence_scatter(input, index, updates)

random_crop

paddle.fluid.layers.random_crop(x, shape, seed=None)

This operator takes a batch of instances and does random cropping on each instance. This means that the cropping position differs on each instance, and is determined by a uniform random generator. All cropped instances have the same shape, which is determined by the operator’s attribute ‘shape’.

Parameters:
  • x (Variable) – A batch of instances to random crop
  • shape (INTS) – The shape of a cropped instance
  • seed (int|Variable|None) – The random seed. By default, the seed is obtained from random.randint(-65536, 65535).
Returns:

The cropped instance batch

Examples

>>> img = fluid.layers.data("img", [3, 256, 256])
>>> cropped_img = fluid.layers.random_crop(img, shape=[3, 224, 224])

mean_iou

paddle.fluid.layers.mean_iou(input, label, num_classes)

Mean Intersection-Over-Union is a common evaluation metric for semantic image segmentation, which first computes the IOU for each semantic class and then computes the average over classes. IOU is defined as follows:

\[IOU = \frac{true\_positive}{true\_positive + false\_positive + false\_negative}.\]

The predictions are accumulated in a confusion matrix and mean-IOU is then calculated from it.

Parameters:
  • input (Variable) – A Tensor of prediction results for semantic labels with type int32 or int64.
  • label (Variable) – A Tensor of ground truth labels with type int32 or int64. Its shape should be the same as input.
  • num_classes (int) – The possible number of labels.
Returns:

mean_iou (Variable): A Tensor representing the mean intersection-over-union with shape [1].
out_wrong (Variable): A Tensor with shape [num_classes]; the wrong counts for each class.
out_correct (Variable): A Tensor with shape [num_classes]; the correct counts for each class.

Return type:

mean_iou (Variable)

Examples

iou, wrongs, corrects = fluid.layers.mean_iou(predict, label, num_classes)

relu

paddle.fluid.layers.relu(x, name=None)

Relu takes one input data (Tensor) and produces one output data (Tensor) where the rectified linear function, y = max(0, x), is applied to the tensor elementwise.

\[Out = \max(0, x)\]
Parameters:
  • x (Variable) – The input tensor.
  • name (str|None, default None) – A name for this layer If set None, the layer will be named automatically.
Returns:

The output tensor with the same shape as input.

Return type:

Variable

Examples

output = fluid.layers.relu(x)

log

paddle.fluid.layers.log(x, name=None)

Calculates the natural log of the given input tensor, element-wise.

\[Out = \ln(x)\]
Parameters:
  • x (Variable) – Input tensor.
  • name (str|None, default None) – A name for this layer If set None, the layer will be named automatically.
Returns:

The natural log of the input tensor computed element-wise.

Return type:

Variable

Examples

output = fluid.layers.log(x)

crop

paddle.fluid.layers.crop(x, shape=None, offsets=None, name=None)

Crop input into output, as specified by offsets and shape.

* Case 1:
    Given
        X = [[0, 1, 2, 0, 0]
             [0, 3, 4, 0, 0]
             [0, 0, 0, 0, 0]],
    and
        shape = [2, 2],
        offsets = [0, 1],
    output is:
        Out = [[1, 2],
               [3, 4]].
* Case 2:
    Given
        X = [[0, 1, 2, 5, 0]
             [0, 3, 4, 6, 0]
             [0, 0, 0, 0, 0]],
    and shape is tensor
        shape = [[0, 0, 0]
                 [0, 0, 0]]
    and
        offsets = [0, 1],

    output is:
        Out = [[1, 2, 5],
               [3, 4, 6]].
Parameters:
  • x (Variable) – The input tensor variable.
  • shape (Variable|list/tuple of integer) – The output shape is specified by shape, which can be a Variable or a list/tuple of integers. If it is a tensor Variable, its rank must be the same as x; this is suitable for the case where the output shape may change each iteration. If it is a list/tuple of integers, its length must be the same as the rank of x.
  • offsets (Variable|list/tuple of integer|None) – Specifies the cropping offsets at each dimension. It can be a Variable or a list/tuple of integers. If it is a tensor Variable, its rank must be the same as x; this is suitable for the case where the offsets may change each iteration. If it is a list/tuple of integers, its length must be the same as the rank of x. If None, the offsets are 0 at each dimension.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The cropped tensor variable.

Return type:

Variable

Raises:

ValueError – If shape is not a list, tuple or Variable.

Examples

x = fluid.layers.data(name="x", shape=[3, 5], dtype="float32")
y = fluid.layers.data(name="y", shape=[2, 3], dtype="float32")
crop = fluid.layers.crop(x, shape=y)

# or
z = fluid.layers.data(name="z", shape=[3, 5], dtype="float32")
crop = fluid.layers.crop(z, shape=[2, 3])

rank_loss

paddle.fluid.layers.rank_loss(label, left, right, name=None)

Rank loss layer for RankNet

RankNet(http://icml.cc/2015/wp-content/uploads/2015/06/icml_ranking.pdf) is a pairwise ranking model with a training sample consisting of a pair of documents, A and B. Label P indicates whether A is ranked higher than B or not:

P = {0, 1} or {0, 0.5, 1}, where 0.5 means that there is no information about the rank of the input pair.

Rank loss layer takes three inputs: left (o_i), right (o_j) and label (P_{i,j}). The inputs respectively represent RankNet’s output scores for documents A and B and the value of label P. The following equation computes rank loss C_{i,j} from the inputs:

$$
C_{i,j} = -\tilde{P}_{i,j} \cdot o_{i,j} + \log(1 + e^{o_{i,j}})
$$
$$
o_{i,j} = o_i - o_j
$$
$$
\tilde{P}_{i,j} \in \left\{0, 0.5, 1\right\} \text{ or } \left\{0, 1\right\}
$$

Rank loss layer takes batch inputs with size batch_size (batch_size >= 1).

Parameters:
  • label (Variable) – Indicates whether A is ranked higher than B or not.
  • left (Variable) – RankNet's output score for doc A.
  • right (Variable) – RankNet's output score for doc B.
  • name (str|None) – A name for this layer (optional). If set None, the layer will be named automatically.
Returns:
list: The value of rank loss.
Raises:
ValueError: If any of label, left, or right is not a Variable.

Examples:

label = fluid.layers.data(name="label", shape=[4, 1], dtype="float32")
left = fluid.layers.data(name="left", shape=[4, 1], dtype="float32")
right = fluid.layers.data(name="right", shape=[4, 1], dtype="float32")
out = fluid.layers.rank_loss(label, left, right)

elu

paddle.fluid.layers.elu(x, alpha=1.0, name=None)

ELU Activation Operator.

Applies the following element-wise computation on the input according to https://arxiv.org/abs/1511.07289.

\(out = \max(0, x) + \min(0, \alpha * (e^x - 1))\)

Parameters:
  • x (Variable) – Input of ELU operator
  • alpha (FLOAT|1.0) – The alpha value of ELU
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output of ELU operator

Return type:

output(Variable)
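
Examples

A minimal usage sketch (the input layer below is illustrative, not part of the original doc):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.elu(x, alpha=0.2)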

relu6

paddle.fluid.layers.relu6(x, threshold=6.0, name=None)

Relu6 Activation Operator.

\(out = \min(\max(0, x), 6)\)

Parameters:
  • x (Variable) – Input of Relu6 operator
  • threshold (FLOAT|6.0) – The threshold value of Relu6
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output of Relu6 operator

Return type:

output(Variable)
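
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.relu6(x, threshold=6.0)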

pow

paddle.fluid.layers.pow(x, factor=1.0, name=None)

Pow Activation Operator.

\(out = x^{factor}\)

Parameters:
  • x (Variable) – Input of Pow operator
  • factor (FLOAT|1.0) – The exponential factor of Pow
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output of Pow operator

Return type:

output(Variable)
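
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.pow(x, factor=2.0)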

stanh

paddle.fluid.layers.stanh(x, scale_a=0.6666666666666666, scale_b=1.7159, name=None)

STanh Activation Operator.

$$out = b * \frac{e^{a * x} - e^{-a * x}}{e^{a * x} + e^{-a * x}}$$

Parameters:
  • x (Variable) – Input of STanh operator
  • scale_a (FLOAT|2.0 / 3.0) – The scale parameter of a for the input
  • scale_b (FLOAT|1.7159) – The scale parameter of b for the input
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output of STanh operator

Return type:

output(Variable)
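
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.stanh(x, scale_a=0.67, scale_b=1.7159)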

hard_sigmoid

paddle.fluid.layers.hard_sigmoid(x, slope=0.2, offset=0.5, name=None)

HardSigmoid Activation Operator.

Segment-wise linear approximation of sigmoid(https://arxiv.org/abs/1603.00391), which is much faster than sigmoid.

\(out = \max(0, \min(1, slope * x + offset))\)

The slope should be positive. The offset can be either positive or negative. The default slope and offset are set according to the above reference. It is recommended to use the defaults for this activation.

Parameters:
  • x (Variable) – Input of HardSigmoid operator
  • slope (FLOAT|0.2) – Slope for linear approximation of sigmoid
  • offset (FLOAT|0.5) – Offset for linear approximation of sigmoid
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output of HardSigmoid operator

Return type:

output(Variable)
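
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.hard_sigmoid(x, slope=0.2, offset=0.5)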

swish

paddle.fluid.layers.swish(x, beta=1.0, name=None)

Swish Activation Operator.

$$out = \frac{x}{1 + e^{- beta x}}$$

Parameters:
  • x (Variable) – Input of Swish operator
  • beta (FLOAT|1.0) – Constant beta of swish operator
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output of Swish operator

Return type:

output(Variable)
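
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.swish(x, beta=1.0)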

prelu

paddle.fluid.layers.prelu(x, mode, param_attr=None, name=None)

Equation:

y = max(0, x) + alpha * min(0, x)
Parameters:
  • x (Variable) – The input tensor.
  • param_attr (ParamAttr|None) – The parameter attribute for the learnable weight (alpha).
  • mode (string) – The mode for weight sharing. 'all': all elements share the same weight; 'channel': elements in a channel share the same weight; 'element': each element has its own weight.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The output tensor with the same shape as input.

Return type:

Variable

Examples

x = fluid.layers.data(name="x", shape=[10,10], dtype="float32")
   mode = 'channel'
   output = fluid.layers.prelu(x,mode)

brelu

paddle.fluid.layers.brelu(x, t_min=0.0, t_max=24.0, name=None)

BRelu Activation Operator.

\(out = \min(\max(x, t_{min}), t_{max})\)

Parameters:
  • x (Variable) – Input of BRelu operator
  • t_min (FLOAT|0.0) – The min marginal value of BRelu
  • t_max (FLOAT|24.0) – The max marginal value of BRelu
  • name – A name for this layer(optional). If set None, the layer will be named automatically.
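
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.brelu(x, t_min=1.0, t_max=20.0)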

leaky_relu

paddle.fluid.layers.leaky_relu(x, alpha=0.02, name=None)

LeakyRelu Activation Operator.

\(out = \max(x, \alpha * x)\)

Parameters:
  • x (Variable) – Input of LeakyRelu operator
  • alpha (FLOAT|0.02) – The small negative slope
  • name – A name for this layer(optional). If set None, the layer will be named automatically.
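
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.leaky_relu(x, alpha=0.1)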

soft_relu

paddle.fluid.layers.soft_relu(x, threshold=40.0, name=None)

SoftRelu Activation Operator.

\(out = \ln(1 + \exp(\max(\min(x, threshold), -threshold)))\)

Parameters:
  • x (Variable) – Input of SoftRelu operator
  • threshold (FLOAT|40.0) – The threshold value of SoftRelu
  • name – A name for this layer(optional). If set None, the layer will be named automatically.
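
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
y = fluid.layers.soft_relu(x, threshold=40.0)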

flatten

paddle.fluid.layers.flatten(x, axis=1, name=None)

Flatten layer Flattens the input tensor into a 2D matrix.

Examples: Case 1:

Given
X.shape = (3, 100, 100, 4)
and
axis = 2
We get:
Out.shape = (3 * 100, 4 * 100)
Case 2:
Given
X.shape = (3, 100, 100, 4)
and
axis = 0
We get:
Out.shape = (1, 3 * 100 * 100 * 4)
Parameters:
  • x (Variable) – A tensor of rank >= axis.
  • axis (int) – Indicates up to which input dimensions (exclusive) should be flattened to the outer dimension of the output. The value for axis must be in the range [0, R], where R is the rank of the input tensor. When axis = 0, the shape of the output tensor is (1, d_0 * d_1 * ... * d_n), where the shape of the input tensor is (d_0, d_1, ..., d_n).
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

A 2D tensor with the contents of the input tensor, with input

dimensions up to axis flattened to the outer dimension of the output and remaining input dimensions flattened into the inner dimension of the output.

Return type:

Variable

Raises:
  • ValueError – If x is not a variable.
  • ValueError – If axis is not in range [0, rank(x)].

Examples

x = fluid.layers.data(name="x", shape=[4, 4, 3], dtype="float32")
out = fluid.layers.flatten(x=x, axis=2)

sequence_mask

paddle.fluid.layers.sequence_mask(x, maxlen=None, dtype='int64', name=None)

SequenceMask Layer

This layer outputs a mask according to the input x and maxlen with data type of dtype.

Supposing x is a Tensor with shape [d_1, d_2, ..., d_n], the y is a mask with shape [d_1, d_2, ..., d_n, maxlen], where:

\[y(i_1, i_2,..., i_n, j) = (j < x(i_1, i_2,..., i_n))\]
Parameters:
  • x (Variable) – Input tensor of sequence_mask layer, whose elements are integers less than maxlen.
  • maxlen (int|None) – Maximum length of the sequence. If maxlen is None, it would be replaced with \(max(x)\).
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The output sequence mask.

Return type:

Variable
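
Examples

A minimal usage sketch (the length tensor below is illustrative; each element is a sequence length less than maxlen):

lengths = fluid.layers.data(name="lengths", shape=[1], dtype="int64")
mask = fluid.layers.sequence_mask(x=lengths, maxlen=10, dtype="float32")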

stack

paddle.fluid.layers.stack(x, axis=0)

Stack Layer

This layer stacks all of the input x along axis.

Input x can be a single variable, a list of variables, or a tuple of variables. If x is a list or tuple, the shapes of all these variables must be the same. Supposing the shape of each input is \([d_0, d_1, ..., d_{n-1}]\), the shape of the output variable would be \([d_0, d_1, ..., d_{axis}=len(x), ..., d_{n-1}]\). If axis < 0, it would be replaced with axis+rank(x[0])+1. If axis is None, it would be replaced with 0.

Parameters:
  • x (Variable|list(Variable)|tuple(Variable)) – Input variables.
  • axis (int|None) – The axis along which all inputs are stacked.
Returns:

The stacked variable.

Return type:

Variable
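
Examples

A minimal usage sketch (the two input layers are illustrative and share the same shape):

x0 = fluid.layers.data(name="x0", shape=[3, 4], dtype="float32")
x1 = fluid.layers.data(name="x1", shape=[3, 4], dtype="float32")
out = fluid.layers.stack([x0, x1], axis=1)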

pad2d

paddle.fluid.layers.pad2d(input, paddings=[0, 0, 0, 0], mode='constant', pad_value=0.0, data_format='NCHW', name=None)

Pad 2-d images according to 'paddings' and 'mode'. If mode is 'reflect', paddings[0] and paddings[1] must be no greater than height-1. The width dimension has the same condition.

Example

Given that X is a channel of image from input:

X = [[1, 2, 3],
     [4, 5, 6]]

Case 0:

paddings = [0, 1, 2, 3], mode = 'constant', pad_value = 0

Out = [[0, 0, 1, 2, 3, 0, 0, 0],
       [0, 0, 4, 5, 6, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0]]

Case 1:

paddings = [0, 1, 2, 1], mode = 'reflect'

Out = [[3, 2, 1, 2, 3, 2],
       [6, 5, 4, 5, 6, 5],
       [3, 2, 1, 2, 3, 2]]

Case 2:

paddings = [0, 1, 2, 1], mode = 'edge'

Out = [[1, 1, 1, 2, 3, 3],
       [4, 4, 4, 5, 6, 6],
       [4, 4, 4, 5, 6, 6]]
Parameters:
  • input (Variable) – The input image with [N, C, H, W] format or [N, H, W, C] format.
  • paddings (tuple|list) – The padding size. If padding is a tuple, it must contain four integers, (padding_top, padding_bottom, padding_left, padding_right). Default: padding = [0, 0, 0, 0].
  • mode (str) – Three modes: constant(default), reflect, edge. Default: constant
  • pad_value (float32) – The value to fill the padded areas in constant mode. Default: 0
  • data_format (str) – An optional string from: “NHWC”, “NCHW”. Specify the data format of the input data. Default: “NCHW”
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

The tensor variable padded according to paddings and mode.

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
result = fluid.layers.pad2d(input=data, paddings=[1, 2, 3, 4], mode='reflect')

unstack

paddle.fluid.layers.unstack(x, axis=0, num=None)

UnStack Layer

This layer unstacks input x into several tensors along axis.

If axis < 0, it would be replaced with axis+rank(x). If num is None, it would be inferred from x.shape[axis], and if x.shape[axis] <= 0 or is unknown, ValueError is raised.

Parameters:
  • x (Variable) – Input variable.
  • axis (int) – The axis along which the input is unstacked.
  • num (int|None) – The number of output variables.
Returns:

The unstacked variables.

Return type:

list(Variable)
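
Examples

A minimal usage sketch (the input layer is illustrative; append_batch_size=False keeps the full shape explicit):

x = fluid.layers.data(name="x", shape=[5, 3, 4], dtype="float32", append_batch_size=False)
ys = fluid.layers.unstack(x, axis=0, num=5)  # list of 5 variables, each with shape [3, 4]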

sequence_enumerate

paddle.fluid.layers.sequence_enumerate(input, win_size, pad_value=0, name=None)

Generate a new sequence for the input index sequence, which enumerates all the sub-sequences with length win_size of the input. The enumerated sequence has the same 1st dimension as the input variable, and the 2nd dimension is win_size, padded with pad_value if necessary during generation.

Examples: Case 1:

Input:
  X.lod = [[0, 3, 5]]
  X.data = [[1], [2], [3], [4], [5]]
  X.dims = [5, 1]
Attrs:
  win_size = 2
  pad_value = 0
Output:
  Out.lod = [[0, 3, 5]]
  Out.data = [[1, 2], [2, 3], [3, 0], [4, 5], [5, 0]]
  Out.dims = [5, 2]
Parameters:
  • input (Variable) – The input variable, which is an index sequence.
  • win_size (int) – The window size for enumerating all sub-sequences.
  • pad_value (int) – The padding value, default 0.
Returns:

The enumerated sequence variable, which is a LoDTensor.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[30, 1], dtype='int32', lod_level=1)
out = fluid.layers.sequence_enumerate(input=x, win_size=3, pad_value=0)

expand

paddle.fluid.layers.expand(x, expand_times, name=None)

Expand operator tiles the input by the given number of times. You set the number of times for each dimension by providing the attribute 'expand_times'. The rank of X should be in [1, 6]. Please note that the size of 'expand_times' must be the same as X's rank. The following is a use case:

Input(X) is a 3-D tensor with shape [2, 3, 1]:

        [
           [[1], [2], [3]],
           [[4], [5], [6]]
        ]

Attr(expand_times):  [1, 2, 2]

Output(Out) is a 3-D tensor with shape [2, 6, 2]:

        [
            [[1, 1], [2, 2], [3, 3], [1, 1], [2, 2], [3, 3]],
            [[4, 4], [5, 5], [6, 6], [4, 4], [5, 5], [6, 6]]
        ]
Parameters:
  • x (Variable) – A tensor with rank in [1, 6].
  • expand_times (list|tuple) – Expand times number for each dimension.
Returns:

The expanded variable, which is a LoDTensor. After expanding, the size of each dimension of Output(Out) is equal to the size of the corresponding dimension of Input(X) multiplied by the corresponding value given in expand_times.

Return type:

Variable

Examples

x = fluid.layers.data(name='x', shape=[10], dtype='float32')
out = fluid.layers.expand(x=x, expand_times=[1, 2, 2])

sequence_concat

paddle.fluid.layers.sequence_concat(input, name=None)

Sequence Concat Op. It concatenates LoD tensors using their sequence information. For example: if LoD of X1 = [0, 3, 7] and LoD of X2 = [0, 7, 9], the result LoD is [0, (3+7), (7+9)], i.e. [0, 10, 16].

Parameters:
  • input (list) – List of Variables to be concatenated.
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output variable of the concatenation.

Return type:

Variable

Examples

out = fluid.layers.sequence_concat(input=[seq1, seq2, seq3])

scale

paddle.fluid.layers.scale(x, scale=1.0, bias=0.0, bias_after_scale=True, act=None, name=None)

Scale operator

Apply scaling and bias addition to the input tensor.

if bias_after_scale=True:

$$Out = scale*X + bias$$

else:

$$Out = scale*(X + bias)$$

Parameters:
  • x (Variable) – (Tensor) Input tensor of scale operator
  • scale (FLOAT) – The scaling factor of the scale operator
  • bias (FLOAT) – The bias of the scale operator
  • bias_after_scale (BOOLEAN) – Apply bias addition after or before scaling. It is useful for numeric stability in some circumstances
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

(Tensor) Output tensor of scale operator

Return type:

out(Variable)
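
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
out = fluid.layers.scale(x, scale=2.0, bias=1.0, bias_after_scale=True)  # out = 2*x + 1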

elementwise_add

paddle.fluid.layers.elementwise_add(x, y, axis=-1, act=None, name=None)

Elementwise Add Operator

The equation is:

$$Out = X + Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.
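
Examples

A minimal sketch of the case-2 broadcasting described above (both input layers are illustrative; append_batch_size=False keeps the shapes explicit):

x = fluid.layers.data(name="x", shape=[2, 3, 4, 5], dtype="float32", append_batch_size=False)
y = fluid.layers.data(name="y", shape=[3, 4], dtype="float32", append_batch_size=False)
out = fluid.layers.elementwise_add(x, y, axis=1)  # Y is broadcast onto dims 1..2 of X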

elementwise_div

paddle.fluid.layers.elementwise_div(x, y, axis=-1, act=None, name=None)

Elementwise Div Operator

The equation is:

$$Out = X / Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.

elementwise_sub

paddle.fluid.layers.elementwise_sub(x, y, axis=-1, act=None, name=None)

Elementwise Sub Operator

The equation is:

$$Out = X - Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.

elementwise_mul

paddle.fluid.layers.elementwise_mul(x, y, axis=-1, act=None, name=None)

Elementwise Mul Operator

The equation is:

$$Out = X \odot Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.

elementwise_max

paddle.fluid.layers.elementwise_max(x, y, axis=-1, act=None, name=None)

Elementwise Max Operator

The equation is:

$$Out = max(X, Y)$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.

elementwise_min

paddle.fluid.layers.elementwise_min(x, y, axis=-1, act=None, name=None)

Elementwise Min Operator

The equation is:

$$Out = min(X, Y)$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.

elementwise_pow

paddle.fluid.layers.elementwise_pow(x, y, axis=-1, act=None, name=None)

Elementwise Pow Operator

The equation is:

$$Out = X ^ Y$$

  • \(X\): a tensor of any dimension.
  • \(Y\): a tensor whose dimensions must be less than or equal to the dimensions of \(X\).

There are two cases for this operator:

  1. The shape of \(Y\) is the same with \(X\).
  2. The shape of \(Y\) is a continuous subsequence of \(X\).

For case 2:

  1. Broadcast \(Y\) to match the shape of \(X\), where \(axis\) is the start dimension index for broadcasting \(Y\) onto \(X\).
  2. If \(axis\) is -1 (default), \(axis = rank(X) - rank(Y)\).
  3. The trailing dimensions of size 1 for \(Y\) will be ignored for the consideration of subsequence, such as shape(Y) = (2, 1) => (2).

For example:

shape(X) = (2, 3, 4, 5), shape(Y) = (,)
shape(X) = (2, 3, 4, 5), shape(Y) = (5,)
shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1(default) or axis=2
shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0

The inputs \(X\) and \(Y\) can carry the different LoD information. But the output only shares the LoD information with the input \(X\).

Parameters:
  • x – (Tensor), The first input tensor of elementwise op.
  • y – (Tensor), The second input tensor of elementwise op.
  • axis (INT) – (int, default -1). The start dimension index for broadcasting Y onto X.
  • use_mkldnn (BOOLEAN) – (bool, default false). Used by MKLDNN.
  • act (basestring|None) – Activation applied to the output.
  • name (basestring|None) – Name of the output.
Returns:

The output of elementwise op.

uniform_random_batch_size_like

paddle.fluid.layers.uniform_random_batch_size_like(input, shape, dtype='float32', input_dim_idx=0, output_dim_idx=0, min=-1.0, max=1.0, seed=0)

UniformRandomBatchSizeLike operator.

This operator initializes a tensor with the same batch_size as the Input tensor with random values sampled from a uniform distribution.

Parameters:
  • input (Variable) – Tensor whose input_dim_idx’th dimension specifies the batch_size
  • shape (tuple|list) – The shape of the output
  • input_dim_idx (Int) – default 0. The index of input’s batch size dimension
  • output_dim_idx (Int) – default 0. The index of output’s batch size dimension
  • min (Float) – (float, default -1.0) Minimum value of uniform random
  • max (Float) – (float, default 1.0) Maximum value of uniform random
  • seed (Int) – (int, default 0) Random seed used for generating samples. 0 means use a seed generated by the system.Note that if seed is not 0, this operator will always generate the same random numbers every time
  • dtype (np.dtype|core.VarDesc.VarType|str) – The type of data : float32, float_16, int etc
Returns:

Tensor of specified shape will be filled with the specified value

Return type:

out (Variable)

gaussian_random

paddle.fluid.layers.gaussian_random(shape, mean=0.0, std=1.0, seed=0, dtype='float32')

GaussianRandom Operator.

Used to initialize tensors with a gaussian random generator.

Parameters:
  • shape (tuple|list) – (vector<int>) The dimension of random tensor
  • mean (Float) – (float, default 0.0) mean of random tensor
  • std (Float) – (float, default 1.0) std of random tensor
  • seed (Int) – (int, default 0) Random seed of generator.0 means use system wide seed.Note that if seed is not 0, this operator will always generate the same random numbers every time
  • dtype (np.dtype|core.VarDesc.VarType|str) – Output data type.
Returns:

Output matrix of gaussian random op

Return type:

out (Variable)
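
Examples

A minimal usage sketch:

out = fluid.layers.gaussian_random(shape=[32, 784], mean=0.0, std=1.0)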

sampling_id

paddle.fluid.layers.sampling_id(x, min=0.0, max=1.0, seed=0, dtype='float32')

SamplingId Operator. A layer for sampling an id from a multinomial distribution defined by the input. One id is sampled for each sample.

Parameters:
  • x (Variable) – The input tensor of softmax. 2-D with shape [batch_size, input_feature_dimensions]
  • min (Float) – Minimum value of random. (float, default 0.0)
  • max (Float) – Maximum value of random. (float, default 1.0)
  • seed (Int) – Random seed used for the random number engine. 0 means use a seed generated by the system. Note that if seed is not 0, this operator will always generate the same random numbers every time. (int, default 0)
  • dtype (np.dtype|core.VarDesc.VarType|str) – The type of output data : float32, float_16, int etc
Returns:

SamplingId data tensor

Return type:

out (Variable)

gaussian_random_batch_size_like

paddle.fluid.layers.gaussian_random_batch_size_like(input, shape, input_dim_idx=0, output_dim_idx=0, mean=0.0, std=1.0, seed=0, dtype='float32')

Used to initialize tensors with a gaussian random generator. The default mean of the distribution is 0.0 and the default standard deviation (std) is 1.0. Users can set mean and std via the input arguments.

Parameters:
  • input (Variable) – Tensor whose input_dim_idx’th dimension specifies the batch_size
  • shape (tuple|list) – The shape of the output
  • input_dim_idx (Int) – default 0. The index of input’s batch size dimension
  • output_dim_idx (Int) – default 0. The index of output’s batch size dimension
  • mean (Float) – (float, default 0.0) The mean (or center) of the gaussian distribution
  • std (Float) – (float, default 1.0) The standard deviation (std, or spread) of the gaussian distribution
  • seed (Int) – (int, default 0) Random seed of generator.0 means use system wide seed.Note that if seed is not 0, this operator will always generate the same random numbers every time
  • dtype (np.dtype|core.VarDesc.VarType|str) – The type of output data : float32, float_16, int etc
Returns:

Tensor of specified shape will be filled with the specified value

Return type:

out (Variable)

sum

paddle.fluid.layers.sum(x)

Sum operator.

This operator sums the input tensors. All the inputs can carry the LoD (Level of Details) information. However, the output only shares the LoD information with the first input.

Parameters:x (Variable) – (vector<Tensor>) The input tensors of sum operator
Returns:(Tensor) The output tensor of sum operator
Return type:out (Variable)
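
Examples

A minimal usage sketch (both input layers are illustrative):

a = fluid.layers.data(name="a", shape=[10], dtype="float32")
b = fluid.layers.data(name="b", shape=[10], dtype="float32")
out = fluid.layers.sum([a, b])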

slice

paddle.fluid.layers.slice(input, axes, starts, ends)

Slice Operator.

Produces a slice of the input tensor along multiple axes. Similar to numpy: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html Slice uses the axes, starts and ends attributes to specify the start and end dimension for each axis in the list of axes; it uses this information to slice the input data tensor. If a negative value is passed for any of the start or end indices, it represents the number of elements before the end of that dimension. If the value passed to start or end is larger than n (the number of elements in this dimension), it represents n. For slicing to the end of a dimension with unknown size, it is recommended to pass in INT_MAX. If axes are omitted, they are set to [0, ..., ndim-1]. The following examples explain how slice works:


Case 1:
Given:
  data = [[1, 2, 3, 4],
          [5, 6, 7, 8]]
  axes = [0, 1]
  starts = [1, 0]
  ends = [2, 3]
Then:
  result = [[5, 6, 7]]

Case 2:
Given:
  data = [[1, 2, 3, 4],
          [5, 6, 7, 8]]
  starts = [0, 1]
  ends = [-1, 1000]
Then:
  result = [[2, 3, 4]]

Parameters:
  • input (Variable) – Tensor of data to extract slices from.
  • axes (List) – (list<int>) Axes that starts and ends apply to. It is optional; if not present, it will be treated as [0, 1, ..., len(starts) - 1].
  • starts (List) – (list<int>) Starting indices of the corresponding axis in axes.
  • ends (List) – (list<int>) Ending indices of the corresponding axis in axes.
Returns:

Sliced data tensor

Return type:

out (Variable)
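
Examples

A minimal sketch matching Case 1 above (the input layer is illustrative):

data = fluid.layers.data(name="data", shape=[2, 4], dtype="float32", append_batch_size=False)
out = fluid.layers.slice(data, axes=[0, 1], starts=[1, 0], ends=[2, 3])  # shape [1, 3]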

shape

paddle.fluid.layers.shape(input)

Shape Operator

Get the shape of the input tensor. Only CPU input Tensors are supported for now.

Parameters:input (Variable) – (Tensor), The input tensor
Returns:(Tensor), The shape of input tensor, the data type of the shape is int32_t, will be on the same device with the input Tensor
Return type:out (Variable)

logical_and

paddle.fluid.layers.logical_and(x, y, out=None, name=None)

logical_and Operator

It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X && Y$$

Parameters:
  • x (Variable) – (LoDTensor) Left hand operand of logical_and operator
  • y (Variable) – (LoDTensor) Right hand operand of logical_and operator
  • out (Tensor) – Output tensor of logical operation.
  • name (basestring|None) – Name of the output.
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = X && Y$$

Return type:

out(Variable)
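
Examples

A minimal usage sketch (the boolean operands are produced by comparison ops purely for illustration):

x = fluid.layers.data(name="x", shape=[1], dtype="float32")
y = fluid.layers.data(name="y", shape=[1], dtype="float32")
cond1 = fluid.layers.less_than(x=x, y=y)
cond2 = fluid.layers.less_than(x=y, y=x)
out = fluid.layers.logical_and(x=cond1, y=cond2)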

logical_or

paddle.fluid.layers.logical_or(x, y, out=None, name=None)

logical_or Operator

It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = X || Y$$

Parameters:
  • x (Variable) – (LoDTensor) Left hand operand of logical_or operator
  • y (Variable) – (LoDTensor) Right hand operand of logical_or operator
  • out (Tensor) – Output tensor of logical operation.
  • name (basestring|None) – Name of the output.
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = X || Y$$

Return type:

out(Variable)

logical_xor

paddle.fluid.layers.logical_xor(x, y, out=None, name=None)

logical_xor Operator

It operates element-wise on X and Y, and returns the Out. X, Y and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = (X || Y) && !(X && Y)$$

Parameters:
  • x (Variable) – (LoDTensor) Left hand operand of logical_xor operator
  • y (Variable) – (LoDTensor) Right hand operand of logical_xor operator
  • out (Tensor) – Output tensor of logical operation.
  • name (basestring|None) – Name of the output.
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = (X || Y) && !(X && Y)$$

Return type:

out(Variable)

logical_not

paddle.fluid.layers.logical_not(x, out=None, name=None)

logical_not Operator

It operates element-wise on X, and returns the Out. X and Out are N-dim boolean tensors. Each element of Out is calculated by $$Out = !X$$

Parameters:
  • x (Variable) – (LoDTensor) Operand of logical_not operator
  • out (Tensor) – Output tensor of logical operation.
  • name (basestring|None) – Name of the output.
Returns:

(LoDTensor) n-dim bool tensor. Each element is $$Out = !X$$

Return type:

out(Variable)

clip

paddle.fluid.layers.clip(x, min, max, name=None)

Clip Operator.

The clip operator limits the value of given input within an interval. The interval is specified with arguments ‘min’ and ‘max’:

$$ Out = min(max(X, min), max) $$

Parameters:
  • x (Variable) – (Tensor)The input of clip op.The number of dimensions must be between [1, 9]
  • min (FLOAT) – (float)Minimum value, under which element is replaced by min
  • max (FLOAT) – (float)Maximum value, above which element is replaced by max
  • name (basestring|None) – Name of the output.
Returns:

(Tensor)The output of clip op with shape as input(X)

Return type:

out(Variable)
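
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
out = fluid.layers.clip(x, min=-1.0, max=1.0)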

clip_by_norm

paddle.fluid.layers.clip_by_norm(x, max_norm, name=None)

ClipByNorm Operator.

This operator limits the L2 norm of the input \(X\) within \(max\_norm\). If the L2 norm of \(X\) is less than or equal to \(max\_norm\), \(Out\) will be the same as \(X\). If the L2 norm of \(X\) is greater than \(max\_norm\), \(X\) will be linearly scaled to make the L2 norm of \(Out\) equal to \(max\_norm\), as shown in the following formula:

$$ Out = \frac{max\_norm * X}{norm(X)}, $$

where \(norm(X)\) represents the L2 norm of \(X\).

Examples

data = fluid.layers.data(name='data', shape=[2, 4, 6], dtype='float32')
reshaped = fluid.layers.clip_by_norm(x=data, max_norm=0.5)

Parameters:
  • x (Variable) – (Tensor) The input of clip_by_norm op.The number of dimensions must be between [1, 9]
  • max_norm (FLOAT) – (float) The maximum norm value
  • name (basestring|None) – Name of the output.
Returns:

(Tensor) The output of clip_by_norm op with shape as input(X)

Return type:

out(Variable)

mean

paddle.fluid.layers.mean(x, name=None)

Mean Operator calculates the mean of all elements in X.

Parameters:
  • x (Variable) – (Tensor) The input of mean op
  • name (basestring|None) – Name of the output.
Returns:

(Tensor) The output of mean op

Return type:

out(Variable)
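
Examples

A minimal usage sketch (the input layer is illustrative):

x = fluid.layers.data(name="x", shape=[10], dtype="float32")
avg = fluid.layers.mean(x)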

mul

paddle.fluid.layers.mul(x, y, x_num_col_dims=1, y_num_col_dims=1, name=None)

Mul Operator.

This operator is used to perform matrix multiplication for input \(X\) and \(Y\).

The equation is:

$$Out = X * Y$$

Both the input \(X\) and \(Y\) can carry the LoD (Level of Details) information, or not. But the output only shares the LoD information with input \(X\).

Parameters:
  • x (Variable) – (Tensor), The first input tensor of mul op
  • y (Variable) – (Tensor), The second input tensor of mul op
  • x_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two dimensions as its inputs. If the input $X$ is a tensor with more than two dimensions, $X$ will be flattened into a two-dimensional matrix first. The flattening rule is: the first num_col_dims will be flattened to form the first dimension of the final matrix (the height of the matrix), and the rest rank(X) - num_col_dims dimensions are flattened to form the second dimension of the final matrix (the width of the matrix). As a result, height of the flattened matrix is equal to the product of $X$’s first x_num_col_dims dimensions’ sizes, and width of the flattened matrix is equal to the product of $X$’s last rank(x) - num_col_dims dimensions’ size. For example, suppose $X$ is a 6-dimensional tensor with the shape [2, 3, 4, 5, 6], and x_num_col_dims = 3. Thus, the flattened matrix will have a shape [2 x 3 x 4, 5 x 6] = [24, 30].
  • y_num_col_dims (INT) – (int, default 1), The mul_op can take tensors with more than two, dimensions as its inputs. If the input $Y$ is a tensor with more than two dimensions, $Y$ will be flattened into a two-dimensional matrix first. The attribute y_num_col_dims determines how $Y$ is flattened. See comments of x_num_col_dims for more details.
  • name (basestring|None) – Name of the output.
Returns:

(Tensor), The output tensor of mul op

Return type:

out(Variable)
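
Examples

A minimal usage sketch (both inputs are illustrative; with the default num_col_dims both are treated as 2-D matrices):

x = fluid.layers.data(name="x", shape=[4, 5], dtype="float32", append_batch_size=False)
y = fluid.layers.data(name="y", shape=[5, 3], dtype="float32", append_batch_size=False)
out = fluid.layers.mul(x, y)  # shape [4, 3]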

sigmoid_cross_entropy_with_logits

paddle.fluid.layers.sigmoid_cross_entropy_with_logits(x, label, name=None)

SigmoidCrossEntropyWithLogits Operator.

This measures the element-wise probability error in classification tasks in which each class is independent. This can be thought of as predicting labels for a data-point, where labels are not mutually exclusive. For example, a news article can be about politics, technology or sports at the same time or none of these.

The logistic loss is given as follows:

$$loss = -Labels * log(sigma(X)) - (1 - Labels) * log(1 - sigma(X))$$

We know that $$sigma(X) = \frac{1}{1 + exp(-X)}$$. By substituting this we get:

$$loss = X - X * Labels + log(1 + exp(-X))$$

For stability and to prevent overflow of $$exp(-X)$$ when X < 0, we reformulate the loss as follows:

$$loss = max(X, 0) - X * Labels + log(1 + exp(-|X|))$$

Both the input X and Labels can carry the LoD (Level of Details) information. However the output only shares the LoD with input X.

Parameters:
  • x (Variable) – (Tensor, default Tensor<float>), a 2-D tensor with shape N x D, where N is the batch size and D is the number of classes. This input is a tensor of logits computed by the previous operator. Logits are unscaled log probabilities given as log(p/(1-p))
  • label (Variable) – (Tensor, default Tensor<float>), a 2-D tensor of the same type and shape as X. This input is a tensor of probabilistic labels for each logit
  • name (basestring|None) – Name of the output.
Returns:

(Tensor, default Tensor<float>), a 2-D tensor with shape N x D of elementwise logistic losses

Return type:

out(Variable)
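
Examples

A minimal usage sketch (the logits and label layers are illustrative; the second dimension is the number of independent classes):

logits = fluid.layers.data(name="logits", shape=[10], dtype="float32")
label = fluid.layers.data(name="label", shape=[10], dtype="float32")
loss = fluid.layers.sigmoid_cross_entropy_with_logits(x=logits, label=label)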

maxout

paddle.fluid.layers.maxout(x, groups, name=None)

MaxOut Operator.

Assuming the input shape is (N, Ci, H, W), the output shape is (N, Co, H, W), where \(Co = Ci / groups\). The operator formula is as follows:

$$
y_{si+j} = \max_k x_{gsi + sk + j} \\
g = groups \\
s = \frac{input.size}{num\_channels} \\
0 \le i < \frac{num\_channels}{groups} \\
0 \le j < s \\
0 \le k < groups
$$

Please refer to Paper: - Maxout Networks: http://www.jmlr.org/proceedings/papers/v28/goodfellow13.pdf - Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks: https://arxiv.org/pdf/1312.6082v4.pdf

Parameters:
  • x (Variable) – (Tensor) The input tensor of maxout operator. The format of input tensor is NCHW. Where N is batch size, C is the number of channels, H and W is the height and width of feature
  • groups (INT) – Specifies how many groups the input tensor will be split into along the channel dimension. The number of output channels is the number of input channels divided by groups.
  • name (basestring|None) – Name of the output.
Returns:

(Tensor) The output tensor of maxout operator.The format of output tensor is also NCHW.Where N is batch size, C is the number of channels, H and W is the height and width of feature

Return type:

out(Variable)
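
Examples

A minimal usage sketch (the NCHW input layer is illustrative; with groups=2 the 8 input channels become 4 output channels):

x = fluid.layers.data(name="x", shape=[8, 32, 32], dtype="float32")
out = fluid.layers.maxout(x, groups=2)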

ops

sigmoid

paddle.fluid.layers.sigmoid(x, name=None)

Sigmoid Activation Operator.

Parameters:
  • x – Input of Sigmoid operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Sigmoid operator

logsigmoid

paddle.fluid.layers.logsigmoid(x, name=None)

LogSigmoid Activation Operator.

Parameters:
  • x – Input of LogSigmoid operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of LogSigmoid operator

exp

paddle.fluid.layers.exp(x, name=None)

Exp Activation Operator.

Parameters:
  • x – Input of Exp operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Exp operator

tanh

paddle.fluid.layers.tanh(x, name=None)

Tanh Activation Operator.

Parameters:
  • x – Input of Tanh operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Tanh operator

tanh_shrink

paddle.fluid.layers.tanh_shrink(x, name=None)

TanhShrink Activation Operator.

Parameters:
  • x – Input of TanhShrink operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of TanhShrink operator

softshrink

paddle.fluid.layers.softshrink(x, name=None)

Softshrink Activation Operator

\[\begin{split}out = \begin{cases} x - \lambda, \text{if } x > \lambda \\ x + \lambda, \text{if } x < -\lambda \\ 0, \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • x – Input of Softshrink operator
  • lambda (FLOAT) – non-negative offset
Returns:

Output of Softshrink operator

sqrt

paddle.fluid.layers.sqrt(x, name=None)

Sqrt Activation Operator.

Parameters:
  • x – Input of Sqrt operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Sqrt operator

abs

paddle.fluid.layers.abs(x, name=None)

Abs Activation Operator.

Parameters:
  • x – Input of Abs operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Abs operator

ceil

paddle.fluid.layers.ceil(x, name=None)

Ceil Activation Operator.

Parameters:
  • x – Input of Ceil operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Ceil operator

floor

paddle.fluid.layers.floor(x, name=None)

Floor Activation Operator.

Parameters:
  • x – Input of Floor operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Floor operator

cos

paddle.fluid.layers.cos(x, name=None)

Cos Activation Operator.

Parameters:
  • x – Input of Cos operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Cos operator

sin

paddle.fluid.layers.sin(x, name=None)

Sin Activation Operator.

Parameters:
  • x – Input of Sin operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Sin operator

round

paddle.fluid.layers.round(x, name=None)

Round Activation Operator.

Parameters:
  • x – Input of Round operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Round operator

reciprocal

paddle.fluid.layers.reciprocal(x, name=None)

Reciprocal Activation Operator.

Parameters:
  • x – Input of Reciprocal operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Reciprocal operator

square

paddle.fluid.layers.square(x, name=None)

Square Activation Operator.

Parameters:
  • x – Input of Square operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Square operator

softplus

paddle.fluid.layers.softplus(x, name=None)

Softplus Activation Operator.

Parameters:
  • x – Input of Softplus operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Softplus operator

softsign

paddle.fluid.layers.softsign(x, name=None)

Softsign Activation Operator.

Parameters:
  • x – Input of Softsign operator
  • use_mkldnn (BOOLEAN) – (bool, default false) Only used in mkldnn kernel

Returns:Output of Softsign operator

uniform_random

paddle.fluid.layers.uniform_random(shape, dtype=None, min=None, max=None, seed=None)

This operator initializes a tensor with random values sampled from a uniform distribution. The random result is in the range [min, max].

Parameters:
  • shape (INTS) – The shape of the output tensor
  • min (FLOAT) – Minimum value of uniform random. [default -1.0].
  • max (FLOAT) – Maximum value of uniform random. [default 1.0].
  • seed (INT) – Random seed used for generating samples. 0 means use a seed generated by the system.Note that if seed is not 0, this operator will always generate the same random numbers every time. [default 0].
  • dtype (INT) – Output tensor data type. [default 5(FP32)].
Returns:

The output tensor of uniform random op

Examples

>>> result = fluid.layers.uniform_random(shape=[32, 784])

hard_shrink

paddle.fluid.layers.hard_shrink(x, threshold=None)

HardShrink activation operator

\[\begin{split}out = \begin{cases} x, \text{if } x > \lambda \\ x, \text{if } x < -\lambda \\ 0, \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • x – Input of HardShrink operator
  • threshold (FLOAT) – The value of threshold for HardShrink. [default: 0.5]
Returns:

Output of HardShrink operator

Examples

>>> data = fluid.layers.data(name="input", shape=[784])
>>> result = fluid.layers.hard_shrink(x=data, threshold=0.3)

cumsum

paddle.fluid.layers.cumsum(x, axis=None, exclusive=None, reverse=None)

The cumulative sum of the elements along a given axis. By default, the first element of the result is the same as the first element of the input. If exclusive is true, the first element of the result is 0.

Parameters:
  • x – Input of cumsum operator
  • axis (INT) – The dimension to accumulate along. -1 means the last dimension [default -1].
  • exclusive (BOOLEAN) – Whether to perform exclusive cumsum. [default false].
  • reverse (BOOLEAN) – If true, the cumsum is performed in the reversed direction. [default false].
Returns:

Output of cumsum operator

Examples

>>> data = fluid.layers.data(name="input", shape=[32, 784])
>>> result = fluid.layers.cumsum(data, axis=0)

thresholded_relu

paddle.fluid.layers.thresholded_relu(x, threshold=None)

ThresholdedRelu activation operator

\[\begin{split}out = \begin{cases} x, \text{if } x > threshold \\ 0, \text{otherwise} \end{cases}\end{split}\]
Parameters:
  • x – Input of ThresholdedRelu operator
  • threshold (FLOAT) – The threshold location of activation. [default 1.0].
Returns:

Output of ThresholdedRelu operator

Examples

>>> data = fluid.layers.data(name="input", shape=[1])
>>> result = fluid.layers.thresholded_relu(data, threshold=0.4)

tensor

create_tensor

paddle.fluid.layers.create_tensor(dtype, name=None, persistable=False)

Create a variable that will hold a LoDTensor with data type dtype.

Parameters:
  • dtype (string) – 'float32'|'int32'|..., the data type of the created tensor.
  • name (string) – The name of the created tensor. If not set, the name will be a random unique one.
  • persistable (bool) – Set the persistable flag of the created tensor.
Returns:

The tensor variable storing the created tensor.

Return type:

Variable

Examples

tensor = fluid.layers.create_tensor(dtype='float32')

create_parameter

paddle.fluid.layers.create_parameter(shape, dtype, name=None, attr=None, is_bias=False, default_initializer=None)

Create a parameter. The parameter is a learnable variable, which can have gradient, and can be optimized.

NOTE: this is a very low-level API. It is useful when you create an operator by yourself instead of using layers.

Parameters:
  • shape (list[int]) – shape of the parameter
  • dtype (string) – element type of the parameter
  • attr (ParamAttr) – attributes of the parameter
  • is_bias (bool) – This can affect which default initializer is chosen when default_initializer is None. If is_bias, initializer.Constant(0.0) will be used. Otherwise, Xavier() will be used.
  • default_initializer (Initializer) – initializer for the parameter
Returns:

the created parameter.

Examples

>>> W = fluid.layers.create_parameter(shape=[784, 200], dtype='float32')
>>> data = fluid.layers.data(name="img", shape=[64, 784], append_batch_size=False)
>>> hidden = fluid.layers.matmul(x=data, y=W)

create_global_var

paddle.fluid.layers.create_global_var(shape, value, dtype, persistable=False, force_cpu=False, name=None)

Create a new tensor variable with value in the global block(block 0).

Parameters:
  • shape (list[int]) – shape of the variable
  • value (float) – the value of the variable. The new created variable will be filled with it.
  • dtype (string) – data type of the variable
  • persistable (bool) – if this variable is persistable. Default: False
  • force_cpu (bool) – force this variable to be on CPU. Default: False
  • name (str|None) – The name of the variable. If set to None the variable name will be generated automatically. Default: None
Returns:

the created Variable

Return type:

Variable

Examples

var = fluid.layers.create_global_var(shape=[2,3], value=1.0, dtype='float32',
                                     persistable=True, force_cpu=True, name='new_var')

cast

paddle.fluid.layers.cast(x, dtype)

This layer takes in the Variable x with x.dtype and casts it to the output with dtype.

Parameters:
  • x (Variable) – The input Variable for casting.
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output Variable.
Returns:

The output Variable after casting.

Return type:

Variable

Examples

data = fluid.layers.data(name='x', shape=[13], dtype='float32')
result = fluid.layers.cast(x=data, dtype='float64')

concat

paddle.fluid.layers.concat(input, axis=0, name=None)

Concat

This function concatenates the input along the axis mentioned and returns that as the output.

Parameters:
  • input (list) – List of tensors to be concatenated
  • axis (int) – Integer axis along which the tensors will be concatenated
  • name (str|None) – A name for this layer(optional). If set None, the layer will be named automatically.
Returns:

Output variable of the concatenation

Return type:

Variable

Examples

out = fluid.layers.concat(input=[Efirst, Esecond, Ethird, Efourth])
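A self-contained sketch (the variable names a and b are illustrative, not part of the original example):

import paddle.fluid as fluid

a = fluid.layers.data(name='a', shape=[3, 4], dtype='float32')
b = fluid.layers.data(name='b', shape=[3, 5], dtype='float32')
# Concatenate along axis 2 (the last non-batch axis); the static shape
# of the result is [-1, 3, 9].
out = fluid.layers.concat(input=[a, b], axis=2)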

sums

paddle.fluid.layers.sums(input, out=None)

This function performs the sum operation on the input and returns the result as the output.

Parameters:
  • input (Variable|list) – The input tensor that has the elements that need to be summed up.
  • out (Variable|None) – Output parameter. The sum result. Default: None
Returns:

The sum of the input, which is the same Variable as the argument out.

Return type:

Variable

Examples

tmp = fluid.layers.zeros(shape=[10], dtype='int32')
i = fluid.layers.fill_constant(shape=[1], dtype='int64', value=10)
a0 = layers.array_read(array=tmp, i=i)
i = layers.increment(x=i)
a1 = layers.array_read(array=tmp, i=i)
mean_a0 = layers.mean(a0)
mean_a1 = layers.mean(a1)
a_sum = layers.sums(input=[mean_a0, mean_a1])

assign

paddle.fluid.layers.assign(input, output=None)

Assign

This function copies the input Variable to the output Variable.

Parameters:
  • input (Variable|numpy.ndarray) – The source variable
  • output (Variable|None) – The destination variable
Returns:

The destination variable that was supplied as the output.

Return type:

Variable

Examples

out = fluid.layers.create_tensor(dtype='float32')
hidden = fluid.layers.fc(input=data, size=10)
fluid.layers.assign(hidden, out)

fill_constant_batch_size_like

paddle.fluid.layers.fill_constant_batch_size_like(input, shape, dtype, value, input_dim_idx=0, output_dim_idx=0)

This function creates a tensor of specified shape, dtype and batch size, and initializes this with a constant supplied in value. The batch size is obtained from the input tensor.

It also sets stop_gradient to True.

>>> data = fluid.layers.fill_constant_batch_size_like(
>>>             input=like, shape=[1], value=0, dtype='int64')
Parameters:
  • input (Variable) – Tensor whose input_dim_idx’th dimension specifies the batch_size.
  • shape (INTS) – The shape of the output.
  • dtype (INT) – It could be numpy.dtype. Output data type. Default is float32.
  • value (FLOAT) – default 0. The value to be filled.
  • input_dim_idx (INT) – default 0. The index of input’s batch size dimension.
  • output_dim_idx (INT) – default 0. The index of output’s batch size dimension.
Returns:

The tensor of the specified shape, filled with the specified value.
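
Examples

A minimal self-contained sketch (the variable name like is illustrative and not part of the original reference):

import paddle.fluid as fluid

like = fluid.layers.data(name='like', shape=[13], dtype='float32')
# Dimension output_dim_idx (0) of shape is replaced by the batch size taken
# from dimension input_dim_idx (0) of `like`, so the output has shape [batch_size].
data = fluid.layers.fill_constant_batch_size_like(
    input=like, shape=[1], value=0, dtype='int64')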

fill_constant

paddle.fluid.layers.fill_constant(shape, dtype, value, force_cpu=False, out=None)

fill_constant

This function creates a tensor with the specified shape and dtype, and initializes it with a constant specified by value.

The attribute stop_gradient of the created tensor is set to True.

Parameters:
  • shape (tuple|list|None) – Shape of the output tensor.
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of the output tensor.
  • value (float) – The constant value used to initialize the output tensor.
  • out (Variable) – The output tensor.
  • force_cpu (bool) – If True, the data is kept on CPU.
Returns:

The tensor variable storing the output.

Return type:

Variable

Examples

data = fluid.layers.fill_constant(shape=[1], value=0, dtype='int64')

argmin

paddle.fluid.layers.argmin(x, axis=0)

argmin

This function computes the indices of the minimum elements of the input tensor along the provided axis.

Parameters:
  • x (Variable) – The input to compute the indices of the min elements.
  • axis (int) – Axis to compute indices along.
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[3, 4], dtype='float32')
out = fluid.layers.argmin(x=data, axis=0)
out = fluid.layers.argmin(x=data, axis=-1)

argmax

paddle.fluid.layers.argmax(x, axis=0)

argmax

This function computes the indices of the maximum elements of the input tensor along the provided axis.

Parameters:
  • x (Variable) – The input to compute the indices of the max elements.
  • axis (int) – Axis to compute indices along.
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[3, 4], dtype='float32')
out = fluid.layers.argmax(x=data, axis=0)
out = fluid.layers.argmax(x=data, axis=-1)

argsort

paddle.fluid.layers.argsort(input, axis=-1, name=None)

Performs sorting on the input Variable along the given axis, and outputs a sorted data Variable and its corresponding index Variable, both with the same shape as the input.

For example, the given axis is -1 and the input Variable

    input = [[0.15849551, 0.45865775, 0.8563702 ],
             [0.12070083, 0.28766365, 0.18776911]],

after argsort, the sorted Variable becomes

    out = [[0.15849551, 0.45865775, 0.8563702 ],
           [0.12070083, 0.18776911, 0.28766365]],

and the sorted indices along the given axis turn out to be

    indices = [[0, 1, 2],
               [0, 2, 1]]
Parameters:
  • input (Variable) – The input Variable for sorting.
  • axis (int) – The axis along which to sort the input Variable. When axis < 0, the actual axis will be axis + rank(input). Default -1, the last dimension.
  • name (str|None) – (optional) A name for this layer. If set None, the layer will be named automatically.
Returns:

A tuple of sorted data Variable and the sorted indices.

Return type:

tuple

Examples

input = fluid.layers.data(name='input', shape=[2, 3], dtype='float32')
out, indices = fluid.layers.argsort(input, axis=0)

ones

paddle.fluid.layers.ones(shape, dtype, force_cpu=False)

ones

This function creates a tensor of specified shape and dtype, and initializes this with 1.

It also sets stop_gradient to True.

Parameters:
  • shape (tuple|list|None) – Shape of output tensor
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor
  • force_cpu (bool) – Whether to make the output stay on CPU. Default: False.
Returns:

The tensor variable storing the output

Return type:

Variable

Examples

data = fluid.layers.ones(shape=[1], dtype='int64')

zeros

paddle.fluid.layers.zeros(shape, dtype, force_cpu=False)

zeros

This function creates a tensor of specified shape and dtype, and initializes this with 0.

It also sets stop_gradient to True.

Parameters:
  • shape (tuple|list|None) – Shape of output tensor.
  • dtype (np.dtype|core.VarDesc.VarType|str) – Data type of output tensor.
  • force_cpu (bool, default False) – Whether to make output stay on CPU.
Returns:

The tensor variable storing the output.

Return type:

Variable

Examples

data = fluid.layers.zeros(shape=[1], dtype='int64')

reverse

paddle.fluid.layers.reverse(x, axis)

reverse

This function reverses the input x along the given axes.

Parameters:
  • x (Variable) – the input to be reversed.
  • axis (int|tuple|list) – Axis (or axes) along which the order of elements is reversed. If it is a tuple or a list, reversing will be applied to each axis in the tuple or list.
Returns:

The reversed tensor.

Return type:

Variable

Examples

data = fluid.layers.data(name='data', shape=[2, 3], dtype='float32')
out = fluid.layers.reverse(x=data, axis=0)
# or:
out = fluid.layers.reverse(x=data, axis=[0, 1])

learning_rate_scheduler

exponential_decay

paddle.fluid.layers.exponential_decay(learning_rate, decay_steps, decay_rate, staircase=False)

Applies exponential decay to the learning rate.

When training a model, it is often recommended to lower the learning rate as the training progresses. By using this function, the learning rate will be decayed by ‘decay_rate’ every ‘decay_steps’ steps.

>>> if staircase == True:
>>>     decayed_learning_rate = learning_rate * decay_rate ^ floor(global_step / decay_steps)
>>> else:
>>>     decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
Parameters:
  • learning_rate (Variable|float) – The initial learning rate.
  • decay_steps (int) – See the decay computation above.
  • decay_rate (float) – The decay rate. See the decay computation above.
  • staircase (Boolean) – If True, decay the learning rate at discrete intervals. Default: False
Returns:

The decayed learning rate

Return type:

Variable

Examples

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.exponential_decay(
          learning_rate=base_lr,
          decay_steps=10000,
          decay_rate=0.5,
          staircase=True))
sgd_optimizer.minimize(avg_cost)

natural_exp_decay

paddle.fluid.layers.natural_exp_decay(learning_rate, decay_steps, decay_rate, staircase=False)

Applies natural exponential decay to the initial learning rate.

>>> if not staircase:
>>>     decayed_learning_rate = learning_rate * exp(- decay_rate * (global_step / decay_steps))
>>> else:
>>>     decayed_learning_rate = learning_rate * exp(- decay_rate * floor(global_step / decay_steps))
Parameters:
  • learning_rate – A scalar float32 value or a Variable. This will be the initial learning rate during training
  • decay_steps – A Python int32 number.
  • decay_rate – A Python float number.
  • staircase – Boolean. If True, decay the learning rate at discrete intervals of decay_steps.
Returns:

The decayed learning rate
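
Examples

A usage sketch analogous to the exponential_decay example above (avg_cost is assumed to be a previously defined loss Variable):

import paddle.fluid as fluid

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
    learning_rate=fluid.layers.natural_exp_decay(
        learning_rate=base_lr,
        decay_steps=10000,
        decay_rate=0.5,
        staircase=True))
sgd_optimizer.minimize(avg_cost)  # avg_cost: loss Variable defined elsewhere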

inverse_time_decay

paddle.fluid.layers.inverse_time_decay(learning_rate, decay_steps, decay_rate, staircase=False)

Applies inverse time decay to the initial learning rate.

When training a model, it is often recommended to lower the learning rate as the training progresses. By using this function, an inverse decay function will be applied to the initial learning rate.

>>> if staircase == True:
>>>     decayed_learning_rate = learning_rate / (1 + decay_rate * floor(global_step / decay_step))
>>> else:
>>>     decayed_learning_rate = learning_rate / (1 + decay_rate * global_step / decay_step)
Parameters:
  • learning_rate (Variable|float) – The initial learning rate.
  • decay_steps (int) – See the decay computation above.
  • decay_rate (float) – The decay rate. See the decay computation above.
  • staircase (Boolean) – If True, decay the learning rate at discrete intervals. Default: False
Returns:

The decayed learning rate

Return type:

Variable

Examples

base_lr = 0.1
sgd_optimizer = fluid.optimizer.SGD(
      learning_rate=fluid.layers.inverse_time_decay(
          learning_rate=base_lr,
          decay_steps=10000,
          decay_rate=0.5,
          staircase=True))
sgd_optimizer.minimize(avg_cost)

polynomial_decay

paddle.fluid.layers.polynomial_decay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False)

Applies polynomial decay to the initial learning rate.

if cycle:
    decay_steps = decay_steps * ceil(global_step / decay_steps)
else:
    global_step = min(global_step, decay_steps)
decayed_learning_rate = (learning_rate - end_learning_rate) *
     (1 - global_step / decay_steps) ^ power + end_learning_rate
Parameters:
  • learning_rate (Variable|float32) – A scalar float32 value or a Variable. This will be the initial learning rate during training.
  • decay_steps (int32) – A Python int32 number.
  • end_learning_rate (float) – A Python float number.
  • power (float) – A Python float number.
  • cycle (bool) – If True, the decay restarts in cycles of decay_steps; otherwise global_step is clamped at decay_steps.
Returns:

The decayed learning rate

Return type:

Variable
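
Examples

A usage sketch analogous to the other learning rate schedulers in this section (avg_cost is assumed to be a previously defined loss Variable):

import paddle.fluid as fluid

base_lr = 0.01
sgd_optimizer = fluid.optimizer.SGD(
    learning_rate=fluid.layers.polynomial_decay(
        learning_rate=base_lr,
        decay_steps=10000,
        end_learning_rate=0.0001,
        power=1.0,
        cycle=False))
sgd_optimizer.minimize(avg_cost)  # avg_cost: loss Variable defined elsewhere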

piecewise_decay

paddle.fluid.layers.piecewise_decay(boundaries, values)

Applies piecewise decay to the initial learning rate.

The algorithm can be described as the code below.

boundaries = [10000, 20000]
values = [1.0, 0.5, 0.1]
if step < 10000:
    learning_rate = 1.0
elif 10000 <= step < 20000:
    learning_rate = 0.5
else:
    learning_rate = 0.1
Parameters:
  • boundaries – A list of steps numbers.
  • values – A list of learning rate values that will be picked during different step boundaries.
Returns:

The decayed learning rate.
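
Examples

A usage sketch matching the boundaries/values shown above (avg_cost is assumed to be a previously defined loss Variable):

import paddle.fluid as fluid

boundaries = [10000, 20000]
values = [1.0, 0.5, 0.1]
# The learning rate is 1.0 before step 10000, 0.5 until step 20000, then 0.1.
sgd_optimizer = fluid.optimizer.SGD(
    learning_rate=fluid.layers.piecewise_decay(
        boundaries=boundaries, values=values))
sgd_optimizer.minimize(avg_cost)  # avg_cost: loss Variable defined elsewhere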

noam_decay

paddle.fluid.layers.noam_decay(d_model, warmup_steps)

Noam decay method. The numpy implementation of noam decay is as follows.

>>> import numpy as np
>>> lr_value = np.power(d_model, -0.5) * np.min([
>>>                         np.power(current_steps, -0.5),
>>>                         np.power(warmup_steps, -1.5) * current_steps])

Please refer to the paper Attention Is All You Need.

Parameters:
  • d_model (Variable) – The dimensionality of input and output of model.
  • warmup_steps (Variable) – The number of warmup steps, a hyperparameter.
Returns:

The decayed learning rate.
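
Examples

A minimal usage sketch (assumed, not from the original reference; d_model and warmup_steps are shown as Python scalars for brevity, while the parameter list above documents them as Variables; avg_cost is assumed to be a previously defined loss Variable):

import paddle.fluid as fluid

# Transformer-style schedule with d_model=512 and 8000 warmup steps.
lr = fluid.layers.noam_decay(d_model=512, warmup_steps=8000)
optimizer = fluid.optimizer.Adam(learning_rate=lr)
optimizer.minimize(avg_cost)  # avg_cost: loss Variable defined elsewhere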

append_LARS

paddle.fluid.layers.append_LARS(params_grads, learning_rate, weight_decay)
Applies LARS (Layer-wise Adaptive Rate Scaling) to the learning rate of each layer:

learning_rate *= local_gw_ratio * sqrt(sumsq(param))
                 / (sqrt(sumsq(gradient)) + weight_decay * sqrt(sumsq(param)))

Parameters:
  • params_grads – A list of (parameter, gradient) pairs to which LARS is applied.
  • learning_rate – A learning rate Variable. This is the global learning rate for LARS.
  • weight_decay – A Python float number.
Returns:

The decayed learning rate

detection

prior_box

paddle.fluid.layers.prior_box(input, image, min_sizes, max_sizes=None, aspect_ratios=[1.0], variance=[0.1, 0.1, 0.2, 0.2], flip=False, clip=False, steps=[0.0, 0.0], offset=0.5, name=None, min_max_aspect_ratios_order=False)

Prior Box Operator

Generate prior boxes for the SSD (Single Shot MultiBox Detector) algorithm. Each position of the input produces N prior boxes, where N is determined by the count of min_sizes, max_sizes and aspect_ratios. The sizes of the boxes lie in the range (min_size, max_size) and are generated in sequence according to the aspect_ratios.

Parameters:
  • input (Variable) – The Input Variables, the format is NCHW.
  • image (Variable) – The input image data of PriorBoxOp, the layout is NCHW.
  • min_sizes (list|tuple|float value) – min sizes of generated prior boxes.
  • max_sizes (list|tuple|None) – max sizes of generated prior boxes. Default: None.
  • aspect_ratios (list|tuple|float value) – the aspect ratios of generated prior boxes. Default: [1.].
  • variance (list|tuple) – the variances to be encoded in prior boxes. Default:[0.1, 0.1, 0.2, 0.2].
  • flip (bool) – Whether to flip aspect ratios. Default:False.
  • clip (bool) – Whether to clip out-of-boundary boxes. Default: False.
  • steps (list|tuple) – Prior box steps across width and height. If steps[0] == 0.0 or steps[1] == 0.0, the prior box steps across the height/width of the input will be calculated automatically. Default: [0., 0.]
  • offset (float) – Prior boxes center offset. Default: 0.5
  • name (str) – Name of the prior box op. Default: None.
  • min_max_aspect_ratios_order (bool) – If set True, the output prior box is in the order of [min, max, aspect_ratios], which is consistent with Caffe. Please note, this order affects the weight order of the convolution layer that follows, but does not affect the final detection results. Default: False.
Returns:

A tuple with two Variable (boxes, variances)

boxes: the output prior boxes of PriorBox. The layout is [H, W, num_priors, 4]. H is the height of input, W is the width of input, num_priors is the total box count of each position of input.

variances: the expanded variances of PriorBox. The layout is [H, W, num_priors, 4]. H is the height of input, W is the width of input num_priors is the total box count of each position of input

Return type:

tuple

Examples

box, var = fluid.layers.prior_box(
    input=conv1,
    image=images,
    min_sizes=[100.],
    flip=True,
    clip=True)

multi_box_head

paddle.fluid.layers.multi_box_head(inputs, image, base_size, num_classes, aspect_ratios, min_ratio=None, max_ratio=None, min_sizes=None, max_sizes=None, steps=None, step_w=None, step_h=None, offset=0.5, variance=[0.1, 0.1, 0.2, 0.2], flip=True, clip=False, kernel_size=1, pad=0, stride=1, name=None, min_max_aspect_ratios_order=False)

Generate prior boxes for the SSD (Single Shot MultiBox Detector) algorithm. For details of this algorithm, please refer to Section 2.2 of the SSD paper SSD: Single Shot MultiBox Detector.

Parameters:
  • inputs (list|tuple) – The list of input Variables, the format of all Variables is NCHW.
  • image (Variable) – The input image data of PriorBoxOp, the layout is NCHW.
  • base_size (int) – the base_size is used to get min_size and max_size according to min_ratio and max_ratio.
  • num_classes (int) – The number of classes.
  • aspect_ratios (list|tuple) – the aspect ratios of generated prior boxes. The length of input and aspect_ratios must be equal.
  • min_ratio (int) – the min ratio of generated prior boxes.
  • max_ratio (int) – the max ratio of generated prior boxes.
  • min_sizes (list|tuple|None) – If len(inputs) <= 2, min_sizes must be set up, and the length of min_sizes should equal the length of inputs. Default: None.
  • max_sizes (list|tuple|None) – If len(inputs) <= 2, max_sizes must be set up, and the length of max_sizes should equal the length of inputs. Default: None.
  • steps (list|tuple) – If step_w and step_h are the same, step_w and step_h can be replaced by steps.
  • step_w (list|tuple) – Prior boxes step across width. If step_w[i] == 0.0, the prior boxes step across width of the inputs[i] will be automatically calculated. Default: None.
  • step_h (list|tuple) – Prior boxes step across height, If step_h[i] == 0.0, the prior boxes step across height of the inputs[i] will be automatically calculated. Default: None.
  • offset (float) – Prior boxes center offset. Default: 0.5
  • variance (list|tuple) – the variances to be encoded in prior boxes. Default:[0.1, 0.1, 0.2, 0.2].
  • flip (bool) – Whether to flip aspect ratios. Default: True.
  • clip (bool) – Whether to clip out-of-boundary boxes. Default: False.
  • kernel_size (int) – The kernel size of conv2d. Default: 1.
  • pad (int|list|tuple) – The padding of conv2d. Default:0.
  • stride (int|list|tuple) – The stride of conv2d. Default:1,
  • name (str) – Name of the prior box layer. Default: None.
  • min_max_aspect_ratios_order (bool) – If set True, the output prior box is in the order of [min, max, aspect_ratios], which is consistent with Caffe. Please note, this order affects the weight order of the convolution layer that follows, but does not affect the final detection results. Default: False.
Returns:

A tuple with four Variables. (mbox_loc, mbox_conf, boxes, variances)

mbox_loc: The predicted boxes’ location of the inputs. The layout is [N, H*W*Priors, 4]. where Priors is the number of predicted boxes each position of each input.

mbox_conf: The predicted boxes’ confidence of the inputs. The layout is [N, H*W*Priors, C]. where Priors is the number of predicted boxes each position of each input and C is the number of Classes.

boxes: the output prior boxes of PriorBox. The layout is [num_priors, 4]. num_priors is the total box count of each position of inputs.

variances: the expanded variances of PriorBox. The layout is [num_priors, 4]. num_priors is the total box count of each position of inputs

Return type:

tuple

Examples

mbox_locs, mbox_confs, box, var = fluid.layers.multi_box_head(
  inputs=[conv1, conv2, conv3, conv4, conv5, conv5],
  image=images,
  num_classes=21,
  min_ratio=20,
  max_ratio=90,
  aspect_ratios=[[2.], [2., 3.], [2., 3.], [2., 3.], [2.], [2.]],
  base_size=300,
  offset=0.5,
  flip=True,
  clip=True)

bipartite_match

paddle.fluid.layers.bipartite_match(dist_matrix, match_type=None, dist_threshold=None, name=None)

This operator implements a greedy bipartite matching algorithm, which is used to obtain the matching with the maximum distance based on the input distance matrix. For an input 2D matrix, the bipartite matching algorithm can find the matched column for each row (matched means the largest distance), and can also find the matched row for each column. This operator only calculates matched indices from column to row. For each instance, the number of matched indices is the column number of the input distance matrix.

There are two outputs: matched indices and matched distance. Briefly, this algorithm matches the best (maximum distance) row entity to each column entity, and the matched indices are not duplicated in each row of ColToRowMatchIndices. If a column entity is not matched to any row entity, ColToRowMatchIndices is set to -1.

NOTE: the input DistMat can be LoDTensor (with LoD) or Tensor. If LoDTensor with LoD, the height of ColToRowMatchIndices is batch size. If Tensor, the height of ColToRowMatchIndices is 1.

NOTE: This API is a very low-level API. It is used by the ssd_loss layer. Please consider using ssd_loss instead.

Parameters:
  • dist_matrix (Variable) –

    This input is a 2-D LoDTensor with shape [K, M]. It is pair-wise distance matrix between the entities represented by each row and each column. For example, assumed one entity is A with shape [K], another entity is B with shape [M]. The dist_matrix[i][j] is the distance between A[i] and B[j]. The bigger the distance is, the better matching the pairs are.

    NOTE: This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities.

  • match_type (string|None) – The type of matching method, should be ‘bipartite’ or ‘per_prediction’. [default ‘bipartite’].
  • dist_threshold (float|None) – If match_type is ‘per_prediction’, this threshold is to determine the extra matching bboxes based on the maximum distance, 0.5 by default.
Returns:

a tuple with two elements is returned. The first is matched_indices, the second is matched_distance.

The matched_indices is a 2-D Tensor with shape [N, M] in int type. N is the batch size. If match_indices[i][j] is -1, it means B[j] does not match any entity in i-th instance. Otherwise, it means B[j] is matched to row match_indices[i][j] in i-th instance. The row number of i-th instance is saved in match_indices[i][j].

The matched_distance is a 2-D Tensor with shape [N, M] in float type . N is batch size. If match_indices[i][j] is -1, match_distance[i][j] is also -1.0. Otherwise, assumed match_distance[i][j] = d, and the row offsets of each instance are called LoD. Then match_distance[i][j] = dist_matrix[d+LoD[i]][j].

Return type:

tuple

Examples

>>> x = fluid.layers.data(name='x', shape=[4], dtype='float32')
>>> y = fluid.layers.data(name='y', shape=[4], dtype='float32')
>>> iou = fluid.layers.iou_similarity(x=x, y=y)
>>> matched_indices, matched_dist = fluid.layers.bipartite_match(iou)

target_assign

paddle.fluid.layers.target_assign(input, matched_indices, negative_indices=None, mismatch_value=None, name=None)

Given the target bounding boxes or labels, this operator assigns classification and regression targets to each prediction, as well as weights to each prediction. The weights are used to specify which predictions do not contribute to the training loss.

For each instance, the outputs out and out_weight are assigned based on match_indices and negative_indices. Assuming that the row offset for each instance in input is called lod, this operator assigns classification/regression targets by performing the following steps:

  1. Assigning all outputs based on match_indices:
If id = match_indices[i][j] > 0,

    out[i][j][0 : K] = X[lod[i] + id][j % P][0 : K]
    out_weight[i][j] = 1.

Otherwise,

    out[i][j][0 : K] = {mismatch_value, mismatch_value, ...}
    out_weight[i][j] = 0.
  2. Assigning out_weight based on neg_indices if neg_indices is provided:

Assuming that the row offset for each instance in neg_indices is called neg_lod, for the i-th instance and each id of neg_indices in this instance:

out[i][id][0 : K] = {mismatch_value, mismatch_value, ...}
out_weight[i][id] = 1.0
Parameters:
  • input (Variable) – This input is a 3-D LoDTensor with shape [M, P, K].
  • matched_indices (Variable) – The input matched indices, a 2-D Tensor<int32> with shape [N, P]. If MatchIndices[i][j] is -1, the j-th column entity is not matched to any row entity in the i-th instance.
  • negative_indices (Variable) – The input negative example indices are an optional input with shape [Neg, 1] and int32 type, where Neg is the total number of negative example indices.
  • mismatch_value (float32) – Fill this value to the mismatched location.
Returns:

A tuple (out, out_weight) is returned. out is a 3-D Tensor with shape [N, P, K], where N and P are the same as in neg_indices and K is the same as in the input X. out_weight is the weight for the output, with shape [N, P, 1].

Return type:

tuple

Examples

matched_indices, matched_dist = fluid.layers.bipartite_match(iou)
gt = layers.data(
            name='gt', shape=[1, 1], dtype='int32', lod_level=1)
trg, trg_weight = layers.target_assign(
                gt, matched_indices, mismatch_value=0)

detection_output

paddle.fluid.layers.detection_output(loc, scores, prior_box, prior_box_var, background_label=0, nms_threshold=0.3, nms_top_k=400, keep_top_k=200, score_threshold=0.01, nms_eta=1.0)

Detection Output Layer for Single Shot Multibox Detector (SSD).

This operation is to get the detection results by performing following two steps:

  1. Decode input bounding box predictions according to the prior boxes.
  2. Get the final detection results by applying multi-class non maximum suppression (NMS).

Please note, this operation doesn’t clip the final output bounding boxes to the image window.

Parameters:
  • loc (Variable) – A 3-D Tensor with shape [N, M, 4] represents the predicted locations of M bounding bboxes. N is the batch size, and each bounding box has four coordinate values and the layout is [xmin, ymin, xmax, ymax].
  • scores (Variable) – A 3-D Tensor with shape [N, M, C] represents the predicted confidence predictions. N is the batch size, C is the class number, M is the number of bounding boxes. For each category there are in total M scores corresponding to the M bounding boxes.
  • prior_box (Variable) – A 2-D Tensor with shape [M, 4] holds M boxes, each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the anchor box, if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the anchor box.
  • prior_box_var (Variable) – A 2-D Tensor with shape [M, 4] holds M group of variance.
  • background_label (float) – The index of background label, the background label will be ignored. If set to -1, then all categories will be considered.
  • nms_threshold (float) – The threshold to be used in NMS.
  • nms_top_k (int) – Maximum number of detections to be kept according to the confidences after filtering detections based on score_threshold.
  • keep_top_k (int) – Number of total bboxes to be kept per image after NMS step. -1 means keeping all bboxes after NMS step.
  • score_threshold (float) – Threshold to filter out bounding boxes with low confidence score. If not provided, consider all boxes.
  • nms_eta (float) – The parameter for adaptive NMS.
Returns:

The detection output is a LoDTensor with shape [No, 6]. Each row has six values: [label, confidence, xmin, ymin, xmax, ymax]. No is the total number of detections in this mini-batch. For each instance, the offsets in the first dimension are called LoD; the number of offsets is N + 1, where N is the batch size. The i-th image has LoD[i + 1] - LoD[i] detected results; if it is 0, the i-th image has no detected results. If no image has any detected results, all the elements in LoD are 0, and the output tensor only contains one value, which is -1.

Return type:

Variable

Examples

pb = layers.data(name='prior_box', shape=[10, 4],
             append_batch_size=False, dtype='float32')
pbv = layers.data(name='prior_box_var', shape=[10, 4],
              append_batch_size=False, dtype='float32')
loc = layers.data(name='target_box', shape=[2, 21, 4],
              append_batch_size=False, dtype='float32')
scores = layers.data(name='scores', shape=[2, 21, 10],
              append_batch_size=False, dtype='float32')
nmsed_outs = fluid.layers.detection_output(scores=scores,
                           loc=loc,
                           prior_box=pb,
                           prior_box_var=pbv)

ssd_loss

paddle.fluid.layers.ssd_loss(location, confidence, gt_box, gt_label, prior_box, prior_box_var=None, background_label=0, overlap_threshold=0.5, neg_pos_ratio=3.0, neg_overlap=0.5, loc_loss_weight=1.0, conf_loss_weight=1.0, match_type='per_prediction', mining_type='max_negative', normalize=True, sample_size=None)

Multi-box loss layer for object detection algorithm of SSD

This layer computes the detection loss for SSD given the location offset predictions, confidence predictions, prior boxes, ground-truth bounding boxes and labels, and the type of hard example mining. The returned loss is a weighted sum of the localization loss (or regression loss) and the confidence loss (or classification loss), obtained by performing the following steps:

  1. Find matched bounding boxes by the bipartite matching algorithm.

1.1 Compute IOU similarity between ground-truth boxes and prior boxes.

1.2 Compute matched bounding boxes by the bipartite matching algorithm.

  2. Compute confidence for mining hard examples.

2.1 Get the target label based on matched indices.

2.2 Compute confidence loss.

  3. Apply hard example mining to get the negative example indices and update the matched indices.

  4. Assign classification and regression targets.

4.1 Encode bboxes according to the prior boxes.

4.2 Assign regression targets.

4.3 Assign classification targets.

  5. Compute the overall objective loss.

5.1 Compute confidence loss.

5.2 Compute localization loss.

5.3 Compute the overall weighted loss.

Parameters:
  • location (Variable) – The location predictions are a 3D Tensor with shape [N, Np, 4], N is the batch size, Np is total number of predictions for each instance. 4 is the number of coordinate values, the layout is [xmin, ymin, xmax, ymax].
  • confidence (Variable) – The confidence predictions are a 3D Tensor with shape [N, Np, C], N and Np are the same as they are in location, C is the class number.
  • gt_box (Variable) – The ground-truth boudding boxes (bboxes) are a 2D LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth bboxes of mini-batch input.
  • gt_label (Variable) – The ground-truth labels are a 2D LoDTensor with shape [Ng, 1].
  • prior_box (Variable) – The prior boxes are a 2D Tensor with shape [Np, 4].
  • prior_box_var (Variable) – The variance of prior boxes are a 2D Tensor with shape [Np, 4].
  • background_label (int) – The index of background label, 0 by default.
  • overlap_threshold (float) – If match_type is ‘per_prediction’, use overlap_threshold to determine the extra matching bboxes when finding matched boxes. 0.5 by default.
  • neg_pos_ratio (float) – The ratio of the negative boxes to the positive boxes, used only when mining_type is ‘max_negative’, 3.0 by default.
  • neg_overlap (float) – The negative overlap upper bound for the unmatched predictions. Use only when mining_type is ‘max_negative’, 0.5 by default.
  • loc_loss_weight (float) – Weight for localization loss, 1.0 by default.
  • conf_loss_weight (float) – Weight for confidence loss, 1.0 by default.
  • match_type (str) – The type of matching method during training, should be ‘bipartite’ or ‘per_prediction’, ‘per_prediction’ by default.
  • mining_type (str) – The hard example mining type, should be ‘hard_example’ or ‘max_negative’; currently only max_negative is supported.
  • normalize (bool) – Whether to normalize the SSD loss by the total number of output locations, True by default.
  • sample_size (int) – The max sample size of negative box, used only when mining_type is ‘hard_example’.
Returns:

The weighted sum of the localization loss and confidence loss, with shape [N * Np, 1], N and Np are the same as they are in location.

Raises:

ValueError – If mining_type is ‘hard_example’; currently only the max_negative mining type is supported.

Examples

>>> pb = fluid.layers.data(
>>>                   name='prior_box',
>>>                   shape=[10, 4],
>>>                   append_batch_size=False,
>>>                   dtype='float32')
>>> pbv = fluid.layers.data(
>>>                   name='prior_box_var',
>>>                   shape=[10, 4],
>>>                   append_batch_size=False,
>>>                   dtype='float32')
>>> loc = fluid.layers.data(name='target_box', shape=[10, 4], dtype='float32')
>>> scores = fluid.layers.data(name='scores', shape=[10, 21], dtype='float32')
>>> gt_box = fluid.layers.data(
>>>         name='gt_box', shape=[4], lod_level=1, dtype='float32')
>>> gt_label = fluid.layers.data(
>>>         name='gt_label', shape=[1], lod_level=1, dtype='float32')
>>> loss = fluid.layers.ssd_loss(loc, scores, gt_box, gt_label, pb, pbv)

detection_map

paddle.fluid.layers.detection_map(detect_res, label, class_num, background_label=0, overlap_threshold=0.3, evaluate_difficult=True, has_state=None, input_states=None, out_states=None, ap_version='integral')

Detection mAP evaluate operator. The general steps are as follows: first, calculate the true positives and false positives according to the input detections and labels; then, calculate the mAP value. Both the ‘11 point’ and ‘integral’ mAP algorithms are supported. For more information, please refer to the following articles: https://sanchom.wordpress.com/tag/average-precision/ and https://arxiv.org/abs/1512.02325

Parameters:
  • detect_res – (LoDTensor) A 2-D LoDTensor with shape [M, 6] represents the detections. Each row has 6 values: [label, confidence, xmin, ymin, xmax, ymax], M is the total number of detect results in this mini-batch. For each instance, the offsets in first dimension are called LoD, the number of offset is N + 1, if LoD[i + 1] - LoD[i] == 0, means there is no detected data
  • label – (LoDTensor) A 2-D LoDTensor represents the labeled ground-truth data. Each row has 6 values: [label, xmin, ymin, xmax, ymax, is_difficult] or 5 values: [label, xmin, ymin, xmax, ymax], where N is the total number of ground-truth data in this mini-batch. For each instance, the offsets in the first dimension are called LoD; the number of offsets is N + 1, and LoD[i + 1] - LoD[i] == 0 means there is no ground-truth data for the i-th instance
  • class_num – (int) The class number
  • background_label – (int, default: 0) The index of background label, the background label will be ignored. If set to -1, then all categories will be considered
  • overlap_threshold – (float) The lower bound jaccard overlap threshold of detection output and ground-truth data
  • evaluate_difficult – (bool, default true) Switch to control whether the difficult data is evaluated
  • has_state – (Tensor<int>) A tensor with shape [1], 0 means ignoring input states, which including PosCount, TruePos, FalsePos
  • input_states – If not None, it contains 3 elements: 1. pos_count (Tensor), a tensor with shape [Ncls, 1] that stores the input positive example count of each class, where Ncls is the count of input classes. This input is used to pass the AccumPosCount generated by the previous mini-batch when the cumulative calculation over multiple mini-batches is carried out. When the input (PosCount) is empty, the cumulative calculation is not carried out, and only the results of the current mini-batch are calculated. 2. true_pos (LoDTensor), a 2-D LoDTensor with shape [Ntp, 2] that stores the input true positive examples of each class. This input is used to pass the AccumTruePos generated by the previous mini-batch when the cumulative calculation over multiple mini-batches is carried out. 3. false_pos (LoDTensor), a 2-D LoDTensor with shape [Nfp, 2] that stores the input false positive examples of each class. This input is used to pass the AccumFalsePos generated by the previous mini-batch when the cumulative calculation over multiple mini-batches is carried out.
  • out_states – If not None, it contains 3 elements: 1. accum_pos_count (Tensor), a tensor with shape [Ncls, 1] that stores the positive example count of each class. It combines the input (PosCount) and the positive example count computed from input (Detection) and input (Label). 2. accum_true_pos (LoDTensor), a LoDTensor with shape [Ntp’, 2] that stores the true positive examples of each class. It combines the input (TruePos) and the true positive examples computed from input (Detection) and input (Label). 3. accum_false_pos (LoDTensor), a LoDTensor with shape [Nfp’, 2] that stores the false positive examples of each class. It combines the input (FalsePos) and the false positive examples computed from input (Detection) and input (Label).
  • ap_version – (string, default ‘integral’) The AP algorithm type, ‘integral’ or ‘11point’
Returns:

(Tensor) A tensor with shape [1], store the mAP evaluate result of the detection

Examples

detect_res = fluid.layers.data(
    name='detect_res',
    shape=[10, 6],
    append_batch_size=False,
    dtype='float32')
label = fluid.layers.data(
    name='label',
    shape=[10, 6],
    append_batch_size=False,
    dtype='float32')

map_out = fluid.layers.detection_map(detect_res, label, 21)

rpn_target_assign

paddle.fluid.layers.rpn_target_assign(bbox_pred, cls_logits, anchor_box, anchor_var, gt_boxes, is_crowd, im_info, rpn_batch_size_per_im=256, rpn_straddle_thresh=0.0, rpn_fg_fraction=0.5, rpn_positive_overlap=0.7, rpn_negative_overlap=0.3, use_random=True)

Target Assign Layer for region proposal network (RPN) in Faster-RCNN detection.

Given the Intersection-over-Union (IoU) overlap between anchors and ground-truth boxes, this layer assigns classification and regression targets to each anchor; these target labels are used to train the RPN. The classification target is a binary class label (of being an object or not). Following the Faster-RCNN paper, positive labels are assigned to two kinds of anchors: (i) the anchor/anchors with the highest IoU overlap with a ground-truth box, or (ii) an anchor that has an IoU overlap higher than rpn_positive_overlap (0.7) with any ground-truth box. Note that a single ground-truth box may assign positive labels to multiple anchors. An anchor is labeled negative when its IoU ratio is lower than rpn_negative_overlap (0.3) for all ground-truth boxes. Anchors that are neither positive nor negative do not contribute to the training objective. The regression targets are the encoded ground-truth boxes associated with the positive anchors.

Parameters:
  • bbox_pred (Variable) – A 3-D Tensor with shape [N, M, 4] represents the predicted locations of M bounding bboxes. N is the batch size, and each bounding box has four coordinate values and the layout is [xmin, ymin, xmax, ymax].
  • cls_logits (Variable) – A 3-D Tensor with shape [N, M, 1] represents the predicted confidence predictions. N is the batch size, 1 is the foreground/background sigmoid output, and M is the number of bounding boxes.
  • anchor_box (Variable) – A 2-D Tensor with shape [M, 4] holds M boxes, each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the anchor box, if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the anchor box.
  • anchor_var (Variable) – A 2-D Tensor with shape [M,4] holds expanded variances of anchors.
  • gt_boxes (Variable) – The ground-truth bounding boxes (bboxes) are a 2D LoDTensor with shape [Ng, 4], Ng is the total number of ground-truth bboxes of mini-batch input.
  • is_crowd (Variable) – A 1-D LoDTensor which indicates whether the ground-truth is crowd.
  • im_info (Variable) – A 2-D LoDTensor with shape [N, 3]. N is the batch size, and the 3 values are the height, width and scale of each image.
  • rpn_batch_size_per_im (int) – Total number of RPN examples per image.
  • rpn_straddle_thresh (float) – Remove RPN anchors that go outside the image by straddle_thresh pixels.
  • rpn_fg_fraction (float) – Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0), 0-th class is background.
  • rpn_positive_overlap (float) – Minimum overlap required between an anchor and ground-truth box for the (anchor, gt box) pair to be a positive example.
  • rpn_negative_overlap (float) – Maximum overlap allowed between an anchor and ground-truth box for the (anchor, gt box) pair to be a negative examples.
Returns:

A tuple (predicted_scores, predicted_location, target_label, target_bbox) is returned. predicted_scores and predicted_location are the predicted results of the RPN, and target_label and target_bbox are the corresponding ground truth. predicted_location is a 2-D Tensor with shape [F, 4], and target_bbox has the same shape as predicted_location, where F is the number of foreground anchors. predicted_scores is a 2-D Tensor with shape [F + B, 1], and target_label has the same shape as predicted_scores, where B is the number of background anchors; F and B depend on the input of this operator.

Return type:

tuple

Examples


bbox_pred = layers.data(name='bbox_pred', shape=[100, 4],
                        append_batch_size=False, dtype='float32')
cls_logits = layers.data(name='cls_logits', shape=[100, 1],
                         append_batch_size=False, dtype='float32')
anchor_box = layers.data(name='anchor_box', shape=[20, 4],
                         append_batch_size=False, dtype='float32')
gt_boxes = layers.data(name='gt_boxes', shape=[10, 4],
                       append_batch_size=False, dtype='float32')
loc_pred, score_pred, loc_target, score_target = \
    fluid.layers.rpn_target_assign(bbox_pred=bbox_pred,
                                   cls_logits=cls_logits,
                                   anchor_box=anchor_box,
                                   gt_boxes=gt_boxes)

anchor_generator

paddle.fluid.layers.anchor_generator(input, anchor_sizes=None, aspect_ratios=None, variance=[0.1, 0.1, 0.2, 0.2], stride=None, offset=0.5, name=None)

Anchor generator operator

Generate anchors for the Faster R-CNN algorithm. Each position of the input produces N anchors, where N = size(anchor_sizes) * size(aspect_ratios). The generated anchors are ordered by looping over aspect_ratios first and then anchor_sizes.

Parameters:
  • input (Variable) – The input feature map, the format is NCHW.
  • anchor_sizes (list|tuple|float) – The anchor sizes of generated anchors, given in absolute pixels, e.g. [64., 128., 256., 512.]. For instance, an anchor size of 64 means the area of this anchor equals 64**2.
  • aspect_ratios (list|tuple|float) – The height / width ratios of generated anchors, e.g. [0.5, 1.0, 2.0].
  • variance (list|tuple) – The variances to be used in box regression deltas. Default:[0.1, 0.1, 0.2, 0.2].
  • stride (list|tuple) – The anchor stride across width and height, e.g. [16.0, 16.0]
  • offset (float) – Prior boxes center offset. Default: 0.5
  • name (str) – Name of the prior box op. Default: None.
Returns:

Anchors (Variable): The output anchors with a layout of [H, W, num_anchors, 4]. H is the height of the input, W is the width of the input, and num_anchors is the box count of each position. Each anchor is in (xmin, ymin, xmax, ymax) format and unnormalized.

Variances (Variable): The expanded variances of anchors with a layout of [H, W, num_anchors, 4]. H is the height of the input, W is the width of the input, and num_anchors is the box count of each position. Each variance is in (xcenter, ycenter, w, h) format.

Return type:

tuple

Examples

anchor, var = fluid.layers.anchor_generator(
    input=conv1,
    anchor_sizes=[64, 128, 256, 512],
    aspect_ratios=[0.5, 1.0, 2.0],
    variance=[0.1, 0.1, 0.2, 0.2],
    stride=[16.0, 16.0],
    offset=0.5)

roi_perspective_transform

paddle.fluid.layers.roi_perspective_transform(input, rois, transformed_height, transformed_width, spatial_scale=1.0)

ROI perspective transform op.

Parameters:
  • input (Variable) – The input of ROIPerspectiveTransformOp. The format of input tensor is NCHW. Where N is batch size, C is the number of input channels, H is the height of the feature, and W is the width of the feature.
  • rois (Variable) – ROIs (Regions of Interest) to be transformed. It should be a 2-D LoDTensor of shape (num_rois, 8). Given as [[x1, y1, x2, y2, x3, y3, x4, y4], ...], (x1, y1) is the top left coordinates, and (x2, y2) is the top right coordinates, and (x3, y3) is the bottom right coordinates, and (x4, y4) is the bottom left coordinates.
  • transformed_height (integer) – The height of transformed output.
  • transformed_width (integer) – The width of the transformed output.
  • spatial_scale (float) – Spatial scale factor to scale ROI coords. Default: 1.0
Returns:

The output of ROIPerspectiveTransformOp, which is a 4-D tensor with shape (num_rois, channels, transformed_h, transformed_w).

Return type:

Variable

Examples

out = fluid.layers.roi_perspective_transform(input, rois, 7, 7, 1.0)

generate_proposal_labels

paddle.fluid.layers.generate_proposal_labels(rpn_rois, gt_classes, is_crowd, gt_boxes, im_info, batch_size_per_im=256, fg_fraction=0.25, fg_thresh=0.25, bg_thresh_hi=0.5, bg_thresh_lo=0.0, bbox_reg_weights=[0.1, 0.1, 0.2, 0.2], class_nums=None, use_random=True)

Generate proposal labels for Faster-RCNN. TODO(buxingyuan): Add Document

generate_proposals

paddle.fluid.layers.generate_proposals(scores, bbox_deltas, im_info, anchors, variances, pre_nms_top_n=6000, post_nms_top_n=1000, nms_thresh=0.5, min_size=0.1, eta=1.0, name=None)

Generate proposals for Faster-RCNN.

This operation proposes RoIs according to each box's probability of being a foreground object; the boxes are computed from the anchors. The bbox_deltas and scores are outputs of the RPN. The final proposals can be used to train the detection network.

For generating proposals, this operation performs following steps:

  1. Transpose and reshape scores and bbox_deltas to shapes (H*W*A, 1) and (H*W*A, 4).
  2. Calculate box locations as proposal candidates.
  3. Clip boxes to the image.
  4. Remove predicted boxes with small area.
  5. Apply NMS to get the final proposals as output.
Parameters:
  • scores (Variable) – A 4-D Tensor with shape [N, A, H, W] represents the probability for each box to be an object. N is the batch size, A is the number of anchors, H and W are the height and width of the feature map.
  • bbox_deltas (Variable) – A 4-D Tensor with shape [N, 4*A, H, W] represents the difference between the predicted box location and the anchor location.
  • im_info (Variable) – A 2-D Tensor with shape [N, 3] represents the origin image information for the N batch. The info contains the height, width and the scale between the origin image size and the size of the feature map.
  • anchors (Variable) – A 4-D Tensor represents the anchors with a layout of [H, W, A, 4]. H and W are the height and width of the feature map, and num_anchors is the box count of each position. Each anchor is in (xmin, ymin, xmax, ymax) format and unnormalized.
  • variances (Variable) – The expanded variances of anchors with a layout of [H, W, num_priors, 4]. Each variance is in (xcenter, ycenter, w, h) format.
  • pre_nms_top_n (float) – Number of total bboxes to be kept per image before NMS. 6000 by default.
  • post_nms_top_n (float) – Number of total bboxes to be kept per image after NMS. 1000 by default.
  • nms_thresh (float) – Threshold in NMS, 0.5 by default.
  • min_size (float) – Remove predicted boxes with either height or width < min_size. 0.1 by default.
  • eta (float) – Applied in adaptive NMS: if the adaptive threshold > 0.5, adaptive_threshold = adaptive_threshold * eta in each iteration.
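
Examples

A minimal usage sketch (assumed, not from the original reference; the tensor names and shapes are illustrative, and the two return values, proposal boxes and their scores, are an assumption about the operator's outputs):

import paddle.fluid as fluid

scores = fluid.layers.data(name='scores', shape=[2, 4, 5, 5],
                           append_batch_size=False, dtype='float32')
bbox_deltas = fluid.layers.data(name='bbox_deltas', shape=[2, 16, 5, 5],
                                append_batch_size=False, dtype='float32')
im_info = fluid.layers.data(name='im_info', shape=[2, 3],
                            append_batch_size=False, dtype='float32')
anchors = fluid.layers.data(name='anchors', shape=[5, 5, 4, 4],
                            append_batch_size=False, dtype='float32')
variances = fluid.layers.data(name='variances', shape=[5, 5, 4, 4],
                              append_batch_size=False, dtype='float32')
# Assumed return values: the proposal RoIs and their objectness scores.
rois, roi_probs = fluid.layers.generate_proposals(
    scores, bbox_deltas, im_info, anchors, variances)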

iou_similarity

paddle.fluid.layers.iou_similarity(x, y, name=None)

IOU Similarity Operator

Computes intersection-over-union (IOU) between two box lists. Box list ‘X’ should be a LoDTensor and ‘Y’ is a common Tensor; boxes in ‘Y’ are shared by all instances of the batched inputs of X. Given two boxes A and B, the IOU is calculated as follows:

$$ IOU(A, B) = \frac{area(A\cap B)}{area(A)+area(B)-area(A\cap B)} $$

Parameters:
  • x (Variable) – (LoDTensor, default LoDTensor<float>) Box list X is a 2-D LoDTensor with shape [N, 4] holds N boxes, each box is represented as [xmin, ymin, xmax, ymax], the shape of X is [N, 4]. [xmin, ymin] is the left top coordinate of the box if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the box. This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities
  • y (Variable) – (Tensor, default Tensor<float>) Box list Y holds M boxes, each box is represented as [xmin, ymin, xmax, ymax]; the shape of Y is [M, 4]. [xmin, ymin] is the left top coordinate of the box if the input is an image feature map, and [xmax, ymax] is the right bottom coordinate of the box
Returns:

(LoDTensor, the lod is same as input X) The output of iou_similarity op, a tensor with shape [N, M] representing pairwise iou scores

Return type:

out(Variable)
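
Examples

A minimal usage sketch, mirroring the bipartite_match example above (the variable names are illustrative):

import paddle.fluid as fluid

x = fluid.layers.data(name='x', shape=[4], dtype='float32')
y = fluid.layers.data(name='y', shape=[4], dtype='float32')
# iou has shape [N, M]: the pairwise IOU between boxes in x and boxes in y.
iou = fluid.layers.iou_similarity(x=x, y=y)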

box_coder

paddle.fluid.layers.box_coder(prior_box, prior_box_var, target_box, code_type='encode_center_size', box_normalized=True, name=None)

Bounding Box Coder.

Encode/Decode the target bounding box with the priorbox information.

The Encoding schema described below:

ox = (tx - px) / pw / pxv

oy = (ty - py) / ph / pyv

ow = log(abs(tw / pw)) / pwv

oh = log(abs(th / ph)) / phv

The Decoding schema described below:

ox = (pw * pxv * tx + px) - tw / 2

oy = (ph * pyv * ty + py) - th / 2

ow = exp(pwv * tw) * pw + tw / 2

oh = exp(phv * th) * ph + th / 2

where tx, ty, tw, th denote the target box’s center coordinates, width and height respectively. Similarly, px, py, pw, ph denote the priorbox’s (anchor) center coordinates, width and height. pxv, pyv, pwv, phv denote the variance of the priorbox and ox, oy, ow, oh denote the encoded/decoded coordinates, width and height.

Parameters:
  • prior_box (Variable) – (Tensor, default Tensor<float>) Box list PriorBox is a 2-D Tensor with shape [M, 4] holds M boxes, each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the anchor box, if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the anchor box
  • prior_box_var (Variable) – (Tensor, default Tensor<float>, optional) PriorBoxVar is a 2-D Tensor with shape [M, 4] holds M group of variance. PriorBoxVar will set all elements to 1 by default
  • target_box (Variable) – (LoDTensor or Tensor) This input can be a 2-D LoDTensor with shape [N, 4] when code_type is ‘encode_center_size’. This input also can be a 3-D Tensor with shape [N, M, 4] when code_type is ‘decode_center_size’. [N, 4], each box is represented as [xmin, ymin, xmax, ymax], [xmin, ymin] is the left top coordinate of the box if the input is image feature map, they are close to the origin of the coordinate system. [xmax, ymax] is the right bottom coordinate of the box. This tensor can contain LoD information to represent a batch of inputs. One instance of this batch can contain different numbers of entities
  • code_type (STRING) – (string, default encode_center_size) the code type used with the target box
  • box_normalized (BOOLEAN) – (bool, default true) whether to treat the priorbox as a normalized box
Returns:

(LoDTensor or Tensor) When code_type is ‘encode_center_size’, the output tensor of box_coder_op with shape [N, M, 4] representing the result of N target boxes encoded with M Prior boxes and variances. When code_type is ‘decode_center_size’, N represents the batch size and M represents the number of decoded boxes

Return type:

output_box(Variable)
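
Examples

A minimal encoding sketch (assumed, not from the original reference; the tensor names and shapes are illustrative):

import paddle.fluid as fluid

prior_box = fluid.layers.data(name='prior_box', shape=[10, 4],
                              append_batch_size=False, dtype='float32')
prior_box_var = fluid.layers.data(name='prior_box_var', shape=[10, 4],
                                  append_batch_size=False, dtype='float32')
target_box = fluid.layers.data(name='target_box', shape=[4],
                               dtype='float32', lod_level=1)
# Encode the N target boxes against the M prior boxes and their variances.
encoded_box = fluid.layers.box_coder(prior_box=prior_box,
                                     prior_box_var=prior_box_var,
                                     target_box=target_box,
                                     code_type='encode_center_size')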

polygon_box_transform

paddle.fluid.layers.polygon_box_transform(input, name=None)

PolygonBoxTransform Operator.

PolygonBoxTransform Operator is used to transform the coordinate shift to the real coordinate.

The input is the final geometry output in detection network. We use 2*n numbers to denote the coordinate shift from n corner vertices of the polygon_box to the pixel location. As each distance offset contains two numbers (xi, yi), the geometry output contains 2*n channels.

Parameters:
  • input (Variable) – The input with shape [batch_size, geometry_channels, height, width]
Returns:

The output with the same shape as the input

Return type:

output (Variable)
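
Examples

A minimal usage sketch (assumed, not from the original reference; the tensor name and shape are illustrative):

import paddle.fluid as fluid

# The input has 2*n geometry channels; here n = 4 corner vertices, so 8 channels.
geo = fluid.layers.data(name='geometry', shape=[8, 32, 32], dtype='float32')
real_coords = fluid.layers.polygon_box_transform(input=geo)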

metric_op

accuracy

paddle.fluid.layers.accuracy(input, label, k=1, correct=None, total=None)

Accuracy layer. Refer to https://en.wikipedia.org/wiki/Precision_and_recall

This function computes the accuracy using the input and label. If the correct label occurs in the top k predictions, then correct is incremented by one. Note: the dtype of the accuracy is determined by the input; the input and label dtypes can be different.

Parameters:
  • input (Variable) – The input of the accuracy layer, which is the prediction of the network. Carrying LoD information is supported.
  • label (Variable) – The label of dataset.
  • k (int) – The top k predictions for each class will be checked.
  • correct (Variable) – The correct predictions count.
  • total (Variable) – The total entries count.
Returns:

The correct rate.

Return type:

Variable

Examples

data = fluid.layers.data(name="data", shape=[-1, 32, 32], dtype="float32")
label = fluid.layers.data(name="data", shape=[-1,1], dtype="int32")
predict = fluid.layers.fc(input=data, size=10)
acc = fluid.layers.accuracy(input=predict, label=label, k=5)

auc

paddle.fluid.layers.auc(input, label, curve='ROC', num_thresholds=4095, topk=1, slide_steps=1)

Area Under the Curve (AUC) Layer

This implementation computes the AUC according to forward output and label. It is used very widely in binary classification evaluation.

Note: If the input label contains values other than 0 and 1, it will be cast to bool.

There are two types of possible curves:

  1. ROC: Receiver operating characteristic;
  2. PR: Precision Recall
Parameters:
  • input (Variable) – A floating-point 2D Variable, values are in the range [0, 1]. Each row is sorted in descending order. This input should be the output of topk. Typically, this Variable indicates the probability of each label.
  • label (Variable) – A 2D int Variable indicating the label of the training data. The height is batch size and width is always 1.
  • curve (str) – Curve type, can be ‘ROC’ or ‘PR’. Default ‘ROC’.
  • num_thresholds (int) – The number of thresholds to use when discretizing the ROC curve. Default 4095.
  • topk (int) – Only the top k prediction outputs will be used for the AUC.
  • slide_steps – When computing batch AUC, previous steps can be used in addition to the current step. slide_steps=1 means only the current step is used, slide_steps=3 means the current step and the previous two steps are used, and slide_steps=0 means all steps are used.
Returns:

A scalar representing the current AUC.

Return type:

Variable

Examples

# network is a binary classification model and label is the ground truth
prediction = network(image, is_infer=True)
auc_out = fluid.layers.auc(input=prediction, label=label)