fluid.executor

Executor

class paddle.fluid.executor.Executor(place)

An Executor in Python, supporting single-GPU execution only. For multi-card execution, please refer to ParallelExecutor. The Python executor takes a program and adds feed operators and fetch operators to it according to the feed map and fetch_list. The feed map provides input data for the program; fetch_list provides the variables (or their names) that the user wants to retrieve after the program runs. Note: the executor runs all operators in the program, not only the operators that the fetch_list depends on. It stores global variables in the global scope and creates a local scope for temporary variables. The local scope contents are discarded after each minibatch forward/backward pass finishes, but the global scope variables persist across different runs. All operators in the program run in sequence.

Parameters:place (core.CPUPlace|core.CUDAPlace(n)) – the device on which the executor runs

Note: For debugging a complicated network on parallel GPUs, you can first test it on this single-device executor. Executor and ParallelExecutor take exactly the same arguments and are expected to produce the same results.

close()

Close this executor.

You can no longer use this executor after calling this method. For distributed training, this method frees the resources on the PServers related to the current Trainer.

Example

>>> cpu = core.CPUPlace()
>>> exe = Executor(cpu)
>>> ...
>>> exe.close()
run(program=None, feed=None, fetch_list=None, feed_var_name='feed', fetch_var_name='fetch', scope=None, return_numpy=True, use_program_cache=False)

Run a program with this Executor. Feed data via the feed map and fetch results via fetch_list. The Python executor takes a program and adds feed operators and fetch operators to it according to the feed map and fetch_list. The feed map provides input data for the program; fetch_list provides the variables (or their names) that the user wants to retrieve after the program runs.

Note: the executor runs all operators in the program, not only the operators that the fetch_list depends on.

Parameters:
  • program (Program) – the program to run; if not provided, default_main_program is used.
  • feed (dict) – feed variable map, e.g. {"image": ImageData, "label": LabelData}
  • fetch_list (list) – a list of variables or variable names that the user wants to retrieve; run returns them in the order given by this list.
  • feed_var_name (str) – the name of the input variable of the feed operator.
  • fetch_var_name (str) – the name of the output variable of the fetch operator.
  • scope (Scope) – the scope used to run this program; you can switch it to a different scope. Defaults to global_scope.
  • return_numpy (bool) – whether to convert the fetched tensors to numpy arrays.
  • use_program_cache (bool) – set to True if the program has not changed since the last step.
Returns:the fetched results, in the order given by fetch_list.
Return type:list(numpy.array)

Examples

>>> data = layers.data(name='X', shape=[1], dtype='float32')
>>> hidden = layers.fc(input=data, size=10)
>>> out = layers.create_tensor(dtype='float32')
>>> layers.assign(hidden, out)
>>> loss = layers.mean(out)
>>> adam = fluid.optimizer.Adam()
>>> adam.minimize(loss)
>>> cpu = core.CPUPlace()
>>> exe = Executor(cpu)
>>> exe.run(default_startup_program())
>>> x = numpy.random.random(size=(10, 1)).astype('float32')
>>> outs = exe.run(
>>>     feed={'X': x},
>>>     fetch_list=[loss.name])
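The feed/fetch mechanics described above can be modeled in plain Python. The following is a simplified toy sketch, not PaddlePaddle's implementation: ops run in order, the feed map seeds the scope, and fetch_list selects which variables come back (all names below are illustrative).

```python
import numpy as np

def run_program(ops, feed, fetch_list):
    # Toy model of Executor.run: 'ops' is an ordered list of
    # (output_name, function, input_names) tuples. Every op runs in
    # sequence, even ops that fetch_list does not depend on.
    scope = dict(feed)  # local scope seeded from the feed map
    for out_name, fn, in_names in ops:
        scope[out_name] = fn(*[scope[name] for name in in_names])
    # fetch_list selects which variables are returned after the run
    return [scope[name] for name in fetch_list]

ops = [
    ("hidden", lambda x: x * 2.0, ["X"]),
    ("loss", lambda h: h.mean(), ["hidden"]),
]
outs = run_program(
    ops,
    feed={"X": np.ones((10, 1), dtype="float32")},
    fetch_list=["loss"],
)
```

Note that the "hidden" op would run even if fetch_list only named a variable that does not depend on it, mirroring the behavior noted above.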

global_scope

paddle.fluid.executor.global_scope()

Get the global/default scope instance. Many APIs use global_scope as their default scope value, e.g., Executor.run.

Returns:The global/default scope instance.
Return type:Scope
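The defaulting pattern described above can be illustrated with a small plain-Python sketch (hypothetical names, not fluid's actual code): an API accepts scope=None and falls back to a module-level global scope, so variables written there persist across calls.

```python
# Hypothetical stand-in for fluid's global/default scope.
_global_scope = {}

def global_scope():
    return _global_scope

def run(feed, scope=None):
    # When the caller passes no scope, fall back to the global one,
    # so variables written here persist across subsequent runs.
    scope = scope if scope is not None else global_scope()
    scope.update(feed)
    return scope

run({"w": 1.0})                    # writes into the global scope
local = run({"w": 2.0}, scope={})  # writes into a private scope instead
```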

scope_guard

paddle.fluid.executor.scope_guard(*args, **kwds)

Change the global/default scope instance via a Python with statement. All variables created at runtime will be assigned to the new scope.

Examples

>>> import paddle.fluid as fluid
>>> new_scope = fluid.Scope()
>>> with fluid.scope_guard(new_scope):
>>>     ...
Parameters:scope – The new global/default scope.
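The mechanism behind scope_guard can be sketched with contextlib. This is a simplified, hypothetical model of the behavior (swap in the new default scope, restore the old one on exit), not fluid's source:

```python
from contextlib import contextmanager

# Hypothetical stand-in for the module-level default scope.
_default_scope = {"name": "global"}

def current_scope():
    return _default_scope

@contextmanager
def scope_guard(scope):
    # Swap in the new scope for the duration of the with block,
    # then restore the previous default on exit, even on error.
    global _default_scope
    previous, _default_scope = _default_scope, scope
    try:
        yield
    finally:
        _default_scope = previous

with scope_guard({"name": "new"}):
    inside = current_scope()["name"]
outside = current_scope()["name"]
```

The try/finally guarantees the previous scope is restored even if the body of the with block raises, which is why a context manager is a good fit for this kind of temporary global switch.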

_switch_scope

paddle.fluid.executor._switch_scope(scope)