fluid.param_attr

ParamAttr

class paddle.fluid.param_attr.ParamAttr(name=None, initializer=None, learning_rate=1.0, regularizer=None, trainable=True, gradient_clip=None, do_model_average=False)

Parameter attributes object. To fine-tune the network training process, users can set a parameter's attributes to control training details, such as the learning rate, regularization, trainability, model averaging, and the initialization method.

Parameters:
  • name (str) – The parameter’s name. Default None.
  • initializer (Initializer) – The method used to initialize this parameter. Default None.
  • learning_rate (float) – The parameter’s learning rate. The effective learning rate during optimization is \(global\_lr * parameter\_lr * scheduler\_factor\). Default 1.0.
  • regularizer (WeightDecayRegularizer) – Regularization factor. Default None.
  • trainable (bool) – Whether this parameter is trainable. Default True.
  • gradient_clip (BaseGradientClipAttr) – The method to clip this parameter’s gradient. Default None.
  • do_model_average (bool) – Whether this parameter should do model average. Default False.
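The effective learning rate formula above can be checked with plain arithmetic; all values below are hypothetical, for illustration only:

```python
# Effective learning rate for a parameter, per the formula above:
#   effective_lr = global_lr * parameter_lr * scheduler_factor
# Hypothetical values, not defaults from fluid.
global_lr = 0.01        # optimizer-wide learning rate
parameter_lr = 0.5      # from ParamAttr(learning_rate=0.5)
scheduler_factor = 0.1  # e.g. after a decay step

effective_lr = global_lr * parameter_lr * scheduler_factor
print(effective_lr)  # 0.0005
```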

Examples

import paddle.fluid as fluid

x = fluid.layers.data(name="x", shape=[1], dtype="float32")
w_param_attrs = fluid.ParamAttr(name="fc_weight",
                                learning_rate=0.5,
                                regularizer=fluid.regularizer.L2Decay(1.0),
                                trainable=True)
y_predict = fluid.layers.fc(input=x, size=10, param_attr=w_param_attrs)
set_default_initializer(initializer)

Set the default initializer; the initializer should be one of Constant, Uniform, Normal, Xavier, or MSRA.

Parameters:initializer (Initializer) – the initializer to set.
Returns:None
set_default_param_initializer()

Set the default initializer for the parameter with Xavier.

Parameters:None.
Returns:None.
set_default_bias_initializer()

Set the default initializer for the bias with Constant(0.0).

Parameters:None.
Returns:None.
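The default-initializer behavior of the three methods above can be sketched in plain Python. The class and the string values below are illustrative stand-ins, not the real fluid implementation:

```python
# Illustrative sketch of the default-initializer rules described above.
# The strings stand in for fluid's Initializer classes; this is not the
# real ParamAttr source.
class ParamAttrSketch:
    def __init__(self, initializer=None):
        self.initializer = initializer

    def set_default_initializer(self, initializer):
        # Only apply a default when no initializer was given explicitly.
        if self.initializer is None:
            self.initializer = initializer

    def set_default_param_initializer(self):
        # Weights default to Xavier.
        self.set_default_initializer("Xavier()")

    def set_default_bias_initializer(self):
        # Biases default to Constant(0.0).
        self.set_default_initializer("Constant(0.0)")

w = ParamAttrSketch()
w.set_default_param_initializer()
print(w.initializer)  # Xavier()

b = ParamAttrSketch(initializer="Uniform()")
b.set_default_bias_initializer()
print(b.initializer)  # Uniform() -- an explicit initializer wins
```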
static to_attr(arg)

Create ParamAttr[s].

Parameters:arg – Arguments to initialize ParamAttr[s]. arg’s type can be str, Initializer, float, WeightDecayRegularizer, BaseGradientClipAttr, bool, ParamAttr, or a list of the above types.
Returns:ParamAttr[s] initialized with arg.
Return type:ParamAttr[s]
Raises:An exception if arg cannot be used to initialize a ParamAttr.
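The type-based dispatch that to_attr performs can be sketched as follows. This is a simplified stand-in using a stub class, not the fluid source; the real method also accepts Initializer, WeightDecayRegularizer, and BaseGradientClipAttr instances:

```python
# Simplified sketch of ParamAttr.to_attr's dispatch, using a stub class.
class ParamAttrStub:
    def __init__(self, name=None, learning_rate=1.0, trainable=True):
        self.name = name
        self.learning_rate = learning_rate
        self.trainable = trainable

def to_attr(arg):
    if arg is None:
        return ParamAttrStub()                 # all defaults
    if isinstance(arg, (list, tuple)):
        return [to_attr(a) for a in arg]       # one ParamAttr per element
    if isinstance(arg, ParamAttrStub):
        return arg                             # already a ParamAttr
    if isinstance(arg, str):
        return ParamAttrStub(name=arg)         # str names the parameter
    if isinstance(arg, bool):
        return ParamAttrStub(trainable=arg)    # bool sets trainability
    if isinstance(arg, float):
        return ParamAttrStub(learning_rate=arg)  # float sets the lr
    raise TypeError("%s cannot initialize a ParamAttr" % type(arg))

print(to_attr("fc_weight").name)   # fc_weight
print(to_attr(0.5).learning_rate)  # 0.5
```

Note the bool check precedes the float check; in Python, True and False would otherwise slip through an int/float branch unintentionally.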
to_kwargs(with_initializer=False)

Returns the attributes of this parameter.

Parameters:with_initializer (bool) – Whether to add initializer attr.
Returns:The attributes of this parameter.
Return type:Parameter attributes (dict)
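A sketch of the shape of what to_kwargs returns. The key names below are illustrative; the real dictionary keys follow fluid's internals:

```python
# Hypothetical sketch of to_kwargs: collect the parameter's attributes
# into a dict, optionally including the initializer. Not the fluid source.
def to_kwargs_sketch(attr, with_initializer=False):
    kwargs = {
        "name": attr["name"],
        "learning_rate": attr["learning_rate"],
        "regularizer": attr["regularizer"],
        "trainable": attr["trainable"],
        "do_model_average": attr["do_model_average"],
    }
    if with_initializer:
        kwargs["initializer"] = attr["initializer"]
    return kwargs

attr = {"name": "fc_weight", "learning_rate": 0.5, "regularizer": None,
        "trainable": True, "do_model_average": False, "initializer": None}
print(sorted(to_kwargs_sketch(attr)))  # initializer omitted by default
```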

WeightNormParamAttr

class paddle.fluid.param_attr.WeightNormParamAttr(dim=None, **kwargs)

Used for weight normalization. Weight Norm is a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. Weight Norm is implemented as discussed in the paper: Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.

Parameters:
  • dim (int) – The dimension over which to compute the norm. Default None.
  • kwargs – Any field in ParamAttr. Default None.

Examples

import paddle.fluid as fluid

data = fluid.layers.data(name="data", shape=[3, 32, 32], dtype="float32")
fc = fluid.layers.fc(input=data,
                     size=1000,
                     param_attr=fluid.WeightNormParamAttr(
                          dim=None,
                          name='weight_norm_param'))
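The reparameterization behind Weight Norm is w = g · v / ‖v‖, where the scalar g carries the length of w and the vector v only contributes its direction. A minimal numeric sketch in pure Python, independent of fluid:

```python
import math

# Weight Norm reparameterization: w = g * v / ||v||.
# g controls the length of w; v only contributes direction.
def weight_norm(g, v):
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

v = [3.0, 4.0]          # direction vector, ||v|| = 5
w = weight_norm(2.0, v)
print(w)                # [1.2, 1.6]; note ||w|| == g == 2.0
```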