A simple implementation of a feed-forward neural network.
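A minimal construction sketch. The module path `pypr.ann.ann` is taken from the base-class reference later on this page; the constructor arguments (a list of layer sizes) are an assumption, so check the class signature before relying on it.

```python
from pypr.ann.ann import ANN   # module path taken from the "Bases:" reference below

# Assumption: the constructor takes a list of layer sizes
# (here 2 inputs, 3 hidden units, 1 output).
nn = ANN([2, 3, 1])
```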
Methods
Returns the flat weight index corresponding to a layer and weight index. This is not very fast, so do not use it for anything but small examples or testing.
Parameters:
    flatWno : int
Returns:
    pos : tuple
        This is not very fast, so do not use it for anything but small examples or testing.
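A hypothetical round trip between the two index conventions described above. It assumes `find_flat_weight_no` takes a layer index and a weight index within that layer, and that `find_weight` maps the flat index back to a position tuple; the argument order and the constructor are assumptions, not documented behaviour.

```python
from pypr.ann.ann import ANN

nn = ANN([2, 3, 1])                  # assumed constructor: list of layer sizes

# Assumed argument order: (layer index, weight index within that layer).
flat_no = nn.find_flat_weight_no(0, 0)
pos = nn.find_weight(flat_no)        # expected: the corresponding position tuple
print(flat_no, pos)
```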
Returns the output from the output layer.
Parameters:
    inputs : NxD np array
Returns:
    result : np array
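A short usage sketch for the forward pass. The NxD input convention comes from the parameter table above; the constructor arguments are assumed.

```python
import numpy as np
from pypr.ann.ann import ANN

nn = ANN([2, 3, 1])                       # assumed constructor: list of layer sizes
X = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [1.0, 1.0]])                # NxD input array (N=4 samples, D=2 features)
Y = nn.forward(X)                         # output of the final layer, one row per sample
print(Y.shape)
```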
Returns the outputs from all layers, from input to output.
Parameters:
    inputs : NxD np array
Returns:
    result : list
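A sketch for inspecting the per-layer outputs. It assumes the returned list is ordered from input to output, as stated above, and that each entry is an np array; the constructor is assumed.

```python
import numpy as np
from pypr.ann.ann import ANN

nn = ANN([2, 3, 1])                        # assumed constructor: list of layer sizes
X = np.array([[0.0, 1.0], [1.0, 0.0]])     # NxD input array
layer_outputs = nn.forward_get_all(X)      # list of outputs, ordered input to output
for i, out in enumerate(layer_outputs):
    print("layer", i, "output shape:", out.shape)
```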
Returns:
    sum : scalar
Returns:
    error_func : function
Returns:
    error_func : function
Returns the value of the weight corresponding to the flat weight index.
Returns:
    W : 1d np array
Returns:
    Nw : int
Returns:
    weights : list
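A sketch for inspecting the weight containers returned by the getters above. The return conventions (a list of per-layer weight matrices and a flat 1d array) come from the tables; the constructor arguments are assumed.

```python
from pypr.ann.ann import ANN

nn = ANN([2, 3, 1])                 # assumed constructor: list of layer sizes

weights = nn.get_weight_copy()      # list, assumed to hold one weight matrix per layer
for i, W in enumerate(weights):
    print("layer", i, "weight matrix shape:", W.shape)

W_flat = nn.get_flat_weights()      # all weights collected in a 1d np array
print("total number of weights:", W_flat.shape[0])
```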
Calculates the derivatives of the error function and returns a list with one gradient matrix for each weight matrix.
Parameters:
    inputs : NxD np array
    targets : NxT np array
Returns:
    gradient : list
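A sketch of computing the gradient list. The (inputs, targets) arguments follow the parameter table above; the constructor is assumed, and each gradient entry is expected to match the shape of the corresponding weight matrix.

```python
import numpy as np
from pypr.ann.ann import ANN

nn = ANN([2, 3, 1])                               # assumed constructor: list of layer sizes
X = np.array([[0.0, 0.0], [0.0, 1.0],
              [1.0, 0.0], [1.0, 1.0]])            # NxD inputs
T = np.array([[0.0], [1.0], [1.0], [0.0]])        # NxT targets
grads = nn.gradient(X, T)                         # one gradient per weight matrix
for g, W in zip(grads, nn.get_weight_copy()):
    print(g.shape, W.shape)                       # shapes are expected to match pairwise
```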
Trains the network using the gradient descent method. This method shouldn't really be used; prefer the more advanced training methods.
Parameters:
    inputs : NxD np array
    targets : NxT np array

For example, one could train the network on the XOR function, as in the sketch below.
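A minimal XOR training sketch. Only the (inputs, targets) parameters come from the table above; the constructor arguments are assumed, and any additional training options (step size, number of iterations) are left at their defaults.

```python
import numpy as np
from pypr.ann.ann import ANN

X = np.array([[0.0, 0.0], [0.0, 1.0],
              [1.0, 0.0], [1.0, 1.0]])        # NxD inputs: the four XOR patterns
T = np.array([[0.0], [1.0], [1.0], [0.0]])    # NxT targets: XOR of each input pair

nn = ANN([2, 2, 1])                # assumed constructor: list of layer sizes
nn.gradient_descent_train(X, T)    # extra options, if any, left at their defaults
print(nn.forward(X))               # should move towards [0, 1, 1, 0] after training
```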
Sets the value of the weight at the given flat weight index.
Set the weights of the network according to a vector of weights.
Parameters:
    W : 1d np array
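A round-trip sketch with the flat weight vector: read it with get_flat_weights, perturb it, and write it back with set_flat_weights. Only the 1d-array convention comes from the tables above; the constructor and the perturbation are illustrative.

```python
import numpy as np
from pypr.ann.ann import ANN

nn = ANN([2, 3, 1])                # assumed constructor: list of layer sizes
W = nn.get_flat_weights()          # 1d np array with all network weights
nn.set_flat_weights(W + 0.01 * np.random.randn(W.shape[0]))  # write a perturbed copy back
```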
Bases: pypr.ann.ann.ANN
This modifies the error function and the gradient to accommodate weight decay; otherwise it works just like an ANN.
Methods
error_with_weight_penalty
find_flat_weight_no
find_weight
forward
forward_get_all
get_af_summed_weighs
get_base_error_func
get_error_func
get_flat_weight
get_flat_weights
get_num_layers
get_weight_copy
gradient
gradient_descent_train
set_flat_weight
set_flat_weights
Returns a modified error function with a weight decay penalty.
An error function is represented as a tuple with a function for evaluating the network's error and a function for its derivatives.
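An illustrative definition in the tuple format described above: a function evaluating the error paired with a function evaluating its derivatives. The names and signatures here are assumptions for illustration, not the library's own definitions.

```python
import numpy as np

def sse(y, t):
    """Sum-of-squares error between network outputs y and targets t (illustrative)."""
    return 0.5 * np.sum((y - t) ** 2)

def sse_deriv(y, t):
    """Derivative of the sum-of-squares error with respect to y (illustrative)."""
    return y - t

my_error_func = (sse, sse_deriv)   # (error evaluation, derivatives), per the description above
```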