ann Package

activation_functions Module

ann Module

class pypr.ann.ann.ANN(nodes, afunc=[], errorfunc=None)

A simple implementation of a feed forward neural network.

Methods

find_flat_weight_no
find_weight
forward
forward_get_all
get_af_summed_weighs
get_base_error_func
get_error_func
get_flat_weight
get_flat_weights
get_num_layers
get_weight_copy
gradient
gradient_descent_train
set_flat_weight
set_flat_weights
find_flat_weight_no(layerNo, r, c)

Returns the flat weight index corresponding to a given layer number and (row, column) weight index. This is not very fast, so do not use it for anything but small examples or testing.

find_weight(flatWno)
Parameters :

flatWno : int

The number of the weight in a flattened vector

Returns :

pos : tuple

The layer number, row, and column where the weight is found: (layer, r, c).

This is not very fast, so do not use it for anything but small examples or testing.
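The mapping between a flat index and a (layer, row, column) position can be sketched as follows. This is a standalone illustration of the idea, not pypr's actual implementation; the weight shapes and the row-major, layer-0-first ordering are assumptions:

```python
# Hypothetical weight shapes for a 2-3-1 network with bias rows:
# layer 0: (3, 3) -- (2 inputs + bias) x 3 hidden nodes
# layer 1: (4, 1) -- (3 hidden + bias) x 1 output node
shapes = [(3, 3), (4, 1)]

def find_weight(flat_no, shapes):
    """Map a flat weight index to (layer, row, column)."""
    for layer, (rows, cols) in enumerate(shapes):
        n = rows * cols
        if flat_no < n:
            return (layer, flat_no // cols, flat_no % cols)
        flat_no -= n
    raise IndexError("flat weight index out of range")

def find_flat_weight_no(layer, r, c, shapes):
    """Inverse mapping: (layer, row, column) to flat index."""
    offset = sum(rows * cols for rows, cols in shapes[:layer])
    return offset + r * shapes[layer][1] + c
```

Both directions scan the layer shapes linearly, which is why the docstring warns against using them outside small examples.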

forward(inputs)

Returns the output from the output layer.

Parameters :

inputs : NxD np array

An array with N samples with D dimensions (features). For example an input with 2 samples and 3 dimensions:

inputs = array([[1, 2, 3],
                [3, 4, 5]])

Returns :

result : np array

A NxD’ array, where D’ is the number of outputs of the network.
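The shape contract above can be illustrated with a minimal, self-contained feed-forward sketch in plain NumPy. The tanh activation and the appended bias column are assumptions for the illustration, not necessarily what pypr does internally:

```python
import numpy as np

def forward(inputs, weights):
    """Propagate N x D inputs through a list of weight matrices.

    Each weight matrix has shape (D_in + 1, D_out); a bias column
    of ones is appended to the layer input before the product.
    """
    a = inputs
    for W in weights:
        a_bias = np.hstack([a, np.ones((a.shape[0], 1))])  # append bias
        a = np.tanh(a_bias @ W)
    return a

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 5)), rng.standard_normal((6, 2))]
inputs = np.array([[1, 2, 3], [3, 4, 5]])   # N=2 samples, D=3
out = forward(inputs, weights)               # shape (2, 2): N x D'
```

Whatever the layer sizes, the output always keeps the N rows of the input; only the column count changes to D', the number of output nodes.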

forward_get_all(inputs)

Returns the outputs from all layers, from input to output.

Parameters :

inputs : NxD np array

An array with N samples with D dimensions (features). For example an input with 2 samples and 3 dimensions:

inputs = array([[1, 2, 3],
                [3, 4, 5]])

Returns :

result : list

A list of 2d np arrays. All the arrays have N rows; the number of columns in each array depends on the number of nodes in that particular layer.

get_af_summed_weighs()
Returns :

sum : scalar

Returns the sum of all the weights leading into the nodes' activation functions.

get_base_error_func()
Returns :

error_func : function

This is the error function which back-propagation uses. It should normally not be overridden. This error function does not include the weight decay.

get_error_func()
Returns :

error_func : function

The error function. This function can be overridden, so for example a network with weight decay will return the error function including the weight decay penalty.

get_flat_weight(flatWno)

Returns the value of the weight corresponding to flat weight index.

get_flat_weights()
Returns :

W : 1d np array

Returns all the weights of the network in vector form, from input to output layer. This is useful for the optimization methods, which normally only operate on a 1d array.
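The round-trip between a weight list and its flat vector form can be sketched like this. This is a standalone illustration of the idea; pypr's exact ordering within each matrix is an implementation detail:

```python
import numpy as np

def get_flat_weights(weights):
    """Concatenate all weight matrices, input to output layer, into a 1d vector."""
    return np.concatenate([W.ravel() for W in weights])

def set_flat_weights(flat, shapes):
    """Rebuild the list of weight matrices from a flat vector."""
    out, pos = [], 0
    for rows, cols in shapes:
        n = rows * cols
        out.append(flat[pos:pos + n].reshape(rows, cols))
        pos += n
    if pos != flat.size:
        raise ValueError("W must have the correct length")
    return out
```

An optimizer can then treat the whole network as a single 1d parameter vector and hand it back with set_flat_weights when done.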

get_num_layers()
Returns :

Nw : int

Number of weight layers in the network.

get_weight_copy()
Returns :

weights : list

Copy of the current weights. The length of the list is equal to the number of weight layers.

gradient(inputs, targets, errorfunc=None)

Calculates the derivatives of the error function. Returns one matrix corresponding to each weight matrix.

Parameters :

inputs : NxD np array

The inputs are given as N rows, each with D features/dimensions.

targets : NxT np array

The N targets corresponding to the inputs. The number of outputs is given by T.

Returns :

gradient : list

List of gradient matrices.

gradient_descent_train(inputs, targets, eta=0.001, maxitr=20000)

Train the network using the gradient descent method. This method shouldn't really be used; use one of the more advanced training methods instead.

Parameters :

inputs : NxD np array

The inputs are given as N rows, each with D features/dimensions.

targets : NxT np array

The N targets corresponding to the inputs. The number of outputs is given by T.

For example, one could use the XOR function:

inputs = array([[0, 0],
                [0, 1],
                [1, 0],
                [1, 1]])
targets = array([[0], [1], [1], [0]])
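The update rule behind this method is simply w ← w − η ∇E(w), repeated maxitr times. A minimal sketch of that loop on a flat weight vector, using a toy quadratic error in place of the network's back-propagated gradient (the function names here are illustrative, not pypr's):

```python
import numpy as np

def gradient_descent(w, grad, eta=0.001, maxitr=20000):
    """Repeatedly step the flat weight vector against the gradient."""
    for _ in range(maxitr):
        w = w - eta * grad(w)
    return w

# Toy stand-in for the network gradient: E(w) = 0.5 * ||w||^2, so grad(w) = w.
w0 = np.array([1.0, -2.0, 3.0])
w_final = gradient_descent(w0, lambda w: w, eta=0.01, maxitr=1000)
```

With a fixed step size eta, convergence on a real network error surface is slow and fragile, which is why the more advanced optimizers are preferred.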

set_flat_weight(flatWno, value)

Sets the value of the weight with flat weight index.

set_flat_weights(W)

Set the weights of the network according to a vector of weights.

Parameters :

W : 1d np array

W must have the correct length (the total number of weights in the network), otherwise it will not work.

class pypr.ann.ann.WeightDecayANN(*args, **kwargs)

Bases: pypr.ann.ann.ANN

This modifies the error function and the gradient to accommodate weight decay, otherwise it works just like an ANN.
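The modification can be sketched as adding a quadratic penalty (α/2) Σ w² to the base error, which contributes an extra α·w term to each weight's gradient. A standalone illustration, assuming a sum-of-squares base error and a hypothetical decay coefficient alpha:

```python
import numpy as np

alpha = 0.1  # weight decay coefficient (hypothetical value)

def base_error(y, t):
    """Plain sum-of-squares error, as used by back-propagation."""
    return 0.5 * np.sum((y - t) ** 2)

def error_with_weight_penalty(y, t, flat_weights):
    """Base error plus the weight decay penalty (alpha/2) * sum(w^2)."""
    return base_error(y, t) + 0.5 * alpha * np.sum(flat_weights ** 2)

# The corresponding gradient picks up an extra alpha * w term per weight.
def penalty_gradient(flat_weights):
    return alpha * flat_weights
```

Everything else about training stays the same, which is why the class can simply inherit from ANN and override get_error_func and gradient.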

Methods

error_with_weight_penalty
find_flat_weight_no
find_weight
forward
forward_get_all
get_af_summed_weighs
get_base_error_func
get_error_func
get_flat_weight
get_flat_weights
get_num_layers
get_weight_copy
gradient
gradient_descent_train
set_flat_weight
set_flat_weights
error_with_weight_penalty(y, t)
get_error_func()

Returns a modified error function with a weight decay penalty.

gradient(*args)

error_functions Module

An error function is represented as a tuple with a function for evaluating networks error and its derivatives.
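Following the tuple convention described above, a sum-of-squares error could be represented like this. This is a sketch of the convention; pypr's actual error functions may differ in naming and normalization:

```python
import numpy as np

def sum_of_squares(y, t):
    """Evaluate the network error for outputs y and targets t."""
    return 0.5 * np.sum((y - t) ** 2)

def sum_of_squares_deriv(y, t):
    """Derivative of the error with respect to the outputs y."""
    return y - t

# The tuple pairs the error evaluation with its derivative.
sum_of_squares_error = (sum_of_squares, sum_of_squares_deriv)
```

Back-propagation only needs these two pieces: the scalar error for monitoring, and the derivative with respect to the outputs to seed the backward pass.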
