This is a toolbox for artificial neural networks, based on the "Matrix ANN" book.
- Only layered feedforward networks are supported *directly* at the moment
  (for other architectures, use the "hooks" provided)
- Unlimited number of layers
- Unlimited number of neurons per layer, defined separately for each layer
- User defined activation function (defaults to logistic)
- User defined error function (defaults to SSE)
- Algorithms implemented so far:
  * standard (vanilla) backpropagation, with or without bias, online or batch
  * momentum, with or without bias, online or batch
  * SuperSAB, with or without bias, online or batch
  * conjugate gradients
  * Jacobian computation
  * computation of the result of multiplying a "vector" by the Hessian
- Some helper functions are provided
For full descriptions, start with the top-level "ANN" man page.
ann_FF — Algorithms for feedforward nets.
ann_FF_ConjugGrad — Conjugate Gradient algorithm.
ann_FF_Hess — computes the Hessian by finite differences.
ann_FF_INT — internal implementation of feedforward nets.
ann_FF_Jacobian — computes the Jacobian by finite differences.
ann_FF_Jacobian_BP — computes the Jacobian through backpropagation.
ann_FF_Mom_batch — batch backpropagation with momentum.
ann_FF_Mom_batch_nb — batch backpropagation with momentum (without bias).
ann_FF_Mom_online — online backpropagation with momentum.
ann_FF_Mom_online_nb — online backpropagation with momentum (without bias).
ann_FF_SSAB_batch — batch SuperSAB algorithm.
ann_FF_SSAB_batch_nb — batch SuperSAB algorithm (without bias).
ann_FF_SSAB_online — online SuperSAB training algorithm.
ann_FF_SSAB_online_nb — online SuperSAB training algorithm (without bias).
ann_FF_Std_batch — standard batch backpropagation.
ann_FF_Std_batch_nb — standard batch backpropagation (without bias).
ann_FF_Std_online — online standard backpropagation.
ann_FF_Std_online_nb — online standard backpropagation (without bias).
ann_FF_VHess — multiplication between a "vector" V and the Hessian.
ann_FF_grad — error gradient through finite differences.
ann_FF_grad_BP — error gradient through backpropagation.
ann_FF_grad_BP_nb — error gradient through backpropagation (without bias).
ann_FF_grad_nb — error gradient through finite differences (without bias).
ann_FF_init — initialize the weight hypermatrix.
ann_FF_init_nb — initialize the weight hypermatrix (without bias).
ann_FF_run — run patterns through a feedforward net.
ann_FF_run_nb — run patterns through a feedforward net (without bias).
ann_d_log_activ — derivative of the logistic activation function.
ann_d_sum_of_sqr — derivative of the sum-of-squares error.
ann_log_activ — logistic activation function.
ann_pat_shuffle — randomly shuffles patterns for an ANN.
ann_sum_of_sqr — calculates the sum-of-squares error.
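A minimal sketch of a typical session, pieced together from the entries above and the calls quoted in the comments below (the XOR data, epoch count, and learning-rate values are illustrative assumptions, not recommendations):

    N  = [2 2 1];               // 2 inputs, one hidden layer of 2 neurons, 1 output
    x  = [0 0 1 1; 0 1 0 1];    // input patterns, one pattern per column
    t  = [0 1 1 0];             // target patterns (XOR)
    lp = [2.5 0];               // learning parameters: lp(1) is the learning rate
    W  = ann_FF_init(N);        // initialize the weight hypermatrix
    W  = ann_FF_Std_online(x, t, N, W, lp, 400);  // train for 400 epochs
    y  = ann_FF_run(x, N, W)    // forward pass with the trained weights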
Upload date : 2011-11-24 13:30:39
MD5 : 7ed13b46ea1326136c8758856d317665
SHA1 : 798b7aa96d37c821e462334025961d168b549c42
Downloads : 2233
Update of the Description file.
Upload date : 2016-02-24 17:48:22
MD5 : 98e941fd468b25237ec88d517b6b09b7
SHA1 : 81c085f17f9ab4555678c5d4bfe0f8fef2e2f96d
Downloads : 2467
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain
Upload date : 2016-06-22 17:31:38
MD5 : 0f05a8a38cf0ba91852cbfee0a6aaf22
SHA1 : c487fdde6e641058d07f6f30a815ca9b900d48a7
Downloads : 3269
I have basic questions on the NN toolbox.
Do the inputs and outputs have to be normalized to [0,1] or [-1,1] before use? Are there
any example Scilab codes for the conjugate gradient algorithm?
> I have basic questions on the NN toolbox.
> Do the inputs and outputs have to be normalized to [0,1] or [-1,1] before use? Are there
> any example Scilab codes for the conjugate gradient algorithm?
It depends on the activation function. Ordinarily the inputs aren't limited (no need for
normalization), but the output of the default logistic activation is in the range (0,1).
I use my own "line function" and then it…
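A minimal sketch of such target scaling in plain Scilab (scale_targets is a hypothetical helper for illustration, not a toolbox function):

    // Hypothetical helper (not part of the toolbox): linearly map targets
    // into (lo, hi) so they fit the (0,1) range of the logistic output.
    function ts = scale_targets(t, lo, hi)
        ts = lo + (hi - lo) * (t - min(t)) / (max(t) - min(t));
    endfunction

    ts = scale_targets(t, 0.1, 0.9);  // keep targets away from the 0/1 asymptotes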
Once installed, how do I run the NN toolbox? With ANN_ToolboxEdit();?
> Once installed, how do I run the NN toolbox? With ANN_ToolboxEdit();?
As far as I am able to judge, it is only a few functions with a lot of limitations (like
only one type of activation function for the whole net) :(
Are you planning a version for Scilab 6? If so, when?
First, thank you for creating this tool. It has been a great learning tool.
I have been using the following call, which produces a working weight hypermatrix:
W = ann_FF_Std_online(traindata, targetdata, N, W, lp, Epochs)
where:
lp = [0.2, 0.0]
traindata is 23x1887
targetdata is 1x1887
N is [23 38 38 1]
Epochs is 2000
W is initialized by W = ann_FF_init(N)
I am now trying to use ann_FF_ConjugGrad with the same matrices, but I am getting the
error shown below:
W = ann_FF_ConjugGrad(traindata, targetdata, N, W, Epochs, 1.01)
I am uncertain what value I should use for the last argument, dW, for which I have
used a value of 1.01.
Error produced by ConjugGrad call:
Division by zero...
at line 46 of function ann_FF_ConjugGrad called by :
W = ann_FF_ConjugGrad(traindata,targetdata,N,W,Epochs,1.01);
Any help you can provide will be greatly appreciated.
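For reference, a hedged sketch of the same call with a much smaller step, on the assumption (suggested by the ann_FF_grad entry above) that dW is the finite-difference perturbation used to estimate the error gradient; the 1e-5 value is illustrative only:

    // Sketch only: assumes dW is a finite-difference perturbation step,
    // in which case a value far smaller than 1.01 would be typical.
    dW = 1e-5;
    W  = ann_FF_ConjugGrad(traindata, targetdata, N, W, Epochs, dW);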