ATOMS : ANN Toolbox details

ANN Toolbox
(15100 downloads for this version - 53395 downloads for all versions)
Details
Version
0.4.2.5
A more recent valid version exists: 0.5
Authors
Ryurick M. Hristev
Allan Cornet <allan.cornet@scilab.org>
Samuel GOUGEON
Owner Organization
Private Individual
Maintainers
Allan CORNET
Administrator ATOMS
Samuel Gougeon
License
Creation Date
November 24, 2011
Source created on
Scilab 5.4.x
Binaries available on
Scilab 5.4.x:
Windows 64-bit, Windows 32-bit, Linux 64-bit, Linux 32-bit, macOS
Scilab 5.5.x:
Windows 64-bit, Windows 32-bit, Linux 64-bit, Linux 32-bit, macOS
Install command
--> atomsInstall("ANN_Toolbox")
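If the module is not loaded automatically in the current session, it can be loaded
manually (restarting Scilab normally autoloads installed ATOMS modules):

--> atomsLoad("ANN_Toolbox")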
Description
This is a toolbox for artificial neural networks, based on the "Matrix ANN" book.

Current features:
 - Only layered feedforward networks are supported *directly* at the moment
   (for others use the "hooks" provided)
 - Unlimited number of layers
 - Unlimited number of neurons in each layer
 - User defined activation function (defaults to logistic)
 - User defined error function (defaults to SSE)
 - Algorithms implemented so far:
    * standard (vanilla) with or without bias, on-line or batch
    * momentum with or without bias, on-line or batch
    * SuperSAB with or without bias, on-line or batch
    * Conjugate gradients
    * Jacobian computation
    * Computation of the product of a "vector" and the Hessian
 - Some helper functions provided

For full descriptions, start with the top-level "ANN" man page; a minimal usage sketch follows the function list below.

Functions:

ann_FF — Algorithms for feedforward nets.
ann_FF_ConjugGrad — Conjugate Gradient algorithm.
ann_FF_Hess — computes Hessian by finite differences.
ann_FF_INT — internal implementation of feedforward nets.
ann_FF_Jacobian — computes Jacobian by finite differences.
ann_FF_Jacobian_BP — computes Jacobian through backpropagation.
ann_FF_Mom_batch — batch backpropagation with momentum.
ann_FF_Mom_batch_nb — batch backpropagation with momentum (without bias).
ann_FF_Mom_online — online backpropagation with momentum.
ann_FF_Mom_online_nb — online backpropagation with momentum (without bias).
ann_FF_SSAB_batch — batch SuperSAB algorithm.
ann_FF_SSAB_batch_nb — batch SuperSAB algorithm (without bias).
ann_FF_SSAB_online — online SuperSAB training algorithm.
ann_FF_SSAB_online_nb — online SuperSAB training algorithm (without bias).
ann_FF_Std_batch — standard batch backpropagation.
ann_FF_Std_batch_nb — standard batch backpropagation (without bias).
ann_FF_Std_online — online standard backpropagation.
ann_FF_Std_online_nb — online standard backpropagation (without bias).
ann_FF_VHess — multiplication between a "vector" V and the Hessian.
ann_FF_grad — error gradient through finite differences.
ann_FF_grad_BP — error gradient through backpropagation.
ann_FF_grad_BP_nb — error gradient through backpropagation (without bias).
ann_FF_grad_nb — error gradient through finite differences (without bias).
ann_FF_init — initialize the weight hypermatrix.
ann_FF_init_nb — initialize the weight hypermatrix (without bias).
ann_FF_run — run patterns through a feedforward net.
ann_FF_run_nb — run patterns through a feedforward net (without bias).
ann_d_log_activ — derivative of the logistic activation function.
ann_d_sum_of_sqr — derivative of the sum-of-squares error.
ann_log_activ — logistic activation function.
ann_pat_shuffle — randomly shuffles the patterns for an ANN.
ann_sum_of_sqr — calculates the sum-of-squares error.
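
A minimal usage sketch (layer sizes, learning parameters and epoch count are illustrative,
and the ann_FF_run argument order should be checked against its man page):

    // train a 2-2-1 feedforward net on the XOR patterns
    N  = [2 2 1];              // neurons per layer: 2 inputs, 2 hidden, 1 output
    W  = ann_FF_init(N);       // random initial weight hypermatrix
    x  = [0 0 1 1; 0 1 0 1];   // input patterns, one pattern per column
    t  = [0 1 1 0];            // target for each pattern
    lp = [2.5 0];              // learning parameters (first entry: learning rate)
    W  = ann_FF_Std_online(x, t, N, W, lp, 500);   // train for 500 epochs
    y  = ann_FF_run(x, N, W)   // outputs of the trained net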

Files (3)
[88.39 kB]
Source code archive

[169.30 kB]
OS-independent binary for Scilab 5.4.x
Update of the Description file.
[161.41 kB]
OS-independent binary for Scilab 5.5.x
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

News (0)
Comments (5)
Comment from Rajive Ganguli -- June 14, 2012, 02:25:56 AM    
I have basic questions on the NN toolbox.
Do the inputs and outputs have to be normalized to [0,1] or [-1,1] before use? Are there
any example Scilab codes for conjugate gradient?
Answer from mike mike -- May 12, 2015, 12:26:28 PM    
> I have basic questions on the NN toolbox.
> Do the inputs and outputs have to be normalized to [0,1] or [-1,1] before use? Are there
> any example Scilab codes for conjugate gradient?


It depends on the activation function. Ordinarily the inputs aren't limited (no need for
normalization), but the output is in the range (0,1). I use my own "line function" and then
it is unlimited too.
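
A minimal scaling sketch along these lines (the 0.1/0.9 margins and the variable names
are illustrative):

    // targets must fit the (0,1) range of the logistic output
    tmin = min(targetdata);  tmax = max(targetdata);
    t_scaled = 0.1 + 0.8 * (targetdata - tmin) / (tmax - tmin);
    // ... train the net on t_scaled instead of targetdata ...
    // y_scaled = ann_FF_run(traindata, N, W);
    // y = tmin + (y_scaled - 0.1) * (tmax - tmin) / 0.8;   // undo the scaling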
Comment from Елена Рожина -- November 18, 2013, 07:31:15 PM
Once installed, how do I run the NN toolbox? ANN_ToolboxEdit();?
Answer from mike mike -- May 12, 2015, 12:28:38 PM    
> Once installed, how do I run the NN toolbox? ANN_ToolboxEdit();?

As far as I am able to judge, it is only a few functions with a lot of limitations (like
only one type of activation function for the whole net) :(
Comment from Jeff Waters -- July 10, 2017, 09:58:34 PM    
Are you planning a version for Scilab 6? When?

Thanks

Jeff

Answer from Samuel Gougeon -- December 1, 2018, 11:14:34 PM    
Hello,
ANN_Toolbox 0.5 is now released for Scilab 5.5, Scilab 6.0, and Scilab 6.1.
Enjoy!
Comment from Thomas Haregot -- July 18, 2017, 02:46:10 AM    
Dear Sirs,

First, thank you for creating this tool. It has been a great learning tool.

I have been using the following call, which provides a working hypermatrix:

W = ann_FF_Std_online(traindata,targetdata,N,W,lp,Epochs)

where,
lp = [0.2, 0.0]
traindata is: 23x1887
targetdata is: 1x1887
N is: [23 38 38 1]
Epochs is: 2000
W is initialized by: W = ann_FF_init(N)

I am now trying to use the ann_FF_ConjugGrad with the same matrices, but I am getting the
following error.

W = ann_FF_ConjugGrad(traindata,targetdata,N,W,Epochs,1.01)

I am uncertain what value I should use for the last argument, dW, to which I have
applied a value of 1.01.

Error produced by ConjugGrad call:
Division by zero...
at line      46 of function ann_FF_ConjugGrad called by :  
    W = ann_FF_ConjugGrad(traindata,targetdata,N,W,Epochs,1.01);


Any help you can provide will be greatly appreciated.

-Thomas
Comment from Vanshika Adiani -- September 18, 2018, 08:51:13 AM    
I have an input variable that is a 6x26 matrix,
a target variable that is 2x26,
and an unknown test sample to be predicted that is a 6x4 matrix.

input=read('input.txt',6,26)
target=read('target.txt',2,26)
test=read('test.txt',6,4)
net_1_1 = ann_FFBP_lm(input,target,[6,1,2])
net_1_1_test=ann_FFBP_run(test,net_1_1)
ann_FFBP_lm does the training for me.
ann_FFBP_run predicts the output for unknown data.
But if I wish to save the trained net, so that it generates the same output
every time, how can that be done?
Fresh training will always generate new weights.
Any suggestions?
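
One possible approach, as a sketch: the trained net is an ordinary Scilab variable, so the
generic save/load functions can persist it between sessions (the file name is arbitrary):

    save('trained_net.dat', 'net_1_1');   // save the variable by name
    // later, in another session:
    load('trained_net.dat');              // restores net_1_1
    net_1_1_test = ann_FFBP_run(test, net_1_1);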