ATOMS : ANN Toolbox details

ANN Toolbox
(6245 downloads for this version - 54067 downloads for all versions)
Details
Version
0.4.2.4
A more recent valid version exists: 0.5
Author
Ryurick M. Hristev
Owner Organization
Private Individual
Maintainers
Pierre MARECHAL
Allan CORNET
License
Creation Date
September 7, 2010
Source created on
Scilab 5.3.x
Binaries available on
Scilab 5.3.x:
Windows 64-bit Windows 32-bit Linux 64-bit Linux 32-bit macOS
Install command
--> atomsInstall("ANN_Toolbox")
Description
            This is a toolbox for artificial neural networks,
based on my developments described in the "Matrix ANN" book
(under development); if interested, send me an email at
r.hristev@phys.canterbury.ac.nz

Current features:
 - Only layered feedforward networks are supported *directly* at the moment
   (for others use the "hooks" provided)
 - Unlimited number of layers
 - Unlimited number of neurons per layer (set separately for each layer)
 - User defined activation function (defaults to logistic)
 - User defined error function (defaults to SSE)
 - Algorithms implemented so far:
    * standard (vanilla) with or without bias, on-line or batch
    * momentum with or without bias, on-line or batch
    * SuperSAB with or without bias, on-line or batch
    * Conjugate gradients
    * Jacobian computation
    * Computation of the product between a vector and the Hessian
 - Some helper functions provided

For full descriptions start with the top-level "ANN" man page.
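As a rough sketch of the typical workflow, here is a minimal layered feedforward example. The function names (ann_FF_init, ann_FF_Std_batch, ann_FF_run, ann_sum_of_sqr) and the shapes of the learning-parameter and range vectors are taken from the usage shown in the comments below; consult the "ANN" man page for the exact signatures.

```scilab
// XOR example: 2 inputs, one hidden layer of 2 neurons, 1 output
N = [2, 2, 1];            // neurons per layer, including the input layer

x = [0 0 1 1; 0 1 0 1];   // input patterns, one per column
t = [0 1 1 0];            // target outputs

lp = [0.1, 0];            // learning parameters (rate, threshold)
r  = [-1, 1];             // range for the random initial weights

W = ann_FF_init(N, r);                       // initialise weights
W = ann_FF_Std_batch(x, t, N, W, lp, 1000);  // standard batch BP, 1000 epochs

y = ann_FF_run(x, N, W);                     // run the trained network
E = ann_sum_of_sqr(y, t);                    // SSE over the training set
```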
            
Files (2)
[88.39 kB]
Source code archive

[195.37 kB]
OS-independent binary for Scilab 5.3.x
Binary version
Automatically generated by the ATOMS compilation chain

News (0)
Comments (3)
Comment from Adrian Letchford -- January 11, 2011, 04:41:54 AM    
Dear Ryurick Hristev,

I was trying out your ANN Toolbox for Scilab, and I'm wondering if I have something wrong.

I tried out a basic [5-3-1] feedforward net with 500 input patterns, training with the
online backpropagation algorithm. However, for me it was very slow. Is there anything
that needs doing to speed up processing? I will be doing mathematical work for my Ph.D.
in Computer Science this year; while the university offers Matlab, I would very much like
to stick to open-source software so that I can make the experiments open to anyone. I have
never used Matlab, so I do not know how fast it is, but I have coded my own BP networks in
C# and they can process at least a thousand input patterns much faster than Scilab can
process ten.

Here is the code I'm using, it is just a rewrite of one of your demos:

// network def.
//  - neurons per layer, including input
N  = [4,2,1];

// inputs
x = inputs'; // inputs were taken from a CSV file and organised into
             // this matrix; 500 input patterns were used.
     
// targets; at the training stage it acts as an identity network
t = outputs; //same as inputs

// learning parameter
lp = [0.01,0];

// init randomize weights between
r = [-1,1];
W = ann_FF_init(N,r);

// Do 100 iterations
timer();
W = ann_FF_Std_batch(x,t,N,W,lp,100);
disp(timer());

Your code is very easy to use, but is there a way to speed it up? The training here takes 
8 seconds on my computer.
Comment -- October 13, 2011, 08:36:25 AM    
Hi,

I did a review of this toolbox at:

http://wiki.scilab.org/New%20Scientific%20Features%20in%202010#A7th_of_September_2010:_ANN_Toolbox

Best regards,

Michaël Baudin
Comment from Martin Wendell Cordova Villanueva -- January 12, 2012, 05:13:06 PM    
Hi:

this is the typical "AND" example. I tried to solve it with Scilab code using the
ANN Toolbox.

// network def.
//  - neurons per layer, including input
-->N=[2 1];

-->//inputs
-->x=[0 0 1 1;
-->0 1 0 1]
x  =
 
    0.    0.    1.    1.  
    0.    1.    0.    1.

-->//target
-->t=[0 0 0 1]
 t  =
 
    0.    0.    0.    1.  
 
-->//learning parameter

-->lp=[8,0];

-->// init randomize weights between
-->r=[-1,1];
-->W=ann_FF_init_nb(N,r);

-->// T epoch variable

-->T=500;
-->W=ann_FF_Std_online_nb(x,t,N,W,lp,T);
--> //run
-->ann_FF_run_nb(x,N,W)
 ans  =
 
    0.5    0.2277666    0.2277666    0.0800306 

-->y=ann_FF_run_nb(x,N,W);
-->E = ann_sum_of_sqr(y,t)
 E  =
 
    0.6000495  

The error E is too large. I would like to know how to solve this typical "AND" problem
with the ANN Toolbox.

What ANN Toolbox commands can I use for the testing phase of the ANN after
finishing the training phase?

Thanks