ATOMS : ANN Toolbox details

ANN Toolbox
(6138 downloads for this version - 41104 downloads for all versions)
Details
Version
0.4.2.4
A more recent valid version exists: 0.5
Author
Ryurick M. Hristev
Owner Organization
Private Individual
Maintainers
Pierre Marechal
Allan Cornet
License
Creation Date
September 7, 2010
Source created on
Scilab 5.3.x
Binaries available on
Scilab 5.3.x:
Windows 64-bit Windows 32-bit Linux 64-bit Linux 32-bit MacOSX
Install command
--> atomsInstall("ANN_Toolbox")
Description
This is a toolbox for artificial neural networks, based on the
developments described in my "Matrix ANN" book (under development);
if interested, send me an email at
r.hristev@phys.canterbury.ac.nz

Current features:
 - Only layered feedforward networks are supported *directly* at the moment
   (for others use the "hooks" provided)
 - Unlimited number of layers
 - Unlimited number of neurons per layer, set separately for each layer
 - User defined activation function (defaults to logistic)
 - User defined error function (defaults to SSE)
 - Algorithms implemented so far:
    * standard (vanilla) with or without bias, on-line or batch
    * momentum with or without bias, on-line or batch
    * SuperSAB with or without bias, on-line or batch
    * Conjugate gradients
    * Jacobian computation
    * Computation of the product of a "vector" and the Hessian
 - Some helper functions provided

For full descriptions start with the toplevel "ANN" man page.
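A minimal usage sketch, pieced together from the function names appearing in the comment below (`ann_FF_init`, `ann_FF_Std_batch`); the forward-pass function `ann_FF_run` and the exact parameter values are assumptions, so consult the "ANN" man page for the authoritative calling conventions. It trains a small feedforward network on the XOR problem with standard batch backpropagation:

```scilab
// neurons per layer, including the input layer
N = [2, 2, 1];

// input patterns (one per column) and their targets
x = [0 0 1 1; 0 1 0 1];
t = [0 1 1 0];

// learning parameter and weight-initialisation range
// (illustrative values, not taken from the toolbox docs)
lp = [2.5, 0];
r  = [-1, 1];

// initialise random weights and run 400 epochs of standard batch backprop
W = ann_FF_init(N, r);
W = ann_FF_Std_batch(x, t, N, W, lp, 400);

// forward pass with the trained weights (ann_FF_run is assumed here)
y = ann_FF_run(x, N, W);
disp(y);
```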
            
Files (2)
[88.39 kB]
Source code archive

[195.37 kB]
OS-independent binary for Scilab 5.3.x
Binary version
Automatically generated by the ATOMS compilation chain

News (0)
Comments (3)
Comment from Adrian Letchford -- January 11, 2011, 04:41:54 AM    
Dear Ryurick Hristev,

I was trying out your ANN Toolbox for Scilab, and I'm wondering if I have something wrong.

I tried out a basic [5-3-1] feedforward net with 500 input patterns, training with the
online backpropagation algorithm. However, for me, it was very slow. Is there anything
that can be done to speed up processing? I will be doing mathematical work for my Ph.D. in
Computer Science this year, and while the university offers Matlab, I would very much like
to stick to open-source software so that I can make my experiments open to anyone. I have
never used Matlab, so I do not know how fast it is, but I have coded my own BP networks in
C# and they can process at least a thousand input patterns much faster than Scilab can do
10.

Here is the code I'm using, it is just a rewrite of one of your demos:

// network def.
//  - neurons per layer, including input
N  = [4,2,1];

// inputs
x = inputs'; // inputs were taken from a CSV file and organised
             // into this matrix; 500 input patterns were used

// targets; at the training stage it acts as an identity network
t = outputs; // same as inputs

// learning parameter
lp = [0.01,0];

// init randomize weights between
r = [-1,1];
W = ann_FF_init(N,r);

// Do 100 iterations
timer();
W = ann_FF_Std_batch(x,t,N,W,lp,100);
disp(timer());

Your code is very easy to use, but is there a way to speed it up? The training here takes 
8 seconds on my computer.