ATOMS : Neural Network Module details

Neural Network Module

This is a Scilab Neural Network Module that covers supervised and unsupervised training algorithms.
(19391 downloads for this version - 27040 downloads for all versions)
A more recent valid version exists: 3.0
Owner: Tan Chin Luh
Owner Organization: Trity Technology
Administrators: Chin Luh Tan, Yann Debray
Creation Date: July 23, 2016
Source created on: Scilab 5.5.x
Binaries available on:
Scilab 5.5.x: Windows 64-bit, Windows 32-bit, Linux 64-bit, Linux 32-bit, macOS
Scilab 6.0.x: Windows 64-bit, Windows 32-bit, Linux 64-bit, Linux 32-bit, macOS
Install command
--> atomsInstall("neuralnetwork")
This Neural Network Module is based on the book "Neural Network Design" by Martin T. Hagan.

The module can be used to build the following networks:
1. Perceptron
2. Adaline
3. Multilayer Feedforward Backpropagation Network
   - Gradient Descent
   - Gradient Descent with Adaptive Learning Rate
   - Gradient Descent with Momentum
   - Gradient Descent with Adaptive Learning Rate and Momentum
   - Levenberg–Marquardt
4. Competitive Network
5. Self-Organizing Map
6. LVQ1 Network
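As a minimal sketch of how the module is used (assuming the ann_FFBP_gd signature quoted in the comments further down this page; the layer sizes and XOR data are illustrative, not from the module's documentation), a feedforward backpropagation network could be trained like this:

```scilab
// Minimal sketch: train a 2-4-1 feedforward network on XOR with
// gradient descent. Layer sizes [2 4 1] are illustrative.
P = [0 0 1 1; 0 1 0 1];         // inputs: one column per sample
T = [0 1 1 0];                  // targets: one column per sample
W = ann_FFBP_gd(P, T, [2 4 1]); // returns the trained weights
```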
Files (4)
[763.20 kB]
Source code archive

[567.74 kB]
OS-independent binary for Scilab 5.5.x
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

[653.41 kB]
OS-independent binary for Scilab 6.0.x
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

[674.69 kB]
Miscellaneous file
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

News (0)
Comments (9)
Comment from Yann Debray -- July 25, 2016, 02:18:04 PM    
Beautiful toolbox!
Here is the document on which this toolbox is based:
Answer from Chin Luh Tan -- August 1, 2016, 05:05:37 AM    
> Beautiful toolbox!
> Here is the document on which this toolbox is based:

Thanks for the ebook, I never noticed it was there!
Comment from Yann Debray -- July 28, 2016, 10:27:08 AM    
This comment has been deleted.
Comment from Raad Alshehri -- April 6, 2017, 09:09:23 PM    
I've been trying to use this toolbox for a while to implement handwritten digit recognition
using the MNIST data set. I believe there should be 784 inputs (28*28 pixels) in the input layer,
and 10 outputs (digits 0-9) in the output layer. I keep getting the error message
"Inconsistent row/column dimensions." no matter what the number of hidden neurons is:


--> W = ann_FFBP_gd(P,T,[784 10 10]);
at line 105 of function ann_FFBP_gd (line 105)

Inconsistent row/column dimensions.

Any suggestions?

I would be so delighted if I could get comprehensive documentation of the toolbox, as I'm
planning to use it to implement deep learning algorithms.
Answer from Chin Luh Tan -- April 13, 2017, 03:40:58 AM    
What's the output for your size(P) and size(T)? 

size(P) should be 784 x M
size(T) should be 10 x M

where M is the number of samples in the data set.

ANN and DL are similar but not the same; this module is more for machine learning and
conventional artificial neural networks, based on the book mentioned above.

I am exploring getting an ebook up for this module together with the image processing
module; I will keep you updated.


Chin Luh
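The shapes described in this answer can be sketched as follows (M, the random pixel data, and the constant labels are purely illustrative placeholders, not real MNIST data):

```scilab
// Sketch of the expected argument shapes for the MNIST question above.
M = 100;            // number of samples (illustrative)
P = rand(784, M);   // one 784-pixel image per column -> size(P) = 784 x M
T = zeros(10, M);   // one-hot targets, one column per sample -> 10 x M
T(1, :) = 1;        // e.g. label "0" for every sample in this sketch
W = ann_FFBP_gd(P, T, [784 10 10]);
```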
Comment from Raad Alshehri -- April 14, 2017, 04:06:41 PM    
Thank you for your kind feedback. Do you suggest that I not use this toolbox for deep learning?
Answer from Chin Luh Tan -- April 16, 2017, 05:31:39 PM    
Hi, regarding deep neural networks (DNN), you could use this module to train a NN with
multiple hidden layers. However, for the problem you mentioned, I think convolutional
neural networks (CNN) might be more suitable. This module does not have CNN yet.

Comment from Thomas Haregot -- July 16, 2017, 05:46:20 AM    

I would like to implement specific activation functions in each hidden layer. Is this
possible? If so, would you be so kind as to provide an example? I am interested in using
the following function: W = ann_FFBP_gd(P,T,N,af,lr,itermax,mse_min,gd_min) and its
optional parameters.

Many thanks,
Answer from Chin Luh Tan -- July 18, 2017, 05:36:31 AM    
W = ann_FFBP_gd(P,T,N,af,lr,itermax,mse_min,gd_min)

Example of the optional inputs:

af = ['ann_tansig_activ','ann_purelin_activ']; 
lr = 0.01; 
itermax = 1000; 
mse_min = 1e-5; 
gd_min =  1e-5; 

Available activation functions:

You could also create your own activation function together with its derivative (needed
to calculate the backpropagation sensitivities); name them in pairs:


hope this helps.

Chin Luh
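Assembled into a single call, the optional inputs above would look like this (a sketch only; P, T and N stand for your own data and layer sizes as defined earlier in the thread):

```scilab
// Sketch: passing all optional inputs to ann_FFBP_gd at once.
af      = ['ann_tansig_activ', 'ann_purelin_activ']; // per-layer activations
lr      = 0.01;    // learning rate
itermax = 1000;    // maximum training iterations
mse_min = 1e-5;    // stop when MSE falls below this
gd_min  = 1e-5;    // stop when the gradient falls below this
W = ann_FFBP_gd(P, T, N, af, lr, itermax, mse_min, gd_min);
```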
Comment from Ansh Abhay Balde -- March 7, 2018, 06:14:49 AM    
Does this module require any changes or new additions? If yes, then maybe I can help.
Answer from Chin Luh Tan -- June 26, 2018, 08:20:43 AM    
Sorry, I overlooked this message. Please feel free to make any improvements, as this
project is open source.

One improvement I can think of is generalization of the training algorithms, which
should make the module perform much better.


Comment from Bilge Kaan Gorur -- June 25, 2018, 12:40:01 PM    
Dear developers,

First of all, thank you very much for this very useful module. I can use it 
successfully, but I wonder if some more features are possible with it or not.

1. Can I get the final MSE value into a variable after the learning process? Currently, I
can see it in the interface, but I would like to save it programmatically.
2. How is the input data divided by the learning functions into training, validation and
test sets? I could not find any information about it. Can we change their percentages
manually, or is it currently hard-coded?

Thanks in advance.

Kaan Gorur
Answer from Chin Luh Tan -- June 26, 2018, 08:18:46 AM    

If you want full control over the module, please use the source-code version instead of
the binary version; then you can make any changes you like.

As this module is fully implemented in Scilab code, it should be easy to do so.



Comment from Radosław Cechowicz -- April 14, 2019, 09:16:31 PM    
Dear Developers,

I have found a bug in the implementation of the logsig activation function
(ann_logsig_activ) for Scilab 6.0.2.

The original function uses the exponential defined as %e^(x) (line 40), which gives errors:

--> W = ann_FFBP_gda( T_input, T_result, [4, 4, 3], ['ann_logsig_activ',
at line    51 of function evstr        ( 
/opt/scilab/share/scilab/modules/string/macros/evstr.sci line 66 )
at line   107 of function ann_FFBP_gda ( /home/rc/.Scilab/scilab-
6.0.2/atoms/neuralnetwork/2.0/macros/network/ann_FFBP_gda.sci line 107 )

evstr: Argument #1: Some expression can't be evaluated (%s_pow: Wrong size for input 
argument #2: Square matrix expected.).

This can be resolved with the following change to line 40 in ann_logsig_activ.sci:
(-) y = 1 ./ (1+%e^(-x));
(+) y = 1 ./ (1+exp(-x));

Please, update the library.
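A short sketch of why the fix works: in Scilab, the ^ operator with a matrix exponent is a matrix power, which requires a square matrix (hence the "Square matrix expected" error), while exp() operates elementwise on inputs of any shape:

```scilab
// Sketch: ^ with a matrix exponent is a matrix power in Scilab,
// while exp() is elementwise and works for any input shape.
x = [1 2 3];               // non-square input
y = 1 ./ (1 + exp(-x));    // elementwise logsig: works
// 1 ./ (1 + %e^(-x));     // would fail: "Square matrix expected"
```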