ATOMS : Neural Network Module details

Neural Network Module

This is a Scilab Neural Network Module which covers supervised and unsupervised training algorithms
(3827 downloads for this version - 3827 downloads for all versions)
Details
Version
2.0
Author
Tan Chin Luh
Owner Organization
Trity Technology
Maintainers
Chin Luh Tan
Administrator Atoms
Yann Debray
License
Creation Date
July 23, 2016
Source created on
Scilab 5.5.x
Binaries available on
Scilab 5.5.x:
Windows 64-bit Windows 32-bit Linux 64-bit Linux 32-bit MacOSX
Scilab 6.0.x:
Windows 64-bit Windows 32-bit Linux 64-bit Linux 32-bit MacOSX
Install command
--> atomsInstall("neuralnetwork")
Description
This Neural Network Module is based on the book "Neural Network Design" by Martin T. Hagan.

The module can be used to build the following networks (a minimal usage sketch follows the list):
1. Perceptron
2. Adaline
3. Multilayer Feedforward Backpropagation Network
   - Gradient Descent
   - Gradient Descent with Adaptive Learning Rate
   - Gradient Descent with Momentum
   - Gradient Descent with Adaptive Learning Rate and Momentum
   - Levenberg–Marquardt
4. Competitive Network
5. Self-Organizing Map
6. LVQ1 Network
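
For example, a minimal training sketch for the feedforward backpropagation network (assuming the
calling convention W = ann_FFBP_gd(P, T, N, ...) described in the comments below, where P and T
hold one sample per column and N lists the layer sizes including the input layer) could look like:

P = [0 0 1 1; 0 1 0 1];           // inputs, one sample per column
T = [0 1 1 0];                    // targets (XOR), one sample per column
W = ann_FFBP_gd(P, T, [2 3 1]);   // 2 inputs, 3 hidden neurons, 1 output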
            
Files (4)
[763.20 kB]
Source code archive

[567.74 kB]
OS-independent binary for Scilab 5.5.x
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

[653.41 kB]
OS-independent binary for Scilab 6.0.x
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

[674.69 kB]
Miscellaneous file
Binary version (all platforms)
Automatically generated by the ATOMS compilation chain

News (0)
Comments (6)
Comment from Yann Debray -- July 25, 2016, 02:18:04 PM    
Beautiful toolbox !
Here is the document on which this toolbox is based:
http://hagan.okstate.edu/NNDesign.pdf
Answer from Chin Luh Tan -- August 1, 2016, 05:05:37 AM    
> Beautiful toolbox !
> Here is the document on which this toolbox is based:
> http://hagan.okstate.edu/NNDesign.pdf

Thanks for the ebook, I never noticed it was there! 
Comment from Yann Debray -- July 28, 2016, 10:27:08 AM    
This comment has been deleted.
Comment from Raad Alshehri -- April 6, 2017, 09:09:23 PM    
I've been trying to use this toolbox for a while to implement handwritten digit recognition
on the MNIST data set. I believe there should be 784 inputs (28*28 pixels) in the input layer
and 10 outputs (digits 0-9) in the output layer. I keep getting the error message
"Inconsistent row/column dimensions." no matter how many hidden layers I use.

Example:

--> W = ann_FFBP_gd(P,T,[784 10 10]);
at line   105 of function ann_FFBP_gd (
C:\Users\Raad\AppData\Roaming\Scilab\SCILAB~1.0\atoms\x64\NEURAL~1\2.0\macros\network\ann_FFBP_gd.sci
line 105 )

Inconsistent row/column dimensions.

Any suggestions?

I would be delighted if I could get comprehensive documentation for the toolbox, as I'm
planning to use it to implement deep learning algorithms.
Answer from Chin Luh Tan -- April 13, 2017, 03:40:58 AM    
What's the output for your size(P) and size(T)? 

for size(P), it should be 784 x M 
and
size(T) should be 10 x M

where M is the number of samples in the dataset. 
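
For example, if the images were already loaded into a 28x28xM array and the digit labels into a
1xM vector (imgs and labels below are placeholder names, not variables provided by the module),
the training matrices could be built like this:

M = size(imgs, 3);             // number of samples
P = matrix(imgs, 784, M);      // flatten each 28x28 image into a 784-element column
T = zeros(10, M);              // one-hot targets, one column per sample
for i = 1:M
    T(labels(i)+1, i) = 1;     // digit 0 maps to row 1, ..., digit 9 to row 10
end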

ANN and DL are similar but not the same; this module is more for machine learning and 
conventional artificial neural networks, based on the book mentioned above. 

I am exploring getting an ebook up for this module together with the image processing 
module; I will keep you updated.

Thanks.

Regards,
Chin Luh
Comment from Raad Alshehri -- April 14, 2017, 04:06:41 PM    
Thank you for your kind feedback. Do you suggest that I not use this toolbox for deep
learning?
Answer from Chin Luh Tan -- April 16, 2017, 05:31:39 PM    
Hi, regarding deep neural networks (DNN), you could use this module to train an NN with
multiple hidden layers. However, for the problem you mentioned, I think convolutional
neural networks (CNN) might be more suitable. This module does not have CNN yet. 

CL  
Comment from Thomas Haregot -- July 16, 2017, 05:46:20 AM    
Hello,

I would like to implement specific activation functions in each hidden layer. Is this
possible? If so, would you be so kind as to provide an example? I am interested in using
the following function: W = ann_FFBP_gd(P,T,N,af,lr,itermax,mse_min,gd_min) and its
alternatives.

Many thanks,
Thomas
Answer from Chin Luh Tan -- July 18, 2017, 05:36:31 AM    
W = ann_FFBP_gd(P,T,N,af,lr,itermax,mse_min,gd_min)

Example of the optional inputs:

af = ['ann_tansig_activ','ann_purelin_activ']; 
lr = 0.01; 
itermax = 1000; 
mse_min = 1e-5; 
gd_min =  1e-5; 
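
Putting these together (the layer specification [2 4 1] below is just an illustrative placeholder;
use your own input/hidden/output sizes):

W = ann_FFBP_gd(P, T, [2 4 1], af, lr, itermax, mse_min, gd_min);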


available activation functions:
ann_logsig_activ
ann_purelin_activ
ann_tansig_activ
ann_hardlim_activ

you could also create your own activation function together with its derivative (used to 
calculate the backprop sensitivities); name them in pairs:

ann_myfunc_activ
ann_d_myfunc_activ
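
As a rough sketch (the exact calling convention the module expects for activation functions is an
assumption here, not taken from the module's documentation), a logistic pair could look like:

function a = ann_myfunc_activ(n)
    a = 1 ./ (1 + exp(-n));     // activation applied element-wise to the net input n
endfunction

function d = ann_d_myfunc_activ(n)
    a = ann_myfunc_activ(n);
    d = a .* (1 - a);           // logistic derivative, used for the backprop sensitivities
endfunction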

hope this helps.

rgds,
Chin Luh