ANN FeedForward Backpropagation Gradient Descent with Adaptive Learning Rate and Momentum training function.
W = ann_FFBP_gdx(P,T,N)
W = ann_FFBP_gdx(P,T,N,af,lr,lr_inc,lr_dec,Mr,itermax,mse_min,gd_min,mse_diff_max)
P            - Training input
T            - Training target
N            - Number of Neurons in each layer, including the input and output layers
af           - Activation Function from the 1st hidden layer to the output layer
lr           - Learning Rate
lr_inc       - Learning Rate Increase Rate
lr_dec       - Learning Rate Decrease Rate
Mr           - Momentum
itermax      - Maximum number of epochs for training
mse_min      - Minimum Error (Performance Goal)
gd_min       - Minimum Gradient
mse_diff_max - Maximum MSE change, in percentage; default 5%
W            - Output Weights and biases
This function performs FeedForward Backpropagation training with the Gradient Descent algorithm, using an Adaptive Learning Rate and Momentum.
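The update scheme behind this kind of training can be sketched as follows. This is a minimal Python illustration of one common variant of gradient descent with momentum and an adaptive learning rate, applied to a toy quadratic rather than a network; the parameter names (lr, lr_inc, lr_dec, Mr, itermax) mirror the help text above, but the exact rule used by ann_FFBP_gdx is not specified here, so treat this as an assumption, not the toolbox's implementation.

```python
import numpy as np

def train_gdx(grad, loss, w0, lr=0.1, lr_inc=1.05, lr_dec=0.7,
              mr=0.9, itermax=200, max_inc=1.05):
    """Gradient descent with momentum and adaptive learning rate (sketch).

    grad, loss : callables returning the gradient / scalar loss at w
    lr_inc/lr_dec : factors that grow/shrink lr (assumed meanings of the
                    help text's "increase/decrease rate")
    max_inc : max allowed relative loss increase before a step is rejected
    """
    w = w0.copy()
    v = np.zeros_like(w)              # momentum "velocity"
    prev = loss(w)
    for _ in range(itermax):
        v = mr * v - lr * grad(w)     # momentum-smoothed step
        w_new = w + v
        cur = loss(w_new)
        if cur > prev * max_inc:      # step made things much worse:
            lr *= lr_dec              #   shrink lr, discard the step
            v[:] = 0.0
        else:                         # accept the step;
            w = w_new                 #   grow lr when loss improved
            if cur < prev:
                lr *= lr_inc
            prev = cur
    return w

# Toy check: minimize f(w) = ||w||^2 from w0 = [3, -2].
w = train_gdx(grad=lambda w: 2 * w,
              loss=lambda w: float(w @ w),
              w0=np.array([3.0, -2.0]))
```

The adaptive rule lets lr grow while training improves and backs off (rejecting the step and clearing the momentum term) when the error jumps, which is what makes this scheme more robust than plain gradient descent with a fixed step size.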