Train a (statistical) classifier
CC = nan_train_sc(D, classlabel)
CC = nan_train_sc(D, classlabel, MODE)
CC = nan_train_sc(D, classlabel, MODE, W)
    weights sample D(k,:) with weight W(k)
    (not all classifiers support weighting)

CC contains the model parameters of a classifier, which can be applied to
test data using nan_test_sc:
    R = nan_test_sc(CC, D, ...)
D           training samples (each row is a sample, each column is a feature)
classlabel  labels of each sample; must have the same number of rows as D
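For example, a minimal sketch (the toy data, the choice of MODE.TYPE, and
all variable names are illustrative only):

    D = [randn(50,3); randn(50,3)+2];        % 100 samples, 3 features
    classlabel = [ones(50,1); 2*ones(50,1)]; % two classes, labels 1 and 2
    MODE.TYPE = 'LDA';                       % any type from the list below
    CC = nan_train_sc(D, classlabel, MODE);  % train the classifier
    R  = nan_test_sc(CC, D);                 % apply it (here: to the training data)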
The following classifier types are supported (MODE.TYPE):

'MDA'           mahalanobis distance based classifier [1]
'MD2'           mahalanobis distance based classifier [1]
'MD3'           mahalanobis distance based classifier [1]
'GRB'           Gaussian radial basis function [1]
'QDA'           quadratic discriminant analysis [1]
'LD2'           linear discriminant analysis (see LDBC2) [1]
                MODE.hyperparameter.gamma: regularization parameter [default 0]
'LD3'           linear discriminant analysis (see LDBC3) [1]
                MODE.hyperparameter.gamma: regularization parameter [default 0]
'LD4'           linear discriminant analysis (see LDBC4) [1]
                MODE.hyperparameter.gamma: regularization parameter [default 0]
'LD5'           another LDA (motivated by CSP)
                MODE.hyperparameter.gamma: regularization parameter [default 0]
'RDA'           regularized discriminant analysis [7]
                MODE.hyperparameter.gamma: regularization parameter
                MODE.hyperparameter.lambda =
                    gamma = 0, lambda = 0 : MDA
                    gamma = 0, lambda = 1 : LDA [default]
                Hint: the hyperparameters are used only in nan_test_sc;
                testing different hyperparameters does not require
                repeated calls to nan_train_sc (see the sketch after
                the notes on CC below).
'GDBC'          general distance based classifier [1]
''              statistical classifier; requires the MODE argument in nan_test_sc
'###/DELETION'  if the data contains missing values (encoded as NaNs),
                a row-wise or column-wise deletion is applied (depending
                on which method removes fewer data values)
'###/GSVD'      GSVD and statistical classifier [2,3]
'###/sparse'    sparse variant [5]
                ('###' must be 'LDA' or any other classifier)
'PLS'           (linear) partial least squares regression
'REG'           regression analysis
'WienerHopf'    Wiener-Hopf equation
'NBC'           Naive Bayesian Classifier [6]
'aNBC'          Augmented Naive Bayesian Classifier [6]
'NBPW'          Naive Bayesian Parzen Window [9]
'PLA'           Perceptron Learning Algorithm [11]
                MODE.hyperparameter.alpha = alpha [default: 1]
                update rule: w = w + alpha * e' * x
                (a standalone sketch of this rule follows this group)
'LMS','AdaLine' least mean squares, adaptive linear element,
                Widrow-Hoff delta rule
                MODE.hyperparameter.alpha = alpha [default: 1]
'Winnow2'       Winnow2 algorithm [12]
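The PLA update rule above can be spelled out directly. The following is a
toolbox-independent sketch in plain MATLAB/Octave; the data, labels, and
learning rate are made up for illustration:

    % one sweep of the perceptron rule  w = w + alpha * e' * x
    X = [randn(20,2)-1; randn(20,2)+1];   % toy samples, one per row
    y = [-ones(20,1); ones(20,1)];        % target labels in {-1,+1}
    X = [X, ones(40,1)];                  % append a bias column
    w = zeros(1,3);                       % initial weight vector
    alpha = 1;                            % default learning rate
    for k = 1:40
        x = X(k,:);                       % current sample (row vector)
        e = y(k) - sign(x*w');            % prediction error
        w = w + alpha * e * x;            % the update rule quoted above
    end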
'PSVM'          Proximal SVM [8]
                MODE.hyperparameter.nu (default: 1.0)
'LPM'           Linear Programming Machine;
                uses and requires train_LPM of the iLog CPLEX optimizer
                MODE.hyperparameter.c_value =
'CSP'           CommonSpatialPattern; very experimental and just a hack;
                uses a smoothing window of 50 samples
'SVM','SVM1r'   support vector machines, one-vs-rest
                MODE.hyperparameter.c_value =
'SVM11'         support vector machines, one-vs-one + voting
                MODE.hyperparameter.c_value =
'RBF'           support vector machines with RBF kernel
                MODE.hyperparameter.c_value =
                MODE.hyperparameter.gamma =
'SVM:LIB'       libSVM [default SVM algorithm]
'SVM:bioinfo'   uses and requires svmtrain from the bioinfo toolbox
'SVM:OSU'       uses and requires mexSVMTrain from the OSU-SVM toolbox
'SVM:LOO'       uses and requires svcm_train from the LOO-SVM toolbox
'SVM:Gunn'      uses and requires the svc functions from the Gunn-SVM toolbox
'SVM:KM'        uses and requires the svmclass function from the KM-SVM toolbox
'SVM:LIN0'      (default) LibLinear [10] with L2-regularized logistic regression
'SVM:LIN1'      LibLinear [10] with L2-loss support vector machines (dual)
'SVM:LIN2'      LibLinear [10] with L2-loss support vector machines (primal)
'SVM:LIN3'      LibLinear [10] with L1-loss support vector machines (dual)
'SVM:LIN4'      LibLinear [10] with multi-class support vector machines
                by Crammer and Singer
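As an illustration of selecting one of the SVM variants, a minimal sketch
(assuming D and classlabel as defined above; the choice of backend and the
value of c_value are arbitrary):

    MODE.TYPE = 'SVM:LIB';                  % libSVM backend from the list above
    MODE.hyperparameter.c_value = 1;        % illustrative cost value
    CC = nan_train_sc(D, classlabel, MODE); % train the SVM
    R  = nan_test_sc(CC, D);                % classify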
CC contains the model parameters of a classifier. Some time ago, CC was a
statistical classifier containing the mean and the covariance of the data
of each class (encoded in so-called "extended covariance matrices");
nowadays, other classifiers are supported as well.
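Following the hint given for 'RDA' above, hyperparameters can be varied on
the trained model without retraining. A sketch, assuming the trained model
stores them in CC.hyperparameter as that hint suggests:

    MODE.TYPE = 'RDA';
    CC = nan_train_sc(D, classlabel, MODE); % train once
    CC.hyperparameter.gamma = 0;
    for lambda = [0, 0.5, 1]
        CC.hyperparameter.lambda = lambda;  % modify the model in place
        R = nan_test_sc(CC, D);             % re-test without retraining
    end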
References:
[1] R. Duda, P. Hart, and D. Stork, Pattern Classification, second ed.,
John Wiley & Sons, 2001.
[2] Peg Howland and Haesun Park,
Generalizing Discriminant Analysis Using the Generalized Singular Value Decomposition,
IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8), 2004.
http://dx.doi.org/10.1109/TPAMI.2004.46
[3] http://www-static.cc.gatech.edu/~kihwan23/face_recog_gsvd.htm
[4] Jieping Ye, Ravi Janardan, Cheong Hee Park, Haesun Park,
A new optimization criterion for generalized discriminant analysis on undersampled problems,
The Third IEEE International Conference on Data Mining, Melbourne, Florida, USA,
November 19-22, 2003.
[5] J.D. Tebbens and P. Schlesinger,
Improving Implementation of Linear Discriminant Analysis for the Small Sample Size Problem,
Computational Statistics & Data Analysis, 52(1):423-437, 2007.
http://www.cs.cas.cz/mweb/download/publi/JdtSchl2006.pdf
[6] H. Zhang, The optimality of Naive Bayes,
http://www.cs.unb.ca/profs/hzhang/publications/FLAIRS04ZhangH.pdf
[7] J.H. Friedman. Regularized discriminant analysis.
Journal of the American Statistical Association, 84:165–175, 1989.
[8] G. Fung and O.L. Mangasarian, Proximal Support Vector Machine Classifiers,
in: F. Provost and R. Srikant (eds.), Proc. KDD-2001: Knowledge Discovery and Data Mining,
August 26-29, 2001, San Francisco, CA, pp. 77-86.
[9] Kai Keng Ang, Zhang Yang Chin, Haihong Zhang, Cuntai Guan,
Filter Bank Common Spatial Pattern (FBCSP) in Brain-Computer Interface,
IEEE International Joint Conference on Neural Networks (IJCNN 2008),
IEEE World Congress on Computational Intelligence, 1-8 June 2008, pp. 2390-2397.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin.
LIBLINEAR: A Library for Large Linear Classification, Journal of Machine Learning Research 9(2008), 1871-1874.
Software available at http://www.csie.ntu.edu.tw/~cjlin/liblinear
[11] http://en.wikipedia.org/wiki/Perceptron#Learning_algorithm
[12] N. Littlestone (1988),
"Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm",
Machine Learning, 2(4):285-318.
http://en.wikipedia.org/wiki/Winnow_(algorithm)