svmtrain: trains an SVM model.

    model = svmtrain(training_label_vector, training_instance_matrix);
    model = svmtrain(training_label_vector, training_instance_matrix, 'libsvm_options');

'libsvm_options' is a string of training options in the following format:
-s svm_type : set type of SVM (default 0)
    0 -- C-SVC
    1 -- nu-SVC
    2 -- one-class SVM
    3 -- epsilon-SVR
    4 -- nu-SVR
-t kernel_type : set type of kernel function (default 2)
    0 -- linear: u'*v
    1 -- polynomial: (gamma*u'*v + coef0)^degree
    2 -- radial basis function: exp(-gamma*|u-v|^2)
    3 -- sigmoid: tanh(gamma*u'*v + coef0)
    4 -- precomputed kernel (kernel values in training_instance_matrix)
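The four built-in kernel formulas above can be sketched in plain Python; the toy vectors and parameter values below are illustrative, not LIBSVM defaults:

```python
import math

def linear(u, v):
    # u'*v : dot product
    return sum(a * b for a, b in zip(u, v))

def polynomial(u, v, gamma, coef0, degree):
    # (gamma*u'*v + coef0)^degree
    return (gamma * linear(u, v) + coef0) ** degree

def rbf(u, v, gamma):
    # exp(-gamma*|u-v|^2), where |u-v|^2 is the squared Euclidean distance
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def sigmoid(u, v, gamma, coef0):
    # tanh(gamma*u'*v + coef0)
    return math.tanh(gamma * linear(u, v) + coef0)

u, v = [1.0, 2.0], [3.0, 4.0]
print(linear(u, v))                   # 11.0
print(polynomial(u, v, 0.5, 0.0, 3))  # 5.5**3 = 166.375
print(rbf(u, v, 0.5))                 # exp(-4) ~ 0.0183
```

Note that for kernel type 4 none of these formulas is evaluated: you supply the precomputed kernel values yourself in training_instance_matrix.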
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/num_features)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train an SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n : n-fold cross validation mode
-q : quiet mode (no outputs)
The returned model is a structure with the following fields:

-Parameters: parameters
-nr_class: number of classes; = 2 for regression/one-class SVM
-totalSV: total #SV
-rho: -b of the decision function(s) wx+b
-Label: label of each class; empty for regression/one-class SVM
-ProbA: pairwise probability information; empty if -b 0 or in one-class SVM
-ProbB: pairwise probability information; empty if -b 0 or in one-class SVM
-nSV: number of SVs for each class; empty for regression/one-class SVM
-sv_coef: coefficients for SVs in decision functions
-SVs: support vectors
The default gamma of 1/num_features in the -g option uses the number of attributes (features) in the input data.
The option -v randomly splits the data into n parts and calculates cross-validation accuracy (for classification) or mean squared error (for regression) on them.
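The splitting step behind -v can be sketched as follows; the fold-assignment strategy here (shuffle, then deal indices round-robin) is an assumption for illustration and may differ from LIBSVM's internal shuffling:

```python
import random

def n_fold_splits(n_samples, n_folds, seed=0):
    # Randomly permute sample indices, then deal them into n roughly
    # equal folds. Cross validation then trains on n-1 folds and
    # evaluates on the held-out fold, n times.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[k::n_folds] for k in range(n_folds)]

folds = n_fold_splits(10, 5)
print(len(folds))  # 5 folds of 2 indices each
```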
Scale your data. For example, scale each attribute to [0,1] or [-1,+1].
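A minimal per-attribute min-max scaling sketch (the svm-scale tool shipped with LIBSVM does this for you; whichever way you scale, remember to apply the same scaling factors to the test data):

```python
def scale_column(values, lo=0.0, hi=1.0):
    # Min-max scale one attribute (column) to [lo, hi].
    # A constant column carries no information; map it to lo.
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

print(scale_column([2.0, 4.0, 6.0]))          # [0.0, 0.5, 1.0]
print(scale_column([2.0, 4.0, 6.0], -1, 1))   # [-1.0, 0.0, 1.0]
```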
The 'svmtrain' function returns a model which can be used for future prediction. It is a structure whose fields [Parameters, nr_class, totalSV, rho, Label, ProbA, ProbB, nSV, sv_coef, SVs] are described above.
If you do not use the option '-b 1', ProbA and ProbB are empty matrices. If the '-v' option is specified, cross validation is conducted and the returned model is just a scalar: cross-validation accuracy for classification and mean-squared error for regression.
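For a two-class C-SVC the fields sv_coef, SVs, and rho combine into the decision function f(x) = sum_i sv_coef(i) * K(SVs(i), x) - rho (rho stores -b, so subtracting it adds b back). A toy Python sketch with an RBF kernel; the support vectors, coefficients, and gamma below are made-up illustrative values, not output of any real training run:

```python
import math

def rbf(u, v, gamma):
    # exp(-gamma*|u-v|^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def decision_value(x, SVs, sv_coef, rho, gamma):
    # f(x) = sum_i sv_coef[i] * K(SVs[i], x) - rho;
    # the predicted class is the sign of f(x).
    return sum(c * rbf(sv, x, gamma) for c, sv in zip(sv_coef, SVs)) - rho

# Hypothetical 2-SV model: one positive, one negative support vector.
SVs = [[0.0, 0.0], [1.0, 1.0]]
sv_coef = [1.0, -1.0]
f = decision_value([0.0, 0.0], SVs, sv_coef, rho=0.0, gamma=1.0)
print(f)  # 1 - exp(-2) ~ 0.8647, so class is positive
```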