svmtrain trains an SVM model:

matlab> model = svmtrain(training_label_vector, training_instance_matrix);
matlab> model = svmtrain(training_label_vector, training_instance_matrix, 'libsvm_options');
libsvm_options:
-s svm_type : set type of SVM (default 0)
	0 -- C-SVC
	1 -- nu-SVC
	2 -- one-class SVM
	3 -- epsilon-SVR
	4 -- nu-SVR
-t kernel_type : set type of kernel function (default 2)
	0 -- linear: u'*v
	1 -- polynomial: (gamma*u'*v + coef0)^degree
	2 -- radial basis function: exp(-gamma*|u-v|^2)
	3 -- sigmoid: tanh(gamma*u'*v + coef0)
	4 -- precomputed kernel (kernel values in training_instance_matrix)
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/num_features)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n : n-fold cross validation mode
-q : quiet mode (no outputs)
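As a minimal sketch of a training call with options (heart_scale is the example data file shipped with LIBSVM; the variable names are illustrative):

```matlab
% Read labels and the sparse instance matrix from the heart_scale file.
[heart_scale_label, heart_scale_inst] = libsvmread('heart_scale');

% Train a C-SVC (-s 0) with an RBF kernel (-t 2), C = 10 and gamma = 0.07.
model = svmtrain(heart_scale_label, heart_scale_inst, '-s 0 -t 2 -c 10 -g 0.07');
```

The returned model struct can then be passed to svmpredict.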
In the -g option, num_features means the number of attributes in the input data.
The option -v randomly splits the data into n parts and calculates cross validation accuracy/mean squared error on them.
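For example, assuming label/instance variables already loaded with libsvmread, a hedged sketch of 5-fold cross validation:

```matlab
% With -v, svmtrain returns the cross validation accuracy (classification)
% or mean squared error (regression) as a scalar, not a model struct.
cv_accuracy = svmtrain(heart_scale_label, heart_scale_inst, '-c 1 -g 0.07 -v 5');
```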
Scale your data. For example, scale each attribute to [0,1] or [-1,+1].
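One possible way to do this in MATLAB, as an illustrative sketch for a full (non-sparse) data matrix with one instance per row:

```matlab
% Linearly scale each attribute (column) of data to [0,1].
mins = min(data, [], 1);
ranges = max(data, [], 1) - mins;
ranges(ranges == 0) = 1;              % constant columns: avoid division by zero
scaled = (data - mins) ./ ranges;     % uses implicit expansion (R2016b or later)
```

Remember to apply the same mins/ranges computed on the training set to any test data, rather than rescaling the test set independently.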