'Building Windows binaries' in this file) or use the pre-built
binaries (Windows binaries are in the directory `windows').
This software uses some level-1 BLAS subroutines. The needed functions are
included in this package. If a BLAS library is available on your
machine, you may use it by modifying the Makefile: uncomment the line
#LIBS ?= -lblas
and comment out the line
LIBS ?= blas/blas.a
-e epsilon : set tolerance of termination criterion
	-s 0 and 2
		|f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
		where f is the primal function and pos/neg are # of
		positive/negative data (default 0.01)
	-s 1, 3, and 4
		Dual maximal violation <= eps; similar to libsvm (default 0.1)
-s 5 and 6
For L2-regularized L2-loss SVC dual (-s 1), we solve
min_alpha 0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
s.t. 0 <= alpha_i,
For L2-regularized L2-loss SVC (-s 2), we solve
For L2-regularized L1-loss SVC dual (-s 3), we solve
min_alpha 0.5(alpha^T Q alpha) - e^T alpha
s.t. 0 <= alpha_i <= C,
For L1-regularized L2-loss SVC (-s 5), we solve
Q is a matrix with Q_ij = y_i y_j x_i^T x_j.
If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias].
The primal-dual relationship implies that -s 1 and -s 2 give the same
model.
> train -c 10 -w3 1 -w2 5 two_class_data_file
If there are only two classes, we train ONE model.
The C values for the two classes are 10 and 50.
> predict -b 1 test_file data_file.model output_file
solver_type can be one of L2R_LR, L2R_L2LOSS_SVC_DUAL, L2R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, MCSVM_CS, L1R_L2LOSS_SVC, L1R_LR.
L2R_LR L2-regularized logistic regression
L2R_L2LOSS_SVC_DUAL L2-regularized L2-loss support vector classification (dual)
L2R_L2LOSS_SVC L2-regularized L2-loss support vector classification (primal)
L2R_L1LOSS_SVC_DUAL L2-regularized L1-loss support vector classification (dual)
MCSVM_CS multi-class support vector classification by Crammer and Singer
L1R_L2LOSS_SVC L1-regularized L2-loss support vector classification
L1R_LR L1-regularized logistic regression
C is the cost of constraint violation.
eps is the stopping criterion.
nr_weight, weight_label, and weight are used to change the penalty
for some classes (if the weight for a class is not changed, it is set to 1).
organized in the following way
+------------------+------------------+------------+
| nr_class weights | nr_class weights | ...
| for 1st feature  | for 2nd feature  |
+------------------+------------------+------------+
If bias >= 0, x becomes [x; bias]. The number of features is
This function gives nr_w decision values in the array
dec_values. nr_w is 1 if there are two classes except multi-class
svm by Crammer and Singer (-s 4), and is the number of classes otherwise.
We implement the one-vs-the-rest multi-class strategy (-s 0,1,2,3) and
multi-class svm by Crammer and Singer (-s 4) for multi-class SVM.