LIBLINEAR is a simple package for solving large-scale regularized
linear classification. It currently supports L2-regularized logistic
-regression, L2-loss support vector machines, and L1-loss support
-vector machines. This document explains the usage of LIBLINEAR.
+regression/L2-loss support vector classification/L1-loss support vector
+classification, and L1-regularized L2-loss support vector classification/
+logistic regression. This document explains the usage of LIBLINEAR.
To get started, please read the ``Quick Start'' section first.
For developers, please check the ``Library Usage'' section to learn
how you can integrate LIBLINEAR in your software.
Usage: train [options] training_set_file [model_file]
options:
-s type : set type of solver (default 1)
0 -- L2-regularized logistic regression
- 1 -- L2-loss support vector machines (dual)
- 2 -- L2-loss support vector machines (primal)
- 3 -- L1-loss support vector machines (dual)
- 4 -- multi-class support vector machines by Crammer and Singer
+ 1 -- L2-regularized L2-loss support vector classification (dual)
+ 2 -- L2-regularized L2-loss support vector classification (primal)
+ 3 -- L2-regularized L1-loss support vector classification (dual)
+ 4 -- multi-class support vector classification by Crammer and Singer
+ 5 -- L1-regularized L2-loss support vector classification
+ 6 -- L1-regularized logistic regression
-c cost : set the parameter C (default 1)
-e epsilon : set tolerance of termination criterion
-s 0 and 2
	|f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
	where f is the primal function and pos/neg are # of
	positive/negative data (default 0.01)
-s 1, 3, and 4
Dual maximal violation <= eps; similar to libsvm (default 0.1)
+ -s 5 and 6
+ |f'(w)|_inf <= eps*min(pos,neg)/l*|f'(w0)|_inf,
+ where f is the primal function (default 0.01)
-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default 1)
-wi weight: weights adjust the parameter C of different classes (see README for details)
-v n: n-fold cross validation mode
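
For example, one can train an L2-regularized L1-loss SVC (dual) with
C = 10, or run 5-fold cross validation with the default solver, in the
style of the README's examples; `data_file' below is a placeholder for
your own training set:

> train -s 3 -c 10 data_file
> train -v 5 data_file
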
For L2-regularized logistic regression (-s 0), we solve

min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))
-For L2-loss SVM dual (-s 1), we solve
+For L2-regularized L2-loss SVC dual (-s 1), we solve
min_alpha 0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
s.t. 0 <= alpha_i,
-For L2-loss SVM (-s 2), we solve
+For L2-regularized L2-loss SVC (-s 2), we solve
min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2
-For L1-loss SVM dual (-s 3), we solve
+For L2-regularized L1-loss SVC dual (-s 3), we solve
min_alpha 0.5(alpha^T Q alpha) - e^T alpha
s.t. 0 <= alpha_i <= C,
+For L1-regularized L2-loss SVC (-s 5), we solve
+
+min_w \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2
+
+For L1-regularized logistic regression (-s 6), we solve
+
+min_w \sum |w_j| + C \sum log(1 + exp(-y_i w^Tx_i))
+
where
Q is a matrix with Q_ij = y_i y_j x_i^T x_j.
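
To make the notation concrete, the following small C sketch (not part
of the library; it uses dense arrays only for illustration, whereas
LIBLINEAR itself stores instances as sparse feature_node arrays)
evaluates the -s 5 objective \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2:

#include <math.h>

/* Objective value of L1-regularized L2-loss SVC (-s 5) for a dense
   data matrix x (l instances, n features) and labels y in {+1,-1}. */
double l1r_l2loss_obj(const double *w, int n,
                      const double *const *x, const int *y, int l,
                      double C)
{
	double obj = 0;
	int i, j;
	for (j = 0; j < n; j++)
		obj += fabs(w[j]);          /* L1 regularizer: sum_j |w_j| */
	for (i = 0; i < l; i++)
	{
		double wx = 0, m;
		for (j = 0; j < n; j++)
			wx += w[j]*x[i][j];     /* w^T x_i */
		m = 1 - y[i]*wx;            /* margin violation */
		if (m > 0)
			obj += C*m*m;           /* squared hinge loss */
	}
	return obj;
}
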
double* weight;
};
- solver_type can be one of L2_LR, L2LOSS_SVM_DUAL, L2LOSS_SVM, L1LOSS_SVM_DUAL, MCSVM_CS.
+ solver_type can be one of L2_LR, L2_L2LOSS_SVC_DUAL, L2_L2LOSS_SVC, L2_L1LOSS_SVC_DUAL, MCSVM_CS, L1_L2LOSS_SVC, L1_LR.
- L2_LR L2-regularized logistic regression
- L2LOSS_SVM_DUAL L2-loss support vector machines (dual)
- L2LOSS_SVM L2-loss support vector machines (primal)
- L1LOSS_SVM_DUAL L1-loss support vector machines (dual)
- MCSVM_CS multi-class support vector machines by Crammer and Singer
+ L2_LR L2-regularized logistic regression
+ L2_L2LOSS_SVC_DUAL L2-regularized L2-loss support vector classification (dual)
+ L2_L2LOSS_SVC L2-regularized L2-loss support vector classification (primal)
+ L2_L1LOSS_SVC_DUAL L2-regularized L1-loss support vector classification (dual)
+ MCSVM_CS multi-class support vector classification by Crammer and Singer
+ L1_L2LOSS_SVC L1-regularized L2-loss support vector classification
+ L1_LR L1-regularized logistic regression
C is the cost of constraint violation.
eps is the stopping criterion.
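
Putting these pieces together, here is a minimal sketch of a call to
the C interface. It assumes the declarations of this version's
linear.h (integer labels in struct problem, predict returning int,
models released with destroy_model); the two-instance toy problem is
invented for illustration, and solver_type uses a name from the list
above:

#include <stdio.h>
#include "linear.h"

int main(void)
{
	/* two sparse instances with two features each;
	   index -1 terminates an instance */
	struct feature_node x1[] = {{1, 0.5}, {2, -1.0}, {-1, 0}};
	struct feature_node x2[] = {{1, -0.5}, {2, 1.0}, {-1, 0}};
	struct feature_node *x[] = {x1, x2};
	int y[] = {+1, -1};

	struct problem prob;
	prob.l = 2;        /* number of training instances */
	prob.n = 2;        /* number of features */
	prob.y = y;
	prob.x = x;
	prob.bias = -1;    /* bias < 0: no bias term appended */

	struct parameter param;
	param.solver_type = L1_L2LOSS_SVC;   /* -s 5 */
	param.C = 1;
	param.eps = 0.01;
	param.nr_weight = 0;                 /* no per-class C weights */
	param.weight_label = NULL;
	param.weight = NULL;

	const char *error_msg = check_parameter(&prob, &param);
	if (error_msg)
	{
		fprintf(stderr, "Error: %s\n", error_msg);
		return 1;
	}

	struct model *m = train(&prob, &param);
	printf("prediction for x1: %d\n", predict(m, x1));
	destroy_model(m);
	return 0;
}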