LIBLINEAR is a simple package for solving large-scale regularized linear
classification, regression and outlier detection. It currently supports
- L2-regularized logistic regression/L2-loss support vector classification/L1-loss support vector classification
- L1-regularized L2-loss support vector classification/L1-regularized logistic regression
- L2-regularized L2-loss support vector regression/L1-loss support vector regression
- one-class support vector machine.
This document explains the usage of LIBLINEAR.

To get started, please read the ``Quick Start'' section first.
For developers, please check the ``Library Usage'' section to learn
how to integrate LIBLINEAR into your software.
Table of Contents
=================

- When to use LIBLINEAR but not LIBSVM
- Quick Start
- Installation
- `train' Usage
- `predict' Usage
- `svm-scale' Usage
- Examples
- Library Usage
- Building Windows Binaries
- MATLAB/OCTAVE interface
- PYTHON interface
- Additional Information
When to use LIBLINEAR but not LIBSVM
====================================

There are some large data sets for which training with and without
nonlinear mappings gives similar performances. Without using kernels,
one can efficiently train a much larger set via linear
classification/regression. These data usually have a large number of
features. Document classification is one such example.

Warning: While generally liblinear is very fast, its default solver
may be slow under certain situations (e.g., data not scaled or C is
large). See Appendix B of our SVM guide about how to handle such
cases:

http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

Warning: If you are a beginner and your data sets are not large, you
should consider LIBSVM first:

http://www.csie.ntu.edu.tw/~cjlin/libsvm
Quick Start
===========

See the section ``Installation'' for installing LIBLINEAR.

After installation, there are programs `train' and `predict' for
training and testing, respectively.

About the data format, please check the README file of LIBSVM. Note
that feature indices must start from 1 (not 0).

A sample classification data set included in this package is
`heart_scale'.

Type `train heart_scale', and the program will read the training
data and output the model file `heart_scale.model'. If you have a test
set called heart_scale.t, then type `predict heart_scale.t
heart_scale.model output' to see the prediction accuracy. The `output'
file contains the predicted class labels.

For more information about `train' and `predict', see the sections
``train' Usage' and ``predict' Usage'.

To obtain good performances, sometimes one needs to scale the
data. Please check the program `svm-scale' of LIBSVM. For large and
sparse data, use `-l 0' to keep the sparsity.
Installation
============

On Unix systems, type `make' to build the `train', `predict',
and `svm-scale' programs. Run them without arguments to show the usages.

On other systems, consult `Makefile' to build them (e.g., see
``Building Windows binaries'' in this file) or use the pre-built
binaries (Windows binaries are in the directory `windows').

This software uses some level-1 BLAS subroutines. The needed functions are
included in this package. If a BLAS library is available on your
machine, you may use it by modifying the Makefile: unmark the
corresponding line there.
`svm-scale' Usage
=================

The tool `svm-scale', borrowed from LIBSVM, is for scaling input data files.
`train' Usage
=============

Usage: train [options] training_set_file [model_file]
options:
-s type : set type of solver (default 1)
  for multi-class classification
     0 -- L2-regularized logistic regression (primal)
     1 -- L2-regularized L2-loss support vector classification (dual)
     2 -- L2-regularized L2-loss support vector classification (primal)
     3 -- L2-regularized L1-loss support vector classification (dual)
     4 -- support vector classification by Crammer and Singer
     5 -- L1-regularized L2-loss support vector classification
     6 -- L1-regularized logistic regression
     7 -- L2-regularized logistic regression (dual)
  for regression
    11 -- L2-regularized L2-loss support vector regression (primal)
    12 -- L2-regularized L2-loss support vector regression (dual)
    13 -- L2-regularized L1-loss support vector regression (dual)
  for outlier detection
    21 -- one-class support vector machine (dual)
-c cost : set the parameter C (default 1)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-n nu : set the parameter nu of one-class SVM (default 0.5)
-e epsilon : set tolerance of termination criterion
    -s 0 and 2
        |f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
        where f is the primal function and pos/neg are # of
        positive/negative data (default 0.01)
    -s 11
        |f'(w)|_2 <= eps*|f'(w0)|_2 (default 0.0001)
    -s 1, 3, 4, 7, and 21
        Dual maximal violation <= eps; similar to libsvm (default 0.1 except 0.01 for -s 21)
    -s 5 and 6
        |f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,
        where f is the primal function (default 0.01)
    -s 12 and 13
        |f'(alpha)|_1 <= eps |f'(alpha0)|,
        where f is the dual function (default 0.1)
-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)
-R : do not regularize the bias; must be used with -B 1 to have the bias term; DON'T use this unless you know what it is
    (for -s 0, 2, 5, 6, 11)
-wi weight: weights adjust the parameter C of different classes (see README for details)
-v n: n-fold cross validation mode
-C : find parameters (C for -s 0, 2 and C, p for -s 11)
-q : quiet mode (no outputs)
Option -v randomly splits the data into n parts and calculates cross
validation accuracy on them.

Option -C conducts cross validation under different parameters and finds
the best one. This option is supported only by -s 0, -s 2 (for finding
C) and -s 11 (for finding C, p). If the solver is not specified, -s 2
is used.
Formulations:

For L2-regularized logistic regression (-s 0), we solve

min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized L2-loss SVC dual (-s 1), we solve

min_alpha  0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
    s.t.   0 <= alpha_i,

For L2-regularized L2-loss SVC (-s 2), we solve

min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2

For L2-regularized L1-loss SVC dual (-s 3), we solve

min_alpha  0.5(alpha^T Q alpha) - e^T alpha
    s.t.   0 <= alpha_i <= C,

For L1-regularized L2-loss SVC (-s 5), we solve

min_w \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2

For L1-regularized logistic regression (-s 6), we solve

min_w \sum |w_j| + C \sum log(1 + exp(-y_i w^Tx_i))

For L2-regularized logistic regression (-s 7), we solve

min_alpha  0.5(alpha^T Q alpha) + \sum alpha_i*log(alpha_i) + \sum (C-alpha_i)*log(C-alpha_i) - a constant
    s.t.   0 <= alpha_i <= C,

where

Q is a matrix with Q_ij = y_i y_j x_i^T x_j.

For L2-regularized L2-loss SVR (-s 11), we solve

min_w w^Tw/2 + C \sum max(0, |y_i-w^Tx_i|-epsilon)^2

For L2-regularized L2-loss SVR dual (-s 12), we solve

min_beta  0.5(beta^T (Q + lambda I/2/C) beta) - y^T beta + \sum |beta_i|

For L2-regularized L1-loss SVR dual (-s 13), we solve

min_beta  0.5(beta^T Q beta) - y^T beta + \sum |beta_i|
    s.t.   -C <= beta_i <= C,

where

Q is a matrix with Q_ij = x_i^T x_j.

For one-class SVM dual (-s 21), we solve

min_alpha  0.5(alpha^T Q alpha)
    s.t.   0 <= alpha_i <= 1 and \sum alpha_i = nu*l,

where

Q is a matrix with Q_ij = x_i^T x_j.
If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias]. For
example, L2-regularized logistic regression (-s 0) becomes

min_w w^Tw/2 + (w_{n+1})^2/2 + C \sum log(1 + exp(-y_i [w; w_{n+1}]^T[x_i; bias]))

Some may prefer not having (w_{n+1})^2/2 (i.e., the bias variable not
regularized). For primal solvers (-s 0, 2, 5, 6, 11), we provide an
option -R to remove (w_{n+1})^2/2. However, -R is generally not needed,
as for most data, models with and without (w_{n+1})^2/2 give similar
performances.

The primal-dual relationship implies that -s 1 and -s 2 give the same
model, -s 0 and -s 7 give the same, and -s 11 and -s 12 give the same.

We implement the one-vs-the-rest multi-class strategy for classification.
In training i vs. non_i, their C parameters are (weight from -wi)*C
and C, respectively. If there are only two classes, we train only one
model. Thus weight1*C vs. weight2*C is used. See the examples below.
We also implement multi-class SVM by Crammer and Singer (-s 4):

min_{w_m, \xi_i}  0.5 \sum_m ||w_m||^2 + C \sum_i \xi_i
    s.t.  w^T_{y_i} x_i - w^T_m x_i >= e^m_i - \xi_i \forall m,i

where e^m_i = 0 if y_i = m,
      e^m_i = 1 if y_i != m.

Here we solve the dual problem:

min_{\alpha}  0.5 \sum_m ||w_m(\alpha)||^2 + \sum_i \sum_m e^m_i alpha^m_i
    s.t.  \alpha^m_i <= C^m_i \forall m,i , \sum_m \alpha^m_i=0 \forall i

where w_m(\alpha) = \sum_i \alpha^m_i x_i,
and C^m_i = C if m = y_i,
    C^m_i = 0 if m != y_i.
`predict' Usage
===============

Usage: predict [options] test_file model_file output_file
options:
-b probability_estimates: whether to output probability estimates, 0 or 1 (default 0); currently for logistic regression only
-q : quiet mode (no outputs)

Note that -b is only needed in the prediction phase. This is different
from the setting of LIBSVM.
Examples
========

> train data_file

Train linear SVM with L2-loss function.

> train -s 0 data_file

Train a logistic regression model.

> train -s 21 -n 0.1 data_file

Train a linear one-class SVM which selects roughly 10% of the data as outliers.

> train -v 5 -e 0.001 data_file

Do five-fold cross-validation using L2-loss SVM.
Use a smaller stopping tolerance 0.001 than the default
0.1 if you want more accurate solutions.

> train -C data_file
...
Best C = 0.000488281  CV accuracy = 83.3333%
> train -c 0.000488281 data_file

Conduct cross validation many times by L2-loss SVM and find the
parameter C which achieves the best cross validation accuracy. Then
use the selected C to train the data for getting a model.

> train -C -s 0 -v 3 -c 0.5 -e 0.0001 data_file

For parameter selection by -C, users can specify other
solvers (currently -s 0, -s 2 and -s 11 are supported) and
different numbers of CV folds. Further, users can use
the -c option to specify the smallest C value of the
search range. This option is useful when users want to
rerun the parameter selection procedure from a specified
C under a different setting, such as a stricter stopping
tolerance -e 0.0001 in the above example. Similarly, for
-s 11, users can use the -p option to specify the
maximal p value of the search range.

> train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file

Train four classifiers:
positive        negative        Cp      Cn
class 1         class 2,3,4.    20      10
class 2         class 1,3,4.    50      10
class 3         class 1,2,4.    20      10
class 4         class 1,2,3.    10      10

> train -c 10 -w3 1 -w2 5 two_class_data_file

If there are only two classes, we train ONE model.
The C values for the two classes are 10 and 50.

> predict -b 1 test_file data_file.model output_file

Output probability estimates (for logistic regression only).
Library Usage
=============

These functions and structures are declared in the header file `linear.h'.
You can see `train.c' and `predict.c' for examples showing how to use them.
We define LIBLINEAR_VERSION and declare `extern int liblinear_version;'
in linear.h, so you can check the version number.

- Function: model* train(const struct problem *prob,
                const struct parameter *param);

    This function constructs and returns a linear classification
    or regression model according to the given training data and
    parameters.

    struct problem describes the problem:

        struct problem
        {
            int l, n;
            double *y;
            struct feature_node **x;
            double bias;
        };

    where `l' is the number of training data. If bias >= 0, we assume
    that one additional feature is added to the end of each data
    instance. `n' is the number of features (including the bias feature
    if bias >= 0). `y' is an array containing the target values (integers
    in classification, real numbers in regression). And `x' is an array
    of pointers, each of which points to a sparse representation (array
    of feature_node) of one training vector.

    For example, if we have the following training data:

    LABEL       ATTR1   ATTR2   ATTR3   ATTR4   ATTR5
    -----       -----   -----   -----   -----   -----
    1           0       0.1     0.2     0       0
    2           0       0.1     0.3    -1.2     0
    1           0.4     0       0       0       0
    2           0       0.1     0       1.4     0.5
    3          -0.1    -0.2     0.1     1.1     0.1

    and bias = 1, then the components of problem are:

    l = 5
    n = 6

    y -> 1 2 1 2 3

    x -> [ ] -> (2,0.1) (3,0.2) (6,1) (-1,?)
         [ ] -> (2,0.1) (3,0.3) (4,-1.2) (6,1) (-1,?)
         [ ] -> (1,0.4) (6,1) (-1,?)
         [ ] -> (2,0.1) (4,1.4) (5,0.5) (6,1) (-1,?)
         [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (6,1) (-1,?)
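
    As an illustration, the same problem can be set up in C roughly as
    follows (a minimal sketch; the variable names r1..r5, x, y and prob
    are ours, not part of the API):

        #include "linear.h"

        /* Sparse rows of the example above; index -1 terminates a row.
           The (6,1) entries are the bias feature because bias = 1. */
        struct feature_node r1[] = {{2,0.1},{3,0.2},{6,1},{-1,0}};
        struct feature_node r2[] = {{2,0.1},{3,0.3},{4,-1.2},{6,1},{-1,0}};
        struct feature_node r3[] = {{1,0.4},{6,1},{-1,0}};
        struct feature_node r4[] = {{2,0.1},{4,1.4},{5,0.5},{6,1},{-1,0}};
        struct feature_node r5[] = {{1,-0.1},{2,-0.2},{3,0.1},{4,1.1},{5,0.1},{6,1},{-1,0}};

        struct feature_node *x[] = {r1, r2, r3, r4, r5};
        double y[] = {1, 2, 1, 2, 3};

        struct problem prob = {5, 6, y, x, 1};  /* l, n, y, x, bias */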
    struct parameter describes the parameters of a linear classification
    or regression model:

        struct parameter
        {
            int solver_type;

            /* these are for training only */
            double eps;             /* stopping tolerance */
            double C;
            double nu;              /* one-class SVM only */
            int nr_weight;
            int *weight_label;
            double *weight;
            double p;
            double *init_sol;
            int regularize_bias;
        };

    solver_type can be one of L2R_LR, L2R_L2LOSS_SVC_DUAL, L2R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, MCSVM_CS, L1R_L2LOSS_SVC, L1R_LR, L2R_LR_DUAL, L2R_L2LOSS_SVR, L2R_L2LOSS_SVR_DUAL, L2R_L1LOSS_SVR_DUAL, ONECLASS_SVM.

      L2R_LR                L2-regularized logistic regression (primal)
      L2R_L2LOSS_SVC_DUAL   L2-regularized L2-loss support vector classification (dual)
      L2R_L2LOSS_SVC        L2-regularized L2-loss support vector classification (primal)
      L2R_L1LOSS_SVC_DUAL   L2-regularized L1-loss support vector classification (dual)
      MCSVM_CS              support vector classification by Crammer and Singer
      L1R_L2LOSS_SVC        L1-regularized L2-loss support vector classification
      L1R_LR                L1-regularized logistic regression
      L2R_LR_DUAL           L2-regularized logistic regression (dual)
    for regression
      L2R_L2LOSS_SVR        L2-regularized L2-loss support vector regression (primal)
      L2R_L2LOSS_SVR_DUAL   L2-regularized L2-loss support vector regression (dual)
      L2R_L1LOSS_SVR_DUAL   L2-regularized L1-loss support vector regression (dual)
    for outlier detection
      ONECLASS_SVM          one-class support vector machine (dual)

    C is the cost of constraints violation.
    p is the sensitiveness of the loss of support vector regression.
    nu in ONECLASS_SVM approximates the fraction of data as outliers.
    eps is the stopping criterion.

    nr_weight, weight_label, and weight are used to change the penalty
    for some classes (if the weight for a class is not changed, it is
    set to 1). This is useful for training a classifier using unbalanced
    input data or with asymmetric misclassification costs.

    nr_weight is the number of elements in the arrays weight_label and
    weight. Each weight[i] corresponds to weight_label[i], meaning that
    the penalty of class weight_label[i] is scaled by a factor of weight[i].

    If you do not want to change the penalty for any of the classes,
    just set nr_weight to 0.

    init_sol includes the initial weight vectors (supported for only some
    solvers). See the explanation of the vector w in the model
    structure.

    *NOTE* To avoid wrong parameters, check_parameter() should be
    called before train().
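
    For reference, here is a minimal sketch of filling in struct
    parameter and training (the helper name train_l2r_lr is ours; the
    field values follow the defaults listed under `train' Usage):

        #include <stdio.h>
        #include "linear.h"

        struct model *train_l2r_lr(const struct problem *prob)
        {
            struct parameter param;

            param.solver_type = L2R_LR;  /* -s 0 */
            param.eps = 0.01;            /* default tolerance for -s 0 */
            param.C = 1;
            param.nu = 0.5;              /* used by ONECLASS_SVM only */
            param.nr_weight = 0;         /* keep all class penalties at C */
            param.weight_label = NULL;
            param.weight = NULL;
            param.p = 0.1;               /* used by SVR only */
            param.init_sol = NULL;
            param.regularize_bias = 1;   /* default; 0 corresponds to -R */

            const char *err = check_parameter(prob, &param);
            if (err != NULL)
            {
                fprintf(stderr, "parameter error: %s\n", err);
                return NULL;
            }
            return train(prob, &param);
        }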
    struct model stores the model obtained from the training procedure:

        struct model
        {
            struct parameter param;
            int nr_class;           /* number of classes */
            int nr_feature;
            double *w;
            int *label;             /* label of each class */
            double bias;
            double rho;             /* one-class SVM only */
        };

    param describes the parameters used to obtain the model.

    nr_class and nr_feature are the number of classes and features,
    respectively. nr_class = 2 for regression.

    The array w gives feature weights; its size is
    nr_feature*nr_class, but it is nr_feature if nr_class = 2. We use the
    one-against-the-rest strategy for multi-class classification, so each
    feature index corresponds to nr_class weight values. Weights are
    organized in the following way:

    +------------------+------------------+------------+
    | nr_class weights | nr_class weights |  ...
    | for 1st feature  | for 2nd feature  |
    +------------------+------------------+------------+

    The array label stores the class labels.

    If bias >= 0, x becomes [x; bias]. The number of features is
    increased by one, so w is a (nr_feature+1)*nr_class array. The
    value of bias is stored in the variable bias.

    rho is the bias term used in one-class SVM only.
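
    To make the layout concrete, here is a sketch of reading the decision
    value of the class at label index k directly from w (for the common
    case nr_class > 2 with the one-vs-the-rest solvers; the helper name
    is ours, and bias handling is omitted for brevity):

        #include "linear.h"

        /* Decision value of the class at label index k, read from the
           layout above: w[(feature_index - 1)*nr_class + k]. */
        double decfun_value(const struct model *m,
                            const struct feature_node *x, int k)
        {
            double dec = 0;
            for (; x->index != -1; x++)
                if (x->index <= m->nr_feature)
                    dec += m->w[(x->index - 1) * m->nr_class + k] * x->value;
            return dec;
        }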
- Function: void cross_validation(const problem *prob, const parameter *param, int nr_fold, double *target);

    This function conducts cross validation. Data are separated into
    nr_fold folds. Under the given parameters, each fold is sequentially
    validated using the model trained on the remaining folds. Predicted
    labels in the validation process are stored in the array called
    target.

    The format of prob is same as that for train().
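
    A typical call, computing 5-fold cross validation accuracy for a
    classification problem, might look like this (a sketch; the helper
    name is ours and error handling is omitted):

        #include <stdio.h>
        #include <stdlib.h>
        #include "linear.h"

        void report_cv_accuracy(const struct problem *prob,
                                const struct parameter *param)
        {
            int i, correct = 0;
            double *target = malloc(sizeof(double) * prob->l);

            cross_validation(prob, param, 5, target);  /* nr_fold = 5 */
            for (i = 0; i < prob->l; i++)
                if (target[i] == prob->y[i])
                    correct++;
            printf("CV accuracy = %g%%\n", 100.0 * correct / prob->l);
            free(target);
        }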
- Function: void find_parameters(const struct problem *prob,
        const struct parameter *param, int nr_fold, double start_C,
        double start_p, double *best_C, double *best_p, double *best_score);

    This function is similar to cross_validation. However, instead of
    conducting cross validation under specified parameters, for -s 0 and 2 it
    conducts cross validation many times under parameters C = start_C,
    2*start_C, 4*start_C, 8*start_C, ..., and finds the best one with
    the highest cross validation accuracy. For -s 11, it conducts cross
    validation many times with a two-fold loop. The outer loop considers a
    default sequence of p = 19/20*max_p, ..., 1/20*max_p, 0, and
    under each p value the inner loop considers a sequence of parameters
    C = start_C, 2*start_C, 4*start_C, ..., and finds the best one with the
    lowest mean squared error.

    If start_C <= 0, then this procedure calculates a small enough C
    for prob as the start_C. The procedure stops when the models of
    all folds become stable or C reaches max_C.

    If start_p <= 0, then this procedure calculates a maximal p for prob as
    the start_p. Otherwise, the procedure starts with the first
    i/20*max_p <= start_p, so the outer sequence is i/20*max_p,
    (i-1)/20*max_p, ..., 0.

    The best C, the best p, and the corresponding accuracy (or MSE) are
    assigned to *best_C, *best_p and *best_score, respectively. For
    classification, *best_p is not used, and the returned value is -1.
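
    For example, a classification search over C (the -s 0/2 case, where
    start_p and *best_p are unused) could be sketched as follows (the
    helper name is ours):

        #include <stdio.h>
        #include "linear.h"

        void search_best_C(const struct problem *prob,
                           const struct parameter *param)
        {
            double best_C, best_p, best_score;

            /* start_C <= 0 and start_p <= 0 let the procedure pick
               the starting points itself, as described above. */
            find_parameters(prob, param, 5, -1, -1,
                            &best_C, &best_p, &best_score);
            printf("best C = %g  CV accuracy = %g%%\n",
                   best_C, 100.0 * best_score);
        }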
- Function: double predict(const model *model_, const feature_node *x);

    For a classification model, the predicted class for x is returned.
    For a regression model, the function value of x calculated using
    the model is returned.

- Function: double predict_values(const struct model *model_,
        const struct feature_node *x, double* dec_values);

    This function gives nr_w decision values in the array dec_values.
    nr_w=1 if regression is applied or the number of classes is two. An
    exception is multi-class SVM by Crammer and Singer (-s 4), where
    nr_w = 2 if there are two classes. For all other situations, nr_w is
    the number of classes.

    We implement the one-vs-the-rest multi-class strategy (-s 0,1,2,3,5,6,7)
    and multi-class SVM by Crammer and Singer (-s 4) for multi-class SVM.
    The class with the highest decision value is returned.
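
    The following sketch obtains the decision values for one instance;
    the nr_w computation mirrors the rules above (the helper name is
    ours):

        #include <stdio.h>
        #include <stdlib.h>
        #include "linear.h"

        void show_decision_values(const struct model *m,
                                  const struct feature_node *x)
        {
            int i, nr_class = get_nr_class(m);
            /* nr_w = 1 for regression/two classes, except MCSVM_CS (-s 4) */
            int nr_w = (nr_class == 2 && m->param.solver_type != MCSVM_CS)
                       ? 1 : nr_class;
            double *dec = malloc(sizeof(double) * nr_w);
            double label = predict_values(m, x, dec);

            printf("predicted label = %g\n", label);
            for (i = 0; i < nr_w; i++)
                printf("dec_values[%d] = %g\n", i, dec[i]);
            free(dec);
        }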
- Function: double predict_probability(const struct model *model_,
        const struct feature_node *x, double* prob_estimates);

    This function gives nr_class probability estimates in the array
    prob_estimates. nr_class can be obtained from the function
    get_nr_class. The class with the highest probability is
    returned. Currently, we support only the probability outputs of
    logistic regression.
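
    A guarded usage sketch, pairing it with check_probability_model and
    get_labels (the helper name is ours):

        #include <stdio.h>
        #include <stdlib.h>
        #include "linear.h"

        void show_probabilities(const struct model *m,
                                const struct feature_node *x)
        {
            int i, nr_class = get_nr_class(m);
            int *labels = malloc(sizeof(int) * nr_class);
            double *prob = malloc(sizeof(double) * nr_class);

            if (check_probability_model(m))
            {
                double pred = predict_probability(m, x, prob);
                get_labels(m, labels);
                printf("predicted label = %g\n", pred);
                for (i = 0; i < nr_class; i++)
                    printf("P(y = %d) = %g\n", labels[i], prob[i]);
            }
            free(labels);
            free(prob);
        }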
- Function: int get_nr_feature(const model *model_);

    The function gives the number of attributes of the model.

- Function: int get_nr_class(const model *model_);

    The function gives the number of classes of the model.
    For a regression model, 2 is returned.

- Function: void get_labels(const model *model_, int* label);

    This function outputs the name of labels into an array called label.
    For a regression model, label is unchanged.

- Function: double get_decfun_coef(const struct model *model_, int feat_idx,
        int label_idx);

    This function gives the coefficient for the feature with feature index =
    feat_idx and the class with label index = label_idx. Note that feat_idx
    starts from 1, while label_idx starts from 0. If feat_idx is not in the
    valid range (1 to nr_feature), then a zero value will be returned. For
    classification models, if label_idx is not in the valid range (0 to
    nr_class-1), then a zero value will be returned; for regression models
    and one-class SVM models, label_idx is ignored.
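
    For instance, the weight vector of the class at label index 0 can be
    printed without touching the raw layout of w (a sketch; the helper
    name is ours):

        #include <stdio.h>
        #include "linear.h"

        void print_first_class_weights(const struct model *m)
        {
            int j, n = get_nr_feature(m);

            for (j = 1; j <= n; j++)  /* feat_idx starts from 1 */
                printf("w[%d] = %g\n", j, get_decfun_coef(m, j, 0));
        }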
- Function: double get_decfun_bias(const struct model *model_, int label_idx);

    This function gives the bias term corresponding to the class with
    label_idx. For classification models, if label_idx is not in a valid range
    (0 to nr_class-1), then a zero value will be returned; for regression
    models, label_idx is ignored. This function cannot be called for a
    one-class SVM model.

- Function: double get_decfun_rho(const struct model *model_);

    This function gives rho, the bias term used in one-class SVM only. This
    function can only be called for a one-class SVM model.

- Function: const char *check_parameter(const struct problem *prob,
        const struct parameter *param);

    This function checks whether the parameters are within the feasible
    range of the problem. This function should be called before calling
    train() and cross_validation(). It returns NULL if the
    parameters are feasible; otherwise an error message is returned.

- Function: int check_probability_model(const struct model *model);

    This function returns 1 if the model supports probability output;
    otherwise, it returns 0.

- Function: int check_regression_model(const struct model *model);

    This function returns 1 if the model is a regression model; otherwise,
    it returns 0.

- Function: int check_oneclass_model(const struct model *model);

    This function returns 1 if the model is a one-class SVM model; otherwise,
    it returns 0.

- Function: int save_model(const char *model_file_name,
        const struct model *model_);

    This function saves a model to a file; it returns 0 on success, or -1
    if an error occurs.

- Function: struct model *load_model(const char *model_file_name);

    This function returns a pointer to the model read from the file,
    or a null pointer if the model could not be loaded.
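
    A save/reload round trip might look like this (a sketch; the helper
    name and the file name `my.model' are ours):

        #include <stdio.h>
        #include "linear.h"

        int save_and_reload(const struct model *trained,
                            const struct feature_node *x)
        {
            struct model *m;

            if (save_model("my.model", trained) != 0)
            {
                fprintf(stderr, "cannot save model\n");
                return -1;
            }
            if ((m = load_model("my.model")) == NULL)
            {
                fprintf(stderr, "cannot load model\n");
                return -1;
            }
            printf("prediction = %g\n", predict(m, x));
            free_and_destroy_model(&m);
            return 0;
        }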
- Function: void free_model_content(struct model *model_ptr);

    This function frees the memory used by the entries in a model structure.

- Function: void free_and_destroy_model(struct model **model_ptr_ptr);

    This function frees the memory used by a model and destroys the model
    structure.

- Function: void destroy_param(struct parameter *param);

    This function frees the memory used by a parameter set.

- Function: void set_print_string_function(void (*print_func)(const char *));

    Users can specify their output format by providing a function. Use

        set_print_string_function(NULL);

    for default printing to stdout.
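
    For example, training output can be redirected to stderr or silenced
    entirely (the helper names are ours):

        #include <stdio.h>
        #include "linear.h"

        static void print_to_stderr(const char *s) { fputs(s, stderr); }
        static void print_null(const char *s) { (void)s; }  /* like -q */

        void configure_output(int quiet)
        {
            set_print_string_function(quiet ? print_null : print_to_stderr);
        }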
Building Windows Binaries
=========================

Windows binaries are available in the directory `windows'. To re-build
them via Visual C++, use the following steps:

1. Open a DOS command box and change to the liblinear directory. If
the environment variables of VC++ have not been set, type

"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"

You may have to modify the above command according to which version of
VC++ is installed and where.

2. Type

nmake -f Makefile.win clean all

3. (Optional) To build the shared library liblinear.dll, type

nmake -f Makefile.win lib

4. (Optional) To build 32-bit windows binaries, you must
    (1) Setup "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars32.bat" instead of vcvars64.bat
    (2) Change CFLAGS in Makefile.win: /D _WIN64 to /D _WIN32
MATLAB/OCTAVE Interface
=======================

Please check the file README in the directory `matlab'.

PYTHON Interface
================

Please check the file README in the directory `python'.
Additional Information
======================

If you find LIBLINEAR helpful, please cite it as

R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin.
LIBLINEAR: A Library for Large Linear Classification, Journal of
Machine Learning Research 9 (2008), 1871-1874. Software available at
http://www.csie.ntu.edu.tw/~cjlin/liblinear

For any questions and comments, please send your email to
cjlin@csie.ntu.edu.tw