1 LIBLINEAR is a simple package for solving large-scale regularized linear
2 classification and regression. It currently supports
3 - L2-regularized logistic regression/L2-loss support vector classification/L1-loss support vector classification
4 - L1-regularized L2-loss support vector classification/L1-regularized logistic regression
5 - L2-regularized L2-loss support vector regression/L1-loss support vector regression.
6 This document explains the usage of LIBLINEAR.
8 To get started, please read the ``Quick Start'' section first.
9 For developers, please check the ``Library Usage'' section to learn
how to integrate LIBLINEAR into your software.
Table of Contents
=================

- When to use LIBLINEAR but not LIBSVM
- Quick Start
- Installation
- `svm-scale' Usage
- `train' Usage
- `predict' Usage
- Examples
- Library Usage
- Building Windows Binaries
- MATLAB/OCTAVE Interface
- PYTHON Interface
- Additional Information
28 When to use LIBLINEAR but not LIBSVM
29 ====================================
For some large data sets, training with and without nonlinear mappings
gives similar performance. Without using kernels, one can
efficiently train a much larger set via linear classification/regression.
These data usually have a large number of features. Document
classification is an example.
Warning: While LIBLINEAR is generally very fast, its default solver
may be slow under certain situations (e.g., data are not scaled or C is
large). See Appendix B of our SVM guide on how to handle such
cases:
41 http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
43 Warning: If you are a beginner and your data sets are not large, you
44 should consider LIBSVM first.
LIBSVM page:
http://www.csie.ntu.edu.tw/~cjlin/libsvm
Quick Start
===========

See the section ``Installation'' for installing LIBLINEAR.
55 After installation, there are programs `train' and `predict' for
56 training and testing, respectively.
For the data format, please check the README file of LIBSVM. Note
that feature indices must start from 1 (not 0).
A sample classification data set included in this package is `heart_scale'.
63 Type `train heart_scale', and the program will read the training
64 data and output the model file `heart_scale.model'. If you have a test
65 set called heart_scale.t, then type `predict heart_scale.t
66 heart_scale.model output' to see the prediction accuracy. The `output'
67 file contains the predicted class labels.
69 For more information about `train' and `predict', see the sections
70 `train' Usage and `predict' Usage.
To obtain good performance, sometimes one needs to scale the
data. Please check the program `svm-scale' of LIBSVM. For large and
sparse data, use `-l 0' to keep the sparsity.
Installation
============

On Unix systems, type `make' to build the `train', `predict',
80 and `svm-scale' programs. Run them without arguments to show the usages.
82 On other systems, consult `Makefile' to build them (e.g., see
`Building Windows Binaries' in this file) or use the pre-built
84 binaries (Windows binaries are in the directory `windows').
86 This software uses some level-1 BLAS subroutines. The needed functions are
87 included in this package. If a BLAS library is available on your
machine, you may use it by modifying the Makefile: Unmark the following line

	#LIBS ?= -lblas

and mark

	LIBS ?= blas/blas.a
`svm-scale' Usage
=================

The tool `svm-scale', borrowed from LIBSVM, is for scaling input data files.
`train' Usage
=============

Usage: train [options] training_set_file [model_file]
103 -s type : set type of solver (default 1)
104 for multi-class classification
105 0 -- L2-regularized logistic regression (primal)
106 1 -- L2-regularized L2-loss support vector classification (dual)
107 2 -- L2-regularized L2-loss support vector classification (primal)
108 3 -- L2-regularized L1-loss support vector classification (dual)
109 4 -- support vector classification by Crammer and Singer
110 5 -- L1-regularized L2-loss support vector classification
111 6 -- L1-regularized logistic regression
112 7 -- L2-regularized logistic regression (dual)
for regression
11 -- L2-regularized L2-loss support vector regression (primal)
115 12 -- L2-regularized L2-loss support vector regression (dual)
116 13 -- L2-regularized L1-loss support vector regression (dual)
117 -c cost : set the parameter C (default 1)
118 -p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
119 -e epsilon : set tolerance of termination criterion
	-s 0 and 2
		|f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
		where f is the primal function and pos/neg are # of
		positive/negative data (default 0.01)
	-s 11
		|f'(w)|_2 <= eps*|f'(w0)|_2 (default 0.0001)
	-s 1, 3, 4, and 7
		Dual maximal violation <= eps; similar to libsvm (default 0.1)
	-s 5 and 6
		|f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,
		where f is the primal function (default 0.01)
	-s 12 and 13
		|f'(alpha)|_1 <= eps |f'(alpha0)|_1,
		where f is the dual function (default 0.1)
134 -B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)
135 -wi weight: weights adjust the parameter C of different classes (see README for details)
136 -v n: n-fold cross validation mode
137 -C : find parameters (C for -s 0, 2 and C, p for -s 11)
138 -q : quiet mode (no outputs)
140 Option -v randomly splits the data into n parts and calculates cross
141 validation accuracy on them.
143 Option -C conducts cross validation under different parameters and finds
144 the best one. This option is supported only by -s 0, -s 2 (for finding
C) and -s 11 (for finding C, p). If the solver is not specified, -s 2
is used.

Formulations:
150 For L2-regularized logistic regression (-s 0), we solve
152 min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))
154 For L2-regularized L2-loss SVC dual (-s 1), we solve
min_alpha 0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
    s.t.   0 <= alpha_i,
159 For L2-regularized L2-loss SVC (-s 2), we solve
161 min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2
163 For L2-regularized L1-loss SVC dual (-s 3), we solve
165 min_alpha 0.5(alpha^T Q alpha) - e^T alpha
166 s.t. 0 <= alpha_i <= C,
168 For L1-regularized L2-loss SVC (-s 5), we solve
170 min_w \sum |w_j| + C \sum max(0, 1- y_i w^Tx_i)^2
172 For L1-regularized logistic regression (-s 6), we solve
174 min_w \sum |w_j| + C \sum log(1 + exp(-y_i w^Tx_i))
176 For L2-regularized logistic regression (-s 7), we solve
178 min_alpha 0.5(alpha^T Q alpha) + \sum alpha_i*log(alpha_i) + \sum (C-alpha_i)*log(C-alpha_i) - a constant
179 s.t. 0 <= alpha_i <= C,
where Q is a matrix with Q_ij = y_i y_j x_i^T x_j.
185 For L2-regularized L2-loss SVR (-s 11), we solve
187 min_w w^Tw/2 + C \sum max(0, |y_i-w^Tx_i|-epsilon)^2
189 For L2-regularized L2-loss SVR dual (-s 12), we solve
191 min_beta 0.5(beta^T (Q + lambda I/2/C) beta) - y^T beta + \sum |beta_i|
193 For L2-regularized L1-loss SVR dual (-s 13), we solve
195 min_beta 0.5(beta^T Q beta) - y^T beta + \sum |beta_i|
196 s.t. -C <= beta_i <= C,
where Q is a matrix with Q_ij = x_i^T x_j.
202 If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias].
204 The primal-dual relationship implies that -s 1 and -s 2 give the same
205 model, -s 0 and -s 7 give the same, and -s 11 and -s 12 give the same.
We implement the one-vs-the-rest multi-class strategy for classification.
208 In training i vs. non_i, their C parameters are (weight from -wi)*C
209 and C, respectively. If there are only two classes, we train only one
210 model. Thus weight1*C vs. weight2*C is used. See examples below.
212 We also implement multi-class SVM by Crammer and Singer (-s 4):
214 min_{w_m, \xi_i} 0.5 \sum_m ||w_m||^2 + C \sum_i \xi_i
s.t.  w^T_{y_i} x_i - w^T_m x_i >= e^m_i - \xi_i  \forall m,i
217 where e^m_i = 0 if y_i = m,
218 e^m_i = 1 if y_i != m,
220 Here we solve the dual problem:
222 min_{\alpha} 0.5 \sum_m ||w_m(\alpha)||^2 + \sum_i \sum_m e^m_i alpha^m_i
223 s.t. \alpha^m_i <= C^m_i \forall m,i , \sum_m \alpha^m_i=0 \forall i
225 where w_m(\alpha) = \sum_i \alpha^m_i x_i,
226 and C^m_i = C if m = y_i,
227 C^m_i = 0 if m != y_i.
`predict' Usage
===============

Usage: predict [options] test_file model_file output_file
234 -b probability_estimates: whether to output probability estimates, 0 or 1 (default 0); currently for logistic regression only
235 -q : quiet mode (no outputs)
237 Note that -b is only needed in the prediction phase. This is different
238 from the setting of LIBSVM.
Examples
========

> train data_file

Train linear SVM with L2-loss function.
252 > train -s 0 data_file
254 Train a logistic regression model.
256 > train -v 5 -e 0.001 data_file
258 Do five-fold cross-validation using L2-loss SVM.
259 Use a smaller stopping tolerance 0.001 than the default
260 0.1 if you want more accurate solutions.
> train -C data_file

Best C = 0.000488281  CV accuracy = 83.3333%
265 > train -c 0.000488281 data_file
Conduct cross validation many times with L2-loss SVM and find the
parameter C which achieves the best cross validation accuracy. Then
use the selected C to train the data and obtain a model.
271 > train -C -s 0 -v 3 -c 0.5 -e 0.0001 data_file
273 For parameter selection by -C, users can specify other
274 solvers (currently -s 0, -s 2 and -s 11 are supported) and
a different number of CV folds. Further, users can use
276 the -c option to specify the smallest C value of the
277 search range. This option is useful when users want to
278 rerun the parameter selection procedure from a specified
279 C under a different setting, such as a stricter stopping
280 tolerance -e 0.0001 in the above example. Similarly, for
281 -s 11, users can use the -p option to specify the
282 maximal p value of the search range.
284 > train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file
286 Train four classifiers:
287 positive negative Cp Cn
288 class 1 class 2,3,4. 20 10
289 class 2 class 1,3,4. 50 10
290 class 3 class 1,2,4. 20 10
291 class 4 class 1,2,3. 10 10
293 > train -c 10 -w3 1 -w2 5 two_class_data_file
295 If there are only two classes, we train ONE model.
296 The C values for the two classes are 10 and 50.
298 > predict -b 1 test_file data_file.model output_file
300 Output probability estimates (for logistic regression only).
Library Usage
=============

These functions and structures are declared in the header file `linear.h'.
306 You can see `train.c' and `predict.c' for examples showing how to use them.
We define LIBLINEAR_VERSION and declare `extern int liblinear_version;'
in linear.h, so you can check the version number.
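For example (a minimal sketch; it only assumes linear.h is on the
include path):

	#include <stdio.h>
	#include "linear.h"

	int main(void)
	{
		/* LIBLINEAR_VERSION is the version the program was compiled
		   against; liblinear_version is the version actually linked. */
		printf("header version:  %d\n", LIBLINEAR_VERSION);
		printf("library version: %d\n", liblinear_version);
		return 0;
	}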
310 - Function: model* train(const struct problem *prob,
311 const struct parameter *param);
313 This function constructs and returns a linear classification
or regression model according to the given training data and
parameters.
317 struct problem describes the problem:
struct problem
{
	int l, n;
	double *y;
	struct feature_node **x;
	double bias;
};
327 where `l' is the number of training data. If bias >= 0, we assume
328 that one additional feature is added to the end of each data
instance. `n' is the number of features (including the bias feature
if bias >= 0). `y' is an array containing the target values (integers
in classification, real numbers in regression), and `x' is an array
332 of pointers, each of which points to a sparse representation (array
333 of feature_node) of one training vector.
335 For example, if we have the following training data:
337 LABEL ATTR1 ATTR2 ATTR3 ATTR4 ATTR5
338 ----- ----- ----- ----- ----- -----
 1      0    0.1   0.2    0     0
 2      0    0.1   0.3  -1.2    0
 1     0.4    0     0     0     0
 2      0    0.1    0    1.4   0.5
 3    -0.1  -0.2   0.1   1.1   0.1
and bias = 1, then the components of problem are:

l = 5
n = 6

y -> 1 2 1 2 3

352 x -> [ ] -> (2,0.1) (3,0.2) (6,1) (-1,?)
353 [ ] -> (2,0.1) (3,0.3) (4,-1.2) (6,1) (-1,?)
354 [ ] -> (1,0.4) (6,1) (-1,?)
355 [ ] -> (2,0.1) (4,1.4) (5,0.5) (6,1) (-1,?)
356 [ ] -> (1,-0.1) (2,-0.2) (3,0.1) (4,1.1) (5,0.1) (6,1) (-1,?)
struct parameter describes the parameters of a linear classification
or regression model:

struct parameter
{
	int solver_type;

	/* these are for training only */
	double eps;             /* stopping criteria */
	double C;
	int nr_weight;
	int *weight_label;
	double *weight;
	double p;
};
374 solver_type can be one of L2R_LR, L2R_L2LOSS_SVC_DUAL, L2R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, MCSVM_CS, L1R_L2LOSS_SVC, L1R_LR, L2R_LR_DUAL, L2R_L2LOSS_SVR, L2R_L2LOSS_SVR_DUAL, L2R_L1LOSS_SVR_DUAL.
for classification
L2R_LR L2-regularized logistic regression (primal)
377 L2R_L2LOSS_SVC_DUAL L2-regularized L2-loss support vector classification (dual)
378 L2R_L2LOSS_SVC L2-regularized L2-loss support vector classification (primal)
379 L2R_L1LOSS_SVC_DUAL L2-regularized L1-loss support vector classification (dual)
380 MCSVM_CS support vector classification by Crammer and Singer
381 L1R_L2LOSS_SVC L1-regularized L2-loss support vector classification
382 L1R_LR L1-regularized logistic regression
383 L2R_LR_DUAL L2-regularized logistic regression (dual)
for regression
L2R_L2LOSS_SVR L2-regularized L2-loss support vector regression (primal)
386 L2R_L2LOSS_SVR_DUAL L2-regularized L2-loss support vector regression (dual)
387 L2R_L1LOSS_SVR_DUAL L2-regularized L1-loss support vector regression (dual)
C is the cost of constraint violation.
p is the epsilon in the epsilon-insensitive loss function of support vector regression.
391 eps is the stopping criterion.
nr_weight, weight_label, and weight are used to change the penalty
for some classes (if the weight for a class is not changed, it is
set to 1). This is useful for training a classifier using unbalanced
input data or with asymmetric misclassification costs.
398 nr_weight is the number of elements in the array weight_label and
399 weight. Each weight[i] corresponds to weight_label[i], meaning that
400 the penalty of class weight_label[i] is scaled by a factor of weight[i].
402 If you do not want to change penalty for any of the classes,
403 just set nr_weight to 0.
405 *NOTE* To avoid wrong parameters, check_parameter() should be
406 called before train().
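Putting the pieces together, a training call might look like this (a
sketch continuing the problem `prob' built above; zeroing the whole
structure first keeps any fields we do not set at safe defaults):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		struct parameter param;
		const char *error_msg;
		struct model *model_;

		memset(&param, 0, sizeof(param)); /* unset fields -> 0/NULL */
		param.solver_type = L2R_L2LOSS_SVC_DUAL; /* the default -s 1 */
		param.C = 1;
		param.eps = 0.1;  /* default tolerance for this solver */
		param.p = 0.1;

		error_msg = check_parameter(&prob, &param);
		if (error_msg)
		{
			fprintf(stderr, "ERROR: %s\n", error_msg);
			return 1;
		}

		model_ = train(&prob, &param);
		/* ... use the model here ... */
		free_and_destroy_model(&model_);
		destroy_param(&param);
		return 0;
	}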
408 struct model stores the model obtained from the training procedure:
struct model
{
	struct parameter param;
	int nr_class;           /* number of classes */
	int nr_feature;
	double *w;
	int *label;             /* label of each class */
	double bias;
};
420 param describes the parameters used to obtain the model.
422 nr_class and nr_feature are the number of classes and features,
423 respectively. nr_class = 2 for regression.
The array w gives feature weights; its size is
nr_feature*nr_class, but it is nr_feature if nr_class = 2 and the
solver is not MCSVM_CS (see predict_values below). We use the
one-vs-the-rest strategy for multi-class classification, so each
feature index corresponds to nr_class weight values. Weights are
organized in the following way:
431 +------------------+------------------+------------+
432 | nr_class weights | nr_class weights | ...
433 | for 1st feature | for 2nd feature |
434 +------------------+------------------+------------+
436 If bias >= 0, x becomes [x; bias]. The number of features is
437 increased by one, so w is a (nr_feature+1)*nr_class array. The
438 value of bias is stored in the variable bias.
440 The array label stores class labels.
442 - Function: void cross_validation(const problem *prob, const parameter *param, int nr_fold, double *target);
This function conducts cross validation. Data are separated into
nr_fold folds. Under the given parameters, each fold is sequentially
validated using the model obtained from training the remaining
folds. Predicted labels in the validation process are stored in the
array called target.

The format of prob is the same as that for train().
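For example, cross validation accuracy can be computed from target as
follows (a sketch continuing the example above; it mirrors what the
`train' program does for the -v option and needs <stdio.h> and
<stdlib.h>):

	double *target = malloc(prob.l * sizeof(double));
	int i, total_correct = 0;

	cross_validation(&prob, &param, 5, target);
	for (i = 0; i < prob.l; i++)
		if (target[i] == prob.y[i])
			++total_correct;
	printf("Cross Validation Accuracy = %g%%\n",
	       100.0 * total_correct / prob.l);
	free(target);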
452 - Function: void find_parameters(const struct problem *prob,
453 const struct parameter *param, int nr_fold, double start_C,
454 double start_p, double *best_C, double *best_p, double *best_score);
This function is similar to cross_validation. However, instead of
conducting cross validation under a single specified parameter
setting, it searches over a range of parameters. For -s 0 and 2, it
458 conducts cross validation many times under parameters C = start_C,
459 2*start_C, 4*start_C, 8*start_C, ..., and finds the best one with
460 the highest cross validation accuracy. For -s 11, it conducts cross
461 validation many times with a two-fold loop. The outer loop considers a
462 default sequence of p = 19/20*max_p, ..., 1/20*max_p, 0 and
463 under each p value the inner loop considers a sequence of parameters
464 C = start_C, 2*start_C, 4*start_C, ..., and finds the best one with the
465 lowest mean squared error.
467 If start_C <= 0, then this procedure calculates a small enough C
468 for prob as the start_C. The procedure stops when the models of
469 all folds become stable or C reaches max_C.
471 If start_p <= 0, then this procedure calculates a maximal p for prob as
472 the start_p. Otherwise, the procedure starts with the first
473 i/20*max_p <= start_p so the outer sequence is i/20*max_p,
474 (i-1)/20*max_p, ..., 0.
476 The best C, the best p, and the corresponding accuracy (or MSE) are
477 assigned to *best_C, *best_p and *best_score, respectively. For
478 classification, *best_p is not used, and the returned value is -1.
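A typical call might look like this (a sketch; the non-positive
start_C and start_p ask the procedure to choose its own starting
points, as described above):

	double best_C, best_p, best_score;

	/* 5-fold CV; param.solver_type should be one of -s 0, 2, or 11 */
	find_parameters(&prob, &param, 5, -1, -1,
	                &best_C, &best_p, &best_score);
	printf("best C = %g  best score = %g\n", best_C, best_score);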
480 - Function: double predict(const model *model_, const feature_node *x);
482 For a classification model, the predicted class for x is returned.
483 For a regression model, the function value of x calculated using
484 the model is returned.
486 - Function: double predict_values(const struct model *model_,
487 const struct feature_node *x, double* dec_values);
This function gives nr_w decision values in the array dec_values.
nr_w=1 if regression is applied or the number of classes is two. An
exception is multi-class SVM by Crammer and Singer (-s 4), where nr_w = 2
if there are two classes. For all other situations, nr_w is the
number of classes.
We implement the one-vs-the-rest multi-class strategy (-s 0,1,2,3,5,6,7)
495 and multi-class SVM by Crammer and Singer (-s 4) for multi-class SVM.
496 The class with the highest decision value is returned.
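Since nr_w never exceeds the number of classes, a buffer of nr_class
doubles is always large enough. A sketch, where x1 is an instance in
the sparse format shown earlier:

	int nr_class = get_nr_class(model_);
	double *dec_values = malloc(nr_class * sizeof(double));
	double predicted_label = predict_values(model_, x1, dec_values);

	printf("predicted label: %g\n", predicted_label);
	free(dec_values);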
498 - Function: double predict_probability(const struct model *model_,
499 const struct feature_node *x, double* prob_estimates);
501 This function gives nr_class probability estimates in the array
502 prob_estimates. nr_class can be obtained from the function
503 get_nr_class. The class with the highest probability is
returned. Currently, we support only the probability outputs of
logistic regression.
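A usage sketch, guarded with check_probability_model (described below)
because only logistic regression models support this call:

	if (check_probability_model(model_))
	{
		int k, nr_class = get_nr_class(model_);
		int *labels = malloc(nr_class * sizeof(int));
		double *prob_estimates = malloc(nr_class * sizeof(double));

		get_labels(model_, labels);
		predict_probability(model_, x1, prob_estimates);
		/* estimates follow the order of labels from get_labels */
		for (k = 0; k < nr_class; k++)
			printf("P(y=%d) = %g\n", labels[k], prob_estimates[k]);

		free(labels);
		free(prob_estimates);
	}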
507 - Function: int get_nr_feature(const model *model_);
509 The function gives the number of attributes of the model.
511 - Function: int get_nr_class(const model *model_);
513 The function gives the number of classes of the model.
514 For a regression model, 2 is returned.
516 - Function: void get_labels(const model *model_, int* label);
This function outputs the class labels into an array called label.
519 For a regression model, label is unchanged.
- Function: double get_decfun_coef(const struct model *model_, int feat_idx,
            int label_idx);
524 This function gives the coefficient for the feature with feature index =
525 feat_idx and the class with label index = label_idx. Note that feat_idx
526 starts from 1, while label_idx starts from 0. If feat_idx is not in the
527 valid range (1 to nr_feature), then a zero value will be returned. For
528 classification models, if label_idx is not in the valid range (0 to
529 nr_class-1), then a zero value will be returned; for regression models,
530 label_idx is ignored.
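For example, the whole weight matrix can be printed through this
accessor instead of indexing the array w directly (a sketch):

	int j, k;
	int nr_feature = get_nr_feature(model_);
	int nr_class = get_nr_class(model_);

	for (j = 1; j <= nr_feature; j++)      /* feat_idx starts from 1 */
		for (k = 0; k < nr_class; k++) /* label_idx starts from 0 */
			printf("w(feature %d, label index %d) = %g\n",
			       j, k, get_decfun_coef(model_, j, k));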
532 - Function: double get_decfun_bias(const struct model *model_, int label_idx);
This function gives the bias term corresponding to the class with
label index label_idx. For classification models, if label_idx is not in a valid range
536 (0 to nr_class-1), then a zero value will be returned; for regression
537 models, label_idx is ignored.
539 - Function: const char *check_parameter(const struct problem *prob,
540 const struct parameter *param);
542 This function checks whether the parameters are within the feasible
543 range of the problem. This function should be called before calling
544 train() and cross_validation(). It returns NULL if the
545 parameters are feasible, otherwise an error message is returned.
547 - Function: int check_probability_model(const struct model *model);
549 This function returns 1 if the model supports probability output;
550 otherwise, it returns 0.
552 - Function: int check_regression_model(const struct model *model);
This function returns 1 if the model is a regression model; otherwise,
it returns 0.
557 - Function: int save_model(const char *model_file_name,
558 const struct model *model_);
This function saves a model to a file; it returns 0 on success, or -1
if an error occurs.
563 - Function: struct model *load_model(const char *model_file_name);
565 This function returns a pointer to the model read from the file,
566 or a null pointer if the model could not be loaded.
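The two calls are commonly paired, checking both return values (a
sketch; the file name is arbitrary):

	if (save_model("my.model", model_) != 0)
		fprintf(stderr, "can't save model to file\n");

	struct model *loaded = load_model("my.model");
	if (loaded == NULL)
		fprintf(stderr, "can't load model from file\n");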
568 - Function: void free_model_content(struct model *model_ptr);
570 This function frees the memory used by the entries in a model structure.
572 - Function: void free_and_destroy_model(struct model **model_ptr_ptr);
This function frees the memory used by a model and destroys the model
structure.
577 - Function: void destroy_param(struct parameter *param);
579 This function frees the memory used by a parameter set.
581 - Function: void set_print_string_function(void (*print_func)(const char *));
Users can specify their own output routine through this function. Use

	set_print_string_function(NULL);

for default printing to stdout.
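For example, the `train' program implements its -q option by
installing a function that discards everything; a sketch of the same
idea:

	/* a print function that swallows all solver output */
	static void print_null(const char *s) { (void)s; }

	/* install it before calling train() for quiet mode */
	set_print_string_function(print_null);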
587 Building Windows Binaries
588 =========================
590 Windows binaries are available in the directory `windows'. To re-build
591 them via Visual C++, use the following steps:
1. Open a DOS command box and change to the liblinear directory. If
the environment variables of VC++ have not been set, type

"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\amd64\vcvars64.bat"

You may have to modify the above command according to which version of
VC++ you have and where it is installed.
2. Type

nmake -f Makefile.win clean all
605 3. (optional) To build shared library liblinear.dll, type
607 nmake -f Makefile.win lib
4. (optional) To build 32-bit windows binaries, you must
610 (1) Setup "C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin\vcvars32.bat" instead of vcvars64.bat
611 (2) Change CFLAGS in Makefile.win: /D _WIN64 to /D _WIN32
613 MATLAB/OCTAVE Interface
614 =======================
616 Please check the file README in the directory `matlab'.
PYTHON Interface
================

Please check the file README in the directory `python'.
623 Additional Information
624 ======================
626 If you find LIBLINEAR helpful, please cite it as
628 R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin.
629 LIBLINEAR: A Library for Large Linear Classification, Journal of
630 Machine Learning Research 9(2008), 1871-1874. Software available at
631 http://www.csie.ntu.edu.tw/~cjlin/liblinear
633 For any questions and comments, please send your email to
634 cjlin@csie.ntu.edu.tw