From: rafan
Date: Sun, 23 Aug 2009 16:36:37 +0000 (+0000)
Subject: - white space cleanup
X-Git-Tag: v140~2
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=22dd3db9fb6f9004e95f62902de2b4d894d73619;p=liblinear

- white space cleanup
---

diff --git a/README b/README
index 419261a..91aee51 100644
--- a/README
+++ b/README
@@ -80,13 +80,13 @@ On other systems, consult `Makefile' to build them (e.g., see
 'Building Windows binaries' in this file) or use the pre-built
 binaries (Windows binaries are in the directory `windows').
 
-This software uses some level-1 BLAS subroutines. The needed functions are 
+This software uses some level-1 BLAS subroutines. The needed functions are
 included in this package. If a BLAS library is available on your
 machine, you may use it by modifying the Makefile: Unmark the following
 line
 
         #LIBS ?= -lblas
 
-and mark 
+and mark
 
         LIBS ?= blas/blas.a
@@ -107,8 +107,8 @@ options:
 -e epsilon : set tolerance of termination criterion
         -s 0 and 2
                 |f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
-                where f is the primal function and pos/neg are # of 
-                positive/negative data (default 0.01) 
+                where f is the primal function and pos/neg are # of
+                positive/negative data (default 0.01)
         -s 1, 3, and 4
                 Dual maximal violation <= eps; similar to libsvm (default 0.1)
         -s 5 and 6
@@ -130,7 +130,7 @@ min_w w^Tw/2 + C \sum log(1 + exp(-y_i w^Tx_i))
 
 For L2-regularized L2-loss SVC dual (-s 1), we solve
 
-min_alpha  0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha 
+min_alpha  0.5(alpha^T (Q + I/2/C) alpha) - e^T alpha
     s.t.   0 <= alpha_i,
 
 For L2-regularized L2-loss SVC (-s 2), we solve
@@ -139,7 +139,7 @@ min_w w^Tw/2 + C \sum max(0, 1- y_i w^Tx_i)^2
 
 For L2-regularized L1-loss SVC dual (-s 3), we solve
 
-min_alpha  0.5(alpha^T Q alpha) - e^T alpha 
+min_alpha  0.5(alpha^T Q alpha) - e^T alpha
     s.t.   0 <= alpha_i <= C,
 
 For L1-regularized L2-loss SVC (-s 5), we solve
@@ -154,7 +154,7 @@ where Q is a matrix with Q_ij = y_i y_j x_i^T x_j.
 
-If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias]. 
+If bias >= 0, w becomes [w; w_{n+1}] and x becomes [x; bias].
 
 The primal-dual relationship implies that -s 1 and -s 2 gives the same
 model.
 
@@ -216,8 +216,8 @@ class 4 class 1,2,3.
 10 10
 
 > train -c 10 -w3 1 -w2 5 two_class_data_file
 
-If there are only two classes, we train ONE model. 
-The C values for the two classes are 10 and 50. 
+If there are only two classes, we train ONE model.
+The C values for the two classes are 10 and 50.
 
 > predict -b 1 test_file data_file.model output_file
@@ -290,15 +290,15 @@ Library Usage
     solver_type can be one of L2R_LR, L2R_L2LOSS_SVC_DUAL, L2R_L2LOSS_SVC, L2R_L1LOSS_SVC_DUAL, MCSVM_CS, L1R_L2LOSS_SVC, L1R_LR.
 
         L2R_LR                L2-regularized logistic regression
-        L2R_L2LOSS_SVC_DUAL   L2-regularized L2-loss support vector classification (dual) 
+        L2R_L2LOSS_SVC_DUAL   L2-regularized L2-loss support vector classification (dual)
         L2R_L2LOSS_SVC        L2-regularized L2-loss support vector classification (primal)
-        L2R_L1LOSS_SVC_DUAL   L2-regularized L1-loss support vector classification (dual) 
-        MCSVM_CS              multi-class support vector classification by Crammer and Singer 
+        L2R_L1LOSS_SVC_DUAL   L2-regularized L1-loss support vector classification (dual)
+        MCSVM_CS              multi-class support vector classification by Crammer and Singer
         L1R_L2LOSS_SVC        L1-regularized L2-loss support vector classification
         L1R_LR                L1-regularized logistic regression
 
-    C is the cost of constraints violation. 
-    eps is the stopping criterion. 
+    C is the cost of constraints violation.
+    eps is the stopping criterion.
 
     nr_weight, weight_label, and weight are used to change the penalty
     for some classes (If the weight for a class is not changed, it is
@@ -337,8 +337,8 @@ Library Usage
         organized in the following way
 
         +------------------+------------------+------------+
-        | nr_class weights | nr_class weights |  ... 
-        | for 1st feature  | for 2nd feature  | 
+        | nr_class weights | nr_class weights |  ...
+        | for 1st feature  | for 2nd feature  |
         +------------------+------------------+------------+
 
         If bias >= 0, x becomes [x; bias]. The number of features is
@@ -367,7 +367,7 @@ Library Usage
 
     This function gives nr_w decision values in the array
     dec_values. nr_w is 1 if there are two classes except multi-class
-    svm by Crammer and Singer (-s 4), and is the number of classes otherwise. 
+    svm by Crammer and Singer (-s 4), and is the number of classes otherwise.
     We implement one-vs-the rest multi-class strategy (-s 0,1,2,3) and
     multi-class svm by Crammer and Singer (-s 4) for multi-class SVM.