From: Josh Kupershmidt
Date: Sat, 25 Jul 2015 16:50:37 +0000 (-0700)
Subject: Few more copyediting fixes.
X-Git-Tag: v7.2~24^2
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=50911bc300af5567adec565dc465172b34b346a7;p=pgbadger

Few more copyediting fixes.
---

diff --git a/README b/README
index e2941da..8d92a39 100644
--- a/README
+++ b/README
@@ -84,7 +84,7 @@ SYNOPSIS
     -w | --watch-mode      : only report errors just like logwatch could do.
     -x | --extension       : output format. Values: text, html, bin, json or
                              tsung. Default: html
-    -X | --extra-files     : in incremetal mode allow pgbadger to write CSS and
+    -X | --extra-files     : in incremetal mode allow pgBadger to write CSS and
                              JS files in the output directory as separate files.
     -z | --zcat exec_path  : set the full path to the zcat program. Use it if
                              zcat or bzcat or unzip is not in your path.
@@ -121,14 +121,14 @@ SYNOPSIS
                              You can use this option multiple times.
     --exclude-appname name : exclude entries for the specified application name
                              from report. Example: "pg_dump".
-    --exclude-line regex   : pgbadger will start to exclude any log entry that
+    --exclude-line regex   : pgBadger will start to exclude any log entry that
                              will match the given regex. Can be used multiple
                              time.
     --anonymize            : obscure all literals in queries, useful to hide
                              confidential data.
-    --noreport             : prevent pgbadger to create reports in incremental
+    --noreport             : prevent pgBadger to create reports in incremental
                              mode.
-    --log-duration         : force pgbadger to associate log entries generated
+    --log-duration         : force pgBadger to associate log entries generated
                              by both log_duration = on and log_statement = 'all'
     --enable-checksum      : used to add a md5 sum under each query report.
@@ -196,7 +196,7 @@ SYNOPSIS
         -O /var/www/pg_reports/

     If you have a pg_dump at 23:00 and 13:00 each day lasting half an hour,
-    you can use pgbadger as follows to exclude those periods from the report:
+    you can use pgBadger as follows to exclude those periods from the report:

         pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log

@@ -439,7 +439,7 @@ PARALLEL PROCESSING
     To enable parallel processing you just have to use the -j N option where
     N is the number of cores you want to use.

-    pgbadger will then proceed as follow:
+    pgBadger will then proceed as follow:

     for each log file
          chunk size = int(file size / N)

@@ -465,7 +465,7 @@ PARALLEL PROCESSING
     of the -J option starts being really interesting with 8 Cores. Using
     this method you will be sure not to lose any queries in the reports.

-    He are benchmarks done on a server with 8 CPUs and a single file of
+    Here are benchmarks done on a server with 8 CPUs and a single file of
     9.5GB.

            Option  |  1 CPU  | 2 CPU | 4 CPU | 8 CPU
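The PARALLEL PROCESSING hunk above quotes the README's description of how pgBadger splits a log file across N cores: for each log file, chunk size = int(file size / N). The following is a hypothetical Python sketch of that byte-range splitting idea, not pgBadger's actual (Perl) implementation; the function name `chunk_offsets` and the remainder handling are assumptions for illustration only.

```python
def chunk_offsets(file_size, n_cores):
    """Divide file_size bytes into n_cores contiguous (start, end) ranges,
    one per worker, mirroring: chunk size = int(file size / N)."""
    chunk = int(file_size / n_cores)
    offsets = []
    start = 0
    for i in range(n_cores):
        # Assumed detail: the last chunk absorbs the remainder so that
        # no trailing bytes of the log file are dropped.
        end = file_size if i == n_cores - 1 else start + chunk
        offsets.append((start, end))
        start = end
    return offsets

print(chunk_offsets(10, 3))  # [(0, 3), (3, 6), (6, 10)]
```

In practice a splitter like this would also have to nudge each boundary forward to the next newline so that no log entry is cut in half between two workers, which is why the README warns that the alternative -J (one file per core) method is the one guaranteed not to lose queries.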