From: Euler Taveira
Date: Thu, 2 Mar 2017 13:19:10 +0000 (-0300)
Subject: Fix a bunch of typos and do some cosmetic modifications
X-Git-Tag: v9.2~15^2
X-Git-Url: https://granicus.if.org/sourcecode?a=commitdiff_plain;h=9370ee92da7e97c38727190ec3924d2dc6c20311;p=pgbadger

Fix a bunch of typos and do some cosmetic modifications

Fix a lot of mistyped words and make some grammatical fixes. Use
'pgBadger' where it refers to the program and not the binary file.
Also, use "official" expressions such as PgBouncer, GitHub, and CSS.
POD file was synced with README.
---

diff --git a/README b/README
index 5567dda..dcdb8ab 100644
--- a/README
+++ b/README
@@ -23,11 +23,11 @@ SYNOPSIS
     -c | --dbclient host   : only report on entries for the given client host.
     -C | --nocomment       : remove comments like /* ... */ from queries.
     -d | --dbname database : only report on entries for the given database.
-    -D | --dns-resolv      : client ip adresses are replaced by their DNS name.
+    -D | --dns-resolv      : client ip addresses are replaced by their DNS name.
                              Be warned that this can really slow down pgBadger.
     -e | --end datetime    : end date/time for the data to be parsed in log.
     -f | --format logtype  : possible values: syslog, syslog2, stderr, csv and
-                             pgbouncer. Use this option when pgbadger is not
+                             pgbouncer. Use this option when pgBadger is not
                              able to auto-detect the log format Default: stderr.
     -G | --nograph         : disable graphs on HTML output. Enabled by default.
     -h | --help            : show this message and exit.
@@ -66,9 +66,9 @@ SYNOPSIS
     -q | --quiet           : don't print anything to stdout, not even a progress
                              bar.
     -r | --remote-host ip  : set the host where to execute the cat command on
-                             remote logfile to parse localy the file.
-    -R | --retention N     : number of week to keep in incremental mode. Default
-                             to 0, disabled. Used to set the number of weel to
+                             remote logfile to parse the file locally.
+    -R | --retention N     : number of weeks to keep in incremental mode. Default
+                             to 0, disabled. Used to set the number of weeks to
                              keep in output directory. Older weeks and days
                              directory are automatically removed.
     -s | --sample number   : number of query samples to store. Default: 3.
@@ -83,12 +83,12 @@ SYNOPSIS
     -w | --watch-mode      : only report errors just like logwatch could do.
     -x | --extension       : output format. Values: text, html, bin, json or
                              tsung. Default: html
-    -X | --extra-files     : in incremetal mode allow pgbadger to write CSS and
+    -X | --extra-files     : in incremental mode allow pgBadger to write CSS and
                              JS files in the output directory as separate files.
     -z | --zcat exec_path  : set the full path to the zcat program. Use it if
                              zcat or bzcat or unzip is not in your path.
-    -Z | --timezone +/-XX  : Set the number of hour(s) from GMT of the timezone.
-                             Use this to adjust date/time in javascript graphs.
+    -Z | --timezone +/-XX  : Set the number of hours from GMT of the timezone.
+                             Use this to adjust date/time in JavaScript graphs.
     --pie-limit num        : pie data lower than num% will show a sum instead.
     --exclude-query regex  : any query matching the given regex will be excluded
                              from the report. For example: "^(VACUUM|COMMIT)"
@@ -122,27 +122,27 @@ SYNOPSIS
                              You can use this option multiple times.
     --exclude-appname name : exclude entries for the specified application name
                              from report. Example: "pg_dump".
-    --exclude-line regex   : pgbadger will start to exclude any log entry that
+    --exclude-line regex   : pgBadger will start to exclude any log entry that
                              will match the given regex. Can be used multiple time.
     --anonymize            : obscure all literals in queries, useful to hide
                              confidential data.
-    --noreport             : prevent pgbadger to create reports in incremental
+    --noreport             : prevent pgBadger from creating reports in incremental
                              mode.
-    --log-duration         : force pgbadger to associate log entries generated
+    --log-duration         : force pgBadger to associate log entries generated
                              by both log_duration = on and log_statement = 'all'
     --enable-checksum      : used to add a md5 sum under each query report.
     --journalctl command   : command to use to replace PostgreSQL logfile by
                              a call to journalctl. Basically it might be:
                              journalctl -u postgresql-9.5
     --pid-dir dirpath      : set the path of the directory where the pid file
-                             will be written to be able to run two pgbadger at
+                             will be written to be able to run two pgBadger at
                              the same time.
     --rebuild              : used to rebuild all html reports in incremental
                              output directories where there is binary data files.
-    --pgbouncer-only       : only show pgbouncer related menu in the header.
-    --start-monday         : in incremental mode, calendar's weeks start on
-                             sunday. Use this otpion to start on monday.
+    --pgbouncer-only       : only show PgBouncer related menu in the header.
+    --start-monday         : in incremental mode, weeks start on sunday. Use
+                             this option to start on monday.
     --normalized-only      : only dump all normalized query to out.txt

     pgBadger is able to parse a remote log file using a passwordless ssh
@@ -209,7 +209,7 @@ SYNOPSIS
                      -O /var/www/pg_reports/

     If you have a pg_dump at 23:00 and 13:00 each day during half an hour,
-    you can use pgbadger as follow to exclude these period from the report:
+    you can use pgBadger as follows to exclude these periods from the report:

         pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log

@@ -228,23 +228,23 @@ SYNOPSIS
     you don't need to specify any log file at command line, but if you have
     others PostgreSQL log files to parse, you can add them as usual.

-    To rebuild all incremantal html reports after, proceed as follow:
+    To rebuild all incremental html reports after, proceed as follows:

         rm /path/to/reports/*.js
         rm /path/to/reports/*.css
         pgbadger -X -I -O /path/to/reports/ --rebuild

-    it will also update all ressources file (JS and CSS).
+    it will also update all resource files (JS and CSS).

 DESCRIPTION
     pgBadger is a PostgreSQL log analyzer built for speed with fully
     reports from your PostgreSQL log file. It's a single and small Perl
     script that outperforms any other PostgreSQL log analyzer.

-    It is written in pure Perl and uses a javascript library (flotr2) to
+    It is written in pure Perl and uses a JavaScript library (flotr2) to
     draw graphs so that you don't need to install any additional Perl
     modules or other packages. Furthermore, this library gives us more
-    features such as zooming. pgBadger also uses the Bootstrap javascript
+    features such as zooming. pgBadger also uses the Bootstrap JavaScript
     library and the FontAwesome webfont for better design. Everything is
     embedded.

@@ -273,7 +273,7 @@ DESCRIPTION

 FEATURE
     pgBadger reports everything about your SQL queries:

-        Overall statistics
+        Overall statistics.
         The most frequent waiting queries.
         Queries that waited the most.
         Queries generating the most temporary files.

@@ -313,7 +313,7 @@ FEATURE
     All charts are zoomable and can be saved as PNG images. SQL queries
     reported are highlighted and beautified automatically.

-    pgBadger is also able to parse pgbouncer log files and to create the
+    pgBadger is also able to parse PgBouncer log files and to create the
     following reports:

         Request Throughput

@@ -338,8 +338,9 @@ FEATURE
     combined. Histogram granularity can be adjusted using the -A command
     line option.

-    By default they will report the mean of each top queries/errors occuring
-    per hour, but you can specify the granularity down to the minute.
+    By default they will report the mean of each top queries/errors
+    occurring per hour, but you can specify the granularity down to the
+    minute.

     pgBadger can also be used in a central place to parse remote log files
     using a passwordless SSH connection.
     This mode can be used with

@@ -348,7 +349,7 @@ FEATURE

 REQUIREMENT
     pgBadger comes as a single Perl script - you do not need anything other
-    than a modern Perl distribution. Charts are rendered using a Javascript
+    than a modern Perl distribution. Charts are rendered using a JavaScript
     library so you don't need anything other than a web browser. Your
     browser will do all the work.

@@ -368,7 +369,7 @@ REQUIREMENT
     This module is optional, if you don't select the json output format
     you don't need to install it.

-    Compressed log file format is autodetected from the file exension. If
+    Compressed log file format is autodetected from the file extension. If
     pgBadger find a gz extension it will use the zcat utility, with a bz2
     extension it will use bzcat and if the file extension is zip or xz then
     the unzip or xz utilities will be used.

@@ -389,7 +390,7 @@ REQUIREMENT
     files as well as under Windows platform.

 INSTALLATION
-    Download the tarball from github and unpack the archive as follow:
+    Download the tarball from GitHub and unpack the archive as follows:

         tar xzf pgbadger-7.x.tar.gz
         cd pgbadger-7.x/

@@ -518,7 +519,7 @@ PARALLEL PROCESSING
     of the -J option starts being really interesting with 8 Cores. Using
     this method you will be sure not to lose any queries in the reports.

-    He are a benchmarck done on a server with 8 CPUs and a single file of
+    Here is a benchmark done on a server with 8 CPUs and a single file of
     9.5GB.

         Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU

@@ -552,7 +553,7 @@ INCREMENTAL REPORTS
     index file.

     The main index file will show a dropdown menu per week with a link to
-    each week's report and links to daily reports of each week.
+    each week report and links to daily reports of each week.

     For example, if you run pgBadger as follows based on a daily rotated
     file:

@@ -569,7 +570,7 @@ INCREMENTAL REPORTS
     count the log entries twice.
     To save disk space you may want to use the -X or --extra-files command
-    line option to force pgBadger to write javascript and css to separate
+    line option to force pgBadger to write JavaScript and CSS to separate
     files in the output directory. The resources will then be loaded using
     script and link tags.

@@ -604,7 +605,7 @@ BINARY FORMAT

 JSON FORMAT
     JSON format is good for sharing data with other languages, which makes
-    it easy to integrate pgBadger's result into other monitoring tools like
+    it easy to integrate pgBadger result into other monitoring tools like
     Cacti or Graphite.

 AUTHORS

diff --git a/doc/pgBadger.pod b/doc/pgBadger.pod
index ac2e9dd..e5bf106 100644
--- a/doc/pgBadger.pod
+++ b/doc/pgBadger.pod
@@ -25,11 +25,11 @@ Options:
     -c | --dbclient host   : only report on entries for the given client host.
     -C | --nocomment       : remove comments like /* ... */ from queries.
     -d | --dbname database : only report on entries for the given database.
-    -D | --dns-resolv      : client ip adresses are replaced by their DNS name.
+    -D | --dns-resolv      : client ip addresses are replaced by their DNS name.
                              Be warned that this can really slow down pgBadger.
     -e | --end datetime    : end date/time for the data to be parsed in log.
     -f | --format logtype  : possible values: syslog, syslog2, stderr, csv and
-                             pgbouncer. Use this option when pgbadger is not
+                             pgbouncer. Use this option when pgBadger is not
                              able to auto-detect the log format Default: stderr.
     -G | --nograph         : disable graphs on HTML output. Enabled by default.
     -h | --help            : show this message and exit.
@@ -68,9 +68,9 @@ Options:
     -q | --quiet           : don't print anything to stdout, not even a progress
                              bar.
     -r | --remote-host ip  : set the host where to execute the cat command on
-                             remote logfile to parse localy the file.
-    -R | --retention N     : number of week to keep in incremental mode. Default
-                             to 0, disabled. Used to set the number of weel to
+                             remote logfile to parse the file locally.
+    -R | --retention N     : number of weeks to keep in incremental mode. Default
+                             to 0, disabled. Used to set the number of weeks to
                              keep in output directory. Older weeks and days
                              directory are automatically removed.
     -s | --sample number   : number of query samples to store. Default: 3.
@@ -85,12 +85,12 @@ Options:
     -w | --watch-mode      : only report errors just like logwatch could do.
     -x | --extension       : output format. Values: text, html, bin, json or
                              tsung. Default: html
-    -X | --extra-files     : in incremetal mode allow pgbadger to write CSS and
+    -X | --extra-files     : in incremental mode allow pgBadger to write CSS and
                              JS files in the output directory as separate files.
     -z | --zcat exec_path  : set the full path to the zcat program. Use it if
                              zcat or bzcat or unzip is not in your path.
-    -Z | --timezone +/-XX  : Set the number of hour(s) from GMT of the timezone.
-                             Use this to adjust date/time in javascript graphs.
+    -Z | --timezone +/-XX  : Set the number of hours from GMT of the timezone.
+                             Use this to adjust date/time in JavaScript graphs.
     --pie-limit num        : pie data lower than num% will show a sum instead.
     --exclude-query regex  : any query matching the given regex will be excluded
                              from the report. For example: "^(VACUUM|COMMIT)"
@@ -124,27 +124,27 @@ Options:
                              You can use this option multiple times.
     --exclude-appname name : exclude entries for the specified application name
                              from report. Example: "pg_dump".
-    --exclude-line regex   : pgbadger will start to exclude any log entry that
+    --exclude-line regex   : pgBadger will start to exclude any log entry that
                              will match the given regex. Can be used multiple time.
     --anonymize            : obscure all literals in queries, useful to hide
                              confidential data.
-    --noreport             : prevent pgbadger to create reports in incremental
+    --noreport             : prevent pgBadger from creating reports in incremental
                              mode.
-    --log-duration         : force pgbadger to associate log entries generated
+    --log-duration         : force pgBadger to associate log entries generated
                              by both log_duration = on and log_statement = 'all'
     --enable-checksum      : used to add a md5 sum under each query report.
     --journalctl command   : command to use to replace PostgreSQL logfile by
                              a call to journalctl. Basically it might be:
                              journalctl -u postgresql-9.5
     --pid-dir dirpath      : set the path of the directory where the pid file
-                             will be written to be able to run two pgbadger at
+                             will be written to be able to run two pgBadger at
                              the same time.
     --rebuild              : used to rebuild all html reports in incremental
                              output directories where there is binary data files.
-    --pgbouncer-only       : only show pgbouncer related menu in the header.
-    --start-monday         : in incremental mode, calendar's weeks start on
-                             sunday. Use this otpion to start on monday.
+    --pgbouncer-only       : only show PgBouncer related menu in the header.
+    --start-monday         : in incremental mode, weeks start on sunday. Use
+                             this option to start on monday.
     --normalized-only      : only dump all normalized query to out.txt

@@ -211,7 +211,7 @@ reports:
                   -O /var/www/pg_reports/

 If you have a pg_dump at 23:00 and 13:00 each day during half an hour, you can
-use pgbadger as follow to exclude these period from the report:
+use pgBadger as follows to exclude these periods from the report:

     pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log

@@ -230,13 +230,13 @@ or worst, call it from a remote host:

 you don't need to specify any log file at command line, but if you have
 others PostgreSQL log files to parse, you can add them as usual.

-To rebuild all incremantal html reports after, proceed as follow:
+To rebuild all incremental html reports after, proceed as follows:

     rm /path/to/reports/*.js
     rm /path/to/reports/*.css
     pgbadger -X -I -O /path/to/reports/ --rebuild

-it will also update all ressources file (JS and CSS).
+it will also update all resource files (JS and CSS).
 =head1 DESCRIPTION

@@ -244,10 +244,10 @@
 pgBadger is a PostgreSQL log analyzer built for speed with fully reports
 from your PostgreSQL log file. It's a single and small Perl script that
 outperforms any other PostgreSQL log analyzer.

-It is written in pure Perl and uses a javascript library (flotr2) to draw
+It is written in pure Perl and uses a JavaScript library (flotr2) to draw
 graphs so that you don't need to install any additional Perl modules or
 other packages. Furthermore, this library gives us more features such
-as zooming. pgBadger also uses the Bootstrap javascript library and
+as zooming. pgBadger also uses the Bootstrap JavaScript library and
 the FontAwesome webfont for better design. Everything is embedded.

 pgBadger is able to autodetect your log file format (syslog, stderr or csvlog).

@@ -274,7 +274,7 @@ log_min_duration_statement to have reports on duration and number of queries onl

 pgBadger reports everything about your SQL queries:

-    Overall statistics
+    Overall statistics.
     The most frequent waiting queries.
     Queries that waited the most.
     Queries generating the most temporary files.

@@ -314,7 +314,7 @@ There are also some pie charts about distribution of:

 All charts are zoomable and can be saved as PNG images. SQL queries
 reported are highlighted and beautified automatically.

-pgBadger is also able to parse pgbouncer log files and to create the following
+pgBadger is also able to parse PgBouncer log files and to create the following
 reports:

     Request Throughput

@@ -338,7 +338,7 @@ one using one core per log file, and the second using multiple cores to parse a
 single file. These modes can be combined.

 Histogram granularity can be adjusted using the -A command line option. By default
-they will report the mean of each top queries/errors occuring per hour, but you can
+they will report the mean of each top queries/errors occurring per hour, but you can
 specify the granularity down to the minute.
 pgBadger can also be used in a central place to parse remote log files using a

@@ -349,7 +349,7 @@ the multiprocess per file mode (-J) but can not be used with the CSV log format.

 =head1 REQUIREMENT

 pgBadger comes as a single Perl script - you do not need anything other than a modern
-Perl distribution. Charts are rendered using a Javascript library so you don't need
+Perl distribution. Charts are rendered using a JavaScript library so you don't need
 anything other than a web browser. Your browser will do all the work.

 If you planned to parse PostgreSQL CSV log files you might need some Perl Modules:

@@ -366,7 +366,7 @@ If you want to export statistics as JSON file you need an additional Perl module

 This module is optional, if you don't select the json output format you don't
 need to install it.

-Compressed log file format is autodetected from the file exension. If pgBadger find
+Compressed log file format is autodetected from the file extension. If pgBadger find
 a gz extension it will use the zcat utility, with a bz2 extension it will use bzcat
 and if the file extension is zip or xz then the unzip or xz utilities will be used.

@@ -386,7 +386,7 @@ well as under Windows platform.

 =head1 INSTALLATION

-Download the tarball from github and unpack the archive as follow:
+Download the tarball from GitHub and unpack the archive as follows:

     tar xzf pgbadger-7.x.tar.gz
     cd pgbadger-7.x/

@@ -511,7 +511,7 @@ option -J N instead. With 200 log files of 10MB each the use of the -J option
 starts being really interesting with 8 Cores. Using this method you will be sure
 not to lose any queries in the reports.

-He are a benchmarck done on a server with 8 CPUs and a single file of 9.5GB.
+Here is a benchmark done on a server with 8 CPUs and a single file of 9.5GB.
     Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
    --------+---------+-------+-------+------

@@ -544,7 +544,7 @@ format into the mandatory output directory (see option -O or --outdir), then in
 HTML format for daily and weekly reports with a main index file.

 The main index file will show a dropdown menu per week with a link to each
-week's report and links to daily reports of each week.
+week report and links to daily reports of each week.

 For example, if you run pgBadger as follows based on a daily rotated file:

@@ -560,7 +560,7 @@ this mode each day on a log file rotated each week, and it will not
 count the log entries twice.

 To save disk space you may want to use the -X or --extra-files command line
-option to force pgBadger to write javascript and css to separate files in
+option to force pgBadger to write JavaScript and CSS to separate files in
 the output directory. The resources will then be loaded using script and
 link tags.

@@ -597,7 +597,7 @@ Adjust the commands to suit your particular needs.

 =head1 JSON FORMAT

 JSON format is good for sharing data with other languages, which makes it
-easy to integrate pgBadger's result into other monitoring tools like Cacti
+easy to integrate pgBadger result into other monitoring tools like Cacti
 or Graphite.

 =head1 AUTHORS

diff --git a/pgbadger b/pgbadger
index d4e75cf..e9a937e 100644
--- a/pgbadger
+++ b/pgbadger
@@ -1731,11 +1731,11 @@ Options:
     -c | --dbclient host   : only report on entries for the given client host.
     -C | --nocomment       : remove comments like /* ... */ from queries.
     -d | --dbname database : only report on entries for the given database.
-    -D | --dns-resolv      : client ip adresses are replaced by their DNS name.
+    -D | --dns-resolv      : client ip addresses are replaced by their DNS name.
                              Be warned that this can really slow down pgBadger.
     -e | --end datetime    : end date/time for the data to be parsed in log.
     -f | --format logtype  : possible values: syslog, syslog2, stderr, csv and
-                             pgbouncer. Use this option when pgbadger is not
+                             pgbouncer. Use this option when pgBadger is not
                              able to auto-detect the log format Default: stderr.
     -G | --nograph         : disable graphs on HTML output. Enabled by default.
     -h | --help            : show this message and exit.
@@ -1774,9 +1774,9 @@ Options:
     -q | --quiet           : don't print anything to stdout, not even a progress
                              bar.
     -r | --remote-host ip  : set the host where to execute the cat command on
-                             remote logfile to parse localy the file.
-    -R | --retention N     : number of week to keep in incremental mode. Default
-                             to 0, disabled. Used to set the number of weel to
+                             remote logfile to parse the file locally.
+    -R | --retention N     : number of weeks to keep in incremental mode. Default
+                             to 0, disabled. Used to set the number of weeks to
                              keep in output directory. Older weeks and days
                              directory are automatically removed.
     -s | --sample number   : number of query samples to store. Default: 3.
@@ -1791,12 +1791,12 @@ Options:
     -w | --watch-mode      : only report errors just like logwatch could do.
     -x | --extension       : output format. Values: text, html, bin, json or
                              tsung. Default: html
-    -X | --extra-files     : in incremetal mode allow pgbadger to write CSS and
+    -X | --extra-files     : in incremental mode allow pgBadger to write CSS and
                              JS files in the output directory as separate files.
     -z | --zcat exec_path  : set the full path to the zcat program. Use it if
                              zcat or bzcat or unzip is not in your path.
-    -Z | --timezone +/-XX  : Set the number of hour(s) from GMT of the timezone.
-                             Use this to adjust date/time in javascript graphs.
+    -Z | --timezone +/-XX  : Set the number of hours from GMT of the timezone.
+                             Use this to adjust date/time in JavaScript graphs.
     --pie-limit num        : pie data lower than num% will show a sum instead.
     --exclude-query regex  : any query matching the given regex will be excluded
                              from the report. For example: "^(VACUUM|COMMIT)"
@@ -1830,27 +1830,27 @@ Options:
                              You can use this option multiple times.
     --exclude-appname name : exclude entries for the specified application name
                              from report. Example: "pg_dump".
-    --exclude-line regex   : pgbadger will start to exclude any log entry that
+    --exclude-line regex   : pgBadger will start to exclude any log entry that
                              will match the given regex. Can be used multiple time.
     --anonymize            : obscure all literals in queries, useful to hide
                              confidential data.
-    --noreport             : prevent pgbadger to create reports in incremental
+    --noreport             : prevent pgBadger from creating reports in incremental
                              mode.
-    --log-duration         : force pgbadger to associate log entries generated
+    --log-duration         : force pgBadger to associate log entries generated
                              by both log_duration = on and log_statement = 'all'
     --enable-checksum      : used to add a md5 sum under each query report.
     --journalctl command   : command to use to replace PostgreSQL logfile by
                              a call to journalctl. Basically it might be:
                              journalctl -u postgresql-9.5
     --pid-dir dirpath      : set the path of the directory where the pid file
-                             will be written to be able to run two pgbadger at
+                             will be written to be able to run two pgBadger at
                              the same time.
     --rebuild              : used to rebuild all html reports in incremental
                              output directories where there is binary data files.
-    --pgbouncer-only       : only show pgbouncer related menu in the header.
+    --pgbouncer-only       : only show PgBouncer related menu in the header.
     --start-monday         : in incremental mode, calendar's weeks start on
-                             sunday. Use this otpion to start on monday.
+                             sunday. Use this option to start on monday.
     --normalized-only      : only dump all normalized query to out.txt

 pgBadger is able to parse a remote log file using a passwordless ssh
 connection.
@@ -1916,7 +1916,7 @@ reports:
                   -O /var/www/pg_reports/

 If you have a pg_dump at 23:00 and 13:00 each day during half an hour, you can
-use pgbadger as follow to exclude these period from the report:
+use pgBadger as follows to exclude these periods from the report:

     pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log

@@ -1935,13 +1935,13 @@ or worst, call it from a remote host:

 you don't need to specify any log file at command line, but if you have
 other PostgreSQL log file to parse, you can add them as usual.

-To rebuild all incremantal html reports after, proceed as follow:
+To rebuild all incremental html reports after, proceed as follows:

     rm /path/to/reports/*.js
     rm /path/to/reports/*.css
     pgbadger -X -I -O /path/to/reports/ --rebuild

-it will also update all ressources file (JS and CSS).
+it will also update all resource files (JS and CSS).
 };