-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
- -D | --dns-resolv : client ip adresses are replaced by their DNS name.
+ -D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log.
-f | --format logtype : possible values: syslog, syslog2, stderr, csv and
- pgbouncer. Use this option when pgbadger is not
+ pgbouncer. Use this option when pgBadger is not
able to auto-detect the log format. Default: stderr.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-r | --remote-host ip : set the host where the cat command is executed on
- remote logfile to parse localy the file.
- -R | --retention N : number of week to keep in incremental mode. Default
- to 0, disabled. Used to set the number of weel to
+ the remote logfile, to parse the file locally.
+ -R | --retention N : number of weeks to keep in incremental mode. Default
+ to 0, disabled. Used to set the number of weeks to
keep in the output directory. Older week and day
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-w | --watch-mode : only report errors just like logwatch could do.
-x | --extension : output format. Values: text, html, bin, json or
tsung. Default: html.
- -X | --extra-files : in incremetal mode allow pgbadger to write CSS and
+ -X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat or bzcat or unzip is not in your path.
- -Z | --timezone +/-XX : Set the number of hour(s) from GMT of the timezone.
- Use this to adjust date/time in javascript graphs.
+ -Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
+ Use this to adjust date/time in JavaScript graphs.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump".
- --exclude-line regex : pgbadger will start to exclude any log entry that
+ --exclude-line regex : pgBadger will start to exclude any log entry that
matches the given regex. Can be used multiple
times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
- --noreport : prevent pgbadger to create reports in incremental
+ --noreport : prevent pgBadger from creating reports in incremental
mode.
- --log-duration : force pgbadger to associate log entries generated
+ --log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an MD5 sum under each query report.
--journalctl command : command used to replace the PostgreSQL logfile
with a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir dirpath : set the path of the directory where the pid file
- will be written to be able to run two pgbadger at
+ will be written, allowing two pgBadger instances
to run at the same time.
--rebuild : used to rebuild all html reports in incremental
output directories where there are binary data files.
- --pgbouncer-only : only show pgbouncer related menu in the header.
- --start-monday : in incremental mode, calendar's weeks start on
- sunday. Use this otpion to start on monday.
+ --pgbouncer-only : only show PgBouncer related menu in the header.
+ --start-monday : in incremental mode, weeks start on sunday. Use
+ this option to start on monday.
--normalized-only : only dump all normalized queries to out.txt
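As a quick way to preview what a pattern given to --exclude-query would
drop, you can test it with grep, whose extended-regex syntax is close
enough to Perl's for simple patterns like the "^(VACUUM|COMMIT)" example
above (the sample queries here are invented):

```shell
# Keep only the lines that do NOT match the exclusion pattern,
# which is roughly what pgBadger would retain for the report
printf 'VACUUM ANALYZE\nCOMMIT\nSELECT 1\n' | grep -vE '^(VACUUM|COMMIT)'
```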
pgBadger is able to parse a remote log file using a passwordless ssh
-O /var/www/pg_reports/
If you have a pg_dump at 23:00 and 13:00 each day lasting half an hour,
- you can use pgbadger as follow to exclude these period from the report:
+ you can use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
you don't need to specify any log file at command line, but if you have
other PostgreSQL log files to parse, you can add them as usual.
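The pattern passed to --exclude-time above is an ordinary regex matched
against each entry's timestamp, so you can sanity-check it with grep -E
before running a long parse (the sample timestamps below are invented):

```shell
# Only entries from the 23:xx and 13:xx hours of September 2013 match,
# i.e. the periods that would be excluded from the report
printf '2013-09-05 23:15:01\n2013-09-05 14:00:10\n2013-09-12 13:30:00\n' \
    | grep -E '2013-09-.* (23|13):.*'
```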
- To rebuild all incremantal html reports after, proceed as follow:
+ To rebuild all incremental html reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
- it will also update all ressources file (JS and CSS).
+ It will also update all resource files (JS and CSS).
DESCRIPTION
pgBadger is a PostgreSQL log analyzer built for speed with fully detailed reports
from your PostgreSQL log file. It's a single and small Perl script that
outperforms any other PostgreSQL log analyzer.
- It is written in pure Perl and uses a javascript library (flotr2) to
+ It is written in pure Perl and uses a JavaScript library (flotr2) to
draw graphs so that you don't need to install any additional Perl
modules or other packages. Furthermore, this library gives us more
- features such as zooming. pgBadger also uses the Bootstrap javascript
+ features such as zooming. pgBadger also uses the Bootstrap JavaScript
library and the FontAwesome webfont for better design. Everything is
embedded.
FEATURE
pgBadger reports everything about your SQL queries:
- Overall statistics
+ Overall statistics.
The most frequent waiting queries.
Queries that waited the most.
Queries generating the most temporary files.
All charts are zoomable and can be saved as PNG images. SQL queries
reported are highlighted and beautified automatically.
- pgBadger is also able to parse pgbouncer log files and to create the
+ pgBadger is also able to parse PgBouncer log files and to create the
following reports:
Request Throughput
combined.
Histogram granularity can be adjusted using the -A command line option.
- By default they will report the mean of each top queries/errors occuring
- per hour, but you can specify the granularity down to the minute.
+ By default they will report the mean of each top queries/errors
+ occurring per hour, but you can specify the granularity down to the
+ minute.
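For instance, per-minute averages can be requested with -A 1 (assuming -A
takes the averaging window in minutes, which is its documented unit). The
sketch below is guarded so it is a no-op where pgbadger is not installed,
and the log path is a placeholder:

```shell
# -A sets the histogram averaging window in minutes; 1 means per-minute.
# Guarded so this sketch does nothing where pgbadger is not installed;
# the log path is a placeholder for your own file.
if command -v pgbadger >/dev/null 2>&1; then
    pgbadger -A 1 /var/log/postgresql/postgresql.log -o report.html
fi
```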
pgBadger can also be used in a central place to parse remote log files
using a passwordless SSH connection. This mode can be used with
REQUIREMENT
pgBadger comes as a single Perl script - you do not need anything other
- than a modern Perl distribution. Charts are rendered using a Javascript
+ than a modern Perl distribution. Charts are rendered using a JavaScript
library so you don't need anything other than a web browser. Your
browser will do all the work.
This module is optional; if you don't select the json output format you
don't need to install it.
- Compressed log file format is autodetected from the file exension. If
+ Compressed log file format is autodetected from the file extension. If
pgBadger finds a gz extension it will use the zcat utility, with a bz2
extension it will use bzcat and if the file extension is zip or xz then
the unzip or xz utilities will be used.
files as well as under Windows platform.
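The decompression side of this can be reproduced by hand; the snippet
below mimics what pgBadger does for a .gz file by piping the archive
through zcat (the log line is a made-up sample):

```shell
# Write a one-line gzipped "log", then read it back with zcat,
# the same utility pgBadger runs for files with a .gz extension
printf '2013-09-05 23:15:01 LOG:  connection received\n' | gzip > sample.log.gz
zcat sample.log.gz
rm -f sample.log.gz
```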
INSTALLATION
- Download the tarball from github and unpack the archive as follow:
+ Download the tarball from GitHub and unpack the archive as follows:
tar xzf pgbadger-7.x.tar.gz
cd pgbadger-7.x/
of the -J option starts being really interesting with 8 Cores. Using
this method you will be sure not to lose any queries in the reports.
- He are a benchmarck done on a server with 8 CPUs and a single file of
+ Here is a benchmark done on a server with 8 CPUs and a single file of
9.5GB.
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
index file.
The main index file will show a dropdown menu per week with a link to
- each week's report and links to daily reports of each week.
+ each week report and links to daily reports of each week.
For example, if you run pgBadger as follows based on a daily rotated
file:
count the log entries twice.
To save disk space you may want to use the -X or --extra-files command
- line option to force pgBadger to write javascript and css to separate
+ line option to force pgBadger to write JavaScript and CSS to separate
files in the output directory. The resources will then be loaded using
script and link tags.
JSON FORMAT
JSON format is good for sharing data with other languages, which makes
- it easy to integrate pgBadger's result into other monitoring tools like
+ it easy to integrate pgBadger result into other monitoring tools like
Cacti or Graphite.
AUTHORS
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
- -D | --dns-resolv : client ip adresses are replaced by their DNS name.
+ -D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log.
-f | --format logtype : possible values: syslog, syslog2, stderr, csv and
- pgbouncer. Use this option when pgbadger is not
+ pgbouncer. Use this option when pgBadger is not
able to auto-detect the log format. Default: stderr.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-r | --remote-host ip : set the host where the cat command is executed on
- remote logfile to parse localy the file.
- -R | --retention N : number of week to keep in incremental mode. Default
- to 0, disabled. Used to set the number of weel to
+ the remote logfile, to parse the file locally.
+ -R | --retention N : number of weeks to keep in incremental mode. Default
+ to 0, disabled. Used to set the number of weeks to
keep in the output directory. Older week and day
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-w | --watch-mode : only report errors just like logwatch could do.
-x | --extension : output format. Values: text, html, bin, json or
tsung. Default: html.
- -X | --extra-files : in incremetal mode allow pgbadger to write CSS and
+ -X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat or bzcat or unzip is not in your path.
- -Z | --timezone +/-XX : Set the number of hour(s) from GMT of the timezone.
- Use this to adjust date/time in javascript graphs.
+ -Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
+ Use this to adjust date/time in JavaScript graphs.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump".
- --exclude-line regex : pgbadger will start to exclude any log entry that
+ --exclude-line regex : pgBadger will start to exclude any log entry that
matches the given regex. Can be used multiple
times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
- --noreport : prevent pgbadger to create reports in incremental
+ --noreport : prevent pgBadger from creating reports in incremental
mode.
- --log-duration : force pgbadger to associate log entries generated
+ --log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an MD5 sum under each query report.
--journalctl command : command used to replace the PostgreSQL logfile
with a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir dirpath : set the path of the directory where the pid file
- will be written to be able to run two pgbadger at
+ will be written, allowing two pgBadger instances
to run at the same time.
--rebuild : used to rebuild all html reports in incremental
output directories where there are binary data files.
- --pgbouncer-only : only show pgbouncer related menu in the header.
- --start-monday : in incremental mode, calendar's weeks start on
- sunday. Use this otpion to start on monday.
+ --pgbouncer-only : only show PgBouncer related menu in the header.
+ --start-monday : in incremental mode, weeks start on sunday. Use
+ this option to start on monday.
--normalized-only : only dump all normalized queries to out.txt
-O /var/www/pg_reports/
If you have a pg_dump at 23:00 and 13:00 each day lasting half an hour, you can
-use pgbadger as follow to exclude these period from the report:
+use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
you don't need to specify any log file at command line, but if you have other
PostgreSQL log files to parse, you can add them as usual.
-To rebuild all incremantal html reports after, proceed as follow:
+To rebuild all incremental html reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
-it will also update all ressources file (JS and CSS).
+It will also update all resource files (JS and CSS).
=head1 DESCRIPTION
from your PostgreSQL log file. It's a single and small Perl script that
outperforms any other PostgreSQL log analyzer.
-It is written in pure Perl and uses a javascript library (flotr2) to draw
+It is written in pure Perl and uses a JavaScript library (flotr2) to draw
graphs so that you don't need to install any additional Perl modules or
other packages. Furthermore, this library gives us more features such
-as zooming. pgBadger also uses the Bootstrap javascript library and
+as zooming. pgBadger also uses the Bootstrap JavaScript library and
the FontAwesome webfont for better design. Everything is embedded.
pgBadger is able to autodetect your log file format (syslog, stderr or csvlog).
pgBadger reports everything about your SQL queries:
- Overall statistics
+ Overall statistics.
The most frequent waiting queries.
Queries that waited the most.
Queries generating the most temporary files.
All charts are zoomable and can be saved as PNG images. SQL queries reported are
highlighted and beautified automatically.
-pgBadger is also able to parse pgbouncer log files and to create the following
+pgBadger is also able to parse PgBouncer log files and to create the following
reports:
Request Throughput
a single file. These modes can be combined.
Histogram granularity can be adjusted using the -A command line option. By default
-they will report the mean of each top queries/errors occuring per hour, but you can
+they will report the mean of each top queries/errors occurring per hour, but you can
specify the granularity down to the minute.
pgBadger can also be used in a central place to parse remote log files using a
=head1 REQUIREMENT
pgBadger comes as a single Perl script - you do not need anything other than a modern
-Perl distribution. Charts are rendered using a Javascript library so you don't need
+Perl distribution. Charts are rendered using a JavaScript library so you don't need
anything other than a web browser. Your browser will do all the work.
If you planned to parse PostgreSQL CSV log files you might need some Perl Modules:
This module is optional; if you don't select the json output format you don't
need to install it.
-Compressed log file format is autodetected from the file exension. If pgBadger find
+Compressed log file format is autodetected from the file extension. If pgBadger finds
a gz extension it will use the zcat utility, with a bz2 extension it will use bzcat
and if the file extension is zip or xz then the unzip or xz utilities will be used.
=head1 INSTALLATION
-Download the tarball from github and unpack the archive as follow:
+Download the tarball from GitHub and unpack the archive as follows:
tar xzf pgbadger-7.x.tar.gz
cd pgbadger-7.x/
starts being really interesting with 8 Cores. Using this method you will be
sure not to lose any queries in the reports.
-He are a benchmarck done on a server with 8 CPUs and a single file of 9.5GB.
+Here is a benchmark done on a server with 8 CPUs and a single file of 9.5GB.
Option | 1 CPU | 2 CPU | 4 CPU | 8 CPU
--------+---------+-------+-------+------
then in HTML format for daily and weekly reports with a main index file.
The main index file will show a dropdown menu per week with a link to each
-week's report and links to daily reports of each week.
+week report and links to daily reports of each week.
For example, if you run pgBadger as follows based on a daily rotated file:
the log entries twice.
To save disk space you may want to use the -X or --extra-files command line
-option to force pgBadger to write javascript and css to separate files in
+option to force pgBadger to write JavaScript and CSS to separate files in
the output directory. The resources will then be loaded using script and
link tags.
=head1 JSON FORMAT
JSON format is good for sharing data with other languages, which makes it
-easy to integrate pgBadger's result into other monitoring tools like Cacti
+easy to integrate pgBadger result into other monitoring tools like Cacti
or Graphite.
=head1 AUTHORS
-c | --dbclient host : only report on entries for the given client host.
-C | --nocomment : remove comments like /* ... */ from queries.
-d | --dbname database : only report on entries for the given database.
- -D | --dns-resolv : client ip adresses are replaced by their DNS name.
+ -D | --dns-resolv : client ip addresses are replaced by their DNS name.
Be warned that this can really slow down pgBadger.
-e | --end datetime : end date/time for the data to be parsed in log.
-f | --format logtype : possible values: syslog, syslog2, stderr, csv and
- pgbouncer. Use this option when pgbadger is not
+ pgbouncer. Use this option when pgBadger is not
able to auto-detect the log format. Default: stderr.
-G | --nograph : disable graphs on HTML output. Enabled by default.
-h | --help : show this message and exit.
-q | --quiet : don't print anything to stdout, not even a progress
bar.
-r | --remote-host ip : set the host where the cat command is executed on
- remote logfile to parse localy the file.
- -R | --retention N : number of week to keep in incremental mode. Default
- to 0, disabled. Used to set the number of weel to
+ the remote logfile, to parse the file locally.
+ -R | --retention N : number of weeks to keep in incremental mode. Default
+ to 0, disabled. Used to set the number of weeks to
keep in the output directory. Older week and day
directories are automatically removed.
-s | --sample number : number of query samples to store. Default: 3.
-w | --watch-mode : only report errors just like logwatch could do.
-x | --extension : output format. Values: text, html, bin, json or
tsung. Default: html.
- -X | --extra-files : in incremetal mode allow pgbadger to write CSS and
+ -X | --extra-files : in incremental mode allow pgBadger to write CSS and
JS files in the output directory as separate files.
-z | --zcat exec_path : set the full path to the zcat program. Use it if
zcat or bzcat or unzip is not in your path.
- -Z | --timezone +/-XX : Set the number of hour(s) from GMT of the timezone.
- Use this to adjust date/time in javascript graphs.
+ -Z | --timezone +/-XX : Set the number of hours from GMT of the timezone.
+ Use this to adjust date/time in JavaScript graphs.
--pie-limit num : pie data lower than num% will show a sum instead.
--exclude-query regex : any query matching the given regex will be excluded
from the report. For example: "^(VACUUM|COMMIT)"
You can use this option multiple times.
--exclude-appname name : exclude entries for the specified application name
from report. Example: "pg_dump".
- --exclude-line regex : pgbadger will start to exclude any log entry that
+ --exclude-line regex : pgBadger will start to exclude any log entry that
matches the given regex. Can be used multiple
times.
--anonymize : obscure all literals in queries, useful to hide
confidential data.
- --noreport : prevent pgbadger to create reports in incremental
+ --noreport : prevent pgBadger from creating reports in incremental
mode.
- --log-duration : force pgbadger to associate log entries generated
+ --log-duration : force pgBadger to associate log entries generated
by both log_duration = on and log_statement = 'all'
--enable-checksum : used to add an MD5 sum under each query report.
--journalctl command : command used to replace the PostgreSQL logfile
with a call to journalctl. Basically it might be:
journalctl -u postgresql-9.5
--pid-dir dirpath : set the path of the directory where the pid file
- will be written to be able to run two pgbadger at
+ will be written, allowing two pgBadger instances
to run at the same time.
--rebuild : used to rebuild all html reports in incremental
output directories where there are binary data files.
- --pgbouncer-only : only show pgbouncer related menu in the header.
+ --pgbouncer-only : only show PgBouncer related menu in the header.
--start-monday : in incremental mode, calendar's weeks start on
- sunday. Use this otpion to start on monday.
+ sunday. Use this option to start on monday.
--normalized-only : only dump all normalized queries to out.txt
pgBadger is able to parse a remote log file using a passwordless ssh connection.
-O /var/www/pg_reports/
If you have a pg_dump at 23:00 and 13:00 each day lasting half an hour, you can
-use pgbadger as follow to exclude these period from the report:
+use pgBadger as follows to exclude these periods from the report:
pgbadger --exclude-time "2013-09-.* (23|13):.*" postgresql.log
you don't need to specify any log file at command line, but if you have other
PostgreSQL log files to parse, you can add them as usual.
-To rebuild all incremantal html reports after, proceed as follow:
+To rebuild all incremental html reports afterwards, proceed as follows:
rm /path/to/reports/*.js
rm /path/to/reports/*.css
pgbadger -X -I -O /path/to/reports/ --rebuild
-it will also update all ressources file (JS and CSS).
+It will also update all resource files (JS and CSS).
};