2013-10-31 - Version 4.0
This major release is the "Say goodbye to the fouine" release. With a full
-rewrite of the reports design, pgBadger has now turn the HTML reports into
+rewrite of the reports design, pgBadger has now turned the HTML reports into
a more intuitive user experience and professional look.
The report is now driven by a dynamic menu with the help of the embedded
-boostrap library. Every main menu correspond to hidden an slide that is
-brought to front when the menu or one of his submenus is activated. There's
-also the embedded font FontAwasome webfont to beautify the report.
+Bootstrap library. Every main menu corresponds to a hidden slide that is
+revealed when the menu or one of its submenus is activated. There's
+also the embedded Font Awesome webfont to beautify the report.
-Every statistic report now include a key value section that shows you
-immediately some of the relevant informations. Pie charts have also been
+Every statistics report now includes a key value section that immediately
+shows you some of the relevant information. Pie charts have also been
separated from their data tables using two tabs, one for the chart and the
other one for the data.
-Tables reporting hourly statistic have been moved to a multiple tabs report
+Tables reporting hourly statistics have been moved into a multi-tab report
following the data. This is used with General (queries, connections, sessions),
-Checkpoints (buffer, files, warnings), Temporary file and Vacuums activities.
+Checkpoints (buffer, files, warnings), Temporary files and Vacuum activities.
-There's some new useful informations shown in the key value sections. Peak
+There's some new useful information shown in the key value sections. Peak
information shows the number and datetime of the highest activity. Here is the
list of those reports:
- Write queries peak
- Connections peak
- Checkpoints peak
- - Wal files usage Peak
+ - WAL files usage peak
- Checkpoints warnings peak
- Temporary file size peak
- Temporary file number peak
-Reports about Checkpoints and Restartpoints have been merge in a single one.
-This is the same, outside the fact that restartpoints are on a slave cluster,
-so there was no need to separate those informations.
+Reports about Checkpoints and Restartpoints have been merged into a single report.
+These are essentially the same event, except that restartpoints occur on a slave
+cluster, so there was no need to distinguish between the two.
-Recent PostgreSQL versions add additional information about checkpoint, the
+Recent PostgreSQL versions add additional information about checkpoints, the
number of synced files, the longest sync and the average of sync time per file.
-pgBadger collects and shows these informations in the Checkpoint Activity report.
+pgBadger collects and shows this information in the Checkpoint Activity report.
There's also some new reports:
- Prepared queries ratio (execute vs prepare)
- Prepared over normal queries
- Queries (select, insert, update, delete) per user/host/application
- - Pie charts for tables with the more tuples and pages removed during vacuum.
+ - Pie charts for tables with the most tuples and pages removed during vacuum.
-The vacuum report will now highlight the costly table during a vacuum or
+The vacuum report will now highlight the costly tables during a vacuum or
analyze of a database.
-The errors are now highlighted by a different color following the level.
+The errors are now highlighted in a different color according to the log level.
A LOG level will be green, HINT will be yellow, WARNING orange, ERROR red
and FATAL dark red.
-Some changes in the binary format are not backward compatible and option
---client have been remove as it was replaced by --dbclient for a long time now.
+Some changes in the binary format are not backward compatible, and the option
+--client has been removed, as it was superseded by --dbclient a long time ago.
-If you are running a pg_dump or some batch process with very slow queries your
-report analyze will be annoyed by those queries taking too much place in the
-report. Before that release it was a pain to exclude those queries from the
+If you are running a pg_dump or some batch process with very slow queries, your
+report analysis will be hindered by those queries having unwanted prominence in the
+report. Before this release it was a pain to exclude those queries from the
report. Now you can use the --exclude-time command line option to exclude all
-traces matching the given time regexp from the report. For example, let's say
-you have a pg_dump at 13:00 each day during half an hour, you can use pgbadger
-as follow:
+traces matching the given time regexp from the report. For example, if you have
+a pg_dump running at 13:00 each day for half an hour, you can use pgbadger
+as follows:
pgbadger --exclude-time "2013-09-.* 13:.*" postgresql.log
-If your are also running a pg_dump at night, let's say 22:00, you can write it
-as follow:
+If you are also running a pg_dump at night, let's say 22:00, you can write it
+as follows:
pgbadger --exclude-time '2013-09-\d+ 13:[0-3]' --exclude-time '2013-09-\d+ 22:[0-3]' postgresql.log
pgbadger --exclude-time '2013-09-\d+ (13|22):[0-3]' postgresql.log
-Exclude time always require the iso notation yyyy-mm-dd hh:mm:ss, even if log
-format is syslog. This is the same for all time related options. Take care that
-this option has a high cost on the parser performances.
+Exclude time always requires the ISO notation yyyy-mm-dd hh:mm:ss, even if the log
+format is syslog. This is the same for all time-related options. Use this option
+with care, as it has a high cost on parser performance.
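+For illustration, even with a syslog log format the exclusion pattern keeps
+the ISO notation (the file name and dates below are only an example):
+pgbadger -f syslog --exclude-time '2013-09-\d+ 13:[0-3]' syslog-postgresql.log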
2013-09-17 - version 3.6
$prefix_vars{'t_session_line'} =~ s/\..*//;
$prefix_vars{'t_loglevel'} = $row->[11];
$prefix_vars{'t_query'} = $row->[13];
- # Set ERROR additional informations
+ # Set ERROR additional information
$prefix_vars{'t_detail'} = $row->[14];
$prefix_vars{'t_hint'} = $row->[15];
$prefix_vars{'t_context'} = $row->[18];
# skip non postgresql lines
next if ($prefix_vars{'t_ident'} ne $ident);
- # Stores temporary files and locks informations
+ # Stores temporary files and locks information
&store_temporary_and_lock_infos($cur_pid);
# Standard syslog format does not have year information, months are
$prefix_vars{$prefix_params[$i]} = $matches[$i];
}
- # Stores temporary files and locks informations
+ # Stores temporary files and locks information
&store_temporary_and_lock_infos($cur_pid);
if (!$prefix_vars{'t_timestamp'} && $prefix_vars{'t_mtimestamp'}) {
# Get stats from all pending temporary storage
foreach my $pid (sort {$cur_info{$a}{date} <=> $cur_info{$b}{date}} keys %cur_info) {
- # Stores last queries informations
+ # Stores last queries information
&store_queries($pid);
}
- # Stores last temporary files and locks informations
+ # Stores last temporary files and locks information
foreach my $pid (keys %cur_temp_info) {
&store_temporary_and_lock_infos($pid);
}
- # Stores last temporary files and locks informations
+ # Stores last temporary files and locks information
foreach my $pid (keys %cur_lock_info) {
&store_temporary_and_lock_infos($pid);
}
}
}
- # Show lock wait detailed informations
+ # Show lock wait detailed information
if (!$disable_lock && scalar keys %lock_info > 0) {
my @top_locked_queries;
print $fh "\n";
}
- # Show temporary files detailed informations
+ # Show temporary files detailed information
if (!$disable_temporary && scalar keys %tempfile_info > 0) {
my @top_temporary;
<li class="slide" id="connections-slide">
};
- # Draw connections indormation
+ # Draw connections information
&print_simultaneous_connection() if (!$disable_hourly);
# Show per database/user connections
</li>
<li class="slide" id="tempfiles-slide">
};
- # Show temporary files detailed informations
+ # Show temporary files detailed information
&print_temporary_file();
- # Show informations about queries generating temporary files
+ # Show information about queries generating temporary files
&print_tempfile_report();
}
</li>
<li class="slide" id="vacuums-slide">
};
- # Show vacuums/analyses detailed informations
+ # Show detailed vacuum/analyse information
&print_vacuum();
}
# Lock stats per type
&print_lock_type();
- # Show lock wait detailed informations
+ # Show lock wait detailed information
&print_lock_queries_report();
}
{
my %infos = ();
- # Some message have seen their log level change during log parsing.
+ # Some messages have seen their log level change during log parsing.
# Set the real log level count back
foreach my $k (sort {$error_info{$b}{count} <=> $error_info{$a}{count}} keys %error_info) {
next if (!$error_info{$k}{count});
$connection_info{chronos}{$day}{$hour}{count} += $_connection_info{chronos}{$day}{$hour}{count}
###############################################################################
-# May be used in the future to display more detailed informations on connection
+# May be used in the future to display more detailed information on connection
#
# foreach my $db (keys %{ $_connection_info{chronos}{$day}{$hour}{database} }) {
# $connection_info{chronos}{$day}{$hour}{database}{$db} += $_connection_info{chronos}{$day}{$hour}{database}{$db};
$checkpoint_info{file_added} += $_checkpoint_info{file_added};
$checkpoint_info{write} += $_checkpoint_info{write};
- #### Autovacuum infos ####
+ #### Autovacuum info ####
$autovacuum_info{count} += $_autovacuum_info{count};
$autovacuum_info{peak}{system_usage}{table} = $_autovacuum_info{peak}{system_usage}{table};
$autovacuum_info{peak}{system_usage}{date} = $_autovacuum_info{peak}{system_usage}{date};
}
- #### Autoanalyze infos ####
+ #### Autoanalyze info ####
$autoanalyze_info{count} += $_autoanalyze_info{count};
# Escape HTML code into SQL values
$code = &escape_html($code);
- # Do not try to prettify queries longuer
- # than 10KB this will take too much time
+ # Do not try to prettify queries longer
+ # than 10KB as this will take too much time
return $code if (length($code) > 10240);
# prettify SQL query
sub compute_arg_list
{
- # Some command lines arguments can be used multiple time or be written
- # as a coma separated list.
+ # Some command line arguments can be used multiple times or written
+ # as a comma-separated list.
# For example: --dbuser=postgres --dbuser=joe or --dbuser=postgres,joe
- # So we have to aggregate all the possible value
+ # So we have to aggregate all the possible values
my @tmp = ();
foreach my $v (@exclude_user) {
push(@tmp, split(/,/, $v));
- # Check user and/or database if require
+ # Check user and/or database if required
if ($#dbname >= 0) {
- # Log line do not match the required dbname
+ # Log line does not match the required dbname
if (!$prefix_vars{'t_dbname'} || !grep(/^$prefix_vars{'t_dbname'}$/i, @dbname)) {
return 0;
}
}
if ($#dbuser >= 0) {
- # Log line do not match the required dbuser
+ # Log line does not match the required dbuser
if (!$prefix_vars{'t_dbuser'} || !grep(/^$prefix_vars{'t_dbuser'}$/i, @dbuser)) {
return 0;
}
return;
}
- # Do not parse lines that are an error like message when error report are not wanted
+ # Do not parse lines that are an error-like message when error reports are not wanted
if ($disable_error && ($prefix_vars{'t_loglevel'} =~ $full_error_regex)) {
return;
}
$connection_info{database_user}{$db}{$usr}++;
$connection_info{chronos}{$date_part}{$prefix_vars{'t_hour'}}{count}++;
###############################################################################
-# May be used in the future to display more detailed informations on connection
+# May be used in the future to display more detailed information on connection
# $connection_info{chronos}{$date_part}{$prefix_vars{'t_hour'}}{user}{$usr}++;
# $connection_info{chronos}{$date_part}{$prefix_vars{'t_hour'}}{database}{$db}++;
# $connection_info{chronos}{$date_part}{$prefix_vars{'t_hour'}}{database_user}{$db}{$usr}++;
return;
}
- # Store autovacuum informations
+ # Store autovacuum information
if (
($prefix_vars{'t_loglevel'} eq 'LOG')
&& ($prefix_vars{'t_query'} =~
return;
}
- # Store autoanalyze informations
+ # Store autoanalyze information
if (
($prefix_vars{'t_loglevel'} eq 'LOG')
&& ($prefix_vars{'t_query'} =~
}
####
- # Store current query informations
+ # Store current query information
####
# Log lines with duration only, generated by log_duration = on in postgresql.conf
$cur_info{$t_pid}{query} =~ s/\/\*(.*?)\*\///gs;
}
- # Stores temporary files and locks informations
+ # Stores temporary files and locks information
&store_temporary_and_lock_infos($t_pid);
return if (!exists $cur_info{$t_pid});