Logs are often the last thing checked when something goes wrong, yet they are usually the first place a problem shows up. Manually combing through logs helps, but why not use a log analyzer to generate reports automatically and gain insight into the database before something goes wrong?
The PostgreSQL log analyzer pgBadger is an open-source, fast log analysis tool written in Perl. It takes the log output of a running PostgreSQL instance and processes it into an HTML report. The report presents everything it finds in an easy-to-read format and can shed light on errors in the system, checkpoint behavior, vacuum behavior, trends, and other basic but crucial information for a PostgreSQL system.
Demonstration: how to generate a pgBadger report:
Note: Here we only generate the report; we do not analyze it.
-- Let us create a new PostgreSQL cluster, then create new databases and users and add some load
-- Create a base directory
# cd /opt/PostgreSQL/10/
# mkdir pgdata
[root@postgres 10]# chown postgres:postgres pgdata -R
-- Initialize the base directory
[root@postgres 10]# su - postgres
$ pwd
/opt/PostgreSQL/10
$ initdb -D pgdata
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
....
....
Success. You can now start the database server using:
pg_ctl -D pgdata -l logfile start
$
$
-- Now change the port number used to access the new cluster and set the logging parameters shown below
$ cd pgdata/
$ ls postgresql.conf
postgresql.conf
$ vi postgresql.conf
port=5005
log_destination = 'stderr'
logging_collector = on
log_rotation_age = 0
client_min_messages = notice
log_min_messages = warning
log_min_error_statement = error
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_duration = on
log_error_verbosity = verbose
log_hostname = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
log_lock_waits = on
log_statement = 'all'
log_temp_files = 0
:wq
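The same parameters can also be appended in one shot instead of editing interactively. A sketch (it writes to a scratch file so the snippet runs anywhere; on the demo cluster, point conf at /opt/PostgreSQL/10/pgdata/postgresql.conf instead — when a parameter appears twice, PostgreSQL uses the last occurrence):

```shell
# Scratch target so this runs standalone; on the demo cluster use
# conf=/opt/PostgreSQL/10/pgdata/postgresql.conf instead.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
port = 5005
log_destination = 'stderr'
logging_collector = on
log_min_duration_statement = 0
log_checkpoints = on
log_connections = on
log_disconnections = on
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d '
EOF
# Quick sanity check: count the log_* lines just appended.
grep -c '^log_' "$conf"   # prints 6
```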
-- Now start the cluster
$ pg_ctl -D /opt/PostgreSQL/10/pgdata start
waiting for server to start....2020-04-19 22:19:33 IST [17280]: [1-1] user=,db= LOG: 00000: listening on IPv6 address "::1", port 5005
2020-04-19 22:19:33 IST [17280]: [2-1] user=,db= LOCATION: StreamServerPort, pqcomm.c:593
2020-04-19 22:19:33 IST [17280]: [3-1] user=,db= LOG: 00000: listening on IPv4 address "127.0.0.1", port 5005
2020-04-19 22:19:33 IST [17280]: [4-1] user=,db= LOCATION: StreamServerPort, pqcomm.c:593
2020-04-19 22:19:33 IST [17280]: [5-1] user=,db= LOG: 00000: listening on Unix socket "/tmp/.s.PGSQL.5005"
2020-04-19 22:19:33 IST [17280]: [6-1] user=,db= LOCATION: StreamServerPort, pqcomm.c:587
2020-04-19 22:19:33 IST [17280]: [7-1] user=,db= LOG: 00000: redirecting log output to logging collector process
2020-04-19 22:19:33 IST [17280]: [8-1] user=,db= HINT: Future log output will appear in directory "log".
2020-04-19 22:19:33 IST [17280]: [9-1] user=,db= LOCATION: SysLogger_Start, syslogger.c:666
done
server started
$
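pgBadger parses each line according to log_line_prefix; if the prefix in postgresql.conf and the one pgBadger assumes disagree, the report comes out nearly empty. A quick offline shape check against one of the captured lines above (the regex is our own rough approximation of this demo's prefix, not anything pgBadger ships):

```shell
# One line captured from the startup output above.
line='2020-04-19 22:19:33 IST [17280]: [3-1] user=,db= LOG: 00000: listening on IPv4 address "127.0.0.1", port 5005'
# Rough regex for the prefix '%t [%p]: [%l-1] user=%u,db=%d '.
prefix_re='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9:]{8} [A-Z]+ \[[0-9]+\]: \[[0-9]+-1\] user=[^,]*,db=[^ ]* '
echo "$line" | grep -Eq "$prefix_re" && echo "prefix matches"
```

If auto-detection misses it, the prefix can also be handed to pgBadger explicitly (see `pgbadger --help` for the `-p`/`--prefix` option).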
-- Connect to the newly created cluster
$ psql -p 5005 postgres
psql.bin (10.9)
Type "help" for help.
-- From here, create some users and databases, connect to those databases, and generate some load
Example:
postgres=# create user user1 password 'Pas$w0rd';
CREATE ROLE
postgres=# create user hr password 'Pas$w0rd';
CREATE ROLE
postgres=# create user admin password 'Pas$w0rd';
CREATE ROLE
postgres=# create database db1;
CREATE DATABASE
postgres=# create database db2;
CREATE DATABASE
postgres=# create database db3;
CREATE DATABASE
postgres=# \c db1 user1
You are now connected to database "db1" as user "user1".
db1=> create table tbl_test as select * from pg_class;
SELECT 341
db1=> select count(*) from tbl_test;
count
-------
341
(1 row)
db1=> \c db2 hr
You are now connected to database "db2" as user "hr".
db2=> create table tbl_test1 (id int);
CREATE TABLE
db2=> insert into tbl_test1 select * from generate_series(1,100000) order by random();
INSERT 0 100000
db2=> update tbl_test1 set id=1 where id between 1 and 100;
UPDATE 100
db2=>
db2=> \c db3 admin
You are now connected to database "db3" as user "admin".
db3=>
db3=>
db3=> create table tbl_test2 (id int,name varchar);
CREATE TABLE
db3=>
db3=>
db3=> insert into tbl_test2 values(generate_series(1,100000),'data '||generate_series(1,100000));
INSERT 0 100000
db3=>
db3=>
db3=> delete from tbl_test2 where id between 1 and 6000;
DELETE 6000
db3=>
db3=>
db3=> select count(*) from tbl_test2;
count
-------
94000
(1 row)
db3=> \q
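For a more repeatable load than the ad-hoc statements above, one option is to generate a batch of SQL and replay it with psql. A small sketch (the statements target the tbl_test1 table created in db2 above; the commented psql step assumes the demo cluster is still up on port 5005):

```shell
# Write 100 throwaway INSERTs into a batch file.
for i in $(seq 1 100); do
  echo "insert into tbl_test1 values ($i);"
done > /tmp/load.sql
wc -l < /tmp/load.sql   # 100 lines
# Replay it against the demo cluster (needs the server running):
# psql -p 5005 -U hr -d db2 -f /tmp/load.sql
```

pgbench, which ships with PostgreSQL, is another easy way to produce a realistic mixed load for the report.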
-- Now download pgBadger and copy it to a location of your choice
https://sourceforge.net/projects/pgbadger/
Example:
#
[root@postgres Postgres Software]# cp pgbadger-master.zip /opt
[root@postgres Postgres Software]# cd /opt
[root@postgres opt]# unzip pgbadger-master.zip
[root@postgres opt]# cd /opt/pgbadger-master
[root@postgres pgbadger-master]# pwd
/opt/pgbadger-master
[root@postgres pgbadger-master]# cd /opt/PostgreSQL/10/pgdata/log
[root@postgres log]# ls
postgresql-2020-04-19_221933.log
[root@postgres log]# cd /opt/pgbadger-master
[root@postgres pgbadger-master]#
[root@postgres pgbadger-master]# ./pgbadger -f stderr -o report.html /opt/PostgreSQL/10/pgdata/log/postgresql-2020-04-19_221933.log
[========================>] Parsed 15468 bytes of 15468 (100.00%), queries: 28, events: 3
LOG: Ok, generating html report...
[root@postgres pgbadger-master]#
[root@postgres pgbadger-master]# ls -ltr
total 2148
drwxr-xr-x. 2 root root 4096 Apr 18 2016 tools
-rw-r--r--. 1 root root 30433 Apr 18 2016 README
-rwxr-xr-x. 1 root root 1219877 Apr 18 2016 pgbadger
-rw-r--r--. 1 root root 334 Apr 18 2016 META.yml
-rw-r--r--. 1 root root 81 Apr 18 2016 MANIFEST
-rw-r--r--. 1 root root 1400 Apr 18 2016 Makefile.PL
-rw-r--r--. 1 root root 903 Apr 18 2016 LICENSE
drwxr-xr-x. 2 root root 4096 Apr 18 2016 doc
-rw-r--r--. 1 root root 347 Apr 18 2016 CONTRIBUTING.md
-rw-r--r--. 1 root root 81420 Apr 18 2016 ChangeLog
-rw-r--r--. 1 root root 832859 Apr 19 22:41 report.html
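Beyond `-f` and `-o`, a few pgBadger switches tend to be useful on bigger logs; the exact set varies by version, so treat these as hints and check `./pgbadger --help` on your copy. The command is built as a string here only so the snippet runs without the tool installed:

```shell
logfile=/opt/PostgreSQL/10/pgdata/log/postgresql-2020-04-19_221933.log
# -j 4 : parse with 4 parallel processes;
# -p   : state the log_line_prefix explicitly in case auto-detection misses it.
cmd="./pgbadger -f stderr -j 4 -p '%t [%p]: [%l-1] user=%u,db=%d ' -o report.html $logfile"
echo "$cmd"
```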
Now download report.html to your desktop machine and open it in a browser to analyze it.
Here is one sample screenshot: