LOG: autovacuum launcher started
LOG: database system is ready to accept connections
Use OS commands to check the disk space in use and determine whether the DB server is under heavy load. Please run the following command on the Linux DB VM (Virtual Machine) to check the available disk space:
df -h
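Illustrative output (the values here are hypothetical), showing the full repository partition described in the workaround below:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc1        50G   50G     0 100% /postgresql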
For Windows, check Task Manager or the hard drive properties.
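The same check can also be scripted from a Windows command prompt, for example:
wmic logicaldisk get caption,freespace,size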
Workaround:
If the df -h output on the Linux DB server shows 100% used on the /dev/sdc1 partition, and the FMS Database repository is running on a virtual machine, please use this solution to increase the disk size: SOL206354
For better performance, the database machine should run on an SSD drive.
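To confirm from Linux whether a disk presents as an SSD (ROTA 0) or a rotational drive (ROTA 1), the following can be used; note that inside a VM this reports the virtual disk, not necessarily the physical storage:
lsblk -d -o NAME,ROTA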
Note: when making changes on the DB server, please follow this order:
1. Stop the FMS before allocating the additional 6 GB of memory to the DB server. To stop the FMS, do the following:
A). On a Linux FMS server, navigate to the <FMS_HOME>/bin directory and run:
./fmsShutdown.sh
B). On a Windows FMS server, navigate to %FMS_HOME%\bin and run:
fms -stop
Or stop the Foglight Windows service.
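The service can also be stopped from an elevated command prompt; the service name below is an assumption, so confirm the exact name in services.msc first:
net stop "Foglight"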
C). Run the top command to confirm that the FMS has stopped completely.
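Alternatively, filter the process list for the FMS Java process (the search string here is an assumption; adjust it to match your installation):
ps -ef | grep -i foglight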
2. Shut down the FMS database server:
A). On Linux, go to the database server and run:
service postgresql stop
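On systemd-based distributions, the equivalent command is the following (the unit name can vary, e.g. postgresql-<version>):
systemctl stop postgresql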
B). Run the top command to confirm that the PostgreSQL processes have stopped completely.
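Alternatively, check the server status with PostgreSQL's own control utility, run as the postgres user with pg_ctl on the PATH; the data directory below is taken from the file locations later in this article, so adjust it to your installation:
pg_ctl status -D /postgresql/db/data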
3. Make all necessary changes, then restart the DB server, and finally start the FMS server.
If you get Out of Memory error messages, please configure a full memory reservation for the database machine.
Check this solution for more information: CPU and memory virtual machine (VM) reservations and Foglight (SOL176968)
Note: After allocating more resources, if database performance is still an issue, check the repository's buffer pool setting. On a MySQL-based repository (see below for the PostgreSQL equivalent):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
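For a PostgreSQL repository, the analogous server setting is shared_buffers; it can be checked from psql with standard PostgreSQL syntax (this statement is not part of the original command list):
SHOW shared_buffers;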
Possible Actions:
From Linux, run the following from the <FMS_HOME>/bin directory:
./fglcmd.sh -usr foglight -pwd foglight -cmd util:topologyexport -f handlers.xml -topology_query CatalystPersistenceHandler
./fglcmd.sh -usr foglight -pwd foglight -cmd util:metricexport -f retrieve-last-n-values-time.csv -metric_query "retrieveLastNValuesTime from CatalystPersistenceHandler for 1 week" -output_format csv
./fglcmd.sh -usr foglight -pwd foglight -cmd util:metricexport -f retrieve-time.csv -metric_query "retrieveTime from CatalystPersistenceHandler for 1 week" -output_format csv
./fglcmd.sh -usr foglight -pwd foglight -cmd util:metricexport -f retrieve-earliest-time.csv -metric_query "retrieveEarliestTimeTime from CatalystPersistenceHandler for 1 week" -output_format csv
To check the running PostgreSQL processes on the Linux DB server:
ps auxww | grep ^postgres
From Windows, run the equivalent commands from %FMS_HOME%\bin:
fglcmd -usr foglight -pwd foglight -cmd util:topologyexport -f handlers.xml -topology_query CatalystPersistenceHandler
fglcmd -usr foglight -pwd foglight -cmd util:metricexport -f retrieve-last-n-values-time.csv -metric_query "retrieveLastNValuesTime from CatalystPersistenceHandler for 1 week" -output_format csv
fglcmd -usr foglight -pwd foglight -cmd util:metricexport -f retrieve-time.csv -metric_query "retrieveTime from CatalystPersistenceHandler for 1 week" -output_format csv
fglcmd -usr foglight -pwd foglight -cmd util:metricexport -f retrieve-earliest-time.csv -metric_query "retrieveEarliestTimeTime from CatalystPersistenceHandler for 1 week" -output_format csv
What to look for in the queries:
The metrics returned in the CSV files are in milliseconds. Look for AVERAGE values of 500 or greater, or MAX values greater than 10000. If these values are consistently high, the database is failing to keep up with the load, causing the Management Server to be slow.
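As a quick triage of the exported CSV files, a filter like the following can flag high averages. This is a sketch only: it assumes the AVERAGE value is in the second comma-separated column, so check the CSV header and adjust the field number accordingly:
awk -F, 'NR > 1 && $2+0 >= 500' retrieve-time.csv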
Reference: Foglight - Performance Tuning Field Guide
1. Please zip the postgres logs directory and provide the postgresql.conf file, as well as the system log file, located here:
/postgresql/db/data/postgresql.conf
/postgresql/db/logs
/var/log/messages
This is one example of how to zip the logs from Linux:
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system /postgresql/db/logs
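To verify the contents of the resulting archive before sending it:
tar -tzf backup.tar.gz | head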
From Windows, simply zip the /postgresql/db/logs directory.
Instructions on how to run a Groovy script from the Script Console can be found in SOL232352.
If database performance does not improve, please provide all of the above information to Support.