Postgres Database Resources Monitor

Monitoring the health and performance of your PostgreSQL database is not just a technical requirement; it is an investment in your users’ experience. A well-tuned, responsive database minimizes delays, ensures smooth interactions, reduces timeouts, and delivers data promptly in the Figaf application. Inefficiencies or misconfigurations in the database can lead to slow responses, adversely affecting everything from page load times to data availability.

When performance issues arise, they can result in:

  • Decreased Responsiveness: Delays in processing operations can create noticeable lags during user interactions. For instance, pages such as the tracked objects synchronization page, the landscape overview page, and the support tool page may exhibit slower load times.

  • Unpredictable Behavior: Errors or timeouts caused by underlying database performance problems can disrupt the user experience.

The guide below details a step-by-step process to troubleshoot and optimize your PostgreSQL-backed application, ensuring a consistently high-performance environment for your users.

By identifying and addressing these issues early, you ensure that your PostgreSQL database efficiently handles user requests, thereby enhancing overall performance.

Monitor and identify performance issues by checking the API’s response times in Glowroot:
  1. Visit Glowroot:
    Go to /instance-monitor (configured by default in the BTP installation) at <url-figaf>/instance-monitor.

  2. Review Slow Traces:
    Look for slow-performing queries in the slow traces section.



Observe how long each JDBC query took in the slow trace details.



If you determine that an operation is taking too long, connect to your database and run the commands below (these apply to the BTP installation):

  1. Enable SSH for the App:

    cf enable-ssh <app-id from vars.yml>-app
    
  2. Restart the App:

    cf restart <app-id from vars.yml>-app
    
  3. SSH into the App:

    cf ssh <app-id from vars.yml>-app
    
  4. Connect to PostgreSQL:

    psql -h <REMOTE HOST> -p <REMOTE PORT> -U <DB_USER> <DB_NAME>
    
  5. Identify Dead Tuples:

    Execute the following query on problematic tables (e.g., tracked_object and blob_store_entry) to count live and dead tuples. A high dead-tuple count usually indicates that many records have been deleted or updated without the space being reclaimed:

    SELECT schemaname,
           relname,
           n_live_tup,
           n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'tracked_object'; -- also check blob_store_entry
    
  6. Clean Up Dead Tuples:

    If you find a significant number of dead tuples, run:

    VACUUM FULL ANALYZE irt.tracked_object;
    VACUUM FULL ANALYZE irt.blob_store_entry;
    

    These commands may take some time to execute, but they reclaim disk space and refresh query statistics. Note that VACUUM FULL takes an exclusive lock on each table while it runs, so schedule it during a low-traffic period.

  7. Assess Disk Reads vs. Memory Hits:

    If problems persist, run the following query to compare disk reads with buffer-cache hits. The cache hit rate is blks_hit / (blks_hit + blks_read); if it is below 80%, consider adding more memory. You may also tune the PostgreSQL parameters shared_buffers and effective_cache_size.

    SELECT datname,
           numbackends,
           xact_commit,
           xact_rollback,
           blks_read,
           blks_hit
    FROM pg_stat_database;
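The 80% guideline in step 7 can be checked with quick arithmetic on the blks_hit and blks_read values the query returns. A minimal shell sketch, using made-up example figures rather than real database output:

```shell
#!/bin/sh
# Hypothetical counter values - substitute the blks_hit / blks_read
# numbers that pg_stat_database returns for your database.
BLKS_HIT=700000
BLKS_READ=300000

# Percentage of block requests served from shared buffers (integer math).
HIT_RATE=$(( BLKS_HIT * 100 / (BLKS_HIT + BLKS_READ) ))
echo "buffer cache hit rate: ${HIT_RATE}%"

# Below the 80% guideline: consider more memory, or tune
# shared_buffers / effective_cache_size.
if [ "$HIT_RATE" -lt 80 ]; then
  echo "hit rate below 80% - consider adding memory"
fi
```

With the example figures above, the script reports a 70% hit rate and flags it as below the guideline.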
    

Recommendation: For databases over 13 GB in size, it is advisable to use at least 4 GB of memory and allocate around 20 GB of disk space.
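To see whether your database is near the size mentioned above, you can check it from the same psql session; pg_database_size and pg_size_pretty are standard PostgreSQL functions:

```sql
-- On-disk size of the database you are currently connected to
SELECT pg_size_pretty(pg_database_size(current_database()));
```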

