Postgres Database Resources Monitor

Monitoring the health and performance of your PostgreSQL database is not just a technical requirement; it is an investment in your users' experience. A well-tuned, responsive database minimizes delays, ensuring smooth interactions, reducing timeouts, and delivering data promptly in the Figaf application. Inefficiencies or misconfigurations in the database can lead to slow responses, adversely affecting everything from page load times to data availability.

When performance issues arise, they can result in:

  • Decreased Responsiveness: Delays in processing operations can create noticeable lags during user interactions. For instance, pages such as the tracked objects synchronization page, the landscape overview page, and the support tool page may exhibit slower load times.

  • Unpredictable Behavior: Errors or timeouts caused by underlying database performance problems can disrupt the user experience.

The guide below details a step-by-step process to troubleshoot and optimize your PostgreSQL-backed application, ensuring a consistently high-performance environment for your users.

By identifying and addressing these issues early, you ensure that your PostgreSQL database efficiently handles user requests, thereby enhancing overall performance.

Monitor and identify performance issues by checking the API's response times in Glowroot:
  1. Visit Glowroot:
Open <url-figaf>/instance-monitor (this path is configured by default in the BTP installation).

  2. Review Slow Traces:
    Look for slow-performing queries in the slow traces section.



Observe how long the JDBC queries take in the trace details.



If you determine that the operation is taking too long, connect to your database and use the commands below for the BTP installation:

  1. Connect to the SAP PostgreSQL database by following the article https://help.figaf.com/portal/en/kb/articles/connect-to-sap-psql
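
     For the BTP installation, the connection typically goes through an SSH tunnel opened with the Cloud Foundry CLI. The sketch below is illustrative only; the app name, service instance, key name, host, port, and database name are placeholders, and the linked article remains the authoritative reference:

       # Read the database host, port, user, and password from a service key
       cf service-key figaf-db figaf-db-key

       # Open an SSH tunnel through the app to the database host
       cf ssh figaf-app -L 63306:<db-host>:<db-port>

       # In a second terminal, connect through the tunnel
       psql "host=localhost port=63306 dbname=<db-name> user=<db-user>"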

  2. Identify Dead Tuples:

    Execute the following query on the problematic tables (e.g., tracked_object and blob_store_entry) to check for dead tuples. A high number of dead tuples may indicate that many records have been deleted or updated:

    SELECT schemaname,
           relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 2) AS dead_pct
    FROM pg_stat_user_tables
    WHERE relname IN ('tracked_object', 'blob_store_entry');
    
  3. Clean Up Dead Tuples:

    If you find a significant number of dead tuples, run:

    VACUUM FULL ANALYZE irt.tracked_object;
    VACUUM FULL ANALYZE irt.blob_store_entry;
    

    These commands may take some time to execute and hold an exclusive lock on the table while they run, so schedule them for a low-traffic period. They reclaim disk space and update query statistics.

  4. Assess Disk Reads vs. Memory Hits:

    If problems persist, run the following query to compare disk block reads (blks_read) against buffer-cache hits (blks_hit). If the cache hit rate is below 80%, consider adding more memory. You may also adjust the PostgreSQL parameters shared_buffers and effective_cache_size.

    SELECT datname,
           numbackends,
           xact_commit,
           xact_rollback,
           blks_read,
           blks_hit
    FROM pg_stat_database;
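
    To turn those counters into a single figure, you can compute the cache hit percentage directly. This is a convenience variant of the query above; the 80% guideline from this step applies to the computed value:

    SELECT datname,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
    FROM pg_stat_database
    WHERE datname = current_database();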
    

Recommendation: For databases over 13 GB in size, it is advisable to use at least 4 GB of memory and allocate around 20 GB of disk space.
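
As an illustration of adjusting the two parameters mentioned in step 4, the values below are examples sized for a roughly 4 GB instance, not recommendations for your system. On a managed BTP service you may need to change these through the service's configuration parameters rather than ALTER SYSTEM, and shared_buffers only takes effect after a restart:

    ALTER SYSTEM SET shared_buffers = '1GB';        -- often ~25% of available RAM; requires restart
    ALTER SYSTEM SET effective_cache_size = '3GB';  -- planner hint, often ~50-75% of available RAM
    SELECT pg_reload_conf();                        -- applies reloadable settings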

