
Comments have been closed on this page. Please use the AppMon & UEM Open Q&A forum for questions about this plugin.

Partitioning database tables helps improve the performance of the dynaTrace Performance Warehouse for large installations. It also improves manageability, since old data can be removed very quickly without long-running DELETE statements. In the current implementation, the tables MEASUREMENT_HIGH and PERCENTILES_HIGH are subject to partitioning.

Adding Partitioning


Using partitioning adds complexity to database administration. The scripts shown here serve as examples of how to partition the MEASUREMENT_HIGH and PERCENTILES_HIGH tables; other tables are not supported. Use partitioning only for large installations, and double-check with your database administrator (e.g. regarding licensing in Oracle) before applying a script.

Change of Percentiles Duration


When the provided partitioning scripts are used, the storage duration for percentiles changes from the Low Resolution duration to the High Resolution duration.

Prerequisites

To use table partitioning, you need one of the following database systems:

Oracle Enterprise Edition with the Partitioning Option

Oracle Partitioning Licensing


In Oracle, Partitioning must be licensed separately as an Option on top of the Oracle Enterprise Edition.

Open or download Oracle_partitioning_v7.sql

Our partitioning implementation for Oracle executes a split partition operation on a so-called overflow partition. The partitioning script creates new empty daily partitions beforehand, into which Oracle automatically stores the data (depending on the timestamp range of each daily partition). Daily partitions older than the retention time are dropped.

Feature-set/description of our partitioning implementation for Oracle:

  • Fully PL/SQL based partition maintenance implementation, ready to be scheduled in a daily Oracle job
  • Creating a new day partition by executing a split partition operation on the overflow partition
  • Support for switching/purging out data with a configurable retention time in days
  • Ready to be used on existing data. No need to start with an empty database
  • Automatic update of statistics on an incremental basis (optimized in version V5)

Usage:

  • Before script execution, make sure you create a backup of the database (just in case)
  • Check the temp space settings: for the initial run of the script, temp space must be at least the size of the biggest table to be partitioned (measurement_high or percentiles_high). After the initial run, the standard setting can be used again.
  • Run/connect as a SYSTEM equivalent user
  • Disconnect the dynaTrace server from the PWH
  • Replace <your_schema> in the SQL script with the schema name of the dynaTrace database
  • Search for a local variable called highRetentionTimeInDays in the SQL script and set the value (in days) according to the High Resolution Duration configuration for the Performance Warehouse in the dynaTrace client. They MUST match!
  • Run the script, together with your DBA.
  • The last step to finish up the partitioning setup is to enable the option Manually manage deletion of high resolution data (for partitioning) in PWH storage management

Note: if you are still using Oracle 10g, please use Oracle_partitioning_v5_10g.sql instead, as the script above contains some statements not supported in 10g.
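The maintenance procedure is meant to be scheduled in a daily Oracle job (see the feature list above). As a sketch only — the procedure name P_PROCESS_DAILY_PARTITIONS is an assumption here, so check the actual name created by Oracle_partitioning_v7.sql — a DBMS_SCHEDULER job could be set up like this:

```sql
-- Sketch only: the procedure name P_PROCESS_DAILY_PARTITIONS is an assumption;
-- verify the actual name in Oracle_partitioning_v7.sql before creating the job.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DT_PARTITION_MAINTENANCE',
    job_type        => 'STORED_PROCEDURE',
    job_action      => '<your_schema>.P_PROCESS_DAILY_PARTITIONS',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=1', -- daily at 01:00, before the aging task
    enabled         => TRUE);
END;
/
```

As with the script itself, review the job definition together with your DBA before enabling it.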

 

Error handling tips

Querying FAILED Oracle partitioning maintenance job runs

The following two dba_* views can be used to query information about failed executions of the scheduled Oracle partitioning maintenance job:

Adjust <owner>


Before executing: adjust <owner> and optionally job_name in the WHERE clause!

The ADDITIONAL_INFO column in DBA_SCHEDULER_JOB_RUN_DETAILS in particular can return useful information in case something went wrong during job execution.
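The original queries are not included in this export; hedged equivalents against the standard data dictionary views (adjust <owner> first) could look like this:

```sql
-- Failed runs of the partitioning maintenance job; ADDITIONAL_INFO often
-- contains the error details. Adjust <owner> and optionally filter on job_name.
SELECT log_date, job_name, status, error#, additional_info
  FROM dba_scheduler_job_run_details
 WHERE owner = '<owner>'
   AND status <> 'SUCCEEDED'
 ORDER BY log_date DESC;

-- Current state of the scheduled job itself
SELECT job_name, state, failure_count, last_start_date
  FROM dba_scheduler_jobs
 WHERE owner = '<owner>';
```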

 

Hint: if you have deleted a large amount of data from the MEASUREMENT_HIGH table or have problems with its size, you may have to shrink the table using the following script. Please review the script with your DBAs and let them decide whether it is needed for your DB instance.
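The referenced shrink script itself is not embedded on this page. As a minimal sketch of what such a shrink involves in Oracle (assuming an ASSM-managed tablespace; review with your DBA first):

```sql
-- Reclaiming space requires row movement to be enabled first.
ALTER TABLE measurement_high ENABLE ROW MOVEMENT;
-- Compact the table (and dependent segments) and lower the high water mark.
ALTER TABLE measurement_high SHRINK SPACE CASCADE;
ALTER TABLE measurement_high DISABLE ROW MOVEMENT;
```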

Microsoft SQL Server Enterprise/Developer Edition

Microsoft SQL Server Partitioning Licensing


In Microsoft SQL Server, the partitioning feature is included in the Enterprise/Developer Edition only. Unlike Oracle, it is NOT an additional option on top of the Enterprise/Developer Edition that must be licensed separately: if you are running SQL Server Enterprise/Developer Edition, partitioning is available out of the box.

Open or download sqlserver_partitioning_v2.sql

Our partitioning implementation for Microsoft SQL Server adds new empty daily partitions beforehand and purges data older than the configured retention time by switching partitioned data into an auxiliary table, which then gets truncated. The implementation is designed for minimal SQL Server lock contention and zero data movement while adding the new empty daily partitions.

Feature-set/description of our partitioning implementation for Microsoft SQL Server:

  • Fully T-SQL based partition maintenance implementation, ready to be scheduled in a daily SQL Server job
  • Minimized SQL Server lock contention to ensure concurrency between the daily partitioning maintenance job and regular operations
  • Zero data movement while adding new empty daily partition ranges
  • Support for switching/purging out data with a configurable retention time in days
  • Ready to be used on existing MEASUREMENT_HIGH, PERCENTILES_HIGH data. No need to start with an empty database
  • Operates on the [PRIMARY] file group only!
  • The script does not include code to schedule a daily SQL Server job

Usage:

  • Before script execution, make sure you create a backup of the database (just in case)
  • Disconnect the dynaTrace server from the PWH
  • Replace <database> with the real name of the dynaTrace database
  • Search for a local variable called @highRetentionTimeInDays and set the value (in days) according to the High Resolution Duration configuration for the Performance Warehouse in the dynaTrace client. They MUST match!
  • Set the retention time while calling the stored procedure. Default is 14 days. This MUST match with the configured high retention time as well!
  • Check whether the data type of the countvalue field in the additional/new table MEASUREMENT_HIGH_PARTITION_SWITCH_OUT is exactly the same as in MEASUREMENT_HIGH. If they differ, change the new MEASUREMENT_HIGH_PARTITION_SWITCH_OUT table and leave MEASUREMENT_HIGH as it is.
  • SQL Server Job
    • It can be created with SQL Server Management Studio in a visual way. The job simply needs to include a single T-SQL step for executing the stored procedure. Make sure the stored procedure parameter matches the high retention time in days.
    • The T-SQL executed by the daily job should look like this (note that "14" is your high resolution retention time in days):

      EXECUTE dbo.P_PROCESS_DAILY_PARTITIONS 14
      GO
      
    • It needs to run once per day, ideally at least 1 hour before the regular dynaTrace aging task
  • The last step to finish up the partitioning setup is to enable the option Manually manage deletion of high resolution data (for partitioning) in PWH storage management
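As an alternative to creating the job visually in Management Studio, the same job can be sketched with the msdb job stored procedures. The job, step, and schedule names below are illustrative, and "14" must match your high resolution retention time:

```sql
USE msdb;
GO
-- Illustrative names; only the stored procedure call and retention value matter.
EXECUTE dbo.sp_add_job @job_name = N'dynaTrace daily partition maintenance';
EXECUTE dbo.sp_add_jobstep
    @job_name      = N'dynaTrace daily partition maintenance',
    @step_name     = N'Run P_PROCESS_DAILY_PARTITIONS',
    @subsystem     = N'TSQL',
    @database_name = N'<database>',
    @command       = N'EXECUTE dbo.P_PROCESS_DAILY_PARTITIONS 14';
EXECUTE dbo.sp_add_jobschedule
    @job_name          = N'dynaTrace daily partition maintenance',
    @name              = N'Daily 01:00',
    @freq_type         = 4,       -- daily
    @freq_interval     = 1,
    @active_start_time = 010000;  -- 01:00, at least 1h before the aging task
EXECUTE dbo.sp_add_jobserver @job_name = N'dynaTrace daily partition maintenance';
GO
```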

 

Useful queries

Partition boundaries

Partition row count - First approach

Partition row count - Second approach

SQL Server Job Execution Information
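The queries behind the four headings above are not included on this page; hedged equivalents built on the standard catalog views could look like this:

```sql
-- Partition boundaries of all partition functions in the database
SELECT pf.name, prv.boundary_id, prv.value
  FROM sys.partition_functions pf
  JOIN sys.partition_range_values prv ON prv.function_id = pf.function_id
 ORDER BY pf.name, prv.boundary_id;

-- Partition row count, first approach: sys.partitions
SELECT p.partition_number, p.rows
  FROM sys.partitions p
 WHERE p.object_id = OBJECT_ID('dbo.MEASUREMENT_HIGH')
   AND p.index_id IN (0, 1);

-- Partition row count, second approach: sys.dm_db_partition_stats
SELECT ps.partition_number, ps.row_count
  FROM sys.dm_db_partition_stats ps
 WHERE ps.object_id = OBJECT_ID('dbo.MEASUREMENT_HIGH')
   AND ps.index_id IN (0, 1);

-- SQL Server job execution information
SELECT j.name, h.run_date, h.run_time, h.run_status, h.message
  FROM msdb.dbo.sysjobs j
  JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
 ORDER BY h.run_date DESC, h.run_time DESC;
```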

DB2 Enterprise Edition

DB2 administrative task scheduler

Icon

In DB2, the administrative task scheduler (ATS) enables DB2 database servers to automate the execution of tasks. The daily partitioning job is implemented as such a task. The ATS is disabled by default; to enable it, see the DB2 documentation.

Open or download db2_partitionioning_v3.sql

Feature-set/description of our partitioning implementation for DB2:

  • Fully PL/SQL based partition maintenance implementation, ready to be scheduled in a daily task
  • Creating new (empty) future day partitions to avoid I/O
  • Support for switching/purging out data with a configurable retention time in days
  • Ready to be used on existing MEASUREMENT_HIGH, PERCENTILES_HIGH data. No need to start with an empty database
  • Optional automatic statistics updates for both tables

Usage:

  • Run/connect as a DBA user
  • Before script execution, make sure you create a backup of the database (just in case)
  • Disconnect the dynaTrace server from the PWH
  • Replace ALL occurrences of DYNTRC with the real schema name of the dynaTrace database
  • Search for the local variable declaration 'declare retentionDays' and set the value (in days) according to the High Resolution Duration configuration for the Performance Warehouse in the dynaTrace client. They MUST match!
  • The last step to finish up the partitioning setup is to enable the option 'Manually manage deletion of high resolution data (for partitioning)' in PWH storage management (Dynatrace Client).
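Once the ATS is enabled, the daily task can be registered with ADMIN_TASK_ADD. This is a sketch only: the procedure name PROCESS_DAILY_PARTITIONS is an assumption, so check the actual name in db2_partitionioning_v3.sql, and replace the DYNTRC schema as described above:

```sql
-- Sketch: register the daily partitioning task with the DB2 administrative
-- task scheduler. The procedure name PROCESS_DAILY_PARTITIONS is an assumption.
CALL SYSPROC.ADMIN_TASK_ADD(
  'DT_PARTITION_MAINTENANCE',   -- task name
  NULL, NULL, NULL,             -- begin/end timestamp, max invocations
  '0 1 * * *',                  -- UNIX cron format: daily at 01:00
  'DYNTRC',                     -- procedure schema (replace as in the script)
  'PROCESS_DAILY_PARTITIONS',   -- procedure name (assumption)
  NULL, NULL, NULL);            -- procedure input, options, remarks
```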

Useful queries

    List all tasks

    List the status of tasks
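The two queries referenced above are not included on this page; equivalents based on the standard ATS views could look like this:

```sql
-- List all tasks registered with the administrative task scheduler
SELECT name, procedure_schema, procedure_name, schedule
  FROM SYSTOOLS.ADMIN_TASK_LIST;

-- List the status of the most recent task executions
SELECT name, status, begin_time, end_time, sqlcode
  FROM SYSTOOLS.ADMIN_TASK_STATUS
 ORDER BY begin_time DESC;
```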

PostgreSQL

Partitioning for PostgreSQL works by creating tables that inherit from a base table. The base table contains rules that distribute inserts into the correct child table. This allows the nightly aging task to simply DROP the respective child table instead of executing many DELETE statements.
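A minimal sketch of this inheritance scheme (the child-table name, the timestamp column, and the millisecond boundaries are illustrative; the actual script generates and maintains these objects):

```sql
-- Illustrative daily child table with a CHECK constraint on the time range.
CREATE TABLE measurement_high_p20171101 (
  CHECK (timestamp >= 1509494400000 AND timestamp < 1509580800000)
) INHERITS (measurement_high);

-- A rule on the base table routes matching inserts into the child table.
CREATE RULE measurement_high_insert_p20171101 AS
  ON INSERT TO measurement_high
  WHERE (NEW.timestamp >= 1509494400000 AND NEW.timestamp < 1509580800000)
  DO INSTEAD INSERT INTO measurement_high_p20171101 VALUES (NEW.*);

-- Aging then simply drops the whole child table instead of deleting rows.
DROP TABLE measurement_high_p20171101;
```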

Please use the script PostgreSQL_partitioning.sql to set up partitioning. Note and review the instructions at the top of the script.

Usage:

  • Run/connect as the Dynatrace DB user (with sufficient privileges to set up partitioning, including creating new tables)
  • Before script execution, make sure you create a backup of the database (just in case)
  • Disconnect the dynaTrace server from the PWH
  • Search for the function called 'high_retention_time()' and set the values (in days) according to the desired retention time for normal measures (the measurement_high case) and percentiles (the percentiles_high case). The value for measurement_high MUST match the value configured in the AppMon client for high resolution retention! The default is 14 days for both values. Measurement records older than this value will be dropped during the daily partitioning job.
  • Run the script! It should be run together with the customer DBA.
  • Set up a scheduler to run the daily partitioning job. PostgreSQL does not include one by default, but there are several options available, e.g. crontab, pgAgent, Windows Scheduler, dt scheduler, etc. This job MUST execute select dynatrace_partitioning_daily_job() . See the instructions at the top of the script for examples.
  • Connect the dynaTrace server to the PWH
  • The last step to finish up the partitioning setup is to enable the option 'Manually manage deletion of high resolution data (for partitioning)' in PWH storage management (Dynatrace Client).

Data migration and cleanup:

After executing the partitioning script, the PWH schema is partitioned and new data will be stored in the partitioned tables, but the old high resolution data (one-minute resolution) is not available; data in the other resolutions remains available. To migrate the old data, run the two migration commands in the script.

These two commands are commented out because executing them may take a while (depending on the amount of data, several hours or even days). We therefore recommend running this migration outside business hours or on the weekend. You may want to reduce the chunk size (currently set to one day, which can be a lot of data in some cases) by adapting the function 'migrate_data'.

After the migration has finished (or after the configured retention time has passed), it is recommended to delete the backup tables 'measurement_high_old_data' and 'percentiles_high_old_data' by dropping them:
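For example:

```sql
-- Remove the backup tables once the migrated data is no longer needed.
DROP TABLE measurement_high_old_data;
DROP TABLE percentiles_high_old_data;
```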


PostgreSQL doesn't include a native scheduler, so you have to run the daily partitioning step from elsewhere. For example, Cron or the Windows Task Scheduler can do this; simply have the dynatrace database user execute the statement

select dynatrace_partitioning_daily_job()

sometime at night (preferably before the dynaTrace aging task, which starts by default at 02:00 AM).

 

Files

  File Modified
File DB2_partitionioning_v3.sql Oct 27, 2016 by Peter Karlhuber-Voeckl
File Oracle_partitioning_v5_10g.sql Apr 13, 2017 by Roland Ossmann
Text File Oracle_partitioning_v7.sql Nov 23, 2017 by Roland Ossmann
File PostgreSQL_partitioning.sql Oct 16, 2017 by Peter Karlhuber-Voeckl
Text File sqlserver_partitioning_v2.sql Feb 14, 2014 by Thomas Steinmaurer

  1. Anonymous (login to see details)

    Any dates for the DB2 support for partitioning. 

     

    Jibi Uthaman

    1. Anonymous (login to see details)

      No date. Sorry. Do you have a concrete DB2 customer case where partitioning might be useful?

      Thomas

  2. Anonymous (login to see details)

    We have issues with DB2 and then after a support case and no resolution decided to rebuild. We do have heavy traffic both UEM and Java agents.

  3. Anonymous (login to see details)

    It would be helpful to have some guidance on when Partitioning is recommended.  Number of agents?  Volume of transactions/day?  Total size of the database? Number of rows in Measurement_high table?

    1. Anonymous (login to see details)

      These recommendations are hard to give, because this strongly depends on the customer DB environment and its capabilities. We have seen customers with ~ 70 million records in MEASUREMENT_HIGH per day. This is a volume where we have introduced partitioning in their environment, because the aging task can't keep up with removing old data at that size. Besides the number of records, another indicator is a long-running aging task (> 10 hrs) that reaches into daily business, resulting in skipped PWH writes in the server log etc. We also don't want to (semi-)automatically recommend partitioning, because it adds additional complexity to the environment and shifts responsibility for parts of the aging task to the DB and/or DBA, so the route through the support team is recommended.

      And what we have seen a lot in the past is that customers, despite high load, wouldn't need partitioning at all if some sort of measure reduction in the customer system profiles took effect as a first step, because the customer might not be interested in a lot of measures/measurements at all, e.g. API measures, splittings by agent etc.

  4. Anonymous (login to see details)

    We are at the moment testing DB2 partitioning... fingers crossed....

    1. Anonymous (login to see details)

      Keep us updated. Partitioning in DB2 is a bit "different" compared to Oracle and SQL Server, thus we appreciate all feedback we can get. Thanks!

  5. Anonymous (login to see details)

    Update to DB2 partitioning: With the great help from Jibi, partitioning for DB2 is also available now.

    Thanks!

  6. Anonymous (login to see details)

    Thanks Alois for your help too. I was wondering how to find 'retentionDays' that is right now configured in the DB? Just in case there is a mismatch in DT client and DB. 

  7. Anonymous (login to see details)

    Since PostgreSQL also supports partitioning, are there any plans to support this for DTAM in the future?

    Thx,
    Kristof 

    1. Anonymous (login to see details)

      Hi Kristof,
      just talked to Alois and he says: Not immediately.

      What are your experiences with PostgreSQL in general or partitioning in particular? Do you / Does someone know here how long this feature is around and how stable it is?

      Thanks
      G.

    2. Anonymous (login to see details)

      We've now implemented a partitioning script for PostgreSQL. Currently we use it on one of our internal PostgreSQL performance warehouses. Let me know if you want to try it at a customer's.

  8. Anonymous (login to see details)

    It's unclear when we say that we support partitioning with PostgreSQL, whether that applies to any version of AppMon, or does this only work with version X.Y and onwards.  For example,  does 6.1 work with PostgreSQL Partitioning?  I thought this page might better belong in the regular versioned doc pages, but given partitioning is really a database issue, i'm not sure the version of AppMon  matters.   A statement in the doc pages would be great to clarify this issue.

    1. Anonymous (login to see details)

      Hi Joe,

      you're right, that's not clearly stated. Currently, all the scripts support all the supported dynatrace versions (so yes, dt 6.1 will work with postgresql partitioning), because the two tables that are being partitioned haven't changed in structure for a long time. As soon as either measurement_high or percentiles_high changes, we would need to reconsider this page here.

      The cleanest way would probably be to host the page once per version, I'll talk to the doc team about this.

      Cheers,

      Peter

  9. Anonymous (login to see details)

    Hi database experts, 

    is it really mandatory to do partitioning when switching to xLarge sizing? If the cleanup task works fine and the DB also manages the load...

    If I don't set the 'Manually manage deletion...'  everything should be fine, right ?

     

    cheers,

    Torsten

    1. Anonymous (login to see details)

      Hi Torsten,

      if everything runs fine, you should be ok without partitioning. It isn't enforced in the product, only highly recommended.

      Even though the clean up task works fine, for larger installations it will run into the business hours where it could negatively affect pwh performance for charting / storing.

      Also, our support engineers are entitled to gently nudge you towards partitioning whenever you open a support ticket (especially when it's about performance). Cheers,

      Peter

  10. Anonymous (login to see details)

    Hello Peter,

    What we will lose in case of moving PWH from Large to xLarge

    I meant for the xLarge environment we are going to do a special partitioning. 

    Are there any best practices to achieve this without losing any historic data?

    Regards,

    Babar

    1. Anonymous (login to see details)

      Hi Babar,

      if you have already partitioning:
      you have to do nothing from the performance warehouse side.

       

      if you do not have partitioning:
      move the appmon(dynatrace) server to xLarge and watch the performance warehouse for problems during measure writing (check selfmonitoring performance warehouse write dashboard).

       

      in general:

      Much more than the server sizing, the number of measurements per minute is the key metric for partitioning. If there are too many measurements to delete during the cleanup task, it produces too much load on the DB server and slows down the insertion of measurements. In such a case, the solution is partitioning.

       

      In your case, if your performance warehouse runs smoothly with a large AppMon (dT) server, it will also work with an xLarge server (because a move from large to xLarge alone does not produce more measurements).

       

      Regards,

      Alois

  11. Anonymous (login to see details)

    HI,

    Oracle Partitioning Usage in 

    • Replace <your_schema> in the SQL script with the real schema name of the dynaTrace database

    What does real schema name mean?

    1. Anonymous (login to see details)

      Hi 동민 박,

       

      I've edited the line in question. It simply means the schema name which normally is the same as the user name for Oracle. Best regards,

      Peter