We are experimenting with the CPU numbers returned for Oracle servers by Oracle Insights, both in the UI and the v2 API. The values we get back are wildly high.
Here is a sample query:
DEBUG:root:Composing the initial Dynatrace V2 API get metrics call - https://xxxxxxxx.live.dynatrace.com/api/v2/metrics/query?metricSelector=builtin%3Atech.oracleDb.cd.c...
QUESTION 1 - Sometimes we see CPU values coming back in the thousands of percent. Why is that? Should we be using both background and foreground CPU per database?
QUESTION 2 - If Oracle Insights only collects data at 5- or 10-minute intervals, how does that map onto the data displayed in the UI or pulled via the API, especially when Dynatrace delivers 1-minute data back?
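For context, a minimal sketch of how a v2 metrics query like the one in the log above can be composed, with an explicit resolution so you know what granularity you are getting back. The tenant host and the metric key here are placeholders, not the truncated selector from the debug line:

```python
from urllib.parse import urlencode

def compose_metrics_query(base_url: str, metric_selector: str,
                          resolution: str = "1m") -> str:
    """Compose a Dynatrace v2 metrics query URL with an explicit resolution."""
    params = {
        "metricSelector": metric_selector,
        "resolution": resolution,  # request 1-minute datapoints explicitly
    }
    return f"{base_url}/api/v2/metrics/query?{urlencode(params)}"

# Placeholder tenant and metric key -- substitute your own values.
url = compose_metrics_query(
    "https://example.live.dynatrace.com",
    "builtin:tech.oracleDb.cpu",  # hypothetical key for illustration only
)
print(url)
```

Requesting `resolution=1m` makes it explicit whether spikes come from the raw datapoints or from aggregation at a coarser resolution.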
Any advice on how this metric should be interpreted or handled would be appreciated.
High CPU usage and spikes reported for that metric can have many different causes, e.g. CPU provisioning. It's really hard to tell the reason without investigating the case directly in the database, and even then we would need to rely on your DBA's assistance.
Oracle DB Insights collects data in 1-minute intervals. Only query statistics and wait states are collected in 5-minute intervals, as those queries consume more resources.
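To illustrate how 1-minute samples relate to a coarser view, here is a small sketch (my own averaging example, not Dynatrace's actual aggregation code) that rolls 1-minute datapoints up into 5-minute buckets:

```python
def rollup(samples, bucket=5):
    """Average consecutive 1-minute samples into bucket-minute values."""
    return [
        sum(samples[i:i + bucket]) / len(samples[i:i + bucket])
        for i in range(0, len(samples), bucket)
    ]

# Ten 1-minute CPU-% samples, including one short spike at minute 4.
one_minute_cpu = [12.0, 15.0, 11.0, 40.0, 22.0, 18.0, 20.0, 19.0, 21.0, 17.0]
print(rollup(one_minute_cpu))  # -> [20.0, 19.0]
```

Note how the 40% spike in minute 4 is smoothed away in the 5-minute view, which is why the same metric can look quite different at different resolutions.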
Keep in mind that any improvements we may come up with will only be implemented in the Oracle DB extension, not the DB Insights.
Kind regards, Darek