I would like to see how much of an overhead AppMon causes on the CPU where the AppMon agent is in place. We were told that it is around 3% of the CPU time. We are performing some load testing on the server and we would like to see visually (through a graph) the overhead introduced by AppMon. Would we be able to do that?
AppMon comes with a built-in measure which gives you an estimate of the CPU-time overhead caused by the agent. You can chart it by adding the following series to a chart:
System Profile "AppMon Self-Monitoring" > AppMon System Performance > Estimated CPU-Overhead.
This measure gives a rough lower-bound estimate of the CPU-time overhead caused by the agent (in percent of the total CPU time). But please keep in mind that the overhead depends on many factors. Most importantly, it depends on which methods are instrumented: if you instrument too many methods, the overhead might be well over 100%.
Another way to see the CPU-time overhead is to run your application twice: once with no instrumentation and once with your instrumentation settings. Then you can compare the CPU utilization of the two runs (Measure: Process Performance > Current CPU load).
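The two-run comparison boils down to simple arithmetic. A minimal sketch in plain Python (the function name and the CPU-load figures are illustrative, not AppMon output):

```python
def overhead_percent(baseline_cpu, instrumented_cpu):
    """Relative CPU overhead of the instrumented run vs. the baseline, in percent."""
    return (instrumented_cpu - baseline_cpu) / baseline_cpu * 100.0

# Hypothetical average "Current CPU load" values from the two runs:
baseline = 40.0      # % CPU, uninstrumented run
instrumented = 41.2  # % CPU, same load test with instrumentation enabled

print(f"Estimated overhead: {overhead_percent(baseline, instrumented):.1f}%")
# prints: Estimated overhead: 3.0%
```

Make sure both runs use the same load profile and duration, otherwise the comparison is meaningless.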
An additional way would be to turn on CPU sampling while you run your test. The resulting sample overview will show you the amount/percentage of time being spent in AppMon execution.
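Conceptually, the sample overview just counts how many sampled stack snapshots land in agent code versus your own. A hedged sketch of that idea (the package prefix and the sample data below are assumptions for illustration, not actual AppMon output):

```python
# Hypothetical top-of-stack frames collected during a CPU sampling session.
samples = [
    "com.example.shop.CartService.addItem",
    "com.dynatrace.agent.Sensor.capture",  # assumed agent package prefix
    "com.example.shop.CartService.addItem",
    "com.example.db.OrderDao.save",
]

AGENT_PREFIX = "com.dynatrace."  # illustrative; the real prefix may differ

# Fraction of samples that fell inside agent code:
agent_samples = sum(1 for frame in samples if frame.startswith(AGENT_PREFIX))
share = agent_samples / len(samples) * 100.0
print(f"Time spent in agent code: {share:.0f}% of samples")
# prints: Time spent in agent code: 25% of samples
```

The sampling approach is statistical, so the longer the test runs, the more reliable the percentage becomes.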
I am trying to determine the overhead of my Production Profile and I cannot see any agents when I select this value in the dynaTrace Self Monitoring profile.
Does this indicate something isn't set up right?
Also, are the two methods given above by Gunther still the best way to determine profile overhead?
When you create a new dashboard and add a chart, the Measure Picker dialog will show you the splitting-value options for the timeframe defined in the dashboard. By default the timeframe of a dashboard is 30 minutes. So if you haven't had agents connected within the last 30 minutes, you wouldn't see a value here.
The good news is that you can still add the measure shown in your screenshot. If you then change the timeframe of your dashboard to, e.g., the last 7 days, the chart will automatically show measures for those agents that were active in the last 7 days.
Please give this a try.
As Reinhard pointed out, if you turn on CPU sampling and then open the report/analysis for a sample, you can show the percentage of CPU and CPU time consumed by the dynaTrace agent, and how it compares to the rest of the methods/APIs for that period.
Also, showing the CPU and memory metrics before and after instrumentation clears up a lot of questions and doubts.
Hope this helps,
How many custom sensor packs do you have in place? Also take a look at the level of detail in those sensor configurations, e.g. aggregate DB and exception sensors, review your string capture limit, and consider reducing the number of Auto Sensors placed.
Hello @Sreedhar M.
Auto Sensors adjust automatically to ensure they do not incur more overhead than the percentage setting. The amount of detail captured depends on the complexity of the application or environment. You may need to adjust the resolution settings until you see the desired level of detail for a tolerable amount of overhead.
Have a look at the link below: 'How to reduce overhead in monitored systems'.