The following article is shared by Derek Abing, Performance Engineer and long-term Compuware APM/dynaTrace power user. He has onboarded colleagues over the past several years and shares his best practices for the steps he takes to get his developers started with dynaTrace.

  1. Educate developers that they can also analyze their unit tests with dynaTrace -> and -> move this to Continuous Integration, where they automatically get full dynaTrace analytics on every build. When focusing on things like # of DB calls, # of SQL statements, etc., they will immediately identify any “architectural” regressions
  2. How to analyze transactions on the local workstation -> learn what’s going on in your code + 3rd party code -> look for exceptions + database
  3. How to define custom sensors and how to “push” this to Test and Prod in order to get more context information from these environments
  4. How to do memory analysis
  5. How to compare data from different sessions

Educate developers that they can also analyze their unit tests with dynaTrace -> and -> move this to Continuous Integration, where they automatically get full dynaTrace analytics on every build. When focusing on things like # of DB calls, # of SQL statements, etc., they will immediately identify any “architectural” regressions

Compuware dynaTrace is a great APM solution, but don’t take my word for it: Gartner, Inc. placed Compuware in the “Leaders” quadrant of its “Magic Quadrant for Application Performance Monitoring (APM)” report. Aside from being an APM tool, dynaTrace also acts as a great enterprise-level monitoring tool thanks to its plugin capabilities. This article touches on how to get your developers and users to take advantage of the tool so that they start thinking about performance and scalability every day in their application development and build process. This should allow them to find problems before they ever hit QA, let alone Production.

Like any tool, dynaTrace is an investment; it’s only as useful as you make it. To get your developers and users to use the tool and rely on it, they first need to become familiar with it and be taught how to use it effectively. Compuware can certainly put on a training session, but if you are familiar enough with the tool, you can easily develop your own training class. We developed a two-day training curriculum for our developers that combines lectures and hands-on exercises covering how to use the tool, how to analyze PurePaths, the core terminology, and so on, tailoring the training to the specific applications and technologies they support. This was a tremendous success, and we saw a large increase in demand from developers wanting to instrument their own local applications during development. Developers now know how to use the tool and speak the same terminology with one another, which increases productivity even more.

The first day of the training was mostly lecture and covered everything from what a System Profile is to what each setting inside it means: how to configure measures, thresholds, incidents, business transactions, sensors, and so on. We covered how to actually view the data inside their application, describing and showing all of the available Dashlets and what they mean, and, of course, how to instrument their local applications. The second day was much more hands-on: getting the client installed, analyzing their PurePaths and other data specific to their applications, dashboarding, and basic first steps and how-tos, along with custom exercises developed to let them practice what they had learned.
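To tie this back to point 1 above: the unit tests themselves don’t need any dynaTrace-specific code. Here is a minimal sketch, assuming the test JVM is launched with the dynaTrace agent attached; the class names and the agent path shown are hypothetical.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Plain JUnit test -- nothing dynaTrace-specific in the code itself.
// If CI launches the test JVM with the agent attached, roughly:
//   -agentpath:<dt_home>/lib64/libdtagent.so=name=UnitTests,server=<collector>
// every run produces PurePaths, so the build can compare metrics such as
// the number of DB calls or SQL statements from build to build.
public class OrderServiceTest {

    // Minimal stand-ins so the sketch compiles; in a real project these
    // would be your production classes under test.
    static class Order { final int id; Order(int id) { this.id = id; } }
    static class OrderService { Order load(int id) { return new Order(id); } }

    @Test
    public void loadOrderReturnsRequestedOrder() {
        Order order = new OrderService().load(42);
        assertEquals(42, order.id);
        // Any JDBC calls load() makes are captured by the agent; a jump in
        // query count on a later build is an "architectural" regression.
    }
}
```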

Another topic of discussion during the training was how to integrate load testing tools such as LoadRunner with dynaTrace. Doing so gives you the added benefit of tagging the PurePaths associated with a load test so you can look at the data collected during a specific scenario/test. Once you’ve built the LoadRunner scripts for your particular test, dynaTrace can convert the script by adding custom dynaTrace headers for tagging. To convert the script, open the dynaTrace Client and go to Tools > LoadRunner Script Converter. Point to the script you wish to convert and click the [Patch] button when you are ready; this adds the appropriate headers. You can also [Unpatch] the script to remove them again. There is a Dashlet specifically for viewing tagged web requests, found under the System Profile > Diagnose Performance > Tagged Web Requests.
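Under the hood, the patched script simply sends an extra HTTP header with each request. As a rough illustration (the header attribute names and the URL below are assumptions, not the exact format the converter emits), a hand-rolled Java driver could tag its requests like this:

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of manually tagging a web request the way a patched LoadRunner
// script does: dynaTrace recognizes the X-dynaTrace header and marks the
// resulting PurePath as a tagged web request. The attribute names here
// (NA = step name, VU = virtual user) are illustrative only.
public class TaggedRequest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/myapp/checkout"); // hypothetical app
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("X-dynaTrace", "NA=Checkout_Step1;VU=42");
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```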

Note the images below showing the script before and after the dynaTrace conversion, and the finished view in the Tagged Web Requests Dashlet.

Script before Conversion:


Conversion:


Script after Conversion:


Tagged Web Requests:


How to analyze transactions on the local workstation -> learn what’s going on in your code + 3rd party code -> look for exceptions + database

Now that your application is instrumented, let’s touch on some key terminology and where to do some initial troubleshooting, whether there is a problem or you simply want to monitor the data flowing through your application.

The key term that dynaTrace revolves around, and the one you’ll hear over and over, is the PurePath. The PurePath is the core technology behind everything in dynaTrace: a trace through a system of applications that gives you a full end-to-end view of what happened in a transaction. A PurePath gives you response time information as well as context-level information (e.g., SQL statements, logging messages, arguments, methods executed, etc.).

Third Party Calls

Current applications depend more and more on external services, be it the ubiquitous Facebook or Twitter integrations on a web page or calls to external systems via web services or message queues. dynaTrace makes these external calls much more obvious and enables you to understand which third party service affects which parts of your application, so you can easily spot dependencies and bottlenecks.

For dynaTrace installations using User Experience Monitoring, Page Actions involving content from third party servers are visualized in a special way in the Transaction Flow Dashlet: such transactions have so-called Third Party nodes.
The time value displayed for third party calls is the average response time per call. These times are not included in the overall response time calculation, since most third party calls run asynchronously.

How to do memory analysis

Memory analysis is very important if you want your application to both perform and scale, and it can be done easily with dynaTrace. You should be monitoring memory trends regardless of whether there is any load on the application, because if memory is continuously growing, you should analyze that immediately. There is a pre-built dashboard detailing memory consumption over a period of time; it can be found under the START Center > Memory Diagnosis. Select the appropriate System Profile in the corner and choose Analyze Memory Usage.

Whenever you monitor memory you also need to keep an eye on GC behavior. Heavy GC can lead to high CPU utilization and degraded response times due to suspensions. Typically there are two main contributors to an increasing memory trend: a misconfigured GC/memory setting or an actual application problem. A general rule of thumb: if memory is always high with occasional GCs but no OutOfMemory error, you should analyze the constant high memory usage and also identify how those GCs are impacting your response times. If memory grows slowly, but a major GC gets it back down again only to let it slowly increase once more, then you might simply have your GC/memory settings configured wrong. If memory grows to the point of an OutOfMemory error, you should try to pinpoint a memory leak.

To pinpoint memory leaks, browse to the START Center > Diagnose Applications/Diagnose Memory and select Pinpoint Memory Leaks. This view shows where any memory dumps or snapshots have been performed. Create a memory snapshot to analyze the heap, identify memory hotspots, and find potential memory leaks. dynaTrace gives you the option of a Deep Memory Leak Analysis, which contains the full heap data and provides the most detail, but causes the application to be suspended while it runs. The Memory Consumption Trending option is more lightweight and won’t suspend the application, but you won’t get as much detail either, as it only contains instance counts per class. In both cases you need to decide whether or not to force a GC first. If you are looking for a memory leak, i.e., something that consumes memory all the time and is not load dependent, then forcing a GC is a good option. If you are also interested in objects that have not yet been garbage collected because of a non-optimal application or GC setting, then you should not force the GC.
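For context, the classic pattern such a snapshot surfaces looks like the deliberately simplified sketch below (class and method names are made up): a collection reachable from a static field grows on every request, so even a forced GC cannot reclaim it.

```java
import java.util.ArrayList;
import java.util.List;

// Deliberately simplified memory leak: entries are added on every request
// but never removed. Because the list is reachable from a static field, a
// major GC cannot reclaim the payloads, and in a memory snapshot the
// ArrayList (and its backing Object[]) dominates the keep-alive set.
public class RequestCache {
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest(byte[] payload) {
        CACHE.add(payload); // kept alive forever via the static reference
    }
}
```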

By default, you cannot take any sort of memory snapshot for .NET applications because of the increased overhead it can cause. To enable this, edit the Agent Mapping in the System Profile by selecting the Advanced button. I wouldn’t recommend keeping this on all the time, though, because it will contribute to longer suspensions in your transactions.

After the snapshot has completed and been processed, there are various tabs we can look at to get the details. In the Hotspots view, we see what holds most of the memory in a JVM. This view can be used to look for memory leaks or simply to understand which objects keep most of the other objects alive. Here we see that 52% is held by a HashMap Segment, which might be perfectly fine, but we want to understand it better.

We can easily see what that consists of by right-clicking the hotspot and choosing Keep Alive Set, or check what holds on to the object by using Follow References instead. The Keep Alive Set shows only the instances kept alive by the selected one, as an aggregated set with instance counts; the GC Size indicates how much memory would be freed if the object were garbage collected. Follow References, on the other hand, does not aggregate but shows the real reference tree: it follows the object structure directly, allowing you to see exactly which object holds the reference.

Other areas of interest are the HTTP Sessions and Duplicated Strings views. HTTP sessions are often responsible for high memory usage because too many things are kept in them. Eliminating duplicated strings can be an easy way to optimize memory.
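As a minimal illustration of the kind of fix the Duplicated Strings view points you toward (the class below is hypothetical): when parsing creates many equal but distinct String instances, canonicalizing them lets all occurrences share one object.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a parser that reads millions of rows often creates
// many equal-but-distinct Strings (status codes, country names, ...).
// Mapping each value to one canonical instance removes the duplicates the
// Duplicated Strings view reports. String.intern() is an alternative, at
// the cost of filling the JVM-wide intern pool.
public class StringCanonicalizer {
    private final Map<String, String> pool = new HashMap<>();

    public String canonicalize(String value) {
        return pool.computeIfAbsent(value, v -> v); // first instance wins
    }
}
```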

How to define custom sensors and how to “push” this to Test and Prod in order to get more context information from these environments

Out of the box, dynaTrace captures a wealth of information flowing through your application, and with the introduction of Auto Sensors it captures even more information with even less overhead. There are a lot of out-of-the-box sensors you can immediately take advantage of; depending on the technology selected in the System Profile, dynaTrace automatically selects the sensors important to that technology.

Entry Points ensure a new PurePath gets started. Sensors capture context-level information (e.g., database calls, SQL statements, methods and classes being executed, logging messages, etc.) that can be seen in the PurePaths for further diagnostics, or that you can feed into a Business Transaction to further analyze the data. Lastly, sensors are responsible for following the PurePath across multiple tiers, which is what stitches together the full end-to-end PurePath.

Aside from out-of-the-box sensors and Auto Sensors, you have the ability to place Custom Sensors. Custom Sensors are configured within the System Profile and allow you to capture class and method calls that dynaTrace might not otherwise see. Be careful when placing Custom Sensors, though: don’t take a “shotgun” approach by instrumenting everything, because you can easily over-instrument your application and cause it to perform poorly or even prevent it from starting up altogether. The general rule of thumb here is to be as specific as possible; the sketch below illustrates the difference.
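As a concrete (hypothetical) example of what “specific” means: given the class below, a well-scoped Custom Sensor rule would target exactly com.example.billing.InvoiceCalculator.calculateTotals, rather than a wildcard such as com.example.*, which would instrument every class in the package tree.

```java
package com.example.billing;

// Hypothetical business class. A well-scoped Custom Sensor rule names this
// class and method exactly, so only these invocations show up as extra
// nodes in the PurePath -- instead of wildcarding the whole package and
// paying instrumentation overhead everywhere.
public class InvoiceCalculator {

    public double calculateTotals(double[] lineItems) {
        double total = 0.0;
        for (double item : lineItems) {
            total += item;
        }
        return total;
    }
}
```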

You can place custom sensors in different groups and then assign those Sensor Groups to any Agent Group within the System Profile which allows you to enable or disable sensors on the fly with Hot Sensor Placement. You can also copy/paste Custom Sensors into different System Profiles or even different dynaTrace environments. So if you have a local application on your developer workstation with custom sensors that you want to move to Production, you can accomplish this easily by just copying from your development System Profile up to the System Profile in Production.

Placing a sensor impacts the byte code instrumentation of the Agents that map to the Agent Group/Tier. Certain sensors have extra properties you can customize even further: enabling bind value capturing for SQL statements, including or excluding certain logging messages or exceptions, and capturing additional Servlet/ASP.NET parameters and attributes. Changes to sensor properties are activated automatically on the fly and don’t require a restart of the application or a Hot Sensor Placement.

Keep in mind that you only want to capture information you care about. This limits overhead on the instrumented application and also improves the overall performance of your dynaTrace environment:

  • Only capture context data you need for BTs and Root Cause Analysis
  • Avoid using “*” (asterisk) in the Servlet/ASP.NET Sensor
  • Reduce the length of SQL capturing and bind values in ADO.NET/JDBC
  • Exclude Exceptions and Log Messages you don’t need
  • Exclude URLs that you don’t need

If you have a batch application or console app that doesn’t start a PurePath with a web request, you’ll want to place a Custom Sensor on the appropriate method call in your batch process and tell it to start a PurePath whenever that method is executed, as in the sketch below.
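A minimal sketch of such a batch process (all names are made up): you would point a Custom Sensor at processRecord and enable the start-PurePath option in its sensor properties, so each record processed becomes its own PurePath.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical nightly batch job. There is no incoming web request, so no
// PurePath starts automatically; a Custom Sensor on processRecord(),
// configured to start a PurePath, makes each iteration a separate,
// analyzable transaction.
public class NightlyBillingJob {

    public static void main(String[] args) {
        for (String record : loadRecords()) {
            processRecord(record); // one PurePath per record
        }
    }

    static void processRecord(String record) {
        // parse, calculate, write to the database ...
        System.out.println("processed " + record);
    }

    static List<String> loadRecords() {
        return Arrays.asList("rec-1", "rec-2", "rec-3"); // stand-in data
    }
}
```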

How to compare data from different sessions

dynaTrace makes it easy to compare two different sets of data, which makes it perfect for comparing, for example, how one load test ran against another, or how version 1.2 of a code release compares to 1.3. You can then build really helpful regression dashboards for your application. First you’ll need a data source, so you’ll want to reference stored sessions: either create a new stored session by initiating a session recording, or export a set of PurePaths to create one. Give the sessions names that make sense (e.g., LoadTest1 and LoadTest2). Then launch the Start Center and go to View Trends & Change > Assess Performance Changes.
Select a session and a System Profile for comparison; this launches the API Breakdown Dashlet immediately. You can change the sessions being compared at any time by selecting them from the top of the dashboard.

You can add the following Dashlets to compare your data further. Each is compatible with comparison, and once added to the dashboard it will automatically show comparison data:

  • API
  • Database
  • Browser Hotspots
  • Browser Network
  • Exceptions
  • Errors
  • Logging
  • Methods
  • Remoting
  • Web Services
  • Web Requests
  • Tagged Web Requests
  • PurePath Comparison