Transaction Monitoring

saldrich
Participant

I was asked about the possibility of transaction monitoring for entitlement pushdowns within one of our applications.

The data currently resides only within the messages; each message has a transaction ID in the header. Monitoring is to confirm the completed flow of an async transaction between creation and the successful update at the last point. Messages flow over EDF, MQ, and REST or SOAP calls.

Is there any way Dynatrace can monitor the above, and if so, what would be the best approach to take?

Thanks.

3 REPLIES

marco_irmer
Champion

How much of this process is currently visible? More specifically, is Dynatrace already capturing distributed traces for this activity? The distributed traces would likely be the key to making this possible, through a combination of request attributes, failure detection, and anomaly detection.

Hi Marco,

The process on our OLBB side has Dynatrace agents on all the servers. Some of the EJBs would still need to be instrumented.

That's good to hear. There are three general approaches I can think of to accomplish what you are looking for. Which approach is right for you is heavily dependent on your specific circumstances and the degree of visibility you are able to achieve with the Dynatrace agents.

Approach #1 - The "Full-stack" Solution

This approach assumes that you are able to capture end-to-end distributed traces and that you are able to implement effective failure detection. In this scenario, you could leverage DQL to surface specific transaction examples where the process started but failed to complete as expected. For example, if a completed trace consists of five spans, you could craft a query that looks for traces containing fewer spans than expected. Alternatively, you could surface spans that show the process starting but where no successful completion was recorded.
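As a rough sketch only (not a drop-in query), something along these lines could surface under-populated traces. It assumes span data is queryable via DQL in Grail; the expected span count of 5 comes from the example above, and the field names may differ in your environment:

// Sketch: flag traces that contain fewer spans than a complete flow should produce.
// The threshold (5) and grouping field (trace.id) are assumptions to adapt.
fetch spans
| summarize spanCount = count(), by: { trace.id }
| filter spanCount < 5
| sort spanCount asc
| limit 100

A variant of this could filter for the span that marks the start of the process and then check for the absence of the span that marks successful completion.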

Approach #2 - Comparative analysis of partial traces or service metrics

This approach comes into play when it is not possible to capture complete end-to-end traces for the entire process, but different pieces of the process are observable separately. In such a scenario, DQL logic can be used to separately count how many transactions started (using either trace or metric data) and how many completed successfully, and then compare the two to compute a completion ratio. This approach is likely to be less precise, because you might miss specific transactions if part of their execution falls outside of your query timeframe. Rather, you would get a general idea of how the ratio of starts to completions behaves over time, which then opens up possibilities for further analysis and/or anomaly detection.
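A hedged sketch of what that comparison might look like in DQL, where the span.name values are made-up placeholders for whichever start and completion steps you can actually observe:

// Sketch: count starts and completions separately and derive a ratio.
// "CreateEntitlementPushdown" / "CompleteEntitlementPushdown" are placeholders;
// substitute the span or service names visible in your own traces.
fetch spans
| summarize
    started   = countIf(span.name == "CreateEntitlementPushdown"),
    completed = countIf(span.name == "CompleteEntitlementPushdown")
| fieldsAdd completionRatio = toDouble(completed) / toDouble(started)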

Approach #3 - Comparative analysis of log data

This is basically the same as #2 above, but using log-based methods to quantify how many transactions started and how many completed. It has similar limitations to #2 around precision, but it does not require full-stack visibility, which could be a positive if there are cost considerations or technical barriers to full-stack instrumentation.
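A log-based version of the same idea might look roughly like this; the matched phrases are assumptions about what your applications log, and if the transaction ID from the message header is written to the logs it could also be parsed out for per-transaction matching:

// Sketch: log-based start vs. completion counts.
// The search phrases are placeholders for whatever your services actually log.
fetch logs
| filter matchesPhrase(content, "entitlement pushdown")
| summarize
    started   = countIf(matchesPhrase(content, "transaction created")),
    completed = countIf(matchesPhrase(content, "transaction completed"))
| fieldsAdd completionRatio = toDouble(completed) / toDouble(started)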

I hope this helps and would love to hear how things progress.
