Take the Tracing Fan Challenge! 💚

Michal_Gebacki
Community Team

The "Tracer" badgeThe "Tracer" badge

On Valentine's Day one year ago, we asked you to share the Dynatrace features you love, and we received so much amazing feedback in response! This time, we're returning to explore Dynatrace user stories from a slightly different perspective! ❤️

Share your unique use cases for Dynatrace's Trace Intelligence tools, such as TraceView, ResponseTimeAnalysis, MethodHotspots, MemoryAllocations, ServiceFlow, and FailureAnalysis. What do you like the most? Did it help you find a critical bug in your software? Did it give you information about something critical? We'd love to hear it! :dynatrace:

And this month, we've joined forces with @josef_schiessl and his team to prepare something special for you! Every participant will receive the usual 100 bonus points and a unique "Tracer" forum badge, but, additionally, the top three use cases (shared before March 10th), as chosen by our development team, will receive 1 x $75 and 2 x $50 gift codes to use in the Dynatrace Swag Store!

Looking forward to your great use cases!

40 REPLIES

josef_schiessl
Dynatrace Helper

Hello everyone! :fist_bump:
As the initiator of this challenge, I actually want to contribute and kick things off myself 🙂


A cool thing about developing our tracing tools is that we can actually use them ourselves!
One of my favourite tools is the memory analysis (here's a recent, very interesting blog post about it).

Whenever we touch any code that is highly critical from a performance standpoint, we always make sure to compare the before and after using memory analysis.
We use the "method" filter and put in the class name or package where the change happened, so we analyze only the memory consumption of our change. Tip: the "method" filter can filter for packages, class names, and even methods.
Sometimes very small changes or adaptations in the code can lead to lots of allocated heap and performance loss. On the other hand, sometimes refactorings have no impact on heap at all, because you never know what the JVM does behind the scenes when it executes the code.

It's awesome to compare the raw numbers, but also to see how the flamegraph changes.

mem_prof.png
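To make the "small change, big heap impact" point concrete, here is a minimal, hypothetical Java sketch (not from our codebase, just an illustration): comparing these two variants before and after such a change in the memory allocation view, filtered on this class via the method filter, would show very different allocation profiles and flamegraphs.

```java
// Illustrative only: a tiny example of "small change, big allocation difference".
public class AllocationExample {

    // Variant A: concatenation in a loop allocates a new String (and builder) per iteration,
    // which shows up as extra heap allocations in a memory profile.
    static String joinWithConcat(java.util.List<String> parts) {
        String result = "";
        for (String p : parts) {
            result += p + ",";
        }
        return result;
    }

    // Variant B: a single StringBuilder keeps allocations roughly constant.
    static String joinWithBuilder(java.util.List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        java.util.List<String> parts = java.util.Collections.nCopies(10_000, "x");
        System.out.println(joinWithConcat(parts).length());
        System.out.println(joinWithBuilder(parts).length());
    }
}
```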


Looking forward to hearing about your use cases / tips and tricks!

Cheers,
Josef

Thanks for sharing this. I think this is another one for the "Tips & Tricks" category. If you are up for it, let's record a session on how to properly use the Memory Profiler!

Contact our DevRel team through devrel@dynatrace.com

Happy to see this challenge result in another awesome collaboration!

 

Sharing the final product of it with the Community: 

The only constant is change. Finding ways for great things to happen!

dannemca
DynaMight Guru

In my case, it was a new feature introduced a few weeks ago in Synthetics, where we can now compare, side by side, the execution results of two different locations or statuses, and easily point out their differences.

This feature helped me find the reason for many inconsistent execution results from different monitors against an external application, not managed by the client, where his app is hosted.

Checking the execution results, I could point to a bad IP from the hosting DNS load balancer. We then opened a ticket with the hosting service, which confirmed the IP issues after our investigation and finally fixed their services.

image (2).png

That's pure AIOps, where AI stands for Actionable Insights.

That's really useful data, meant to help and guide troubleshooting.

Site Reliability Engineer @ Kyndryl

Oh man. I wasn't aware of that Synthetics enhancement! We should do a Tips & Tricks session!!

Contact our DevRel team through devrel@dynatrace.com

AntonioSousa
DynaMight Guru

All of the tools are great! There are so many I could mention, but here's just the first that came to mind:

I have a client that has machines where you can pay and get a receipt. Lately, clients were complaining it was getting slower. The traces quickly showed a big 5-second gap in each one! Very rapidly, Method Hotspots revealed the culprit: a sleep(5) left in the code in a recent deployment! Immediate fix and immediate improvement in response time!
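For illustration only, here is a hypothetical sketch of the kind of leftover debug delay that produces exactly this pattern; the class and method names are invented, not the client's code.

```java
// Hypothetical sketch of a leftover debug delay that surfaces as a ~5s hotspot.
public class ReceiptService {

    public String issueReceipt(String paymentId) throws InterruptedException {
        // Debug delay left in by mistake: shows up in Method Hotspots as ~5 seconds
        // spent in Thread.sleep on every single request.
        Thread.sleep(5000);
        return "receipt-for-" + paymentId;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        new ReceiptService().issueReceipt("demo");
        System.out.printf("issueReceipt took %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}
```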

Antonio Sousa

What a story! While it's great to see that Dynatrace can capture this, I would also advocate using static code scanning tools in CI to detect artificial sleeps 🙂
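A minimal sketch of what such a CI guard could look like, assuming a Maven-style src/main/java layout; in practice an existing static-analysis rule would likely be the better choice.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Minimal sketch of a CI guard that fails the build when an artificial sleep is found
// in production code (assumption: sources live under src/main/java).
public class SleepCheck {

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "src/main/java");
        List<Path> offenders;
        try (Stream<Path> files = Files.walk(root)) {
            offenders = files
                    .filter(p -> p.toString().endsWith(".java"))
                    .filter(SleepCheck::containsSleep)
                    .collect(Collectors.toList());
        }
        offenders.forEach(p -> System.err.println("Artificial sleep found in: " + p));
        if (!offenders.isEmpty()) {
            System.exit(1); // break the build so the sleep never reaches production
        }
    }

    private static boolean containsSleep(Path file) {
        try {
            return Files.readString(file).contains("Thread.sleep(");
        } catch (IOException e) {
            return false; // unreadable files are simply skipped in this sketch
        }
    }
}
```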

Contact our DevRel team through devrel@dynatrace.com

ChadTurner
DynaMight Legend

So I have two unique use cases for Dynatrace traces and Service Flows. The first one has been turned into a blog post, so you might have read it: https://www.dynatrace.com/news/blog/staying-ahead-of-your-competition-with-dynatrace-synthetics/ 

Using Synthetics to test your competitors. At a previous employer, we started to synthetically test our direct competitors. We were an insurance company that wrote home, renters, and auto policies. We tested a direct competitor to record and track the customer experience and see whether we provided a faster and better-quality experience/service at login. And if we failed to beat the competitor, we then had that service flow and tons of transaction data surrounding that test, which we could adopt at the organization.

 

Using backtracing via Smartscape to validate application ownership. It saddens me to say this, but many times when we onboarded a new set of hosts, the application owners/requestors had no idea what their front-facing app was, let alone their URL. If a user did have that information, it also tended to be incorrect, as the URL had changed or they were just totally wrong about it. Management came to us and said: we are paying for Dynatrace, YOU need to tell me what their app URL is, because we can't rely on the customer knowing it. That night, when I should have been sleeping, I racked my brain over and over again: how can we work it backwards from the URL to the hosts? And that's when it hit me: use all the metric data that Dynatrace has, all the relationships, and work it back via Smartscape. So I went to the catch-all, My Web App, and started to break out these domains as their own applications. I prefixed the URL with my name so I could have a bunch broken out but still filter on my name: "chad - <URL>". Once it was broken out as its own application, we let it collect user sessions for some time, then we went to Smartscape and looked back to the services level. From there we could see the metadata and tags that were populated from the host down to the processes and services. Not only did this allow us to be proactive and self-reliant, it also gave us 100% certainty that this app belonged to this host/host group/management zone. Management was very pleased, and it strengthened our tooling at the organization, as Dynatrace was a single source of truth, laying out the direct relationships based on actual communication and not something a human drew in Visio.

-Chad

What a solid piece of a complex user journey, sounds like a great source of inspiration!

I remember writing that blog post with you 🙂

Contact our DevRel team through devrel@dynatrace.com

Kenny_Gillette
DynaMight Leader

My story:

Our team was brought in to troubleshoot high CPU on a server.  Dynatrace was immediately blamed for the high CPU.  Don't we always get blamed?  Many people were on the call trying to figure it out, with no luck.

I told the team to let me look.  Within a few minutes I figured out the issue.  What did I do?

I went to the offending process, then to "View detailed CPU breakdown"

Kenny_Gillette_3-1676500578802.png

Next, I looked at the highest CPU usage and went to Method Hotspots

Kenny_Gillette_4-1676500578803.png

 

 

Then I went to the hotspots and found that a logging application was causing the high CPU, not Dynatrace. Go figure.

 

Kenny_Gillette_5-1676500578806.png

 

 

It is fun to show the power of Dynatrace.

Dynatrace Certified Professional

We love to see Dynatrace shine a light on the true root cause! 💪 Thanks for sharing!

The only constant is change. Finding ways for great things to happen!

Yes, I love how Dynatrace proved it wasn't Dynatrace and that it was another app.

Dynatrace Certified Professional

Hey Kenny. Great to see those examples. Well, "great" might be the wrong word, as in 2023 I hadn't expected to see logging as the main performance issue in apps. Just curious: was this due to a bad logging framework, or did somebody turn on debug logging under heavy load?

Contact our DevRel team through devrel@dynatrace.com

Hey, logging misconfig on the app side. 

Dynatrace Certified Professional

Mizső
DynaMight Guru

Hi Folks,

I really love using Dynatrace to dig into traces. My recent story with traces was the rescue of a project (a watermelon project: three weeks before the release it was green, two weeks before it was yellow, one week before it was full red 😉). So one week before the release, the BAs and developers found me because there were huge performance issues (huge response times) in a new greenfield Kubernetes app. I quickly checked a random individual trace between two meetings:

Mizs_0-1676927591393.png

I immediately checked the method hotspots, and in the call hierarchy bcrypt was identified as the source of the huge response time for this single request:

Mizs_1-1676928075180.png

Then I asked the project to generate load on the services, and my first observation proved correct (a Dynatrace observation). On each service I could see code execution performance issues caused by bcrypt during the whole load:

Mizs_2-1676928776727.png
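As a side note for readers: the stack behind this story isn't specified, but the cost profile is easy to reproduce. Here is a small, illustrative JVM sketch (assuming Spring Security's spring-security-crypto on the classpath) showing why running a bcrypt hash on every request can dominate response time under load:

```java
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

// Illustrative sketch: bcrypt's work factor makes every hash deliberately expensive,
// so hashing on each request quickly becomes the dominant code execution hotspot.
public class BcryptCostDemo {
    public static void main(String[] args) {
        for (int strength : new int[]{10, 12, 14}) {   // bcrypt work factor
            BCryptPasswordEncoder encoder = new BCryptPasswordEncoder(strength);
            long start = System.nanoTime();
            encoder.encode("some-api-password");
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("strength=%d -> ~%d ms per hash (paid on every request!)%n",
                    strength, ms);
        }
    }
}
```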

After the developers replaced bcrypt, there were new performance tests. These were quickly checked in Dynatrace again with the compare function:

Mizs_3-1676929562092.png

So the project and the release were saved, because they received the green light in time thanks to Dynatrace (and to me 😉).

Best regards,

Mizső

Dynatrace Community RockStar 2024, Certified Dynatrace Professional

Impressive workaround, thank you @Mizső for so much detail!

A watermelon project? 🍉 Love it! Thank you for taking the time to share it with us in such detail! 

The only constant is change. Finding ways for great things to happen!

Wow. I have seen plugins like this in the past that either consume all the CPU doing tasks that might not be necessary, or that are simply configured wrong and can only handle a small number of concurrent requests. Thanks for sharing. I am curious though: why was this library added in the first place? Was it by accident, or was there a real reason for adding this crypto hashing algorithm?

Contact our DevRel team through devrel@dynatrace.com

Hi Andi,

The devs did not add this plugin by accident. I am not a dev, but as far as I understand, this plugin was used for password hash generation in REST API authentication. They simply replaced the REST API authentication method after Dynatrace identified this guilty plugin. 😉

Mizső

Dynatrace Community RockStar 2024, Certified Dynatrace Professional

Shirai
Advisor

Hi everyone.

I like Dynatrace's Backtrace feature. Requests to the DB often result in response time hotspots. The Backtrace feature allows you to trace back from slow SQL statements all the way to the user actions. Application developers and DB administrators share this information and proceed with troubleshooting.

Shirai_0-1676969676464.png


Toshimasa Shirai.

T.Shirai IIM Corp. Osaka Japan

Great choice to highlight! Big thanks Toshimasa for your reply, it's truly enriching this thread!

PedroDeodato
DynaMight Pro

This one's made for me: I'm a true Tracing Fan! :star_struck:

As far as Observability Pillars go, Tracing is absolutely the best one ❤️

I got to know Tracing long before I knew Dynatrace: from my academic times, when I was doing my Master's Thesis on OpenTelemetry :opentelemetry:

Back then, all of tracing was manual for me: instrument the application code to create a trace, set up context, create a root span, create child spans, end the spans, end the trace, send the tracing info to X, ... 🔨🔨🔨
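For anyone curious what those manual steps look like, here is a rough sketch using the OpenTelemetry Java API (the SDK/exporter setup, i.e. actually sending the data to X, is omitted):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

// Rough sketch of manual tracing with the OpenTelemetry API: create spans,
// manage context, end everything yourself.
public class ManualTracingExample {
    private static final Tracer TRACER =
            GlobalOpenTelemetry.getTracer("manual-tracing-example");

    public static void main(String[] args) {
        Span root = TRACER.spanBuilder("handle-request").startSpan();   // create root span
        try (Scope ignored = root.makeCurrent()) {                      // set up context
            Span child = TRACER.spanBuilder("load-data").startSpan();   // create child span
            try (Scope ignoredChild = child.makeCurrent()) {
                // ... the actual work would go here ...
            } finally {
                child.end();                                            // end child span
            }
        } finally {
            root.end();                                                 // end root span / trace
        }
    }
}
```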

When I first saw Dynatrace, and saw Automatic Tracing with so much detail, with no effort at all... I mean, like magic 😍

I think this generational leap really makes you treasure and appreciate the massive load of work that Dynatrace gets out of your way!

On the other hand, beginning with an open-source and manual approach also gave me the base knowledge to leverage Dynatrace's tracing capabilities to the fullest: it helped me avoid being overwhelmed by so much great information, as I already knew what I was looking for 🔎

 

Speaking of leveraging Dynatrace's tracing capabilities (there are so many!), a funny story I remember was of a client who was complaining about Service A when it invoked Service B, despite Service B showing pretty fast response times.

When we looked at the trace examples, we immediately saw what was going on: indeed, Service B had a fast Server-Side Response Time, but, from Service A's perspective, there was a VERY long Client-Side Response Time!

Unfortunately, I do not have a screenshot of what exactly happened at the time, but I found something similar on another post:

 

1.png

What was great is that no significant analysis was needed to point us in the right direction: we just saw the huge difference between the whole span bar (light blue + dark blue) and the Server-Side Response Time alone (only dark blue), caused by the large Client-Side Response Time (light blue) overhead.

This was only possible, at least in such a simple way, because of tracing: instead of just having the response times from Service A and Service B, we got traces from Service A that showed us when the request was sent out, when it reached Service B, when Service B sent out the response, and when that response reached Service A again!

Knowing this, we immediately suspected a network issue and, after some investigation, we were right: a request from Service A to Service B travelled through a hugely inefficient route instead of going straight to the final destination, which led to a huge network time overhead.

This was why the Client-Side Response Time (Server-Side + Network Time), as seen from Service A, was a lot greater than the Server-Side Response Time reported by Service B itself: Service B handled the request and sent out a response very quickly; it was the travel time between the two that was causing the impact.
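The arithmetic behind this, spelled out with purely made-up numbers:

```java
// Hypothetical numbers, just to spell out the decomposition described above.
public class ClientVsServerTime {
    public static void main(String[] args) {
        long clientSideMs = 1800;  // measured at Service A: request sent -> response received
        long serverSideMs = 120;   // reported by Service B: request received -> response sent
        long networkAndOtherMs = clientSideMs - serverSideMs; // travel time and everything in between
        System.out.printf("Client-side: %d ms, server-side: %d ms, network/other overhead: %d ms%n",
                clientSideMs, serverSideMs, networkAndOtherMs);
    }
}
```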

Think about the amount of data you would have to go through to figure this one out without Tracing! :mind_blown:

Best regards, Pedro Deodato

Great story and a great reminder that a lot goes on from the moment a request leaves one service until it makes it to the other service.

What I have seen in my life of analyzing PurePaths is that, while the network is most likely to blame in the scenario above, it might also be other components that are not part of the trace, e.g. serializing or deserializing the data package (also called marshalling). Or there might be components such as queues between A and B which end up delaying message delivery. This is why it's always important to ask which components you have in a distributed system and how you can monitor them. For the network or other components such as queues, you can look at key metrics such as throughput, packet loss, or queue length.

Keep up the great work and please keep sharing those stories

Contact our DevRel team through devrel@dynatrace.com

Wow, thank you for sharing this great story with us, and also for taking us back to your pre-Dynatrace times! 

The only constant is change. Finding ways for great things to happen!

DanielS
DynaMight Guru

This is the story of a great great project... :clap::dynaspin:

The reality: The client (a global financial institution) needed to improve the experience of its users, since the time to reload a card used for urban transport took approximately 2 minutes, with the wind in its favor and assuming there were no other drawbacks. :facepalm_pickard:

The Drama: Local fintechs were achieving very agile results and, to the detriment of the institution, it was losing market share. Although the reengineering of the application was broader, this was a pain point for many users and was taken as a showcase. 💣

The objective: The goal assigned to us was to reduce this process to no more than 20 seconds, with a user experience measured as an Apdex of 0.98 or higher. 😵

The Question: Can a dinosaur, to avoid extinction, transform into an agile and fast gazelle? :sonic:

Wizard's Bag: We used all available Dynatrace tools at our fingertips: Purepaths, Mobile instrumentation, Hotspots, USQL, Session Replay, geolocation info, synthetics, just to name a few. :dynatrace_logo:

The community: An interdisciplinary team of developers, marketing, testers, UX designers, and users. My role was SME for the Dynatrace tool; it was one of the most beautiful jobs I have done with Dynatrace, and with an excellent group of people. :think_about_it:

Without further spoilers, the following video shows the before and after of a job that would have been impossible without Dynatrace:

All data is from a test environment, run by an end user to see actual reactions. No users were harmed during the filming of this video :oneagent:

The true delight is in the finding out rather than in the knowing.

LOVE the storytelling aspect of your use case @DanielS😍 

The only constant is change. Finding ways for great things to happen!

pahofmann
DynaMight Guru

So many great use cases it's hard to pick a single one 🙂 

 

The first time I used the ServiceFlow (actually the TransactionFlow in AppMon :appmon: back then) at a big customer with thousands of Java agents is stuck in my memory, though. With most transactions hitting upwards of 70 services in a single call, it was no wonder they had issues pinpointing performance problems without a tracing tool.

 

With Dynatrace, I love the continuous memory profiling. It makes analysis so much faster compared to taking a few dumps and comparing them.

Dynatrace Certified Master - Dynatrace Partner - 360Performance.net

jegron
DynaMight Champion

Hi there!

My use case is related to AIX & WebLogic 🙂

Some time ago, a customer detected a recurring anomaly by observing the browser response times and asked me to investigate the root cause:

image002-3.png

 

We identified a huge CPU spike every hour, on the hour:

Img2_blur.png

Ten minutes of recurring full contention in production? No way. Let's deep dive into the CPU profiling:

img3.png img4.png

 

Thanks to it, we saw many locks related to the code cache, and something was clearly going wrong, with its size increasing every hour:

 

img5.png

A long-term view confirmed the anomaly:

img6.png

 

After some research, we found this useful article: https://www.igorkromin.net/index.php/2017/09/25/how-to-fix-weblogic-high-cpu-usage-due-to-a-corrupte...

The CPU spike was reduced a lot after cleaning this DAT file:

img7_blur.png

 

Better, but the spike still remained. I found another awesome article (same author, same quick win):

https://www.igorkromin.net/index.php/2017/10/05/what-does-an-idle-weblogic-server-run-on-the-hour-to...

So we disabled the Diagnostic Archive (which was enabled by default on all WebLogic servers)... annnnd....

img8_blur.png

 

This resulted in an immediate gain in stability and overall user response time:

img9.png

 

Thanks Dynatrace !

Julien

 

Observability Engineer at Phenisys - Dynatrace Professional

Thank you for sharing your amazing use case with the Community, Julien! 😊 And those useful links as well!

The only constant is change. Finding ways for great things to happen!

Love this one. Thanks for sharing.

The true delight is in the finding out rather than in the knowing.

Mohamed_Hamdy
DynaMight Champion

Hello everyone,

This is a use case related to one of our customers in the banking sector while upgrading their core banking system. Although it was tested in UAT and the results had been evaluated with Dynatrace with no issues, after upgrading the system in the production environment there was a performance degradation impacting multiple channels, including the digital channel.

Of course, multiple service problems had been generated on other impacted systems, and to be honest, it was a little difficult to get the exact root cause of the problem out of the box, especially since this was a newly upgraded system hosted on new servers and the automated baselining was still in its learning period. By using the service flow, backtrace, and (PurePath) distributed traces, we found that the application was making extensive calls to the database, which pointed to something wrong with the code. We also found slow response times on the database, not only related to these calls: the response time degradation was on certain tables, and, as suspected, these tables were not indexed.
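To illustrate the kind of "extensive calls to the database" pattern that the service flow makes visible, here is a hypothetical JDBC sketch (table and column names are invented, not the customer's schema); the missing indexes were, of course, fixed on the database side.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

// Hypothetical sketch of an N+1 style database access pattern vs. a batched alternative.
public class AccountLookup {

    // N+1 style: one statement per account id shows up in the service flow as
    // hundreds of tiny database calls per request.
    static void loadOneByOne(Connection conn, List<Long> accountIds) throws SQLException {
        for (Long id : accountIds) {
            try (PreparedStatement ps =
                         conn.prepareStatement("SELECT balance FROM accounts WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { /* ... consume row ... */ }
                }
            }
        }
    }

    // Batched alternative: a single round trip for the whole list.
    static void loadBatched(Connection conn, List<Long> accountIds) throws SQLException {
        String placeholders = String.join(",",
                java.util.Collections.nCopies(accountIds.size(), "?"));
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, balance FROM accounts WHERE id IN (" + placeholders + ")")) {
            for (int i = 0; i < accountIds.size(); i++) {
                ps.setLong(i + 1, accountIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* ... consume rows ... */ }
            }
        }
    }
}
```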

Certified Dynatrace Professional | Certified Dynatrace Services Delivery - Observability & CloudOps | Dynatrace Partner - yourcompass.ca

GregOReilly
Helper
In our recent case study involving Dynatrace, we applied a blend of logs, metrics, and traces to unravel peculiar events that transpired with the new Azure Kubernetes hosting, affecting certain customer applications. The arrangement was like this: from the internet to Azure, then to AKS, going through WAF, Front Door, and so forth.
 
We established that AKS responded appropriately on the inside. The traces and logs obtained from the pods unveiled a few glitches that required remedying, but the slowness that had been reported was not corroborated. Furthermore, the Real User Monitoring (RUM) data also attested to the slow response time of some XHR/API requests, and we were able to identify the critical paths concerned. Nevertheless, the reason why this sluggishness affected some users and not others remained a mystery. Out-of-the-box metrics like volume, health, and latency were unhelpful; there was no hint of any pattern.

 

It was then that we decided to look into Azure Front Door metrics and log gathering to build up a set of metrics with dimensions. Every request included values such as pop, cache.status, http.status, request.uri, and so on. Ultimately, this process enabled us to confirm that the caching on Azure Front Door had caused disparate responses for requests made to the same URLs: a cache HIT took just a few milliseconds to process, but a cache MISS or NO_CACHE consumed up to 30+ seconds. Exploring factors such as origin, rules, and routes further uncovered what was really happening.

GregOReilly_0-1677673242352.png

GregOReilly_1-1677674549298.png

All the errors and caching difficulties have now been almost wholly resolved, or remediation plans are underway. Consolidating all the components together here was instrumental in finding the underlying cause. When searching for elusive answers, Dynatrace is a great tool, as it provides the capacity to rule out certain components using various data readings.

 

We posted a blog on how to do this here

AurelienGravier
DynaMight Champion

Hello members,

 

My turn to talk about an issue encountered during a load testing campaign before the go-live of a new platform related to the checkout system of a retail client.

 

An inflection point was detected at approximately 60 hits/sec (the production target was 115 hits/sec, equivalent in business terms to 60K checkouts/hour):

AurelienGravier_0-1677701317685.png

It was due to CPU and memory contention on the application servers (detected by Davis):

AurelienGravier_2-1677701476116.png

The impact on key transactions' response time was huge:

AurelienGravier_2-1677751949385.png

After analysis, the behaviour of the suspension time before and after the inflection point was suspicious:

AurelienGravier_0-1677751688327.png

And a memory leak was suspected in .NET, because the allocated memory was never released, even when the load test was off:

AurelienGravier_3-1677754832686.png

To go further, a heap memory dump was generated during the next similar test.

The developers used it to identify objects related to the memory leak, and they found an anomaly with objects that caused abnormal GC behaviour once the number of objects created reached a certain volume in the millions.

A few days later, a workaround was implemented by the developers to solve the .NET memory leak.

 

Thanks to Dynatrace, other optimizations on the JVM, load balancing, and API performance were made to reach the goals of this campaign and give confidence for the platform's production launch.

Regards, Aurelien.

Observability consultant - Dynatrace Associate/Pro/Services certified

v-wesam_ahmed
Visitor

One day, all frontends experienced an unexplained slowdown, which we detected through anomaly detection for the user experience. After a long journey of analysis, we discovered a hint: one of the network metrics had increased suddenly, namely RTT, for many uncorrelated servers. Finally, we discovered that huge backup traffic had exceeded the interface bandwidth of one of the firewall security devices.

Michal_Gebacki
Community Team

Today is the last day to submit your answer to the Tracing Fan Challenge! :alarm:

Make sure you've shared your story in the comments to be considered as one of the three laureates granted gift codes to the Dynatrace Swag Store!

Ana_Kuzmenchuk
Community Team

Hello, dear Community friends!

And welcome to the closing ceremony of our February Community challenge 🎉

 

We'd like to start by thanking all participants for their awesome use cases, which provided invaluable insights into how the tracing tools are being used. We didn't expect such detailed and even illustrated replies, but it was the best surprise we could have asked for 😉 And it was nice to see some new faces among the familiar ones, too!


@josef_schiessl and his team have carefully gone through all the use cases submitted for this challenge and chosen their three favorites. Without further ado, join us in congratulating the winners! 🤗

  • @Mizső - not only analysing the issue but also using Dynatrace to verify the fix 💪
  • @ChadTurner - reminding us that Smartscape is only there because of tracing and the topology that is created through it
  • @jegron - using a multitude of tools to fix a problem that seems impossible to diagnose without a tool like Dynatrace :dynaspin:

 

Thank you once again and see you in the comment section of our new Community challenge that is going to be published tomorrow 😉 Cheers! 

The only constant is change. Finding ways for great things to happen!

:clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap::clap:

The true delight is in the finding out rather than in the knowing.

Thank you very much, everyone, for participating! It was very interesting to read all the replies and the different use cases we help you solve. Hopefully our future advancements in our tools can help you out even more!
Cheers, 

Josef

Malaik
Champion

For me, one of the most important things is that we are using the traces to reduce failure rates by ignoring some business failures.

 

For each application, we export all the exceptions encountered and share them with the developers and the business team to state what is a business error and what is not.

So after removing all business exceptions and keeping only the technical ones, the failure rate statistics started to be more accurate (before, the rate was very high and did not reflect the real application availability).

Malaik_0-1679228876127.png

 

Malaik_1-1679228927430.png

 

Sharing Knowledge
