13 Apr 2026 04:25 PM - last edited on 23 Apr 2026 08:15 AM by Michal_Gebacki
This post is an answer to the MCP Server Challenge 🤖
We've been making enormous use of the Dynatrace MCP, dtctl, and APIs for a while now - here are a few of our coolest use cases so far.
If you caught our chat at Dynatrace Perform, you will have seen how we're connecting Slack to Dynatrace with our agent, IRIS. Using both the Dynatrace API and a combination of our own MCP and the official Dynatrace one, we can now gather incredible amounts of data into Slack via a simple query. It allows us to work in one spot with multiple tools and platforms - not just in the observability space - and simplifies how our users interact with discrete systems.
Since then, we've been having a great deal of fun with Claude Code and dtctl - it's been a very powerful combination. Our first major breakthrough came a few weeks ago, and it solved a problem we'd been noticing in our Kubernetes environment. So we posed this to Claude: if Dynatrace already has context of Kubernetes services, why does it show raw IPs in trace context when the application pod back-ends are unavailable? After 45 minutes and minimal additional prompting, we had a solution built out: a scheduled workflow that keeps an OpenPipeline configuration up to date with Kubernetes service names and IPs, so that failing spans get enriched with that metadata.
So now, whenever a back-end is unavailable, we no longer see just a random IP labelled "request to unmonitored host" or "request to public networks" in the span - instead, we see the Kubernetes service metadata that was already in Dynatrace but had never before been added to span context in failure cases. For this effort, I wrote precisely zero DQL, workflows, or pipelines myself. Claude and dtctl did it all!
Just last week, I gave Claude another challenge. We have an environment where OneAgent installation isn't possible, but we do have OpenTelemetry and Prometheus capabilities. Given this setup, how can I validate performance between hosts, particularly at the network level? After about a minute, it suggested the Prometheus Blackbox Exporter for our use case - and once I gave it a list of host pairs to test (stored in GitHub; I just copy-pasted our internal repo link and it handled the gh auth), it built out a set of configs to deploy.
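For anyone unfamiliar with the Blackbox Exporter pattern, here's a minimal sketch of the kind of config it generates - module names, the exporter address/port, and target IPs below are placeholders, not our actual setup:

```yaml
# blackbox.yml - probe modules the exporter exposes (placeholder values)
modules:
  icmp_probe:
    prober: icmp
    timeout: 5s
  tcp_connect:
    prober: tcp
    timeout: 5s
```

```yaml
# prometheus.yml fragment - probe each target through the exporter
scrape_configs:
  - job_name: "blackbox-icmp"
    metrics_path: /probe
    params:
      module: [icmp_probe]
    static_configs:
      - targets:
          - 10.0.1.10   # placeholder host-pair targets
          - 10.0.2.20
    relabel_configs:
      # Standard blackbox relabeling: the scrape target becomes the ?target=
      # parameter, and Prometheus actually scrapes the exporter itself.
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115  # assumed exporter address
```

The resulting `probe_duration_seconds` and `probe_success` metrics are what you'd then forward on (in our case, via an OTel Collector) for host-to-host network validation.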
The best part? It asked me on its own after that if I wanted a Dynatrace dashboard - and when I responded yes, it built out all the JSON and uploaded it to Dynatrace, based on similar dashboards for this project that we've already made (also with dtctl)!
If you have access to Claude Code, I highly recommend plugging the dtctl skill into it. It's been a total game-changer for us.
14 Apr 2026 06:55 AM
Thanks for sharing. I LOVE both use cases but wanted to ask a follow-up question on the first one.
Can you share your OpenPipeline Configuration? What I try to understand is whether the OpenPipeline Configuration pulls out data by doing a Lookup Table query - or - whether your Workflow that runs every 6 hours is updating the OpenPipeline Configuration with those IPs/Names?
If you can - can you share the configuration and maybe also a representative sample trace?
THANKS!!
14 Apr 2026 03:06 PM
I can't share it outright, but can elaborate a little more here!
The workflow's only job is to update the OpenPipeline configuration with the KUBERNETES_SERVICE names and IPs, you nailed it. It's a one-step workflow (minus the cron trigger) with a JS code step to grab the IPs, parse everything, and update OpenPipeline.
The real trick here is that OpenPipeline only supports so many steps per DQL processor - so the workflow also chunks them into separate processors, dividing the list evenly.
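As a rough illustration of that chunking step - this is a hypothetical sketch, not our actual workflow code, and the per-processor limit and data shape are assumptions:

```javascript
// Assumed limit on matchers per OpenPipeline DQL processor - not a
// documented Dynatrace value, just a placeholder for the real cap.
const MAX_MATCHERS_PER_PROCESSOR = 50;

// Split the service->IP list into evenly sized chunks, one chunk per
// DQL processor, so no single processor exceeds the step limit.
function chunkIntoProcessors(services, maxPerProcessor = MAX_MATCHERS_PER_PROCESSOR) {
  // services: [{ name: "checkout", ip: "10.4.2.17" }, ...]
  const processorCount = Math.max(1, Math.ceil(services.length / maxPerProcessor));
  const chunkSize = Math.ceil(services.length / processorCount);
  const processors = [];
  for (let i = 0; i < services.length; i += chunkSize) {
    processors.push(services.slice(i, i + chunkSize));
  }
  return processors;
}
```

Each chunk would then be rendered into its own processor definition before the workflow pushes the updated configuration to OpenPipeline.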
17 Apr 2026 07:55 PM
@andreas_grabner Unfortunately, to my knowledge, OpenPipeline cannot perform a query against a static lookup and enrich incoming data with the result. This is something we'd love to see, and I know @m3tomlins and @StrangerThing are interested in it as well. It would remove the need for a workflow and allow live data enrichment from a lookup. See Static lookup for OpenPipeline - Dynatrace Community
20 Apr 2026 07:39 AM
Hi.
Without making any timeline promises: the team is actively working on extending OpenPipeline with two variations of a lookup processor. It seems you're already in contact with the right folks in the other product idea posting 🙂
14 Apr 2026 06:58 AM
And one more question on the Prometheus Blackbox Exporter: would that use case also be possible with Dynatrace Synthetics? Or is Synthetics missing some of the listed protocols or features, or is it simply not as easy to roll out as the Blackbox Exporter? Just curious whether Synthetics could also be used here or if we're missing features.
14 Apr 2026 01:23 PM
It is possible with Synthetics, but our challenge was that in the network zone of our source machines, we don’t currently have the ability to place an ActiveGate to run them (or to punch in from an AG to execute commands in the source hosts).
Blackbox Exporter ended up playing nicely in the OneAgent-less ecosystem of these hosts, where it’s easy for us to gather data with other Prometheus exporters and forward to an OTel Collector, and then on to an ActiveGate through the firewall.
17 Apr 2026 08:07 PM
@MikeDouglas yeah, I think what @danaharrison1 is describing is building your own switch/case statement inside the processing step in OP. I'm doing this as well in our span pipeline and it's very painful to manage. I agree that having a native DQL lookup function in OP for static lookup tables would be a much easier way to accomplish this.
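For readers who haven't seen this pattern: the hand-rolled "switch/case" is typically nested `if()` calls in the processor's DQL. This is only a hypothetical sketch - the field names, IPs, and service names are placeholders, not anyone's actual pipeline:

```
// Hypothetical OpenPipeline processor DQL: map a peer IP to a
// Kubernetes service name via nested if() calls (placeholder values)
fieldsAdd peer.service.name = if(peer.ip == "10.4.2.17", "checkout",
    else: if(peer.ip == "10.4.2.18", "payments",
    else: "unknown"))
```

Every new service means editing this by hand (or regenerating it from a workflow), which is exactly why a native lookup processor would be so welcome.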