AI
Everything around AI: AI observability, agentic AI, LLMs, MCP servers, and more

MCP Server Challenge entry #1: Powerful combination of Dynatrace MCP, dtctl, and APIs, see the coolest use cases!

danaharrison1
Contributor

This post is an answer to the MCP Server Challenge 🤖


We've been making enormous use of the Dynatrace MCP, dtctl, and APIs for a while now - here are a few of our coolest use cases so far.

If you caught our chat at Dynatrace Perform, you will have seen how we're connecting Slack to Dynatrace with our agent, IRIS. Using the Dynatrace API alongside a combination of our own MCP server and the official Dynatrace one, we can now pull incredible amounts of data into Slack with a simple query. It lets us work in one spot with multiple tools and platforms - not just in the observability space - and simplifies how our users interact with discrete systems.

  • mention @iris in Slack with your query
  • kicks off an n8n workflow with connection to our AI engine, parsing the query and routing to tool calls
  • tool calls Dynatrace, Google Cloud, our API gateway, change systems, and more
  • final data is gathered and response is given in Slack
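The routing step above can be sketched roughly like this - a minimal, keyword-based stand-in (the real workflow lets the AI engine decide; every name and pattern here is illustrative, not IRIS internals):

```javascript
// Hypothetical sketch of the query-routing step. Tool names and
// match patterns are made up for illustration only.
const TOOL_ROUTES = [
  { match: /trace|span|problem|service/i, tool: "dynatrace" },
  { match: /gke|gcs|cloud run/i, tool: "google-cloud" },
  { match: /change|deploy/i, tool: "change-system" },
];

function routeQuery(query) {
  // Collect every tool whose pattern matches the Slack query;
  // fall back to a generic search when nothing matches.
  const tools = TOOL_ROUTES.filter((r) => r.match.test(query)).map(
    (r) => r.tool
  );
  return tools.length ? tools : ["fallback-search"];
}
```

In the real setup this decision is made by the AI engine inside the n8n workflow, which then fans out to the matched tool calls and assembles the final Slack response.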

Since then, we've been having a great deal of fun with Claude Code and dtctl - it's been a very powerful combination. Our first major breakthrough came a few weeks ago, solving a problem we'd been noticing in our Kubernetes environment. So we posed this to Claude: why, if Dynatrace already has context of Kubernetes services, does it show raw IPs in trace context when the application pod back-ends are unavailable? After 45 minutes and minimal additional prompting, we had a solution built out, which now consists of:

  • a Dynatrace workflow which runs every six hours to enrich a data table with all current KUBERNETES_SERVICE entity names and IPs, and
  • an OpenPipeline setup which takes all traces going through Kubernetes and - if the IP in a particular span matches one of the KUBERNETES_SERVICE IPs from the workflow - enriches the span with the service metadata
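Conceptually, the pair above boils down to an IP-to-service-name map built by the workflow and consulted per span. A minimal JavaScript sketch - field names like `ips` and `peerIp` are assumptions for illustration, not the actual Dynatrace entity or span schema:

```javascript
// Illustrative sketch: build an IP -> service-name map from
// KUBERNETES_SERVICE entity records (field names are assumed).
function buildIpMap(entities) {
  const map = {};
  for (const e of entities) {
    for (const ip of e.ips || []) map[ip] = e.name;
  }
  return map;
}

// If a span's peer address matches a known service IP, attach the
// Kubernetes service name instead of leaving a raw, unresolved IP.
function enrichSpan(span, ipMap) {
  const name = ipMap[span.peerIp];
  return name ? { ...span, k8sService: name } : span;
}
```

In the real solution the map lives inside OpenPipeline DQL processors rather than application code, but the matching logic is the same idea.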

So now, anytime a back-end is unavailable, we no longer see just a random IP labelled "request to unmonitored host" or "request to public networks" in the span - we see the Kubernetes service metadata that was already in Dynatrace but had never before been added to span context in failure cases. For this effort, I wrote precisely zero DQL, workflows, or pipelines myself. Claude and dtctl did it all!

Just last week, I gave Claude another challenge. We have an environment where OneAgent installation isn't possible, but we have OpenTelemetry and Prometheus capabilities. Given this setup, how can I validate the performance between hosts at the network level in particular? After about a minute, it suggested the Prometheus Blackbox Exporter for our use case - and, once I gave it a list of host pairs to test (stored in GitHub, I just copy-pasted our internal repo link and it handled the gh auth stuff) - it built out a set of configs to deploy.
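As a rough sketch of that setup - assuming a Blackbox Exporter runs on (or near) each source host and Prometheus probes the destination of each pair - the host-pair list could be expanded into scrape targets something like this (label names here are illustrative, not our actual config):

```javascript
// Hypothetical sketch: expand [source, destination] host pairs into
// Blackbox-Exporter-style probe targets. The probe_source label is an
// assumed convention for tracking which host the probe ran from.
function blackboxTargets(pairs, module = "icmp") {
  return pairs.map(([src, dst]) => ({
    targets: [dst],
    labels: { probe_source: src, module },
  }));
}
```

The generated objects map naturally onto Prometheus `static_configs` entries, with the `module` label telling the exporter which probe type (e.g. `icmp` or `tcp_connect`) to run.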

The best part? It asked me on its own after that if I wanted a Dynatrace dashboard - and when I responded yes, it built out all the JSON and uploaded it to Dynatrace, based on similar dashboards for this project that we've already made (also with dtctl)!



If you have access to Claude Code, I highly recommend plugging the dtctl skill into it. It's been a total game-changer for us.

Time is an illusion. Lunchtime, doubly so.
7 Replies

andreas_grabner
Dynatrace Guru

Thanks for sharing. I LOVE both use cases but wanted to ask a follow up question on the first one.

Can you share your OpenPipeline configuration? What I'm trying to understand is whether the OpenPipeline configuration pulls the data in via a lookup-table query - or whether your workflow that runs every 6 hours updates the OpenPipeline configuration with those IPs/names?

If you can - can you share the configuration and maybe also a representative sample trace?

THANKS!!

Contact our DevRel team through devrel@dynatrace.com

I can't share it outright, but can elaborate a little more here!

The workflow's only job is to update the OpenPipeline configuration with the KUBERNETES_SERVICE names and IPs, you nailed it. It's a one-step workflow (minus the cron trigger) with a JS code step to grab the IPs, parse everything, and update OpenPipeline.

The real trick here is that OpenPipeline only supports so many steps per DQL processor - so the workflow also chunks them into separate processors, dividing the list evenly.
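The chunking itself is the easy part - a minimal sketch of the idea, assuming a fixed processor count (the actual OpenPipeline limits and our real code step differ):

```javascript
// Illustrative sketch: distribute service records evenly across a
// fixed number of OpenPipeline processors via round-robin assignment.
function chunkEvenly(items, numChunks) {
  const chunks = Array.from({ length: numChunks }, () => []);
  items.forEach((item, i) => chunks[i % numChunks].push(item));
  return chunks;
}
```

Each resulting chunk would then be written into its own DQL processor by the workflow's JS code step.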

Time is an illusion. Lunchtime, doubly so.

@andreas_grabner Unfortunately, to my knowledge, OpenPipeline cannot perform a query against a static lookup and enrich new data coming in. This is something we'd love to see, and I know @m3tomlins and @StrangerThing are interested as well. It would remove the need for a workflow and allow live data enrichment from a lookup. See Static lookup for OpenPipeline - Dynatrace Community

Subway - Sr. Mgr, Enterprise Operations & Observability Engineering

Hi.

Without making any timeline promises: the team is actively working on extending OpenPipeline with two variations of a lookup processor. Seems you are already in contact with the right folks in the other product idea posting 🙂

Contact our DevRel team through devrel@dynatrace.com

andreas_grabner
Dynatrace Guru

And one more question on the Prometheus Blackbox Exporter. Would that use case also be possible with Dynatrace Synthetics? Or is Synthetics missing some of the listed protocols or features, or is it not as easy to roll out as the Blackbox Exporter? Just curious to learn whether Synthetics could also be used here or if we are missing features.

Contact our DevRel team through devrel@dynatrace.com

It is possible with Synthetics, but our challenge was that in the network zone of our source machines, we don’t currently have the ability to place an ActiveGate to run them (or to punch in from an AG to execute commands in the source hosts).

Blackbox Exporter ended up playing nicely in the OneAgent-less ecosystem of these hosts, where it’s easy for us to gather data with other Prometheus exporters and forward to an OTel Collector, and then on to an ActiveGate through the firewall.

Time is an illusion. Lunchtime, doubly so.

StrangerThing
DynaMight Pro

@MikeDouglas yea I think what @danaharrison1 is talking about is building their own switch/case statement inside of the processing step in OP. I know I'm doing this as well in our span pipeline and it's very painful to manage. I agree that having the native DQL lookup function in OP for the static lookup tables would be a much easier way to accomplish this.

Observability Engineer at FreedomPay
