AI
Everything around AI: AI observability, agentic AI, LLMs, MCP servers, and more

Dynatrace MCP Server and N8N, let's discuss!

MaximilianoML
Champion

Hi everyone!

I’ve been diving into the recent release of the Dynatrace MCP (Model Context Protocol) Server and its potential when paired with n8n. Given that MCP is becoming the "universal connector" for AI agents, the synergy between Dynatrace’s observability data and n8n’s workflow automation is something I believe we should be talking more about.

For those who haven't tried it yet, the Dynatrace MCP Server allows AI models to directly interface with Dynatrace (Grail, Smartscape, etc.) using a standardized protocol. When you bring n8n into the mix, especially with its native AI nodes, you essentially give your automation workflows "eyes and ears" into your full-stack environment.
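For anyone wiring this up, registering the server in an MCP client is usually just a small config entry. The sketch below follows the common `mcpServers` convention used by MCP clients; the package name and environment variable names are based on the dynatrace-oss open-source project and should be verified against its current README (the tenant URL and token are placeholders):

```json
{
  "mcpServers": {
    "dynatrace": {
      "command": "npx",
      "args": ["-y", "@dynatrace-oss/dynatrace-mcp-server@latest"],
      "env": {
        "DT_ENVIRONMENT": "https://<your-tenant>.apps.dynatrace.com",
        "DT_PLATFORM_TOKEN": "<platform-token>"
      }
    }
  }
}
```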

Potential use cases I’m exploring:

  • Self-Healing Workflows: Using n8n to trigger a workflow when a problem is detected, and having an AI Agent use the MCP Server to fetch specific logs or traces to suggest a fix.

  • Natural Language Operations: Building a Slack bot in n8n where I can ask "How is the checkout service performing?" and the AI uses the MCP server to run the DQL and summarize the answer.
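To make the second use case concrete, here is a minimal TypeScript sketch of the "decide which MCP tool to call" step an agent performs for a question like the one above. Everything here is illustrative: the tool names loosely mirror what the Dynatrace MCP server exposes but should be treated as assumptions, and a real n8n AI Agent would let the LLM make this decision rather than a regex.

```typescript
// Illustrative sketch only - not a real n8n or MCP API.
type ToolCall = { tool: string; args: Record<string, string> };

// Decide which MCP tool fits a user's question.
function planToolCall(question: string): ToolCall {
  // A real agent lets the LLM pick the tool; this stub keys off the question.
  if (/perform|latency|slow/i.test(question)) {
    return {
      tool: "execute_dql",
      args: {
        dql: 'fetch spans | filter service.name == "checkout" | summarize avg(duration)',
      },
    };
  }
  return { tool: "list_problems", args: {} };
}
```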

I’d love to hear from the community:

  1. Is anyone already running the Dynatrace MCP Server (via Docker, npx, or something else) in combination with n8n?

  2. How are you handling the connection? Are you using the new n8n AI Agent nodes, a custom bridge, or the MCP Client node?

  3. What are your biggest security concerns or best practices when exposing your Dynatrace environment to an LLM via MCP?

I’m planning to document my setup and share a template soon, but I’d love to gather some insights or challenges you’ve faced first!

Max Lopes
8 REPLIES

tracegazer
Helper

Hey! Great topic. I've been working on exactly this integration.

I'm running the Dynatrace MCP Server connected to Claude Desktop for interactive analysis, and I've built a parallel architecture with n8n for automated workflows. The setup handles multiple Dynatrace tenants through a centralized orchestration layer.

Architecture

  1. Claude Desktop + MCP Server: For ad-hoc analysis, entity inventory reports, and DQL generation. The MCP server handles the heavy lifting of translating natural language to DQL and fetching data from Grail/Smartscape.
  2. n8n + MCP: I'm connecting n8n directly to the MCP server.
  3. Message Orchestration: I built a "Main Assistant Flow" in n8n that receives messages from Telegram/WhatsApp, uses AI (Claude with Gemini fallback) for intent classification, and routes to appropriate handlers - including direct Dynatrace queries via MCP when observability data is needed.
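The routing step in such a "Main Assistant Flow" can be sketched as below. The intent labels and handler names are illustrative, not tracegazer's actual workflow; the classifiers stand in for the Claude-primary / Gemini-fallback model calls.

```typescript
type Intent = "observability" | "smalltalk" | "unknown";
type Classifier = (message: string) => Intent;

// Try classifiers in order (primary model first), falling back on failure.
function classifyWithFallback(message: string, classifiers: Classifier[]): Intent {
  for (const classify of classifiers) {
    try {
      return classify(message);
    } catch {
      // primary model unavailable or over quota: try the next one
    }
  }
  return "unknown";
}

// Route to the MCP-backed handler only when observability data is needed.
function routeIntent(intent: Intent): string {
  return intent === "observability" ? "dynatrace-mcp-handler" : "default-handler";
}
```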

On your use cases:

  • Self-Healing Workflows: Definitely viable. The key is having good entity context from Smartscape.
  • Natural Language Operations: This works well. The MCP server's generate_dql_from_natural_language capability is solid for converting questions into DQL.

Security Considerations

  • Network isolation: Run the MCP server in a controlled environment. I use Docker with explicit network policies.
  • Query budget limits: The MCP server has built-in Grail budget tracking - use it to prevent runaway queries.
  • Audit logging: Everything the LLM queries should be logged for compliance.
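The budget point can be made concrete with a small client-side guard layered on top of the MCP server's built-in Grail budget tracking. This is a hypothetical sketch, not the MCP server's actual API; the unit (GB scanned) is an assumption.

```typescript
// Client-side guard that refuses further queries once a scan budget is spent.
class QueryBudget {
  private usedGB = 0;

  constructor(private readonly limitGB: number) {}

  // Record a query's scanned bytes; returns false once the budget is exhausted.
  charge(scannedBytes: number): boolean {
    this.usedGB += scannedBytes / 1e9;
    return this.usedGB <= this.limitGB;
  }

  get remainingGB(): number {
    return Math.max(0, this.limitGB - this.usedGB);
  }
}
```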

Challenges I've Hit

  • MCP transport layer: The standard MCP setup didn't work out of the box for my n8n integration. I had to configure Supergateway to handle HTTP streamable transport properly - this was key to getting reliable communication between n8n and the MCP server.
  • LLM cost optimization: Running AI queries against observability data can get expensive fast. I've spent time tuning temperature settings and penalties to reduce token usage while maintaining quality responses. Every unnecessary token adds up when you're processing hundreds of alerts.
  • Context management: You can't just dump raw logs or traces into an LLM. I built specific prompts for each use case - problem analysis, entity inventory, performance summaries - each with tailored context windows to avoid bloat and hallucinations.
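The context-management point can be sketched as a per-use-case token budget applied before anything reaches the LLM. The budgets, the rough 4-characters-per-token estimate, and the function names are all illustrative assumptions, not tracegazer's actual prompts.

```typescript
// Illustrative per-use-case context budgets (in tokens).
const CONTEXT_BUDGET: Record<string, number> = {
  problem_analysis: 2000,
  entity_inventory: 1000,
  performance_summary: 1500,
};

// Rough token estimate: ~4 characters per token for English text.
const estimateTokens = (s: string): number => Math.ceil(s.length / 4);

// Keep the newest log lines that fit the use case's token budget.
function trimContext(lines: string[], useCase: string): string[] {
  const budget = CONTEXT_BUDGET[useCase] ?? 1000;
  const kept: string[] = [];
  let used = 0;
  for (const line of [...lines].reverse()) {
    const cost = estimateTokens(line);
    if (used + cost > budget) break;
    kept.unshift(line);
    used += cost;
  }
  return kept;
}
```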

I'm still actively iterating on these workflows and exploring new initiatives to propose to clients - things like automated SLA reporting with natural language summaries, and proactive capacity planning using Davis AI insights piped through n8n.

Happy to share more details. What specific integration challenges are you running into?

 

Logs, Traces, Metrics... and a bit of sanity.

Wow! The challenges you hit are exactly what I discussed with a friend two days ago, especially the LLM costs and the context management. It would be awesome if you could share more details about the "specific prompts for each use case - problem analysis, entity inventory, performance summaries" part 😁

Max Lopes

JustSchwendi
Dynatrace Enthusiast

Nice! n8n was also on my list of things to try for getting started and learning how to leverage the MCP server.
Last weekend I dipped into the MCP waters myself, trying to connect Dynatrace with an AI voice call agent (VAPI.ai) that would give me a call whenever any problem - or a specific one - is created.

As I did not want to share my platform token with the SaaS service, I vibe-coded this MCP proxy (personal project, not an official Dynatrace project: https://github.com/JustSchwendi/Dynatrace-MCP-Proxy) to connect with the Dynatrace remote MCP of the tenant.

It keeps the platform token in a .env file, but it also limits access via CORS - e.g., allowing only calls from the VAPI.ai SaaS service. Browser calls are allowed too, since I've also implemented a playground to live-test a few of the MCP tools.
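The CORS allowlist idea can be sketched as a tiny origin check. The origins and function name are examples, not the proxy's actual code - the real project is at the GitHub link above.

```typescript
// Example allowlist: only the SaaS caller and the local playground may connect.
const ALLOWED_ORIGINS = new Set([
  "https://api.vapi.ai",   // the VAPI.ai SaaS service (illustrative origin)
  "http://localhost:3000", // local browser playground
]);

// Returns the value for the Access-Control-Allow-Origin header, or null to reject.
function corsAllowOrigin(origin: string | undefined): string | null {
  if (!origin) return null; // non-browser callers are handled elsewhere
  return ALLOWED_ORIGINS.has(origin) ? origin : null;
}
```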

A pitch video covering the how and why is on my X account: justSchwendi


Reading tracegazer's reply, the Supergateway mentioned seems to be a ready-to-use solution I wasn't aware of - though on the other hand it would lack the playground and testing capabilities I was also looking for while taking my first steps.

Hello @JustSchwendi !

So nice to see you here 😀

I have to say, that's really good documentation on GitHub! I haven't tested it myself, but I saw your video on X! Congrats on the work - can you imagine operations centers with a kind of "Jarvis" ready to answer anything about your company's observability, like a built-in operator in the room?! 😲 (I'd be scared and surprised)

 

Max Lopes

antoine_buffoto
Dynatrace Helper

Hello all,

Nice to see work being done on the n8n integration. I've also worked on this - not in a customer context, but out of curiosity about AI and AI solutions. I installed n8n on my machine and successfully integrated the Dynatrace remote MCP server. I also developed a Dynatrace App to invoke an n8n workflow (as well as enable and disable n8n workflows) from a Dynatrace workflow. I'm looking to share it internally at Dynatrace if it can help people. Let me know if you want more information.

 

Here is a screenshot of the Dynatrace App

Oh, I see now! You made a workflow in Dynatrace that triggers a workflow in n8n. Can we see the Dynatrace workflow logic?

Max Lopes

Hello @antoine_buffoto !

That's cool! I would love to see the app you developed and how it all works together 😃

Max Lopes
