
Take the MCP Server Challenge! 🤖

Michal_Gebacki
Community Team
Welcome back, Dynatrace Community!
We're excited to launch a new Community Challenge focused on the powerful capabilities of Dynatrace's Remote Model Context Protocol (MCP), Dynatrace Command-Line Tool (DTCTL), and our robust APIs!
 
We want to hear from you – how are you using (or envisioning using) these tools to enhance your observability and automation? Whether you're a seasoned Dynatrace expert or just starting to explore these features, we want to see your creativity and ingenuity. Share your use case for Dynatrace Remote Model Context Protocol (MCP), DTCTL, and/or the API. We're looking for practical, repeatable examples of how these tools are solving real-world problems.
 
How to document your use case? ✍️
1. A Detailed Write-Up: Explain the scenario, the steps you took, and the results you achieved.
2. A Blog Post: Share your expertise with a wider audience!
3. A Short Video: Demonstrate your solution in action!

How to participate? ➡️ Simply share your use case in a new post on the dedicated, brand-new subforum: AI - Dynatrace Community. Cross-post a link to your post here so we can keep track of your submission.
 
What's more, explore the Dynatrace Documentation on the MCP Server.

Challenge duration 🗓
One calendar month (30 days): April 13th to May 12th

Judges & Prizes 🎁
1. Our expert panel of Dynatrace Product Managers will review the submissions and select the top 5 use cases.
2. Each of the 5 winners will receive awesome Dynatrace swag, like shirts, socks, Rubik's Cubes, or other cool goodies!
3. Our chamber of experts consists of: @wolfgang_beer, @wolfgang_heider, @GabrieleHB, and @andreas_grabner
 
Extra Visibility 📣
If you have your company's approval, we'd love to feature your story and use case in a blog post or a webinar!
 
New to Dynatrace? :dynatrace: No problem! If you don’t have access to a Dynatrace tenant, feel free to spin up a free trial tenant to explore these features.
 
Let’s unlock the full potential of Dynatrace together! We can't wait to see what you come up with!
Benefits of taking Community Challenges!
👉 Every participant receives an "MCP Badge"
👉 You will also get +100 bonus points for extra activity
👉 5 selected participants will receive exclusive Dynatrace Swag!
19 REPLIES

AntonioSousa
DynaMight Guru

Interesting! This is a different challenge, and it has swag!

Antonio Sousa

MaximilianoML
Champion

WOW! This is a great Challenge, of course I'm in 😀

Max Lopes

danaharrison1
Contributor

Oh, I came prepared. 😎

https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-see-what-we-re-up-to-at-TELUS/m-p/297628#...

Time is an illusion. Lunchtime, doubly so.

flo_lettner
Dynatrace Participant

We also built an MCP server that allows you to interact with the Dynatrace Playground. You can either install it locally in your Visual Studio Code instance or run it straight out of a GitHub Codespace.

https://github.com/dynatrace-oss/dt-mcp-playground 

maciej_grynda
Dynatrace Helper

I'm slowly working on agentic onboarding of extensions for Terraform users: Challenge - Agentic detection of technologies and extensions onboarding for Terraform users - Dynatr...

tracegazer
Helper

I built an automated observability auditor that uses Claude AI + the Dynatrace MCP server to assess tenant maturity across 15 dimensions (infrastructure, configuration, DEM, operations, security). A single command triggers 7 MCP tools — execute_dql, list_problems, list_vulnerabilities, list_davis_analyzers, get_kubernetes_events, get_environment_info, and chat_with_davis_copilot — to collect data, evaluate findings weighted by blast radius, and generate a scored interactive HTML report with root cause analysis and actionable next steps.
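As a rough, hypothetical illustration (not the author's actual implementation) of what "evaluate findings weighted by blast radius" could mean, a toy per-dimension scoring function might look like the sketch below; the `blast_radius` field name and the weight values are assumptions for the example:

```python
def score_dimension(findings):
    """Toy 0-100 maturity score for one audit dimension: each finding
    subtracts points in proportion to an assumed blast-radius weight."""
    weights = {"low": 1, "medium": 3, "high": 5}  # assumed weighting scheme
    penalty = sum(weights.get(f.get("blast_radius"), 1) for f in findings)
    return max(0, 100 - 10 * penalty)

# Example: one high-impact and one low-impact finding
print(score_dimension([{"blast_radius": "high"}, {"blast_radius": "low"}]))  # 40
```

The real auditor aggregates scores like this across its 15 dimensions before rendering the HTML report; see the full write-up linked below for the actual logic.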

 

Full write-up: MCP Server Challenge: Observability Maturity Auditor

Demo videos: drive.google.com/auditDynatrace

Logs, Traces, Metrics... and a bit of sanity.

dannemca
DynaMight Guru

I feel like a kid among adults in a room here, but here we go: MCP-Server-Challenge - My very first App - A Kubernetes Cluster Performance & Capacity Report  

Site Reliability Engineer @ Kyndryl

RWC
Participant

Mike, I've harmonized the two submissions: "SDF Governance Guard: How We Built a Signal–Defect–Failure Classification Framework, Proved It With 85+ Exam Scores, and Extended It to Govern Remote Model Context Protocol (MCP) Server Interactions" and "SDF Governance Guard — Ready for Federal Scale" into a single document, "SDF Governance Guard — Ready for Federal Scale v2."

Link: https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-7-SDF-Governance-Guard-v2/td-p/2983...

Hello, @RWC 

Thank you for letting us know about the updates to your challenge submission. The post linked above in the AI forum has been updated. If you have anything to add or change, feel free to edit it anytime; your challenge submission is successfully cross-posted in this thread.

Mike,

Thank you.

Mike, we met with Dynatrace on Friday, and I can either update or replace my existing submission (MCP Server Challenge entry #7: SDF Governance Guard v2) or make a completely different submission, "Governance for AI-Driven Operations: An MCP-Powered Framework for Federal and Enterprise Environments." I'd prefer to submit the latest one. Please let me know ASAP which direction works best for the community challenge.

 

Hello,

Your very first submission is safe and sound, don't worry. The topic is there, and the link to it has already been cross-posted above to make it accessible to the challenge's judges! 😊 If there are any updates to make, edit your submission post in the AI subforum whenever you like, and keep them within that post to avoid confusing the judges.

You're already a challenge participant: your post is available to read in the AI subforum, and a link to it is featured here in this topic (which sums up all challenge submissions), so no further action is needed on your side.

I've updated the content of your submission following your guidelines and merged the other topics into one to raise the visibility of your current submission. Of course, if there's anything more to change in your post, just edit it anytime and any way you want; there's still time to review it before the challenge ends on May 12th.

rgarzon1
Champion

My input:

Autonomous SRE Analysis by logs patterns - Dynatrace Community

Fuelled by coffee and curiosity. Searching for a job.

wolfgang_heider
Dynatrace Advisor
Dynatrace Advisor

Hey everyone! A curious PM here 👋
After reading through all submissions, one thing is clear to me: We basically have all the ingredients of an agentic observability platform in use, just nicely spread across the different posts.

  • someone built the automation
  • someone built the analysis
  • someone built the governance (👀 SDF, nicely done)
  • someone built the UX
  • someone is figuring out Day 0 onboarding magic from Terraform

…nobody put it all together yet, though? 😎 Challenge 😅?

What I love:

  • real problems, not toy demos
  • lots of “oh wow, that actually saves hours/days”
  • and a surprising amount of agent thinking already happening without calling it that

If I had to poke a bit (because… PM 😇):

  • what part is “AI vibes” vs actually reliable + repeatable?
  • and how do we turn this from a cool project into something everyone can reuse without rebuilding it?

Overall: this looks less like a challenge… and more like a sneak preview of what we’ll all be (or should be) building in ~12 months.

Thanks a lot so far! :party_blob:

Re: MCP Server Challenge #7 — SDF Governance Guard (Community RFC Proposal)

Randy Chambers

Dynatrace Practice Lead — Discipline Consulting Group LLC

 

Hi Wolfgang,

Your read on the submissions resonates strongly — the community has effectively produced all the core components of an agentic observability platform. What we don’t yet have is the shared architecture that binds these contributions into something reusable, governed, and platform‑grade.

That’s the gap SDF is designed to fill, and based on your feedback, it feels like the right moment to frame it as a community RFC rather than a single submission.

  1. SDF as a Candidate Reference Architecture (RFC‑0001)

SDF proposes a unifying governance and reasoning layer that standardizes how automation, analysis, UX, and onboarding components interact. The goal is not to replace anyone’s work, but to provide the contract that makes all of our components interoperable and repeatable.

  2. LOCATE as the Shared Cognitive Model

By defining a common reasoning protocol — Layer → Origin → Context → Architecture → Trigger → Eliminate — SDF gives every agent the same diagnostic worldview. This is the foundation for explainability, determinism, and cross‑team reuse.
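To make the idea concrete, here is a hypothetical sketch (in Python; the names and handler signatures are my assumptions, not the SDF spec) of LOCATE as an ordered reasoning pipeline where each step sees what earlier steps produced:

```python
# Step names come from the post; everything else is illustrative.
LOCATE_STEPS = ("Layer", "Origin", "Context", "Architecture", "Trigger", "Eliminate")

def run_locate(finding, handlers):
    """Run a finding through each LOCATE step in order, accumulating a shared
    diagnostic state. Each handler receives the state so far, which is what
    makes the resulting diagnosis explainable and deterministic."""
    state = {"finding": finding}
    for step in LOCATE_STEPS:
        state[step.lower()] = handlers[step](state)
    return state

# Trivial demo handlers: each one just records that its step ran.
demo_handlers = {step: (lambda s, step=step: f"{step} analyzed") for step in LOCATE_STEPS}
result = run_locate({"symptom": "latency spike"}, demo_handlers)
print(list(result))  # key order mirrors the protocol's step order
```

The fixed step order is the point: any agent that implements a handler per step inherits the same diagnostic worldview.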

  3. Cross‑Plane Integration as a Community Standard

The RFC proposes a unified flow across:

  • Grail (signals)
  • Smartscape (relationships)
  • Davis AI (causality)
  • AppSec (risk)
  • Workflows (action)

This creates a single causal narrative from telemetry to remediation — something no individual submission can achieve alone, but the community can.

  4. Modular Interfaces for Plug‑and‑Play Contributions

SDF defines reusable interfaces for:

  • ingestion
  • classification
  • governance
  • execution
  • validation

This allows every contributor’s automation, analysis, or UX module to slot into the architecture without re‑engineering. It’s how we turn “cool project” into shared platform capability.
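As a sketch of what such plug-and-play contracts could look like, here are two of the five interfaces expressed as Python `Protocol` definitions; the method names and signatures are illustrative assumptions on my part, not the SDF specification:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Classifier(Protocol):
    """Classification contract: map an event to a Signal/Defect/Failure label."""
    def classify(self, event: dict) -> str: ...

@runtime_checkable
class GovernanceGate(Protocol):
    """Governance contract: decide whether a proposed action may proceed."""
    def approve(self, action: dict) -> bool: ...

# A contributor's module only has to match the contract to slot in:
class SDFClassifier:
    def classify(self, event: dict) -> str:
        return "defect" if event.get("control_failed") else "signal"

clf = SDFClassifier()
print(isinstance(clf, Classifier), clf.classify({"control_failed": True}))
```

Structural typing keeps the coupling loose: a module conforms by shape alone, so existing automation, analysis, or UX code can be adapted without inheriting from a shared base class.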

The Proposal: If the community is open to it, I’d like to formalize SDF as RFC‑0001: Agentic Observability Governance & Integration Framework — a starting point for a shared reference architecture that everyone can extend.

Your comment — “nobody put it all together yet” — is exactly the catalyst for this. The ingredients exist. The community is ready. An RFC gives us the structure to assemble it together.

Thanks for the push — it feels like the beginning of something bigger than a challenge.

 

Wolfgang, I submitted an updated version of the SDF Governance Guard titled "SDF Governance Guard — Ready for Federal Scale v2." It's posted at https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-7-SDF-Governance-Guard-v2/td-p/2983.... The update leans into NIST IR 8011, which defines an automated security assessment methodology built on defect checks — systematic evaluations that determine whether a security control is operating as intended — for federal customers and markets.
