13 Apr 2026 09:15 AM - edited 23 Apr 2026 08:14 AM
13 Apr 2026 10:20 AM
Interesting! This is a different challenge, and it has swag!
13 Apr 2026 10:22 AM
WOW! This is a great Challenge, of course I'm in 😀
13 Apr 2026 04:25 PM
Oh, I came prepared. 😎
https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-see-what-we-re-up-to-at-TELUS/m-p/297628#...
14 Apr 2026 09:26 AM
We also built an MCP server that allows you to interact with the Dynatrace Playground. You can either install it locally using your Visual Studio Code instance, or run it straight out of the GitHub Codespace.
https://github.com/dynatrace-oss/dt-mcp-playground
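For the local route, recent VS Code versions read MCP server definitions from a `.vscode/mcp.json` file. A minimal sketch of what such a config could look like — the server name and launch command below are assumptions for illustration; check the repository's README for the actual values:

```json
{
  "servers": {
    "dt-mcp-playground": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "dt-mcp-playground"]
    }
  }
}
```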
14 Apr 2026 01:19 PM
I'm slowly working on agentic onboarding of extensions for Terraform users: Challenge - Agentic detection of technologies and extensions onboarding for Terraform users - Dynatr...
21 Apr 2026 08:31 PM
Here is one of our use cases: MCP Server Challenge: Automation of the group creation in Dynatrace, using Dynatrace APIs - Dynatrac...
22 Apr 2026 09:43 PM
I built an automated observability auditor that uses Claude AI + the Dynatrace MCP server to assess tenant maturity across 15 dimensions (infrastructure, configuration, DEM, operations, security). A single command triggers 7 MCP tools — execute_dql, list_problems, list_vulnerabilities, list_davis_analyzers, get_kubernetes_events, get_environment_info, and chat_with_davis_copilot — to collect data, evaluate findings weighted by blast radius, and generate a scored interactive HTML report with root cause analysis and actionable next steps.
Full write-up: MCP Server Challenge: Observability Maturity Auditor
Demo videos: drive.google.com/auditDynatrace
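The post above describes the flow at a high level; here is a minimal Python sketch of the orchestration idea. The seven MCP tool names are taken from the post, but the `call_tool` client, the dimension weights, and the scoring formula are illustrative assumptions, not the actual implementation:

```python
# MCP tool names from the post; everything else is a hypothetical sketch.
TOOLS = [
    "execute_dql", "list_problems", "list_vulnerabilities",
    "list_davis_analyzers", "get_kubernetes_events",
    "get_environment_info", "chat_with_davis_copilot",
]

# Hypothetical blast-radius weights for a few of the 15 dimensions.
WEIGHTS = {"infrastructure": 0.30, "configuration": 0.20,
           "dem": 0.15, "operations": 0.20, "security": 0.15}

def collect(call_tool):
    """Trigger every MCP tool once; call_tool stands in for an MCP client."""
    return {name: call_tool(name) for name in TOOLS}

def maturity_score(dimension_findings):
    """Penalize each dimension's finding count by its blast-radius weight.

    100.0 means no findings at all; each weighted finding costs 10 points.
    """
    penalty = sum(WEIGHTS[dim] * len(findings)
                  for dim, findings in dimension_findings.items())
    return max(0.0, 100.0 - 10.0 * penalty)
```

For example, one infrastructure finding and a clean security dimension would score `maturity_score({"infrastructure": ["oom-kills"], "security": []})` as 97.0 under these made-up weights.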
24 Apr 2026 02:21 PM - edited 24 Apr 2026 02:21 PM
I feel like a kid among adults in a room here, but here we go: MCP-Server-Challenge - My very first App - A Kubernetes Cluster Performance & Capacity Report
25 Apr 2026 04:02 PM - last edited on 30 Apr 2026 12:31 PM by Michal_Gebacki
Mike, I've harmonized the two submissions: "SDF Governance Guard: How We Built a Signal–Defect–Failure Classification Framework, Proved It With 85+ Exam Scores, and Extended It to Govern Remote Model Context Protocol (MCP) Server Interactions" and "SDF Governance Guard — Ready for Federal Scale" into a single document, "SDF Governance Guard — Ready for Federal Scale v2."
30 Apr 2026 12:31 PM - edited 30 Apr 2026 12:31 PM
Hello, @RWC
Thank you for letting us know about the updates to your challenge submission. The post linked above in the AI Forum has been updated. If you have anything to add or change, feel free to edit it at any time; your challenge submission is successfully cross-posted in this thread.
30 Apr 2026 05:38 PM
Mike,
Thank you.
01 May 2026 01:25 AM
Mike,
There are two Challenge submission entries being displayed:
MCP Server Challenge entry #7: SDF Governance Guard v2 - Dynatrace Community
and
MCP Server Challenge SDF Governance Guard — Ready for Federal Scale v2 - Dynatrace Community
Please delete the entry that isn't number 7, to eliminate any confusion for the judges:
MCP Server Challenge SDF Governance Guard — Ready for Federal Scale v2 - Dynatrace Community
Thank you.
RWC
04 May 2026 03:03 AM
Mike, we met with Dynatrace on Friday, and I can either update or replace my existing submission (MCP Server Challenge entry #7: SDF Governance Guard v2) or make a completely different submission, "Governance for AI-Driven Operations: An MCP-Powered Framework for Federal and Enterprise Environments." I'd prefer to submit the latter. Please let me know ASAP which direction works best for the community challenge.
04 May 2026 08:08 AM
Hello,
your very first submission is safe and sound, don't worry. The topic is here, and the link to it is already cross-posted above to make it accessible to the challenge's judges! 😊 If there are any updates to make, edit your submission post in the AI subforum anytime you like rather than this post, to avoid confusing the judges.
You're already a challenge participant, your post is available to read in the AI subforum, and the link to it is featured here in this topic (which sums up all challenge submissions), so no further action is needed on your side.
I've updated the content of your submission following your guidelines and merged the other topics into one to raise the visibility of your current submission. Of course, if there's anything more to change in your post, just edit it any way you want; there's still time to review it before the challenge ends on May 12th.
27 Apr 2026 10:20 PM
My input: Autonomous SRE Analysis by logs patterns - Dynatrace Community
29 Apr 2026 06:39 AM
Hey everyone! A curious PM here 👋
After reading through all submissions, one thing is clear to me: We basically have all the ingredients of an agentic observability platform in use, just nicely spread across the different posts.
…nobody put it all together yet, though? 😎 Challenge 😅?
What I love:
If I had to poke a bit (because… PM 😇):
Overall: this looks less like a challenge… and more like a sneak preview of what we’ll all (should) be building in ~12 months
Thanks a lot so far!
29 Apr 2026 03:02 PM
Re: MCP Server Challenge #7 — SDF Governance Guard (Community RFC Proposal)
Randy Chambers
Dynatrace Practice Lead — Discipline Consulting Group LLC
Hi Wolfgang,
Your read on the submissions resonates strongly — the community has effectively produced all the core components of an agentic observability platform. What we don’t yet have is the shared architecture that binds these contributions into something reusable, governed, and platform‑grade.
That’s the gap SDF is designed to fill, and based on your feedback, it feels like the right moment to frame it as a community RFC rather than a single submission.
SDF proposes a unifying governance and reasoning layer that standardizes how automation, analysis, UX, and onboarding components interact. The goal is not to replace anyone’s work, but to provide the contract that makes all of our components interoperable and repeatable.
By defining a common reasoning protocol — Layer → Origin → Context → Architecture → Trigger → Eliminate — SDF gives every agent the same diagnostic worldview. This is the foundation for explainability, determinism, and cross‑team reuse.
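As a rough illustration of that common diagnostic worldview, the six stages could be expressed as an ordered contract that every agent walks in the same sequence. The stage names follow the post; the `Diagnosis` class itself is a hypothetical sketch, not part of the RFC:

```python
from dataclasses import dataclass, field

# The six SDF reasoning stages, in the order the post defines them.
STAGES = ["Layer", "Origin", "Context", "Architecture", "Trigger", "Eliminate"]

@dataclass
class Diagnosis:
    """One agent's walk through the shared reasoning protocol."""
    notes: dict = field(default_factory=dict)

    def record(self, stage, finding):
        # Enforce the common order so every agent's trace is comparable.
        expected = STAGES[len(self.notes)]
        if stage != expected:
            raise ValueError(f"expected stage {expected!r}, got {stage!r}")
        self.notes[stage] = finding
        return self

    def complete(self):
        # A diagnosis is explainable end to end only once all six stages exist.
        return list(self.notes) == STAGES
```

Because the order is enforced, two agents diagnosing the same incident always produce traces that line up stage by stage, which is what makes cross-team comparison and determinism tractable.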
The RFC proposes a unified flow across:
This creates a single causal narrative from telemetry to remediation — something no individual submission can achieve alone, but the community can.
SDF defines reusable interfaces for:
This allows every contributor’s automation, analysis, or UX module to slot into the architecture without re‑engineering. It’s how we turn “cool project” into shared platform capability.
The Proposal: If the community is open to it, I’d like to formalize SDF as RFC‑0001: Agentic Observability Governance & Integration Framework — a starting point for a shared reference architecture that everyone can extend.
Your comment — “nobody put it all together yet” — is exactly the catalyst for this. The ingredients exist. The community is ready. An RFC gives us the structure to assemble it together.
Thanks for the push — it feels like the beginning of something bigger than a challenge.
30 Apr 2026 05:47 PM
Wolfgang, I submitted an updated version of the SDF Governance Guard titled "SDF Governance Guard — Ready for Federal Scale v2." It's posted at https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-7-SDF-Governance-Guard-v2/td-p/2983.... The update leans into NIST IR 8011, which defines an automated security assessment methodology built on defect checks: systematic evaluations that determine whether a security control is operating as intended, for federal customers and markets.