<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: MCP Server Challenge entry #5: Observability Maturity Auditor in AI</title>
    <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298261#M133</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/3364"&gt;@Julius_Loman&lt;/a&gt;. The current solution runs with the help of Claude and Dynatrace’s MCP. However, I have a previous solution that used Dynatrace APIs, which, as I understand, should also work in both SaaS and Managed environments. Let me check if I have it committed and published on Git.&lt;/P&gt;</description>
    <pubDate>Thu, 23 Apr 2026 19:14:23 GMT</pubDate>
    <dc:creator>tracegazer</dc:creator>
    <dc:date>2026-04-23T19:14:23Z</dc:date>
    <item>
      <title>MCP Server Challenge entry #5: Observability Maturity Auditor</title>
      <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298171#M129</link>
      <description>&lt;H2&gt;AI-Powered Tenant Assessment via Dynatrace MCP&lt;/H2&gt;
&lt;P&gt;Hi everyone! I built an automated observability maturity auditor that uses Claude AI + Dynatrace MCP to run a 15-agent audit across infrastructure, configuration, DEM, operations, and security — producing a scored HTML report with root cause analysis and actionable next steps. The entire audit runs from a single command: "audit tenant&amp;nbsp;uhv42169".&lt;/P&gt;
&lt;P&gt;See it in action:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;A title="https://drive.google.com/drive/folders/1sIJEEbbsAnApDpa6wB7Jd8i6cSg20tN9?usp=sharing" href="https://drive.google.com/drive/folders/1sIJEEbbsAnApDpa6wB7Jd8i6cSg20tN9?usp=sharing" target="_self"&gt;drive.google.com/auditDynatrace&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;H2&gt;The Problem&lt;/H2&gt;
&lt;P&gt;As a consultant, I audit Dynatrace tenants regularly for clients across Latin America. Each audit follows a repeatable pattern:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Connect to the tenant&lt;/LI&gt;
&lt;LI&gt;Check 12-15 observability dimensions&lt;/LI&gt;
&lt;LI&gt;Score each dimension by severity&lt;/LI&gt;
&lt;LI&gt;Write an SRE analysis with root causes and recommendations&lt;/LI&gt;
&lt;LI&gt;Generate a professional report for the client&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;This process used to take &lt;STRONG&gt;1-3 days manually&lt;/STRONG&gt;. With the Dynatrace MCP server, I automated it down to &lt;STRONG&gt;~20 minutes&lt;/STRONG&gt;.&lt;/P&gt;
&lt;HR /&gt;
&lt;H2&gt;The Solution: AI-as-Auditor via MCP&lt;/H2&gt;
&lt;P&gt;The key insight is using &lt;STRONG&gt;CLAUDE.md as an executable playbook&lt;/STRONG&gt;. Instead of writing Python code to call APIs, I wrote a markdown file that instructs Claude AI to execute the audit step by step, using the Dynatrace MCP server as its data source.&lt;/P&gt;
&lt;DIV&gt;&lt;STRONG&gt;Architecture&lt;/STRONG&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;SPAN&gt;User:&lt;/SPAN&gt; "audit tenant uhv42169"&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;↓&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Claude AI&lt;/SPAN&gt; reads CLAUDE.md playbook&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;↓&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Claude AI&lt;/SPAN&gt; reads 15 agent definitions (agents/*.md)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;↓&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Dynatrace MCP Server&lt;/SPAN&gt; ← execute_dql, list_problems, list_vulnerabilities, ...&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;↓&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;Claude AI&lt;/SPAN&gt; evaluates findings, calculates scores, writes SRE analysis&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;SPAN&gt;↓&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;HTML Report&lt;/SPAN&gt; with scores, findings, root cause, recommendations&lt;/DIV&gt;
&lt;H3&gt;MCP Tools Used (7 out of 20 available)&lt;/H3&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;MCP Tool&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Used By Agents&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Purpose&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;get_environment_info&lt;/TD&gt;
&lt;TD&gt;Setup&lt;/TD&gt;
&lt;TD&gt;Verify connectivity, get tenant ID&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;TD&gt;01-09, 12, 15&lt;/TD&gt;
&lt;TD&gt;Query entities, tags, management zones, services&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;list_problems&lt;/TD&gt;
&lt;TD&gt;10, 11, 13&lt;/TD&gt;
&lt;TD&gt;Problem history, MTTR, noise analysis, custom alerts&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;list_davis_analyzers&lt;/TD&gt;
&lt;TD&gt;10&lt;/TD&gt;
&lt;TD&gt;Verify Davis AI capabilities&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;list_vulnerabilities&lt;/TD&gt;
&lt;TD&gt;14&lt;/TD&gt;
&lt;TD&gt;Security posture assessment&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;get_kubernetes_events&lt;/TD&gt;
&lt;TD&gt;15&lt;/TD&gt;
&lt;TD&gt;K8s cluster health and event analysis&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;chat_with_davis_copilot&lt;/TD&gt;
&lt;TD&gt;Exploration&lt;/TD&gt;
&lt;TD&gt;Settings discovery (dt.setting workaround)&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;HR /&gt;
&lt;H2&gt;The 15 Audit Agents&lt;/H2&gt;
&lt;P&gt;Each agent is a markdown file defining: DQL queries or MCP tool calls, checks with PASS/WARN/FAIL/INFO criteria, blast radius (CRITICAL/HIGH/MEDIUM/LOW), remediation text, and analysis guidelines with root cause/recommendations/next steps.&lt;/P&gt;
&lt;DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Infrastructure&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN&gt;01. OneAgent &amp;amp; ActiveGate&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;02. Host Groups&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;15. Kubernetes Health&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Configuration&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN&gt;03. Management Zones&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;04. Auto Tags&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;05. Manual Tags&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;06. Ownership&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;07. Security Context&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;10. Anomaly Detection&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;11. Problem Notifications&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;12. SLOs&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;DEM&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN&gt;08. Real User Monitoring&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;09. Synthetic Monitors&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Operations&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN&gt;13. Problem History&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;STRONG&gt;Security&lt;/STRONG&gt;&lt;BR /&gt;&lt;SPAN&gt;14. Vulnerabilities&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;HR /&gt;
&lt;H2&gt;Scoring System&lt;/H2&gt;
&lt;P&gt;Each finding is weighted by &lt;STRONG&gt;blast radius&lt;/STRONG&gt;:&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;SPAN&gt;CRITICAL&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;Weight 4.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;SPAN&gt;HIGH&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;Weight 3.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;SPAN&gt;MEDIUM&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;Weight 2.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;SPAN&gt;LOW&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;Weight 1.0&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;Status scoring: &lt;STRONG&gt;PASS&lt;/STRONG&gt; = 100% of weight, &lt;STRONG&gt;WARN&lt;/STRONG&gt; = 50%, &lt;STRONG&gt;FAIL&lt;/STRONG&gt; = 0%, &lt;STRONG&gt;INFO&lt;/STRONG&gt; = excluded.&lt;BR /&gt;Agent score = (earned_weight / total_weight) × 100. Global score = average of all agents with data.&lt;/P&gt;
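&lt;P&gt;A minimal sketch of this scoring logic in Python (my own illustration, not code from the project; in the actual playbook, Claude performs this arithmetic directly):&lt;/P&gt;

```python
# Sketch of the blast-radius weighted scoring described above (illustrative only).
WEIGHTS = {"CRITICAL": 4.0, "HIGH": 3.0, "MEDIUM": 2.0, "LOW": 1.0}
STATUS_FACTOR = {"PASS": 1.0, "WARN": 0.5, "FAIL": 0.0}  # INFO is excluded

def agent_score(findings):
    """findings: list of (status, blast_radius) tuples for one agent."""
    scored = [f for f in findings if f[0] in STATUS_FACTOR]
    if not scored:
        return None  # no data: agent is excluded from the global average
    total = sum(WEIGHTS[b] for s, b in scored)
    earned = sum(WEIGHTS[b] * STATUS_FACTOR[s] for s, b in scored)
    return round(earned / total * 100, 1)

def global_score(agent_scores):
    """Average over agents that produced data (None entries are skipped)."""
    with_data = [s for s in agent_scores if s is not None]
    return round(sum(with_data) / len(with_data), 1) if with_data else None
```

&lt;P&gt;For example, an agent with one CRITICAL PASS, one HIGH FAIL, and one MEDIUM WARN earns 5.0 of 9.0 possible weight, i.e. a score of 55.6.&lt;/P&gt;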
&lt;HR /&gt;
&lt;H2&gt;Real Audit Results: Tenant uhv42169&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Global Score: 36.6/100&lt;/STRONG&gt; — Critical gaps detected&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Infrastructure: 34.5&lt;/LI&gt;
&lt;LI&gt;Configuration: 28.6&lt;/LI&gt;
&lt;LI&gt;DEM: 27.3&lt;/LI&gt;
&lt;LI&gt;Operations: 45.5&lt;/LI&gt;
&lt;LI&gt;Security: 100&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Per-Agent Scores&lt;/H3&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;#&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Agent&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Score&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Data Source&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;01&lt;/TD&gt;
&lt;TD&gt;OneAgent &amp;amp; ActiveGate&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;73.6&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;02&lt;/TD&gt;
&lt;TD&gt;Host Groups&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;0.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;03&lt;/TD&gt;
&lt;TD&gt;Management Zones&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;0.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;04&lt;/TD&gt;
&lt;TD&gt;Auto Tags&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;0.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;05&lt;/TD&gt;
&lt;TD&gt;Manual Tags&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;50.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;06&lt;/TD&gt;
&lt;TD&gt;Ownership&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;0.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;07&lt;/TD&gt;
&lt;TD&gt;Security Context&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;N/A&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;dt.setting unavailable&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;08&lt;/TD&gt;
&lt;TD&gt;RUM&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;N/A&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;No apps (intentional)&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;09&lt;/TD&gt;
&lt;TD&gt;Synthetic Monitors&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;27.3&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;10&lt;/TD&gt;
&lt;TD&gt;Anomaly Detection&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;83.3&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;list_davis_analyzers + list_problems&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;11&lt;/TD&gt;
&lt;TD&gt;Problem Notifications&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;66.7&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;list_problems (CUSTOM_ALERT inference)&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;12&lt;/TD&gt;
&lt;TD&gt;SLOs&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;0.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;13&lt;/TD&gt;
&lt;TD&gt;Problem History&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;45.5&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;list_problems + execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;14&lt;/TD&gt;
&lt;TD&gt;Vulnerabilities&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;100.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;list_vulnerabilities&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;15&lt;/TD&gt;
&lt;TD&gt;Kubernetes&lt;/TD&gt;
&lt;TD&gt;&lt;SPAN&gt;30.0&lt;/SPAN&gt;&lt;/TD&gt;
&lt;TD&gt;get_kubernetes_events + execute_dql&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;HR /&gt;
&lt;H2&gt;How It Works: Step by Step&lt;/H2&gt;
&lt;H3&gt;Step 1: CLAUDE.md as Executable Playbook&lt;/H3&gt;
&lt;P&gt;The core innovation is that &lt;STRONG&gt;CLAUDE.md IS the automation&lt;/STRONG&gt;. No Python, no scripts, no SDK wrappers. The AI reads the playbook and follows it:&lt;/P&gt;
&lt;DIV&gt;&lt;SPAN&gt;# CLAUDE.md — audit_mcp Playbook&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;When the user says "audit [tenant]", follow this sequence:&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;### Step 1: SETUP&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;1. Call&lt;/SPAN&gt; &lt;SPAN&gt;get_environment_info&lt;/SPAN&gt; &lt;SPAN&gt;to verify connectivity&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;### Step 2: COLLECT DATA (for each of the 15 agents)&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;1. Read the agent file from&lt;/SPAN&gt; &lt;SPAN&gt;agents/NN_name.md&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;2. Execute each query or MCP tool call listed&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;DIV&gt;&lt;SPAN&gt;### Step 3: EVALUATE&lt;/SPAN&gt; &lt;SPAN&gt;— Apply checks, determine PASS/WARN/FAIL/INFO&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;### Step 4: CALCULATE SCORES&lt;/SPAN&gt; &lt;SPAN&gt;— Weighted by blast_radius&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;### Step 5: SRE ANALYSIS&lt;/SPAN&gt; &lt;SPAN&gt;— Root Cause + Recommendations + Next Steps&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;### Step 6: GENERATE REPORT&lt;/SPAN&gt; &lt;SPAN&gt;— Interactive HTML with Mainsoft branding&lt;/SPAN&gt;&lt;/DIV&gt;
&lt;H3&gt;Step 2: Agent Definitions as Markdown&lt;/H3&gt;
&lt;P&gt;Each agent is a self-contained markdown file. Here's a simplified example of Agent 10 (Anomaly Detection), which was redesigned to use MCP tools instead of the unavailable dt.setting:&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# Agent 10: Anomaly Detection&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;Category: configuration | Blast Radius: HIGH&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;## MCP Tools&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;list_davis_analyzers&lt;/SPAN&gt; &lt;SPAN&gt;— verify AI capabilities&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;list_problems(timeframe="30d")&lt;/SPAN&gt; &lt;SPAN&gt;— check if detection fires&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;## Checks&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;davis_analyzers_available:&lt;/SPAN&gt; &lt;SPAN&gt;PASS if ≥3 analyzers&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;anomaly_detection_firing:&lt;/SPAN&gt; &lt;SPAN&gt;PASS if SLOWDOWN/RESOURCE problems exist&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;anomaly_problem_ratio:&lt;/SPAN&gt; &lt;SPAN&gt;WARN if ≥60% anomaly-based (noisy)&lt;/SPAN&gt;&lt;/P&gt;
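&lt;P&gt;To make the criteria concrete, the three checks above could be evaluated roughly like this. This is a sketch: the input shapes (a list of analyzer names and a list of problem-category strings, with "RESOURCE" read as the RESOURCE_CONTENTION category) are my assumptions, not the MCP server's actual response format:&lt;/P&gt;

```python
# Hypothetical evaluation of Agent 10's three checks (input shapes assumed).
def evaluate_agent_10(analyzers, problem_categories):
    """analyzers: list of Davis analyzer names.
    problem_categories: list of problem-category strings from the last 30 days."""
    results = {}
    # davis_analyzers_available: PASS if at least 3 analyzers are exposed
    results["davis_analyzers_available"] = "PASS" if len(analyzers) >= 3 else "FAIL"
    # anomaly_detection_firing: PASS if anomaly-based problem categories appear
    anomaly = [c for c in problem_categories
               if c in ("SLOWDOWN", "RESOURCE_CONTENTION")]
    results["anomaly_detection_firing"] = "PASS" if anomaly else "FAIL"
    # anomaly_problem_ratio: WARN if 60% or more of problems are anomaly-based (noisy)
    ratio = len(anomaly) / len(problem_categories) if problem_categories else 0.0
    results["anomaly_problem_ratio"] = "WARN" if ratio >= 0.6 else "PASS"
    return results
```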
&lt;H3&gt;Step 3: The Report&lt;/H3&gt;
&lt;P&gt;The generated HTML report features:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Global score gauge with semaphore coloring (green/yellow/red)&lt;/LI&gt;
&lt;LI&gt;Category breakdown cards (Infrastructure, Configuration, DEM, Operations, Security)&lt;/LI&gt;
&lt;LI&gt;Per-agent sections with mini-gauges, findings tables with sort/filter, and collapsible AI analysis&lt;/LI&gt;
&lt;LI&gt;Each analysis includes &lt;STRONG&gt;Root Cause&lt;/STRONG&gt;, &lt;STRONG&gt;Recommendations&lt;/STRONG&gt; (prioritized), and &lt;STRONG&gt;Next Steps&lt;/STRONG&gt; (with effort estimates)&lt;/LI&gt;
&lt;LI&gt;Dark/light theme toggle, global search, print-to-PDF support&lt;/LI&gt;
&lt;LI&gt;Responsive design for mobile viewing&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;H2&gt;Key Discovery: Working Around dt.setting&lt;/H2&gt;
&lt;P&gt;A significant challenge: fetch dt.setting is &lt;STRONG&gt;not available as a DQL data object&lt;/STRONG&gt;, which initially blocked several agents (Security Context, Anomaly Detection, and Problem Notifications fully; Management Zones, Auto Tags, Ownership, and SLOs partially).&lt;/P&gt;
&lt;P&gt;The solution was to &lt;STRONG&gt;leverage other MCP tools creatively&lt;/STRONG&gt;:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Anomaly Detection:&lt;/STRONG&gt; Instead of querying settings, we check if list_davis_analyzers returns analyzers AND if list_problems shows anomaly-based problems are being generated. If both are true, anomaly detection is working.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Problem Notifications:&lt;/STRONG&gt; If list_problems returns CUSTOM_ALERT category problems, it proves both alerting rules AND notification channels are configured (you can't have a CUSTOM_ALERT without both).&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Security:&lt;/STRONG&gt; list_vulnerabilities provides runtime security assessment without needing settings access.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Kubernetes:&lt;/STRONG&gt; get_kubernetes_events reveals cluster health that entity queries alone can't show.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;This turned a limitation into a feature — the audit now uses &lt;STRONG&gt;7 different MCP tools&lt;/STRONG&gt; instead of relying solely on DQL, making it more resilient and comprehensive.&lt;/P&gt;
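&lt;P&gt;The Problem Notifications inference, for instance, reduces to a one-line check. A sketch, assuming list_problems results arrive as dicts with a "category" field (the field name is illustrative, not the server's actual schema):&lt;/P&gt;

```python
# Hypothetical sketch: inferring notification setup from problem categories.
def notifications_configured(problems):
    """problems: list of dicts with a 'category' key (assumed shape).
    Per the reasoning above, a CUSTOM_ALERT problem can only exist if a
    custom alerting rule fired, implying the alerting path is configured."""
    return any(p.get("category") == "CUSTOM_ALERT" for p in problems)
```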
&lt;HR /&gt;
&lt;H2&gt;Impact&lt;/H2&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;&lt;STRONG&gt;Metric&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Before (Manual)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;After (MCP)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Audit duration&lt;/TD&gt;
&lt;TD&gt;1-3 days&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;~20 minutes&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Dimensions checked&lt;/TD&gt;
&lt;TD&gt;8-10&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;15 (5 categories)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;MCP tools used&lt;/TD&gt;
&lt;TD&gt;N/A&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;7 tools&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Report format&lt;/TD&gt;
&lt;TD&gt;Google Slides / PDF&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;Interactive HTML (dark mode, search, filter)&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;Consistency&lt;/TD&gt;
&lt;TD&gt;Varies by analyst&lt;/TD&gt;
&lt;TD&gt;&lt;STRONG&gt;100% repeatable&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;HR /&gt;
&lt;H2&gt;What's Next&lt;/H2&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;Settings API v2 integration:&lt;/STRONG&gt; When the MCP server adds a settings tool, the remaining N/A agents will become fully functional&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Multi-tenant comparison:&lt;/STRONG&gt; Run audits across tenants and compare maturity scores&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Trend tracking:&lt;/STRONG&gt; Store historical scores to show improvement over time&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Dynatrace Notebook export:&lt;/STRONG&gt; Use create_dynatrace_notebook to push findings directly into the tenant&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Automated remediation:&lt;/STRONG&gt; Use send_event to create deployment events tracking fixes&lt;/LI&gt;
&lt;/UL&gt;
&lt;HR /&gt;
&lt;P&gt;&lt;STRONG&gt;Stack:&lt;/STRONG&gt; Claude AI + Dynatrace MCP Server + Markdown playbooks&lt;BR /&gt;&lt;STRONG&gt;Source:&lt;/STRONG&gt; Available on request — the entire framework is ~15 markdown files + 1 HTML template&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 07:13:52 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298171#M129</guid>
      <dc:creator>tracegazer</dc:creator>
      <dc:date>2026-04-23T07:13:52Z</dc:date>
    </item>
    <item>
      <title>Re: MCP Server Challenge entry #5: Observability Maturity Auditor</title>
      <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298254#M132</link>
      <description>&lt;P&gt;Hey&amp;nbsp;&lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/59639"&gt;@tracegazer&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;would you mind sharing it? It reminds me of the tenant review&amp;nbsp;&lt;A href="https://github.com/dynatrace-oss/CustomerSuccess/tree/main" target="_blank"&gt;https://github.com/dynatrace-oss/CustomerSuccess/tree/main&lt;/A&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;I internally tried to build a similar solution for Dynatrace Managed using Claude, though not with the agent approach: instead, Claude generates the utility that produces such reports. (Managed installations are typically airgapped, and customers have AI regulations.)&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 18:03:11 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298254#M132</guid>
      <dc:creator>Julius_Loman</dc:creator>
      <dc:date>2026-04-23T18:03:11Z</dc:date>
    </item>
    <item>
      <title>Re: MCP Server Challenge entry #5: Observability Maturity Auditor</title>
      <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298261#M133</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/3364"&gt;@Julius_Loman&lt;/a&gt;. The current solution runs with the help of Claude and Dynatrace’s MCP. However, I have a previous solution that used Dynatrace APIs, which, as I understand, should also work in both SaaS and Managed environments. Let me check if I have it committed and published on Git.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Apr 2026 19:14:23 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298261#M133</guid>
      <dc:creator>tracegazer</dc:creator>
      <dc:date>2026-04-23T19:14:23Z</dc:date>
    </item>
    <item>
      <title>Re: MCP Server Challenge entry #5: Observability Maturity Auditor</title>
      <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298360#M135</link>
      <description>&lt;P&gt;&lt;FONT size="3"&gt;Hi &lt;a href="https://community.dynatrace.com/t5/user/viewprofilepage/user-id/3364"&gt;@Julius_Loman&lt;/a&gt;&amp;nbsp;, I’ve finally uploaded the repository. Here’s the URL:&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;git clone &lt;A href="https://github.com/alanfuentes92/observability-auditor.git" target="_blank"&gt;https://github.com/alanfuentes92/observability-auditor.git&lt;/A&gt;&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="3"&gt;cd observability-auditor/audit-mcp&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT size="3"&gt;cp mcp-config.example.json .mcp.json&lt;BR /&gt;&lt;/FONT&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Edit .mcp.json with your tenant URL and token&lt;/STRONG&gt;&lt;BR /&gt;&lt;/FONT&gt;&lt;FONT size="3"&gt;claude .&lt;BR /&gt;&lt;/FONT&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;Then say: "audit this tenant"&amp;nbsp;→ An HTML report will be generated in the output/ folder&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;Feel free to reach out with any questions, feedback, suggestions, or ideas—everything is welcome.&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 Apr 2026 17:26:03 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-5-Observability-Maturity-Auditor/m-p/298360#M135</guid>
      <dc:creator>tracegazer</dc:creator>
      <dc:date>2026-04-24T17:26:03Z</dc:date>
    </item>
  </channel>
</rss>

