<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic MCP Server Challenge entry #9: R.E.A.D.Y. - Reliability Evidence Assessment for Dynatrace Readiness in AI</title>
    <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-9-R-E-A-D-Y-Reliability-Evidence/m-p/298993#M151</link>
    <description>&lt;H3&gt;R.E.A.D.Y. Project Use Case Write-Up&lt;/H3&gt;
&lt;P&gt;R.E.A.D.Y. is a Dynatrace-native app that uses Dynatrace Remote MCP, DQL, and Dynatrace platform APIs to transform observability data into two practical operator workflows:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Problems Intelligence for real operational triage&lt;/LI&gt;
&lt;LI&gt;Ready Report Generation for fleet-level operational readiness&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The goal is not to create another chatbot-first experience. Instead, R.E.A.D.Y. reduces manual observability work by collecting, normalizing, and scoring evidence before any AI explanation is generated.&lt;/P&gt;
&lt;P&gt;The workflow follows a simple pattern:&lt;/P&gt;
&lt;PRE&gt;Dynatrace MCP / DQL / APIs
-&amp;gt; normalized evidence
-&amp;gt; deterministic checks and scoring
-&amp;gt; optional OpenAI summarization
-&amp;gt; operator-ready insight&lt;/PRE&gt;
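&lt;P&gt;As a rough sketch (all function and field names below are illustrative placeholders, not the app's real API), the pattern looks like this:&lt;/P&gt;

```python
# Illustrative sketch of the evidence pipeline shape. The names
# collect_evidence, score, and summarize are placeholders, not the
# app's real API; the field names are assumptions as well.

def collect_evidence(raw_problems):
    """Normalize raw problem records into a stable internal shape."""
    return [
        {
            "id": p.get("problemId", "unknown"),
            "category": p.get("category", "Custom"),
            "status": p.get("status", "OPEN"),
        }
        for p in raw_problems
    ]

def score(evidence):
    """Deterministic check: share of problems that are already closed."""
    if not evidence:
        return 1.0
    open_count = sum(1 for e in evidence if e["status"] == "OPEN")
    return round(1.0 - open_count / len(evidence), 2)

def summarize(evidence, readiness):
    """Stand-in for the optional AI step; here a plain-text fallback."""
    return f"{len(evidence)} problems analyzed, readiness score {readiness}"

raw = [
    {"problemId": "P-1", "category": "ERROR", "status": "OPEN"},
    {"problemId": "P-2", "category": "SLOWDOWN", "status": "CLOSED"},
]
evidence = collect_evidence(raw)
print(summarize(evidence, score(evidence)))
# 2 problems analyzed, readiness score 0.5
```

&lt;P&gt;The key point is the ordering: the summarization step only ever sees already-normalized, already-scored evidence.&lt;/P&gt;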
&lt;H2&gt;Problem We Wanted to Solve&lt;/H2&gt;
&lt;P&gt;In many environments, the signals already exist in Dynatrace, but answering operational questions still requires too many manual steps.&lt;/P&gt;
&lt;P&gt;Operators often need to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;review recent Davis Problems&lt;/LI&gt;
&lt;LI&gt;understand which services, applications, or entities are most affected&lt;/LI&gt;
&lt;LI&gt;compare categories such as Error, Slowdown, Resource, Availability, or Custom&lt;/LI&gt;
&lt;LI&gt;inspect duration and recurrence patterns&lt;/LI&gt;
&lt;LI&gt;identify the next entity to investigate&lt;/LI&gt;
&lt;LI&gt;assess whether a service or application fleet is operationally ready&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The data is available, but the workflow is fragmented. R.E.A.D.Y. brings that evidence together into a structured, repeatable, and operator-friendly experience.&lt;/P&gt;
&lt;H2&gt;Tools Used&lt;/H2&gt;
&lt;H3&gt;1. Dynatrace Remote MCP&lt;/H3&gt;
&lt;P&gt;Dynatrace Remote MCP is used as the main evidence and context bridge.&lt;/P&gt;
&lt;P&gt;MCP tools used in the project include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;execute-dql&lt;/LI&gt;
&lt;LI&gt;get-entity-id&lt;/LI&gt;
&lt;LI&gt;get-entity-name&lt;/LI&gt;
&lt;LI&gt;query-problems&lt;/LI&gt;
&lt;LI&gt;get-problem-by-id&lt;/LI&gt;
&lt;LI&gt;find-documents&lt;/LI&gt;
&lt;LI&gt;find-troubleshooting-guides&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;2. Dynatrace Platform APIs and App Functions&lt;/H3&gt;
&lt;P&gt;Dynatrace App Functions provide the app's backend, orchestrating evidence collection and report generation securely.&lt;/P&gt;
&lt;P&gt;This keeps sensitive configuration and tokens out of the browser and makes the flow more production-friendly.&lt;/P&gt;
&lt;H3&gt;3. OpenAI API&lt;/H3&gt;
&lt;P&gt;OpenAI is used only after the evidence has already been collected, normalized, and scored.&lt;/P&gt;
&lt;P&gt;The AI layer is optional and is used to generate concise operator-facing explanations from the real evidence set, not to invent conclusions.&lt;/P&gt;
&lt;H3&gt;4. Playbooks&lt;/H3&gt;
&lt;P&gt;The Playbooks layer provides structured guidance for the LLM, defining how it should use Dynatrace MCP tools, DQL, and platform evidence during the analysis flow. Instead of allowing the model to guess, each playbook orients the LLM to first collect evidence, resolve entities, query Problems or telemetry when needed, validate the available context, and only then generate an operator-facing explanation. This keeps the output grounded in real Dynatrace data, reduces hallucination risk, and makes the workflow repeatable across different operational scenarios.&lt;/P&gt;
&lt;H2&gt;Primary Use Case 1: Problems Intelligence for Real Operational Triage&lt;/H2&gt;
&lt;H3&gt;Scenario&lt;/H3&gt;
&lt;P&gt;An operator wants to understand the health of a selected scope, such as services, applications, frontends, or infrastructure, over a time window like the last 2 hours, 24 hours, 7 days, or 30 days.&lt;/P&gt;
&lt;P&gt;Instead of opening multiple Dynatrace views manually, the operator opens the Problems view in R.E.A.D.Y. and filters by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;time window&lt;/LI&gt;
&lt;LI&gt;impact&lt;/LI&gt;
&lt;LI&gt;status&lt;/LI&gt;
&lt;LI&gt;category or type&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What the App Does&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Queries recent Davis Problems through Dynatrace MCP.&lt;/LI&gt;
&lt;LI&gt;Normalizes the result set into a stable Problems overview payload.&lt;/LI&gt;
&lt;LI&gt;Aggregates total Problems, active vs. closed Problems, status, category, time trends, duration statistics, recurrent Problems, top affected entities, and slowdown-related endpoints or services.&lt;/LI&gt;
&lt;LI&gt;Resolves entity IDs into readable names using MCP.&lt;/LI&gt;
&lt;LI&gt;Links affected entities directly to Dynatrace topology views.&lt;/LI&gt;
&lt;LI&gt;Optionally generates one AI Operator Insight from the real evidence set.&lt;/LI&gt;
&lt;/OL&gt;
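&lt;P&gt;Step 3 above, the aggregation into a Problems overview payload, reduces to a few counting operations once the result set is normalized. A minimal sketch (field names are assumptions, not the Dynatrace schema):&lt;/P&gt;

```python
# Counting problems by status, category, and entity from an
# already-normalized list. Field names are illustrative.
from collections import Counter

problems = [
    {"category": "ERROR", "status": "ACTIVE", "entity": "svc-a"},
    {"category": "ERROR", "status": "CLOSED", "entity": "svc-a"},
    {"category": "SLOWDOWN", "status": "ACTIVE", "entity": "svc-b"},
]

overview = {
    "total": len(problems),
    "active": sum(1 for p in problems if p["status"] == "ACTIVE"),
    "by_category": dict(Counter(p["category"] for p in problems)),
    "top_entities": Counter(p["entity"] for p in problems).most_common(3),
}
print(overview)
# {'total': 3, 'active': 2, 'by_category': {'ERROR': 2, 'SLOWDOWN': 1},
#  'top_entities': [('svc-a', 2), ('svc-b', 1)]}
```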
&lt;H3&gt;Why MCP Matters&lt;/H3&gt;
&lt;P&gt;MCP makes the integration practical because the app can use stable, named tools instead of hardcoding every tenant-specific retrieval path into the UI.&lt;/P&gt;
&lt;P&gt;It allows the app to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;discover available tools&lt;/LI&gt;
&lt;LI&gt;query Problems consistently&lt;/LI&gt;
&lt;LI&gt;resolve entity names&lt;/LI&gt;
&lt;LI&gt;execute DQL&lt;/LI&gt;
&lt;LI&gt;search related documentation and troubleshooting content&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Results Achieved&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;a live Problems dashboard backed by real Dynatrace data&lt;/LI&gt;
&lt;LI&gt;filtering by scope, status, impact, category, and time window&lt;/LI&gt;
&lt;LI&gt;readable entity names where resolution is possible&lt;/LI&gt;
&lt;LI&gt;direct links to affected entities in Dynatrace&lt;/LI&gt;
&lt;LI&gt;histograms showing how long Problems stay open&lt;/LI&gt;
&lt;LI&gt;category-level duration distribution&lt;/LI&gt;
&lt;LI&gt;recurrence and concentration signals&lt;/LI&gt;
&lt;LI&gt;optional AI-generated operator insight based only on collected evidence&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Practical Value&lt;/H3&gt;
&lt;P&gt;This helps operators answer questions such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Are current Problems active pressure or mostly historical noise?&lt;/LI&gt;
&lt;LI&gt;Which category dominates this time window?&lt;/LI&gt;
&lt;LI&gt;Are Slowdown Problems short-lived or staying open too long?&lt;/LI&gt;
&lt;LI&gt;Is one service, application, or entity repeatedly involved?&lt;/LI&gt;
&lt;LI&gt;Which entity should be inspected next?&lt;/LI&gt;
&lt;LI&gt;Where is operational risk concentrated?&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Primary Use Case 2: Fleet-Level Ready Reports for Operational Readiness&lt;/H2&gt;
&lt;H3&gt;Scenario&lt;/H3&gt;
&lt;P&gt;A platform team or SRE team wants to assess whether a fleet is operationally ready.&lt;/P&gt;
&lt;P&gt;This is different from simply checking whether telemetry exists. A service may have traces and metrics but still be missing important operational metadata, ownership, documentation, or governance signals.&lt;/P&gt;
&lt;P&gt;R.E.A.D.Y. currently supports readiness generation for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All Services&lt;/LI&gt;
&lt;LI&gt;All Applications&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What the App Does&lt;/H3&gt;
&lt;P&gt;The Ready Report workflow collects evidence for the selected scope and evaluates readiness using deterministic checks.&lt;/P&gt;
&lt;P&gt;Example signals include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ownership metadata&lt;/LI&gt;
&lt;LI&gt;team tags&lt;/LI&gt;
&lt;LI&gt;environment tags&lt;/LI&gt;
&lt;LI&gt;runbook or contact metadata&lt;/LI&gt;
&lt;LI&gt;dashboard evidence&lt;/LI&gt;
&lt;LI&gt;documentation evidence&lt;/LI&gt;
&lt;LI&gt;governance metadata&lt;/LI&gt;
&lt;LI&gt;application or service operational context&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The app then generates a structured report containing:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;overall readiness score&lt;/LI&gt;
&lt;LI&gt;domain-level results&lt;/LI&gt;
&lt;LI&gt;detected gaps&lt;/LI&gt;
&lt;LI&gt;recommendations&lt;/LI&gt;
&lt;LI&gt;evidence status&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The result clearly separates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;evidence present&lt;/LI&gt;
&lt;LI&gt;evidence missing&lt;/LI&gt;
&lt;LI&gt;evidence unknown or unavailable&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;That distinction is important. R.E.A.D.Y. does not pretend to know more than the data supports.&lt;/P&gt;
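&lt;P&gt;A minimal sketch of that three-way separation (the signal names and labels are illustrative, not the app's real check list):&lt;/P&gt;

```python
# Each required signal maps to PRESENT, MISSING, or UNKNOWN.
# UNKNOWN means the check could not be evaluated at all, which is
# deliberately kept distinct from a check that ran and failed.

REQUIRED = ["ownership", "team_tag", "runbook", "dashboard"]

def classify(evidence):
    result = {}
    for key in REQUIRED:
        if key not in evidence:
            result[key] = "UNKNOWN"   # no data: the check did not run
        elif evidence[key]:
            result[key] = "PRESENT"
        else:
            result[key] = "MISSING"   # data present but empty or false
    return result

service = {"ownership": "team-payments", "team_tag": "", "dashboard": True}
print(classify(service))
# {'ownership': 'PRESENT', 'team_tag': 'MISSING',
#  'runbook': 'UNKNOWN', 'dashboard': 'PRESENT'}
```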
&lt;H3&gt;Why This Is Useful&lt;/H3&gt;
&lt;P&gt;Readiness reviews are often manual and inconsistent. One team may consider a service ready because it has traffic. Another team may expect ownership, SLOs, dashboards, alerts, documentation, and runbooks.&lt;/P&gt;
&lt;P&gt;R.E.A.D.Y. makes that conversation more explicit and repeatable by turning readiness into an evidence-based workflow.&lt;/P&gt;
&lt;H3&gt;Results Achieved&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;real fleet-level readiness generation for Services&lt;/LI&gt;
&lt;LI&gt;real fleet-level readiness generation for Applications&lt;/LI&gt;
&lt;LI&gt;deterministic scoring before AI explanation&lt;/LI&gt;
&lt;LI&gt;clear visibility into missing operational metadata&lt;/LI&gt;
&lt;LI&gt;structured recommendations based on the collected evidence&lt;/LI&gt;
&lt;LI&gt;a repeatable report format that can be reused across environments&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Repeatable Workflow Pattern&lt;/H2&gt;
&lt;P&gt;This project is not tied to one specific tenant. The same pattern can be reused in other Dynatrace environments.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Configure the Dynatrace environment URL and platform token.&lt;/LI&gt;
&lt;LI&gt;Configure the Dynatrace MCP server endpoint.&lt;/LI&gt;
&lt;LI&gt;Discover available MCP tools.&lt;/LI&gt;
&lt;LI&gt;Collect evidence through MCP, DQL, and platform APIs.&lt;/LI&gt;
&lt;LI&gt;Normalize the evidence into a stable internal structure.&lt;/LI&gt;
&lt;LI&gt;Apply deterministic checks and scoring.&lt;/LI&gt;
&lt;LI&gt;Use AI only after the evidence is already structured.&lt;/LI&gt;
&lt;/OL&gt;
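&lt;P&gt;For the discovery step, it helps to remember that MCP is JSON-RPC based: a "tools/list" request enumerates the available tools and "tools/call" invokes one. The payloads below are a sketch; the exact argument name for execute-dql is an assumption:&lt;/P&gt;

```python
# Sketch of the two MCP requests behind steps 3 and 4. The JSON-RPC
# method names come from the MCP specification; the execute-dql
# argument key is an assumption about the Dynatrace MCP server.
import json

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "execute-dql",
        "arguments": {"dqlStatement": "fetch dt.davis.problems"},
    },
}

print(json.dumps(list_request))
# {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```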
&lt;P&gt;This same pattern can be extended to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Kubernetes workload readiness&lt;/LI&gt;
&lt;LI&gt;synthetic monitor readiness&lt;/LI&gt;
&lt;LI&gt;host readiness&lt;/LI&gt;
&lt;LI&gt;deployment-change correlation workflows&lt;/LI&gt;
&lt;LI&gt;fleet-wide governance audits&lt;/LI&gt;
&lt;LI&gt;service ownership validation&lt;/LI&gt;
&lt;LI&gt;operational metadata quality checks&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Why This Is a Good Dynatrace MCP Use Case&lt;/H2&gt;
&lt;P&gt;This project demonstrates Dynatrace MCP beyond a simple chat interface.&lt;/P&gt;
&lt;P&gt;R.E.A.D.Y. uses MCP as part of a real operator workflow for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;evidence discovery&lt;/LI&gt;
&lt;LI&gt;DQL execution&lt;/LI&gt;
&lt;LI&gt;entity resolution&lt;/LI&gt;
&lt;LI&gt;Problems analytics&lt;/LI&gt;
&lt;LI&gt;documentation search&lt;/LI&gt;
&lt;LI&gt;troubleshooting context&lt;/LI&gt;
&lt;LI&gt;readiness assessment&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The key value is that MCP becomes an operational building block. It helps answer questions teams already care about:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Where should we investigate first?&lt;/LI&gt;
&lt;LI&gt;What is recurring?&lt;/LI&gt;
&lt;LI&gt;Which entities are driving risk?&lt;/LI&gt;
&lt;LI&gt;What evidence is missing?&lt;/LI&gt;
&lt;LI&gt;Is this fleet operationally ready?&lt;/LI&gt;
&lt;LI&gt;What should be improved before production readiness is accepted?&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Architecture Pattern&lt;/H2&gt;
&lt;PRE&gt;UI
-&amp;gt; Dynatrace App Function orchestration
-&amp;gt; Dynatrace MCP / DQL / Platform APIs
-&amp;gt; normalized evidence layer
-&amp;gt; deterministic rules and scoring
-&amp;gt; optional AI summarization
-&amp;gt; operator-facing result&lt;/PRE&gt;
&lt;P&gt;This design keeps the system explainable, testable, auditable, extensible, and grounded in Dynatrace data.&lt;/P&gt;
&lt;H2&gt;What Makes It Creative&lt;/H2&gt;
&lt;P&gt;The creativity in R.E.A.D.Y. is not about replacing operators with AI.&lt;/P&gt;
&lt;P&gt;The creative part is combining Dynatrace-native evidence collection, MCP-powered context retrieval, deterministic operational scoring, and optional AI explanation into one repeatable workflow.&lt;/P&gt;
&lt;H2&gt;Business and Operational Outcome&lt;/H2&gt;
&lt;P&gt;R.E.A.D.Y. helps teams:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;shorten time to triage&lt;/LI&gt;
&lt;LI&gt;standardize readiness reviews&lt;/LI&gt;
&lt;LI&gt;identify recurring Problems&lt;/LI&gt;
&lt;LI&gt;detect concentration of operational risk&lt;/LI&gt;
&lt;LI&gt;highlight missing ownership or governance metadata&lt;/LI&gt;
&lt;LI&gt;produce structured reports instead of relying on tribal knowledge&lt;/LI&gt;
&lt;LI&gt;make operational readiness more evidence-based&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In short, Dynatrace MCP is used here not as a novelty, but as a repeatable operational building block for observability-driven decision support.&lt;/P&gt;
&lt;P&gt;Below are some screenshots of the app:&lt;/P&gt;
&lt;H4&gt;Problems Intelligence&lt;/H4&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_3-1777970963661.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33047i925AB275BCB1FF84/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_3-1777970963661.png" alt="MaximilianoML_3-1777970963661.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_1-1777970881063.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33045i18034BE82348DCDB/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_1-1777970881063.png" alt="MaximilianoML_1-1777970881063.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_4-1777971057463.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33048i6E12028C92AFAC49/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_4-1777971057463.png" alt="MaximilianoML_4-1777971057463.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H4&gt;Fleet-Level Ready Reports for Operational Readiness&lt;/H4&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_5-1777971217464.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33051i79D2B1C6B58DAC62/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_5-1777971217464.png" alt="MaximilianoML_5-1777971217464.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_6-1777971334014.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33052i5BA62A4D0190521A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_6-1777971334014.png" alt="MaximilianoML_6-1777971334014.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_7-1777971404041.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33053i18C3E8760AA63E44/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_7-1777971404041.png" alt="MaximilianoML_7-1777971404041.png" /&gt;&lt;/span&gt;&lt;/P&gt;
</description>
    <pubDate>Tue, 05 May 2026 10:25:12 GMT</pubDate>
    <dc:creator>MaximilianoML</dc:creator>
    <dc:date>2026-05-05T10:25:12Z</dc:date>
    <item>
      <title>MCP Server Challenge entry #9: R.E.A.D.Y. - Reliability Evidence Assessment for Dynatrace Readiness</title>
      <link>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-9-R-E-A-D-Y-Reliability-Evidence/m-p/298993#M151</link>
      <description>&lt;H3&gt;R.E.A.D.Y. Project Use Case Write-Up&lt;/H3&gt;
&lt;P&gt;R.E.A.D.Y. is a Dynatrace-native app that uses Dynatrace Remote MCP, DQL, and Dynatrace platform APIs to transform observability data into two practical operator workflows:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Problems Intelligence for real operational triage&lt;/LI&gt;
&lt;LI&gt;Ready Report Generation for fleet-level operational readiness&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;The goal is not to create another chatbot-first experience. Instead, R.E.A.D.Y. reduces manual observability work by collecting, normalizing, and scoring evidence before any AI explanation is generated.&lt;/P&gt;
&lt;P&gt;The workflow follows a simple pattern:&lt;/P&gt;
&lt;PRE&gt;Dynatrace MCP / DQL / APIs
-&amp;gt; normalized evidence
-&amp;gt; deterministic checks and scoring
-&amp;gt; optional OpenAI summarization
-&amp;gt; operator-ready insight&lt;/PRE&gt;
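&lt;P&gt;As a rough sketch (all function and field names below are illustrative placeholders, not the app's real API), the pattern looks like this:&lt;/P&gt;

```python
# Illustrative sketch of the evidence pipeline shape. The names
# collect_evidence, score, and summarize are placeholders, not the
# app's real API; the field names are assumptions as well.

def collect_evidence(raw_problems):
    """Normalize raw problem records into a stable internal shape."""
    return [
        {
            "id": p.get("problemId", "unknown"),
            "category": p.get("category", "Custom"),
            "status": p.get("status", "OPEN"),
        }
        for p in raw_problems
    ]

def score(evidence):
    """Deterministic check: share of problems that are already closed."""
    if not evidence:
        return 1.0
    open_count = sum(1 for e in evidence if e["status"] == "OPEN")
    return round(1.0 - open_count / len(evidence), 2)

def summarize(evidence, readiness):
    """Stand-in for the optional AI step; here a plain-text fallback."""
    return f"{len(evidence)} problems analyzed, readiness score {readiness}"

raw = [
    {"problemId": "P-1", "category": "ERROR", "status": "OPEN"},
    {"problemId": "P-2", "category": "SLOWDOWN", "status": "CLOSED"},
]
evidence = collect_evidence(raw)
print(summarize(evidence, score(evidence)))
# 2 problems analyzed, readiness score 0.5
```

&lt;P&gt;The key point is the ordering: the summarization step only ever sees already-normalized, already-scored evidence.&lt;/P&gt;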
&lt;H2&gt;Problem We Wanted to Solve&lt;/H2&gt;
&lt;P&gt;In many environments, the signals already exist in Dynatrace, but answering operational questions still requires too many manual steps.&lt;/P&gt;
&lt;P&gt;Operators often need to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;review recent Davis Problems&lt;/LI&gt;
&lt;LI&gt;understand which services, applications, or entities are most affected&lt;/LI&gt;
&lt;LI&gt;compare categories such as Error, Slowdown, Resource, Availability, or Custom&lt;/LI&gt;
&lt;LI&gt;inspect duration and recurrence patterns&lt;/LI&gt;
&lt;LI&gt;identify the next entity to investigate&lt;/LI&gt;
&lt;LI&gt;assess whether a service or application fleet is operationally ready&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The data is available, but the workflow is fragmented. R.E.A.D.Y. brings that evidence together into a structured, repeatable, and operator-friendly experience.&lt;/P&gt;
&lt;H2&gt;Tools Used&lt;/H2&gt;
&lt;H3&gt;1. Dynatrace Remote MCP&lt;/H3&gt;
&lt;P&gt;Dynatrace Remote MCP is used as the main evidence and context bridge.&lt;/P&gt;
&lt;P&gt;MCP tools used in the project include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;execute-dql&lt;/LI&gt;
&lt;LI&gt;get-entity-id&lt;/LI&gt;
&lt;LI&gt;get-entity-name&lt;/LI&gt;
&lt;LI&gt;query-problems&lt;/LI&gt;
&lt;LI&gt;get-problem-by-id&lt;/LI&gt;
&lt;LI&gt;find-documents&lt;/LI&gt;
&lt;LI&gt;find-troubleshooting-guides&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;2. Dynatrace Platform APIs and App Functions&lt;/H3&gt;
&lt;P&gt;Dynatrace App Functions provide the app's backend, orchestrating evidence collection and report generation securely.&lt;/P&gt;
&lt;P&gt;This keeps sensitive configuration and tokens out of the browser and makes the flow more production-friendly.&lt;/P&gt;
&lt;H3&gt;3. OpenAI API&lt;/H3&gt;
&lt;P&gt;OpenAI is used only after the evidence has already been collected, normalized, and scored.&lt;/P&gt;
&lt;P&gt;The AI layer is optional and is used to generate concise operator-facing explanations from the real evidence set, not to invent conclusions.&lt;/P&gt;
&lt;H3&gt;4. Playbooks&lt;/H3&gt;
&lt;P&gt;The Playbooks layer provides structured guidance for the LLM, defining how it should use Dynatrace MCP tools, DQL, and platform evidence during the analysis flow. Instead of allowing the model to guess, each playbook orients the LLM to first collect evidence, resolve entities, query Problems or telemetry when needed, validate the available context, and only then generate an operator-facing explanation. This keeps the output grounded in real Dynatrace data, reduces hallucination risk, and makes the workflow repeatable across different operational scenarios.&lt;/P&gt;
&lt;H2&gt;Primary Use Case 1: Problems Intelligence for Real Operational Triage&lt;/H2&gt;
&lt;H3&gt;Scenario&lt;/H3&gt;
&lt;P&gt;An operator wants to understand the health of a selected scope, such as services, applications, frontends, or infrastructure, over a time window like the last 2 hours, 24 hours, 7 days, or 30 days.&lt;/P&gt;
&lt;P&gt;Instead of opening multiple Dynatrace views manually, the operator opens the Problems view in R.E.A.D.Y. and filters by:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;time window&lt;/LI&gt;
&lt;LI&gt;impact&lt;/LI&gt;
&lt;LI&gt;status&lt;/LI&gt;
&lt;LI&gt;category or type&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What the App Does&lt;/H3&gt;
&lt;OL&gt;
&lt;LI&gt;Queries recent Davis Problems through Dynatrace MCP.&lt;/LI&gt;
&lt;LI&gt;Normalizes the result set into a stable Problems overview payload.&lt;/LI&gt;
&lt;LI&gt;Aggregates total Problems, active vs. closed Problems, status, category, time trends, duration statistics, recurrent Problems, top affected entities, and slowdown-related endpoints or services.&lt;/LI&gt;
&lt;LI&gt;Resolves entity IDs into readable names using MCP.&lt;/LI&gt;
&lt;LI&gt;Links affected entities directly to Dynatrace topology views.&lt;/LI&gt;
&lt;LI&gt;Optionally generates one AI Operator Insight from the real evidence set.&lt;/LI&gt;
&lt;/OL&gt;
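&lt;P&gt;Step 3 above, the aggregation into a Problems overview payload, reduces to a few counting operations once the result set is normalized. A minimal sketch (field names are assumptions, not the Dynatrace schema):&lt;/P&gt;

```python
# Counting problems by status, category, and entity from an
# already-normalized list. Field names are illustrative.
from collections import Counter

problems = [
    {"category": "ERROR", "status": "ACTIVE", "entity": "svc-a"},
    {"category": "ERROR", "status": "CLOSED", "entity": "svc-a"},
    {"category": "SLOWDOWN", "status": "ACTIVE", "entity": "svc-b"},
]

overview = {
    "total": len(problems),
    "active": sum(1 for p in problems if p["status"] == "ACTIVE"),
    "by_category": dict(Counter(p["category"] for p in problems)),
    "top_entities": Counter(p["entity"] for p in problems).most_common(3),
}
print(overview)
# {'total': 3, 'active': 2, 'by_category': {'ERROR': 2, 'SLOWDOWN': 1},
#  'top_entities': [('svc-a', 2), ('svc-b', 1)]}
```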
&lt;H3&gt;Why MCP Matters&lt;/H3&gt;
&lt;P&gt;MCP makes the integration practical because the app can use stable, named tools instead of hardcoding every tenant-specific retrieval path into the UI.&lt;/P&gt;
&lt;P&gt;It allows the app to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;discover available tools&lt;/LI&gt;
&lt;LI&gt;query Problems consistently&lt;/LI&gt;
&lt;LI&gt;resolve entity names&lt;/LI&gt;
&lt;LI&gt;execute DQL&lt;/LI&gt;
&lt;LI&gt;search related documentation and troubleshooting content&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Results Achieved&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;a live Problems dashboard backed by real Dynatrace data&lt;/LI&gt;
&lt;LI&gt;filtering by scope, status, impact, category, and time window&lt;/LI&gt;
&lt;LI&gt;readable entity names where resolution is possible&lt;/LI&gt;
&lt;LI&gt;direct links to affected entities in Dynatrace&lt;/LI&gt;
&lt;LI&gt;histograms showing how long Problems stay open&lt;/LI&gt;
&lt;LI&gt;category-level duration distribution&lt;/LI&gt;
&lt;LI&gt;recurrence and concentration signals&lt;/LI&gt;
&lt;LI&gt;optional AI-generated operator insight based only on collected evidence&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;Practical Value&lt;/H3&gt;
&lt;P&gt;This helps operators answer questions such as:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Are current Problems active pressure or mostly historical noise?&lt;/LI&gt;
&lt;LI&gt;Which category dominates this time window?&lt;/LI&gt;
&lt;LI&gt;Are Slowdown Problems short-lived or staying open too long?&lt;/LI&gt;
&lt;LI&gt;Is one service, application, or entity repeatedly involved?&lt;/LI&gt;
&lt;LI&gt;Which entity should be inspected next?&lt;/LI&gt;
&lt;LI&gt;Where is operational risk concentrated?&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Primary Use Case 2: Fleet-Level Ready Reports for Operational Readiness&lt;/H2&gt;
&lt;H3&gt;Scenario&lt;/H3&gt;
&lt;P&gt;A platform team or SRE team wants to assess whether a fleet is operationally ready.&lt;/P&gt;
&lt;P&gt;This is different from simply checking whether telemetry exists. A service may have traces and metrics but still be missing important operational metadata, ownership, documentation, or governance signals.&lt;/P&gt;
&lt;P&gt;R.E.A.D.Y. currently supports readiness generation for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;All Services&lt;/LI&gt;
&lt;LI&gt;All Applications&lt;/LI&gt;
&lt;/UL&gt;
&lt;H3&gt;What the App Does&lt;/H3&gt;
&lt;P&gt;The Ready Report workflow collects evidence for the selected scope and evaluates readiness using deterministic checks.&lt;/P&gt;
&lt;P&gt;Example signals include:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;ownership metadata&lt;/LI&gt;
&lt;LI&gt;team tags&lt;/LI&gt;
&lt;LI&gt;environment tags&lt;/LI&gt;
&lt;LI&gt;runbook or contact metadata&lt;/LI&gt;
&lt;LI&gt;dashboard evidence&lt;/LI&gt;
&lt;LI&gt;documentation evidence&lt;/LI&gt;
&lt;LI&gt;governance metadata&lt;/LI&gt;
&lt;LI&gt;application or service operational context&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The app then generates a structured report containing:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;overall readiness score&lt;/LI&gt;
&lt;LI&gt;domain-level results&lt;/LI&gt;
&lt;LI&gt;detected gaps&lt;/LI&gt;
&lt;LI&gt;recommendations&lt;/LI&gt;
&lt;LI&gt;evidence status&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The result clearly separates:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;evidence present&lt;/LI&gt;
&lt;LI&gt;evidence missing&lt;/LI&gt;
&lt;LI&gt;evidence unknown or unavailable&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;That distinction is important. R.E.A.D.Y. does not pretend to know more than the data supports.&lt;/P&gt;
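&lt;P&gt;A minimal sketch of that three-way separation (the signal names and labels are illustrative, not the app's real check list):&lt;/P&gt;

```python
# Each required signal maps to PRESENT, MISSING, or UNKNOWN.
# UNKNOWN means the check could not be evaluated at all, which is
# deliberately kept distinct from a check that ran and failed.

REQUIRED = ["ownership", "team_tag", "runbook", "dashboard"]

def classify(evidence):
    result = {}
    for key in REQUIRED:
        if key not in evidence:
            result[key] = "UNKNOWN"   # no data: the check did not run
        elif evidence[key]:
            result[key] = "PRESENT"
        else:
            result[key] = "MISSING"   # data present but empty or false
    return result

service = {"ownership": "team-payments", "team_tag": "", "dashboard": True}
print(classify(service))
# {'ownership': 'PRESENT', 'team_tag': 'MISSING',
#  'runbook': 'UNKNOWN', 'dashboard': 'PRESENT'}
```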
&lt;H3&gt;Why This Is Useful&lt;/H3&gt;
&lt;P&gt;Readiness reviews are often manual and inconsistent. One team may consider a service ready because it has traffic. Another team may expect ownership, SLOs, dashboards, alerts, documentation, and runbooks.&lt;/P&gt;
&lt;P&gt;R.E.A.D.Y. makes that conversation more explicit and repeatable by turning readiness into an evidence-based workflow.&lt;/P&gt;
&lt;H3&gt;Results Achieved&lt;/H3&gt;
&lt;UL&gt;
&lt;LI&gt;real fleet-level readiness generation for Services&lt;/LI&gt;
&lt;LI&gt;real fleet-level readiness generation for Applications&lt;/LI&gt;
&lt;LI&gt;deterministic scoring before AI explanation&lt;/LI&gt;
&lt;LI&gt;clear visibility into missing operational metadata&lt;/LI&gt;
&lt;LI&gt;structured recommendations based on the collected evidence&lt;/LI&gt;
&lt;LI&gt;a repeatable report format that can be reused across environments&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Repeatable Workflow Pattern&lt;/H2&gt;
&lt;P&gt;This project is not tied to one specific tenant. The same pattern can be reused in other Dynatrace environments.&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Configure the Dynatrace environment URL and platform token.&lt;/LI&gt;
&lt;LI&gt;Configure the Dynatrace MCP server endpoint.&lt;/LI&gt;
&lt;LI&gt;Discover available MCP tools.&lt;/LI&gt;
&lt;LI&gt;Collect evidence through MCP, DQL, and platform APIs.&lt;/LI&gt;
&lt;LI&gt;Normalize the evidence into a stable internal structure.&lt;/LI&gt;
&lt;LI&gt;Apply deterministic checks and scoring.&lt;/LI&gt;
&lt;LI&gt;Use AI only after the evidence is already structured.&lt;/LI&gt;
&lt;/OL&gt;
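&lt;P&gt;For the discovery step, it helps to remember that MCP is JSON-RPC based: a "tools/list" request enumerates the available tools and "tools/call" invokes one. The payloads below are a sketch; the exact argument name for execute-dql is an assumption:&lt;/P&gt;

```python
# Sketch of the two MCP requests behind steps 3 and 4. The JSON-RPC
# method names come from the MCP specification; the execute-dql
# argument key is an assumption about the Dynatrace MCP server.
import json

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "execute-dql",
        "arguments": {"dqlStatement": "fetch dt.davis.problems"},
    },
}

print(json.dumps(list_request))
# {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```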
&lt;P&gt;This same pattern can be extended to:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Kubernetes workload readiness&lt;/LI&gt;
&lt;LI&gt;synthetic monitor readiness&lt;/LI&gt;
&lt;LI&gt;host readiness&lt;/LI&gt;
&lt;LI&gt;deployment-change correlation workflows&lt;/LI&gt;
&lt;LI&gt;fleet-wide governance audits&lt;/LI&gt;
&lt;LI&gt;service ownership validation&lt;/LI&gt;
&lt;LI&gt;operational metadata quality checks&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Why This Is a Good Dynatrace MCP Use Case&lt;/H2&gt;
&lt;P&gt;This project demonstrates Dynatrace MCP beyond a simple chat interface.&lt;/P&gt;
&lt;P&gt;R.E.A.D.Y. uses MCP as part of a real operator workflow for:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;evidence discovery&lt;/LI&gt;
&lt;LI&gt;DQL execution&lt;/LI&gt;
&lt;LI&gt;entity resolution&lt;/LI&gt;
&lt;LI&gt;Problems analytics&lt;/LI&gt;
&lt;LI&gt;documentation search&lt;/LI&gt;
&lt;LI&gt;troubleshooting context&lt;/LI&gt;
&lt;LI&gt;readiness assessment&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;The key value is that MCP becomes an operational building block. It helps answer questions teams already care about:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;Where should we investigate first?&lt;/LI&gt;
&lt;LI&gt;What is recurring?&lt;/LI&gt;
&lt;LI&gt;Which entities are driving risk?&lt;/LI&gt;
&lt;LI&gt;What evidence is missing?&lt;/LI&gt;
&lt;LI&gt;Is this fleet operationally ready?&lt;/LI&gt;
&lt;LI&gt;What should be improved before production readiness is accepted?&lt;/LI&gt;
&lt;/UL&gt;
&lt;H2&gt;Architecture Pattern&lt;/H2&gt;
&lt;PRE&gt;UI
-&amp;gt; Dynatrace App Function orchestration
-&amp;gt; Dynatrace MCP / DQL / Platform APIs
-&amp;gt; normalized evidence layer
-&amp;gt; deterministic rules and scoring
-&amp;gt; optional AI summarization
-&amp;gt; operator-facing result&lt;/PRE&gt;
&lt;P&gt;This design keeps the system explainable, testable, auditable, extensible, and grounded in Dynatrace data.&lt;/P&gt;
&lt;H2&gt;What Makes It Creative&lt;/H2&gt;
&lt;P&gt;The creativity in R.E.A.D.Y. is not about replacing operators with AI.&lt;/P&gt;
&lt;P&gt;The creative part is combining Dynatrace-native evidence collection, MCP-powered context retrieval, deterministic operational scoring, and optional AI explanation into one repeatable workflow.&lt;/P&gt;
&lt;H2&gt;Business and Operational Outcome&lt;/H2&gt;
&lt;P&gt;R.E.A.D.Y. helps teams:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;shorten time to triage&lt;/LI&gt;
&lt;LI&gt;standardize readiness reviews&lt;/LI&gt;
&lt;LI&gt;identify recurring Problems&lt;/LI&gt;
&lt;LI&gt;detect concentration of operational risk&lt;/LI&gt;
&lt;LI&gt;highlight missing ownership or governance metadata&lt;/LI&gt;
&lt;LI&gt;produce structured reports instead of relying on tribal knowledge&lt;/LI&gt;
&lt;LI&gt;make operational readiness more evidence-based&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;In short, Dynatrace MCP is used here not as a novelty, but as a repeatable operational building block for observability-driven decision support.&lt;/P&gt;
&lt;P&gt;Below are some screenshots of the app:&lt;/P&gt;
&lt;H4&gt;Problems Intelligence&lt;/H4&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_3-1777970963661.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33047i925AB275BCB1FF84/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_3-1777970963661.png" alt="MaximilianoML_3-1777970963661.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_1-1777970881063.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33045i18034BE82348DCDB/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_1-1777970881063.png" alt="MaximilianoML_1-1777970881063.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_4-1777971057463.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33048i6E12028C92AFAC49/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_4-1777971057463.png" alt="MaximilianoML_4-1777971057463.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;H4&gt;Fleet-Level Ready Reports for Operational Readiness&lt;/H4&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_5-1777971217464.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33051i79D2B1C6B58DAC62/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_5-1777971217464.png" alt="MaximilianoML_5-1777971217464.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_6-1777971334014.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33052i5BA62A4D0190521A/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_6-1777971334014.png" alt="MaximilianoML_6-1777971334014.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MaximilianoML_7-1777971404041.png" style="width: 400px;"&gt;&lt;img src="https://community.dynatrace.com/t5/image/serverpage/image-id/33053i18C3E8760AA63E44/image-size/medium?v=v2&amp;amp;px=400" role="button" title="MaximilianoML_7-1777971404041.png" alt="MaximilianoML_7-1777971404041.png" /&gt;&lt;/span&gt;&lt;/P&gt;
</description>
      <pubDate>Tue, 05 May 2026 10:25:12 GMT</pubDate>
      <guid>https://community.dynatrace.com/t5/AI/MCP-Server-Challenge-entry-9-R-E-A-D-Y-Reliability-Evidence/m-p/298993#M151</guid>
      <dc:creator>MaximilianoML</dc:creator>
      <dc:date>2026-05-05T10:25:12Z</dc:date>
    </item>
  </channel>
</rss>

