05 May 2026 10:05 AM - last edited on 05 May 2026 11:25 AM by Michal_Gebacki
R.E.A.D.Y. is a Dynatrace-native app that uses Dynatrace Remote MCP, DQL, and Dynatrace platform APIs to transform observability data into two practical operator workflows: a Problems view for scope health analysis, and a Ready Report workflow for readiness assessment.
The goal is not to create another chatbot-first experience. Instead, R.E.A.D.Y. reduces manual observability work by collecting, normalizing, and scoring evidence before any AI explanation is generated.
The workflow follows a simple pattern:
Dynatrace MCP / DQL / APIs -> normalized evidence -> deterministic checks and scoring -> optional OpenAI summarization -> operator-ready insight
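The deterministic middle of this pattern can be sketched in TypeScript. Everything below is illustrative: the `Evidence` shape, the `score` function, and the sample signal names are assumptions for the sketch, not the app's real API. The point is that scoring is a pure, deterministic step that runs before any AI summarization.

```typescript
// Hypothetical evidence record produced by the normalization layer.
interface Evidence {
  id: string;       // signal name, e.g. "ownership"
  present: boolean; // did the signal exist in Dynatrace?
  weight: number;   // contribution to the readiness score
}

// Deterministic scoring: no AI involved at this stage.
// Returns a 0-100 percentage of weighted evidence that was present.
function score(evidence: Evidence[]): number {
  const total = evidence.reduce((sum, e) => sum + e.weight, 0);
  const met = evidence
    .filter((e) => e.present)
    .reduce((sum, e) => sum + e.weight, 0);
  return total === 0 ? 0 : Math.round((met / total) * 100);
}

const sample: Evidence[] = [
  { id: "ownership", present: true, weight: 2 },
  { id: "slo", present: false, weight: 1 },
  { id: "alerting", present: true, weight: 1 },
];
console.log(score(sample)); // → 75
```

Because the score is computed from the evidence set alone, the same inputs always yield the same result, which is what makes the workflow auditable.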
In many environments, the signals already exist in Dynatrace, but answering operational questions still requires too many manual steps across multiple views and queries. The data is available; the workflow is fragmented. R.E.A.D.Y. brings that evidence together into a structured, repeatable, and operator-friendly experience.
Dynatrace Remote MCP is used as the main evidence and context bridge, and the app relies on several of its named tools for retrieval.
Dynatrace App Functions run on the app's backend to orchestrate evidence collection and report generation securely.
This keeps sensitive configuration and tokens out of the browser and makes the flow more production-friendly.
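A minimal sketch of that server-side orchestration idea follows. The function name, the `EvidenceBundle` shape, and the injected `fetchSignal` caller are all hypothetical, and the real Dynatrace App Function signature may differ; the sketch only shows the principle that credentialed calls stay behind the backend and the browser receives normalized evidence only.

```typescript
// Illustrative shape of evidence returned to the UI (no tokens inside).
type EvidenceBundle = { scope: string; signals: Record<string, boolean> };

// The fetchSignal callback stands in for the credentialed MCP/DQL caller
// that only exists server-side; the browser never sees the token it uses.
async function collectEvidence(
  scope: string,
  fetchSignal: (name: string) => Promise<boolean>
): Promise<EvidenceBundle> {
  const names = ["ownership", "slo", "alerting"]; // illustrative signal set
  const entries = await Promise.all(
    names.map(async (n) => [n, await fetchSignal(n)] as const)
  );
  return { scope, signals: Object.fromEntries(entries) };
}

// Usage with a stubbed fetcher (no tenant credentials needed for the sketch):
collectEvidence("checkout-service", async (n) => n !== "slo").then((bundle) =>
  console.log(bundle.signals)
);
```

Injecting the fetcher also makes the orchestration testable without a live tenant.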
OpenAI is used only after the evidence has already been collected, normalized, and scored.
The AI layer is optional and is used to generate concise operator-facing explanations from the real evidence set, not to invent conclusions.
The Playbooks layer provides structured guidance for the LLM, defining how it should use Dynatrace MCP tools, DQL, and platform evidence during the analysis flow. Instead of allowing the model to guess, each playbook orients the LLM to first collect evidence, resolve entities, query Problems or telemetry when needed, validate the available context, and only then generate an operator-facing explanation. This keeps the output grounded in real Dynatrace data, reduces hallucination risk, and makes the workflow repeatable across different operational scenarios.
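The ordering a playbook enforces can be expressed as data. The step names and the `PlaybookStep` shape below are assumptions for illustration, not the app's actual playbook format; the sketch only captures the rule described above, that explanation comes last, after evidence collection and validation.

```typescript
// Hypothetical playbook step: an ordered action the LLM must follow.
interface PlaybookStep {
  action:
    | "collect_evidence"
    | "resolve_entities"
    | "query_problems"
    | "validate_context"
    | "explain";
  required: boolean;
}

// An illustrative triage playbook: evidence first, explanation last.
const problemTriagePlaybook: PlaybookStep[] = [
  { action: "collect_evidence", required: true },
  { action: "resolve_entities", required: true },
  { action: "query_problems", required: false }, // only when needed
  { action: "validate_context", required: true },
  { action: "explain", required: true }, // AI summarization happens last
];

// The ordering constraint is checkable: "explain" must be the final step.
const explainIndex = problemTriagePlaybook.findIndex(
  (s) => s.action === "explain"
);
console.log(explainIndex === problemTriagePlaybook.length - 1); // → true
```

Encoding the flow as data rather than free-form prompting is what makes it repeatable across scenarios.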
An operator wants to understand the health of a selected scope, such as services, applications, frontends, or infrastructure, over a time window like the last 2 hours, 24 hours, 7 days, or 30 days.
Instead of opening multiple Dynatrace views manually, the operator opens the Problems view in R.E.A.D.Y. and filters by the desired scope and time window.
MCP makes the integration practical: the app calls stable, named tools instead of hardcoding every tenant-specific retrieval path into the UI. This lets the app retrieve entities, problems, and telemetry consistently, and helps operators answer everyday health questions about their selected scope.
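A small sketch of how a scope and time-window selection might translate into a parameterized evidence request. The `Scope` and `Window` types mirror the options mentioned above; `buildRequest` and its shape are hypothetical, not the app's real interface.

```typescript
// Scope and window options from the Problems view, as illustrative types.
type Scope = "services" | "applications" | "frontends" | "infrastructure";
type Window = "2h" | "24h" | "7d" | "30d";

// Convert a window label into milliseconds.
function windowToMs(w: Window): number {
  const hour = 3_600_000;
  switch (w) {
    case "2h": return 2 * hour;
    case "24h": return 24 * hour;
    case "7d": return 7 * 24 * hour;
    case "30d": return 30 * 24 * hour;
  }
}

// Build the request the backend would hand to an MCP tool or DQL query.
function buildRequest(scope: Scope, win: Window, now: number) {
  return { scope, from: now - windowToMs(win), to: now };
}

const req = buildRequest("services", "24h", 1_700_000_000_000);
console.log(req.to - req.from); // → 86400000
```

Typing the options up front keeps the UI and the retrieval layer in agreement about what a valid request looks like.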
A platform team or SRE team wants to assess whether a fleet is operationally ready.
This is different from simply checking whether telemetry exists. A service may have traces and metrics but still be missing important operational metadata, ownership, documentation, or governance signals.
R.E.A.D.Y. currently supports readiness report generation for a defined set of entity scopes.
The Ready Report workflow collects evidence for the selected scope and evaluates readiness using deterministic checks.
Example signals include ownership metadata, SLOs, alerting, dashboards, documentation, and runbooks.
The app then generates a structured report that clearly separates what the evidence confirms from what is missing or could not be evaluated. That distinction is important: R.E.A.D.Y. does not pretend to know more than the data supports.
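That separation can be modeled directly. The `ReadyReport` shape and `buildReport` helper below are assumptions for the sketch, not the app's real report format; the `null` outcome stands for a check that could not be evaluated from the available data.

```typescript
// Hypothetical report shape: confirmed vs. missing vs. not evaluable.
interface ReadyReport {
  scope: string;
  confirmed: string[]; // checks backed by evidence
  missing: string[];   // checks where evidence was absent
  unknown: string[];   // checks the data could not answer
}

// Sort raw check outcomes into the three buckets.
// null means "could not evaluate", which is reported, not guessed at.
function buildReport(
  scope: string,
  results: Record<string, boolean | null>
): ReadyReport {
  const report: ReadyReport = { scope, confirmed: [], missing: [], unknown: [] };
  for (const [check, outcome] of Object.entries(results)) {
    if (outcome === true) report.confirmed.push(check);
    else if (outcome === false) report.missing.push(check);
    else report.unknown.push(check);
  }
  return report;
}

const r = buildReport("checkout-service", {
  ownership: true,
  slo: false,
  runbook: null,
});
console.log(r.unknown); // → [ 'runbook' ]
```

Keeping "unknown" as its own bucket is what prevents the report from claiming more than the data supports.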
Readiness reviews are often manual and inconsistent. One team may consider a service ready because it has traffic. Another team may expect ownership, SLOs, dashboards, alerts, documentation, and runbooks.
R.E.A.D.Y. makes that conversation more explicit and repeatable by turning readiness into an evidence-based workflow.
This project is not tied to one specific tenant; the same pattern can be reused and extended in other Dynatrace environments.
This project demonstrates Dynatrace MCP beyond a simple chat interface: R.E.A.D.Y. uses MCP as part of a real operator workflow for evidence collection and context retrieval. The key value is that MCP becomes an operational building block that helps answer questions teams already care about.
UI -> Dynatrace App Function orchestration -> Dynatrace MCP / DQL / Platform APIs -> normalized evidence layer -> deterministic rules and scoring -> optional AI summarization -> operator-facing result
This design keeps the system explainable, testable, auditable, extensible, and grounded in Dynatrace data.
The creativity in R.E.A.D.Y. is not about replacing operators with AI.
The creative part is combining Dynatrace-native evidence collection, MCP-powered context retrieval, deterministic operational scoring, and optional AI explanation into one repeatable workflow.
In short, R.E.A.D.Y. helps teams turn fragmented observability evidence into decisions, using Dynatrace MCP not as a novelty but as a repeatable operational building block for observability-driven decision support.
Below are some images of the app: