Author: Randy Chambers — Dynatrace Practice Lead
Role: Federal & Enterprise Governance Architect
Organization: Discipline Consulting Group, LLC
Email: rchambers@discipline-consulting.com
Date: May 2026
Version: 1.0 — Competition Submission
Classification: Unclassified / For Public Distribution
Category: AI-Driven Governance & Compliance Automation
Platform: Dynatrace Model Context Protocol (MCP)
Edition: Challenge-Ready — Final Submission (Unified)
“Transforming the MCP Server from an Observability API into a Governance Control Plane”
Federal agencies and large enterprises have embraced AI-driven observability as the operational backbone of modern digital infrastructure. Dynatrace’s Davis AI delivers deterministic root-cause analysis, anomaly detection, and predictive insights across hybrid and cloud-native environments. Yet a critical gap persists: the governance of AI-initiated operations remains fragmented, manual, and fundamentally non-auditable. When Davis AI detects an anomaly, creates a ServiceNow incident, or triggers an automated remediation workflow, the compliance context — which NIST control was tested, which authorization boundary was assessed, which policy was enforced — is lost in transit. Auditors reconstruct this lineage manually, days or weeks after the fact, using spreadsheets and screenshots. This is incompatible with the federal mandate for continuous Authority to Operate (cATO) and the enterprise demand for real-time governance posture.
This submission introduces SDF Governance Guard (Software-Defined Governance Framework), a governance-as-code architecture purpose-built on top of the Dynatrace MCP server. SDF Governance Guard transforms the MCP server from an observability API into a governance control plane by injecting compliance metadata into every tool invocation and propagating that metadata through all downstream systems — from Davis AI detection through ServiceNow remediation and back.
The framework comprises five integrated components:
1. SDF Governance Guard — the core governance-as-code policy engine
2. LOCATE Protocol — a six-phase operational cadence (Log, Observe, Correlate, Act, Trace, Enforce) that standardizes governance processing for every signal
3. Davis AI → ServiceNow Inheritance Model — governance metadata propagation across six Dynatrace-ServiceNow integration points
4. NIST IR 8011 Crosswalk — a mapping of all 14 Dynatrace MCP server tools to NIST IR 8011 security capabilities, sub-capabilities, and defect checks
5. MCP Orchestration Layer — governance-aware tool invocation patterns for MCP clients
The quantifiable impact is significant: reduction in mean-time-to-compliance-evidence from days to minutes, elimination of manual control-assessment artifacts, and continuous ATO readiness backed by an immutable Grail-native compliance ledger. SDF Governance Guard does not require custom Dynatrace plugins — it is built entirely on existing MCP server tools, DQL, Grail, and Workflows capabilities. This is governance that ships today, on infrastructure that already exists.
This submission is addressed with genuine respect to Wolfgang Beer, Wolfgang Heider, Gabriele HB, and Andreas Grabner — the product leadership team whose vision has made the Dynatrace MCP server one of the most consequential integration capabilities in modern observability.
We recognize that the MCP Challenge invited the community to demonstrate creative and impactful uses of the Dynatrace MCP server. What we present here goes beyond a single use case. SDF Governance Guard is a complete governance framework that repositions the MCP server as a compliance control plane — extending its value from operational observability into the governance, risk, and compliance domain that represents a multi-billion-dollar addressable market.
This submission was purpose-built to align with Dynatrace’s product trajectory. The framework requires zero modifications to the MCP server. Every capability described in this document uses the existing MCP tools exactly as your team designed them: execute_dql, list_problems, get_entity_details, get_ownership, list_vulnerabilities, create_workflow_for_notification, and the rest of the fourteen-tool portfolio. What SDF Governance Guard adds is a governance orchestration layer that makes every MCP tool call a testable, auditable, NIST-aligned compliance event.
We built this framework with three convictions:
First, that the Dynatrace MCP server is more capable than even its current user base realizes — and that governance is the use case that proves it.
Second, that federal and enterprise customers are actively seeking a platform that unifies observability and compliance, and Dynatrace is uniquely positioned to be that platform.
Third, that the NIST IR 8011 crosswalk presented in this submission represents a first-of-its-kind mapping that no competitor — not Splunk, not Datadog, not New Relic — has attempted or can replicate without an equivalent MCP capability.
We invite the judging panel to evaluate this submission not only as a challenge entry but as a strategic blueprint. The architecture, the LOCATE protocol, the ServiceNow inheritance model, and the NIST crosswalk are all production-ready concepts that can be demonstrated, piloted, and deployed using today’s Dynatrace platform.
Thank you for building the foundation that made this framework possible.
This submission includes an interactive web-based companion visualization that allows the judging panel to explore the regulatory convergence analysis in depth. Access it at:
https://copilot.microsoft.com/shares/artifacts/U6e8RBukw1bkrhW3NVPV9?expand=true
The interactive companion is designed to be self-guided. Upon opening, a Reviewer Navigation Guide will appear with a numbered walkthrough of all sections. The guide can be dismissed and re-opened via the “?” button in the top-right corner. The visualization includes seven sections:
1. Mandate Evolution Timeline — Trace the lineage from EO 14110 through EO 14179 and M-25-21. REVOKED and ACTIVE status badges indicate which mandates are current and which have been superseded.
2. Three-Force Convergence — Explore how NIST IR 8011, Federal AI Mandates (EO 14179/M-25-21), and the Continuous ATO movement converge at the governance gap. Hover or click each force for detailed analysis.
3. Urgency Indicators — Five indicator cards showing why the governance gap has moved from a strategic concern to an operational emergency, including the Zero-Dollar Compliance indicator addressing the unfunded mandate dimension.
4. The Unfunded Mandate Reality — Constitutional analysis demonstrating that all three forces are unfunded mandates. The 3/0/1 Compliance Paradox display (3 Simultaneous Mandates, $0 Dedicated Appropriations, 1 Viable Solution) captures the fiscal dimension.
5. Platform Footprint — ServiceNow’s 75% federal penetration alongside Dynatrace’s expanding presence across civilian, IC, and DoD verticals. The convergence zone shows how SDF Governance Guard bridges both platforms at zero incremental cost.
6. What Changed vs. What Persists — A side-by-side comparison of the EO 14110 → EO 14179 and M-24-10 → M-25-21 transitions. Key insight for reviewers: the governance requirements survived the policy transition intact.
7. Strategic Significance — Why SDF Governance Guard is more essential than ever, supported by authoritative compliance data from GAO, OMB, Stanford RegLab/ACUS, and DoD CIO.
We recommend reviewing the interactive companion alongside Section 4 (Regulatory Landscape) of this document. The visualization renders the same analysis in an explorable format that may be more effective for presenting the convergence thesis to broader stakeholders. All data points in the visualization are sourced from the authoritative references cited in this document.
This submission comprises three synchronized deliverables, each serving a distinct purpose:
| Deliverable | Format | Purpose | Recommended Use |
| --- | --- | --- | --- |
| Unified Submission Document | .docx (this document) | Complete framework specification with all evidence layers, architecture details, and authoritative citations | Primary evaluation artifact — read end-to-end for full technical and strategic depth |
| Judge Presentation Deck | .pptx (14 slides) | Visual summary pulling the strongest evidence from the document for live presentation | Use for panel discussion or quick-reference overview of key arguments |
| Interactive Convergence Visualization | Web application (link above) | Explorable regulatory landscape analysis with hover/click interactivity | Best for understanding the three-force convergence, unfunded mandate reality, and platform footprint — share link with additional reviewers as needed |
All three deliverables are consistent in data and conclusions. The document is the authoritative source; the presentation and visualization are companion formats optimized for different review contexts.
Federal agencies operating under the NIST Risk Management Framework (RMF) must demonstrate continuous compliance with SP 800-53 controls. In practice, control evidence is generated manually — typically in the weeks preceding an audit or annual assessment. Security teams export Dynatrace dashboards to PDF, screenshot ServiceNow ticket histories, and compile spreadsheets correlating incidents to controls. This retroactive evidence generation is labor-intensive, error-prone, and fundamentally incompatible with the continuous monitoring mandate of NIST SP 800-137 and OMB M-22-09 (Zero Trust Architecture).
When Davis AI identifies a root cause and triggers an automated response — whether creating a ServiceNow incident, updating a CMDB configuration item, or initiating a remediation workflow — the governance context is severed at the integration boundary. The ServiceNow incident carries operational data (affected service, error rate, impact duration) but no compliance data (which SP 800-53 control family this relates to, which authorization boundary was affected, which IR 8011 defect check this observation would satisfy). The AI action is operationally sound but invisible to governance.
NIST Interagency Report 8011 defines a rigorous methodology for automated security control assessment: desired state specification, actual state collection, comparison, defect identification, and root cause analysis. The framework identifies specific security capabilities (Hardware Asset Management, Software Asset Management, Vulnerability Management, Configuration Settings Management) and maps them to testable defect checks. Yet there is no tooling that connects IR 8011’s assessment methodology to the actual state data that observability platforms like Dynatrace collect continuously. The assessment framework exists on paper; the data exists in Grail. The bridge between them does not exist — until now.
The Department of Defense and civilian federal agencies are moving toward continuous Authority to Operate (cATO), replacing the traditional three-year reauthorization cycle with ongoing, evidence-based risk acceptance. cATO requires a continuous stream of machine-readable compliance evidence — not periodic audit artifacts. Observability platforms generate this evidence as a byproduct of normal operations, but without a governance framework to classify, tag, and route that evidence, it remains raw telemetry rather than compliance data.
The Model Context Protocol (MCP) standardizes how AI agents interact with enterprise systems through discoverable, callable tools. The Dynatrace MCP server exposes 14 tools spanning data query, problem management, security analysis, entity resolution, and operational communication. MCP is the ideal governance integration layer — every tool invocation is a discrete, auditable event — but the protocol itself has no governance framework purpose-built for it. No one has defined what it means for an MCP tool call to be governance-aware. This submission fills that void.
The governance gap identified in Section 3 is not a static problem — it is actively widening. Three converging forces in the federal regulatory landscape are simultaneously demanding automated governance capabilities while accelerating the pace of AI adoption. Understanding this convergence is essential context for evaluating SDF Governance Guard’s strategic positioning and urgency.
The federal AI governance landscape has undergone significant transformation since late 2023. The following timeline traces the lineage of key mandates, showing which remain active and which have been superseded — a critical distinction for any submission claiming federal relevance.
| Date | Mandate | Description | Status |
| --- | --- | --- | --- |
| Oct 30, 2023 | EO 14110 (Biden) | Safe, Secure, and Trustworthy Development and Use of AI | REVOKED |
| Mar 28, 2024 | OMB M-24-10 | Advancing Governance, Innovation, and Risk Management for Agency Use of AI | RESCINDED |
| Jan 20, 2025 | EO 14110 Revocation | Revoked by incoming administration on Inauguration Day | — |
| Jan 23, 2025 | EO 14179 (Trump) | Removing Barriers to American Leadership in Artificial Intelligence | ACTIVE |
| Feb 20, 2025 | NIST IR 8011 Vol. 1 Rev. 1 | Automation Support for Security Control Assessments — Major revision, Initial Public Draft | ACTIVE |
| Apr 3, 2025 | OMB M-25-21 | Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (replaces M-24-10) | ACTIVE |
The critical insight from this timeline is one of continuity through change. While the policy posture shifted from risk-restrictive (EO 14110) to innovation-accelerative (EO 14179), the core governance infrastructure requirements — Chief AI Officers, Governance Boards, high-impact AI risk management, public inventories — survived the transition intact. Simultaneously, NIST IR 8011’s February 2025 revision signals that the technical methodology for automated control assessment is being actively modernized, independent of any executive order.
Three distinct regulatory and operational forces are converging to create unprecedented governance urgency. Each force independently demands automated compliance capabilities; together, they make frameworks like SDF Governance Guard not merely valuable but essential.
Force 1 — NIST IR 8011: Technical Methodology for Automated Assessment
NIST Interagency Report 8011 defines the rigorous methodology for automated security control assessment: desired state specification, actual state collection, comparison, defect identification, and root cause analysis. The February 2025 revision (Volume 1, Revision 1) represents a major overhaul of the foundational volume, modernizing terminology and alignment with revised SP 800-53, SP 800-53A, and SP 800-53B.
Critically, NIST’s public comment period explicitly invited GRC tool developers to implement the IR 8011 methodology — signaling that the framework is transitioning from theoretical specification to operational tooling expectation. The four published security capabilities map directly to the Dynatrace MCP server’s tool portfolio, as demonstrated in Section 8 of this submission.
| IR 8011 Security Capability | Abbreviation | Focus Area | MCP Tool Coverage |
| --- | --- | --- | --- |
| Hardware Asset Management | HWAM | Unmanaged/unauthorized devices | get_entity_details, execute_dql |
| Software Asset Management | SWAM | Unmanaged/unauthorized software | get_entity_details, execute_dql |
| Vulnerability Management | VULN | Software vulnerabilities (CVE/CWE) | list_vulnerabilities, get_vulnerability_details |
| Configuration Settings Management | CSM | Incorrect configuration settings | get_kubernetes_events, execute_dql, verify_dql |
Force 2 — Federal AI Mandates: EO 14179 + OMB M-25-21
Executive Order 14179 (January 23, 2025) and OMB Memorandum M-25-21 (April 3, 2025) replaced their predecessors with a pro-innovation posture while retaining core governance accountability structures. M-25-21 consolidated M-24-10’s multi-tiered risk classification into a single “high-impact AI” category — simplifying classification but maintaining the requirement for rigorous risk management of consequential AI systems.
The following governance requirements persist under the current mandate regime:
● Chief AI Officers (CAIO) — every federal agency must designate one with authority over AI governance
● AI Governance Boards — senior leadership bodies coordinating AI policy, risk acceptance, and deployment decisions
● High-impact AI risk management — minimum practices for AI whose output serves as a principal basis for decisions affecting rights or safety
● Public AI use case inventories — transparency and accountability requirements for all agency AI deployments
● Civil rights, privacy, and safety safeguards — retained under the new regime with implementation flexibility
The net effect is a policy environment that expects agencies to move faster on AI adoption while still demonstrating governance compliance — widening the gap between operational velocity and governance capability.
Force 3 — Continuous ATO: The Operational Imperative
The Department of Defense and civilian federal agencies are accelerating the transition from traditional three-year Authority to Operate (ATO) reauthorization cycles to continuous ATO (cATO). The DoD’s Platform One model integrates continuous security scanning, automated testing, and Zero Trust architecture principles to maintain compliance at operational speed. DHS and other civilian agencies are implementing similar models.
cATO requires a continuous stream of machine-readable compliance evidence — not periodic audit artifacts compiled in the weeks before an assessment. Observability platforms like Dynatrace generate this evidence as a byproduct of normal operations: every problem detection, every vulnerability scan, every entity relationship mapping is a potential control assessment event. But without a governance framework to classify, tag, and route that evidence, it remains raw telemetry rather than compliance data.
These three forces do not operate in isolation — they converge to create a compounding urgency:
| Convergence Point | Forces Intersecting | Resulting Requirement | Current State |
| --- | --- | --- | --- |
| Automated control evidence | IR 8011 + cATO | Machine-readable, continuous assessment artifacts | Manual, retroactive, spreadsheet-based |
| AI governance accountability | EO 14179/M-25-21 + cATO | Real-time demonstration of AI risk management compliance | Fragmented across disconnected systems |
| Observability-to-compliance bridge | IR 8011 + AI Mandates | Tooling that translates operational telemetry into auditable control assessments | No tooling exists — until SDF Governance Guard |
| Governance metadata inheritance | All three forces | AI-initiated actions must carry compliance context through all downstream systems | Governance context severed at every integration boundary |
Four specific developments signal that the governance gap has moved from a strategic concern to an operational emergency:
M-25-21 Consolidation Effect. The consolidation from multi-tiered risk categories to a single “high-impact AI” classification simplifies the governance decision tree — but it also compresses enforcement timelines. Agencies can no longer defer governance decisions through classification ambiguity. Every consequential AI system must be governed, and the simplified framework makes non-compliance more visible.
NIST IR 8011 Modernization Signal. NIST’s February 2025 revision is not a routine update — it is an active invitation to the GRC vendor community to build automated assessment tooling. The public comment period specifically asked whether tool developers would incorporate the IR 8011 methodology into commercial solutions. SDF Governance Guard’s NIST IR 8011 crosswalk (Section 8) is a direct answer to this invitation.
cATO Acceleration Across DoD and Civilian Agencies. The continuous ATO model is no longer experimental. DHS, DoD, and multiple civilian agencies have moved beyond pilot programs into operational cATO frameworks. Static three-year ATOs are being replaced by continuous evidence streams — and agencies that cannot produce those streams face authorization delays and operational risk.
The Persistent Metadata Gap. The most critical urgency indicator is what did not change across the mandate transition. The new policy regime removed bureaucratic barriers and accelerated AI adoption timelines, but it did not solve the fundamental technical problem: AI-initiated actions still lack governance metadata inheritance. When Davis AI detects an anomaly and creates a ServiceNow incident, the compliance context is still lost in transit. The gap persists because no framework has addressed it — until now.
For clarity and precision — particularly for evaluators assessing this submission’s federal relevance — the following comparison distinguishes between policy elements that changed in the EO 14110 → EO 14179 and M-24-10 → M-25-21 transitions versus governance requirements that persist regardless of policy posture.
| What Changed (Policy Posture) | What Persists (Governance Requirements) |
| --- | --- |
| Multi-tiered risk categories → single “high-impact AI” category | Chief AI Officers (CAIO) still required at every agency |
| Prescriptive compliance requirements → pro-innovation with safeguards | AI Governance Boards still mandatory for coordination and risk acceptance |
| Red-team testing and sharing mandates → removed | High-impact AI risk management practices still required |
| Focus on restricting AI risks → focus on accelerating AI adoption | Public AI use case inventories still required for transparency |
| M-24-10’s detailed multi-tier risk framework → M-25-21’s streamlined governance | Civil rights, privacy, and safety safeguards retained with implementation flexibility |
| Mandatory watermarking for AI-generated content → removed | NIST frameworks (SP 800-53, IR 8011, RMF) entirely independent of any EO — unchanged |
| AI safety institute funding mandates → restructured | Continuous monitoring mandates (SP 800-137) still in force |
| Restrictive export controls on AI models → eased | cATO expectations continue accelerating across DoD and civilian agencies |
All three converging forces share a critical fiscal characteristic: they are unfunded mandates. Every governance requirement identified in this submission — CAIO designation, AI Governance Boards, high-impact AI risk management, NIST IR 8011 automated assessments, continuous ATO evidence — must be implemented using existing agency budgets with no dedicated appropriation.
This is not an oversight — it is structural. Executive Orders cannot appropriate funds; under Article I, Section 9 of the Constitution, only Congress holds the power of the purse. OMB memoranda are binding policy directives but carry no funding mechanism. NIST guidance becomes effectively mandatory through FISMA but has no dedicated appropriation. DoD cATO implementation guides provide technical direction but no funding vehicle.
| Force | Legal Authority | Funding Status | Implementation Reality |
| --- | --- | --- | --- |
| EO 14179 + OMB M-25-21 | Executive Order + OMB Memorandum | Unfunded — no appropriation attached | Agencies must designate CAIOs, stand up Governance Boards, implement high-impact AI risk management, and maintain public inventories using existing staff and existing budgets |
| NIST IR 8011 | NIST Interagency Report (mandatory via FISMA) | Unfunded — no dedicated IR 8011 appropriation | Automated control assessment tooling must be built or acquired from existing cybersecurity budgets; OMB acknowledges agencies have "finite resources" |
| Continuous ATO (cATO) | DoD CIO policy directive; civilian equivalents via FedRAMP ConMon | Unfunded — no standalone appropriation | Defense and civilian agencies must achieve continuous authorization capabilities within existing program budgets |
The fiscal constraint intensifies the urgency. OMB's own FISMA guidance acknowledges that "federal agencies have finite resources to dedicate to cybersecurity" and must "focus those resources" on the highest-impact activities. Agencies face a compliance paradox: mandated to implement AI governance, automated control assessment, and continuous ATO — but funded to do none of them.
Governance as Code
Every governance policy is expressed as a machine-readable artifact — a JSON or YAML document that specifies which NIST controls apply to which entity types, which authorization boundaries they belong to, and what thresholds trigger compliance events. When an MCP tool like list_vulnerabilities returns CVE data, the governance policy artifact determines in real-time whether that vulnerability affects a FedRAMP High boundary, which SP 800-53 control family applies (RA-5 in this case), and what IR 8011 defect check should be generated. No human interpretation. No manual classification.
Inheritance by Default
Every MCP tool invocation carries a Governance Metadata Envelope (GME) — a structured data object that propagates through all downstream actions. When Davis AI creates a problem, the GME tags it with control family, authorization boundary, and IR 8011 capability. When that problem generates a ServiceNow incident, the GME travels with it. When the incident triggers a remediation workflow, the GME persists. Governance metadata is never optional — it is structurally embedded in every transaction.
Continuous Assessment
Every observation from Davis AI, every DQL query result, every vulnerability scan is treated as a potential control assessment event. SDF Governance Guard does not wait for scheduled assessments — it converts every operational signal into a testable control observation using the IR 8011 methodology. The assessment is continuous because the data generation is continuous.
Immutable Audit Trail
All governance events are written to Grail as structured records in a dedicated audit.governance bucket. Each record includes a cryptographic hash of the previous record, creating a tamper-evident chain. Auditors can query the governance ledger using DQL with the same tools they use for operational analysis. The audit trail is not a separate system — it is native to the platform.
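A minimal sketch of the tamper-evident chain, assuming each record's hash covers the previous record's hash plus its own canonicalized content. The append and verify helpers are illustrative, not Grail APIs; in the framework the records would live in the audit.governance bucket and be verified via DQL:

```python
import hashlib
import json

GENESIS = "0" * 64  # assumed sentinel hash for the first record

def chain_hash(record: dict, prev_hash: str) -> str:
    """SHA-256 over the previous hash plus the sorted-key JSON of the record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    """Append a governance event, linking it to the previous entry."""
    prev = ledger[-1]["chain_hash"] if ledger else GENESIS
    ledger.append({**record, "chain_hash": chain_hash(record, prev)})

def verify(ledger: list) -> bool:
    """Recompute every link; any edit to an earlier record breaks the chain."""
    prev = GENESIS
    for entry in ledger:
        record = {k: v for k, v in entry.items() if k != "chain_hash"}
        if entry["chain_hash"] != chain_hash(record, prev):
            return False
        prev = entry["chain_hash"]
    return True

ledger: list = []
append(ledger, {"control_family": "RA-5", "locate_phase": "OBSERVE"})
append(ledger, {"control_family": "RA-5", "locate_phase": "ENFORCE"})
```

Tampering with any stored record changes its recomputed hash, so verify returns False for every entry from that point forward — the property that makes the ledger audit-grade rather than merely append-only.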
| Layer | Name | Components | Function |
| --- | --- | --- | --- |
| 5 | Action Plane | ServiceNow ITSM, Slack, Workflows, External SIEM/SOAR | Executes governance-aware remediation; receives inherited metadata; closes the compliance loop |
| 4 | Orchestration Plane | Dynatrace MCP Server — 14 tools exposed as governance-aware endpoints | Provides the standardized tool interface; carries governance metadata envelope on every invocation |
| 3 | Governance Plane | SDF Governance Guard — policy engine, control mapper, inheritance resolver | Injects compliance context; maps observations to NIST controls; resolves metadata inheritance chains |
| 2 | Intelligence Plane | Davis AI — root cause analysis, anomaly detection, predictive analysis | Generates causal, deterministic observations from raw telemetry; provides the “why” behind signals |
| 1 | Data Plane | Dynatrace OneAgent, Grail data lakehouse, Entity Model, Smartscape topology | Ingests all telemetry (logs, metrics, traces, events, business data); maintains entity relationships and topology |
Layers 1, 2, 4, and 5 already exist in production Dynatrace environments. SDF Governance Guard introduces Layer 3 — the Governance Plane — as a logical layer that requires no new infrastructure, no custom plugins, and no modifications to the MCP server. It operates entirely through governance policy artifacts, DQL queries, and MCP tool orchestration patterns.
The Governance Metadata Envelope (GME) is the atomic unit of compliance data in the SDF framework.
| Field | Type | Description | Example Value |
| --- | --- | --- | --- |
| control_family | String | SP 800-53 control family identifier | RA-5 |
| ir8011_capability | String | NIST IR 8011 security capability | VULN |
| ir8011_subcapability | String | IR 8011 sub-capability / defect check | VULN-SC-01 |
| auth_boundary_id | String | RMF authorization boundary identifier | AB-PROD-EAST-001 |
| assessment_boundary_id | String | Assessment scope boundary | ASSESS-FY26-Q2 |
| policy_version | String | Version of the governance policy artifact applied | v2.4.1 |
| locate_phase | Enum | Current LOCATE protocol phase | OBSERVE |
| timestamp | ISO 8601 | UTC timestamp of envelope creation | 2026-05-02T14:10:00Z |
| chain_hash | SHA-256 | Hash of previous envelope in the governance chain | a3f2b9c1... |
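A small validation sketch against the schema above. The validate_gme helper and the LOCATE_PHASES set are illustrative assumptions; the field names and example values mirror the table:

```python
# Assumed enumeration of LOCATE phases (per the protocol in this document).
LOCATE_PHASES = {"LOG", "OBSERVE", "CORRELATE", "ACT", "TRACE", "ENFORCE"}

# Required GME fields, taken directly from the schema table.
REQUIRED_FIELDS = {
    "control_family", "ir8011_capability", "ir8011_subcapability",
    "auth_boundary_id", "assessment_boundary_id", "policy_version",
    "locate_phase", "timestamp", "chain_hash",
}

def validate_gme(gme: dict) -> list:
    """Return a list of schema violations; an empty list means valid."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - gme.keys())]
    if gme.get("locate_phase") not in LOCATE_PHASES:
        errors.append(f"invalid locate_phase: {gme.get('locate_phase')}")
    return errors

gme = {
    "control_family": "RA-5",
    "ir8011_capability": "VULN",
    "ir8011_subcapability": "VULN-SC-01",
    "auth_boundary_id": "AB-PROD-EAST-001",
    "assessment_boundary_id": "ASSESS-FY26-Q2",
    "policy_version": "v2.4.1",
    "locate_phase": "OBSERVE",
    "timestamp": "2026-05-02T14:10:00Z",
    "chain_hash": "a3f2b9c1",
}
```

Validating the envelope at every hand-off point is what makes "inheritance by default" enforceable: a downstream system can reject any transaction whose envelope is incomplete.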
The LOCATE Protocol defines the six-phase operational cadence that every governance-relevant signal follows through SDF Governance Guard. Each phase maps to specific MCP tool invocations, specific Governance Metadata Envelope fields, and specific NIST control assessment activities.
Every governance-relevant signal begins with capture. The Log phase ensures that raw telemetry — logs, metrics, traces, events — is captured with baseline governance context: authorization boundary assignment, data sensitivity classification, and entity ownership.
MCP Tool: execute_dql
DQL Example:
fetch logs
| filter dt.entity.host IN ("HOST-ABC123", "HOST-DEF456")
| fieldsAdd auth_boundary = "AB-PROD-EAST-001", data_sensitivity = "FIPS-199-MODERATE"

Davis AI contextualizes the raw signal — identifying anomalies, correlating patterns, and generating problem entities with deterministic root cause analysis. SDF Governance Guard enriches Davis AI observations with compliance context: which control families are relevant, which IR 8011 capabilities apply, and what defect classification should be assigned.
MCP Tools: list_problems, list_vulnerabilities
Example: list_vulnerabilities returns CVE-2026-1234 affecting svc-payment-api. SDF enriches: control_family = RA-5, ir8011_capability = VULN, ir8011_subcapability = VULN-SC-01.
The Correlate phase connects the observed signal to the organization’s governance posture. SDF Governance Guard queries entity relationships, ownership chains, and historical governance data to classify the signal as one of three defect types:
● New defect — a previously unobserved deviation from desired state
● Recurring defect — a deviation that matches a previously identified and (presumably) remediated defect
● Control drift — a gradual divergence from desired state that has not yet triggered a discrete alert
MCP Tools: get_entity_details, get_ownership, execute_dql
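The three-way classification can be sketched as follows. The classify_defect helper and its input shapes are assumptions; in practice the history lookup would be a DQL query against the governance ledger via execute_dql:

```python
def classify_defect(signal: dict, history: list) -> str:
    """Classify a governance signal per the Correlate phase.

    signal: observed deviation, e.g. {"entity": ..., "defect_check": ...,
            "discrete_alert": bool} (shape is an assumption).
    history: prior governance records for the organization.
    """
    # No discrete alert yet: gradual divergence from desired state.
    if not signal.get("discrete_alert", True):
        return "CONTROL_DRIFT"
    # Any prior record for the same entity and defect check makes it recurring.
    prior = [h for h in history
             if h["entity"] == signal["entity"]
             and h["defect_check"] == signal["defect_check"]]
    return "RECURRING" if prior else "NEW"

history = [{"entity": "svc-payment-api", "defect_check": "VULN-SC-01",
            "status": "REMEDIATED"}]
kind = classify_defect(
    {"entity": "svc-payment-api", "defect_check": "VULN-SC-01"}, history)
```

A recurring defect matching a previously remediated record is the high-value case for auditors: it indicates the earlier remediation did not hold, which is itself an IR 8011 root-cause finding.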
The Act phase triggers remediation — but unlike traditional operational responses, every action carries the full Governance Metadata Envelope. When create_workflow_for_notification creates a remediation workflow, the workflow parameters include the GME. When that workflow creates a ServiceNow incident, the GME propagates via the Inheritance Model (Section 7).
MCP Tools: create_workflow_for_notification, send_slack_message
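A hedged sketch of how the envelope might ride along as workflow parameters. The build_workflow_params helper is hypothetical; only the tool name create_workflow_for_notification and the GME fields come from this document:

```python
def build_workflow_params(problem_id: str, gme: dict) -> dict:
    """Embed the full governance envelope in the remediation workflow's
    parameters, so every downstream action (ServiceNow incident, Slack
    notification) inherits the same compliance context."""
    return {
        "tool": "create_workflow_for_notification",  # MCP tool to invoke
        "problem_id": problem_id,
        # Advance the LOCATE phase as the envelope crosses into Act.
        "governance": {**gme, "locate_phase": "ACT"},
    }

params = build_workflow_params(
    "P-42", {"control_family": "RA-5", "locate_phase": "CORRELATE"})
```

The key design point is that the envelope is a parameter of the workflow itself, not a side-channel annotation — removing it would break the invocation, which is what "never optional" means structurally.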
The Trace phase ensures that the complete causal chain — from raw signal through enrichment, correlation, and action — is preserved as an immutable record. Each GME includes a chain_hash field that links it to the previous envelope in the governance chain, creating a tamper-evident lineage.
MCP Tools: get_logs_for_entity, execute_dql
The Enforce phase validates that the remediation action achieved the desired state. SDF Governance Guard uses verify_dql to validate assessment queries and execute_dql to run post-remediation checks. The enforcement result — CONTROL_SATISFIED or POA&M_REQUIRED — is recorded as the final entry in the governance chain.
MCP Tools: verify_dql, execute_dql
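The enforcement decision reduces to comparing actual state against desired state. The enforce helper below is an illustrative assumption — the open-defect count would come from a post-remediation execute_dql check, and the two verdict values are the outcomes named above:

```python
def enforce(open_defects_after: int, gme: dict) -> dict:
    """Emit the final governance-chain entry for a remediation.

    Desired state is zero open defects for the assessed check; anything
    else requires a Plan of Action & Milestones (POA&M) entry.
    """
    verdict = "CONTROL_SATISFIED" if open_defects_after == 0 else "POA&M_REQUIRED"
    return {**gme, "locate_phase": "ENFORCE", "result": verdict}

final_entry = enforce(0, {"control_family": "RA-5",
                          "ir8011_subcapability": "VULN-SC-01"})
```

Recording POA&M_REQUIRED rather than silently retrying keeps the ledger honest: an unremediated defect becomes a tracked compliance artifact instead of a dropped signal.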
The October 2025 strategic collaboration between Dynatrace and ServiceNow created six distinct integration points between the platforms. SDF Governance Guard leverages all six as governance metadata propagation channels.
| Integration | Data Flow | Inherited Governance Metadata | NIST Control Families Supported |
| --- | --- | --- | --- |
| 1. Incident Integration App | Davis AI problem → ServiceNow incident | Control family tag, IR 8011 defect-check ID, assessment boundary ID, severity-to-impact mapping per FIPS 199 | IR (Incident Response), SI (System and Information Integrity), CA (Assessment, Authorization, and Monitoring) |
| 2. Dynatrace Workflows for ServiceNow | Custom workflow triggers → ServiceNow ticket creation | Full Governance Metadata Envelope as structured workflow parameters; policy version, LOCATE phase, chain hash | IR, CP (Contingency Planning), SA (System and Services Acquisition) |
| 3. Service Graph Connector | Dynatrace entity model → ServiceNow CMDB | Authorization boundary component mapping; entity-to-system classification; FIPS 199 categorization per entity type | CM (Configuration Management), PM (Program Management), SA |
| 4. Event Management Connector | Problem events → ServiceNow alerts/incidents | Control family auto-classification; IR 8011 security capability tag; alert-to-defect-check correlation | SI, IR, AU (Audit and Accountability) |
| 5. Service Observability Connector | Dynatrace data displayed in ServiceNow | Governance overlay: compliance status badges alongside operational metrics; control assessment results per entity | CA, PM, RA (Risk Assessment) |
| 6. Davis AI Analysis Agent Connector | ServiceNow AI agents → Dynatrace Davis AI | Governance Guard ensures Davis responses include compliance context (not just operational data); authorization boundary scoping for agent queries | AC (Access Control), AU, RA |
Incident Integration App
When Davis AI identifies a problem, the Incident Integration App creates a corresponding ServiceNow incident. SDF Governance Guard intercepts this creation via Dynatrace Workflows and enriches the ServiceNow incident with governance metadata. The severity-to-impact mapping follows FIPS 199: a “high” severity Davis AI problem affecting a FIPS 199 “High” integrity boundary is automatically classified as a “Critical” ServiceNow incident with control families SI-4 and IR-4 pre-tagged.
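The severity-to-impact mapping described above can be sketched as a simple lookup. The matrix below is illustrative only, except for the one combination the text specifies (Davis AI "high" against a FIPS 199 "High" boundary yields a "Critical" ServiceNow incident); the full matrix is defined by the governance policy artifact.

```python
# Hypothetical severity-to-impact matrix per FIPS 199. Only the
# ("HIGH", "HIGH") -> "Critical" entry comes from the text; the rest
# are illustrative placeholders for a policy-defined mapping.
SEVERITY_IMPACT_MATRIX = {
    ("HIGH", "HIGH"): "Critical",
    ("HIGH", "MODERATE"): "High",
    ("MEDIUM", "HIGH"): "High",
    ("MEDIUM", "MODERATE"): "Moderate",
    ("LOW", "HIGH"): "Moderate",
}

def classify_incident(davis_severity: str, fips199_impact: str) -> str:
    """Map a Davis AI problem severity and a boundary's FIPS 199 impact
    level to a ServiceNow incident priority."""
    key = (davis_severity.upper(), fips199_impact.upper())
    return SEVERITY_IMPACT_MATRIX.get(key, "Low")

# The example from the text: high severity within a FIPS 199 High boundary.
print(classify_incident("high", "High"))  # Critical
```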
Service Graph Connector
The Service Graph Connector synchronizes the Dynatrace entity model with the ServiceNow CMDB. SDF Governance Guard extends this synchronization by mapping each Dynatrace entity to its RMF authorization boundary and FIPS 199 categorization. The result: every CMDB configuration item carries its governance context, enabling ServiceNow GRC modules to assess compliance at the entity level without manual boundary mapping.
Davis AI Analysis Agent Connector
When ServiceNow AI agents query Dynatrace Davis AI for operational insights, SDF Governance Guard ensures that the response includes compliance context — not just “this service has a 5% error rate” but “this service, within authorization boundary AB-PROD-EAST-001, classified as FIPS 199 High for availability, has a 5% error rate that constitutes a potential defect under IR 8011 capability VULN, sub-capability VULN-SC-01.”
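A sketch of that enrichment step: wrapping a bare operational answer with the governance context before it is returned to the querying agent. The wrapper function and its context fields are hypothetical; the field values are taken from the worked example in the text.

```python
def with_governance_context(metric: str, entity_ctx: dict) -> str:
    """Wrap an operational metric with its compliance context
    (hypothetical helper; field names are illustrative)."""
    return (
        f"{metric} for this service, within authorization boundary "
        f"{entity_ctx['auth_boundary']}, classified as FIPS 199 "
        f"{entity_ctx['fips199_availability']} for availability; a potential "
        f"defect under IR 8011 capability {entity_ctx['capability']}, "
        f"sub-capability {entity_ctx['sub_capability']}."
    )

ctx = {
    "auth_boundary": "AB-PROD-EAST-001",
    "fips199_availability": "High",
    "capability": "VULN",
    "sub_capability": "VULN-SC-01",
}
print(with_governance_context("5% error rate", ctx))
```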
NIST IR 8011 defines a five-step methodology for automated security control assessment. This section maps all 14 Dynatrace MCP server tools to IR 8011 security capabilities, sub-capabilities, and defect checks — creating the first comprehensive crosswalk between an observability platform’s API and the NIST automated assessment framework.
| Abbreviation | Security Capability | IR 8011 Volume | Focus |
| --- | --- | --- | --- |
| HWAM | Hardware Asset Management | Vol. 2 | Managing risk from unmanaged/unauthorized devices |
| SWAM | Software Asset Management | Vol. 3 | Managing risk from unmanaged/unauthorized software |
| VULN | Vulnerability Management | Vol. 4 | Managing risk from software vulnerabilities (CVE/CWE) |
| CSM | Configuration Settings Management | Vol. 5 (Draft) | Managing risk from incorrect configuration settings |
SDF Governance Guard extends the crosswalk beyond the four published IR 8011 volumes to include security capabilities implied by IR 8011’s methodology but not yet covered by dedicated volumes — including incident response, audit and accountability, and access control. These extensions are clearly marked and follow IR 8011’s own sub-capability and defect-check naming conventions.
| MCP Tool | IR 8011 Security Capability | Sub-Capability | SP 800-53 Control Family | Defect Check Type | Assessment Method |
| --- | --- | --- | --- | --- | --- |
| execute_dql | HWAM, SWAM, CSM, VULN | Multiple — determined by query content | CM-8, RA-5, SI-2, AU-6 | Actual-state collection; desired/actual comparison | DQL query returns actual state; compared against desired state spec in governance policy artifact |
| verify_dql | CSM | CSM-SC-01: Configuration Validation | CM-6 | Statement validation; syntax and semantic verification | Validates that DQL assessment queries are syntactically correct before execution, ensuring assessment reliability |
| list_problems | IR (Incident Response), Continuous Monitoring | IR-SC-01: Problem Detection; CM-SC-02: Anomaly Identification | IR-4, IR-5, SI-4 | Defect identification; anomaly detection | Returns active/closed problems as defect candidates; each problem is a potential control assessment event |
| get_problem_details | Risk Assessment, Root Cause Analysis | RA-SC-01: Impact Analysis; RCA-SC-01: Causal Determination | RA-3, RA-5, SI-4 | Root cause analysis; impact classification | Davis AI causal analysis provides deterministic root cause — satisfies IR 8011 root cause analysis step |
| list_vulnerabilities | VULN (Vol. 4) | VULN-SC-01: Vulnerability Discovery; VULN-SC-02: Patch Currency | RA-5, SI-2 | Defect identification; vulnerability enumeration | Returns CVEs with severity, affected entities, and remediation status — direct IR 8011 Vol. 4 defect check |
| get_vulnerability_details | VULN (Vol. 4) | VULN-SC-03: Risk Scoring; VULN-SC-04: Remediation Tracking | RA-5, SI-2, SI-5 | Defect analysis; remediation verification | Detailed CVE analysis with CVSS scoring, affected components, and patch availability — defect characterization |
| get_entity_details | HWAM (Vol. 2), SWAM (Vol. 3) | HWAM-SC-01: Device Discovery; SWAM-SC-01: Software Inventory | CM-8, PM-5 | Actual-state collection; inventory verification | Returns entity properties, relationships, and topology — serves as automated asset inventory for IR 8011 |
| get_ownership | AC (Access Control), Accountability | AC-SC-01: Ownership Attribution; AC-SC-02: Responsibility Mapping | AC-5, AC-6, PS-7 | Ownership verification; separation of duties check | Returns ownership team and responsibility chain — validates accountability controls |
| get_logs_for_entity | AU (Audit and Accountability) | AU-SC-01: Log Availability; AU-SC-02: Log Integrity | AU-2, AU-3, AU-6, AU-12 | Audit trail verification; log completeness check | Returns entity-specific logs — verifies that auditable events are being captured per AU control requirements |
| get_kubernetes_events | CSM, Container Security | CSM-SC-02: Container Configuration; CSM-SC-03: Orchestration Compliance | CM-2, CM-6, SI-3 | Configuration deviation detection; container drift identification | Returns cluster events — detects configuration changes, pod restarts, and security-relevant Kubernetes events |
| get_environment_info | System Characterization, Boundary Definition | SYS-SC-01: Environment Classification; BND-SC-01: Boundary Identification | PL-2, CA-3, SA-4 | System boundary verification; environment classification | Returns environment metadata — foundational for defining authorization boundaries and assessment scope |
| create_workflow_for_notification | IR (Incident Response), CP (Contingency Planning) | IR-SC-02: Automated Response; CP-SC-01: Recovery Initiation | IR-4, IR-6, CP-2, CP-10 | Remediation execution; response automation | Creates governance-aware notification workflows — each workflow execution is logged as a control response action |
| send_slack_message | IR (Incident Response Communication) | IR-SC-03: Stakeholder Notification | IR-6, IR-7 | Communication verification; notification completeness | Sends governance-tagged notifications — verifies that incident communication requirements are met |
| update_workflow | CSM, CM (Configuration Management) | CSM-SC-04: Workflow Configuration Integrity | CM-3, CM-5 | Change control verification; workflow modification tracking | Updates existing workflows with governance context — ensures change management controls are satisfied |
| IR 8011 Step | Description | MCP Tool(s) | SDF Governance Guard Action |
| --- | --- | --- | --- |
| 1. Desired State Specification | Define what “compliant” looks like for each control | N/A (policy artifact) | Governance policy artifact defines desired state as DQL query predicates and threshold values per control |
| 2. Actual State Collection | Collect the current state of the system | execute_dql, get_entity_details, list_vulnerabilities, get_kubernetes_events | MCP tools collect actual state; results tagged with Governance Metadata Envelope |
| 3. Comparison | Compare desired vs. actual state | execute_dql (comparison queries) | DQL queries compare actual state against desired state predicates; discrepancies flagged as potential defects |
| 4. Defect Identification | Identify where actual deviates from desired | list_problems, list_vulnerabilities | Davis AI problems and vulnerabilities are classified as defects per IR 8011 taxonomy; assigned defect-check IDs |
| 5. Root Cause Analysis | Determine why the defect exists | get_problem_details, get_vulnerability_details | Davis AI causal analysis provides deterministic root cause; mapped to IR 8011 root cause categories |
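Steps 1 through 4 of the methodology reduce to a comparison between a policy-defined predicate and collected telemetry. A minimal sketch, assuming a threshold-style desired-state specification and an actual-state value such as one returned by an execute_dql query (field names and the threshold are illustrative):

```python
# Step 1: desired state, as it might appear in a governance policy artifact.
desired_state = {"control": "SI-4", "max_error_count_30m": 10}

# Step 2: actual state, e.g. collected via an execute_dql summarize query.
actual_state = {"error_count_30m": 847}

def compare(desired: dict, actual: dict) -> list:
    """Steps 3-4: compare desired vs. actual state and emit a defect
    record for any deviation (defect-check ID is illustrative)."""
    defects = []
    if actual["error_count_30m"] > desired["max_error_count_30m"]:
        defects.append({
            "defect_check_id": "IR-SC-01",
            "control": desired["control"],
            "observed": actual["error_count_30m"],
            "threshold": desired["max_error_count_30m"],
        })
    return defects

print(compare(desired_state, actual_state))
```

Step 5, root cause analysis, is then delegated to Davis AI via get_problem_details rather than computed in policy code.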
A Davis AI problem detection fires for a production Kubernetes service — svc-payment-api — experiencing anomalous error rates within the authorization boundary AB-PROD-EAST-001, classified as FIPS 199 “High” for integrity and availability. This scenario demonstrates the complete LOCATE cycle executing against a realistic production incident, showing how SDF Governance Guard transforms an operational event into a fully auditable compliance artifact.
Phase L — Log: Capture the Raw Signal
MCP Tool: execute_dql
```dql
fetch logs
| filter dt.entity.service == "SERVICE-PAY001"
| filter loglevel == "ERROR"
| fieldsAdd auth_boundary = "AB-PROD-EAST-001", fips199_impact = "HIGH"
```
Result: 847 error-level log entries captured and governance-tagged within authorization boundary AB-PROD-EAST-001.
Phase O — Observe: Contextualize the Signal
MCP Tools: list_problems, list_vulnerabilities
list_problems returns Davis AI problem P-2026-05-001 with deterministic root cause: deployment regression in deploy-v3.2.1 causing null pointer exceptions in payment validation service. list_vulnerabilities returns two CVEs affecting the deployment target: CVE-2026-1234 (CVSS 7.8) and CVE-2026-5678 (CVSS 5.4). SDF Governance Guard enriches: control families SI-4 and IR-4 for the problem; RA-5 for the vulnerabilities.
Phase C — Correlate: Connect to Governance Posture
MCP Tools: get_entity_details, get_ownership, execute_dql
get_entity_details resolves the dependency topology: svc-payment-api → prod-payments namespace → K8S-EAST-001 cluster, with downstream dependencies on svc-ledger-api and svc-notification-api. get_ownership resolves Team-Payments-SRE as the responsible team. DQL correlation query against the audit.governance bucket confirms no prior defects for this service/control combination. Result: classified as new defect — VULN-SC-01 for the CVEs, IR-SC-01 for the deployment regression.
Phase A — Act: Execute Governance-Aware Remediation
MCP Tools: create_workflow_for_notification, send_slack_message
create_workflow_for_notification triggers governance-aware remediation workflow WF-GOV-2026-0501. ServiceNow incident INC-2026-0501 created via Incident Integration App with full GME inheritance: control families SI-4, IR-4, RA-5; authorization boundary AB-PROD-EAST-001; defect checks VULN-SC-01, IR-SC-01. send_slack_message to #payments-sre-oncall with governance-tagged message including the authorization boundary, affected controls, and required response timeline per the governance policy artifact.
Phase T — Trace: Maintain Causal Lineage
MCP Tools: get_logs_for_entity
get_logs_for_entity for SERVICE-PAY001 captures the full governance chain: Log → Observe → Correlate → Act, linked by chain_hash values in the Grail audit.governance bucket. Each phase transition is recorded as a separate envelope with a hash reference to its predecessor.
Phase E — Enforce: Validate and Close the Loop
MCP Tools: verify_dql, execute_dql
verify_dql validates the post-remediation assessment query for syntactic correctness. execute_dql runs the assessment:
```dql
fetch logs
| filter dt.entity.service == "SERVICE-PAY001"
| filter loglevel == "ERROR"
| filter timestamp > now() - 30m
| summarize error_count = count()
```
Result: error_count = 2 (within acceptable threshold of < 10 per 30-minute window per governance policy artifact v2.4.1). Governance Metadata Envelope closed with enforcement_result = CONTROL_SATISFIED. Final chain_hash written to Grail.
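The Enforce decision itself is a threshold check against the policy artifact. A minimal sketch, using the threshold and result names from this scenario (the function and its default are illustrative; the real threshold comes from governance policy artifact v2.4.1):

```python
def enforce(error_count: int, threshold: int = 10) -> str:
    """Enforce phase decision: compare the post-remediation error count
    against the policy threshold and record the enforcement result."""
    return "CONTROL_SATISFIED" if error_count < threshold else "POA&M_REQUIRED"

print(enforce(2))    # CONTROL_SATISFIED -- closes the governance chain
print(enforce(847))  # POA&M_REQUIRED -- a Plan of Action & Milestones is opened
```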
SDF Governance Guard represents a strategic expansion of the MCP server’s addressable market. The governance, risk, and compliance (GRC) market exceeds $15B annually and is growing at 14%+ CAGR. By positioning the MCP server as a compliance control plane, Dynatrace gains entry to this market without building new infrastructure — the capability exists today in the 14 MCP tools. Product implications include: the Governance Metadata Envelope as a first-class MCP concept, the Grail-native compliance ledger as a differentiated data platform capability, and the NIST IR 8011 crosswalk as a marketplace asset that can be published, certified, and monetized.
SDF Governance Guard creates a structured, repeatable engagement model for Expert Services:
● Phase 1 — Foundation (4-6 weeks): Deploy governance policy artifacts, configure Grail compliance bucket, establish authorization boundary mappings for target environment
● Phase 2 — Crosswalk (3-4 weeks): Implement NIST IR 8011 crosswalk for customer-specific control families, configure LOCATE protocol workflows, establish ServiceNow inheritance mappings
● Phase 3 — Continuous ATO (4-6 weeks): Activate continuous assessment workflows, validate compliance ledger output with customer auditors, establish POA&M automation and SSP evidence population
SDF Governance Guard transforms the Dynatrace value proposition for channel partners selling into federal agencies. The conversation shifts from “we monitor your infrastructure” to “we continuously demonstrate your compliance posture.” This repositioning addresses the single largest unmet need in federal IT: automated compliance evidence generation. Channel partners can differentiate against Splunk, Datadog, and New Relic by offering a unified observability-and-compliance platform. Federal agencies spend an estimated 30-40% of their cybersecurity staff time on compliance evidence generation — SDF Governance Guard automates that burden.
The unfunded mandate dimension adds a decisive sales argument. Federal AI governance (EO 14179/M-25-21), automated control assessment (NIST IR 8011), and continuous ATO are all unfunded mandates — agencies must comply using existing budgets with no new appropriations. Channel partners can position Dynatrace with SDF Governance Guard as the zero-incremental-cost compliance solution: agencies that already run Dynatrace can achieve governance compliance without additional platform procurement, without new licensing, and without custom development. In a budget environment where OMB itself acknowledges that agencies have "finite resources," the ability to extract governance capability from existing observability investments is not a feature — it is a fiscal necessity. This transforms the sales conversation from "add another tool" to "unlock the governance value already embedded in the platform you own."
SDF Governance Guard provides four implementation pattern categories that Technical Implementation Engineers can deploy and customize:
1. Governance policy artifacts (JSON/YAML templates for common control frameworks)
2. DQL assessment query libraries (pre-built queries for each IR 8011 defect check)
3. LOCATE workflow templates (Dynatrace Workflow configurations for each LOCATE phase)
4. ServiceNow inheritance mappings (configuration guides for each of the six integration points)
No competing observability platform offers governance-aware MCP tool orchestration, automated NIST IR 8011 crosswalk, or compliance metadata inheritance across AI-to-ITSM integration boundaries.
| Capability | Dynatrace + SDF Governance Guard | Splunk / Cisco | Datadog | New Relic | ServiceNow Native |
| --- | --- | --- | --- | --- | --- |
| MCP Server with governance-aware tool orchestration | Yes — 14 tools with Governance Metadata Envelope | No MCP server | No MCP server | No MCP server | No MCP server |
| Automated NIST IR 8011 crosswalk | Yes — complete crosswalk for all 14 tools | Manual mapping only | No IR 8011 support | No IR 8011 support | Partial (GRC module) |
| Governance metadata inheritance (AI → ITSM) | Yes — across 6 ServiceNow integration points | Limited (SOAR → ITSM) | No governance metadata | No governance metadata | Internal only (no observability context) |
| Operational governance protocol (LOCATE) | Yes — six-phase cadence | No defined protocol | No defined protocol | No defined protocol | No defined protocol |
| Immutable compliance ledger (Grail-native) | Yes — DQL-queryable, hash-chained | Splunk indexes (mutable) | Log retention (mutable) | NRDB (mutable) | Audit trail (limited scope) |
| Deterministic AI root cause for compliance | Yes — Davis AI causal analysis | Statistical correlation | ML-based correlation | Applied Intelligence | No native AI RCA |
The differentiation is architectural, not incremental. Competitors would need to build an MCP-equivalent server, implement a governance metadata envelope, create NIST crosswalks, and establish ITSM inheritance models — a multi-year effort that Dynatrace can deliver today with SDF Governance Guard.
| Phase | Timeline | Capabilities | Deliverables |
| --- | --- | --- | --- |
| Phase 1 — Foundation | Current (Q2 2026) | Governance metadata tagging, NIST IR 8011 crosswalk for HWAM/SWAM/VULN/CSM, LOCATE protocol via MCP tools, Grail-native compliance ledger | Governance policy artifacts (JSON/YAML), DQL assessment query library, LOCATE operational runbook |
| Phase 2 — Expansion | Q3 2026 | Custom MCP tool extensions for agency-specific control frameworks: HIPAA (healthcare), PCI-DSS (payment), CMMC 2.0 (defense industrial base) | Framework-specific policy artifacts, crosswalk extensions, industry-specific DQL query libraries |
| Phase 3 — Automation | Q4 2026 | Automated POA&M generation from Grail compliance data; automated SSP evidence population; eMASS integration for DoD environments | POA&M generator workflow, SSP evidence auto-population templates, eMASS data feed connector |
| Phase 4 — Visualization | 2027 | Continuous ATO dashboard with real-time control inheritance visualization; Authorizing Official risk acceptance portal; multi-boundary compliance aggregation | Dynatrace App for Governance (Grail-powered), AO Decision Support dashboard, Enterprise compliance posture view |
Extensibility Model
SDF Governance Guard is designed for extensibility across three dimensions:
● Framework extensibility: New compliance frameworks (HIPAA, PCI-DSS, CMMC, ISO 27001) can be added by creating new governance policy artifacts and crosswalk mappings — no code changes required
● Tool extensibility: As the Dynatrace MCP server adds new tools, the IR 8011 crosswalk can be extended by adding new rows to the crosswalk matrix and updating governance policy artifacts
● Integration extensibility: The Governance Metadata Envelope is a portable data structure that can propagate to any downstream system with a structured data interface — not just ServiceNow
SDF Governance Guard transforms the Dynatrace MCP server from an observability API into a governance control plane. The framework introduces no new infrastructure, requires no MCP server modifications, and operates entirely on existing Dynatrace capabilities — DQL, Grail, Workflows, Davis AI, and the 14 MCP server tools.
The five contributions of this submission are:
1. SDF Governance Guard — a governance-as-code policy engine that injects compliance metadata into every MCP tool invocation
2. The LOCATE Protocol — a six-phase operational cadence that standardizes governance processing for every operational signal
3. The Davis AI → ServiceNow Governance Inheritance Model — compliance metadata propagation across six Dynatrace-ServiceNow integration points
4. The NIST IR 8011 Crosswalk — a first-of-its-kind mapping of all 14 MCP server tools to NIST automated assessment capabilities
5. The Regulatory Landscape Analysis — demonstrating that the governance gap is widening as federal mandates evolve, making SDF Governance Guard more essential than ever
This is not a theoretical framework. Every component described in this document can be implemented using today’s Dynatrace platform, today’s MCP server, and today’s ServiceNow integrations. The governance gap is real, the mandate urgency is accelerating, and SDF Governance Guard is the bridge.
We respectfully submit this framework for the Dynatrace MCP Server Challenge and invite the judging panel to evaluate it as both a competition entry and a strategic blueprint for the future of governance-aware observability.
SDF Governance Guard — Dynatrace MCP Server Challenge Submission
Version 1.0 — Challenge-Ready — Final Submission (Unified)
Randy Chambers — Discipline Consulting Group, LLC
May 2026
25 Apr 2026 10:25 PM
Here's my submission for the MCP Server Challenge: [paste your post URL here]
At Discipline Consulting Group, we built a Signal–Defect–Failure (SDF) classification framework that began as a certification training methodology — our candidates now score 85+ on the Dynatrace Practitioner exam using it. We then discovered that SDF classification governs every integration boundary between Dynatrace and its strategic partners (all 6 ServiceNow connectors and all 17 workflow connectors), so we extended it to the MCP Server: we classified all 14 MCP tools and 6 agent-level tools by SDF layer and built a governance framework — the SDF Governance Guard — that uses data classification to determine what AI agents can see and do. Tagging for review and comments: @wolfgang_beer, @wolfgang_heider, @GabrieleHB, @andreas_grabner
Full write-up with the Agent Permission Matrix, MCP tool governance map, three practical scenarios, and seven governance rules in the post. Looking forward to feedback!
— Randy Chambers, Dynatrace Practice Lead, Discipline Consulting Group LLC