Hi Community,
We are in the process of transitioning our alerting and monitoring workflows from Splunk to Dynatrace, and I’m looking for guidance on the technical prerequisites and migration considerations involved.
Specifically, I’m interested in understanding:
- Baseline Requirements: What foundational configurations (e.g., custom metrics ingestion, tagging strategies, management zones, entity modeling) should be in place in Dynatrace before replicating Splunk alerts? (There is a rough custom-metric ingestion sketch after this list showing what I mean.)
- Alert Mapping Strategy: How do Splunk alerts (based on saved searches or correlation rules) translate into Dynatrace’s problem detection model, Davis AI, and custom event-based alerts? (See the metric-event sketch after this list for what I currently imagine.)
- Data Source Alignment: Are there recommended approaches for ensuring parity between Splunk data sources and Dynatrace-monitored entities (e.g., log ingestion, OneAgent coverage, API integrations)? (See the log-ingest sketch after this list.)
- Automation & Tooling: Are there any tools, APIs, or scripts available to automate or streamline the alert migration process?
- Governance & Tuning: Best practices for managing alert noise, threshold tuning, and aligning with Dynatrace’s AI-driven root cause analysis.
- Lessons Learned: Any known challenges, limitations, or gotchas from teams who have already completed this migration?
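To make the questions above more concrete, here are a few rough sketches of what I have in mind. Everything in them (environment URLs, tokens, metric keys, attribute and field names) is a placeholder or my own assumption rather than a working setup, so please correct anything that is off. First, for the baseline-requirements point, this is the kind of custom metric ingestion I mean, using the Metrics API v2 line protocol:

```python
# Sketch: pushing a custom metric to Dynatrace via the Metrics API v2
# line protocol. Environment URL, token, metric key, and dimensions are
# placeholders; the token would need the metrics.ingest scope.
import requests

DT_ENV = "https://<environment-id>.live.dynatrace.com"  # placeholder
DT_TOKEN = "<api-token-with-metrics.ingest-scope>"       # placeholder

# Line protocol: <metric.key>,<dimension>=<value> <numeric value>
payload = "custom.checkout.errors.count,app=storefront,env=prod 3"

resp = requests.post(
    f"{DT_ENV}/api/v2/metrics/ingest",
    headers={
        "Authorization": f"Api-Token {DT_TOKEN}",
        "Content-Type": "text/plain",
    },
    data=payload,
    timeout=10,
)
print(resp.status_code, resp.text)
```

My working assumption is that dimensions like `app` and `env` here would also drive our tagging and management-zone rules, but I would appreciate confirmation that this is the intended pattern.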
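For the alert-mapping point, this is how I currently picture a simple static-threshold Splunk saved search being re-created as a Dynatrace custom metric event via the Settings 2.0 API. The schemaId and field names below are my best reading of the `builtin:anomaly-detection.metric-events` schema and may well be wrong; I would validate against `/api/v2/settings/schemas` before relying on any of it:

```python
# Sketch: creating a static-threshold metric event via the Settings 2.0 API.
# The value payload reflects my reading of the
# builtin:anomaly-detection.metric-events schema and needs validation.
import requests

DT_ENV = "https://<environment-id>.live.dynatrace.com"  # placeholder
DT_TOKEN = "<api-token-with-settings.write-scope>"       # placeholder

metric_event = {
    "schemaId": "builtin:anomaly-detection.metric-events",
    "scope": "environment",
    "value": {
        "enabled": True,
        "summary": "Checkout error rate too high (migrated from Splunk)",
        "queryDefinition": {
            "type": "METRIC_SELECTOR",
            "metricSelector": "custom.checkout.errors.count:splitBy(\"app\")",
        },
        "modelProperties": {
            "type": "STATIC_THRESHOLD",
            "threshold": 10,
            "alertCondition": "ABOVE",
            "alertOnNoData": False,
            "violatingSamples": 3,
            "samples": 5,
            "dealertingSamples": 5,
        },
        "eventTemplate": {
            "title": "Checkout errors above threshold",
            "description": "Ported from Splunk saved search 'checkout_errors'",
            "eventType": "CUSTOM_ALERT",
            "davisMerge": True,
        },
    },
}

resp = requests.post(
    f"{DT_ENV}/api/v2/settings/objects",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    json=[metric_event],  # the endpoint accepts a list of settings objects
    timeout=10,
)
print(resp.status_code, resp.text)
```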
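For the data-source-alignment point, our applications currently send events to Splunk HEC. I assume the rough Dynatrace counterpart is the Log Monitoring v2 ingest endpoint, along these lines (attribute names other than `content` are my guess):

```python
# Sketch: sending a log record to the Dynatrace Log Ingest API v2,
# roughly mirroring what we currently post to Splunk HEC.
# URL and token are placeholders; the token would need the logs.ingest scope.
import requests

DT_ENV = "https://<environment-id>.live.dynatrace.com"  # placeholder
DT_TOKEN = "<api-token-with-logs.ingest-scope>"          # placeholder

record = {
    "content": "Payment gateway timeout after 30s",
    "severity": "error",               # assumed attribute name
    "log.source": "checkout-service",  # assumed attribute name
    "app": "storefront",               # made-up custom attribute
}

resp = requests.post(
    f"{DT_ENV}/api/v2/logs/ingest",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    json=[record],
    timeout=10,
)
print(resp.status_code, resp.text)
```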
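And for the automation point, my current plan is simply to inventory our Splunk saved searches via the Splunk management REST API and map them one by one into a spreadsheet; something along these lines (host, port, token, and the exact `content` fields are assumptions from savedsearches.conf):

```python
# Sketch: listing Splunk saved searches via the management REST API
# (port 8089) as raw material for an alert-mapping inventory.
# Host and token are placeholders; not every search will have every field.
import requests

SPLUNK = "https://splunk.example.com:8089"  # placeholder
SPLUNK_TOKEN = "<splunk-auth-token>"         # placeholder

resp = requests.get(
    f"{SPLUNK}/services/saved/searches",
    headers={"Authorization": f"Bearer {SPLUNK_TOKEN}"},
    params={"output_mode": "json", "count": 0},
    verify=False,  # our lab instance uses a self-signed certificate
    timeout=30,
)
resp.raise_for_status()

# Print a minimal inventory: name, schedule, alert type, threshold
for entry in resp.json().get("entry", []):
    content = entry.get("content", {})
    print(
        entry.get("name"),
        content.get("cron_schedule", ""),
        content.get("alert_type", ""),
        content.get("alert_threshold", ""),
        sep=" | ",
    )
```

If there is an existing tool or integration that already does this kind of export and translation, I would much rather use that than script it myself.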
If there are any official documentation pages, migration playbooks, or community-shared templates, I’d greatly appreciate the pointers.
Thanks in advance for your insights!
Best regards,