21 Jan 2025 10:02 AM - edited 21 Jan 2025 10:03 AM
AI observability is a modern, complete approach to understanding how your AI applications behave, how data flows, and how performance changes over time.
At Dynatrace, we provide advanced model observability with predictive capabilities for efficient cost management; end-to-end traceability across RAG pipelines and agentic frameworks; compliance and governance support for regulations such as the EU AI Act; and LLM safeguards such as PII leakage prevention, language toxicity assessment, and hallucination detection. Together, these capabilities cover service health and performance, service quality, compliance, and governance.
Monitor standard metrics like response times, error rates, and cost signals, and chart them per AI model. Leverage the predictive capabilities of Dynatrace Davis AI® to detect changes in usage behavior and forecast cost changes, helping team members understand model performance and identify optimization opportunities.
Monitor safety, privacy, and truthfulness safeguards for your generative AI applications with Dynatrace. Detect toxic language, track Personally Identifiable Information (PII) leakage, identify attempts at LLM misuse such as malicious prompt injection, and monitor model hallucinations.
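Dynatrace's actual safeguard detection is more sophisticated, but the shape of a PII-leakage check on LLM output can be sketched with a toy, regex-based scanner (the patterns and example strings below are illustrative assumptions; production PII detection covers far more categories and locales):

```python
import re

# Toy PII patterns (illustrative only; real detection is much broader)
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_completion(text: str) -> list[str]:
    """Return the PII categories found in an LLM completion."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

findings = scan_completion("Contact jane.doe@example.com, SSN 123-45-6789.")
print(findings)  # ['email', 'us_ssn']
```

In an observability pipeline, findings like these would typically be attached to the span or log record of the model invocation so that leakage can be alerted on and audited later.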
Map dependencies between multiple large language models that work in concert in your RAG pipelines or agentic frameworks to provide end-to-end observability of the entire system. Track every step, from request to response, and gain full visibility into your pipeline's performance.
Dynatrace automatically tracks every input and output with no sampling to provide an audit trail of accurate monitoring and observability for your GenAI invocations. This helps ensure that applications comply with applicable laws such as the EU AI Act and the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
13 Jun 2025 10:58 AM
Hello, thank you very much for that explanation. Can you help me understand what we can do with the Managed version and what requires SaaS? Thanks, Nathalie
04 May 2026 06:24 PM
Hi Nathalie,
Dynatrace SaaS provides the richer and more purpose-built experience for AI and LLM observability, especially through the AI Observability app. This includes out-of-the-box analytics for LLM workloads, GenAI span analysis, token/cost/latency visibility, prompt and completion-level insights, and correlation across traces, metrics, and logs powered by Grail.
With Dynatrace Managed, AI/LLM observability is more limited and mainly depends on what the application explicitly sends through OpenTelemetry or OpenLLMetry instrumentation. You can still ingest and analyze relevant telemetry such as traces, spans, metrics, logs, model names, token counts, latency, and errors, provided those attributes are captured and forwarded correctly.
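Since Managed visibility depends on the application forwarding the right span attributes, here is a minimal sketch of the GenAI-style attributes an instrumented app might attach to each LLM-call span. The attribute names follow the OpenTelemetry GenAI semantic conventions as published; verify them against the convention version your instrumentation targets, and the model name and token counts are made-up examples:

```python
# Sketch of the span attributes an application could attach to each LLM call
# so that Managed can chart model, token, and error telemetry. Names follow
# the OpenTelemetry GenAI semantic conventions (check your semconv version).

def genai_span_attributes(model: str, input_tokens: int, output_tokens: int,
                          system: str = "openai") -> dict:
    """Build the attribute map for one LLM-invocation span."""
    return {
        "gen_ai.system": system,
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
    }

# Example: attributes for a single (hypothetical) chat completion
attrs = genai_span_attributes("gpt-4o-mini", input_tokens=512, output_tokens=128)
print(attrs["gen_ai.request.model"])
```

In practice these attributes would be set via `span.set_attributes(...)` in the OpenTelemetry SDK (or emitted automatically by OpenLLMetry); once ingested, they are what queries and dashboards on Managed can group and chart by.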
The key difference is that Managed does not provide the SaaS AI Observability app experience, Grail-powered analytics, or the same out-of-the-box handling of GenAI/OpenLLMetry semantic attributes. So for Managed, the approach is more custom-instrumentation and dashboard/query driven, while SaaS provides a more native and automated AI observability experience.
In short: basic AI workload visibility is possible on Managed if the right telemetry is sent, but the advanced AI Observability capabilities are SaaS-focused.