In April 2025, Dynatrace released the log module feature for automatic ingest of container logs on clusters monitored with the OneAgent workload in Cloud Native Fullstack or App-only mode, or without OneAgent. If you encounter log ingest issues with the log module feature, please review the prerequisites and steps below for enabling logs on k8s.
If you deploy the OneAgent workload in Classic Fullstack mode or use a deprecated technology like OneAgent Operator, we encourage migrating to our Cloud Native Fullstack deployment and enabling the log module feature for automatic ingest of container logs:
- Update the installation with the CSI driver included:
helm upgrade dynatrace-operator oci://docker.io/dynatrace/dynatrace-operator \
  --namespace dynatrace \
  --atomic \
  --set csidriver.enabled=true # By default CSI driver is enabled
- In your Dynakube, change .spec.oneAgent.classicFullStack: {} to .spec.oneAgent.cloudNativeFullStack: {} (see the sketch after this list).
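For illustration, a minimal sketch of the relevant Dynakube sections after the change; the metadata and apiUrl values are placeholders, and any other fields in your existing Dynakube stay as they are:
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube             # placeholder name
  namespace: dynatrace
spec:
  apiUrl: https://<environment-id>.live.dynatrace.com/api   # placeholder
  oneAgent:
    cloudNativeFullStack: {}   # was: classicFullStack: {}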
Please update the Dynatrace Operator and follow the steps below for log monitoring on k8s with our log module. Check the latest version in the Dynatrace Operator latest release.
The below command updates only the Dynatrace Operator with the CSI driver:
kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/download/v1.7.0/kubernetes-csi.yaml
The below command updates only the Dynatrace Operator without the CSI driver:
kubectl apply -f https://github.com/Dynatrace/dynatrace-operator/releases/download/v1.7.0/kubernetes.yaml
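After applying either manifest, a quick sanity check that the operator rolled out (this assumes the default dynatrace namespace and deployment name):
kubectl -n dynatrace rollout status deployment/dynatrace-operator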
Enable the log module feature flag Collect all Container Logs in cluster Settings > Log monitoring > Log module feature flags.
In your Dynakube, add .spec.logMonitoring: {}, which the Dynatrace Operator uses to create an unconditional log ingest rule in cluster Settings > Log monitoring > Log ingest rules.
You may set granular ingest rules with the Dynakube parameter .spec.logMonitoring.ingestRuleMatchers, which only supports storage-include rules with k8s matchers (see the sketch below).
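For illustration, a minimal Dynakube sketch of both settings; the attribute and namespace values are placeholders, and the ingestRuleMatchers layout shown is an assumption to be checked against the operator docs for your version:
spec:
  logMonitoring:
    ingestRuleMatchers:                  # optional; omit for an unconditional ingest rule
      - attribute: k8s.namespace.name    # example k8s matcher, placeholder values
        values:
          - production
          - payments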
You may override the log ingest rule(s) created by the Dynatrace Operator with your own log ingest rules in Dynatrace, at the cluster scope or a scope closer to the node, such as host settings. We recommend matchers on k8s, content, or log level attributes in the Log ingest rules settings in Dynatrace. Storage-exclude rules are also supported for more granular control of ingest, for example to exclude from storage logs containing supported severity keywords like trace.
In your Dynakube, change apiVersion to v1beta5.
The two deployment paths below (App-only/Standalone and Cloud Native Fullstack) are mutually exclusive; complete one but not both. The Dynatrace Operator will not deploy our logMonitoring workload on clusters where the OneAgent workload runs in Cloud Native Fullstack mode, because Fullstack OneAgents already include LogAnalytics for log monitoring.
Required only for Logs on k8s with App-only or Standalone
In your Dynakube, add .spec.templates.logMonitoring with imageRef parameters. Check the latest logMonitoring image in the ECR Public Gallery. We recommend deploying one of the most recent versioned images, such as 1.321 as of September 2025. latest is not currently an existing tag for our official logMonitoring images stored in public repositories.
templates:
  logMonitoring:
    imageRef:
      repository: public.ecr.aws/dynatrace/dynatrace-logmodule
      tag: 1.319.83.20250909-095914
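Once the Dynakube is applied, a quick check that the log module pods came up (assuming the default dynatrace namespace):
kubectl -n dynatrace get daemonsets,pods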
Logs on k8s with Cloud Native Fullstack OneAgent
Update OneAgent. We recommend one of the two latest releases.
Check that the Dynakube feature flag (annotation) feature.dynatrace.com/automatic-kubernetes-api-monitoring is not set to false.
Check that kubernetes-monitoring is included in the list of ActiveGate capabilities in .spec.activeGate.capabilities of your Dynakube (see the sketch below).
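A minimal sketch of those two checks in the Dynakube; the annotation only needs to be absent or set to "true", and the capabilities list may contain additional entries:
metadata:
  annotations:
    feature.dynatrace.com/automatic-kubernetes-api-monitoring: "true"
spec:
  activeGate:
    capabilities:
      - kubernetes-monitoring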
In Dynatrace, we should see logs attached to the monitored k8s entity pages in the Kubernetes app, and we can also filter logs by any of these fields in DQL (Grail) or DSQL (Logs Classic).
Field log.source is set to Container Output. Field dt.source_entity is set to the containerized process(es) that wrote the logs to the pod's stdout/stderr streams. If process details are unavailable, an available entity id, such as the cloud application instance from the pod entity page, may be used to set the dt.source_entity field.
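As an example, a rough DQL sketch for pulling these container logs for one cluster (the cluster name is a placeholder):
fetch logs
| filter log.source == "Container Output"
| filter k8s.cluster.name == "my-cluster"   // placeholder cluster name
| fields timestamp, k8s.namespace.name, k8s.pod.name, content
| sort timestamp desc
| limit 100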
Logs in context of a k8s cluster monitored on a Dynatrace Demo environment.
If you encounter any unexpected behavior with the deployment change or missing log ingest, please collect the Dynatrace Operator support archive (command below), write a description of your issue with the results of the above steps, and link the cluster in your Dynatrace environment on a support ticket with Dynatrace Support.
kubectl exec -n dynatrace deployment/dynatrace-operator -- dynatrace-operator support-archive --stdout > operator-support-archive.zip
A notebook share link with edit permissions showing the query used can also help Dynatrace Support verify your issue description and start the investigation.
@jgrant thanks for the post - this is the closest thing to a document that has been created for this; it was fun figuring it out without it.
It would be good to see this actually documented in the official docs.
The current documentation leaves a lot to be desired.
I do have one concern about the log module - I can't be bothered raising yet another Enhancement Request.
Logs ingested via the log module have a different base tag for Kubernetes clusters compared to existing events and log ingestion from the ActiveGate.
ActiveGate Kubernetes events and logs use: dt.kubernetes.cluster.name
Log module logs use: k8s.cluster.name
This means that looking at k8s logs and k8s events can't be done in the same query, which makes investigating issues painful due to the filtering.
When investigating issues in k8s, it is important to see the cluster-, namespace-, and workload-level events in direct comparison with the application logs.
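For what it's worth, a rough DQL workaround sketch that normalizes the two attributes into a single field before filtering (the cluster name is a placeholder):
fetch logs
| fieldsAdd cluster = coalesce(dt.kubernetes.cluster.name, k8s.cluster.name)
| filter cluster == "my-cluster"   // placeholder
| fields timestamp, cluster, k8s.namespace.name, content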
Thanks