04 Feb 2026 07:49 AM
Our OpenShift Kubernetes cluster is currently running in application‑only monitoring mode, so we don’t have visibility into host‑level metrics. To address this, we’re planning to integrate Prometheus with Dynatrace.
Has anyone successfully implemented this integration? We only need to collect host‑level metrics, since application‑only mode already covers workloads, pods, and containers.
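For reference, what we had in mind on the Prometheus side is roughly a node-exporter DaemonSet annotated for Dynatrace metric scraping - a rough sketch only (the image, namespace, and port are placeholders, it assumes an ActiveGate with the metrics-ingest capability is in place, and on OpenShift the DaemonSet would also need a suitably privileged SCC):
/*
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter        # placeholder name
  namespace: monitoring      # placeholder namespace
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
      annotations:
        metrics.dynatrace.com/scrape: "true"   # let Dynatrace scrape this pod's Prometheus endpoint
        metrics.dynatrace.com/port: "9100"     # node-exporter default port
        metrics.dynatrace.com/path: "/metrics"
    spec:
      hostNetwork: true    # expose host-level CPU/memory/disk/network metrics
      hostPID: true
      containers:
        - name: node-exporter
          image: quay.io/prometheus/node-exporter:v1.8.2   # example tag
          args:
            - --path.rootfs=/host
          ports:
            - containerPort: 9100
          volumeMounts:
            - name: rootfs
              mountPath: /host
              readOnly: true
      volumes:
        - name: rootfs
          hostPath:
            path: /
*/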
04 Feb 2026 04:48 PM
Hi,
If you do not want infrastructure metrics coming from OneAgent in fullstack mode, there is more than one way to do it:
Dynatrace has videos about both approaches on its YouTube channel.
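For example - and this is only one possible option, not necessarily the approaches covered in the videos - OneAgent can be deployed in host-monitoring mode via a DynaKube, which collects host metrics without fullstack instrumentation. A minimal sketch, assuming your operator version supports hostMonitoring:
/*
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube-host-monitoring   # hypothetical name
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  oneAgent:
    hostMonitoring: {}   # host/infrastructure metrics only, no deep application instrumentation
*/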
Best regards
05 Feb 2026 12:49 AM
Got it, I'll check it out, Anton.
04 Feb 2026 11:12 PM - edited 05 Feb 2026 01:33 AM
/*
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube-k8s-monitoring
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-kubernetes-api-monitoring-cluster-name: "CLUSTER_NAME"
    feature.dynatrace.com/no-proxy: "dynakube-ag-routing-activegate.dynatrace.svc.cluster.local,.myopenshiftregistry.com" # container registries & Dynatrace services
  labels:
    apm-number: APM123456
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  skipCertCheck: true
  proxy:
    valueFrom: dynatraceproxysecret
  networkZone: CLUSTER_NAME
  activeGate:
    capabilities:
      - kubernetes-monitoring
      - metrics-ingest
    image: myopenshiftregistry.com/dynatrace/dynatrace-activegate:1.329.36.20260110-044104
    replicas: 2
    resources:
      requests:
        cpu: 2
        memory: 4Gi
      limits:
        cpu: 4
        memory: 12Gi
    volumeClaimTemplate:
      accessModes: [ "ReadWriteOncePod" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 10Gi
    group: "CLUSTER_NAME"
  telemetryIngest:
    protocols:
      - otlp
  logMonitoring:
    ingestRuleMatchers:
      - attribute: "k8s.namespace.name"
        values:
          - "cpaas-namespaceA"
          - "cpaas-namespaceB"
  templates:
    logMonitoring:
      imageRef:
        repository: myopenshiftregistry.com/dynatrace/dynatrace-logmodule
        tag: 1.329.66.20260109-142000
    otelCollector:
      imageRef:
        repository: myopenshiftregistry.com/dynatrace/dynatrace-otel-collector
        tag: 0.42.0
---
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube-ag-routing
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-kubernetes-api-monitoring-cluster-name: "CLUSTER_NAME"
    feature.dynatrace.com/no-proxy: "dynakube-ag-routing-activegate.dynatrace.svc.cluster.local,.myopenshiftregistry.com" # container registries & Dynatrace services
    feature.dynatrace.com/oneagent-initial-connect-retry-ms: "5000"
    feature.dynatrace.com/init-container-seccomp-profile: "true"
  labels:
    apm-number: APM123456
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  skipCertCheck: true
  proxy:
    valueFrom: dynatraceproxysecret
  networkZone: CLUSTER_NAME
  metadataEnrichment:
    enabled: true
    namespaceSelector:
      matchLabels:
        dynatrace-meta-injection: enabled
  activeGate:
    capabilities:
      - routing
    image: myopenshiftregistry.com/dynatrace/dynatrace-activegate:1.329.36.20260110-044104
    replicas: 1
    resources:
      requests:
        cpu: 1
        memory: 1Gi
      limits:
        cpu: 2
        memory: 4Gi
    group: "CLUSTER_NAME"
  oneAgent:
    applicationMonitoring:
      namespaceSelector:
        matchLabels:
          dynatrace-agent-injection: enabled
*/
Hi @JhonKenneth
We use OCP and can get these metrics without issue; we also use App Only and a combination of options 1 & 3.
1. Application and Kubernetes Monitoring - containerised ActiveGates.
This uses multiple DynaKubes to provide the required monitoring on the cluster and reduce resource utilisation.
E.g. the manifests at the top of this comment.
2. Use an external ActiveGate to pull the metrics from the K8s API endpoint (a manual version of point 1 - connecting in from outside the cluster).
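For option 2, the cluster side is essentially a service account whose token the Dynatrace Kubernetes connection uses against the API endpoint. A rough sketch of what that could look like (names are placeholders, and the bound role is an assumption - Dynatrace provides a dedicated monitoring role you'd normally use rather than the generic view role):
/*
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dynatrace-kubernetes-monitoring   # placeholder name
  namespace: dynatrace
---
apiVersion: v1
kind: Secret
metadata:
  name: dynatrace-kubernetes-monitoring-token
  namespace: dynatrace
  annotations:
    kubernetes.io/service-account.name: dynatrace-kubernetes-monitoring
type: kubernetes.io/service-account-token   # long-lived token for the Dynatrace cluster connection
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dynatrace-kubernetes-monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view   # assumption: read-only access is enough for this sketch
subjects:
  - kind: ServiceAccount
    name: dynatrace-kubernetes-monitoring
    namespace: dynatrace
*/
The Kubernetes API URL plus the token from that secret then go into the Kubernetes cluster connection settings in Dynatrace.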
04 Feb 2026 11:47 PM
@JhonKenneth
Apologies for the formatting of the comment above - it was the only way I could get it to post. The code is for example 1.
05 Feb 2026 01:17 AM
1. Application and Kubernetes Monitoring - containerised active gates.
This uses multiple DynaKubes to provide the required monitoring on the cluster and reduce resource utilisation. - I can use this multiple-DynaKube approach on one of our OpenShift clusters, since there is a limit range set on some of the application namespaces.
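For context, the limit range on those application namespaces looks roughly like the below (illustrative values only, not our actual settings), which is why per-container requests and limits matter to us:
/*
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits        # hypothetical name and values
  namespace: cpaas-namespaceA  # one of the application namespaces
spec:
  limits:
    - type: Container
      defaultRequest:   # applied when a container sets no requests
        cpu: 250m
        memory: 256Mi
      default:          # applied when a container sets no limits
        cpu: "1"
        memory: 1Gi
      max:              # hard cap per container
        cpu: "2"
        memory: 2Gi
*/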
05 Feb 2026 01:32 AM
Hi @JhonKenneth, the details above are for a relatively large cluster (120+ nodes); resource requirements and replica counts would be significantly lower for most teams.
If you're running fewer than 50 nodes, something like the below would suffice.
replicas: 1
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 2
    memory: 2Gi
You could tune this down even further if required.
Just to clarify, this runs only within the Dynatrace namespace; the foundation of your application-only monitoring will continue to run as is (I've assumed you're on automatic injection rather than manual, and that you have the containerised ActiveGate).
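If you're on the namespaceSelector approach from the routing DynaKube above, the application namespaces just need the matching labels, e.g. (the namespace name here is only a placeholder):
/*
apiVersion: v1
kind: Namespace
metadata:
  name: cpaas-namespaceA                 # example application namespace
  labels:
    dynatrace-agent-injection: enabled   # matches the oneAgent applicationMonitoring namespaceSelector
    dynatrace-meta-injection: enabled    # matches the metadataEnrichment namespaceSelector
*/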
This solution will use a fair bit less resource than deploying an OTel Collector, which would need to run as a DaemonSet to collect the Kubernetes monitoring metrics.
I would strongly recommend giving it a go if you can, as it will save you time and effort.
You can also remove the log monitoring and OTel parts from the k8s monitoring ActiveGate to further reduce resources; I've just included them as a "catch all" to make things easier and for future reference.
It would look like this:
/*
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube-k8s-monitoring
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-kubernetes-api-monitoring-cluster-name: "CLUSTER_NAME"
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  skipCertCheck: true
  proxy:
    valueFrom: dynatraceproxysecret
  networkZone: CLUSTER_NAME
  activeGate:
    capabilities:
      - kubernetes-monitoring
      - metrics-ingest
    image: myopenshiftregistry.com/dynatrace/dynatrace-activegate:1.329.36.20260110-044104
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2
        memory: 2Gi
    group: "CLUSTER_NAME"
*/
Hope this helps.
Thanks
05 Feb 2026 02:44 AM - edited 05 Feb 2026 02:45 AM
Hi @JhonKenneth
The above was an example for a 120+ node cluster, with all DT components.
You could quite happily reduce the resources and scope as below and just deploy the k8s monitoring part - this can be done in the same namespace and should fit within your limits.
/*
apiVersion: dynatrace.com/v1beta5
kind: DynaKube
metadata:
  name: dynakube-k8s-monitoring
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-kubernetes-api-monitoring-cluster-name: "CLUSTER_NAME"
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  skipCertCheck: true
  proxy:
    valueFrom: dynatraceproxysecret
  networkZone: CLUSTER_NAME
  activeGate:
    capabilities:
      - kubernetes-monitoring
      - metrics-ingest
    image: myopenshiftregistry.com/dynatrace/dynatrace-activegate:1.329.36.20260110-044104
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2
        memory: 2Gi
    group: "CLUSTER_NAME"
*/