27 Jan 2023 02:26 PM - edited 21 Feb 2023 08:31 AM
The documentation literally says: "Dynatrace automatically derives tags from your Kubernetes/OpenShift labels. This enables you to automatically organize and filter all your monitored Kubernetes/OpenShift application components." -- Organize Kubernetes/OpenShift deployments by tags | Dynatrace Docs
The requirements stated are:
However, even with these requirements met, Kubernetes components on Azure such as pods, containers, workloads, services, and namespaces are not automatically tagged, although the labels are applied and visible in the entities' property details.
Oddly enough, the process groups and services derived from them do get automatically [Azure] tagged.
Is this a configuration thing, or is the documentation not accurate? Are we missing a crucial tagging element?
I see several questions and RFEs similar to this topic, but none of them make this behavior, or the statement as documented, any clearer.
10 Feb 2023 01:05 PM
Hoping we get clarity on this at Perform. I have noticed the same; even when designing automatic tags you cannot target the namespaces, workloads, etc. We had to leverage the entity selector with relationship statements to effectively tag the desired entities (a sketch of what that looks like follows below), but it's not scalable for large organizations.
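For anyone hitting the same wall, here is roughly what that workaround looks like: a minimal sketch of an auto-tag definition using an entity-selector-based rule (the entitySelectorBasedRules field of the Config API v1 /api/config/v1/autoTags endpoint). The tag name, the namespace, and the relationship name isNamespaceOfCa are illustrative assumptions, so verify them against your own entity model before relying on this:

```yaml
# Sketch of an auto-tag definition with an entity-selector-based rule,
# shown as YAML for readability (the /api/config/v1/autoTags endpoint
# itself takes JSON). Tag name, namespace, and relationship name are
# placeholders/assumptions -- verify the relationship via
# GET /api/v2/entityTypes/CLOUD_APPLICATION.
name: team                    # the tag key to apply
entitySelectorBasedRules:
  - enabled: true
    valueFormat: payments     # the tag value
    # Select all workloads (CLOUD_APPLICATION) that belong to the
    # "payments" namespace.
    entitySelector: >-
      type(CLOUD_APPLICATION),toRelationships.isNamespaceOfCa(type(CLOUD_APPLICATION_NAMESPACE),entityName("payments"))
```

The same object can be managed through monaco or terraform instead of calling the API directly, which is what makes it repeatable across environments.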
12 Apr 2023 10:18 AM
We want this badly too.
As stated, processes and services get tagged properly based on Kubernetes workload labels.
Why not the workload itself? 🙂
We rely heavily on tags to route our alerts, and this routing is now useless without Dynatrace tags.
Would be really nice if DT could implement this.
Thanks in advance
Niels
13 Apr 2023 11:08 AM
Actually, with Dynatrace as it is, using automation (like monaco or terraform) plus entity selectors to propagate tags and management zones solves the issue very efficiently; a sketch follows below. And honestly, for proper monitoring (and alerting especially) it is kind of required. Have you ever noticed that Kubernetes/OpenShift events are not propagated to processes? A lot of k8s events are error or failure/unavailability events that are really important. Thus, without management zones or tags used in alerting profiles, you cannot properly handle notifications, as some are omitted. I highly recommend pushing them via entity selectors for the time being.
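As an illustration, here is a sketch of a management zone driven by an entity selector, shown as the Settings 2.0 object (schema builtin:management-zones) that monaco or terraform would push. The field names and the relationship name reflect my reading of the schema, so confirm them before use:

```yaml
# Sketch of an entity-selector-based management zone rule. Field names
# and relationship name are my best understanding -- confirm via
# GET /api/v2/settings/schemas/builtin:management-zones before use.
name: payments-team
rules:
  - enabled: true
    type: SELECTOR
    # Pull the "payments" namespace's workloads into the zone; add
    # further rules per entity type (pods, services, ...) as needed.
    entitySelector: >-
      type(CLOUD_APPLICATION),toRelationships.isNamespaceOfCa(type(CLOUD_APPLICATION_NAMESPACE),entityName("payments"))
```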
04 May 2023 06:05 PM
How do we get workload label values as tags so we can filter by those labels in Data Explorer?
05 May 2023 01:10 PM
I think this (and the solution) is on the radar in the long-running RFE: Ability to tag containers based on Kubernetes namespace - Dynatrace Community.
05 May 2023 09:08 PM
In my experience with Dynatrace:
It seems you are experiencing issues with automatic tagging of Kubernetes components on Azure, such as pods, containers, workloads, services, and namespaces, despite meeting the requirements in the Dynatrace documentation. While the labels are applied and visible, they are not being used for automatic tagging as expected.
It is important to note that Dynatrace treats labels as metadata; for automatic tagging purposes, you should use annotations instead. Labels are intended for Kubernetes itself, whereas annotations are better suited to "human" interaction and tools like Dynatrace.
To ensure that your Kubernetes/OpenShift components are tagged correctly, follow the steps below:
1. Use annotations in your Kubernetes/OpenShift deployment configuration to define the tags you want Dynatrace to recognize.
2. Grant the viewer role to the service accounts associated with your Kubernetes/OpenShift deployment, which allows Dynatrace to access and monitor the necessary metadata.
For more information on leveraging tags defined in Kubernetes deployments and the related documentation, please refer to this Dynatrace support article: https://www.dynatrace.com/support/help/platform-modules/infrastructure-monitoring/container-platform....
By following these guidelines, you should be able to resolve the tagging issues you are experiencing with Dynatrace and your Azure components.
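A minimal sketch of what that looks like in a deployment manifest, assuming the metadata.dynatrace.com/ annotation prefix and the DT_TAGS environment variable described in the container-platform docs (all names are placeholders):

```yaml
# Sketch of a Deployment carrying both Kubernetes labels (visible as
# entity properties) and Dynatrace-readable metadata. All names are
# placeholders; check the current docs for exact enrichment behavior.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  labels:
    team: payments                 # shows up as a label property, not a tag
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
      annotations:
        # Imported by OneAgent as process metadata that auto-tag rules
        # can then match on.
        metadata.dynatrace.com/team: payments
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.0   # placeholder image
          env:
            # DT_TAGS tags the resulting process group directly.
            - name: DT_TAGS
              value: "team=payments"
```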
That's my workaround currently.
09 May 2023 07:55 AM
Hello @jimmybourgetleb,
I do not agree with your comment, as everything (processes, services, ...) is already getting Dynatrace tags based on Kubernetes labels (and not annotations).
Why can't the "Kubernetes workloads" DT tags work the same way as all the rest inside DT? 🙂
I also believe that most people use Kubernetes labels rather than Kubernetes annotations for human interaction, but that might be a personal view 🙂
Best regards,
Niels
28 Aug 2023 10:57 PM
I agree with almost all the users on this post; we have run into the same issue, and it's beyond me why Dynatrace isn't making this a priority to resolve. We already cannot have calculated metrics for threshold alerting, and now we can't identify the app team/product owners across the 30+ namespaces for the 20+ app teams in 6 different environments... so we are putting identifying labels in the k8s deployment config (which DT does pull in as one big label, but does not store as a tag for DT logic or send to PagerDuty for orchestration routing logic).
Seems like a big miss that shouldn't be too stressful to fix. Working with our internal DT team and directly with the DT vendor, we cannot get a definitive read on if/when this will be fixed.
~John
29 Aug 2023 12:27 PM
Added this to the corresponding RFE: https://community.dynatrace.com/t5/Feedback-channel/Make-K8s-metadata-labels-annotations-a-first-cla... (tagging no 16)