Automatic tags on Kubernetes/OpenShift components with labels (pod, namespace, etc.)

fstekelenburg
DynaMight Pro

The documentation literally says: "Dynatrace automatically derives tags from your Kubernetes/OpenShift labels. This enables you to automatically organize and filter all your monitored Kubernetes/OpenShift application components." -- Organize Kubernetes/OpenShift deployments by tags | Dynatrace Docs

The requirements stated are (see the sketch after this list for a quick check):

  • Pods are monitored with a code module
  • automountServiceAccountToken: false isn't set in your pod's spec
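
For reference, a minimal sketch of checking both requirements across a namespace, using the official `kubernetes` Python client and assuming kubeconfig access. The namespace name is a placeholder, and whether a pod is actually monitored by a code module still has to be confirmed in Dynatrace itself:

```python
# Sketch: verify the documented prerequisites for automatic tag derivation.
# Assumes a working kubeconfig; "payments" is a placeholder namespace.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("payments").items:
    # automountServiceAccountToken must not be explicitly false;
    # None means "unset", which inherits the service account default and is fine.
    automount_ok = pod.spec.automount_service_account_token is not False
    print(f"{pod.metadata.name}: labels={pod.metadata.labels} automount_ok={automount_ok}")
```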

However, with these requirements met, the Azure components such as pods, containers, workloads, services, and namespaces are not automatically tagged, even though the labels are applied and visible in the entity properties.

Oddly enough, the derived process groups and services do get automatically tagged [Azure].

Is this a configuration thing, or is the documentation not accurate? Are we missing a crucial tagging element?

I see several questions and RFEs similar to this topic, but none of them are clear on this behavior or on the statement as documented.

[screenshot: fstekelenburg_0-1674829375346.png]

Kind regards, Frans Stekelenburg                 Certified Dynatrace Associate | measure.works, Dynatrace Partner
9 REPLIES

ChadTurner
DynaMight Legend

Hoping we get clarity on this at Perform. I have noticed the same; even when designing automatic tags you cannot target the namespaces, workloads, etc. We had to leverage the entity selector with relationship statements to effectively tag the desired entities, but it's not scalable for large organizations.
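
For illustration, this is roughly what that selector approach looks like against the Environment API v2. This is a sketch only: the URL and token are placeholders, and the relationship name in the selector is illustrative, so verify it against GET /api/v2/entityTypes in your environment:

```python
# Sketch: list Kubernetes workload entities matched by a relationship-based
# entity selector. URL/token are placeholders; the relationship name is
# illustrative and must be verified via GET /api/v2/entityTypes.
import requests

DT_URL = "https://abc12345.live.dynatrace.com"  # placeholder environment
DT_TOKEN = "dt0c01.XXXX"                        # token with entities.read scope

selector = (
    'type("CLOUD_APPLICATION"),'
    'fromRelationships.isNamespaceOfCa('        # illustrative relationship name
    'type("CLOUD_APPLICATION_NAMESPACE"),entityName.equals("payments"))'
)
resp = requests.get(
    f"{DT_URL}/api/v2/entities",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"entitySelector": selector, "fields": "+tags"},
)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    print(entity["displayName"], entity.get("tags", []))
```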

-Chad

niels_peto
Frequent Guest

We want this badly too.
As stated, processes and services get tagged properly based on Kubernetes workload labels.

Why not the workload itself? 🙂

We rely heavily on tags to route our alerts, and this routing is now useless without Dynatrace tags.

It would be really nice if DT could implement this.

Thanks in advance
Niels

Hi @fstekelenburg 

Actually, with Dynatrace as it is, using automation (like Monaco or Terraform) plus entity selectors to propagate tags and management zones solves the issue very efficiently. And honestly, for proper monitoring (and especially alerting) it is kind of required. Have you ever noticed that Kubernetes/OpenShift events are not propagated to processes? A lot of k8s events are error or failure/unavailability events that are really important. Thus, without management zones or tags used in alerting profiles, you cannot properly handle notifications, as some are omitted. I highly recommend pushing them via entity selectors for the time being.
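
Concretely, a minimal sketch of what Monaco/Terraform would automate, calling the Custom tags API v2 directly. URL, token, selector, and tag values are placeholders, and note this applies manual tags rather than auto-tag rules:

```python
# Sketch: push a tag to all matching Kubernetes workload entities via the
# Custom tags API v2 (POST /api/v2/tags). All names below are placeholders.
import requests

DT_URL = "https://abc12345.live.dynatrace.com"  # placeholder environment
DT_TOKEN = "dt0c01.XXXX"                        # token with entities.write scope

resp = requests.post(
    f"{DT_URL}/api/v2/tags",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    # A relationship-based selector (as in the earlier sketch) works here too.
    params={"entitySelector": 'type("CLOUD_APPLICATION"),entityName.startsWith("payments-")'},
    json={"tags": [{"key": "team", "value": "payments"}]},
)
resp.raise_for_status()
print(resp.json())  # includes the count of entities that matched the selector
```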

How do we get workload label values as tags so we can filter by those labels in Data Explorer?

fstekelenburg
DynaMight Pro

I think this (and the solution) is on the radar in the long-running RFE: Ability to tag containers based on Kubernetes namespace - Dynatrace Community.

Kind regards, Frans Stekelenburg                 Certified Dynatrace Associate | measure.works, Dynatrace Partner

jimmybourgetleb
Observer

In my experience with Dynatrace:

It seems that you are experiencing issues with automatically tagging Azure components, such as pods, containers, workloads, services, and namespaces, despite meeting the requirements mentioned in the Dynatrace documentation. While labels are applied and visible, they do not seem to be functioning as expected for automatic tagging.

It is important to note that Dynatrace considers labels as metadata, and for automatic tagging purposes, you should use annotations instead. Labels are intended for Kubernetes, whereas annotations are more suitable for "human" interaction and tools like Dynatrace.

To ensure that your Kubernetes/OpenShift components are tagged correctly, follow the steps below:

1. Use annotations in your Kubernetes/OpenShift deployment configuration to define the tags you want Dynatrace to recognize.

2. Grant the viewer role to the service accounts associated with your Kubernetes/OpenShift deployment, which allows Dynatrace to access and monitor the necessary metadata.

For more information on leveraging tags defined in Kubernetes deployments and the related documentation, please refer to this Dynatrace support article: https://www.dynatrace.com/support/help/platform-modules/infrastructure-monitoring/container-platform....

By following these guidelines, you should be able to resolve the tagging issues you are experiencing with Dynatrace and your Azure components.
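
For what it's worth, the annotation part of this workaround would look roughly like the following sketch, using the `kubernetes` Python client with placeholder deployment, namespace, and annotation names. Note that later replies in this thread report label-based rather than annotation-based tag derivation, so verify the behavior in your environment:

```python
# Sketch: add an annotation to a deployment's pod template so downstream tools
# can pick it up. Deployment/namespace names and the annotation are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment(
    name="my-app",
    namespace="payments",
    body={"spec": {"template": {"metadata": {
        "annotations": {"owner": "team-payments"}
    }}}},
)
```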


That's my workaround currently.

Hello @jimmybourgetleb,

I do not agree with your comment, as everything (processes, services, ...) is already getting Dynatrace tags based on Kubernetes labels (and not annotations).

Why can't the "Kubernetes Workloads" DT tags work the same way as all the rest inside DT ? 🙂

 

I also believe that most people use Kubernetes Labels and not Kubernetes Annotations for the human interactions but that might be a personal view 🙂

 

Best regards,
Niels

netfreq
Frequent Guest

I agree with almost all users on this post; we have run into the same issue, and it's beyond me why Dynatrace isn't making this a priority to resolve. We already cannot have calculated metrics for threshold alerting, and now we can't identify the app team/product owners across the 30+ namespaces for the 20+ app teams in 6 different environments. So we put identifying labels in the k8s deployment config, which DT does pull in as one big label, but they are not stored as tags for DT logic or sent to PagerDuty for orchestration routing logic.

Seems like a big miss that shouldn't be too stressful to fix. Working with our internal DT team and directly with the DT vendor, we cannot get a definitive read on if/when this will be fixed.

~John

florian_g
Dynatrace Mentor

Added this to the corresponding RFE: https://community.dynatrace.com/t5/Feedback-channel/Make-K8s-metadata-labels-annotations-a-first-cla... (tagging no 16)

One does not simply run a container...
