02 May 2024 10:08 AM
We have set up an exporter whose metrics are scraped into Dynatrace.
One of the metrics, "hubble_drop_total.count", is growing at an alarming rate because one of its dimensions has very high cardinality and produces a huge number of data points.
Prometheus counters are cumulative, so the exporter keeps exposing every series it has seen since the pod started, rather than only the data for the last minute or so; Dynatrace then grabs the literal value present in the Prometheus exporter on every scrape and ingests it.
The result is going from 60 data points per hour to 904,000 data points per hour within a week.
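For scale, here is a quick back-of-the-envelope sketch of what that rate implies in active series, assuming Dynatrace scrapes the exporter once per minute (the scrape interval is our assumption, adjust it to your actual config):

```python
# Rough cardinality estimate; the one-minute scrape interval is an assumption.
SCRAPES_PER_HOUR = 60

datapoints_per_hour = 904_000
implied_series = datapoints_per_hour / SCRAPES_PER_HOUR

# Each exposed series contributes one data point per scrape, so
# ~904,000 points/hour at 60 scrapes/hour means ~15,000 distinct
# label combinations are being exposed by the exporter.
print(f"~{implied_series:,.0f} active series")
```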
We really need to have this metric, but we can't see what else to do besides cutting down the dimensions so much that the metric loses its value.
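One middle ground we have been considering is to pre-aggregate the counter outside of Dynatrace: drop only the worst-cardinality label, convert the cumulative counter to per-interval deltas, and push the result through the metrics ingest API. Below is a minimal sketch of that idea; the exporter address, environment URL, and the `destination_ip` label name are placeholders, and this is just our assumption of an approach, not an official Dynatrace recommendation:

```python
import time
from collections import defaultdict

import requests
from prometheus_client.parser import text_string_to_metric_families

# Placeholders for illustration only:
EXPORTER_URL = "http://hubble-metrics:9965/metrics"
DT_INGEST_URL = "https://<env-id>.live.dynatrace.com/api/v2/metrics/ingest"
DT_TOKEN = "<api-token with metrics.ingest scope>"
DROP_LABEL = "destination_ip"  # hypothetical high-cardinality label to drop

last_totals = {}  # kept-label tuple -> last cumulative counter value

def scrape_and_forward():
    text = requests.get(EXPORTER_URL, timeout=10).text
    aggregated = defaultdict(float)

    for family in text_string_to_metric_families(text):
        for sample in family.samples:
            if sample.name != "hubble_drop_total":
                continue
            # Sum the counter over the dropped label, keep the rest.
            kept = tuple(sorted(
                (k, v) for k, v in sample.labels.items() if k != DROP_LABEL
            ))
            aggregated[kept] += sample.value

    lines = []
    for kept, total in aggregated.items():
        # Cumulative counter -> per-interval delta, so Dynatrace ingests
        # the change per minute instead of the ever-growing literal value.
        delta = total - last_totals.get(kept, total)
        last_totals[kept] = total
        if delta > 0:
            # Naive dimension formatting; quote values with special characters.
            dims = ",".join(f"{k}={v}" for k, v in kept)
            key = "hubble.drop.aggregated" + (f",{dims}" if dims else "")
            # Dynatrace metrics ingest line protocol.
            lines.append(f"{key} count,delta={delta}")

    if lines:
        requests.post(
            DT_INGEST_URL,
            headers={
                "Authorization": f"Api-Token {DT_TOKEN}",
                "Content-Type": "text/plain",
            },
            data="\n".join(lines),
            timeout=10,
        ).raise_for_status()

if __name__ == "__main__":
    while True:
        scrape_and_forward()
        time.sleep(60)
```

This keeps the per-minute delta semantics Dynatrace expects for count metrics while collapsing the label that drives the cardinality, at the cost of running one small sidecar/cron job.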
And since we are running on the DPS licensing model, this ramps up the cost considerably.
Has anyone else encountered this issue and found a good solution, so that we can keep the data in Dynatrace instead of ingesting it into Grafana?
29 Jul 2024 11:21 PM
Any updates? Also, is there a dashboard/report that can quickly let us identify the top 10 workloads sending Prometheus metrics to Dynatrace?
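I am not aware of a built-in report for this, but one possible workaround is to count data points per workload client-side via the metrics query API. A rough sketch follows; the environment URL, metric key, and the `k8s.workload.name` dimension are assumptions on my part:

```python
import requests

# Placeholders: environment URL, token, metric key, and the workload
# dimension name are all assumptions to illustrate the approach.
BASE = "https://<env-id>.live.dynatrace.com/api/v2/metrics/query"
TOKEN = "<api-token with metrics.read scope>"
SELECTOR = 'hubble_drop_total.count:splitBy("k8s.workload.name")'

resp = requests.get(
    BASE,
    headers={"Authorization": f"Api-Token {TOKEN}"},
    params={
        "metricSelector": SELECTOR,
        "from": "now-24h",
        "resolution": "1m",
    },
    timeout=30,
)
resp.raise_for_status()

# Count non-null data points per workload and print the top 10.
# (A full version would also follow nextPageKey for paging.)
counts = {}
for series in resp.json()["result"][0]["data"]:
    workload = series["dimensions"][0] if series["dimensions"] else "<none>"
    counts[workload] = sum(1 for v in series["values"] if v is not None)

for workload, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{n:>8}  {workload}")
```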