For the containerized service, everything looks fine (fine in the sense that, every time the underlying Docker container is recycled, I don't see any gaps or broken pieces in the metric graphs at the service layer).
At the host layer, however, the old containers always go into the 'unmonitored' state.
For example, as this screenshot shows, api-private-gateway gets its name incremented by one each time it recycles. Right now api-private-gateway-64-* is running. Is there any way I can make the containers before 64 disappear? (Or, if they are going to stay there, can their state change from 'unmonitored' to something more... meaningful?)
There is a new method where you pass an event within 60 minutes of the recycle, which then keeps you from being alerted when a host is spun down in cloud environments. You can read more about it here: https://www.dynatrace.com/news/blog/new-event-type-helps-avoid-unnecessary-alerts-for-planned-host-d...
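In case a concrete sketch helps: posting such an event goes through the Events API (v1, `POST /api/v1/events`). Everything below is an assumption-laden illustration, not the exact recipe from the blog post: the tenant URL, token, and host entity ID are placeholders, and I'm using `CUSTOM_ANNOTATION` as a stand-in event type since the exact type for planned host shutdown is described in the linked post.

```python
import json
import time

# Hypothetical tenant URL and API token -- replace with your own.
DYNATRACE_URL = "https://example.live.dynatrace.com"
API_TOKEN = "dt0c01.EXAMPLE"

def build_shutdown_event(host_entity_id: str) -> dict:
    """Build a JSON payload announcing that this host is about to be recycled.

    The event must be sent within 60 minutes of the recycle so Dynatrace
    knows the shutdown is planned and suppresses the alert.
    """
    now_ms = int(time.time() * 1000)
    return {
        # Stand-in event type; check the blog post / API docs for the
        # exact type to use for a planned host shutdown.
        "eventType": "CUSTOM_ANNOTATION",
        "start": now_ms,
        "end": now_ms,
        # Attach the event to the host that is being spun down.
        "attachRules": {"entityIds": [host_entity_id]},
        "source": "recycle-hook",
        "annotationType": "planned shutdown",
        "annotationDescription": "Container is being recycled; suppress alerts.",
    }

payload = build_shutdown_event("HOST-ABC123")  # hypothetical host entity ID
print(json.dumps(payload, indent=2))

# The actual POST from your recycle hook would look something like:
#   curl -X POST "$DYNATRACE_URL/api/v1/events" \
#        -H "Authorization: Api-Token $API_TOKEN" \
#        -H "Content-Type: application/json" \
#        -d "$PAYLOAD"
```

You'd wire this into whatever hook runs just before the container/host is torn down, so the event lands inside the 60-minute window.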
That's neat, but it doesn't really help address the "unmonitored" PaaS host spam from terminated containers/pods in an application-only monitoring scenario (i.e. OneAgent is manually deployed into the container).