16 May 2024 01:25 PM
What is the best way to alert when a specific pod does not exist (it was running but now is not)? A few metrics I have looked at are builtin:kubernetes.workload.pods_desired and builtin:kubernetes.pods:filter, but something I don't like about these is that when the pod is not running there is simply no data shown. Instead of having a value of 0, there is just no data.
I don't want to use something like default(0, always) or "alert on missing data" because to me this could be misleading. If the Kubernetes integration is severed and we are no longer collecting these metrics, then we will get a false positive: an alert will be created even though the pod is actually still running.
Is there really no metric that will simply show a value of 0 when the pod is not running, without having to manipulate the metric expression with something like default(0, always)?
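For reference, the kind of expression I am trying to avoid looks roughly like this (the dimension key and workload name in the filter are just placeholders, not values from our actual environment):

builtin:kubernetes.pods
:filter(eq("k8s.workload.name", "my-workload"))
:splitBy("k8s.workload.name")
:default(0, always)

With :default(0, always) on the end, a gap caused by a broken Kubernetes integration looks exactly the same as the pod genuinely being gone, which is the false positive I described above.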
16 May 2024 01:36 PM
Hi @sivart_89 ,
A pod is represented as a process (process group) in Dynatrace, so you can alert on its availability there. You need to go to the process group settings and do the following:
If the instance count is less than 1, you will receive an alert, as shown below:
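If you prefer to configure this through the Settings API instead of the UI, the payload looks roughly like the example below. I am writing the schema ID and field names from memory, so please verify them against GET /api/v2/settings/schemas on your environment first (the process group ID in the scope is also just a placeholder):

POST /api/v2/settings/objects
[
  {
    "schemaId": "builtin:availability.process-group-alerting",
    "scope": "PROCESS_GROUP-1234567890ABCDEF",
    "value": {
      "enabled": true,
      "alertingMode": "ON_INSTANCE_COUNT_VIOLATION",
      "minimumThreshold": 1
    }
  }
]

With the threshold at 1, an availability problem is raised as soon as fewer than one instance of the process group is running, which covers the case of the pod disappearing without depending on a gap in the metric data.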
Regards,
Esam
16 May 2024 06:05 PM
Ahh, alright, that makes sense. I was looking for a metric supplied by the K8s integration, but I suppose this will work.
16 May 2024 07:48 PM
Check this topic here; it may give you some hints on K8s monitoring best practices:
Here is a similar question that you may also check: