What is the way to add a host group to a Kubernetes/OpenShift pod?
What is your use case?
Normally pods won't appear as hosts; they only do if you use PaaS monitoring and don't install an agent on the worker node or use the operator.
There is also enough metadata for things like process group separation in k8s anyway, so host group separation shouldn't be needed for that.
Thanks for your reply. Yes, pods don't appear as hosts. For example, there are multiple worker nodes that I need to filter through a host group, but I failed to understand where I need to inject that parameter in the case of k8s.
There is no need to use host groups if it's just for filtering on specific criteria.
Have a look at Tags and Management Zones.
@pahofmann My use case is that, while deploying the OneAgent operator, the host group should be injected so it will automatically be tagged with a Management Zone, and filtering would be easier than with other filter options.
Well, host groups can only be applied to the worker nodes, not at the pod level.
But tags and management zones are pretty easy to set up. You can filter/tag based on all the criteria of the pods, like namespace, pod/container name, k8s labels already on the pods, and so on.
So very few rules should be needed.
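To illustrate the node-level point: if you do want a host group, it's injected where the OneAgent itself is configured, not on the pods. A minimal sketch via the DynaKube custom resource, assuming a classicFullStack deployment (the environment ID and host group name are placeholders, and exact field names can vary by Dynatrace Operator version, so check the CRD reference for your release):

```yaml
# Sketch only: sets a host group for all worker nodes the OneAgent runs on.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  oneAgent:
    classicFullStack:
      # --set-host-group is the OneAgent installer flag for host groups;
      # it applies at the worker-node level, never per pod.
      args:
        - --set-host-group=my-openshift-cluster
```

The pods themselves are then distinguished through the k8s metadata (namespace, workload, labels) mentioned above, not through the host group.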
Just out of curiosity, if you can share it: what would you use for the host group that is not already available in other variables like clusterId, name, etc.?
Not sure I understood the question (or if it was even directed at me), but at least one practical use case is this:
Let's say you have a wide application landscape monitored, with a mix of on-prem and cloud-based monitoring. You'll often have various rules defined based on the host group value, which is especially useful for the on-prem monitoring. Having host groups also available in k8s allows you to unify these rules and make them apply to the entire environment at once, regardless of technology. At least I personally like that approach. Of course, if you design everything from the start to specifically avoid host groups, then it's different.
@kalle_lahtinen No, it was directed at @Sujit, but I just realized the sorting was off and his reply was pretty old, not a response to my last comment 🙂
Sure, you have a point. In my experience it varies a lot from customer to customer. Outside of OpenShift/k8s, things are quite heterogeneous, and host groups are often only one step, with a lot of additional configuration needed.
For k8s, a small set of rules can be used across the entire environment and often doesn't differ much between customers.
"There is no need to use host groups, if it's just for filtering for specific criteria."
This is incorrect; host groups are used for PG detection. If you don't have host groups defined, there's a chance (well, this is something I just ran into) that a change/upgrade/restart generates completely new Process Groups and Services for your Kubernetes environment. That means all old customizations and configs which aren't rule-based are then gone. And even if all the configs are automated, I doubt anyone wants duplicate services and data; for example, to look at the past 7 days, you'd need to see days 1-3 from Service A and days 4-7 from Service B.
I would recommend defining the host group when possible.
Edit: Actually, I missed the word "if" in the part I quoted 😛 But referring to this other comment, the point still stands; host groups are good to define so that you avoid generating multiple instances of the same PG or Service: "There also is enough meta data for things like process group separation in k8s anyway, so host group separation shouldn't be needed for it."
Sure, it doesn't hurt. But I haven't seen a single PG detection issue on any OpenShift/Kubernetes cluster yet. Workload/namespace information is usually enough to differentiate them.
Yeah, the issue in my case was actually due to developers first defining host groups and then no longer using them during an Operator upgrade; that broke the PGs. If they had never been used in the first place, PG detection would likely have worked throughout this change. But anyway, it's good to note that host groups are one factor in PG detection, though perhaps not essential in the k8s world 🙂
Sure, if host groups were used at the first installation phase and then forgotten during subsequent updates, new PGs would have been created at the first restart after the host groups were applied again.
They are not strictly necessary but still useful. I recently had to use them since I had namespace/workload name duplication among different k8s clusters, which led to wrong, unified PGs.