03 Sep 2024 06:34 PM
Guys, how are you?
I have a Kubernetes cluster with 5 node pools. I would like to define my OneAgent host groups according to these node pools, so I would have 5 different host groups in this cluster. Is it possible to do this in YAML, i.e. to reference which host group gets applied to which node pool?
03 Sep 2024 07:12 PM
Maybe if I do this it should work?
oneAgent:
  cloudNativeFullStack:
    # Configuration for node pool 1
    - nodeSelector:
        nodepool: "pool1" # Selects node pool "pool1"
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      args:
        - --set-host-group=host-group-pool1
    # Configuration for node pool 2
    - nodeSelector:
        nodepool: "pool2" # Selects node pool "pool2"
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      args:
        - --set-host-group=host-group-pool2
    # Configuration for node pool 3
    - nodeSelector:
        nodepool: "pool3" # Selects node pool "pool3"
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
          operator: Exists
        - key: "CriticalAddonsOnly"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      args:
        - --set-host-group=host-group-pool3
11 Sep 2024 12:46 PM
Yes, the expected configuration is one DynaKube per node pool, with a different host group for each.
I'd only change the host group to the dedicated field, as configuration via args is deprecated. See the DynaKube parameters in the official documentation.
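For example, a rough sketch of the dedicated field (reusing the nodepool label from your YAML; the host group name is just a placeholder):
oneAgent:
  hostGroup: host-group-pool1   # dedicated field, replaces --set-host-group
  cloudNativeFullStack:
    nodeSelector:
      nodepool: "pool1"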
11 Sep 2024 02:09 PM
Hi @chrismuellner, I did a lot of testing here and this YAML doesn't work. Apparently I can't define multiple nodeSelector blocks. The big challenge is installing OneAgent across multiple node pools while passing the host group. I haven't been able to do this, and support still hasn't been able to help me or tell me whether it's possible. I've already tried putting two DynaKube resources in the same file, in separate files, installing on one node pool and then another, but nothing works.
Can you help me understand if this is possible?
I don't know if the need is clear, but my cluster has 10 node pools. I need to install the agent on all nodes while segregating the host groups according to the node pools.
Thank you
11 Sep 2024 06:45 PM - edited 11 Sep 2024 08:42 PM
Howdy!
This is 100% possible; however, it does require multiple DynaKubes, and there are some additional parameters that need to be set to deploy successfully. You will need one DynaKube custom resource for each node pool.
For reference, I've built out the below on a GKE cluster with two node pools, "default-pool" and "super-cool-nodepool":
kubectl describe nodes | grep nodepool
cloud.google.com/gke-nodepool=super-cool-nodepool
cloud.google.com/gke-nodepool=super-cool-nodepool
cloud.google.com/gke-nodepool=super-cool-nodepool
cloud.google.com/gke-nodepool=default-pool
cloud.google.com/gke-nodepool=default-pool
cloud.google.com/gke-nodepool=default-pool
To get this deployed in Cloud Native Full Stack, I'd recommend the below yaml as a template:
---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: default-pool
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  tokens: "my-token"
  skipCertCheck: false
  oneAgent:
    hostGroup: default-pool
    cloudNativeFullStack:
      namespaceSelector:
        matchLabels:
          monitor: default
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
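One thing to keep in mind: the namespaceSelector matches labels on your namespaces, so a namespace is only picked up by this DynaKube if it carries the matching label, e.g. (namespace name is just a placeholder):
kubectl label namespace my-app-namespace monitor=default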
Some notes on the "whys" here:
Depending on your use case for deploying multiple CNFS DynaKubes into a single cluster, it may be best practice to deploy dedicated ActiveGates per node pool. Note that you'll also want to add node selectors to those. For a six-node cluster with two node pools you could use a single YAML file, but they are still separate DynaKubes. Also note that in the YAML below I am generating a token secret that both DynaKube resources reference:
apiVersion: v1
data:
  apiToken: <REDACTED>
  dataIngestToken: <REDACTED>
kind: Secret
metadata:
  name: my-token
  namespace: dynatrace
type: Opaque
---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: default-pool
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  tokens: "my-token"
  skipCertCheck: false
  oneAgent:
    hostGroup: default-pool
    cloudNativeFullStack:
      namespaceSelector:
        matchLabels:
          monitor: default
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
  # Configuration for ActiveGate instances.
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
      - dynatrace-api
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1.5Gi
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: super-cool-nodepool
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  tokens: "my-token"
  skipCertCheck: false
  oneAgent:
    hostGroup: super-cool-nodepool
    cloudNativeFullStack:
      namespaceSelector:
        matchLabels:
          monitor: super-cool-namespace
      nodeSelector:
        cloud.google.com/gke-nodepool: super-cool-nodepool
  # Configuration for ActiveGate instances.
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
      - dynatrace-api
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1.5Gi
    nodeSelector:
      cloud.google.com/gke-nodepool: super-cool-nodepool
Applying this results in the corresponding resources being deployed to the cluster.
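If you want to double-check, something like this should show the DynaKubes and their OneAgent/ActiveGate pods (assuming everything lives in the dynatrace namespace):
kubectl get dynakube -n dynatrace
kubectl get pods -n dynatrace -o wide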
edit: One last note. I removed the tolerations, as it's unlikely that all of your node pools are part of the control plane, but they may be relevant to your needs. The tolerations in your previously shared YAML are correct; see the sketch below for where they would go.
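If some of your node pools are tainted and you do need them, a rough sketch of their placement (reusing the tolerations from your earlier YAML):
oneAgent:
  hostGroup: default-pool
  cloudNativeFullStack:
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"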
13 Sep 2024 01:26 PM
@kyle_harrington,
Thanks for the answer. This configuration is what I was already doing, with the exception of the namespace selector. I wasn't using it because I have the same namespace spread across different node pools. I will change the strategy. As mentioned, for this type of approach to work, using the namespace selector is mandatory; it isn't possible without it.