Deploy Dynatrace Operator | Host Group by Node pool Name

RPbiaggio
Helper

Guys, how are you?

I have a Kubernetes cluster with 5 node pools. I would like to define my OneAgent host groups according to these node pools, so I would have 5 different host groups in this cluster. Is it possible to do this in YAML, i.e., to reference which host group gets applied to which node pool?

5 REPLIES

RPbiaggio
Helper

Maybe something like this would work?

  oneAgent:
    cloudNativeFullStack:
      # Configuration for node pool 1
      - nodeSelector:
          nodepool: "pool1"  # selects node pool "pool1"
        tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
            operator: Exists
          - effect: NoSchedule
            key: node-role.kubernetes.io/control-plane
            operator: Exists
          - key: "CriticalAddonsOnly"
            operator: "Equal"
            value: "true"
            effect: "NoSchedule"
        args:
          - --set-host-group=host-group-pool1

      # Configuration for node pool 2
      - nodeSelector:
          nodepool: "pool2"  # selects node pool "pool2"
        tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
            operator: Exists
          - effect: NoSchedule
            key: node-role.kubernetes.io/control-plane
            operator: Exists
          - key: "CriticalAddonsOnly"
            operator: "Equal"
            value: "true"
            effect: "NoSchedule"
        args:
          - --set-host-group=host-group-pool2

      # Configuration for node pool 3
      - nodeSelector:
          nodepool: "pool3"  # selects node pool "pool3"
        tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
            operator: Exists
          - effect: NoSchedule
            key: node-role.kubernetes.io/control-plane
            operator: Exists
          - key: "CriticalAddonsOnly"
            operator: "Equal"
            value: "true"
            effect: "NoSchedule"
        args:
          - --set-host-group=host-group-pool3

 

Yes, the expected configuration is one DynaKube per node pool, with a different host group for each.

I'd only change the host group to the dedicated field, as configuration via args is deprecated. See the DynaKube parameters in the official documentation.
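
For example, a DynaKube per pool along these lines should do it (just a sketch; the name, apiUrl, host group, and node label are placeholders taken from your snippet):

apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: pool1
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  oneAgent:
    # dedicated field instead of the deprecated "--set-host-group" arg
    hostGroup: host-group-pool1
    cloudNativeFullStack:
      nodeSelector:
        nodepool: "pool1"  # whatever label identifies the pool on your nodes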

Hi @chrismuellner, I did a lot of testing here and this YAML does not work. Apparently I can't define multiple nodeSelector blocks. The big challenge is being able to install OneAgent on all the node pools while passing a different host group for each. I haven't been able to do this, and support still hasn't been able to help me or tell me whether it's possible. I've already tried putting two DynaKube resources in the same file, in separate files, and installing on one node pool and then another, but nothing works.

Can you help me understand if this is possible?

To make the need clear: my cluster has 10 node pools. I need to install the agent on all nodes while segregating the host groups according to the node pools.

Thank you



kyle_harrington
Dynatrace Enthusiast

Howdy!

 

This is 100% possible; however, it does require multiple DynaKubes, and there are some additional parameters that need to be set to deploy successfully. You will need one DynaKube custom resource for each node pool.

For reference, I've built out the example below on a GKE cluster with two node pools, "default-pool" and "super-cool-nodepool":

kubectl describe nodes | grep nodepool
                    cloud.google.com/gke-nodepool=super-cool-nodepool
                    cloud.google.com/gke-nodepool=super-cool-nodepool
                    cloud.google.com/gke-nodepool=super-cool-nodepool
                    cloud.google.com/gke-nodepool=default-pool
                    cloud.google.com/gke-nodepool=default-pool
                    cloud.google.com/gke-nodepool=default-pool
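
If you're not on GKE, your nodes may carry a different pool label (or none at all), so check what's already there and add your own if needed. The label key/value below are just a hypothetical example:

kubectl get nodes --show-labels | grep -i pool   # see which pool labels your distribution applies
kubectl label nodes <node-name> nodepool=pool1   # add a custom label if nothing suitable exists (hypothetical key/value)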

To get this deployed in Cloud Native Full Stack, I'd recommend the below yaml as a template:

---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: default-pool
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  tokens: "my-token"
  skipCertCheck: false
  oneAgent:
    hostGroup: default-pool
    cloudNativeFullStack:
      namespaceSelector:
        matchLabels:
          monitor: default 
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool

Some notes on the "whys" here:

  1. "name:" needs to be unique to the dynakube CRD as each nodepool requires a different configuration
  2. "token:" you can reuse the same token/ secret in the same cluster, but it must be called out via this parameter. if it is not the CRD will look for a token with the same value set in "name:" In my example i am using one token for both CRDS, but you could use a unique token for each CRD/ node pool.
  3. "hostGroup:" This is the recommended yaml syntax over "args" as called out by @chrismuellner 
  4. "nameSpaceSelector:" Setting some unique form of APM injection in each dynakube CRD is required when leveraging multiple dynakubes in Cloud Native Full Stack or Application Only Mode. These two modes use our CSI drivers by default to perform APM injection, if there are injection rules which  overlap or conflict the dynatrace operator will reject the dynakube. please note that my example would look for namespaces with the "default" label, not the "Default" namespace.

     

    Depending on your use case for deploying multiple CNFS DynaKubes into a single cluster, it may be best practice to deploy dedicated ActiveGates per node pool; note that you'll want to add node selectors to those as well. For a six-node cluster with two node pools you could use a single YAML file, but they are still separate DynaKubes. Note that in my YAML below I am also generating the token secret, which both DynaKubes are using:

apiVersion: v1
data:
  apiToken: <REDACTED>
  dataIngestToken: <REDACTED>
kind: Secret
metadata:
  name: my-token
  namespace: dynatrace
type: Opaque
---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: default-pool
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  tokens: "my-token"
  skipCertCheck: false
  oneAgent:
    hostGroup: default-pool
    cloudNativeFullStack:
      namespaceSelector:
        matchLabels:
          monitor: default 
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
  # Configuration for ActiveGate instances.
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
      - dynatrace-api
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1.5Gi
    nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
---
apiVersion: dynatrace.com/v1beta2
kind: DynaKube
metadata:
  name: super-cool-nodepool
  namespace: dynatrace
spec:
  apiUrl: https://{environmentid}.live.dynatrace.com/api
  tokens: "my-token"
  skipCertCheck: false
  oneAgent:
    hostGroup: super-cool-nodepool
    cloudNativeFullStack:
      namespaceSelector:
        matchLabels:
          monitor: super-cool-namespace
      nodeSelector:
        cloud.google.com/gke-nodepool: super-cool-nodepool
  # Configuration for ActiveGate instances.
  activeGate:
    capabilities:
      - routing
      - kubernetes-monitoring
      - dynatrace-api
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 1000m
        memory: 1.5Gi
    nodeSelector:
        cloud.google.com/gke-nodepool: super-cool-nodepool

Which results in the below resources being deployed to the cluster:

[screenshot: DynaKube resources deployed to the cluster]
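
You can confirm the same thing from the CLI if you prefer; once the operator is installed, the DynaKubes are queryable like any other custom resource, and the NODE column shows that each OneAgent pod landed on its own pool's nodes:

kubectl -n dynatrace get dynakube
kubectl -n dynatrace get pods -o wide   # NODE column shows which pool each pod is running on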

Edit: one last note. I removed the tolerations, as it's unlikely all of your node pools are part of the control plane, but they may be relevant to your needs. The tolerations in your previously shared YAML are correct.

@kyle_harrington,
Thanks for the answer. This configuration is what I was already doing, with the exception of the namespaceSelector; I wasn't using it because the same namespace runs across different node pools. I will change my strategy, since, as you mentioned, this approach requires the namespaceSelector, and that isn't possible with my current layout.
