Will Dynatrace OneAgent with Golang technology enabled interfere with Kubernetes components (written in Go) when monitoring apps written in Go?

bhantol
Participant

OneAgent version: 1.29.x

We use the Dynatrace Operator initContainer method to provision our pods with full-stack monitoring. I can see that all the technology stack libraries (.so files) are present and LD_PRELOAD is also correctly set.

 

But Golang has been disabled (intentionally not selected) from Dynatrace monitoring because it might interfere with "other" things.

One example is the istio sidecar proxy container running in the same pod. Say we don't want this container to be monitored, but we do want our app containers monitored (their names can be anything, depending on what teams choose - there are 1000s of them, and some are Golang based).

Because of this, Golang has been turned off, but we would like to support it.

 

Is there any way to override the supported technologies via an annotation or something similar?

Or

Is this a limitation of what we can do in a Kubernetes environment, meaning Dynatrace does not support this use case?

 

12 Replies

gopher
Pro

@bhantol ,
You can't exclude based on technology, but there are a couple of approaches that you can take to help minimise this.

The first is an opt-in approach (monitor nothing by default).
This is done by setting up a webhook label and only applying that label to the namespaces you want monitored (see Configure monitoring for namespaces and pods - Dynatrace Docs). This way, only workloads that require Dynatrace get it. It is a little more work for deployment teams, but they can add it to their build templates.
They can also go fine-grained on specific deployments and/or containers; see the sketch below.
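For illustration, a minimal sketch of that opt-in wiring, assuming a DynaKube API version whose namespaceSelector accepts a standard label selector; the label key/value (monitoring: dynatrace) and the exact field placement are placeholders to adapt to your operator version:

    # DynaKube excerpt (sketch) - inject only into namespaces carrying the opt-in label
    namespaceSelector:
      matchLabels:
        monitoring: dynatrace        # hypothetical label key/value
    ---
    # Namespace that opts in to OneAgent injection
    apiVersion: v1
    kind: Namespace
    metadata:
      name: myteam-myproject-dev
      labels:
        monitoring: dynatrace

Teams owning Go services would add the label in their build/deployment templates, while istio-system and the other platform namespaces simply never receive it.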
  
The second is an opt-out approach (monitor everything by default).

In the Dynakube CRD, you can set up a rule like the one below for metadata enrichment and OneAgent (both need to be the same, or else injection will occur anyway):
    ---
    ### Excluded namespaces - must be added to metadata enrichment as well ###
    namespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values:
            # kubernetes
            - calico-apiserver
            - calico-system
            - cluster-operation
            - default
            - kube-node-lease
            - kube-public
            - kube-system
            - tigera-operator
            # container squad
            - gatekeeper-system
            - istio-system
            - tlsrouter
            # Services and Agents
            - crossplane-system
            # application namespaces
            - namespace-A

This will then exclude monitoring for the k8s critical components (where most of the Go is) and anything else you might not want monitored, whilst monitoring all other workloads by default. This will also help protect your platform from crashing (something we have had outstanding issues with when the oneagent-init container fails - e.g. a proxy issue).

I haven't seen any issues with the istio-proxy sidecar getting instrumented, but if you want to exclude it (same as any other deployment), you can always add the relevant annotation 'oneagent.dynatrace.com/inject: "false"' to the deployment or, in the case of istio, to the istio mesh configmap.
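As a sketch of that exclusion (all names below are placeholders), note that the annotation has to sit on the pod template metadata, not on the Deployment object itself, so the injection webhook sees it:

    # Deployment sketch - excludes its pods from OneAgent injection
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: some-go-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: some-go-app
      template:
        metadata:
          labels:
            app: some-go-app
          annotations:
            oneagent.dynatrace.com/inject: "false"   # webhook skips this pod
        spec:
          containers:
            - name: some-go-app
              image: registry.example.org/some-go-app:latest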

Hope this helps. 

We were given the environment variable DT_AGENT_TYPE_OVERRIDE='go=on'

With the opt-in option, but for some reason the agent is Inactivated. The logs are not clear as to why.

Is there a code base that we can refer to for understanding these logs?

 

This did not work with the opt-in strategy while Go was disabled globally.

We also tried dt_agent_type_override=go=on, but no luck.

 

The Dynatrace UI still does not show this host / k8s container being monitored.
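For reference, this is roughly how that override is wired into the app container - a sketch mirroring the describe output further down; DT_AGENT_TYPE_OVERRIDE is not a documented variable, it was supplied by Dynatrace support:

    # Container excerpt from the myapp Deployment (sketch)
    containers:
      - name: myapp
        image: docker-snapshot.example.org/my-company/myteam/my-team/my-app:DEV.66.20241129
        env:
          - name: DT_AGENT_TYPE_OVERRIDE   # support-provided, undocumented override
            value: "go=on"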

@bhantol please provide the deployment type you are using in k8s, and the kubectl describe pod output.

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Please find the kubectl describe pod output and a link to the post with the agent logs further down.

k describe po

k describe po myapp-deployment-f74b7d587-889x5
Name:             myapp-deployment-f74b7d587-889x5
Namespace:        myteam-myproject-dev
Priority:         0
Service Account:  default
Node:             ip-1-1-1-1.ec2.internal/1.1.1.1
Start Time:       Fri, 29 Nov 2024 14:47:29 -0500
Labels:           app=myapp
                  myproject/protectRoutesByAuth=true
                  k8s.example.org/cost-center=myteam
                  k8s.example.org/environment=dev
                  k8s.example.org/project=myproject
                  pod-template-hash=f74b7d587
                  security.istio.io/tlsMode=istio
                  service.istio.io/canonical-name=myapp
                  service.istio.io/canonical-revision=latest
Annotations:      dynakube.dynatrace.com/injected: true
                  istio.io/rev: default
                  jenkins.example.org/build-initiator: myemail@example.org
                  jenkins.example.org/build-number: 66
                  jenkins.example.org/build-timestamp: 2024-11-29 14:44:05 EST
                  jenkins.example.org/build-url: https://jenkins-myteam.example.org/job/myteam/job/myproject-eks/job/dev/job/my-app/66/
                  k8s.example.org/build-system: jenkins.example.org
                  kubectl.kubernetes.io/default-container: myapp
                  kubectl.kubernetes.io/default-logs-container: myapp
                  kubernetes.io/limit-ranger:
                    LimitRanger plugin set: ephemeral-storage request for container istio-proxy; ephemeral-storage limit for container istio-proxy; ephemeral-...
                  oneagent.dynatrace.com/injected: true
                  prometheus.io/path: /stats/prometheus
                  prometheus.io/port: 15020
                  prometheus.io/scrape: true
                  proxy.istio.io/config: holdApplicationUntilProxyStarts: true
                  sidecar.istio.io/interceptionMode: REDIRECT
                  sidecar.istio.io/proxyCPU: 200m
                  sidecar.istio.io/proxyCPULimit: 200m
                  sidecar.istio.io/proxyMemory: 128Mi
                  sidecar.istio.io/proxyMemoryLimit: 128Mi
                  sidecar.istio.io/rewriteAppHTTPProbers: false
                  sidecar.istio.io/status:
                    {"initContainers":["istio-validation"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","ist...
                  traffic.sidecar.istio.io/excludeInboundPorts: 15020
                  traffic.sidecar.istio.io/excludeOutboundIPRanges:
                    REDACTED
                  traffic.sidecar.istio.io/includeInboundPorts: *
                  traffic.sidecar.istio.io/includeOutboundIPRanges: *
Status:           Running
IP:               REDACTED-IPv4
IPs:
  IP:           REDACTED-IPv4
Controlled By:  ReplicaSet/myapp-deployment-f74b7d587
Init Containers:
  istio-validation:
    Container ID:  containerd://9e1352707b7862062699283d29985138c820255c194c1b081a966e1ab9c64dba
    Image:         docker-trusted.example.org/docker.io/istio/proxyv2:1.20.6
    Image ID:      docker-trusted.example.org/docker.io/istio/proxyv2@sha256:34dc53d688bea0394d0a4906577ea394094e6fb217693790a8e12a687ce06215
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x
      REDACTED-IPv4
      -b
      *
      -d
      15090,15021,15020
      --log_output_level=default:info
      --log_as_json
      --run-validation
      --skip-rule-apply
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 29 Nov 2024 14:47:45 -0500
      Finished:     Fri, 29 Nov 2024 14:47:46 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                200m
      ephemeral-storage:  3Gi
      memory:             128Mi
    Requests:
      cpu:                200m
      ephemeral-storage:  5Mi
      memory:             128Mi
    Environment:          <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txk89 (ro)
  install-oneagent:
    Container ID:  containerd://e3959038eeba6de46a0ee67c955a0ce4159dd2ad237a6d07fdda203c0e46b15c
    Image:         docker-trusted.example.org/docker.io/dynatrace/dynatrace-operator:v0.15.0
    Image ID:      docker-trusted.example.org/docker.io/dynatrace/dynatrace-operator@sha256:ab7302bb4d2c2a9b2193719508f46b8412b167ed7f4063002a73920cfe01fb57
    Port:          <none>
    Host Port:     <none>
    Args:
      init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 29 Nov 2024 14:47:46 -0500
      Finished:     Fri, 29 Nov 2024 14:47:47 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                50m
      ephemeral-storage:  3Gi
      memory:             100Mi
    Requests:
      cpu:                50m
      ephemeral-storage:  5Mi
      memory:             100Mi
    Environment:
      FAILURE_POLICY:       silent
      K8S_PODNAME:          myapp-deployment-f74b7d587-889x5 (v1:metadata.name)
      K8S_PODUID:            (v1:metadata.uid)
      K8S_BASEPODNAME:      myapp-deployment-f74b7d587
      K8S_CLUSTER_ID:       f28ff00a-5724-464a-8da2-e7733aae1c5d
      K8S_NAMESPACE:        myteam-myproject-dev (v1:metadata.namespace)
      K8S_NODE_NAME:         (v1:spec.nodeName)
      FLAVOR:
      TECHNOLOGIES:         all
      INSTALLPATH:          /opt/dynatrace/oneagent-paas
      INSTALLER_URL:
      VERSION:              1.303.50.20241118-133432
      MODE:                 provisioned
      CSI_VOLUME_READONLY:  true
      ONEAGENT_INJECTED:    true
      CONTAINERS_COUNT:     2
      CONTAINER_1_NAME:     myapp
      CONTAINER_1_IMAGE:    docker-snapshot.example.org/my-company/myteam/my-team/my-app:DEV.66.20241129
      CONTAINER_2_NAME:     istio-proxy
      CONTAINER_2_IMAGE:    docker-trusted.example.org/docker.io/istio/proxyv2:1.20.6
    Mounts:
      /mnt/agent-conf from oneagent-agent-conf (rw)
      /mnt/bin from oneagent-bin (rw)
      /mnt/config from injection-config (rw)
      /mnt/share from oneagent-share (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txk89 (ro)
Containers:
  istio-proxy:
    Container ID:  containerd://9548cb8910afdd5366ab5579d42356ad9052356c8e4e3f09f536d82c4cf0c43d
    Image:         docker-trusted.example.org/docker.io/istio/proxyv2:1.20.6
    Image ID:      docker-trusted.example.org/docker.io/istio/proxyv2@sha256:34dc53d688bea0394d0a4906577ea394094e6fb217693790a8e12a687ce06215
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --log_as_json
    State:          Running
      Started:      Fri, 29 Nov 2024 14:47:48 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                200m
      ephemeral-storage:  3Gi
      memory:             128Mi
    Requests:
      cpu:                200m
      ephemeral-storage:  5Mi
      memory:             128Mi
    Readiness:            http-get http://:15021/healthz/ready delay=1s timeout=3s period=2s #success=1 #failure=30
    Startup:              http-get http://:15021/healthz/ready delay=0s timeout=3s period=1s #success=1 #failure=600
    Environment:
      JWT_POLICY:                    third-party-jwt
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      myapp-deployment-f74b7d587-889x5 (v1:metadata.name)
      POD_NAMESPACE:                 myteam-myproject-dev (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      ISTIO_CPU_LIMIT:               1 (limits.cpu)
      PROXY_CONFIG:                  {"holdApplicationUntilProxyStarts":true}

      ISTIO_META_POD_PORTS:          [
                                         {"containerPort":80,"protocol":"TCP"}
                                     ]
      ISTIO_META_APP_CONTAINERS:     myapp
      GOMEMLIMIT:                    134217728 (limits.memory)
      GOMAXPROCS:                    1 (limits.cpu)
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_NODE_NAME:           (v1:spec.nodeName)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      myapp-deployment
      ISTIO_META_OWNER:              kubernetes://apis/apps/v1/namespaces/myteam-myproject-dev/deployments/myapp-deployment
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
      DT_DEPLOYMENT_METADATA:        orchestration_tech=Operator-application_monitoring;script_version=snapshot;orchestrator_id=f28ff00a-5724-464a-8da2-e7733aae1c5d
      LD_PRELOAD:                    /opt/dynatrace/oneagent-paas/agent/lib64/liboneagentproc.so
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /etc/ld.so.preload from oneagent-share (rw,path="ld.so.preload")
      /opt/dynatrace/oneagent-paas from oneagent-bin (rw)
      /opt/dynatrace/oneagent-paas/agent/conf from oneagent-agent-conf (rw)
      /opt/dynatrace/oneagent-paas/datastorage from oneagent-data-storage (rw)
      /opt/dynatrace/oneagent-paas/log from oneagent-log (rw)
      /var/lib/dynatrace/oneagent/agent/config/container.conf from oneagent-share (rw,path="container_istio-proxy.conf")
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/credential-uds from credential-socket (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txk89 (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
      /var/run/secrets/workload-spiffe-uds from workload-socket (rw)
  myapp:
    Container ID:  containerd://0fcef422605b020c16fac7bde9af313b53e454f0453dbaba2741d410c8b50a4e
    Image:         docker-snapshot.example.org/my-company/myteam/my-team/my-app:DEV.66.20241129
    Image ID:      docker-snapshot.example.org/my-company/myteam/my-team/my-app@sha256:e082c5819fbda64b346ea8d202ea8736932df3c2b93be02c39d893beed0500fe
    Port:          80/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
    Args:
      -l
      -c
      netstat -tulpen && env && /app/start.sh
    State:          Running
      Started:      Fri, 29 Nov 2024 14:47:50 -0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                375m
      ephemeral-storage:  3Gi
      memory:             512Mi
    Requests:
      cpu:                375m
      ephemeral-storage:  5Mi
      memory:             512Mi
    Liveness:             http-get http://:80/liveness delay=0s timeout=3s period=10s #success=1 #failure=3
    Readiness:            http-get http://:80/readiness delay=0s timeout=3s period=10s #success=1 #failure=3
    Startup:              http-get http://:80/readiness%3Fstartup delay=10s timeout=3s period=10s #success=1 #failure=5
    Environment Variables from:
      myapp-configmap  ConfigMap  Optional: false
    Environment:
      DT_AGENT_TYPE_OVERRIDE:  go=on
      DT_DEPLOYMENT_METADATA:  orchestration_tech=Operator-application_monitoring;script_version=snapshot;orchestrator_id=f28ff00a-5724-464a-8da2-e7733aae1c5d
      LD_PRELOAD:              /opt/dynatrace/oneagent-paas/agent/lib64/liboneagentproc.so
    Mounts:
      /etc/myapp from myapp-jwks (ro)
      /etc/ld.so.preload from oneagent-share (rw,path="ld.so.preload")
      /etc/nginx/nginx.conf from nginx-conf (rw,path="nginx.conf")
      /etc/s3 from s3-shared-style-creds-secret (ro)
      /opt/dynatrace/oneagent-paas from oneagent-bin (rw)
      /opt/dynatrace/oneagent-paas/agent/conf from oneagent-agent-conf (rw)
      /opt/dynatrace/oneagent-paas/datastorage from oneagent-data-storage (rw)
      /opt/dynatrace/oneagent-paas/log from oneagent-log (rw)
      /var/lib/dynatrace/oneagent/agent/config/container.conf from oneagent-share (rw,path="container_myapp.conf")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txk89 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  workload-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  credential-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  workload-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  myapp-jwks:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  myapp-secret
    Optional:    false
  nginx-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      myapp-nginx-conf-configmap-5tm5dbd7hk
    Optional:  false
  s3-shared-style-creds-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  s3-shared-style-creds-secret
    Optional:    false
  kube-api-access-txk89:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  injection-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dynatrace-dynakube-config
    Optional:    false
  oneagent-bin:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            csi.oneagent.dynatrace.com
    FSType:
    ReadOnly:          true
    VolumeAttributes:      dynakube=rafay
                           mode=app
  oneagent-share:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  oneagent-agent-conf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  oneagent-data-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  oneagent-log:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:   <unset>
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

agent logs

Other details are in the logs:

https://community.dynatrace.com/t5/Open-Q-A/Using-DT-AGENT-TYPE-OVERRIDE-go-on-to-circumvent-global/...

 

@bhantol ,

If you're saying that Go is not being injected, then it is either:
1. The way the Go app has been built is not compatible:

[screenshot: gopher_1-1733172663052.png]

2. Built In Process Monitoring Rules (the legacy rule)

"https://{environmentid}.live.dynatrace.com/ui/settings/builtin:process.built-in-process-monitoring-rule"

[screenshot: gopher_0-1733172612720.png]


There is really not much else to monitoring Go apps. Dynatrace will do it by default unless it is disabled or the app is not built in a way that Dynatrace can instrument (in which case a reason is shown on the process view or the Deployment Status screen).
Thanks 

The app is dynamically linked - checked with the 'file' utility.

 

Is there anything in the logs that looks weird? I have pasted the link to them in another reply.

 

I mean, if there was a problem injecting, the logs should state what the problem is.

 

Also, we upgraded the DT agent to 1.303.

Static analysis is disabled as per the logs, but the app is dynamically built using the standard Go toolchain.

 

Julius_Loman
DynaMight Legend

@bhantol please see this: https://docs.dynatrace.com/docs/shortlink/annotate

 


By default, Dynatrace Operator injects OneAgent into all namespaces, except for:

Namespaces prefixed with kube- or openshift-.
The namespace where Dynatrace Operator was installed.

You can safely turn on Golang instrumentation unless your apps run in the kube-* / openshift-* namespaces, and you should not run your custom workloads in those anyway.

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

The technology type Go has been globally disabled.

 

Hence the DT override:

DT_AGENT_TYPE_OVERRIDE='go=on'

In that case, I am again asking: is the source codebase available so we can understand this, and why does the Dynatrace UI not show monitoring for this Kubernetes workload?

The logs are unclear as to whether the agent is working.

 

The Dynatrace UI Settings -> Status does not show this working either.

If you see an agent version in the process group instance properties, the code module is injected. Injection might not happen for various reasons. What deployment mode do you use in your DynaKube?

Certified Dynatrace Master | Alanata a.s., Slovakia, Dynatrace Master Partner

Apologies for the multiple posts, but our configuration and the logs I captured are in this post: https://community.dynatrace.com/t5/Container-platforms/Will-Dynatrace-OneAgent-interfere-with-Golang...

Basically, we are disabling Go globally but opting it back in using the special environment variable linked in the post.

 

In short, this is in a userland namespace, and the Dynatrace install-oneagent init container installs the oneagent-paas related data; the linked logs were captured from /opt/dynatrace/oneagent-paas/agent/go/ruxit.....log

 

The logs indicate the version, but I am not able to understand them because I don't have the source code.

 

Wondering if you have any insights into the logs - for example, why does it say the agent is Inactivated?
