18 Sep 2024 05:57 PM - last edited on 19 Sep 2024 07:58 AM by MaciejNeumann
We have a private EKS cluster in AWS, so we set up an environment ActiveGate to act as the proxy between the EKS cluster and the SaaS environment. We opted for an offline deployment by pulling and pushing the images, and all the pods are running (we pointed the DynaKube at the ActiveGate in dynakube.yaml: "apiUrl: https://<activegate-host>:9999/e/<environment-id>/api"), and the Kubernetes cluster shows up in the Dynatrace UI. But after about 15 minutes, the DynaKube ActiveGate (containerized ActiveGate) goes offline in the Dynatrace UI without its pod being down, and Kubernetes monitoring stops as well, while the DynaKube OneAgents stay online.
That being said, the cluster can communicate with the external ActiveGate on port 9999, and the external ActiveGate can reach the Dynatrace SaaS cluster.
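For example, one way to sanity-check that connectivity from inside the cluster is a throwaway curl pod (just a sketch; the curlimages/curl image and the pod name are assumptions, and the placeholders are the same ones as above):
# Run a temporary pod and hit the ActiveGate API endpoint on 9999
kubectl run ag-connectivity-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sk -o /dev/null -w "%{http_code}\n" \
  "https://<activegate-host>:9999/e/<environment-id>/api"
Even an HTTP error code like 401 here would still show that the endpoint is reachable at the network level.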
Can anybody relate to this issue?
thank you for your time,
regards
19 Sep 2024 01:37 AM
Hi @Lamiaa
Have you tried looking into the ActiveGate logs to see if you can find something?
kubectl logs dynakube-activegate-0 -n dynatrace
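If the full log is too noisy, you could narrow it down to connection-related lines (the grep pattern below is just a suggestion, not an official filter):
# Filter the ActiveGate log for likely connectivity problems
kubectl logs dynakube-activegate-0 -n dynatrace | grep -iE "error|unable|timeout|proxy"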
The command below also provides some useful troubleshooting info:
kubectl exec deploy/dynatrace-operator -n dynatrace -- dynatrace-operator troubleshoot
19 Sep 2024 02:26 AM
Hey Lamiaa, you could also try looking into some self-monitoring metrics for ActiveGates, such as dsfm:active_gate.kubernetes.api.query_count:splitBy(status_code,status_reason,path), which will show failing Kubernetes API requests. There are some other useful dimensions as well. It can be a good place to start.
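If you prefer pulling that metric programmatically rather than through the UI, something along these lines against the Metrics API v2 should work (a sketch; it assumes an API token with the metrics.read scope and the standard SaaS hostname pattern):
# Query the ActiveGate self-monitoring metric for the last 2 hours
curl -sG "https://<environment-id>.live.dynatrace.com/api/v2/metrics/query" \
  -H "Authorization: Api-Token <api-token>" \
  --data-urlencode "metricSelector=dsfm:active_gate.kubernetes.api.query_count:splitBy(status_code,status_reason,path)" \
  --data-urlencode "from=now-2h" \
  --data-urlencode "resolution=5m"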