I have a question. We have an on-prem Dynatrace Managed cluster. It seems like some of our OneAgents are indicating they are not consuming.
Half of our systems show this when you go to the OneAgent status page. As a result, our host unit consumption went down by 100 units, and we are wondering why they are not currently consuming...
Host is monitored.
Host units: 0.5 (currently not consuming)
IP addresses: 999.999.999.999
Monitoring mode: Full stack
Currently used ActiveGates: activeagte1.adserver.com, activegate2.adserver.com
Any chance your ActiveGates have issues connecting to the Dynatrace cluster, or the OneAgents connecting to the ActiveGates? I'd recommend looking at the AG and OA logs to see if there are any connection issues. In classic licensing, OneAgents consume license units after being connected for 5 minutes.
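If it helps anyone doing the same triage: a quick way to follow this suggestion is to scan the OneAgent and ActiveGate logs for connection failures. A minimal sketch below — the log directories and the error keywords are assumptions, so adjust them to your installation:

```python
import re
from pathlib import Path

# Assumed default log locations on Linux; adjust for your install.
LOG_DIRS = [
    Path("/var/log/dynatrace/oneagent"),     # OneAgent logs (assumed path)
    Path("/var/lib/dynatrace/gateway/log"),  # ActiveGate logs (assumed path)
]

# Keywords that typically indicate connectivity trouble (assumed patterns).
PATTERN = re.compile(
    r"connection (refused|timed out|reset)|unable to connect",
    re.IGNORECASE,
)

def scan_logs(lines):
    """Return only the lines that look like connection errors."""
    return [line for line in lines if PATTERN.search(line)]

if __name__ == "__main__":
    for log_dir in LOG_DIRS:
        files = log_dir.glob("*.log") if log_dir.exists() else []
        for log_file in files:
            hits = scan_logs(log_file.read_text(errors="ignore").splitlines())
            for hit in hits:
                print(f"{log_file}: {hit}")
```

If this prints nothing on both the AG and the affected hosts, connectivity is probably not the culprit.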
Thank you Julius for your input. We were wondering the same ourselves. I restarted the Dynatrace Gateway service on two of the six AGs we have; that did not seem to make a difference. If there were a communication issue between the AGs and the managed node, wouldn't we see an error on the ActiveGates page under Deployment status in the CMC? We do not see any errors there; all the ActiveGates report as healthy.

I recently ran the OneAgent update under Settings > Updates. We do not allow any updates to run unattended; normally we have to submit an internal change ticket, select a window, and process the update within that window. I noticed some agents are stuck in a status of "instances are shutting down." If this persists we will probably submit an incident to Dynatrace ONE.
Just to follow up here. We had a planned cluster upgrade. After the upgrade, some agents were still showing "currently not consuming" host units. I checked the update settings and found that we were set to upgrade during a maintenance window scheduled for a specific day of the month, with the default agent version set to use the latest default version.

Instead of keeping those settings, we disabled automatic updates (selected manual/no automatic updates), and instead of "use the latest standard OneAgent" we pinned a specific agent version, which we chose to be the latest version at that time. I then created an actual maintenance window under the cluster settings and manually started OneAgent updates for AIX, Linux, and Windows one at a time, waiting for each to complete.

All the agents that could be updated were updated. The Kubernetes containers show that their OneAgents are suppressed from updates, and some *nix agents need more free space in their respective folders so they were not updated, but we can address those through manual remediation. It took some trial and error, but I do not believe it was a network issue; there seems to be a nuance in how OneAgent updates are configured, and some technique is learned from the experience. TY.
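For the *nix agents that failed to update due to insufficient space, a simple free-space precheck before the next update round can save a failed attempt. A sketch, assuming a Linux-style install directory and a hypothetical headroom figure — verify the actual space requirement against the Dynatrace docs for your agent version:

```python
import shutil
from pathlib import Path

# Assumed default OneAgent install directory on Linux; adjust per host.
AGENT_DIR = Path("/opt/dynatrace/oneagent")
REQUIRED_MB = 2048  # hypothetical headroom; check documented requirements

def free_mb(path):
    """Free space in MiB on the filesystem holding `path`."""
    # Walk up to the nearest existing parent so the check still works
    # if the agent directory itself is absent on this host.
    p = Path(path)
    while not p.exists():
        p = p.parent
    return shutil.disk_usage(p).free // (1024 * 1024)

def ready_for_update(path=AGENT_DIR, required_mb=REQUIRED_MB):
    """True if the filesystem has at least `required_mb` MiB free."""
    return free_mb(path) >= required_mb

if __name__ == "__main__":
    status = "OK" if ready_for_update() else "needs more space"
    print(f"{AGENT_DIR}: {free_mb(AGENT_DIR)} MiB free ({status})")
```

Running this across the affected hosts (e.g. via your existing config management) gives a quick list of which ones need cleanup before you retry the update.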