We're working on upgrading our Kubernetes clusters to 1.25, and part of that upgrade involves upgrading our tooling as well. We have the latest charts for 9.0, the operator at 9.0, and the DynaKube agents at 1.71.
Everything on the Dynatrace side is running and we're getting metrics back. The problem we're hitting now is with other pods failing to start in our cluster.
We use resource quotas in our namespaces, and we already have resource limits specified as well. I'm currently working with our DT resource on this, but I wanted to reach out to the community to see if anyone else has come across this recently and what their solution was. Thank you.
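For context, this is the kind of per-namespace quota setup I mean — a hypothetical sketch, with placeholder names and values. When a namespace has a quota on `limits.cpu`/`limits.memory`, the API server rejects any pod whose containers (including injected init containers) don't declare resource limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: my-app        # placeholder namespace
spec:
  hard:
    # Every container in the namespace must declare limits/requests,
    # and the totals must stay under these caps.
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.cpu: "4"
    requests.memory: 8Gi
```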
So, we found the solution to this with the help of our DT rep.
We needed to specify resource limits in the initResources block of the DynaKube spec for the agents. Once that was done and redeployed to the cluster, the troublesome pods that previously would not start are now working properly.
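For anyone else hitting this, here's a sketch of the relevant DynaKube fragment — assuming cloudNativeFullStack mode; the name, namespace, URL, and resource values are placeholders you'd adjust for your environment:

```yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api   # placeholder
  oneAgent:
    cloudNativeFullStack:
      # Resources for the injected OneAgent init container. Without
      # limits here, namespaces enforcing a ResourceQuota reject the
      # instrumented pods at admission.
      initResources:
        requests:
          cpu: 30m
          memory: 30Mi
        limits:
          cpu: 100m
          memory: 60Mi
```

The key point is that the quota applies to the injected init container too, so its limits have to be set on the DynaKube side, not just on your own workloads.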