My OpenShift architecture is composed of 9 nodes.
I've deployed OneAgent following the instructions (OpenShift Installation). My problem is that OneAgent, installed on an infrastructure node, tries to deploy a OneAgent pod onto the application nodes. This conflicts with a rule stating that an infrastructure node cannot start pods on the application nodes.
Is there any way I can configure OneAgent on OpenShift so it only creates pods on tainted nodes? For example, configuring the OneAgent installed on the infra nodes so that it only creates and runs pods on their respective nodes?
If not, how can I deploy OneAgent in my OpenShift platform without having the problem mentioned above?
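(For reference: in a plain Kubernetes/OpenShift DaemonSet, scheduling can usually be narrowed with a `nodeSelector` plus `tolerations`. A minimal sketch follows; the label and taint key `node-role.kubernetes.io/infra` and the names used are assumptions about how your cluster is labeled, not part of the official OneAgent manifest.)

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: oneagent            # hypothetical name
spec:
  selector:
    matchLabels:
      app: oneagent
  template:
    metadata:
      labels:
        app: oneagent
    spec:
      # Only schedule onto nodes carrying this (assumed) infra label.
      nodeSelector:
        node-role.kubernetes.io/infra: "true"
      # Tolerate the taint that keeps ordinary pods off these nodes.
      tolerations:
      - key: node-role.kubernetes.io/infra
        operator: Exists
        effect: NoSchedule
      containers:
      - name: oneagent
        image: dynatrace/oneagent   # illustrative image reference
```

With this combination, the DaemonSet's pods land only on nodes that both carry the label and whose taint is tolerated, so application nodes without the label are skipped entirely.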
Thank you for your time, have a great day!
Hi Ugochukwu (NJ),
I've considered this deployment option, but the OpenShift team I'm working with suggested some metrics could be lost if we don't use a DaemonSet. I will definitely give it a try and install OneAgent natively on the Linux OS where the OpenShift clusters are running.
Thank you for your time and for your quick answer!
You are welcome. I am not aware of any metrics being lost by installing the agent on the hosts OpenShift is running on. Can you have your team tell you exactly which metrics they are referring to?
Hi Ugochukwu (NJ),
They called them "infrastructure metrics". Unfortunately, I'm not that familiar with OpenShift, so I can't be sure which metrics they are referring to. In any case, next week I'll install OneAgent natively and will post the results here.
While writing this response, a new question came to mind:
What are the differences between Native and DaemonSet OneAgent installation?
There are actually some limitations when you install via DaemonSet compared to the native installation on the host. The DaemonSet installation has the same limitations as deploying the agent as a Docker container. See the links below.
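(In case it helps with the native route: a host install on Linux typically follows Dynatrace's documented pattern of downloading the installer from your environment's deployment API and running it with root privileges. This is a hedged sketch; `{your-environment-id}` and `{your-paas-token}` are placeholders you must replace, and the exact query parameters may differ for your environment.)

```shell
# Download the Linux OneAgent installer from your environment (placeholders assumed).
wget -O Dynatrace-OneAgent-Linux.sh \
  "https://{your-environment-id}.live.dynatrace.com/api/v1/deployment/installer/agent/unix/default/latest?arch=x86&flavor=default" \
  --header="Authorization: Api-Token {your-paas-token}"

# Run the installer on the host with root privileges.
sudo /bin/sh Dynatrace-OneAgent-Linux.sh
```

Installing directly on the host this way gives the agent full OS-level visibility, which is the main difference from the container/DaemonSet deployment.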
Hi there Ugochukwu,
I'm back with some news: OneAgent has been installed on the 3 masters and 6 nodes of OCS. Nevertheless, we've restarted all the pods and we are still unable to get full-stack monitoring of the pods. Any ideas?
I suggested that the team restart the OpenShift service; perhaps that might work?
That's weird. In Settings -> Monitoring -> Monitored technologies, under Supported technologies, can you scroll down to the bottom and ensure you have service insights enabled for the Go technology? If that is already enabled, please check Host settings -> Detected processes and ensure that service monitoring is enabled for those processes.
That is strange. My only other guess is that the pod was not restarted correctly. On the process page of each pod's process, if it was restarted correctly, you should see a process restart event at the time of the restart. If you do not see that event, the pod was not restarted correctly. If it does show a restart happened, you may need to open a support ticket.
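(If you want to force a clean restart from the CLI, deleting the pod so its controller recreates it is a common approach. A sketch, assuming `oc` access; the namespace and pod name here are hypothetical:)

```shell
# List pods with their restart counts and age in the (hypothetical) namespace.
oc get pods -n my-app-namespace

# Delete a pod so its controller (DeploymentConfig/ReplicaSet) recreates it fresh.
oc delete pod my-app-pod-1-abcde -n my-app-namespace

# Confirm the replacement pod started recently (the AGE column resets).
oc get pods -n my-app-namespace
```

A freshly recreated pod should then show the process restart event mentioned above, which confirms the agent had a chance to inject into the new process.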