24 Mar 2023 08:25 AM
Hi All,
Please help me understand the exact difference between the Classic and Cloud Native approaches for K8s monitoring.
Classic FS - in this, only the OneAgent image gets downloaded.
Cloud Native FS - in this, both the OneAgent image and the code modules image get downloaded; the code modules image is maintained by the CSI driver.
Queries
1. What is the advantage of having the extra code modules image in Cloud Native FS?
2. What exact disadvantage of Classic FS does Cloud Native FS overcome?
24 Mar 2023 10:16 AM
Hello @Gogi
Both deployment options have their own advantages and limitations. Classic full-stack injection is the recommended approach; when security is a concern, Cloud-native full-stack injection can be the solution.
Note: Classic full-stack injection requires write access from the OneAgent pod to the Kubernetes node filesystem to detect and inject into newly deployed containers.
https://www.dynatrace.com/support/help/setup-and-configuration/setup-on-container-platforms/kubernet...
Regards,
Babar
24 Mar 2023 10:39 AM
Also be aware that in Classic full-stack you cannot select specific namespaces to monitor or exclude specific namespaces, which you might want in order to keep Host Unit consumption low. For that reason you may prefer to go with Cloud-native full-stack.
https://www.dynatrace.com/support/help/shortlink/dto-config-k8s#annotate
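To illustrate the namespace selection mentioned above, here is a minimal sketch of a DynaKube custom resource using Cloud-native full-stack with a namespace selector. This is an assumption-laden example: the resource name, API URL placeholder, and label key/value are all hypothetical, and the exact schema may differ between Dynatrace Operator versions, so check the linked documentation before using it.

```yaml
# Hypothetical sketch of namespace-scoped injection with Cloud-native full-stack.
# All names, labels, and the API URL placeholder below are illustrative only.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://<your-environment-id>.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      # Inject code modules only into namespaces carrying this label;
      # pods in all other namespaces are left unmonitored.
      namespaceSelector:
        matchLabels:
          monitoring: enabled
```

With a selector like this, only namespaces labeled `monitoring: enabled` get code-module injection, which is one way to keep Host Unit consumption down.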
22 May 2023 07:53 AM
I have a few queries with respect to the Classic vs Cloud Native approach:
1. Does Classic FullStack require application process/pod restarts whenever new worker nodes spin up?
2. Is an application process/pod restart not required in Cloud Native FullStack when new worker nodes spin up?
Classic FullStack - my understanding is that if we have 2 nodes now and do a node rollover, 2 new nodes come up. On the new nodes, the OneAgent process starts as a daemon (before any application process starts), so once an application process starts, OneAgent injects into it and starts monitoring it (without a restart of application pods).
Please clarify whether my understanding is correct.
04 Aug 2023 01:26 PM - edited 23 May 2024 08:39 AM
Hi,
I would like to shed some light in this discussion.
The major difference between classic full stack and cloud native full stack is the injection and lifecycle management of code modules:
- Classic full stack: Code-modules are injected by the OneAgent running on the node (OS agent). Thus, the OS agent needs to be deployed and ready on each node before any code-module gets injected (i.e., you need to restart every pod/process that was deployed before the OneAgent on the node). This poses a natural race condition, especially for dynamic environments that scale nodes frequently, and is the major disadvantage of this approach. Every pod created before OneAgent was ready on the respective node needs to be restarted to get full stack visibility, as the lifecycle management of code-modules is handled by OneAgent.
- Cloud native full stack: The code-module injection is triggered by the Dynatrace webhook that uses an init-container and a CSI volume to mount the code-module binaries. The CSI driver manages the lifecycle of code-modules per node and ensures storage and bandwidth efficiency (code modules are downloaded and stored once per node and not for each pod). As long as the Dynatrace Operator/Webhook/CSI-Driver are installed (on any node) and ready before any new workloads in a k8s cluster are deployed, code-modules get injected and there is no need to restart any pods/processes.
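As a concrete example of interacting with the webhook injection described above, here is a hypothetical Deployment that opts a single workload out of automatic injection via a pod-template annotation. The workload name, image, and labels are placeholders; verify the exact annotation key against the Dynatrace Operator documentation for your version.

```yaml
# Hypothetical sketch: excluding one workload from webhook-based injection.
# Workload name, labels, and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # The Dynatrace webhook skips pods carrying this annotation,
        # so no init-container or code-module volume is added.
        oneagent.dynatrace.com/inject: "false"
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest
```

Because injection happens at pod admission time, opting in or out this way takes effect on the next pod creation; already-running pods are unaffected until they are recreated.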
Note: In case your workloads/applications are already deployed and running before deploying Dynatrace into your Kubernetes cluster, you have to restart all pods (e.g., via `kubectl rollout restart`) to get full stack visibility, for both classic full stack and cloud native full stack.
Here's a link to the documentation which explains the capabilities and differences. We are currently working on further removing the last limitations of cloud native full stack (e.g. support for network zones) and plan to make cloud native the new default in one of the upcoming Dynatrace Operator releases. (Update 2024-04: This is already released alongside Operator 1.0)
Hope this helps!