I am trying to locate information to better understand the impact of selecting files for “monitoring” with premium Log Analytics (i.e., selection of files to be centrally stored). Some specific questions are:
ruxitagentloganalytics.conf). The file contains a MainLoopInterval parameter that controls the frequency at which logs are processed… by default 60 seconds. I presume that the only way to modify this is by updating it directly on the server (i.e., not via the DT interface). Is that correct? Also, do customers find that they need to change this parameter when selecting larger numbers or sizes of files for central storage?
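For context, an entry for that parameter might look like the following. This is an illustrative sketch only; the parameter name and default come from this thread, but the exact file syntax and any surrounding settings are assumptions:

```
# ruxitagentloganalytics.conf (illustrative sketch; actual syntax may differ)
# Interval, in seconds, at which buffered log data is processed/written.
MainLoopInterval = 60
```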
1. When you reduce the value, there will be more writes to the storage disks (higher IOPS) and a worse compression ratio, but lower data latency. However, if the server doesn't keep up with writing, it will increase the interval until the IOPS rate is one the storage can handle.
2. Due to the volume of logs, the log agent compresses them before sending. It uses Zstandard compression, which offers a very good compression ratio for the compute power needed. On average the compressed size is about 10% of the original; 25% is a realistic worst case.
3. The log writing queue can take up to 1% of available memory on a server node. CPU is typically not much affected. For metrics, a single node can process about 2 million log entries per second.
4. Each host brings a number of free custom metrics, and log metrics can draw on that quota.
5. Each source log file results in a single metric being consumed.
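Answers 2 and 3 translate into a quick back-of-the-envelope sizing. A minimal Python sketch, where the 10%/25% compression ratios and the 1%-of-memory queue limit come from the answers above, and the average entry size is purely an assumption:

```python
import math

# Compression ratios quoted above (compressed size / original size).
ZSTD_TYPICAL_RATIO = 0.10
ZSTD_WORST_RATIO = 0.25

def compressed_mb(raw_mb: float, ratio: float = ZSTD_TYPICAL_RATIO) -> float:
    """Estimate the volume actually shipped after Zstandard compression."""
    return raw_mb * ratio

def log_queue_capacity(node_ram_gb: float, avg_entry_bytes: int = 500) -> int:
    """Rough count of entries the write queue can hold, given that it may
    use up to 1% of a server node's memory. avg_entry_bytes is a guess."""
    queue_bytes = node_ram_gb * 1024**3 * 0.01
    return math.floor(queue_bytes / avg_entry_bytes)

# Example: 500 MB/day of raw logs, 64 GB server node.
print(compressed_mb(500))                    # ~50 MB/day typical
print(compressed_mb(500, ZSTD_WORST_RATIO))  # ~125 MB/day worst case
print(log_queue_capacity(64))                # entries held in ~0.64 GB of queue
```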
Thank you for the answers. There are a few things that I am not clear on from your response:
For question 3 I was asking about the ActiveGates, but I am not clear whether your answer relates to the ActiveGate or to the cluster node... you mention "server node", so I suspect your comments relate to the cluster node and not the AG.
For question 4 you appear to be indicating that we would no longer have any limit on the number of alerts that we can configure for log file content. I.e., with the "standard" tier we could implement as many log file alerts as we want, as long as we stay within our licensed volume of custom metrics - is that the case?
Yes, it was about the server. For ActiveGates, assume one physical core (not a hyper-thread) is needed per 50 Mbps of traffic (measured on the compressed content). As for memory, it is mostly a question of how long a connectivity disruption you need to survive. If too much memory is occupied by queued messages they will be discarded, but the log agent will resend the content in that case.
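That guidance can be expressed as a small sizing helper. A hedged Python sketch, where the one-core-per-50-Mbps figure and the buffer-for-outage rationale come from the answer above, and everything else (function name, outage model, example numbers) is illustrative:

```python
import math

def activegate_sizing(compressed_mbps: float, outage_seconds: float):
    """Rough ActiveGate sizing for log traffic.

    Per the guidance above: one physical core (not a hyper-thread)
    per 50 Mbps of *compressed* traffic, plus enough memory to buffer
    messages for the longest connectivity outage you want to survive
    (beyond that, messages are discarded and the agent resends them).
    """
    cores = math.ceil(compressed_mbps / 50)
    buffer_mb = compressed_mbps / 8 * outage_seconds  # Mbps -> MB/s
    return cores, buffer_mb

# Example: 120 Mbps of compressed logs, ride out a 10-minute outage.
cores, buffer_mb = activegate_sizing(120, 600)
print(cores, buffer_mb)  # 3 cores, 9000.0 MB of buffer
```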