I am running docker swarm for AWS and have connected my AWS account to Dynatrace. My container logs are stored in CloudWatch but it seems that Dynatrace log analytics fails to incorporate these container logs.
E.g. when I go to Analyze > Log Files in my Dynatrace web interface, I only see dockerd and system logs of my EC2 hosts but no container logs.
@Pawel B. Can you detail when CloudWatch log support can be expected to arrive in Dynatrace? Is there any feature issue for this that I can vote for or any other way to speed up the implementation of this feature? This is quite important for us and for anyone who is operating docker swarm (and maybe also kubernetes) on AWS.
Well, according to https://aws.amazon.com/de/answers/logging/centralized-logging/, S3 is the best choice for archiving logs and for making them accessible long-term. For real-time log consumption and monitoring, CloudWatch is more appropriate, as it enables things like searching and alerting on log events. Logs can later be exported from CloudWatch to S3 for archiving purposes.
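For reference, the CloudWatch-to-S3 export I mean can be done with an export task - a sketch, where the log group name, bucket name, and timestamps are placeholders for your own values:

```shell
# Export a CloudWatch log group to an S3 bucket for archiving.
# --from/--to are millisecond epoch timestamps bounding the export window.
aws logs create-export-task \
  --task-name "swarm-logs-archive" \
  --log-group-name "/docker/my-swarm" \
  --from 1490000000000 \
  --to 1490086400000 \
  --destination "my-log-archive-bucket" \
  --destination-prefix "swarm-logs"
```

The task runs asynchronously; its progress can be checked with `aws logs describe-export-tasks`.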
The preferred way of deploying docker swarm on AWS EC2 is to use the CloudFormation templates provided by Docker. The template enables CloudWatch logging by default, though it can be deactivated, in which case the json-file logging driver is used - so it would work with Dynatrace if I just disabled it. But then accessing the logs would be a pain because I would lose the central hub - CloudWatch - from which I can comfortably access the logs. With json-file logging, I would always need to SSH into my nodes to see the logs.
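For reference, with the CloudWatch integration disabled, the daemon falls back to a configuration along these lines (a sketch - the rotation options are example values I would add myself so the node disks don't fill up, not something the CF template sets):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

And then reading a container's logs means finding the right node, SSHing in, and running `docker logs <container>` there.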
It's true that CW Logs has more log-management functionality than S3. That's exactly why we integrate with S3: we assume customers want to save money by using Dynatrace Log Analytics instead of CloudWatch Logs, getting the same functionality in one tool, with logs integrated with APM data and AI logic. That is the central log hub you are referring to.
S3 is a perfect, low-cost store for logs to be retrieved by Dynatrace, and we advise customers to use Dynatrace Log Analytics for any log-management tasks instead of CW Logs. It is much easier to drill down to logs from performance problems, in context, when you have everything in the same place. See:
Certainly the CF templates for Docker take CW Logs into account, as there is no guarantee that Dynatrace OneAgent will be part of the monitored infrastructure. In your case, however, as a happy Dynatrace customer, the best option would be to deactivate this integration and unleash the full power of Dynatrace by letting OneAgent do the log collection and management automatically, saving money by using neither CW Logs nor S3.
I have now tried to test Dynatrace log analytics for my docker swarm environment.
Sadly, it does not seem to be possible to view logs on a per-container basis, only on a per-process basis, which can quickly become confusing when multiple containers for the same Docker image run on a host.
Also, it would be nice to have a per-service view on the logs, i.e. an aggregation of the logs across the containers (replicas) belonging to the same service - as far as I know this is not possible in CW either and could be a USP for Dynatrace ;).
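(Side note: on the CLI level, newer Docker releases do offer this kind of aggregation - `docker service logs` merges the log streams of all replicas of a swarm service. A sketch, where "web" is a placeholder service name:)

```shell
# Stream the combined logs of every replica of the "web" service;
# each line is prefixed with the emitting task/replica.
docker service logs --follow web
```

A comparable view inside the monitoring UI is what I am asking for.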
Also, I cannot retrieve or view logs from my AWS swarm at the moment. This applies to both the host syslog and any process logs - I created a support ticket for this a while ago.
Hope this gets fixed soon!
Moritz, thanks for the reply. Starting from the bottom: we confirm the issue. It's a bug, sadly not reproducible anywhere else, but we are working on it with the highest priority and will contact you with the fix.
Going back to the use cases and what's possible in Dynatrace:
1/ Every Docker log entry is automatically stamped with the Docker image name and ID. You can use this information to filter for the container(s) of your choice, using filters in the query box.
2/ Not for a service (a service is a higher-level, more abstract entity), but certainly for a process group, which is an aggregation of containers running similar process types (like Tomcat or JBoss). You can affect how process groups are defined and reported in Settings -> Process group detection. Then, if you go to the Technologies screen and drill down to any process group, you have a "Log files" tab showing all log entries from the containers that run that process group's instances. This workflow is also demonstrated in the YouTube video shown above. That's exactly what's NOT possible in log-only tools like CW Logs, Sumo, or Splunk.