
Retrieve docker container logs from CloudWatch

moritz_becker
Participant

Hi,

I am running Docker Swarm for AWS and have connected my AWS account to Dynatrace. My container logs are stored in CloudWatch, but it seems that Dynatrace Log Analytics fails to incorporate these container logs.

For example, when I go to Analyze > Log Files in the Dynatrace web interface, I only see the dockerd and system logs of my EC2 hosts, but no container logs.


Thanks

9 REPLIES

moritz_becker
Participant

I also noticed that although the dockerd and system log files show up in my Dynatrace interface, I cannot display or download them. Clicking display or download shows 'pre-processing files' and then fails with 'OneAgent connection timeout'. OneAgent is running on all my hosts.

That's a support issue; please open a case so we can investigate.

pawel_brzoska
Inactive

Currently we support container logs when they are stored with Docker's default json-file logging driver, which keeps the logs on the Docker host. Support for CloudWatch Logs is on the roadmap.
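For reference, with the default json-file driver each container's output is written as JSON lines on the host filesystem, which is the location the currently supported log collection relies on. A minimal sketch (not Dynatrace code) of reading those files, assuming the standard /var/lib/docker data root - adjust the path if your daemon uses a different data-root:

# Sketch: where Docker's default json-file logging driver keeps container logs.
import glob
import json

for path in glob.glob("/var/lib/docker/containers/*/*-json.log"):
    print("== " + path + " ==")
    with open(path) as f:
        for line in f:
            entry = json.loads(line)  # one JSON object per log line
            print(entry["time"], entry["stream"], entry["log"].rstrip())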

moritz_becker
Participant

@Pawel B. Can you say when CloudWatch Logs support can be expected to arrive in Dynatrace? Is there a feature request for this that I can vote for, or any other way to speed up the implementation? This is quite important for us and for anyone operating Docker Swarm (and maybe also Kubernetes) on AWS.

pawel_brzoska
Inactive

According to AWS, the preferred way of keeping logs is S3, and this is what we are tackling first. We are currently finishing support for CloudTrail; other S3-based logs will follow. Why is the currently supported Docker json-file logging driver not sufficient for you?

moritz_becker
Participant

Well, according to https://aws.amazon.com/de/answers/logging/centralized-logging/, S3 is the best choice for archiving logs and making them accessible long-term. For real-time log consumption and monitoring, CloudWatch is more appropriate, as it enables features such as log events and alerting. Logs can later be exported from CloudWatch to S3 for archiving purposes.
The preferred way of deploying Docker Swarm on AWS EC2 is to use the CloudFormation templates provided by Docker. The template enables CloudWatch logging by default, although it is possible to deactivate it, in which case the json-file logging driver is used - so it would work with Dynatrace if I just disabled it. But then accessing the logs would be a pain, because I would lack a central hub - CloudWatch - from which I can comfortably access the logs. With json-file logging, I would always need to SSH into my nodes to see the logs.
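To illustrate the CloudWatch-to-S3 export mentioned above, here is a rough boto3 sketch. The region, log group name, bucket, and time range are placeholder assumptions, and the destination bucket needs a policy that allows CloudWatch Logs to write to it:

import time
import boto3

# Sketch of a CloudWatch Logs -> S3 export; all names and the region are hypothetical.
logs = boto3.client("logs", region_name="eu-west-1")
now_ms = int(time.time() * 1000)

logs.create_export_task(
    taskName="swarm-logs-archive",            # placeholder task name
    logGroupName="/docker/swarm",             # placeholder log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,    # last 24 hours
    to=now_ms,
    destination="my-log-archive-bucket",      # placeholder S3 bucket
    destinationPrefix="cloudwatch-exports",
)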

pawel_brzoska
Inactive

It's true that CloudWatch Logs has more log-management functionality than S3. That's exactly why we integrate with S3: we assume customers want to save money by using Dynatrace Log Analytics instead of CloudWatch Logs, getting the same functionality in one tool, with logs integrated with APM data and AI logic. That is the central log hub you are referring to.

S3 is a perfect, low-cost store for logs to be retrieved by Dynatrace, and we advise customers to use Dynatrace Log Analytics for any log-management tasks instead of CloudWatch Logs. It is much easier to drill down to logs from performance problems, in context, if you have everything in the same place. See:

https://www.youtube.com/watch?v=wKlb2ckyFzc

Certainly the CloudFormation templates for Docker default to CloudWatch Logs, as there is no guarantee that Dynatrace OneAgent will be part of the monitored infrastructure. In your case, however, as a happy Dynatrace customer, the best approach would be to deactivate this integration and unleash the full power of Dynatrace by letting OneAgent do the log collection and management automatically, saving money by not using CloudWatch Logs or S3 in this case.

moritz_becker
Participant

I have now tried out Dynatrace Log Analytics for my Docker Swarm environment.

Sadly, it does not seem to be possible to view logs on a per-container basis, only on a per-process basis, which can quickly become confusing when multiple containers of the same Docker image run on a host.

Also, it would be nice to have a per-service view on the logs, i.e. an aggregation of the logs across the containers (replicas) belonging to the same service - as far as I know this is not possible in CloudWatch either, so it could be a USP for Dynatrace ;).

Also, I cannot retrieve or view logs from my AWS swarm at the moment. This applies to both the host syslog and any process logs - I created a support ticket for this a while ago.

Hope this gets fixed soon!
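As a rough illustration of the per-service view I mean above - purely a manual workaround, assuming the Docker SDK for Python on a Swarm node - one can group container logs by the com.docker.swarm.service.name label that Swarm puts on every task container; the service name "web" is a placeholder:

import docker

client = docker.from_env()
service_name = "web"  # placeholder service name

# Collect recent logs from all running replicas (containers) of one Swarm service.
containers = client.containers.list(
    filters={"label": "com.docker.swarm.service.name=" + service_name})
for container in containers:
    print("--- %s (%s) ---" % (container.name, container.short_id))
    print(container.logs(tail=20).decode(errors="replace"))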

pawel_brzoska
Inactive

Moritz, thanks for the reply. Starting from the bottom: we confirm the issue. It's a bug that we sadly cannot reproduce anywhere else, but we are working on it with the highest priority and will contact you with the fix.

Going back to use cases and what's possible in Dynatrace:

1/ Every Docker log entry is automatically stamped with the Docker image name and ID. You can use this information to filter for the container(s) of your choice, using filters in the query box.

2/ Not for a service (as a service is a higher-level, more abstract entity), but certainly for a process group, which is an aggregation of containers running similar process types (like Tomcat or JBoss). You can affect how process groups are defined and reported under Settings -> Process group detection. Then, if you go to the Technologies screen and drill down to any process group, the "Log files" tab shows all log entries from the containers that run that process group's instances. This workflow is also demonstrated in the YouTube video linked above. That's exactly what is NOT possible in log-only tools like CloudWatch Logs, Sumo Logic, or Splunk.
