For troubleshooting code, few things are more valuable to developers than logs. That’s just one reason we built Retrace, which combines logs, errors, and code-level performance in a single pane of glass to give you the insights you need to quickly identify and rectify the source of problems. With the widespread popularity of Docker’s container-based solution for apps, it’s important that you understand the ins and outs of Docker logs, so we put together this overview of Docker logging to bring you up to speed on the basics.
Logging has always been a central part of application monitoring. Logs tell the full story of what is happening, or what happened, at every layer of the stack. Whether it’s the application layer, the networking layer, the infrastructure layer, or storage, logs have all the answers. As the software stack has shifted from hardware-centric infrastructure to Dockerized, microservices-based apps, much has changed, but the importance of logging has remained constant. If anything, Dockerized apps need logging even more than traditional apps do, and there are many innovative solutions to help you get logging right for Docker.
Docker adds complexity to the software stack, and troubleshooting is very different for Dockerized applications. You can’t make do with just a few basic metrics like availability, latency, and errors per second. Those worked for traditional apps that ran on a single node and needed very little troubleshooting. With Docker, the root cause of a problem can hide in any of many short-lived containers spread across multiple hosts, and the time it takes to resolve issues is critical to delivering an outstanding user experience.
Logging drivers collect container logs and make them available for analysis. The default logging driver is json-file, which writes log data to a JSON-formatted file on the host, but Docker supports many other logging drivers, including the following:
- syslog – writes log messages to the syslog facility on the host
- journald – sends log messages to the systemd journal
- gelf – writes messages in Graylog Extended Log Format to an endpoint such as Graylog or Logstash
- fluentd – forwards log messages to a Fluentd collector
- awslogs – sends log messages to Amazon CloudWatch Logs
- splunk – sends log messages to Splunk using the HTTP Event Collector
- gcplogs – sends log messages to Google Cloud Logging
As you can tell from this list, a logging driver can be used to share log data with external services. Note that running the docker logs command will return log data only if you’ve set json-file or journald as the logging driver; for the other services, you view logs in each of their own interfaces.
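You can set the default logging driver for every container in the Docker daemon’s configuration file, typically /etc/docker/daemon.json on Linux, or override it for a single container with the --log-driver flag. Here is a minimal sketch of both approaches, using syslog as the example driver:

{
  "log-driver": "syslog"
}

$ docker run --log-driver=syslog alpine echo hello

After editing daemon.json, restart the Docker daemon for the new default to take effect; containers that were already running keep the driver they started with.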
When you start the Docker daemon, you can specify logging attributes and options. Docker offers the following example command for manually starting the daemon with the json-file driver and setting a label and two environment variables:
$ dockerd \
  --log-driver=json-file \
  --log-opt labels=production_status \
  --log-opt env=os,customer
Then, you’d run a container and specify values for the labels or env, using, for example:
$ docker run -dit --label production_status=testing -e os=ubuntu alpine sh
This will add additional fields to the logging output if the logging driver supports it, such as the following output for json-file:
"attrs":{"production_status":"testing","os":"ubuntu"}
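Once a container is running, docker logs retrieves whatever the json-file or journald driver has captured. Here are a few of its commonly used flags, shown against a hypothetical container named web:

$ docker logs web                  # print everything captured so far
$ docker logs -f web               # follow the log stream, like tail -f
$ docker logs --tail 100 web       # show only the last 100 lines
$ docker logs -t --since 10m web   # add timestamps, last 10 minutes only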
To make log data useful, it needs to be analyzed. Most log data is monotonous, and poring over every line is enough to drive anyone crazy. When analyzing log data, you’re looking for a needle in a haystack: out of thousands of lines of normal log entries, you’re often looking for the one line with an error. To get the true value of logs, you need a robust analysis platform.
The most popular open source log data analysis solution is ELK. It’s a collection of three tools: Elasticsearch for storing log data, Logstash for processing it, and Kibana for presenting it in a visual user interface. ELK is a great option for Docker log analysis, as it provides a robust platform supported by a large community of developers and costs nothing. Despite being free, it’s a very capable data analysis platform.
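One common way to get container logs into ELK is Docker’s gelf logging driver pointed at a Logstash GELF input. The following is a minimal sketch, assuming Logstash and Elasticsearch run locally on their default ports:

$ docker run --log-driver=gelf \
    --log-opt gelf-address=udp://localhost:12201 \
    alpine echo "hello ELK"

# logstash.conf
input {
  gelf {
    port => 12201
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

From there, Kibana can search and visualize the indexed log data.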
Another popular open source option is Fluentd. It tries to solve not just Docker logging, but logging for your entire stack, including non-Docker services. It uses a hub-and-spoke model to collect log data from various sources and route it to log analysis tools as needed, so you don’t have to write scripts for each integration and stitch together your logging layer by hand.
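Docker integrates with Fluentd through a dedicated fluentd logging driver. As a sketch, assuming a Fluentd agent listening on its default forward port 24224, the configuration below simply echoes incoming records to stdout; in practice, the match block would route them to Elasticsearch, S3, or another destination:

$ docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=docker.{{.Name}} \
    alpine echo "hello fluentd"

# fluent.conf
<source>
  @type forward
  port 24224
</source>
<match docker.**>
  @type stdout
</match>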
With the open source options, you need to set up your stack on your own and maintain it. This means provisioning the required resources and ensuring your tools are highly available and hosted on scalable infrastructure, which can take a lot of IT resources. The easier way is to opt for a hosted log analysis solution like Sumo Logic or Splunk. These vendors provide logging as a service: all you need to do is point your Docker logs at them, and they handle storage, processing, and presentation of the log data.
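For example, Docker ships with a splunk logging driver that sends container logs to Splunk’s HTTP Event Collector. In this sketch, the token and URL are placeholders you’d replace with the values from your own Splunk instance:

$ docker run --log-driver=splunk \
    --log-opt splunk-token=<your-hec-token> \
    --log-opt splunk-url=https://splunk.example.com:8088 \
    alpine echo "hello splunk"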
The advantage that commercial log analysis tools have over open source ones is the intelligence built into their platforms. For example, Sumo Logic offers predictive outlier detection. With this feature, the platform looks for anomalies that may escalate and alerts you before possible issues become real problems. Intelligent log analysis is still in its early stages, but for commercial log analysis tools, it’s the way to differentiate themselves from the many powerful open source options available today.
Along with logs, metrics and events are an important part of the entire Docker monitoring process. Metrics are performance numbers for various parts of the Docker stack, like memory, I/O, and networking. You can use the docker stats command to view runtime metrics for all containers.
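Here is a quick sketch of what that looks like; the --no-stream flag prints a single snapshot instead of continuously refreshing, and the container name and numbers below are illustrative:

$ docker stats --no-stream
CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT    MEM %   NET I/O        BLOCK I/O   PIDS
f2b8a7d9c1e3   web    0.12%   21.5MiB / 1.952GiB   1.08%   1.2kB / 648B   0B / 0B     4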
Events are more detailed than metrics and report the activity stream, or change history, for various components of the Docker stack. There are events for containers, container images, plugins, volumes, networks, and daemons. Some sample events are create, delete, mount, unmount, start, stop, push, pull, and reload. You view events using the docker events command. Together, metrics, events, and logs give you the end-to-end visibility you need when running and troubleshooting applications in Docker.
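By default, docker events streams everything as it happens, but the --since, --until, and repeatable --filter flags let you narrow it down. For example, to see only container stop events from the last 30 minutes:

$ docker events --since 30m \
    --filter 'type=container' \
    --filter 'event=stop'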
In conclusion, when running a Dockerized application, things get complicated. Using old log analysis methods will leave you flying blind. You need a modern approach to logging that is careful about how it collects log data from containers, and how it analyzes that data. There are many options both open source and commercial. Docker logging is a critical part of modern web-scale applications. Get it right, and you’re on your way to building highly available, scalable, and innovative applications.