Monitor Docker Logs with ELK - 1

Following my attempts to monitor ethOS data with Elasticsearch and Kibana, I thought it would be a good idea to start monitoring the logs produced by my various Docker containers.

This first part deals with configuring ELK and logspout.

Prerequisites

I assume the following:

  • ELK is installed. You can install it either as a set of Docker images (oh, the meta!) or as plain packages.
  • The receiver machine is configured. E.g., if you're on Windows, create a firewall rule opening TCP port 35000 (matching the configuration below); see the example right after this list.
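
For example, on Windows this can be done from an elevated prompt (the rule names below are just illustrative; the logstash configuration we'll write listens on both TCP and UDP):

netsh advfirewall firewall add rule name="Logstash syslog TCP" dir=in action=allow protocol=TCP localport=35000
netsh advfirewall firewall add rule name="Logstash syslog UDP" dir=in action=allow protocol=UDP localport=35000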

Once this is done, you can start either by configuring logstash or by installing and configuring logspout. I decided to start with a simple logstash configuration, so that it's ready by the time logspout starts sending data.

Configure Logstash

The simplest configuration I could find is:

input {
  # logspout will ship container logs here; the script below uses
  # syslog over TCP, but we listen on UDP as well.
  tcp {
    port => 35000
    type => syslog
  }
  udp {
    port => 35000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:ver} +(?:%{TIMESTAMP_ISO8601:ts}|-) +(?:%{HOSTNAME:containerid}|-) +(?:%{NOTSPACE:containername}|-) +(?:%{NOTSPACE:proc}|-) +(?:%{WORD:msgid}|-) +(?:%{SYSLOG5424SD:sd}|-|) +%{GREEDYDATA:msg}" }
    }
    # SYSLOG5424PRI captures the priority into "syslog5424_pri",
    # so point the syslog_pri filter at that field.
    syslog_pri { syslog_pri_field_name => "syslog5424_pri" }
    # Use the RFC 5424 timestamp extracted by grok ("ts") as the event timestamp.
    date {
      match => [ "ts", "ISO8601" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{containerid}" ]
        replace => [ "@message", "%{msg}" ]
      }
    }
    # The timestamp now lives in @timestamp, so drop the raw field.
    mutate {
      remove_field => [ "ts" ]
    }
  }
}

output {
  # Index parsed events in Elasticsearch; also echo them to stdout,
  # which is handy while debugging the pipeline.
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

It tells logstash to open port 35000 for input from logspout, parse each logspout entry, and ship the result to the Elasticsearch instance on localhost:9200 (itself possibly running in Docker). Simple.

Note: If you're installing logstash as a Docker image, you might want to create your own Dockerfile and copy the various configurations in.
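
A minimal sketch of such a Dockerfile, assuming the official image (adjust the tag to the version you actually run) and its default pipeline directory:

# Hypothetical Dockerfile: bake our pipeline config into the official image.
FROM docker.elastic.co/logstash/logstash:6.8.23
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf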

Once logstash is configured, it's time to install logspout.

Logspout installation

Logspout is provided as a Docker image (oh, the double meta!). I've created a simple startup script:

#!/bin/bash
# Remove any previous instance so the run below starts clean
# (both commands fail harmlessly on the first run).
sudo docker stop logspout-base
sudo docker rm logspout-base

# Mount the Docker socket so logspout can read container logs,
# route everything to logstash over syslog/TCP, and ignore
# logspout's own output.
sudo docker run -d \
  --name="logspout-base" \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  -e 'ROUTE_URIS=syslog+tcp://logstash.ip:35000' \
  -e LOGSTASH_TAGS="docker,test" \
  -e 'LOGSPOUT=ignore' \
  gliderlabs/logspout:latest

# Verify that the container is up
sudo docker ps | grep logspout

where logstash.ip is the IP address of your logstash installation.

The idea is to give it access to /var/run/docker.sock, from which it retrieves the logging data produced by Docker. We also want to ignore the logs produced by logspout itself (hence the -e 'LOGSPOUT=ignore' flag).

Its logs should display:

docker logs -f logspout-base
# logspout v3.2.3 by gliderlabs
# adapters: raw syslog tcp udp tls
# options : persist:/mnt/routes
# jobs    : pump routes http[routes,logs]:80
# routes  :
#   ADAPTER     ADDRESS                 CONTAINERS      SOURCES OPTIONS
#   syslog+tcp  logstash.ip:35000                             map[]
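
At this point, the stdout of every container on the host should flow through logspout. A quick end-to-end check is to emit a message from a throwaway container and watch it appear in logstash's rubydebug output (or in Kibana):

# Any container's stdout is picked up by logspout automatically.
sudo docker run --rm alpine echo "hello from docker to ELK"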

If this is not the case, or the logspout container exits, make sure the logstash machine is allowed to receive the logs (e.g. see the firewall rule above).
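
For example, a quick way to verify from the Docker host that the port is actually reachable (assuming netcat is installed):

# Should report the connection as open/succeeded if logstash is listening.
nc -vz logstash.ip 35000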

A few notes

  1. Logspout is only one option. You could also log directly to logstash (e.g. via Docker's gelf logging driver; an example follows this list).
  2. Recording fine-grained logs can be quite costly (e.g. applications with DEBUG logging enabled). You might want to extend the logstash configuration to drop such entries (see the filter sketch after this list).
  3. You could read further here and here.
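
As an illustration for note 1, logging a single container directly via the gelf driver would look roughly like this (assuming logstash has a gelf input listening on UDP port 12201):

# Bypass logspout: send this container's logs straight to logstash via GELF.
sudo docker run --log-driver=gelf \
  --log-opt gelf-address=udp://logstash.ip:12201 \
  alpine echo "direct to logstash"

And for note 2, a sketch of a filter that drops fine-grained entries before they reach Elasticsearch (assuming the parsed message lands in the msg field, as in the grok above, and that your applications label their levels DEBUG/TRACE):

filter {
  # Discard noisy fine-grained log lines outright.
  if [msg] =~ /DEBUG|TRACE/ {
    drop { }
  }
}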

HTH,