We needed a fast, easy and cheap “log everything” solution for our Docker environment.

UPDATE: Thanks, @jlchereau, for pointing out the limitation around Amazon's CloudWatch API (basically, a quota gets exceeded in high-volume scenarios). Our use case for this solution is primarily 'log archiving', not real-time monitoring. @dberesford has since updated the underlying package with a workaround ('batching' messages to reduce API calls). The configuration below has been updated accordingly.

Connecting AWS CloudWatch with Docker’s in-built logging firehose gives us low-cost, real-time logging and archiving for every application in our stack, accessible anywhere.

Read on to see how you can achieve the same outcome in just two simple steps.

The following tutorial assumes you are familiar with Docker, Kubernetes and Amazon Web Services.

You will need:

  • A Kubernetes cluster
  • A text editor
  • An AWS account
    • Follow the AWS steps to configure CloudWatch Logs, and familiarise yourself with the “Log Group” and “Log Stream” concepts (a CLI sketch follows this list)
    • An Access Key, Secret Key, etc…the normal authorisation stuff
  • A couple of dollars a month
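
If you prefer the command line, the log group and log stream can also be created up front with the AWS CLI. A minimal sketch, assuming the CLI is installed and configured; “Dev” and “kube-minion-1” are placeholder names:

# Create a log group and a stream for one Kubernetes host (placeholder names)
aws logs create-log-group --log-group-name Dev
aws logs create-log-stream --log-group-name Dev --log-stream-name kube-minion-1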

Solution Requirements

  • We want to capture every application event from every Docker container and stream the events in real time to a centralised console, accessible anywhere
  • The solution should work for a Docker container running on any host in a Kubernetes cluster

Solution Design

  • A special Docker container runs on each Docker host, capturing all log events streamed through ‘docker.sock’ (a single-host docker run sketch follows this list)
  • Events from the firehose are forwarded to Amazon’s CloudWatch service, where they are centralised, searchable and archivable.
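
To make the design concrete, the same idea can be tried on a single Docker host with plain docker run. This is a sketch only; the image name below is assumed from the credit at the end of this post, and every key value is a placeholder (see the parameter table at the bottom):

# One logging container per host; mounting docker.sock lets it read
# the log stream of every container running on that host.
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dberesford/docker-cloudwatchlogs \
  -a XXXXXXXXXXXXXX -s YYYYYYYYYYYYYY -r ap-southeast-2 \
  -g Dev -t kube-minion-1 -b 10 -o 20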

Step 1 - Configure your Kubernetes “logging pod” specification file (the parameters unique to your environment are listed at the bottom of this post)
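
For reference, a minimal sketch of what kubelogging.json might look like, written against the v1 ReplicationController API; the image name, labels, replica count and argument values are placeholders to substitute for your environment:

{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": { "name": "kubelogging" },
  "spec": {
    "replicas": 3,
    "selector": { "app": "kubelogging" },
    "template": {
      "metadata": { "labels": { "app": "kubelogging" } },
      "spec": {
        "containers": [{
          "name": "cloudwatchlogs",
          "image": "dberesford/docker-cloudwatchlogs",
          "args": ["-a", "XXXXXXXXXXXXXX", "-s", "YYYYYYYYYYYYYY",
                   "-r", "ap-southeast-2", "-g", "Dev",
                   "-t", "BBBBBBBBBBBBBB", "-b", "10", "-o", "20"],
          "volumeMounts": [{ "name": "docker-sock", "mountPath": "/var/run/docker.sock" }]
        }],
        "volumes": [{ "name": "docker-sock", "hostPath": { "path": "/var/run/docker.sock" } }]
      }
    }
  }
}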

Log into your AWS console, navigate to “CloudWatch Logs” and get ready to see application events appear (example below…looks like I’ve got some things to fix in Dev).

(Screenshot: application events arriving in the CloudWatch Logs console)
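
The same events can also be pulled from the command line. A quick sketch, assuming the AWS CLI is configured and using the placeholder group and stream names from the parameter table below:

# Fetch the ten most recent events from one stream (placeholder names)
aws logs get-log-events --log-group-name Dev --log-stream-name kube-minion-1 --limit 10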

Step 2 - Launch your logging containers and watch the events flow into AWS CloudWatch



# kubectl create -f kubelogging.json
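
Once created, it is worth confirming that one logging pod has landed on each host. A quick check, assuming the controller name and labels from the sketch in Step 1:

# The replication controller should report the desired replica count
kubectl get rc kubelogging

# Each pod should be scheduled on a different host
kubectl get pods -l app=kubelogging -o wide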

Kubernetes Pod Specification parameters that will be unique to your environment:



| Arg | Placeholder    | Description                        |
|-----|----------------|------------------------------------|
| -a  | XXXXXXXXXXXXXX | AWS Access Key                     |
| -s  | YYYYYYYYYYYYYY | AWS Secret Key                     |
| -r  | ZZZZZZZZZZZZZZ | AWS Region (e.g. ap-southeast-2)   |
| -g  | AAAAAAAAAAAAAA | Log Group Name (e.g. Dev)          |
| -t  | BBBBBBBBBBBBBB | Log Stream (e.g. Kube Minion)      |
| -b  | CCCCCCCCCCCCCC | Batched Messages Trigger (e.g. 10) |
| -o  | DDDDDDDDDDDDDD | Timeout in seconds (e.g. 20)       |

# NOTE: Configure the number of 'replicas' to match the number of hosts in your cluster
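
The host count is easy to read off the cluster itself; set 'replicas' to the number of nodes kubectl reports:

# One logging pod per host, so replicas = number of nodes
kubectl get nodes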

Our Kubernetes-based solution was inspired by (and relies on) “dberesford’s” container logging approach, available on the Docker Hub.

Give it a try, then come back and let us know how you went.