Last modified October 30, 2017

Logging with the Elastic Stack

The Elastic Stack, most prominently known as the ELK stack, is in this recipe the combination of Fluentd, Elasticsearch, and Kibana (sometimes called EFK, since Fluentd takes the place of Logstash). This stack helps you get all logs from your containers into a single searchable data store without having to worry about logs disappearing together with the containers. With Kibana you get a nice analytics and visualization platform on top.


Deploying Elasticsearch, Fluentd, and Kibana

First we create a namespace for the logging components and deploy our manifests to it:

$ kubectl create namespace logging
$ kubectl apply --namespace logging -f <manifests.yaml>
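To verify that everything came up, list the pods in the logging namespace and wait until the Elasticsearch, Fluentd, and Kibana pods report Running:

$ kubectl get pods --namespace logging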

Configuring Kibana

Now we need to open up Kibana. As we have no authentication set up in this recipe (you can check out Shield for that), we access Kibana through a local port forward:

$ POD=$(kubectl get pods --namespace logging --selector component=kibana \
    -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
    | grep Running | head -1 | cut -f1 -d' ')
$ kubectl port-forward --namespace logging $POD 5601:5601
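Note that kubectl port-forward blocks, so from a second terminal you can sanity-check the forward before switching to the browser. Kibana exposes a status API at /api/status which should return JSON once the forward is in place:

$ curl -s http://localhost:5601/api/status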

Now you can open up your browser at http://localhost:5601/app/kibana/ and access the Kibana frontend.

Now set fluentd-* as the index pattern.
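If Kibana does not find any matching indices yet, you can check directly in Elasticsearch whether Fluentd has started creating fluentd-* indices. A minimal sketch, assuming the Elasticsearch pods carry a component=elasticsearch label and listen on port 9200 (both are assumptions about the deployed manifests):

$ ESPOD=$(kubectl get pods --namespace logging --selector component=elasticsearch \
    -o jsonpath='{.items[0].metadata.name}')
$ kubectl port-forward --namespace logging $ESPOD 9200:9200 &
$ curl 'http://localhost:9200/_cat/indices?v'

Once fluentd-* indices show up in that listing, the index pattern in Kibana will match.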

All set! You can now use Kibana to access your logs, including filtering them by pod name and namespace.
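For example, assuming the Fluentd Kubernetes metadata filter is in use (it enriches each log record with kubernetes.* fields; the exact field names depend on your Fluentd configuration), a query in Kibana's search bar could look like this:

kubernetes.namespace_name:"kube-system" AND kubernetes.pod_name:*kibana*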

You can collaborate on this recipe on GitHub.