Tweaking an EFK stack on Kubernetes
This is the continuation of my last post about EFK on Kubernetes. In this post we will mainly focus on configuring Fluentd and Fluent Bit, but there will also be a Kibana tweak with the Logtrail plugin.
Configuring Fluentd
This part and the next one share the same goal, but this one focuses on Fluentd and the next on Fluent Bit. Our goal is to create a configuration that separates the logs of different namespaces and selects which containers to log based on their labels.
Configure the DaemonSet
The first thing we need to do is change Fluentd’s DaemonSet. If we use the one provided by Fluentd, the configuration file is hardcoded into the image and it is not very simple to change. So we will create a Kubernetes ConfigMap and mount it in the /fluentd/etc folder. If you have RBAC enabled, and you should, don’t forget to configure it for Fluentd.
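Here is a minimal RBAC sketch (the fluentd ServiceAccount name and the kube-system namespace are my assumptions, adapt them to your cluster):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups: [""]
  # The Kubernetes metadata plugin needs to read pods and namespaces cluster-wide
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
```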
Now let’s look at the DaemonSet.
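The following sketch is based on the upstream fluent/fluentd-kubernetes-daemonset manifest, with our ConfigMap mounted at /fluentd/etc (the image tag and the Elasticsearch host are assumptions on my part, adjust them to your setup):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        # Pick a v1 Elasticsearch image variant; this tag is an example
        image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        # Set even if unused, see the note below about sed
        - name: FLUENT_ELASTICSEARCH_USER
          value: "null"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "null"
        volumeMounts:
        # Our configuration, instead of the one baked into the image
        - name: fluentd-config
          mountPath: /fluentd/etc
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: fluentd-config
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```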
Note that we are using Fluentd v1. Some configurations will not work on v0.12!
You may wonder why I added FLUENT_ELASTICSEARCH_USER and FLUENT_ELASTICSEARCH_PASSWORD. It is because the Docker image fluent/fluentd-kubernetes-daemonset runs sed on the configuration file if these environment variables are not set, and since the ConfigMap is mounted read-only the container would fail to start. We could change the base image of the DaemonSet, but adding these two lines is simpler and doesn’t hurt.
With the DaemonSet created, we can now focus on our fluentd-config ConfigMap.
Creating the ConfigMap
Here is a basic Fluentd configuration for Kubernetes (you can learn more about configuring Fluentd in their documentation).
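A sketch of the ConfigMap; the Elasticsearch host and port are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    # Tail the container log files written on each node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>

    # Enrich each record with pod, namespace and label metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    # Ship everything to Elasticsearch
    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
    </match>
```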
The Kubernetes metadata plugin is already installed in the Docker image we use.
This configuration does roughly the same as the one provided by Fluentd. Now if, for instance, you do not want to send the kube-system containers’ logs, you can add a few lines before the Elasticsearch output.
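Something like this, using the grep filter (the $.kubernetes.namespace_name record accessor syntax requires Fluentd v1):

```
<filter kubernetes.**>
  @type grep
  <exclude>
    # namespace_name is added by the kubernetes_metadata filter
    key $.kubernetes.namespace_name
    pattern ^kube-system$
  </exclude>
</filter>
```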
Split the logs according to namespace
Let’s assume you want to separate your logs depending on the container’s namespace. For instance, you could send the logs from the dev namespace to one Elasticsearch cluster and the logs from the production namespace to another. To achieve this we will use the rewrite tag filter. After the metadata plugin, we could add a rule like the one below.
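A sketch, assuming v2 of fluent-plugin-rewrite-tag-filter (needed for the $. record accessor syntax); it prefixes each event’s tag with its namespace name:

```
<match kubernetes.**>
  @type rewrite_tag_filter
  <rule>
    key $.kubernetes.namespace_name
    pattern ^(.+)$
    # e.g. kubernetes.var.log.containers... becomes dev.kubernetes.var.log.containers...
    tag $1.${tag}
  </rule>
</match>
```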
And then we could have something like this for the outputs.
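For example, with two hypothetical Elasticsearch hosts, elasticsearch-dev and elasticsearch-production:

```
<match dev.**>
  @type elasticsearch
  host elasticsearch-dev
  port 9200
  logstash_format true
</match>

<match production.**>
  @type elasticsearch
  host elasticsearch-production
  port 9200
  logstash_format true
</match>
```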
It’s just an example, let your imagination do the rest :)!
Select which containers you want to log
Now we want to select which containers to log and which ones not to. This is possible with the grep filter (it will only work on Fluentd v1, since nested keys do not seem to work on v0.12).
The idea here is to add a label to the containers you want to log, or to the ones you don’t. There are two approaches: either we label all the containers we want to log, or the ones we don’t want to log.
For instance, if we add fluentd: "true" as a label on the containers we want to log, we then need to add the following filter.
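A sketch using the grep filter’s regexp section (the kubernetes_metadata filter exposes pod labels under kubernetes.labels):

```
<filter kubernetes.**>
  @type grep
  <regexp>
    # Keep only records whose pod has the label fluentd: "true"
    key $.kubernetes.labels.fluentd
    pattern ^true$
  </regexp>
</filter>
```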
Or, similarly, if we add fluentd: "false" as a label on the containers we don’t want to log, we would add the following.
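The mirror image of the previous filter, with an exclude section:

```
<filter kubernetes.**>
  @type grep
  <exclude>
    # Drop records whose pod has the label fluentd: "false"
    key $.kubernetes.labels.fluentd
    pattern ^false$
  </exclude>
</filter>
```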
And that’s it for the Fluentd configuration. Again, if you want more configuration options, check the documentation of Fluentd and of the plugins we used.
Configuring Fluent Bit
Unfortunately, configuring Fluent Bit to do exactly what we just did with Fluentd is not (yet?) possible. One way to achieve it would be to forward from Fluent Bit to a Fluentd aggregator, but I will not cover that here. You can find some information about it in the fluent GitHub repo.
Let’s tweak Kibana a bit with the Logtrail plugin
Logtrail is a plugin for Kibana to view, analyze, search and tail log events from multiple hosts in real time, with a DevOps-friendly interface inspired by Papertrail.
First we need to install the plugin (Kibana 5.x and 6.x only). To install it, you’ll need the URL of a Logtrail release; you can check them here. Make sure to take the URL corresponding to your Kibana version.
Now you can build the image with the Logtrail plugin yourself (assuming you want Kibana 6.2.4).
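A minimal Dockerfile sketch; the Logtrail release URL is a placeholder you must fill in from the releases page:

```dockerfile
FROM docker.elastic.co/kibana/kibana:6.2.4

# Replace <release> and <version> with the Logtrail release matching Kibana 6.2.4
RUN bin/kibana-plugin install https://github.com/sivasamyk/logtrail/releases/download/<release>/logtrail-6.2.4-<version>.zip
```

Then build it with something like docker build -t kibana-logtrail:6.2.4 .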
Or pull the image from my Docker Hub: sh4d1/kibana-logtrail (I only have the 6.2.4 tag).
The next step is to configure Logtrail, and for that we will use a ConfigMap. Here are the ConfigMap and the Deployment for Kibana.
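A sketch of both; the index pattern, field mapping and Elasticsearch URL are assumptions, and I mount logtrail.json over the file installed by the plugin:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logtrail-config
  namespace: kube-system
data:
  logtrail.json: |
    {
      "version": 1,
      "index_patterns": [
        {
          "es": {
            "default_index": "logstash-*"
          },
          "tail_interval_in_seconds": 10,
          "display_timezone": "local",
          "display_timestamp_format": "MMM DD HH:mm:ss",
          "max_buckets": 500,
          "default_time_range_in_days": 0,
          "max_hosts": 100,
          "max_events_to_keep_in_viewer": 5000,
          "fields": {
            "mapping": {
              "timestamp": "@timestamp",
              "hostname": "kubernetes.host",
              "program": "kubernetes.pod_name",
              "message": "log"
            },
            "message_format": "{{{log}}}"
          }
        }
      ]
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: sh4d1/kibana-logtrail:6.2.4
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
        volumeMounts:
        # Override the default logtrail.json shipped with the plugin
        - name: logtrail-config
          mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
          subPath: logtrail.json
      volumes:
      - name: logtrail-config
        configMap:
          name: logtrail-config
```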
So let’s take a look at Logtrail’s configuration. The first important setting is default_index; it must be set to the index used by Elasticsearch.
Then the important part is the fields section. Logs will be displayed like this:
timestamp hostname program:message
The message part is defined by message_format. We could put something like {{{docker.container_id}}}: {{{log}}}.
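For example, the fields section of logtrail.json could look like this (the mapping values are assumptions based on what the kubernetes_metadata filter adds to each record):

```json
"fields": {
  "mapping": {
    "timestamp": "@timestamp",
    "hostname": "kubernetes.host",
    "program": "kubernetes.pod_name",
    "message": "log"
  },
  "message_format": "{{{docker.container_id}}}: {{{log}}}"
}
```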
For further configuration options, you can check sivasamyk’s repository.
If you have any questions, feel free to send me an email or contact me on the Docker Slack community @Sh4d1.