How to set up your monitoring environment with ELK

yuri bartochevis · Apr 17, 2021

Introduction:

Who nowadays hasn't heard about distributed systems with a single responsibility, built to be scalable and easily deployable (microservices)?
However, those advantages come at a price: how can we follow a user's steps across all of those systems? For that question, I will give you a quick answer: ELK.

What is ELK?

ELK is a famous acronym for Elasticsearch, Logstash, and Kibana; if you have never heard of them before, please take a look here.

The example project:

We will create a full monitoring environment using the ELK stack to aggregate logs from different applications, and for this we're going to use a sample project written in Kotlin with Ktor.

Hands-on:

First, we will need to create our ELK structure using Docker, so let's create a docker-compose.yml file with the content below.

version: "3"
services:
logstash:
hostname: logstash
image: docker.elastic.co/logstash/logstash-oss:6.6.2
container_name: logstash
ports:
- 9600:9600
- 8089:8089
- 4560:4560
depends_on:
- elasticsearch
links:
- elasticsearch
elasticsearch:
hostname: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.1
container_name: elasticsearch
restart: unless-stopped
environment:
discovery.type: single-node
ports:
- 9200:9200
kibana:
hostname: kibana
image: docker.elastic.co/kibana/kibana-oss:7.6.1
container_name: kibana
restart: unless-stopped
ports:
- 5601:5601
depends_on:
- elasticsearch
links:
- elasticsearch
environment:
ELASTICSEARCH_HOSTS: http://elasticsearch:9200

Briefly explaining the attributes:

ports: maps a port on our local machine to a port inside the container
depends_on: controls startup order, so the container starts only after its dependency has started (note it does not wait for the dependency to be ready)
links: makes sure all those containers can reach each other by name on the same "network" (on recent Compose versions, the default network already provides this)

As we can see so far, running docker-compose up will bring all the containers up. However, we haven't configured Logstash to send anything to our Elasticsearch yet.
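Once the stack is running, it's worth a quick sanity check that Elasticsearch answers on the port we mapped. Opening http://localhost:9200 in a browser works, or, sticking with Kotlin, a throwaway snippet like this:

import java.net.URL

fun main() {
    // Elasticsearch listens on 9200, the port we mapped in docker-compose.yml.
    // A healthy node replies with a small JSON document describing itself.
    println(URL("http://localhost:9200").readText())
}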

The idea here is to create a folder to hold the Logstash config files (I'll use ./logstash/pipeline, which we'll mount into the container in a moment) and then create a file there called logstash.conf:

input {
  tcp {
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}

We're telling Logstash that everything received via TCP on port 4560 should be written to Elasticsearch.

Quick reminder: in "elasticsearch:9200", elasticsearch is the DNS name of the Elasticsearch container inside our Docker network.

Right now, our Logstash container is still not able to find this config file; we need to declare a volume so the container can reach it. Let's create the volume:

...
  logstash:
    hostname: logstash
    image: docker.elastic.co/logstash/logstash-oss:7.6.1
    container_name: logstash
    ports:
      - 9600:9600
      - 8089:8089
      - 4560:4560
    volumes:
      - ./logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
...
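With the pipeline file mounted (and the stack restarted), you can smoke-test Logstash before writing any application code. The json_lines codec expects one JSON document per line, so opening a TCP connection to port 4560 and writing a single line is enough. A minimal Kotlin sketch (the message content is arbitrary):

import java.net.Socket

fun main() {
    // json_lines means one JSON document per line, terminated by a newline.
    Socket("localhost", 4560).use { socket ->
        socket.getOutputStream().bufferedWriter().use { out ->
            out.write("""{"message": "hello from a smoke test"}""")
            out.newLine()
        }
    }
}

If everything is wired up, the event lands in Elasticsearch a few seconds later (by default under a logstash-* index).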

Now our monitoring environment is completely configured. However, we haven't written a single line of code that produces real logs. For that, I'll be using the Ktor template to generate a Ktor application with Logback.

For this example, I'll use the main module to create dummy logs on a schedule, with a helper library to generate the dummy text:

implementation 'com.thedeanda:lorem:2.1'
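The Lorem API is tiny; assuming the standard com.thedeanda:lorem interface, the two calls used below behave like this:

import com.thedeanda.lorem.Lorem
import com.thedeanda.lorem.LoremIpsum

fun main() {
    val lorem: Lorem = LoremIpsum.getInstance()
    println(lorem.getWords(5, 10)) // between 5 and 10 random lorem-ipsum words
    println(lorem.getWords(3))     // exactly 3 random words
}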

This is my Application.kt:

import com.thedeanda.lorem.Lorem
import com.thedeanda.lorem.LoremIpsum
import io.ktor.application.*
import io.ktor.features.CallLogging
import io.ktor.request.path
import java.util.concurrent.TimeUnit
import org.slf4j.event.Level

fun main(args: Array<String>) {
    io.ktor.server.netty.EngineMain.main(args)
}

@Suppress("unused") // Referenced in application.conf
@kotlin.jvm.JvmOverloads
fun Application.module(testing: Boolean = false) {
    install(CallLogging) {
        level = Level.INFO
        filter { call -> call.request.path().startsWith("/") }
    }
    val lorem: Lorem = LoremIpsum.getInstance()
    // Emit dummy messages at every log level every 3 seconds.
    // Scheduler and Every are helpers from the sample project (sketched below).
    Scheduler {
        log.trace(lorem.getWords(5, 10))
        log.debug(lorem.getWords(5, 10))
        log.info(lorem.getWords(5, 10))
        log.warn(lorem.getWords(5, 10))
    }.scheduleExecution(Every(3, TimeUnit.SECONDS))
    // Emit an error with a stack trace once a minute.
    Scheduler {
        log.error(lorem.getWords(3), Exception("Random Exception"))
    }.scheduleExecution(Every(60, TimeUnit.SECONDS))
}
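One caveat: Scheduler and Every are not Ktor or standard-library types; they're small helpers from the sample project. If you're following along without the source, here's a rough sketch of what they could look like, inferred purely from how they're used above (the names and signatures are my assumptions):

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Hypothetical helpers matching the usage above; the real project may differ.
data class Every(val interval: Int, val unit: TimeUnit)

class Scheduler(private val task: () -> Unit) {
    private val executor = Executors.newSingleThreadScheduledExecutor()

    // Run the task repeatedly at a fixed rate, starting immediately.
    fun scheduleExecution(every: Every) {
        executor.scheduleAtFixedRate({ task() }, 0, every.interval.toLong(), every.unit)
    }
}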

As you can see, by running this application locally, you will see the logs flowing through the terminal.

So far, do you remember what is missing? We're missing the bridge between our Docker infrastructure and our application: how do we make the TCP connection between the application and Logstash? This is where Logback comes in.

Inside our application, in the resources folder, we already have a file called logback.xml.

We will need to add a TCP appender there, plus a couple of auxiliary libraries so Logback can send the logs as JSON.

implementation group: 'ch.qos.logback.contrib', name: 'logback-jackson', version: '0.1.5'
implementation group: 'ch.qos.logback.contrib', name: 'logback-json-classic', version: '0.1.5'
implementation 'net.logstash.logback:logstash-logback-encoder:6.1'

After importing all the dependencies, we need to declare a new appender and reference it in the <root> tag.

...
<appender name="STASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>${LOGSTASH_HOST:-localhost}:4560</destination>
    <!-- encoder is required -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    <keepAliveDuration>5 minutes</keepAliveDuration>
</appender>
<root level="trace">
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="STASH"/>
</root>
...

As you can see, we use the LOGSTASH_HOST environment variable (with localhost as the fallback) to switch the destination between running locally and running inside the Docker network.

After that, run the application with the environment variable set:
LOGSTASH_HOST=localhost

Now, to make sure we're running the latest version of our ELK docker-compose file, execute the commands below (careful: the first one kills every running container on your machine):

$ docker kill $(docker ps -q)
$ docker-compose up -d

After that, we can run our application locally and open the Kibana web page at http://localhost:5601.


Click the Discover button and create a new index pattern with a wildcard (*). Do not do that in a production environment; there you'd want a narrower pattern such as logstash-*, which is where Logstash writes by default.

Once you have done this, click Discover again and you will see all the logs from our application.

Now you're able to browse the logs in Kibana, which makes life much easier whenever you need to investigate problems in the application.

I hope I could give you a brief explanation of how to configure the basics of ELK. In case you would like to see the code, here is the source I used during this tutorial.
