Elasticsearch and Kibana are powerful tools: thanks to the Lucene engine, they can store and query business data with good performance. Together they are an interesting option for storing application logs. Elasticsearch provides APIs to manage data, and Kibana offers a friendly grid and dashboards to visualize it. Some loggers, like Serilog, can send logs directly to one or more Elasticsearch instances.

In this article we will discover how to "containerize" Elasticsearch and Kibana using Docker, then how to use an Elasticsearch instance to store the logs of an ASP.NET Core application. In a second article, we will discover how to host the log engine in Azure.

 

Containerized solution vs classic install process

It is of course possible to install Elasticsearch and Kibana directly on a Linux or Windows machine, and it works well. However, the setup is quite long (and not fun at all!). Our goal is to create a log server quickly: using Docker, we can describe the whole log engine in one file and bring it up with two commands.

 

Target architecture

We are going to compare two architectures: a basic one (v1) and a secured one (v2).
 

Architecture V1

We can build our log server using two containers:

  • Kibana
    • Internal port: 5601
    • External port: 80
    • Image: docker.elastic.co/kibana/kibana:6.1.2
    • Configuration: no specific configuration; the container just needs to be able to reach the Elasticsearch container (this is mandatory, since Kibana queries the Elasticsearch instance to show data)
  • Elasticsearch
    • Internal port: 9200
    • External port: 9200
    • Image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
    • Configuration: it uses a configuration file named elasticsearch.yml. We can create one and copy it into the container to override the default configuration.

Both containers must be in the same network.
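
To make the wiring concrete, here is a minimal sketch of how these two containers could be started by hand (the network name and container names are illustrative; the docker-compose approach later in this article replaces all of this):

docker network create lognet

# single-node Elasticsearch, exposed on port 9200
docker run -d --name elasticsearch --network lognet \
  -p 9200:9200 -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.3.2

# Kibana, exposed on port 80, pointing at the Elasticsearch container by name
docker run -d --name kibana --network lognet \
  -p 80:5601 -e "ELASTICSEARCH_URL=http://elasticsearch:9200" \
  docker.elastic.co/kibana/kibana:6.1.2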

 

Schema

 

This architecture would work perfectly, but it is not secured at all: it exposes ports 80 and 9200 to the world (bad practice!). Out of the box, Elasticsearch is free but unsecured, open on port 9200. To secure it we have two options:

  • Install a plugin (e.g. X-Pack), which is not free
  • Configure a custom authorization process at the container host level

The second option can be implemented with an Nginx container acting as the gateway / proxy between the host's public port(s) and the Elasticsearch and Kibana containers. With Nginx we can secure access to the routes using Basic Authentication.

 

Architecture V2

In the second architecture, the log server uses three containers:

  • Kibana
    • Internal port: 5601
    • Base Image: docker.elastic.co/kibana/kibana:6.1.2
  • Elasticsearch
    • Internal port: 9200
    • Base Image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
  • Nginx
    • Internal port: 80
    • External port: 8080
    • Base Image: nginx
    • Configuration: two files are required: an nginx.conf file containing the reverse proxy configuration and an .htpasswd file storing the login / password used for Basic Authentication

 

Schema

 

Data management

Before building the Dockerfiles, we have to decide how we will manage Elasticsearch data. In the official Docker image, Elasticsearch stores its data in the /usr/share/elasticsearch/data folder (on a classic Ubuntu or Debian package install it would live under /var/lib/elasticsearch, as described in the documentation). This data is important because it can be production logs: we cannot afford to lose it if the container stops working for any reason. The solution is to mount this data directory in a Docker volume, which persists the data on the host machine. In this article we will use the Docker local driver to manage the volume, but the data could also be stored in an Azure File Share with an appropriate volume driver. That works well too!
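
As a quick illustration of named volumes (docker-compose will actually create the esdata volume for us, as we will see later), a volume can be created and inspected with the Docker CLI:

# create a named volume managed by the local driver
docker volume create esdata
# show its mount point on the host (under /var/lib/docker/volumes/ with the local driver)
docker volume inspect esdata
# list all volumes
docker volume ls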

 

Containers definition

We are going to work in a single parent directory. In this parent directory, each container has its own directory:

 

Elastic Search container

The elasticsearch container configuration is quite simple: it just contains a Dockerfile, which pulls the official Elasticsearch Docker image.

Here is the folder view:

 

Here is the Dockerfile content:


FROM docker.elastic.co/elasticsearch/elasticsearch:6.3.2

 

Kibana container

Like the first one, the Kibana container definition is not complicated; the folder contains a single Dockerfile too.

 

Here is the folder view:

 

Here is the Dockerfile content:

 
FROM docker.elastic.co/kibana/kibana:6.1.2

 

Nginx container

The proxy container definition is more interesting because the container has two responsibilities:

  1. Define friendly routes to access the Elasticsearch and Kibana containers
  2. Apply Basic Auth to restrict access to the containers

 

In the nginx folder we find three files:

  • Dockerfile
  • nginx.conf: the Nginx configuration, which defines the reverse proxy rules
  • .htpasswd: the Basic Auth credentials (login & password)

 

Here is the folder view:

Dockerfile

The Dockerfile does the following jobs:

  1. Pulls the nginx image
  2. Copies the nginx.conf file to /etc/nginx/nginx.conf in the container
  3. Copies the .htpasswd file to /etc/nginx/.htpasswd in the container

 
Here is the Dockerfile content:

FROM nginx

COPY nginx.conf /etc/nginx/nginx.conf

COPY .htpasswd /etc/nginx/.htpasswd

 

Nginx configuration

The nginx.conf file is the heart of the architecture: it configures the routing rules to the Elasticsearch and Kibana containers and adds a security layer with Basic Auth.
 
Here is the nginx.conf content:

events {
  worker_connections 2048;
}

http {

  upstream docker-kibana {
    server kibana:5601;
  }

  upstream docker-elasticsearch {
    server elasticsearch:9200;
  }

  server {
    listen 80;

    location / {
      proxy_pass           http://docker-kibana;
      auth_basic           "Access limited";
      auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location /api/es {
      rewrite ^/api/es(.*) /$1 break;
      proxy_pass           http://docker-elasticsearch;
      auth_basic           "Access limited";
      auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location = /favicon.ico {
      log_not_found off;
    }
  }
}

 

First we create two upstream blocks, which define where the reverse proxy will forward requests. Requests coming from the host need to be routed to the Elasticsearch and Kibana containers, which is why we create one upstream per target container. The routing happens inside the containers' network, so we have to reference each target container by name with its internal port:

  • kibana:5601
  • elasticsearch:9200

 

Then we define the server configuration: the port to listen on (80) and the routes we want to use to reach the upstreams. Each route is defined with a location block. In each location block we have to set up:

  • The upstream target, with the proxy_pass directive
  • The Basic Auth configuration, using the auth_basic and auth_basic_user_file directives
    • auth_basic_user_file: tells Nginx where the Basic Auth credentials file is located
    • auth_basic: defines the message shown on the authentication popup

 

.htpasswd

This file contains Basic Auth credentials.
 

Here is the .htpasswd content:


tranise:$apr1$SDzAe541$QOrVvoHMR0BLPGDDxiOSM0

For test purposes, you can generate credentials with an online htpasswd generator. In this example, the hashed password is "test".
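
Alternatively, the file can be generated locally with the htpasswd tool from the apache2-utils package (the user name below is just the example one used in this article):

# on Debian/Ubuntu
sudo apt-get install apache2-utils
# create the file with a first user; you will be prompted for the password
htpasswd -c .htpasswd tranise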

 

Docker compose

Now that each container definition is in place, we can build the log engine using a docker-compose file.
 
The docker-compose file defines how the containers work together. It contains the following elements:

  • A network called "net", the common network used by the containers
  • A volume called "esdata", managed by the local driver
  • Three services
    • elasticsearch
      • Uses the "net" network
      • Mounts the "esdata" volume on its container folder /usr/share/elasticsearch/data
      • Tells the Elasticsearch engine, through an environment setting, that it is a single-node cluster
    • kibana
      • Uses the "net" network
      • Defines two environment variables to set the server name and the Elasticsearch endpoint to use
    • proxy
      • Uses the "net" network
      • Defines a port mapping: port 8080 on the host is routed to port 80 of the container. This is the log engine entry point

 

Here is the docker-compose.yml file:

version: '3'

services:
  elasticsearch:
    build: ./elasticsearch
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
    networks:
      - net
    volumes:
      - esdata:/usr/share/elasticsearch/data

  kibana:
    build: ./kibana
    container_name: kibana
    environment:
      SERVER_NAME: kibana
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - net

  proxy:
    build: ./nginx
    container_name: proxy
    ports:
      - "8080:80"
    networks:
      - net

volumes:
  esdata:
    driver: local

networks:
  net:
 

Run the log engine

The log engine architecture is ready; we can build and run it with Docker. From the folder containing docker-compose.yml, we only need two commands:

  • docker-compose build
  • docker-compose up
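
For reference, a typical session looks like this (the -d flag keeps the containers running in the background):

# build the three images defined in docker-compose.yml
docker-compose build
# start the containers in the background
docker-compose up -d
# check that the elasticsearch, kibana and proxy containers are up
docker-compose ps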

When browsing to http://localhost:8080, we get blocked by the Basic Auth prompt:
 

Once the credentials are entered (tranise / test in my case), we can access Kibana on port 8080:

 

We can access the Elasticsearch API using the /api/es route:
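
We can also check both the Basic Auth layer and the Elasticsearch route from the command line, using the test credentials from this article:

# without credentials: the proxy should answer 401 Unauthorized
curl -I http://localhost:8080/api/es
# with credentials: the route is rewritten to / and Elasticsearch returns its cluster banner
curl -u tranise:test http://localhost:8080/api/es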

Use the log engine from an ASP.NET Core app

A simple way to use an Elasticsearch instance as a log endpoint is Serilog. This logger can easily be added to the ASP.NET Core logging stack.
 
In the ASP.NET Core project, we have to install two NuGet packages:

  • Serilog.Extensions.Logging
  • Serilog.Sinks.Elasticsearch
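
With the .NET CLI, the packages can be added like this (the RollingFile sink used in the configuration snippet below also needs its own package, Serilog.Sinks.RollingFile):

dotnet add package Serilog.Extensions.Logging
dotnet add package Serilog.Sinks.Elasticsearch
dotnet add package Serilog.Sinks.RollingFile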

 
The logger configuration is done in the Startup.cs file, more precisely in the Startup(IHostingEnvironment env) method:

We create a Logger instance pointing to the local Elasticsearch instance (using the Basic Auth information):

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .Enrich.FromLogContext()
    // local rolling file, useful as a fallback if Elasticsearch is unreachable
    .WriteTo.RollingFile(Path.Combine(env.ContentRootPath, "Logs", "log-{Date}.txt"))
    // Elasticsearch sink, going through the Nginx proxy route /api/es
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:8080/api/es"))
    {
        AutoRegisterTemplate = true,
        IndexFormat = "testlogs-{0:yyyy.MM.dd}",
        TypeName = "logs-api",
        ModifyConnectionSettings = x => x.BasicAuthentication("tranise", "*********")
    })
    .CreateLogger();

* Of course, the credentials must be stored in a configuration file or in a secure store like Azure Key Vault.

 

Then, in the Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) method, we have to add the Serilog logger to the ASP.NET Core logger collection; only one line of code is necessary:


loggerFactory.AddSerilog();
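
Once Serilog is plugged into the logger factory, any ILogger<T> injected by ASP.NET Core also writes to Elasticsearch. Here is a minimal, hypothetical controller showing this (class, route and message are illustrative):

using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // this entry ends up in the rolling file and in the testlogs-* indexes
        _logger.LogInformation("Values requested at {Time}", DateTime.UtcNow);
        return Ok(new[] { "value1", "value2" });
    }
}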

 
We can now use our log engine locally from an ASP.NET Core application. We just need to create a Kibana index pattern matching the "testlogs-{0:yyyy.MM.dd}" index format (for example testlogs-*) to explore the logs.

Using the log engine locally is a nice first step, but the final goal is to host it on a remote machine. We will discover how to host it in Azure in my next article!

 

Happy coding 🙂

