Some weeks ago, I worked on a containerized log server based on Elastic Search and Kibana. The first goal is to be able to deploy a log server in a couple of minutes. In my last posts, we succeeded in creating and deploying this log server on an Azure Virtual Machine, persisting logs in an Azure File Share.

Before continuing with this blog post, you may want to read the previous post, which explains how we built and deployed the log server in a local environment.

 

IaaS hosting

IaaS hosting works well, of course. We have full control over the host machine, and we just have to log in to the remote machine to manage containers and debug if necessary. However, a PaaS or serverless solution would be easier to manage and a better fit for scaling out.

Our log server is based on Docker, and in Azure four services are able to host Docker containers without us managing the infrastructure: Azure App Service, Azure Container Instances, Azure Kubernetes Service, and Azure Service Fabric Mesh. In this article we are going to focus on Docker image management in Azure. In two future articles, we will explore how to run our log server in serverless services: Azure Service Fabric Mesh (ASFM) and Azure Container Instances (ACI).

 

ASFM and ACI common deployment workflow

To publish containers in ASFM and ACI, the deployment workflow is nearly identical. We first need to push Docker images to a Docker registry, and then reference those images in an ARM template, which is responsible for creating the containers in the target Azure service. The ARM templates for ASFM and ACI will differ because different Azure resources will be deployed.
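As a rough sketch of that second step (the template file name here is just a placeholder, the real templates will come in the next articles), deploying an ARM template into a resource group with the Azure CLI looks like this:

az group deployment create -g $resourceGroupName --template-file template.json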

 

Store Docker images in an Azure Container Registry (ACR)

ACR is a managed Docker registry in Azure. It basically does the same job as every Docker registry: store and deliver Docker images. However, it is an Azure resource, so we can take advantage of controlling access to it using RBAC, plus additional features like geo-replication with the Premium pricing tier. Here is the documentation.

 

Create the ACR instance

We can create an ACR instance using the Azure CLI, more precisely the az acr create command. The following script creates a resource group with an ACR instance:


$resourceGroupName = ''
$acrName = ''

az group create -n $resourceGroupName -l westeurope
az acr create -g $resourceGroupName -n $acrName --sku Basic --admin-enabled

With the az acr create command, we use the Basic SKU. The --admin-enabled flag creates an admin account that we will use to connect to the ACR instance with the docker login command.
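To quickly check that the registry was created and to retrieve its login server URL, we can query it with the Azure CLI (same variables as in the script above):

az acr show -g $resourceGroupName -n $acrName --query loginServer --output tsv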

 

In the Azure Portal, we can retrieve the username and the password of this admin account:

In this screenshot we can see the admin-account credentials of my registry named traniseRegistry.
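If you prefer staying in the terminal, the same admin credentials can also be retrieved with the Azure CLI:

az acr credential show -n $acrName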

 

Push local Docker images to ACR

It’s time to publish our local Docker images to the ACR instance. In my GitHub repository, you can find the container definitions for Nginx, Kibana and Elastic Search. The “serverless” folder in the repo contains a folder named “log_server”, which in turn contains one folder per container. We have to build the Docker image for each container and push it to ACR.
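Based on the folder names used in the rest of this post, the layout looks roughly like this (each folder holding the Dockerfile of one container):

serverless/
  log_server/
    elasticsearch/
    kibana/
    nginx/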

 

To push a Docker image to an ACR instance, we first need to log in to the instance using the Docker CLI and the login command (the username and the password retrieved in the previous part will be useful here!):

docker login traniseregistry.azurecr.io
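As an alternative, if you are already signed in with the Azure CLI, the az acr login command performs this docker login for you without typing the admin credentials:

az acr login -n $acrName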

 

We have three folders, and for each of them we need to do the following steps:

  1. Build the Docker image
  2. Tag the image targeting the remote registry
  3. Push the image in the remote store

 

The first and second steps can be combined with the docker build command and the --tag parameter.

docker build -t {registryuri}/{repositoryname}:{tag} .

With the -t parameter, we specify the registry URL, the repository name and the tag for the current image. The trailing dot tells the CLI to use the Dockerfile stored in the current folder.

 

Once the image is built, we can use the docker push command using the tag we just created:

docker push {registryuri}/{repositoryname}:{tag}

 

For example, if I go into the kibana folder, I will use these two commands:

docker build -t traniseregistry.azurecr.io/es_kibana:dev .
docker push traniseregistry.azurecr.io/es_kibana:dev

 

In the elasticsearch folder:

docker build -t traniseregistry.azurecr.io/es_elasticsearch:dev .
docker push traniseregistry.azurecr.io/es_elasticsearch:dev

 

Finally, in the nginx folder:

docker build -t traniseregistry.azurecr.io/es_proxy:dev .
docker push traniseregistry.azurecr.io/es_proxy:dev
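Since the three folders follow exactly the same pattern, the whole build-and-push sequence can also be scripted. Here is a small convenience sketch (run from the log_server folder; the folder-to-repository mapping simply mirrors the commands above):

# map each folder to its ACR repository name
$images = @{ "elasticsearch" = "es_elasticsearch"; "kibana" = "es_kibana"; "nginx" = "es_proxy" }

foreach ($folder in $images.Keys) {
    $tag = "traniseregistry.azurecr.io/$($images[$folder]):dev"
    docker build -t $tag ./$folder    # build using the Dockerfile in the folder
    docker push $tag                  # push the tagged image to ACR
}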

 

Three repositories will be created in the registry; here is a screenshot of mine:

Each target image got the “dev” tag, and they are now accessible using the following paths:

  • traniseregistry.azurecr.io/es_elasticsearch:dev
  • traniseregistry.azurecr.io/es_kibana:dev
  • traniseregistry.azurecr.io/es_proxy:dev
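We can double-check what landed in the registry directly from the CLI, listing the repositories and the tags of one of them:

az acr repository list -n $acrName --output table
az acr repository show-tags -n $acrName --repository es_kibana --output table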

 

Now that the images are stored in an ACR instance, we can easily use them from ARM templates to deploy our log server in ACI or ASFM. We will first discover how to use Service Fabric Mesh in the next article.

Happy coding 🙂

