In my last article, we deployed a containerized log server in an Azure Virtual Machine. In this article, we are going to update the deployment script to store logs in an Azure File Share.

This article is the third in a series named « Create a log engine using Docker, Elastic Search, Kibana and Nginx »; I assume you have already read the previous ones.


Using the Docker “cloudstor” plugin

With the basic version of our log server, logs are stored on the virtual machine's OS disk. A first improvement would be to persist the data on a virtual machine data disk. A second option is to store log data outside the virtual machine entirely, using a Docker plugin named “cloudstor”. This plugin lets us mount a Docker volume on a storage service in Azure or Amazon. If we later want to run our log server on another cloud provider, we can do so easily by modifying the cloudstor plugin configuration.


Target Azure architecture

(architecture diagram)

Script updates

We need to add two new variables and two new functions to the existing script.

Two new variables (storage account information)

The main difference from the first version of the script is that we are going to create and use an Azure Storage account with an Azure file share. That’s why we add two variables at the beginning of the script: storageAccountName and storageAccountFileShareName.

Here are all the variables used in the deployment script:


rgName=''
vmName=''
vmUser=''
vmLocation=''
vmOpenPort=80
storageAccountName=""
storageAccountFileShareName=""
passwordHash=''
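
Since the storage account name must be globally unique and follow Azure naming rules (3 to 24 characters, lowercase letters and digits only), a quick sanity check can be run before launching the script. This is a minimal sketch; the helper function is hypothetical and not part of the original script:

```shell
# Hypothetical helper: check that a storage account name matches
# Azure's naming rules (3-24 chars, lowercase letters and digits).
# Note: this does not check global uniqueness, only the format.
is_valid_storage_account_name() {
  echo "$1" | grep -Eq '^[a-z0-9]{3,24}$'
}

is_valid_storage_account_name "mylogstorage01" && echo "valid"
is_valid_storage_account_name "My_Log_Storage" || echo "invalid"
```

If the name is rejected, `az storage account create` would fail anyway, but catching it up front saves a round trip to Azure.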


Create the storage account and the file share

First, let’s create a function that creates a storage account and an Azure File share. This file share will be used by the cloudstor plugin to store the log data.


create_AzureStorageAndFileShare(){
 echo -- create the storage account
 az storage account create -n $storageAccountName -g $rgName -l $vmLocation --sku Standard_LRS
 storageConnexionString=$(az storage account show-connection-string -n $storageAccountName -g $rgName --query 'connectionString' -o tsv)
 storagePrimaryKey=$(az storage account keys list --resource-group $rgName --account-name $storageAccountName --query "[0].value" | tr -d '"')

 echo -- create the file share
 az storage share create --name $storageAccountFileShareName --connection-string $storageConnexionString
}

We use the az storage account create command to create the storage account (the account name must be globally unique). Then we store the storage account connection string and the primary access key in local variables. Finally, we use the connection string with the az storage share create command, which creates the Azure File Share in the storage account; the access key will be used later, when installing the cloudstor plugin.
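For reference, the connection string returned by az storage account show-connection-string is a simple list of key=value pairs separated by semicolons, so the account name and key could also be parsed out of it directly with shell parameter expansion. A sketch with sample (fake) values:

```shell
# Sample connection string (fake credentials, for illustration only):
connStr='DefaultEndpointsProtocol=https;AccountName=mylogstorage01;AccountKey=abc123==;EndpointSuffix=core.windows.net'

# Strip everything up to and including the field name,
# then cut at the next ';' to isolate the value:
accountName=${connStr#*AccountName=}; accountName=${accountName%%;*}
accountKey=${connStr#*AccountKey=};   accountKey=${accountKey%%;*}

echo "$accountName"   # mylogstorage01
echo "$accountKey"    # abc123==
```

In the script we simply query the key with az storage account keys list instead, but this shows what the connection string actually contains.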


Install the Cloudstor plugin

Once the storage account is created, we can install the cloudstor plugin.


install_CloudstorePlugin (){
 echo install cloudstor plugin
 az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "sudo docker plugin install docker4x/cloudstor:17.05.0-ce-azure2 --grant-all-permissions --alias cloudstor:azure CLOUD_PLATFORM=AZURE AZURE_STORAGE_ACCOUNT="$storageAccountName" AZURE_STORAGE_ACCOUNT_KEY="$storagePrimaryKey""
}

To install the plugin, we use the docker plugin install command, specifying the target cloud platform and the storage account information (account name and access key).
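Once the plugin is installed on the VM, it can back named Docker volumes. For illustration, here is how the corresponding docker volume create command could be assembled; we only print the command, since it would have to run on the VM (via az vm run-command invoke, like the other steps). The volume and share names are the ones used in the compose file:

```shell
# Assemble the `docker volume create` call that the cloudstor plugin
# would serve on the VM. Printed only, for illustration; in our setup
# the volume is declared in docker-compose.yml instead.
volumeName='esdata_azure'
shareName='logs'

cmd="docker volume create -d cloudstor:azure -o share=$shareName $volumeName"
echo "$cmd"
```

In our deployment we do not run this command manually: declaring the volume in the Docker compose file (next section) achieves the same result.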


Docker compose file update

In the Docker compose file, we need to set up the volume which will use the cloudstor plugin.


version: '3'

services:
 elasticsearch:
  build: ./elasticsearch
  restart: always
  container_name: elasticsearch
  environment:
   - "discovery.type=single-node"
  networks:
   - net
  volumes:
   - esdata_azure:/usr/share/elasticsearch/data

 kibana:
  build: ./kibana
  restart: always
  container_name: kibana
  environment:
   SERVER_NAME: kibana
   ELASTICSEARCH_URL: http://elasticsearch:9200
  networks:
   - net

 proxy:
  build: ./nginx
  restart: always
  container_name: proxy
  ports:
   - "80:80"
  networks:
   - net

volumes:
 esdata_azure:
  driver: cloudstor:azure
  driver_opts:
   share: "logs"

networks:
 net:
First we add a volume named esdata_azure. This volume uses the cloudstor driver and points to a share named « logs » (« logs » must match the name of the Azure file share created by the create_AzureStorageAndFileShare function, i.e. the value of storageAccountFileShareName).

Then, at the elasticsearch service level, we add a volume mapping: the container folder /usr/share/elasticsearch/data is mapped onto the volume named esdata_azure.


Final Script

Here is the new version of the deployment script, using the new functions create_AzureStorageAndFileShare and install_CloudstorePlugin.


#!/bin/bash

rgName=''
vmName=''
vmUser=''
vmLocation=''
vmOpenPort=80
storageAccountName=""
storageAccountFileShareName=""
passwordHash=''

echo write the basic auth credentials in nginx file
echo $passwordHash > ./log_server/nginx/.htpasswd

create_AzureVm(){
 echo -- create the resource group
 az group create --name $rgName --location $vmLocation

 echo -- create the vm
 az vm create --resource-group $rgName --name $vmName --public-ip-address-dns-name $vmName --image UbuntuLTS --admin-username $vmUser --generate-ssh-keys

 echo -- open port $vmOpenPort on the vm
 az vm open-port --port $vmOpenPort --resource-group $rgName --name $vmName
}

install_DockerTools (){
 echo install dependencies docker, docker compose and start docker service
 #install Docker
 az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && sudo add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable' && sudo apt-get update && apt-cache policy docker-ce && sudo apt-get install -y docker-ce"
 #install Docker Compose
 az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "sudo curl -L https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose"
 #start Docker service
 az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "sudo service docker start"
}

install_CloudstorePlugin (){
 echo install cloudstor plugin
 az vm run-command invoke -g $rgName -n $vmName --command-id RunShellScript --scripts "sudo docker plugin install docker4x/cloudstor:17.05.0-ce-azure2 --grant-all-permissions --alias cloudstor:azure CLOUD_PLATFORM=AZURE AZURE_STORAGE_ACCOUNT="$storageAccountName" AZURE_STORAGE_ACCOUNT_KEY="$storagePrimaryKey""
}

create_AzureStorageAndFileShare(){
 echo -- create the storage account
 az storage account create -n $storageAccountName -g $rgName -l $vmLocation --sku Standard_LRS
 storageConnexionString=$(az storage account show-connection-string -n $storageAccountName -g $rgName --query 'connectionString' -o tsv)
 storagePrimaryKey=$(az storage account keys list --resource-group $rgName --account-name $storageAccountName --query "[0].value" | tr -d '"')

 echo -- create the file share
 az storage share create --name $storageAccountFileShareName --connection-string $storageConnexionString
}

run_LogServer(){
 echo copy 'log_server' folder content in the remote vm
 scp -o StrictHostKeyChecking=no -r ./log_server $vmUser@$vmName.$vmLocation.cloudapp.azure.com:/home/$vmUser/log_server

 echo run 'docker-compose' file
 az vm run-command invoke --debug -g $rgName -n $vmName --command-id RunShellScript --scripts "cd /home/"$vmUser"/log_server && sudo docker-compose up -d"
}

create_AzureVm
install_DockerTools
create_AzureStorageAndFileShare
install_CloudstorePlugin
run_LogServer

echo log server available on $vmName.$vmLocation.cloudapp.azure.com

#clear [.htpasswd] file
echo "" > ./log_server/nginx/.htpasswd


We can now deploy a containerized log server that stores its Elasticsearch data files in an Azure File Share! The last step before having a « business ready » solution is to manage an SSL endpoint on the Azure Virtual Machine.

The updated source files are available on my GitHub.

Happy coding 🙂


