Azure offers many ways to build and run your microservices: Service Fabric, Azure Kubernetes Service, App Services, Azure Spring Cloud, and even Docker Enterprise or Apache Mesos running in IaaS mode on Azure. The choice of compute depends on many factors, including your team's familiarity with these services, the market hype (especially in the case of Kubernetes), compatibility with the existing technology stack, and the effort required to migrate to the new service. Without trying to compare these compute options in Azure, in this article we focus on running your microservices "securely" in Azure Kubernetes Service. To make it more readable, I plan to break this into a series of steps and articles:
- Creating a Spring Boot App locally and working with Azure SQL Database
- Deploying the Initial Infrastructure on Azure using Terraform and Azure DevOps
- Using AKS Managed Azure AD Integration instead of Service Principals for Cluster Identity and using Azure AD Pod Identity to associate Managed Identities with Pods.
- Dockerizing the Spring Boot app and deploying it on AKS using Azure DevOps pipelines.
- Using Key Vault to manage application secrets instead of Base64-encoded Kubernetes secrets.
- Securing the communication between PaaS services and the AKS cluster using Private Link for Key Vault, Azure SQL, and Container Registry.
- Using the Azure Policy Add-on for AKS to enforce basic governance (no privileged pods, only allow images from a specific ACR, etc.)
- Using Kubernetes network policies to restrict ingress and egress traffic between namespaces or between pods and services.
- Using a Web Application Firewall (WAF) with Application Gateway, Front Door, or API Management to protect against common exploits such as XSS, SQL injection, and CSRF.
There are many more things you can do to secure your deployment that are NOT covered in this series, at least initially (but you should consider them for your use case):
- Using a private AKS cluster and deploying Bastion and a jumpbox to access it. You would then have to use self-hosted agents instead of Microsoft-hosted Azure DevOps agents to deploy to the private AKS cluster.
- Using a firewall (such as Azure Firewall or another NVA) to control inbound and outbound filtering and to detect and block malicious traffic. We typically recommend deploying the firewall in the hub VNet of a hub-spoke topology, where it can centrally exert control over multiple spoke VNets.
- Azure Confidential Computing: AKS now supports running confidential computing nodes (using Intel SGX processors) to protect your data while it is in use.
- Node security using Kured: managing and maintaining the data-plane nodes (VMs) is a shared responsibility. While the Azure platform automatically applies OS security patches, it cannot and should not restart your VMs (which is sometimes required), so it is the customer's responsibility to monitor and restart these nodes as and when required. An open-source solution like Kured really helps here.
- Scanning container images for vulnerabilities in your DevOps pipeline before using them, or enabling Azure Defender for container registries, which can notify you when it finds issues with your images.
Let's Get Started
Step 1: Local Spring Boot App + Azure SQL
To begin quickly, we will use the Spring Boot SQL Quickstart as a starting point, which gives you a Spring Boot app running locally and accessing SQL Server in Azure. Once you have that running, let's go to the next step.
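For reference, the quickstart essentially points Spring's datasource at your Azure SQL instance; roughly like this in `application.properties` (the server, database, and credentials are placeholders, not values from the quickstart itself):

```properties
# src/main/resources/application.properties
spring.datasource.url=jdbc:sqlserver://<your-server>.database.windows.net:1433;database=<your-db>;encrypt=true;loginTimeout=30;
spring.datasource.username=<your-admin-login>
spring.datasource.password=<your-password>
spring.jpa.hibernate.ddl-auto=update
```

Spring Boot's relaxed binding means these can later be overridden via the `SPRING_DATASOURCE_URL`, `SPRING_DATASOURCE_USERNAME`, and `SPRING_DATASOURCE_PASSWORD` environment variables, which we will rely on when containerizing the app.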
Step 2: Deploy Initial Infrastructure in Azure
Our initial infrastructure in Azure consists of:
- A Virtual Network with just one subnet to host our AKS cluster (yes, we will add more subnets subsequently in this VNet)
- A very basic AKS cluster with the Azure CNI network plugin enabled instead of the default kubenet plugin, using AKS-managed Azure AD instead of the legacy Azure AD integration, plus an instance of Azure Container Registry to hold the Docker images for our microservices (a minimal Terraform sketch follows this list).
- A SQL server, a serverless SQL database, and a storage account to hold the audit logs.
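To make the shape of this concrete, here is a trimmed sketch of the cluster definition using the azurerm 2.x provider schema; resource names, sizes, and address ranges are illustrative, not the repo's actual values:

```hcl
variable "aks_admin_group_id" {
  description = "Object ID of the AAD group whose members administer the cluster"
  type        = string
}

resource "azurerm_resource_group" "rg" {
  name     = "spring-aks-rg"
  location = "westeurope"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "aks-vnet"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.1.0.0/16"]
}

resource "azurerm_subnet" "aks" {
  name                 = "aks-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.1.0.0/22"]
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "spring-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "spring-aks"

  default_node_pool {
    name           = "default"
    node_count     = 2
    vm_size        = "Standard_DS2_v2"
    vnet_subnet_id = azurerm_subnet.aks.id
  }

  identity {
    type = "SystemAssigned"
  }

  # Azure CNI network plugin instead of the default kubenet
  network_profile {
    network_plugin = "azure"
  }

  # AKS-managed Azure AD integration (no server/client app registrations needed)
  role_based_access_control {
    enabled = true
    azure_active_directory {
      managed                = true
      admin_group_object_ids = [var.aks_admin_group_id]
    }
  }
}
```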

All the Terraform code for creating the above is located here. As a best practice, we will use Azure Storage as the remote backend for storing the Terraform state. I have a build pipeline that creates a resource group, a storage account to hold our Terraform state files, and a key vault to hold the access keys of the storage account.
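That setup pipeline essentially boils down to a handful of CLI calls; a sketch, with placeholder resource names and my assumed secret name `tf-state-access-key`:

```bash
#!/usr/bin/env bash
# One-time setup of the Terraform remote backend resources
RG=tfstate-rg
LOCATION=westeurope
TF_STATE_STORAGE_ACCOUNT_NAME=tfstate$RANDOM     # must be globally unique
KEYVAULT_NAME=tfstate-kv-$RANDOM                 # must be globally unique

az group create --name $RG --location $LOCATION
az storage account create --name $TF_STATE_STORAGE_ACCOUNT_NAME \
  --resource-group $RG --sku Standard_LRS --kind StorageV2

# Fetch the account key and create the container that will hold the state files
ACCESS_KEY=$(az storage account keys list --account-name $TF_STATE_STORAGE_ACCOUNT_NAME \
  --query '[0].value' -o tsv)
az storage container create --name tfstate \
  --account-name $TF_STATE_STORAGE_ACCOUNT_NAME --account-key $ACCESS_KEY

# Stash the key in a key vault so pipelines never handle it in plain text
az keyvault create --name $KEYVAULT_NAME --resource-group $RG --location $LOCATION
az keyvault secret set --vault-name $KEYVAULT_NAME \
  --name tf-state-access-key --value $ACCESS_KEY
```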
Note that this storage account (`TF_STATE_STORAGE_ACCOUNT_NAME`) and key vault (`KEYVAULT_NAME`) are different from the ones that will be used by the application. For more details, refer to this excellent blog by Julien.
I will be using Azure DevOps for the build & release pipelines for this project, and everything (including the initial infrastructure) will be deployed via those pipelines. There were several learnings for me while using Azure DevOps to run this simple Terraform template that I want to highlight:
1. Using the AZ CLI task to run Terraform

I used the following code snippet to deploy the Terraform code via the Azure CLI task.
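A minimal sketch of such a task, assuming a PowerShell Core inline script; the service connection name, secret name, container, and state key are my own placeholder choices:

```yaml
- task: AzureCLI@2
  displayName: terraform init / plan / apply
  inputs:
    azureSubscription: my-azure-service-connection   # assumption: your service connection
    scriptType: pscore
    workingDirectory: $(System.DefaultWorkingDirectory)/terraform
    addSpnToEnvironment: true   # exposes the SPN of the service connection below
    scriptLocation: inlineScript
    inlineScript: |
      # Map the service connection's SPN onto the env vars the azurerm provider expects
      $env:ARM_CLIENT_ID       = $env:servicePrincipalId
      $env:ARM_CLIENT_SECRET   = $env:servicePrincipalKey
      $env:ARM_TENANT_ID       = $env:tenantId
      $env:ARM_SUBSCRIPTION_ID = (az account show --query id -o tsv)
      # Backend storage access key, fetched from the key vault created earlier
      $env:ARM_ACCESS_KEY = (az keyvault secret show --vault-name $(KEYVAULT_NAME) `
        --name tf-state-access-key --query value -o tsv)

      terraform init `
        -backend-config="storage_account_name=$(TF_STATE_STORAGE_ACCOUNT_NAME)" `
        -backend-config="container_name=tfstate" `
        -backend-config="key=infra.tfstate"
      terraform plan -out tfplan
      terraform apply -auto-approve tfplan
```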
Apart from the regular stuff, attaching the service connection via `azureSubscription` and changing the `workingDirectory` to point to the directory containing the Terraform code, pay special attention to `addSpnToEnvironment: true`. This adds the service principal and key of the service connection to the script execution environment, which you can access via `$env:servicePrincipalId`, `$env:servicePrincipalKey`, and `$env:tenantId`; we use these in the above snippet to configure the environment variables required to execute Terraform `init`/`plan`/`apply`/`destroy`.

The `terraform init` command uses the remote backend and specifies the `storage account` name, the `container`, and the `key` to store the state. For Terraform to be able to access the storage account, either use the access key associated with the account or a SAS token. I have stored the access key of the storage account in a key vault, which I'm using above, but you can also use a variable group in Azure DevOps, associate it with the key vault to fetch secrets, and make them available as environment variables.
2. Overriding the Terraform variables via Azure DevOps variables
Now, we have used some variables in our Terraform code to hold the SQL server and SQL database names and login details. Also, before we deploy this Terraform code, we must create an Azure AD group for AKS cluster administrators and assign its objectId to the Terraform variable `aks_admin_group_id`; members of this AAD group will have administrator access to our cluster. The CLI sketch below shows this step.
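A sketch with the Azure CLI; the group name is my own choice, and note that older CLI versions expose the id as `objectId` while newer ones return it as `id`:

```bash
# Create the AAD group for AKS administrators (name is illustrative)
az ad group create --display-name aks-admins --mail-nickname aks-admins

# Grab its object id to feed into TF_VAR_aks_admin_group_id
az ad group show --group aks-admins --query objectId -o tsv
```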

Some of these values are secrets and should reside in a key vault rather than in configuration, so I have created a variable group in Azure DevOps and linked it to the Key Vault containing the secret values.

Variables and secrets from these groups can be referenced in your tasks with the `$(variableName)` macro syntax, so you just substitute the values into Terraform variables by prepending `TF_VAR_` to the variable name: if your variable in Terraform is named `sql_server_admin_login`, you can override it by setting `TF_VAR_sql_server_admin_login`, as we do before running `terraform plan`.
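A hedged sketch of that part of the pipeline, reusing the Azure CLI task from before; the group names, variable names, and service connection are placeholders:

```yaml
variables:
# Reusable variable groups: one linked to Key Vault for secrets, one for plain settings
- group: infra-secrets      # assumption: holds sql-server-admin-login / sql-server-admin-password
- group: infra-settings     # assumption: holds aks-admin-group-id and other non-secrets

steps:
- task: AzureCLI@2
  displayName: terraform plan & apply
  inputs:
    azureSubscription: my-azure-service-connection   # assumption
    scriptType: pscore
    workingDirectory: $(System.DefaultWorkingDirectory)/terraform
    addSpnToEnvironment: true
    inlineScript: |
      # Expose pipeline variables to Terraform via the TF_VAR_ convention
      $env:TF_VAR_sql_server_admin_login    = "$(sql-server-admin-login)"
      $env:TF_VAR_sql_server_admin_password = "$(sql-server-admin-password)"
      $env:TF_VAR_aks_admin_group_id        = "$(aks-admin-group-id)"
      terraform plan -out tfplan
      terraform apply -auto-approve tfplan
```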
The first few lines specify multiple variable groups (secrets and non-secrets); it is always good to split your variables over multiple reusable groups that can be shared across related pipelines. The inline script then exposes the secrets and variables in those groups as Terraform variables following our convention `TF_VAR_<variable_name>`. Now we can execute the `plan` and `apply` phases of our pipeline and deploy the infrastructure in Azure.

Go to the portal and confirm that your infrastructure has been deployed correctly.
Step 3: Dockerize the Spring Boot App
Now that we have the required infra in Azure, let's start by adding a Dockerfile to our project. We will use the multi-stage build approach with Spring Boot's layered-jar mode, extracting the different layers into our image instead of using a single fat jar.
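A sketch of such a Dockerfile, assuming a Java 11 base image and that your build has already produced `target/*.jar` (this mirrors the layered-jar recipe from the Spring Boot docs):

```dockerfile
# Stage 1: extract the layered jar produced by Spring Boot's layertools
FROM adoptopenjdk:11-jre-hotspot as builder
WORKDIR application
COPY target/*.jar application.jar
RUN java -Djarmode=layertools -jar application.jar extract

# Stage 2: copy each layer separately so rarely-changing layers stay cached
FROM adoptopenjdk:11-jre-hotspot
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
EXPOSE 8080
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```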
You can test it locally using:

```bash
docker build . -t agrajm/spring-boot-aks:v1.0
```
This will build it locally and create a local image for you that you can run as follows:
```bash
docker run -it -p 8080:8080 \
  -e SPRING_DATASOURCE_URL="<Your_DB_Conn_String>" \
  -e SPRING_DATASOURCE_USERNAME="<Your_DB_UserName>" \
  -e SPRING_DATASOURCE_PASSWORD="<Your_DB_Password>" \
  agrajm/spring-boot-aks:v1.0
```
Step 4: Deploy to AKS. Enter kubelogin for DevOps!
Let's first try to deploy to our AKS cluster from the local machine; we need to authenticate against it first. To do so, type:

```bash
az aks get-credentials --resource-group <RG_NAME> --name <AKS_CLUSTER_NAME>
```
This will create or update your `KUBECONFIG` with details on how to authenticate against the AKS cluster. My config currently looks something like this:
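Trimmed down, the user entry for an AKS-managed Azure AD cluster uses the `azure` auth-provider; the `apiserver-id` and `client-id` below are the well-known AKS AAD application IDs, while the cluster name and tenant are placeholders:

```yaml
users:
- name: clusterUser_my-rg_my-aks
  user:
    auth-provider:
      name: azure
      config:
        apiserver-id: 6dae42f8-4368-4678-94ff-3960e28e3630   # AKS AAD server app
        client-id: 80faf920-1908-4b52-b5ef-a8e7bedfc67a      # AKS AAD client app
        config-mode: "1"
        environment: AzurePublicCloud
        tenant-id: 00000000-0000-0000-0000-000000000000      # placeholder
```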

When I try to get the nodes of this cluster using `kubectl`, it asks me to authenticate with my Azure AD instance, as our cluster is using AKS-managed Azure AD integration. Once I open a new tab, complete the devicelogin flow by entering that code and my credentials, it issues me a temporary access & refresh token, which is updated in the same `~/.kube/config`, as can be seen below.
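The prompt looks like this (the code shown is a placeholder):

```text
$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin
and enter the code XXXXXXXXX to authenticate.
```

After the login completes, kubectl writes the resulting `access-token` and `refresh-token` fields under the same `auth-provider` config in `~/.kube/config`.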
Now we can deploy our application to AKS using `kubectl apply`, but first we must create the deployment YAML for our Spring Boot app. Let's use a deployment config along the lines below.
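A minimal sketch; the ACR login server, image tag, secret name, and keys are assumptions for illustration (the secret would be created beforehand, e.g. with `kubectl create secret generic`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-aks
  labels:
    app: spring-boot-aks
spec:
  replicas: 2
  selector:
    matchLabels:
      app: spring-boot-aks
  template:
    metadata:
      labels:
        app: spring-boot-aks
    spec:
      containers:
      - name: spring-boot-aks
        image: myacr.azurecr.io/spring-boot-aks:v1.0   # assumption: your ACR login server
        ports:
        - containerPort: 8080
        env:
        # Database settings injected from a Kubernetes secret (moved to Key Vault later in the series)
        - name: SPRING_DATASOURCE_URL
          valueFrom:
            secretKeyRef:
              name: sql-secrets
              key: datasource-url
        - name: SPRING_DATASOURCE_USERNAME
          valueFrom:
            secretKeyRef:
              name: sql-secrets
              key: datasource-username
        - name: SPRING_DATASOURCE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sql-secrets
              key: datasource-password
```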
Now that you are authenticated to the cluster, you can simply apply the changes using kubectl:

```bash
kubectl apply -f deploy/spring-boot-deployment.yaml
```
This works because my currently logged-in user is already part of the AAD group that is the admin group for AKS. But how will this work with your pipelines? You cannot respond to the devicelogin challenge from a pipeline. Enter kubelogin!

Even with an AAD-managed AKS cluster, kubelogin allows us to do a non-interactive login using a service principal or, in the latest release, even using the Azure CLI token, making it ideal for CI/CD scenarios. We will be using the latter option. So the service principal associated with our Azure CLI task must also be added to the same AAD group to be able to authenticate with the cluster and deploy the app to it.
Also, since `kubelogin` is not installed by default on the latest Ubuntu images that my Microsoft-hosted build agents use, I have to install it first before I can use it. Here is a snippet from the pipeline code that makes use of it.
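A sketch of that part of the pipeline, assuming a Linux hosted agent; the pinned kubelogin release, service connection, and variable names are my own choices:

```yaml
- task: AzureCLI@2
  displayName: Deploy to AKS
  inputs:
    azureSubscription: my-azure-service-connection   # assumption
    scriptType: bash
    inlineScript: |
      # Install kubelogin (not preinstalled on the hosted Ubuntu images)
      curl -sSL -o kubelogin.zip \
        https://github.com/Azure/kubelogin/releases/download/v0.0.9/kubelogin-linux-amd64.zip
      unzip -o kubelogin.zip -d kubelogin-bin
      sudo mv kubelogin-bin/bin/linux_amd64/kubelogin /usr/local/bin/

      # Fetch cluster credentials, then rewrite kubeconfig to use the Azure CLI token
      az aks get-credentials --resource-group $(RG_NAME) --name $(AKS_CLUSTER_NAME)
      kubelogin convert-kubeconfig -l azurecli

      kubectl apply -f deploy/spring-boot-deployment.yaml
```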
The main command is `kubelogin convert-kubeconfig -l azurecli`, which lets you use the underlying token of the service principal used by the Azure CLI task in your pipeline. You must add that SP to your AAD group, as I have done for my AKS admin AAD group.

Now the AZDO pipeline should be able to authenticate & deploy the Spring Boot Application to your AKS cluster.

Next Steps
So far, we have a working Spring Boot app deployed in AKS, talking to an Azure SQL database. But this is just the tip of the iceberg, and we still need to put in all the security controls I promised at the start. We'll continue this journey of learning together how to secure your microservices (Spring Boot & otherwise) on AKS in the next set of articles. Till then, happy hacking!
This blog is part of Azure Week. Check it out for more great content!
Reference:
Mangal, A. (2021). Running Your Microservices Securely on AKS. Available at: https://itnext.io/running-your-microservices-securely-on-aks-417a110b2e76