TeamCity to Containerize and Deploy to Azure Kubernetes (Part II)

Continuing from our previous post, where we created build steps to build Docker images and publish them to Azure Container Registry (ACR), this post focuses on using the TeamCity CD process to pull those images from ACR and deploy them to an Azure Kubernetes Service (AKS) cluster. We will hit several challenges along the way and see how to resolve them.


Manually deploy container images from ACR to AKS

Before we extend our build definition in TeamCity to deploy container images from ACR to AKS, it is highly recommended to first perform a manual deployment to confirm that we are on the right track. We can then take those steps and fold them into our TeamCity build steps.

Our AKS resource, named dockerdemoaks, has already been created. Let us walk through the deployment step by step and highlight the issues we might encounter.

Step #1:

Open Windows PowerShell as Administrator and run the command below to authenticate against your Azure subscription. If you do not have the Azure CLI installed, install it before executing these scripts, since all of the following az commands depend on it.


az login




Step #2:

Once logged in, run the following command, which stores the credentials needed to work with the Kubernetes cluster in your local kubeconfig file.


az aks get-credentials -g {Resource Group Name} -n {Kubernetes Service Name}




Step #3:

ACR is a private registry. Hence, for AKS to establish a successful connection to ACR, we need to create a secret that authorizes Kubernetes to pull the image from ACR and deploy the container to AKS.

For that, run the following command.


kubectl create secret docker-registry regsecret --docker-server={Container Registry host} --docker-username={Application ID} --docker-password={Secret Password} --docker-email={your email}


Here, the Application ID and secret password come from the app registered under Azure Active Directory.
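As a sketch, here is how those placeholders expand into a concrete command. All values below are hypothetical, including the registry host and Application ID; substitute your own before running it.

```shell
# Hypothetical values -- replace with your registry host, the Application ID
# and secret of your Azure AD app registration, and your email address.
ACR_SERVER="dockerdemoacr.azurecr.io"
APP_ID="00000000-0000-0000-0000-000000000000"
APP_SECRET="<your-client-secret>"
EMAIL="you@example.com"

# Print the fully expanded command so the substitutions are easy to verify
# before actually running it against the cluster.
echo kubectl create secret docker-registry regsecret \
  --docker-server="$ACR_SERVER" \
  --docker-username="$APP_ID" \
  --docker-password="$APP_SECRET" \
  --docker-email="$EMAIL"
```

Printing the command first is optional, but it makes it easy to spot a mis-pasted secret before it is stored in the cluster.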



Verify that the secret has been created using the following command:


kubectl get secret regsecret --output=yaml



Step #4:

Now, when you browse the Kubernetes dashboard using the command below, you should be able to see the secret. However, if RBAC is enabled on the AKS cluster, you might run into issues viewing the dashboard.


az aks browse --name {AKS Cluster name} --resource-group {resource group name}



Note: The next time you want to browse the Kubernetes dashboard, run these two commands:


1. az login

2. az aks browse --name {AKS Cluster Name} --resource-group {Resource Group Name}

Issues found with Kubernetes

You might encounter RBAC access errors on the dashboard page. By default, the AKS dashboard has minimal read access, which is why these errors appear. A ClusterRoleBinding must be created in order to access the dashboard.

To resolve this issue, first run the following command to get the AKS credentials:


az aks get-credentials --resource-group {Resource Group Name} --name {AKS Cluster Name}



Use the command below to review the nodes:


kubectl get nodes



Use the command below to create the ClusterRoleBinding:


kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard



Now when you browse the AKS dashboard, you will find these issues resolved.


az aks browse --name {AKS Cluster Name} --resource-group {Resource Group Name}





Since we created the secret earlier, a close look at the Kubernetes dashboard shows it registered there.



Step #5:

If you are new to AKS: a Kubernetes manifest file must be created for the deployment to AKS. This YAML manifest contains information such as the ACR path from which the container image will be pulled, the AKS load balancer port, the OS the pod should run on, and the registry secret that authorizes AKS to access ACR. Below is a sample of the manifest file content; create the file if you haven't done so already.
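A minimal manifest along these lines might look like the following. This is a sketch: the registry host (dockerdemoacr.azurecr.io), image name, and labels are assumed values, and regsecret is the secret created in Step #3.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnet35dockerdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet35dockerdemo
  template:
    metadata:
      labels:
        app: aspnet35dockerdemo
    spec:
      containers:
      - name: aspnet35dockerdemo
        # Assumed image path in ACR -- substitute your own registry and tag.
        image: dockerdemoacr.azurecr.io/aspnet35dockerdemo:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      # Secret created in Step #3, authorizing the pull from ACR.
      - name: regsecret
---
apiVersion: v1
kind: Service
metadata:
  name: aspnet35dockerdemo
spec:
  # Exposes the app through the AKS load balancer on port 80.
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: aspnet35dockerdemo
```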




Step #6:

Next, we execute the command that applies the Kubernetes manifest and deploys a pod for the container. Change to the folder containing the file and run the command below.


kubectl create -f aspnet35dockerdemo.yaml

Browsing the AKS dashboard using the command below, we can see that some issues have been logged.


az aks browse --name dockerdemoaks --resource-group dockerdemo-rg




The error makes it clear that AKS is not authorized to pull the image from ACR, so we need to follow a few steps to grant that authorization.


Step #7:

Let's generate the secret that authorizes AKS to pull images from ACR. Follow these commands step by step.

Get the Client ID of the service principal configured for AKS:

az aks show --resource-group {Resource Group Name} --name {AKS Cluster Name} --query "servicePrincipalProfile.clientId" --output tsv


Get the Resource ID of the ACR registry:

az acr show --name {ACR Registry Name} --resource-group {Resource Group Name} --query "id" --output tsv


Create the role assignment:

az role assignment create --assignee {Client ID generated} --role acrpull --scope {Resource ID generated}


Get the ACR login server:

az acr show --name {ACR Registry Name} --query loginServer --output tsv


Create a service principal with an AcrPull role assignment scoped to the ACR resource (this outputs the service principal's password):

az ad sp create-for-rbac --name {Service Principal Name} --role acrpull --scopes {Resource ID of ACR Registry generated} --query password --output tsv


Get the Application ID of that service principal:

az ad sp show --id http://{Service Principal Name} --query appId --output tsv


Create the Kubernetes secret:

kubectl create secret docker-registry acr-auth --docker-server {ACR Login Server} --docker-username {Application ID generated} --docker-password {Password generated by create-for-rbac} --docker-email {valid email address}



Once done, update your Kubernetes manifest file with the new secret name, acr-auth.
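In the pod spec, the imagePullSecrets entry is what changes; as a sketch (image path assumed, as before):

```yaml
    spec:
      containers:
      - name: aspnet35dockerdemo
        image: dockerdemoacr.azurecr.io/aspnet35dockerdemo:latest
      imagePullSecrets:
      # Replaces regsecret with the service-principal-backed secret.
      - name: acr-auth
```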



Step #8:

Browse to the Kubernetes dashboard:

az aks browse --name {AKS Cluster Name} --resource-group {Resource Group Name}

Delete the deployment and the pod from the Kubernetes Dashboard.


Run the command to apply the Kubernetes manifest YAML file:

kubectl create -f aspnet35dockerdemo.yaml

Browse the AKS dashboard again.


This time we hit another issue, stating that the Windows OS cannot be used on this platform.


Resolution for unmatched OS platform

Update your manifest file so that the nodeSelector targets Windows; the nodeSelector tells the scheduler which nodes the pod may run on.
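As a sketch, the pod template's spec gains a nodeSelector like this (on clusters of that era the label was beta.kubernetes.io/os; newer clusters use kubernetes.io/os):

```yaml
    spec:
      nodeSelector:
        # Constrains scheduling to Windows nodes only.
        "beta.kubernetes.io/os": windows
      containers:
      - name: aspnet35dockerdemo
        image: dockerdemoacr.azurecr.io/aspnet35dockerdemo:latest
```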


After applying it, we hit a different problem: the scheduler reports that no node matches the selector.

The problem here is that, at the time of writing, AKS supports only Linux containers. Our .NET application needs to run on Windows Server-based nodes, which AKS does not yet offer. The Microsoft documentation suggests using Virtual Kubelet to schedule Windows containers on Azure Container Instances and manage them as part of the AKS cluster.

It seems this is not going to be a straightforward solution, so we will follow the documentation provided there.


A few highlights of what we have done so far


1. Created build steps to build Docker images from a Dockerfile and publish them to ACR

2. Deployed the MVC app to AKS by pulling the images from ACR

Issues Encountered

1. Unable to run the deployed container, as AKS supports only Linux containers while the application needs to run in a Windows container

Pending Steps

1. Successfully deploy the MVC app to AKS in a Windows container

2. Create the remaining build steps in TeamCity to perform the continuous deployment process.


Fix the issues with AKS and complete the deployment stage of the MVC app

The major issue we found is that the deployment fails because the pod is scheduled on a Linux-based node. Our Kubernetes manifest specifies that the image must run in a Windows Server container, but unfortunately AKS does not currently support Windows Server-based nodes.


We have to use Virtual Kubelet, which can schedule both Linux and Windows containers on Azure Container Instances managed as part of the AKS cluster. Let us follow the steps to set that up.


Step #1:

Two important things to understand here are Helm and Tiller. Helm is a tool that streamlines the installation and management of Kubernetes applications, while Tiller runs inside the Kubernetes cluster and manages installations of your Helm charts (collections of files that describe a related set of Kubernetes resources).

We first need to install Helm on the machine using the command below:


choco install kubernetes-helm

Step #2:

Create a service account and role binding for Tiller by creating the rbac-virtual-kubelet.yaml file below and running kubectl apply against it.
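The file's contents are not shown in the original post; the standard Tiller RBAC manifest from the Helm v2 documentation, which this file presumably mirrors, looks like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
# Grants the tiller service account cluster-admin so it can
# install charts anywhere in the cluster.
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```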



kubectl apply -f rbac-virtual-kubelet.yaml

Step #3:

Configure Helm to use the Tiller service account:


helm init --service-account tiller

Step #4:

Install Virtual Kubelet


az aks install-connector --resource-group {Resource group name} --name {AKS Cluster name} --connector-name virtual-kubelet --location eastus --os-type Both



Step #5:

Verify that both the Linux and Windows nodes have been created:


kubectl get nodes



If you want to delete all pods, the command is kubectl delete --all pods.
