TeamCity to containerize and deploy to Azure Kubernetes (Part II)

Continuing from our previous post, where we created build steps to build Docker images and publish them to Azure Container Registry (ACR), this post focuses on using the TeamCity CD process to pull those images from ACR and deploy them to an Azure Kubernetes Service (AKS) cluster. We will hit a number of challenges along the way and see how to resolve them.

 

Manually deploy container images from ACR to AKS in TeamCity

Before we extend our build definition in TeamCity to deploy container images from ACR to AKS, it is highly recommended to first perform a manual deployment to confirm that we are on the right track. We can then take those steps and turn them into TeamCity build steps.

Our AKS resource has already been created and is named dockerdemoaks. Let us walk through the deployment process step by step and highlight the issues we might encounter.

Step #1:

Open Windows PowerShell as Administrator and run the command below to authenticate against your Azure subscription. If you do not have the Azure CLI installed, install it before executing these scripts, since this walkthrough relies on the Azure CLI throughout.

PowerShell

az login

 

image_thumb70

 

Step #2:

Once done, run the following command, which stores the credentials and other essentials needed to work with the Kubernetes cluster in a config file.

PowerShell

az aks get-credentials -g {Resource Group Name} -n {Kubernetes Service Name}

 

image_thumb72

 

Step #3:

ACR is a private registry. Hence, for AKS to establish a connection to ACR, we need to create a secret that authorizes kubectl to pull the image from ACR and deploy the container to AKS.

To create it, run the following command.

PowerShell

kubectl create secret docker-registry regsecret --docker-server={Container Registry host} --docker-username={Application ID} --docker-password={Secret Password} --docker-email={your email}

image_thumb82

Here, the Application ID and Secret Password can be taken from the app registered under Azure Active Directory.

image_thumb81

 

Verify that the secret has been created using the following command:

PowerShell

kubectl get secret regsecret --output=yaml

image_thumb78

 

Step #4:

Now, when you browse the AKS dashboard using the command below, you should be able to see the secret. However, if you have RBAC enabled for AKS, you might encounter issues viewing the dashboard.

PowerShell

az aks browse --name {AKS Cluster name} --resource-group {resource group name}

image_thumb85

 

Note: The next time you want to browse the Kubernetes dashboard, run these two commands:

PowerShell

1. az login

2. az aks browse --name {AKS Cluster Name} --resource-group {Resource Group Name}

Issues found with Kubernetes

You might have encountered the issues highlighted in the image above on the dashboard page. By default, the AKS dashboard has minimal read access, which is why you see these RBAC access errors. A ClusterRoleBinding must be created in order to access the dashboard.

To resolve this issue, first run the following command to get the AKS credentials:

PowerShell

az aks get-credentials --resource-group {Resource Group Name} --name {AKS Cluster Name}

image_thumb2

 

Use the command below to review the nodes:

PowerShell

kubectl get nodes

image_thumb8[1]

 

Use the below command to create the ClusterRoleBinding

PowerShell

kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

image_thumb12[1]

 

Now, when you browse the AKS cluster, you will find all of these issues resolved.

PowerShell

az aks browse --name {AKS Cluster Name} --resource-group {Resource Group Name}

image_thumb15

 

image_thumb18[1]

 

As we created the secret earlier, if you look closely at the Kubernetes dashboard, you can find it registered there.

image_thumb21

 

Step #5:

If you are new to AKS: a Kubernetes manifest file needs to be created for the deployment to AKS. This YAML manifest contains information such as the ACR path from which the container image will be pulled, the AKS load balancer port, the OS of the AKS nodes, and the registry secret that authorizes AKS to access ACR. Below is a sample manifest. Create the file if you haven't done so already.

 

image_thumb88
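Since the screenshot above doesn't reproduce here, the following is a minimal sketch of what such a manifest might look like. The image path, app name, and port are illustrative assumptions, and regsecret is the pull secret created in Step #3:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnet35dockerdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet35dockerdemo
  template:
    metadata:
      labels:
        app: aspnet35dockerdemo
    spec:
      containers:
      - name: aspnet35dockerdemo
        # Image pulled from the private ACR instance
        image: dockerinstance.azurecr.io/aspnet35dockerdemo:latest
        ports:
        - containerPort: 80
      # Secret that authorizes AKS to pull from ACR
      imagePullSecrets:
      - name: regsecret
---
apiVersion: v1
kind: Service
metadata:
  name: aspnet35dockerdemo
spec:
  # Public load balancer exposing the pod on port 80
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: aspnet35dockerdemo
```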

 

Step #6:

Here we are going to run the Kubernetes manifest YAML file and deploy a pod for the container. Browse to the folder where the file resides and run the command below:

PowerShell

kubectl create -f aspnet35dockerdemo.yaml

Browsing the AKS dashboard using the command below, we see that some issues have been logged:

PowerShell

az aks browse --name dockerdemoaks --resource-group dockerdemo-rg

 

image_thumb24[1]

 

Based on the error, it is clear that AKS is not authorized to pull the image from ACR. So we need to follow some steps to grant that authorization.

 

Step #7:

Let's generate the secret that allows ACR to authorize AKS to pull images from it. Please follow the commands here step by step.

Get Client ID of Service Principal configured for AKS

az aks show --resource-group {Resource Group Name} --name {AKS Cluster Name} --query "servicePrincipalProfile.clientId" --output tsv

image_thumb28

Get Resource Id of ACR Registry

az acr show --name {ACR Registry Name} --resource-group {Resource Group Name} --query "id" --output tsv

image_thumb31

Create the role assignment

az role assignment create --assignee {Client ID generated} --role acrpull --scope {Resource ID generated}

image_thumb34

Get ACR Login Server

az acr show --name {ACR Registry Name} --query loginServer --output tsv

image_thumb37

Create a service principal with the acrpull role, scoped to the ACR resource

az ad sp create-for-rbac --name {any Service Principal Name} --role acrpull --scopes {Resource ID of the ACR Registry generated} --query password --output tsv

image_thumb40

Get the application ID of the service principal

az ad sp show --id http://{Service Principal Name} --query appId --output tsv

image_thumb43

Create Kubernetes Secret

kubectl create secret docker-registry acr-auth --docker-server {ACR Login Server} --docker-username {appId generated above} --docker-password {service principal password generated above} --docker-email {valid email address}

image_thumb46

 

Once done, update your Kubernetes manifest file with the secret. As you can see here, the secret generated is acr-auth.

image_thumb49
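The relevant portion of the manifest after the update might look like this sketch (container name and image path are illustrative):

```yaml
spec:
  containers:
  - name: aspnet35dockerdemo
    image: dockerinstance.azurecr.io/aspnet35dockerdemo:latest
  # Switch the pull secret to the one created for the service principal
  imagePullSecrets:
  - name: acr-auth
```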

 

Step #8:

Browse to the Kubernetes dashboard

az aks browse --name {AKS Cluster Name} --resource-group {Resource Group Name}

Delete the deployment and the pod from the Kubernetes Dashboard.

image_thumb53

Run the command to apply the Kubernetes manifest YAML file

kubectl create -f aspnet35dockerdemo.yaml

Browse the AKS dashboard.

image_thumb55

Well, we got another issue here, which states that the Windows OS cannot be used on this platform.

 

Resolution for unmatched OS platform

Update your manifest file with a nodeSelector set to Windows. The nodeSelector tells the Kubernetes scheduler which nodes the pod is allowed to run on, based on node labels.

image_thumb58
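The change is a nodeSelector entry in the pod spec; a sketch, using the node label key AKS exposed at the time (an assumption worth checking against your cluster's actual node labels with kubectl get nodes --show-labels):

```yaml
spec:
  template:
    spec:
      # Constrain the pod to Windows nodes only
      nodeSelector:
        "beta.kubernetes.io/os": windows
```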

After applying it, we found a different problem: the scheduler reports that no node matches the selector.

OK, the problem here is that, to date, AKS supports only Linux containers. Our .NET application needs to run on Windows Server-based nodes, and such nodes are not available in AKS at this time. What MSDN suggests is to use Virtual Kubelet to schedule Windows containers in Azure Container Instances and manage them as part of the AKS cluster.

Well, it seems this is not going to be a straightforward solution. We will follow the documentation provided here: https://docs.microsoft.com/en-us/azure/aks/virtual-kubelet.

 

A few highlights of what we have done so far

Achievements

1. Created build steps to build Docker images from a Dockerfile and publish them to ACR

2. Successfully deployed the MVC app to AKS by pulling the images from ACR

Issues Encountered

1. Unable to run the deployed container, as AKS supports only Linux containers while the application needs to run in a Windows container

Pending Steps

1. Successfully deploy the MVC app to AKS in a Windows container

2. Create the remaining build steps in TeamCity to complete the continuous deployment process.

 

Fix the issues with AKS and complete the deployment stage of MVC app

One of the major issues we found is that the deployment is failing because the pod is being scheduled on a Linux-based node. Our Kubernetes manifest specifies that the image must run in a Windows Server container, but unfortunately AKS does not currently support Windows Server-based nodes.

image

We have to use Virtual Kubelet, which can schedule both Linux and Windows containers on container instances managed as part of the AKS cluster. Let us follow the step-by-step instructions to do that.

 

Step #1:

Two important things we need to understand are Helm and Tiller. Helm is a tool that streamlines installation and management of Kubernetes applications, and Tiller runs inside the Kubernetes cluster and manages the installation of your Helm charts, which are collections of files that describe a related set of Kubernetes resources.

We need to install Helm in the system first using the below command

PowerShell

choco install kubernetes-helm

Step #2:

Create a service account and role binding for use with Tiller, using the rbac-virtual-kubelet.yaml file below and running the kubectl apply command against it.

image
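The screenshot shows the file content; the standard Tiller service-account RBAC manifest used for this purpose looks like the following:

```yaml
# Service account for Tiller in the kube-system namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
# Grant the tiller service account cluster-admin rights
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```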

PowerShell

kubectl apply -f rbac-virtual-kubelet.yaml

Step #3:

Configure Helm to use the Tiller service account:

PowerShell

helm init --service-account tiller

Step #4:

Install Virtual Kubelet

PowerShell

az aks install-connector --resource-group {Resource group name} --name {AKS Cluster name} --connector-name virtual-kubelet --location eastus --os-type Both

image

 

Step #5:

Verify that both the Linux and Windows nodes have been created:

PowerShell

kubectl get nodes

image

 

If you want to delete all pods, the command is kubectl delete pods --all.

TeamCity to containerize and deploy to Azure Kubernetes (Part I)

Folks, I had been thinking for a while about implementing a full CI/CD cycle to containerize a legacy .NET Framework 3.5 application with Docker and deploy it to an Azure Kubernetes cluster. While in one of my posts I used an Azure DevOps project to implement the same scenario, I encountered several issues with that approach. First of all, you have a limitation of 1800 build minutes on any of the hosted agents. Secondly, each build triggers the rigorous process of spinning up the hosted agent and installing the framework SDK and other prerequisites for a successful build. For any .NET Framework application, it is advisable to use the Hosted VS2017 agent, since it has VS 2017, Docker, Azure CLI, etc., installed as part of its configuration. This makes a single build take a significant amount of time. I am sure there is a way around this issue, but I didn't pursue it. Then I thought of running my own build agent, provisioning and using Azure TeamCity for CI/CD operations. But the challenge there is that Azure TeamCity is a Linux-based system, and running MSBuild commands or NuGet restore on it is painful. Considering all these scenarios, I thought: why not provision a VS 2017 Windows 2016 VM and build my own build agent there using TeamCity?

Having said that, I am going to implement the approach of having a single VM with VS 2017, the TeamCity server and agent, Docker Community Edition, Azure PowerShell, Git for Windows, Azure CLI, etc., to name a few, which will provide an automated process to build Docker images, push them to Azure Container Registry, pull them from the registry, and deploy the container to an Azure Kubernetes cluster.

I am going to divide this post into two –

Create CI build for building docker images and publishing to Azure Container Registry through Team City

Create CD process to deploy container images from Azure Container Registry to Azure Kubernetes Service through Team City

Let’s go ahead and start provisioning a VM.

 

Provision and configure a Windows 2016 VM in cloud

Considering the approach, I find it most suitable to provision the VM image of Visual Studio Community 2017 on Windows Server 2016 (x64) in Azure. Go to the Azure Marketplace and create the VM.

image

 

image

I am going for D2 v3 VM Size with 8 GB RAM, 50 GB Temporary Storage.

image

We need to specify the inbound port rules so that the machine is accessible from internet.

image

A standard SSD looks good to me for the Disks configuration.

image

Configure the networking interface with VNET, Subnet, Public IP

image

Keep the rest of the settings as they are and go ahead to create the VM. Once the VM has been provisioned successfully, RDP into it.

 

Install some prerequisites on the VM before we start configuring the CI/CD process.

TeamCity

Download TeamCity from the following link: https://download.jetbrains.com/teamcity/TeamCity-2018.2.exe. During the installation, I am going to have both the build agent and the server installed.

image

Once you have TeamCity installed, we need to install Docker Community Edition in the server.

 
Docker Community Edition

Download Docker Community Edition from the following link https://store.docker.com/editions/community/docker-ce-desktop-windows.

image

While installing Docker, remember to select Windows containers instead of Linux containers. Once Docker is installed, Hyper-V needs to be enabled for Docker to work. You will automatically be prompted to enable Hyper-V, after which the server will restart; select Yes to get it done. You also need to create a Docker Hub account, which will be associated with your Docker installation.

 

Azure PowerShell

Install Azure PowerShell by running the below command in Windows PowerShell elevated to Administrator.

PowerShell

Install-Module -Name AzureRM -AllowClobber

image

 

Azure CLI

Download and Install Azure CLI from the following link https://aka.ms/installazurecliwindows

 

Chocolatey

Chocolatey is a software management automation tool that helps to install various packages. Install Chocolatey using the following commands in PowerShell.

PowerShell

Run Get-ExecutionPolicy. If it returns Restricted, then run Set-ExecutionPolicy AllSigned.

Run the following command to install Chocolatey

Powershell

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

With all these packages installed, we should be good for now.

 

Configure inbound port rule to Allow connection to TeamCity

To connect to TeamCity from your browser without logging in to the VM, you need to add an inbound rule to accept the connections. Open the Networking section of the VM and add an inbound rule with the port number (in my case, 8001) that was configured during the TeamCity installation. Allow it for all actions.

image

 

Once done, you can browse TeamCity using your DNS name followed by the port number, e.g. http://dockerdemo.southcentralus.cloudapp.azure.com:8001. You should be able to view the TeamCity login page.

image

Once you log into TeamCity, verify your agent is connected.

image

 

Manually build a Docker image and push to ACR

Before we set up the CI/CD process for building Docker images using TeamCity, let us first try to build the Docker image manually and push it to ACR. Open Windows PowerShell and run the following commands in order.

First, ensure you have the Dockerfile in the root of the repository. I already have the Dockerfile in my repo, which will be used to build the image.

image

 

1. Run the following command in PowerShell to build the image –

Docker

docker build https://github.com/devexpresso/aspnet35docker.git

2. Run the below command to review the image created

Docker

docker images

3. Run the command below to tag the image ID with a proper name. Since we are working with ACR, the name should follow the naming convention {container registry host}/{application name}.

Docker

docker tag {image ID} {azure container registry host}/{application name}

e.g. docker tag fb8f4791d1f2 dockerinstance.azurecr.io/aspnet35dockerdemo

4. Log in to Azure CLI

PowerShell

az login

5. Authorize Docker to access ACR using ACR user name and password using the below command

Docker

docker login {ACR host} -u {ACR user name} -p {ACR password}

6. Push the tagged image to the registry

Docker

docker push {image name}

That’s it. You should be able to view the image pushed to ACR repository.

 

Setting Up a CI/CD process for building Docker Images using Team City

This is our first phase: building a continuous integration and deployment process that builds Docker images from our source repository and publishes them to Azure Container Registry.

Let us start by creating a project in the TeamCity instance installed on our VM. This requires the URL of your source control repository, which in my case is GitHub. A username and password are required for TeamCity to authenticate with GitHub.

image

 

image

 

Once the project is created, you need to configure the build steps for the whole CI/CD process. I prefer to configure the build steps manually instead of using auto-detection, since it helps to generalize and capture the essentials in a better way. Hence, I have selected the option to configure build steps manually.

image

 

Before we start configuring our build steps, we need to be clear about our build stages. Let us list them in order.

Build docker images for the application using Dockerfile and docker engine installed in the system

Push Docker image created to Azure Container Registry

Authorize Azure Kubernetes Service to access docker images from Azure Container Registry

Pull image container from Container Registry and deploy it as a pod in Azure Kubernetes Service

Build Step #1: Add a build step to build the Docker image using the Dockerfile in our solution directory. The Docker command should be build. The Path to file should be the Dockerfile path; if your Dockerfile exists in the root directory of your solution, you don't need to give the absolute path. Your Context folder is the directory where your solution resides. We need to keep the Image Platform as Windows, since we need MSBuild to build and compile our project. The Image name:tag should be {container registry host}/{context folder name}, which in my case is dockerinstance.azurecr.io/aspnet35dockerdemo.

image

 

Build Step #2: Add a build step to authorize Docker to connect to Azure Container Registry. I named the build step ACR Login. The build step runs a PowerShell script, so while creating it I selected the Runner Type PowerShell. The Platform on which the script runs is x64 in my case. The Script source should contain the PowerShell code that logs Docker in to the container registry with the registry username and password, represented as:

Docker Command

docker login {container registry host} -u {container registry user name} -p {container registry password}

For e.g.: docker login dockerinstance.azurecr.io -u dockerinstance -p MCl5g9Wu=x8JAF/OTWikrmTZydTTAPy7 

 

image

 

image

 

Build Step #3: This step is responsible for pushing the Docker image to the container registry after a successful ACR authentication. Here the Docker command is push, followed by the image name and tag that we created.

In PowerShell, the script looks like this: docker push dockerinstance.azurecr.io/aspnet35dockerdemo

image

 

Once the build steps are configured, run the build and verify that the image is created successfully. However, you might see that there are no compatible agents to run the build.

image

 

To resolve this issue, follow these steps.

Step #1:

The first is to install the Azure Resource Manager Cloud Support plugin for TeamCity. To do that, download the latest version of the plugin from this URL: https://plugins.jetbrains.com/plugin/9260-azure-resource-manager-cloud-support

image

 

It will be downloaded as a zipped folder with the following contents. You don't need to unzip it.

image

 

Go to Team City and open Administration section.

image

 

Select Plugins List from right menu under Server Administration.

image

 

Browse the folder where you have downloaded the zipped file of the plugin and upload it.

image

 

Once uploaded, your plugin should be available for TeamCity to connect to Azure RM.

image

 

Step #2:

Go to Project Settings and click Edit on the build step where you build the image.

image

 

Then update the Image Platform to Any.

image

 

Step #3:

Next, ensure your Docker Desktop is running. Even if you have started the VM from the portal, Docker Desktop might not be running. So I would suggest logging in to the VM and checking its status.

image

Even when you review the Docker Desktop settings, you will find that you need to log in to start Docker Desktop.

image

 

That's it. You should have a compatible agent now. Run the build; you should see it succeed.

image

 

When you check the ACR repository in the Azure portal, you should find the image created and stored there.

image

 

We are good with our deployment of images to ACR. In the next post we will extend the TeamCity build steps to deploy the container to AKS. Please follow along in Part II (https://devexpresso.com/2018/12/25/teamcity-to-containerize-and-deploy-to-azure-kubernetes-part-ii/).

Publish Docker Images to Azure Container Registry

acr

While I started playing with Azure DevOps, .NET Core, and Docker, I realized that I had missed the case where I have a legacy application developed on the .NET Framework and still want to utilize containerization. Thinking along these lines, I decided to investigate how this strategy works for a sample ASP.NET MVC application built with .NET Framework 4.6.2.

I have created a sample, non-functional ASP.NET MVC app using the VS2017 ASP.NET template. Since I am focused on how we can containerize the application and deploy it to AKS following a proper CI/CD process using Azure DevOps, I left the app as-is without any functionality. I am sure we all know how to create an MVC app, so I will skip the fundamentals in this post.

Let’s go ahead and follow some of the steps that we would like to perform in order to achieve our goal.

Create a Resource group

This is our first step, where we create a resource group in Azure that will hold our Azure resources, like ACR and AKS.

create_resouce_group

Create ACR (Azure Container Registry)

In this step, we create a container registry in Azure. This registry will store the Docker images pushed by the pipeline of the Azure DevOps project. Once an image has been successfully uploaded or updated, the release pipeline will trigger the deployment of the image container to AKS (Azure Kubernetes Service).

create_registry

Provide the registry details: the registry name, the resource group (which we have already created), enable the Admin user (so that Docker can access the registry using the registry name as username and an access key as password), and leave the SKU as Standard.

registry_details

Once the registry is created, copy the registry name and password, which we will use later while defining the Azure pipeline definition file.

registry_access_keys

Add Dockerfile to your solution

This is a critical piece: to containerize my application and create images that will be deployed to AKS, I need to add a Dockerfile to the root of the solution folder. This file doesn't have any extension. Although VS2017 provides a feature to add containerization support, I will go ahead and create the Dockerfile manually instead of choosing containerization and Docker support through VS2017.

An easy way to create the Dockerfile is with Notepad++. Remember to select All Types while saving the file so that no file extension gets associated.

image

The content of the Dockerfile varies based on requirements: the type of project, the structure of the solution, the dependencies involved, and whether we are using the .NET Framework or .NET Core. Since I am going to use .NET Framework 4.6.2, the content below is sufficient to build the image. Remember that the Dockerfile ultimately helps create the container image that we will store in ACR. If you have more than one application, like an ASP.NET MVC app and a RESTful API, we need to create a docker-compose.yaml file along with the Dockerfiles, which handles the creation of more than one image.

image
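Since the screenshot above doesn't reproduce here, the following is a rough sketch of such a Dockerfile for a .NET Framework ASP.NET app. The image tags, project name, and output paths are illustrative assumptions; adjust them to your solution:

```dockerfile
# escape=`
# Build stage: SDK image with MSBuild and NuGet on board (assumed tag)
FROM microsoft/dotnet-framework:4.7.2-sdk AS build
WORKDIR C:\app
COPY . .
RUN nuget restore
RUN msbuild /p:Configuration=Release /p:OutDir=C:\out\

# Runtime stage: IIS-based ASP.NET image matching the target framework
FROM microsoft/aspnet:4.6.2
COPY --from=build C:\out\_PublishedWebsites\AspnetMvcDemo C:\inetpub\wwwroot
```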

The Dockerfile defines the series of steps required to build the container image. Since my application targets .NET Framework 4.6.2, I have to specify the 4.6.2-runtime image as the base. If it were .NET Core, it would be the Core SDK image.

To look at the various supported IIS images for Docker, you can visit https://github.com/microsoft/aspnet-docker, and to see the .NET Framework versions supported for Docker, use this link: https://hub.docker.com/r/microsoft/dotnet-framework/

The minimal steps we should define are:

1. Specify the runtime required to build the application

2. Specify the .csproj, .config, etc., files that should be copied

3. Specify where and how the build should take place so Docker can create the image

As I have already mentioned, there can be additional steps beyond this sequence based on complexity, dependencies, and requirements.

We also need to add a .dockerignore file, which contains the list of paths and extensions that Docker should ignore while creating the image. For example, the Docker build doesn't need the bin or obj folders, or any other output folders. You can create this file with Notepad++ too, but remember to select All Types while saving and don't give it a filename; just save it as .dockerignore.

image
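A typical .dockerignore for this kind of project (an illustrative sketch) might contain:

```
bin/
obj/
packages/
.vs/
*.user
```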

That’s all you need for docker.

You might be interested in visiting https://github.com/Microsoft/dotnet-framework-docker, which has various samples for using Docker.

Create an Azure DevOps project

If you are new to Azure DevOps projects, there are tons of labs at https://www.azuredevopslabs.com/ that you might want to dig into to get some hands-on experience with Azure DevOps.

In this step we are going to use Azure DevOps Management portal to create a DevOps project and define build and release pipelines. Login to https://dev.azure.com using your Microsoft credentials and create a project.

create_new_project

I already had my source code in GitHub.

Once the project has been successfully created, our first step is to define the CI build pipeline. There are two ways to do it:

1. Use the visual designer to create the pipeline without YAML

2. Use a series of steps to create the pipeline with YAML

In both options, the first step is to select the source control repository where the code resides. Just for simplicity, I imported my repository from GitHub into the Azure Git repo in the DevOps project. The only reason for this is that I would otherwise have to use GitHub's OAuth facility to generate a token and authorize the Azure DevOps project to interact with my GitHub account. Anyway, you can go with your preference.

I will be using Visual designer to create my build pipeline. First step is to select the repository.

Creating the pipeline using Visual designer

image

The next step is to select the template that will build the app. Since we are going to use Docker to build container images, we should select the Docker container template.

image

Once you have selected the template, it creates the build definition, which will initially have only two steps: Build an image and Push an image. We need to add a few more steps.

First we need to change the Agent pool for the pipeline to Hosted VS2017, since MSBuild and NuGet restore will not work in the default Hosted Ubuntu 1604 agent.

image

Next we need to add the build step of installing NuGet in the agent.

image

image

Then we need to add a build step for NuGet Restore.

image

image

Next we will add a task for MSBuild

image

image

Then we need to configure our task for Build an image.

image

If you are wondering how to get the Azure subscription endpoint, follow the process of creating a service connection to Azure Resource Manager by opening Project Settings and selecting Service connections.

image

Click on New service connection.

image

Select Azure Resource Manager from the dropdown.

image

Provide the resource group name and connection name.

image

That’s it and your subscription endpoint will get created.

image

The last task in the build steps for the agent is Push an image. Here we need to configure the settings for pushing the container image created to ACR.

image

We are all set. Let us trigger the build now.

image

If everything goes smoothly and the build succeeds, we should see the image created and stored in the ACR repository. Log in to your Azure portal and verify that the container image has been generated.

image

Creating the pipeline using Step-By-Step process generating azure-pipelines.yml file

If you are using step-by-step procedure instead of Visual designer, then you should select the repository from the below screen.

new-pipeline

Select the location of your repository. If you are using GitHub, you need to use OAuth to authorize the Azure DevOps project to connect to it. If you are using an Azure Git repo like mine, you just need to select the repo.

repo_selection

Select ASP.NET template for the pipeline since we are going to build ASP.NET MVC app.

select-pipeline

Once you are done, it creates the build definition, queues it, and executes it. This also creates an azure-pipelines.yml file and uploads it to your Git repo; it is a template containing the series of steps executed as part of the build.

define_azure_pipeline

Once the build has executed, you will be able to view the build summary. Unfortunately, the build failed, and I will come to that issue shortly.

edit_build_pipeline

Now, it is important to understand that the build is supposed to create Docker images, and the Azure pipeline should have access to ACR. If you remember, we stored the ACR username and password that Docker will use. With that in mind, we need to rename the azure-pipelines.yml file to azure-pipelines.acr.yml.

update_pipeline_name

We also need to add dockerId and dockerPassword to the variables section of the file. The value of dockerId is the ACR username, and the value of dockerPassword is the ACR password from the ACR access keys.

image
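A sketch of what the variables section might look like (placeholder values; in a real pipeline the password should be stored as a secret pipeline variable rather than committed in plain text):

```yaml
variables:
  dockerId: '{ACR username}'
  dockerPassword: '{ACR password}'
```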

When you save the file and commit it to git, it will automatically trigger the CI build.

queue_build

Unfortunately, I am finding an issue while executing the VSBuild task. Here is the issue.

pipeline_build_error

After various attempts and help from different forums, I found that the issue occurs because we have the packages folder in our repo. Since NuGet restore happens during the build process, it finds the existing packages folder, which creates issues. You might face the same issue, so keep in mind that it is not related to any missing DLL or library references. To resolve it, you can either delete the packages folder from the repo or rename it; NuGet restore will automatically regenerate the packages folder and add all the dependent libraries there. After this fix, I was able to run the build successfully.

image

So far so good: we were able to run the build.

 

Update your azure-pipelines.acr.yml with the following steps for building the docker image and pushing it to the container registry.

image

Now if you queue the build, you might encounter an error saying that the newly created service endpoint is not authorized to use resources. To resolve this, edit the pipeline build definition, change something (like the highlighted field), save it, revert the change back to the original, and save it again. Now if you queue the build, it will work. It is an issue, but we have to live with the workaround.

image

Issues encountered in the CI process

You might encounter a couple of issues during the CI process. Yours may not be identical, but I would like to share some of them along with the resolutions, which might help you.

#Issue 1

When the build gets triggered, you might encounter an error saying that the Dockerfile is not found while building the image.

image

Validate that the Dockerfile path provided in the build task is correct. It is advisable to always keep the Dockerfile in the root solution directory and not under any sub-folders or project folders.

image
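You can sanity-check the path locally before queuing the build. This is a sketch; the image tag here is a placeholder:

```shell
# Run from the solution root; -f points at the Dockerfile explicitly
docker build -f Dockerfile -t employeemanagement:ci-test .
```

If this cannot find the Dockerfile locally, the hosted agent will fail in the same way.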

#Issue 2

You might encounter an error from the NuGet restore step of the Dockerfile which says ‘nuget’ is not recognized as an internal or external command. This happens when you are using the Ubuntu agent, or any agent other than the VS 2017 hosted agent.

image

Remember to change the agent to Hosted VS 2017.

#Issue 3

You might also encounter an error from the msbuild step of the Dockerfile which says ‘msbuild’ is not recognized as an internal or external command. This is the same situation as with NuGet: changing the agent to Hosted VS 2017 resolves it.

image

Create the Azure Kubernetes Service cluster

In this step we will see how to create an Azure Kubernetes Service (AKS) cluster, where we are going to deploy our container image from Azure Container Registry using an Azure DevOps release pipeline (CD process).

Login to https://portal.azure.com and select Kubernetes Service to create it.

image

Provide the basic details like resource group name, cluster name and DNS name prefix. Based on your requirements you can choose the number of nodes; in my case I will go with a node count of 1.

image
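For reference, the same cluster can be provisioned from the CLI. This is a sketch reusing the names that appear later in this post (dockerdemo-rg, dockerdemocluster); the service principal values are placeholders:

```shell
# Create a single-node AKS cluster with an existing service principal
az aks create \
  --resource-group dockerdemo-rg \
  --name dockerdemocluster \
  --node-count 1 \
  --dns-name-prefix dockerdemoaks \
  --service-principal "<appId>" \
  --client-secret "<clientSecret>"
```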

In the authentication page, select to create a new service principal or use an existing service principal. Since I already have an existing service principal, I will use the same.

Service Principal Creation

If you want to know how to create a service principal, go to Azure Active Directory and select App Registrations from the menu. Click New application registration.

image

Provide the Registration name and Sign-on URL which can be changed at any point of time. Give any URL you want. Click Create.

image

Once the app is registered, the Application ID is your Service Principal Client ID.

image

Click Settings, select Keys and add a password. Just provide the description and duration for the field while keeping the value empty. When you click Save, the password will be generated automatically. Remember to store this password, as it cannot be retrieved later if you plan to use it. This password acts as the Service Principal Client Secret.

image
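The portal steps above can also be collapsed into a single CLI call; a sketch with a placeholder name:

```shell
# Registers an app and generates a client secret in one step; in the output,
# appId is the Client ID and password is the Client Secret
az ad sp create-for-rbac --name dockerdemo-sp
```

As with the portal, store the generated password immediately; it is not retrievable later.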

Back to Continuation of AKS creation

Back on the authentication page, leave Enable RBAC disabled and provide the Service Principal details.

image

Keep the networking and monitoring as it is. Review the details and then click Create.

image

Once the deployment has started, you should be able to view the status of the deployment in the overview page.

image

In order to view the Kubernetes Dashboard Web UI, open Azure Cloud Shell and type the following command with your resource group name and AKS cluster name: az aks browse --resource-group dockerdemo-rg --name dockerdemocluster


image

Browse the URL as shown in the CLI window and you should be able to view the Kubernetes Web dashboard.

image


We are going to use the AKS to deploy the container images in our next post. Also, I am going to show both usage of Docker Hub and ACR for publishing the image and deploy to AKS. Stay tuned.


Introduction to Azure Devops Project

 

devops

 

What does DevOps mean to me…

DevOps, the word itself, says a lot. To me, DevOps defines processes that defy the separation of two worlds: development and operations. It states clearly that, as a developer, you own how and where your code runs. Your responsibilities are not limited to development; they extend to implementing the deployment strategy for your code. You should know where and how to deploy the code instead of depending on someone else, and the same goes for an operations person, who should also understand the purpose of the code being deployed. It is important to accept this responsibility, since as a developer I am fully aware of the compatibility, complexities, dependencies and platforms on which my code can run.

DevOps concepts and methodologies emphasize bridging the gap between the development and operations teams: one single team capable of the full SDLC process, implementing Agile methodologies and providing a consistent, robust delivery model for clients. To get more in-depth on DevOps concepts, I would recommend some of the wonderful training materials and videos available online. I found one video in particular that gives some good insights into DevOps, and I think it is good.

 

This post emphasizes exploring the capabilities of the new Azure DevOps project and how its built-in CI/CD features work to build a robust release pipeline. Continuing my .NET Core migration series, what can be better than using the capabilities of an Azure DevOps project? Just to recap, the following are the stages of this migration journey that I have completed so far.

1. Developing a sample ASP.NET Web API application using .NET Framework 4.6.1. (click here)

2. Converting ASP.NET Web API application to ASP.NET Core Web API (click here)

3. Introduction to Azure Cosmos DB as our data storage (click here)

4. Provisioning Jenkins server from Azure marketplace (click here)

5. Introduction to Docker and enabling Containerization capabilities to our .NET Core API solution (click here)

6. Introduction to Azure Container Registry (ACR) and Azure Kubernetes Service (AKS) in Azure (click here)

 

Our Agenda For using Azure DevOps

We are going to use an Azure DevOps Project to build the build and release pipeline for our sample .NET Core application, which also includes a containerization aspect. However, since the contents of the whole implementation are too large for a single post, we are going to split it into multiple posts.

In this post we are going to go over a sample implementation of an Azure DevOps project from the portal for a new project, and see exactly what role the DevOps project plays.

Our second post will use the Azure DevOps service to create the build and release pipeline for our existing .NET Core project, which should build docker images, store them in ACR and then deploy the container to AKS.

In our third and final post we are going to see how integrating the Jenkins CI/CD tool with the Azure DevOps service works to build and publish the container image to AKS.

Azure DevOps and its flavors

Azure DevOps has two flavors, both contributing to the same goal of building a robust release pipeline supporting continuous integration and delivery processes.

Azure DevOps Services is a management tool that provides a distinct ecosystem incorporating services that support build and release pipelines, a platform for Agile principles using Scrum and Kanban methodologies, and integrated test automation capabilities, on top of providing a distinct platform for CI/CD processes and the delivery pipeline. Just to let you know, Visual Studio Team Services (VSTS) is now referred to as Azure DevOps.

Azure DevOps Project is part of the Azure DevOps service and is integrated with the Azure portal, providing a quick way to set up a CI/CD pipeline in Azure within a few minutes.

 

How to get started

Well, there are two ways you can get started with an Azure DevOps project. You can use the Azure DevOps Project template directly from the portal, which will help you create the whole CI/CD process in a few minutes, or create it through the Azure DevOps service at https://dev.azure.com. We will use the Azure DevOps Project template from the portal and then validate the configuration, build definitions and release pipeline using the Azure DevOps service. We are going to trigger the build from the service portal.

Let’s get started….

 

Create a sample project using Azure DevOps Project Template from Azure portal

Login to Azure portal https://portal.azure.com and search for Azure DevOps Project from the marketplace.

image

 

Once you have selected the project template, click Create from the template detailed screen.

image

 

In the application creation screen, you can either create a new application with basic features using a key technology like .NET, Node.js, PHP, Java or Python, or select an existing application to which you need to apply DevOps processes.

image

 

I am going to take the option of creating a new .NET application and then click Next. The next screen gives me options to select the runtime, such as ASP.NET or ASP.NET Core. It also lets me add a SQL Server database in Azure if I choose to. I am going to go with ASP.NET Core and leave the database option as it is, since I don’t need it here.

 

image

 

On the next screen, I am provided with options to select the Azure service for deploying my application: Kubernetes Service, Service Fabric, Windows Web App, Linux Web App, Web App for Containers and Virtual Machine. I am going with Kubernetes Service, since I have already planned and configured our .NET Core application for containerization. For the sample project we don’t strictly need Kubernetes, but I want to see how the Kubernetes Service configuration is applied in the project.

image

 

The next screen gives you fields to define the project name, Azure DevOps organization name, Azure subscription, and the cluster name, which is automatically populated based on the project name and location.

image

 

In the same creation screen, you will find Additional Settings which on selection opens another window where you can configure the Kubernetes Service and Container Registry.

In the Kubernetes Service settings, you can create a new Azure DevOps organization if you don’t have one, give a resource group name, provide the number of nodes you need for the cluster, select the VM size and provide the location for Log Analytics.

In the Container Registry settings, you can provide the name of the registry that will be created, whether it is going to be a Basic, Standard or Premium registry, and the location of the registry.

image

 

Once the process is complete, you can see that the project has been created in the dashboard. Click Go to resource.

image

 

In the project dashboard, you will be able to view the CI/CD pipeline generated.

image

 

So far everything looks good. However, it would be great to get some insight into what is happening behind the scenes of DevOps.

 

Behind the scenes of the Azure DevOps project

If you look closer, there are various resources and components involved in building the DevOps project. I would like to know how each of these Azure resources and services plays its key role in the execution of the DevOps project, and then the various steps and processes involved in executing the build and release pipeline associated with it.

Azure DevOps

From the above infographic, we can review the various resources and components that play a vital role in this project. After the project is created from the Azure portal, three resource groups are created –

1. Resource Group for DevOps project itself

2. Resource Group containing the collection of Kubernetes Service, Log Analytics, Application Insights and Container Registry

3. Resource Group containing the compute resources like AKS Node, NIC (Network Interface), DNS zone, Kubernetes Load balancer and NSG (Network Security Group)

 

Let us start with the resource group containing the compute services.

Azure Kubernetes Service Structure

As we have selected only 1 node for our Kubernetes cluster, we have one AKS agent (VM) which is under the Agent pool availability set.

The agent node runs the containers through the kubelet, which drives the container runtime (Docker) and connects to resources like the VNET, storage, etc.

The network interface connects the agent to the virtual network (VNET), with kube-proxy running on each node.

The load balancer provides an external IP address to route incoming traffic to the application.

The AKS DNS zone gives the application an easily accessible name.

NSGs (Network Security Groups) are created automatically when the load balancer is created, and filter traffic to the agents/nodes.
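These pieces can be inspected from the CLI once the cluster credentials are fetched; a sketch using the cluster names from this series:

```shell
# Merge the AKS credentials into the local kubeconfig
az aks get-credentials --resource-group dockerdemo-rg --name dockerdemocluster

# The agent node(s) backing the agent pool
kubectl get nodes -o wide

# Services across namespaces, including the load balancer's external IP
kubectl get services --all-namespaces
```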

 

The container resource group has four major resources deployed as part of the DevOps project –

1. Container Registry (ACR), which stores the images for container deployments

2. Kubernetes Service (AKS), which manages the hosting environment for Kubernetes in order to deploy and manage containerized applications; each container is executed by a container runtime such as Docker

3. Application Insights, which monitors the health of the Kubernetes Service

4. Log Analytics, which collects data and telemetry from Application Insights
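To see exactly what landed in each of these resource groups, you can enumerate them from the CLI; the group name below is a placeholder:

```shell
# List every resource deployed into a given resource group
az resource list --resource-group "<resource-group-name>" --output table
```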

 

Explore DevOps project features and components

Now that we have gone through the various components and resources deployed by the Azure DevOps project, it is a good time to look at the project's features. Let us switch to https://dev.azure.com and have a quick look at the DevOps project we created.

image

 

In the left navigation menu, the most important menu items that we should be considering are Boards, Repos, Pipelines, Test Plans and Artifacts.

1. Boards is used to track work: Kanban boards, backlogs and sprint planning.

2. Repos connects to the Git repository associated with your Azure account and manages the source code.

3. Pipelines is the most important aspect of the DevOps project, as it contains the definitions and integration of the CI/CD pipeline. This is the only one we will cover in this article.

4. Test Plans plays a significant role if you are going for test automation.

5. Artifacts are the deployable components for your application, delivered through the Azure pipeline.

 

Azure CI/CD Pipeline

Azure Pipelines helps to continuously build, test and deploy applications to the cloud. It has various components –

1. Builds

2. Release

3. Library (a collection of shared build and release assets for the project)

4. Task Groups (groups of common actions shared across multiple builds and releases)

5. Deployment Groups (a collection of machines, each running an Azure Pipelines agent, used to host the app).

 

Azure DevOps Build Pipeline

Azure DevOps Build Pipeline

1. The build pipeline starts with providing the build name and selecting the agent pool (e.g. Hosted Ubuntu 1604, Hosted VS 2017, etc.).

image

 

2. Select the source control (e.g. Azure Repos Git, GitHub, BitBucket, etc.), the team project name, the repository, the branch for manual and scheduled builds, and other optional parameters.

image

 

3. Create the agent job, which will have multiple steps or tasks to be completed. The agent job inherits from the agent pool, which provides an agent during the build process. You can also have multiple agents running the same set of tasks by selecting the Parallelism option.

 

Just to let you know, the steps added to this job are very specific to building docker images and working with ACR. They might not be the appropriate steps in your case, and you can choose any steps that fit your build strategy.

a. The Azure Container Registry deployment step requires the Azure subscription, the ACR to create, the resource group, the template mapped to an Azure RM template, etc.

image

 

b. The build docker images step uses the ACR information and the Dockerfile path to build the images.

image

 

c. The push docker images step places the images in ACR.

image

 

d. The install Helm tool step installs Helm and the Kubernetes command-line tooling on the agent machine. In case you are new to Helm: it is a package manager for Kubernetes that helps deploy apps to Kubernetes. Helm's packaging format is called a chart, a collection of files that describe Kubernetes resources. In the next step we are going to package and deploy those charts.

image

 

image
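Behind these packaging and deployment steps, the Helm CLI calls look roughly like the following sketch; the chart path and release name are placeholders:

```shell
# Package the chart directory into a versioned .tgz archive
helm package ./charts/sampleapp

# Install the chart as a release, or upgrade it in place if it already exists
helm upgrade --install sampleapp ./charts/sampleapp
```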

 

You might be wondering how to get the chart path, or how to create the charts. The charts are available in the project folder and are specific to each individual project in the solution. However, I am not going to go into the depth of charts here, since I will cover that in my next articles. Here is the project structure containing the chart.

image

 

e. The copy files step copies the ARM templates from the source folder to the destination folder.

image

 

f. The publish step uploads the build artifacts to the Azure pipeline.

image

 

Once the build succeeds, you can view the build status.

image

 

Azure DevOps Release Pipeline

The release pipeline contains the execution steps to release the application and deploy it to a specific target, which in this case is the Kubernetes cluster.

Azure DevOps Release Pipeline

You can add as many stages as you want, like DEV, STAGING and PRODUCTION.

image

 

The drop of the artifacts from the CI build triggers the release.

image

 

Next comes how the agent executes the steps: create the AKS cluster, extract the application routing zone from the deployment outputs, install Helm, and then package and deploy the Helm charts.

image

 

In the AKS cluster creation step, if you have a template for AKS you can specify it along with its template parameters. Make sure to set the OMS location to East US, or to another location where the cluster is available; South Central US is not available.

image

 

Ensure that you have selected the right Azure subscription endpoint, resource group and location for the cluster creation.

image

 

While configuring the Application Routing Zone, provide the script that will be executed in this step.

image

 

In the Helm package and deploy charts step, ensure you have the right values for your subscription, resource group and AKS cluster name.

image

 

Once the configuration is complete, you can create a release.

image

 

 

Error from the Release

After the successful CI build, I did notice that the deployment to Kubernetes has failed.

image

 

On clicking the Release-1 link, it redirects me to the log generated by the release pipeline in the Azure DevOps portal. The error says: “The VM size of AgentPoolProfile:agentpool is not allowed in your subscription in location ‘centralus’.” This means that the location centralus selected for the Kubernetes service is not available for my subscription.

image

 

In order to fix this issue, I changed the OMS location in the template parameters to East US, which made it work. Initially it was pointing to South Central US.

 

Conclusion

This post is just an introduction to how we can build a CI/CD pipeline using an Azure DevOps project; it does not cover the actual deployment of our .NET Core project. In my next post, instead of using an Azure DevOps project, which has the limitation of not being able to use an existing ACR and AKS, I am going to create the CI/CD pipeline from scratch. I will also modify our existing solution to add Helm charts to manage Kubernetes. Along with that, we will see how we can integrate a Jenkins build with the DevOps release pipeline to have a complete CI/CD process in place for our existing .NET Core application.

Migration from .NET Framework to ASP.NET Core

aspnetcoremigration

Migration is a major decision, and it does not always follow a greenfield path. There can be numerous reasons for deciding to migrate an application codebase from its legacy form to a future state, to perform a database migration, or even an infrastructure migration from on-premise to cloud. Any migration can turn into a major disaster if proper strategies are not defined or if it lacks efficient guidelines and resource management.

Anyway, this post focuses on migrating the sample ASP.NET Web API solution that we built earlier to ASP.NET Core. However, as the approach follows a migration path with a few core changes, I would not call it a greenfield project. Every migration differs, with its own set of issues and options for handling them.

Why migrating to .NET Core

This is definitely a good question that we developers keep in mind, knowing that the existing codebase satisfies our business needs and is quite stable. It is like asking someone who is looking to upgrade their car from a sedan to an SUV: the answer you will likely get is to gain more flexibility, power and adaptable features. The same kind of reasoning applies when you want to migrate an application built on the .NET Framework to .NET Core.

A few of the factors that come to my mind for this migration are better testability of the application, the ability to run on any platform rather than being restricted to Windows, readiness for cloud deployment with configuration management, the ability to host anywhere, and a lightweight runtime with high performance. Having said that, let us look into the migration itself.

Is there any migration tool?

Honestly, as far as I know, we don’t have any migration tool that can migrate the whole application to .NET Core for us. This has to follow a step-by-step, strategic approach. However, my initial step would be to use the .NET Portability Analyzer, which can be used to identify how compatible the codebase is with the migration target. You can either download the tool or add it from the Visual Studio extensions gallery. Let’s look into that first and see what the analyzer gives us.

We will first search for .NET Portability Analyzer from Visual Studio extensions and download it.

image

Once installed, right-click the solution and select Portability Analyzer Settings, which opens the configuration for the analyzer. Here you can define the output directory where the analysis report will be generated, the output format of the report (XLS, HTML or JSON), and the target platforms to which you want the codebase migrated, which for us would be .NET Core 2.1 and .NET Standard.

Now, as a best practice, all class libraries should be converted to .NET Standard instead of .NET Core. The reason is that many external or 3rd-party libraries, like Newtonsoft.Json, target .NET Standard rather than .NET Core. Moreover, .NET Standard is a lower-level contract than .NET Core, so .NET Standard libraries are consumable from .NET Core.

image

Since I have only one API project in my solution and I want to migrate it to ASP.NET Core, I will select ASP.NET Core as my target platform for the portability analysis.

image

Once the settings are done, go ahead, right-click the API project and select Analyze Project Portability to start the analysis.

image

Once the analysis is complete, you can review the output generated in HTML or Excel format like shown here.

image

Based on the report, the most notable assemblies that .NET Core does not support are –

System.Configuration.ConfigurationManager

System.Net.Http.Formatting.MediaTypeFormatter

System.Net.Http

System.Web.Http

System.Web.Mvc

System.Web.Routing

Since this report was generated for a Web API project, only a few non-compatible assemblies are listed. If you have a complex solution with various class libraries and 3rd-party libraries, you might see a different result altogether. At the end of this post, I will try to list some of the incompatible libraries that I encountered in a real-world project conversion, along with their alternatives in .NET Core.

Starting the migration aspect

The portability analysis report provides only a few details to get started. It might not cover everything you will encounter during the compilation or execution phase of the converted application.

Since we don’t have a migration utility available yet, our first step is to create a fresh new solution with the same number of projects as the .NET Framework application, but targeting ASP.NET Core 2.1 and .NET Standard (the highest version available).

What if I have a complex structure with multiple class libraries along with the API project?

Well, in this case you need a step-by-step conversion process: create the corresponding class libraries and dependencies targeting .NET Standard, copy the codebase from the original solution into the libraries, and fix all the .NET Standard compatibility issues. Once this phase is complete, you can add references to these dependencies in your new ASP.NET Core 2.1 project.

Structural differences between the API project

There are significant structural differences between an API project developed with the .NET Framework and one developed with .NET Core.

image                      image

Let us identify some of the significant differences between these two applications.

Startup.cs – This class defines the request handling pipeline and services that need to be configured

Typically in ASP.NET MVC, we have the Startup class in App_Start, which gets triggered when the application is launched and initializes the pipeline. It is only required if you are handling Katana/OWIN functionality for the MVC or Web API app, and hence it is optional. In ASP.NET Core, however, this class is a must-have and is generated by default.

If you look into this class, there are three major components –

  1. The Configure method creates the application's request processing pipeline. IApplicationBuilder, used to configure the request pipeline, and IHostingEnvironment, which provides web hosting environment information, are injected into it. If you have Swagger implemented, this is where you configure the Swagger endpoint.
  2. The ConfigureServices method configures the application services through the injected IServiceCollection, which specifies the contracts for service descriptors. IServiceCollection lives in the Microsoft.Extensions.DependencyInjection namespace and lets services be resolved using the built-in dependency injection.
  3. IConfiguration, injected through the Startup class constructor, is used to read the configuration properties represented as key/value pairs.

 

[csharp]
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}

public IConfiguration Configuration { get; }

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddSingleton<IConfigurationProvider<Employee>, EmployeeProvider>();
services.AddSingleton<IConfigurationProvider<Project>, ProjectProvider>();
services.AddSingleton<IConfigurationProvider<Department>, DepartmentProvider>();
services.AddSingleton<IConfigurationProvider<Client>, ClientProvider>();
services.AddSingleton<IConfigurationProvider<Skills>, SkillProvider>();

services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new Swashbuckle.AspNetCore.Swagger.Info { Title = "EmployeeManagementApi", Version = "v1" });
c.IncludeXmlComments(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "EmployeeManagement.Api.xml"));
c.ResolveConflictingActions(apidescription => apidescription.First());
c.DescribeAllEnumsAsStrings();
});
services.AddMediatR(typeof(Startup));
services.AddScoped<IMediator, Mediator>();
services.AddMediatorHandlers(typeof(Startup).GetTypeInfo().Assembly);
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}

app.UseSwagger();
app.UseStaticFiles();
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "Employee Managament Api");
c.RoutePrefix = string.Empty;
});

app.UseMvc();
}

}
[/csharp]

Configuration files for ASP.NET Core

In ASP.NET MVC, we provide all our application configuration settings in the web.config file. In ASP.NET Core, however, we provide these settings in JSON format in the appsettings.json file, placed at the root of the API project.

[csharp]
{
"Logging": {
"LogLevel": {
"Default": "Warning"
}
},
"AllowedHosts": "*",
"AppSettings": {
"endpoint": "https://localhost:8081/&quot;,
"authKey": "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
"database": "empmanagement",
"collection": "employee"
}
}
[/csharp]

For me, the JSON file has only a few keys: the endpoint for Cosmos DB, the authKey for Cosmos DB, and the database and collection key/value pairs.

Another important file to look into is launchSettings.json, which contains the information about how and where the application runs. We can configure settings for various environments like Development, Staging and Production, along with how to run the application under local IIS Express or a full IIS server. This file is also in JSON format.

 

[csharp]
{
"iisSettings": {
"windowsAuthentication": false,
"anonymousAuthentication": true,
"iisExpress": {
"applicationUrl": "http://localhost:1789&quot;,
"sslPort": 0
}
},
"$schema": "http://json.schemastore.org/launchsettings.json&quot;,
"profiles": {
"IIS Express": {
"commandName": "IISExpress",
"launchBrowser": true,
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
"EmployeeManagement.Api": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "api/values",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
},
"applicationUrl": "http://localhost:5000&quot;
},
"Docker": {
"commandName": "Docker",
"launchBrowser": true,
"launchUrl": "{Scheme}://{ServiceHost}"
}
}
}
[/csharp]

Implement Exception Error Handling in ASP.NET Core

System.Web.Mvc.HandleErrorAttribute, which is responsible for handling any exception thrown by an action method in ASP.NET MVC, is not supported in ASP.NET Core.

In this case we can create a custom exception filter derived from the ExceptionFilterAttribute class in the Microsoft.AspNetCore.Mvc.Filters namespace, which runs when an exception is thrown. We can have an ApplicationLogging class holding an instance of ILoggerFactory from the Microsoft.Extensions.Logging namespace, which supports API logging through third-party logging providers like NLog or log4net. The custom exception filter uses this logging class, which tells the application where to log.

[csharp]
public static class ApplicationLogging
{
public static ILoggerFactory LoggerFactory { get; } = new LoggerFactory();
public static ILogger CreateLogger<T>() => LoggerFactory.CreateLogger<T>();
}
[/csharp]

[csharp]
public class CustomExceptionFilterAttribute : ExceptionFilterAttribute
{
    ILogger Logger { get; } = ApplicationLogging.CreateLogger<CustomExceptionFilterAttribute>(); // to tell where we log

    public override void OnException(ExceptionContext context)
    {
        using (Logger.BeginScope($"=>{nameof(OnException)}")) // to tell which method we log from
        {
            Logger.LogError(context.Exception, "Unhandled exception"); // to tell what exception we log
        }
    }
}
[/csharp]

You can then apply the custom exception filter attribute to your controllers.

[csharp]
[Produces("application/json")]
[Route("api/[controller]/[action]")]
[ApiController]
[CustomExceptionFilter]
public class SkillController : ControllerBase
{
    private readonly IConfigurationProvider<Skills> _provider;
    private readonly IMediator _mediator;
}
[/csharp]

You can also use Microsoft.IdentityModel.Logging for logging by installing its NuGet package and overriding the OnException() method in the custom exception filter class.

[powershell]
Install-Package Microsoft.IdentityModel.Logging -Version 5.3.0
[/powershell]

 

[csharp]
public override void OnException(ExceptionContext context)
{
    Microsoft.IdentityModel.Logging.LogHelper.LogExceptionMessage(context.Exception);
}
[/csharp]

I am not covering exception handling in much depth here, as it deserves a post of its own. This is just an insight into the issues you can encounter and the alternatives to fix them.

Using Configuration Manager in ASP.NET Core

When retrieving values from configuration files like Web.config or App.config, we generally use ConfigurationManager to read appSettings. However, since System.Configuration.ConfigurationManager is not supported in .NET Core out of the box, we cannot use it directly. The workaround is to install the compatibility package:

 

[powershell]
Install-Package System.Configuration.ConfigurationManager -Version 4.5.0
[/powershell]

In .NET Core, however, we are supposed to read values from the appSettings.json file instead of a web.config or app.config file. To do that, we can create a static helper class such as ConfigurationResolver.

 

[csharp]
public static class ConfigurationResolver
{
    public static IConfiguration Configuration()
    {
        string basePath = AppContext.BaseDirectory;
        string environmentName =
            Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";
        var configuration = new ConfigurationBuilder()
            .SetBasePath(basePath)
            .AddJsonFile("AppSettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"AppSettings.{environmentName}.json", optional: true, reloadOnChange: true)
            .Build();
        return configuration;
    }
}
[/csharp]

And then you can use the helper class like this

 

[csharp]
private static IConfiguration Configuration;
private static string DatabaseId; // example field populated from configuration

public static void Main()
{
    Configuration = ConfigurationResolver.Configuration();
    DatabaseId = Configuration.GetSection("AppSettings").GetSection("database").Value;
}
[/csharp]

 

Unavailability of System.Web.Http in ASP.NET Core

The absence of System.Web.Http in .NET Core causes a lot of issues in code bases that rely heavily on libraries and references from this namespace.

For example, ApiParameterDescription and ApiDescription, which belong to System.Web.Http.Description and provide metadata descriptions of API inputs.

To get equivalent functionality, we need to install the ApiExplorer package:

 

[powershell]
Install-Package Microsoft.AspNetCore.Mvc.ApiExplorer -Version 2.1.2
[/powershell]

Once the package has been installed successfully, most of this functionality becomes available through the ApiExplorer types.
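As an illustration (DocsController and its route are hypothetical names, not part of the original code base), the ApiExplorer metadata can be consumed by injecting IApiDescriptionGroupCollectionProvider into a controller:

[csharp]
// Sketch only: DocsController is a hypothetical endpoint that lists the
// registered API actions using ApiExplorer metadata.
[Route("api/[controller]")]
public class DocsController : Controller
{
    private readonly IApiDescriptionGroupCollectionProvider _apiExplorer;

    public DocsController(IApiDescriptionGroupCollectionProvider apiExplorer)
    {
        _apiExplorer = apiExplorer;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // Each ApiDescription exposes HttpMethod, RelativePath and
        // ParameterDescriptions, much like System.Web.Http.Description did.
        var endpoints = _apiExplorer.ApiDescriptionGroups.Items
            .SelectMany(group => group.Items)
            .Select(description => new { description.HttpMethod, description.RelativePath });
        return Ok(endpoints);
    }
}
[/csharp]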

 

Defining routes in ASP.NET Core

Configuring routes using MapHttpRoute is not supported in .NET Core. Instead, you can define the default routes in the Startup.cs file:

 

[csharp]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseMvc(routes =>
        {
            // New route
            routes.MapRoute(
                name: "about-route",
                template: "about",
                defaults: new { controller = "Home", action = "About" }
            );

            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
[/csharp]

 

System.Web.Mvc in ASP.NET Core

System.Web.Mvc.Controller is not supported in .NET Core. However, you can install the Microsoft.AspNetCore.Mvc package as a replacement:

 

[powershell]
Install-Package Microsoft.AspNetCore.Mvc -Version 2.1.2
[/powershell]
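As a minimal sketch (EmployeeController and its route are illustrative), your controllers then derive from Microsoft.AspNetCore.Mvc.Controller, or from ControllerBase for APIs that do not need view support, instead of System.Web.Mvc.Controller:

[csharp]
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class EmployeeController : ControllerBase
{
    // IActionResult replaces System.Web.Mvc.ActionResult
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        if (id <= 0)
        {
            return BadRequest();
        }
        return Ok(new { Id = id });
    }
}
[/csharp]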

 

Enabling Swagger capabilities in ASP.NET Core

Swagger is an elegant way to provide API documentation. For that, you need to install the Swashbuckle.AspNetCore package. Once installed, update your Startup.cs file to register Swagger with the service collection and expose the Swagger endpoint.

 

[csharp]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "SampleApi", Version = "v1" });
            c.IncludeXmlComments(GetXmlCommentsPath());
            c.ResolveConflictingActions(apiDescriptions => apiDescriptions.First());
            c.DescribeAllEnumsAsStrings();
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        app.UseSwagger();
        app.UseStaticFiles();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "Sample API V1");
            c.RoutePrefix = string.Empty;
        });

        app.UseMvc();
    }

    private string GetXmlCommentsPath()
    {
        return System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Sample.Api.xml");
    }
}
[/csharp]

Change your launchSettings.json file to set launchUrl to index.html, which will open the Swagger endpoint:

 

[csharp]
"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "launchUrl": "index.html",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}
[/csharp]

In your controller and action methods, you can add the following attributes.

 

[csharp]
[Produces("application/json")]
[Route("api/[controller]/[action]")]
public class ClientController : Controller
{
    [HttpPost()]
    [ActionName("GetClient")]
    [ProducesResponseType(typeof(ClientRequest), 200)]
    [ProducesResponseType(typeof(void), 400)]
    [ProducesResponseType(typeof(void), 404)]
    public async Task<IActionResult> GetClient(ClientRequest clientRequest)
    {
        // Your block of code
    }
}
[/csharp]

 

Dependency Injection using StructureMap in ASP.NET Core

As dependency injection is built into .NET Core, you don't need StructureMap here. If your old code referred to StructureMapDependencyResolver and StructureMapScope, these have been deprecated and cannot be used, since .NET Core doesn't support System.Web.Http and System.Web.Http.Dependencies.

You can use IServiceCollection to register all the required dependencies in the Startup.cs file:

 

[csharp]
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IClientProvider, ClientProvider>();
    services.AddSingleton<IProjectProvider, ProjectProvider>();

    // Only needed if you still want StructureMap on top of the built-in container
    // (requires the StructureMap.Microsoft.DependencyInjection package)
    var container = new Container();
    container.Configure(config =>
    {
        config.Populate(services);
    });
}
[/csharp]

 

RestSharp library in .NET Core

If you are using the RestSharp library with .NET Core, you need to install the RestSharp.NetCore NuGet package and then create an extension for RestClient:

 

[csharp]
public static class RestClientExtensions
{
    public static async Task<RestResponse> ExecuteAsync(this RestClient client, RestRequest request)
    {
        TaskCompletionSource<IRestResponse> taskCompletion = new TaskCompletionSource<IRestResponse>();
        RestRequestAsyncHandle handle = client.ExecuteAsync(request, r => taskCompletion.SetResult(r));
        return (RestResponse)(await taskCompletion.Task);
    }
}
[/csharp]

Change the implementation of RestSharp in your helper class or wherever you are using it.

 

[csharp]
public static async Task<IRestResponse> ExecuteAsync(string apiUrl, string request)
{
    try
    {
        var client = new RestClient(apiUrl);
        var apiRequest = new RestRequest(Method.POST);
        apiRequest.AddHeader("Content-Type", "application/json");
        apiRequest.AddHeader("Accept", "application/json");
        apiRequest.RequestFormat = DataFormat.Json;
        client.Timeout = 120000;
        apiRequest.AddParameter("application/json", request, ParameterType.RequestBody);
        return await client.ExecuteAsync(apiRequest);
    }
    catch (Exception)
    {
        // Log here if required, then rethrow to preserve the stack trace
        throw;
    }
}
[/csharp]

 

If you are using the StatusCode and Content properties of the response object without awaiting the task, change response.StatusCode == HttpStatusCode.OK to response.Result.StatusCode == HttpStatusCode.OK and response.Content to response.Result.Content. If you await the call instead, the response can be used directly.
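For example, assuming a helper like the ExecuteAsync method above, the two calling styles look like this (a sketch, not a complete program):

[csharp]
// Awaited: the Task is unwrapped by await, so no .Result is needed
IRestResponse response = await ExecuteAsync(apiUrl, request);
if (response.StatusCode == HttpStatusCode.OK)
{
    string content = response.Content;
}

// Not awaited: the Task<IRestResponse> must be unwrapped through .Result
// (note that .Result blocks the calling thread)
Task<IRestResponse> responseTask = ExecuteAsync(apiUrl, request);
if (responseTask.Result.StatusCode == HttpStatusCode.OK)
{
    string rawContent = responseTask.Result.Content;
}
[/csharp]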

 

DataAnnotations in .NET Core

The System.ComponentModel.DataAnnotations assembly has been replaced by the System.ComponentModel.Annotations NuGet package; the namespace itself stays the same. Add it from NuGet.
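For example (a minimal, self-contained sketch; the Client model and its rules are hypothetical), the familiar validation attributes keep working once the package is referenced:

[csharp]
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Client
{
    [Required]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(1, 120)]
    public int Age { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var client = new Client { Name = null, Age = 150 };
        var results = new List<ValidationResult>();

        // Validate every annotated property on the instance
        bool isValid = Validator.TryValidateObject(
            client, new ValidationContext(client), results, validateAllProperties: true);

        Console.WriteLine(isValid);       // False
        Console.WriteLine(results.Count); // 2: Name is required, Age is out of range
    }
}
[/csharp]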

 

A few more unsupported libraries and best practices in .NET Core

Install the Microsoft.AspNetCore.Http.Abstractions NuGet package to use StatusCodes in your action methods:

 

[csharp]
[HttpGet]
[ActionName("GetAllClients")]
public async Task<IActionResult> GetAllClients()
{
    try
    {
        var response = await _mediator.Send(new GetAllClientsQuery());
        return StatusCode(response.ResponseStatusCode, response.Value);
    }
    catch (Exception ex)
    {
        return StatusCode(StatusCodes.Status500InternalServerError, ex);
    }
}
[/csharp]

 

Remove JavaScriptSerializer(), since System.Web.Script.Serialization under System.Web.Extensions is no longer supported in .NET Core.
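As a replacement, you can use Newtonsoft.Json (the default serializer in ASP.NET Core 2.x) or, from .NET Core 3.0 onwards, the built-in System.Text.Json. A minimal sketch with System.Text.Json (the Employee type here is hypothetical):

[csharp]
using System;
using System.Text.Json;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var employee = new Employee { Id = 1, Name = "John" };

        // Serialize, where JavaScriptSerializer.Serialize() was used before
        string json = JsonSerializer.Serialize(employee);

        // Deserialize, where JavaScriptSerializer.Deserialize<T>() was used before
        Employee roundTripped = JsonSerializer.Deserialize<Employee>(json);

        Console.WriteLine(json);              // {"Id":1,"Name":"John"}
        Console.WriteLine(roundTripped.Name); // John
    }
}
[/csharp]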

Adding WCF services built with previous versions of the .NET Framework is not supported. You need to migrate the services to .NET Core.

Also, if you try to add the service reference into a .NET Core 2.1 application, you might encounter an error stating "An unknown error occurred while invoking the service metadata component. Failed to generate service reference." Retarget the application from .NET Core 2.1 to 2.0 and it will work.

 

I have covered only a few of the issues and workarounds encountered during a .NET Core migration. Obviously, this does not cover everything, but hopefully these pointers will help you at some stage.

I plan to cover various methodologies and in-depth programming strategies with .NET Core in future posts, so stay tuned.