Migration from .NET Framework to ASP.NET Core

aspnetcoremigration

Migration is a major decision, and it does not always follow a green-field path. There can be numerous reasons for deciding to migrate an application codebase from its legacy form to a future state, to perform a database migration, or even to move infrastructure from on-premises to the cloud. Any migration can turn into a major disaster if proper strategies are not defined and it lacks efficient guidelines and resource management.

Anyway, this post focuses on migrating the sample ASP.NET Web API solution that we built earlier to ASP.NET Core. Since the approach is a migration with a few basic core changes, I will not treat it as a green-field project. Every migration differs, with its own set of issues and options to handle them.

Why migrate to .NET Core

This is definitely a fair question for us developers, knowing that the existing codebase suffices our business needs and is quite stable. It's like asking someone who is looking to upgrade from a sedan to an SUV: the answer you will likely get is more flexibility, power and adaptable features. The same reasoning applies when you want to migrate an application built on the .NET Framework to .NET Core.

A few of the benefits that come to my mind for this migration are better testability of the application, the ability to run on any platform rather than being restricted to Windows, easier cloud deployment with configuration management, the ability to host anywhere, and a lightweight, high-performance runtime. Having said that, let us look into the migration itself.

Is there any migration tool

Honestly, as far as I know there is no specific migration tool that can migrate the whole application to .NET Core; this has to follow a step-by-step, strategic approach. However, my initial step would be to use the .NET Portability Analyzer, which can identify how compatible the codebase is with the target platform. You can either download the tool or add it from the Visual Studio extensions gallery. Let's look into that first and see what the analyzer gives us.

We will first search for the .NET Portability Analyzer in the Visual Studio extensions gallery and download it.

image

Once installed, right-click the solution and select Portability Analyzer Settings, which opens the configuration tool for the analyzer. Here you can define the output directory where the analysis report will be generated, the output format of the report (XLS, HTML or JSON), and the target platforms to which you want the codebase migrated, which for us are .NET Core 2.1 and .NET Standard.

Now, as a best practice, all class libraries should be converted to .NET Standard instead of .NET Core. The reason is that many external or third-party libraries like Newtonsoft.Json target .NET Standard rather than .NET Core. Moreover, .NET Standard sits below .NET Core, so a .NET Standard library is compatible with any .NET Core version.

image

Since I have only one API project in my solution and I want to migrate it to ASP.NET Core, I will select ASP.NET Core as my target framework for the portability analysis.

image

Once the settings are done, right-click the API project and select Analyze Project Portability to start the analysis.

image

Once the analysis is complete, you can review the output generated in HTML or Excel format, as shown here.

image

Based on the report, the most notable assemblies that .NET Core does not support are –

System.Configuration.ConfigurationManager

System.Net.Http.Formatting.MediaTypeFormatter

System.Net.Http

System.Web.Http

System.Web.Mvc

System.Web.Routing

Since this report was generated for a Web API project, only a few incompatible assemblies are listed. If you have a complex project with several class libraries and third-party libraries, you might see a different result altogether. At the end of this post, I will list some of the incompatible libraries that I encountered in a real-world project conversion, along with their alternatives in .NET Core.

Starting the migration

The portability analysis report provides only a few details to get started; it will not cover everything you might encounter during the compilation or execution phase of the converted application.

Since we don't have any migration utility available yet, our first step is to create a fresh solution with the same number of projects as the .NET Framework application, but targeting ASP.NET Core 2.1 and the highest available .NET Standard version.

What if I have a complex structure with multiple class libraries along with the API project

Well, in this case you need to follow a step-by-step conversion process: create the corresponding class libraries and dependencies targeting .NET Standard (highest version), copy the codebase from the original solution into these libraries, and fix all the compatibility issues for .NET Standard. Once this phase is complete, you can add references to these dependencies in your new ASP.NET Core 2.1 project; a minimal project-file sketch follows below.
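
For reference, the project file of a converted class library is radically simpler than its .NET Framework counterpart. A minimal sketch, assuming netstandard2.0 as the target, looks like this:

[csharp]
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

</Project>
[/csharp]

NuGet references move into the same file as PackageReference items, so the old packages.config file can be dropped.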

Structural differences between the API projects

There are significant structural differences between an API project developed with the .NET Framework and one developed with .NET Core.

image                      image

Let us identify some of the significant differences between these two applications.

Startup.cs – This class defines the request handling pipeline and services that need to be configured

Typically in ASP.NET MVC, we have the startup class in App_Start, which gets triggered when the application is launched and initializes the pipeline. It is only required if you are handling Katana/OWIN functionality for the MVC or Web API app, and hence it is optional. In ASP.NET Core, however, this class is mandatory and is generated by default.

If you look into this class, there are three major components –

  1. The Configure method creates the application's request processing pipeline. IApplicationBuilder, used to configure the request pipeline, and IHostingEnvironment, which provides web hosting environment information, are injected here. If you have Swagger implemented, this is the place where you configure the Swagger endpoint.
  2. The ConfigureServices method configures the application services. It receives an IServiceCollection, which specifies the contracts for service descriptors. IServiceCollection lives in the Microsoft.Extensions.DependencyInjection namespace and lets services be resolved through the built-in dependency injection container.
  3. IConfiguration, supplied through constructor injection of the Startup class, is used to read the configuration properties represented as key/value pairs.

 

[csharp]
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton<IConfigurationProvider<Employee>, EmployeeProvider>();
        services.AddSingleton<IConfigurationProvider<Project>, ProjectProvider>();
        services.AddSingleton<IConfigurationProvider<Department>, DepartmentProvider>();
        services.AddSingleton<IConfigurationProvider<Client>, ClientProvider>();
        services.AddSingleton<IConfigurationProvider<Skills>, SkillProvider>();

        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Swashbuckle.AspNetCore.Swagger.Info { Title = "EmployeeManagementApi", Version = "v1" });
            c.IncludeXmlComments(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "EmployeeManagement.Api.xml"));
            c.ResolveConflictingActions(apiDescription => apiDescription.First());
            c.DescribeAllEnumsAsStrings();
        });
        services.AddMediatR(typeof(Startup));
        services.AddScoped<IMediator, Mediator>();
        services.AddMediatorHandlers(typeof(Startup).GetTypeInfo().Assembly);
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseSwagger();
        app.UseStaticFiles();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "Employee Management Api");
            c.RoutePrefix = string.Empty;
        });

        app.UseMvc();
    }
}
[/csharp]

Configuration files for ASP.NET Core

In ASP.NET MVC, we provide all our application configuration settings in the web.config file. In ASP.NET Core, however, we provide these settings in JSON format in the appsettings.json file, which is placed at the root of the API project.

[csharp]
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AppSettings": {
    "endpoint": "https://localhost:8081/",
    "authKey": "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
    "database": "empmanagement",
    "collection": "employee"
  }
}
[/csharp]

For me, the JSON file has only a few keys: the Cosmos DB endpoint, the authKey for Cosmos DB, and the database and collection key/value pairs.

Another important file to look into is launchSettings.json, which describes how and where the application runs. We can configure settings for various environments like Development, Staging and Production, along with information on how to run the application on local IIS Express or from an IIS server. This file is also in JSON format.

 

[csharp]
{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:1789",
      "sslPort": 0
    }
  },
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "EmployeeManagement.Api": {
      "commandName": "Project",
      "launchBrowser": true,
      "launchUrl": "api/values",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "applicationUrl": "http://localhost:5000"
    },
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}"
    }
  }
}
[/csharp]

Implementing Exception Handling in ASP.NET Core

System.Web.Mvc.HandleErrorAttribute, which handles any exception thrown by an action method in ASP.NET MVC, is not supported in ASP.NET Core.

In this case we can create a custom exception filter derived from the ExceptionFilterAttribute class in the Microsoft.AspNetCore.Mvc.Filters namespace, which runs when an exception is thrown. We can also have an ApplicationLogging class holding an instance of ILoggerFactory from the Microsoft.Extensions.Logging namespace, which supports API logging through third-party logging providers like NLog or log4net. The custom exception filter uses this logging class, which tells the application where to log.

[csharp]
public static class ApplicationLogging
{
    public static ILoggerFactory LoggerFactory { get; } = new LoggerFactory();
    public static ILogger CreateLogger<T>() => LoggerFactory.CreateLogger<T>();
}
[/csharp]

[csharp]
public class CustomExceptionFilterAttribute : ExceptionFilterAttribute
{
    ILogger Logger { get; } = ApplicationLogging.CreateLogger<CustomExceptionFilterAttribute>(); // tells where we log

    public override void OnException(ExceptionContext context)
    {
        using (Logger.BeginScope($"=>{nameof(OnException)}")) // tells which method we log from
        {
            Logger.LogInformation("Log Message"); // tells what we log
        }
    }
}
[/csharp]

You can then apply the custom exception filter attribute to your controllers.

[csharp]
[Produces("application/json")]
[Route("api/[controller]/[action]")]
[ApiController]
[CustomExceptionFilter]
public class SkillController : ControllerBase
{
    private readonly IConfigurationProvider<Skills> _provider;
    private readonly IMediator _mediator;
}
[/csharp]

You can also use Microsoft.IdentityModel.Logging for logging by installing the NuGet package and overriding the OnException() method in the custom exception filter class.

[powershell]
Install-Package Microsoft.IdentityModel.Logging -Version 5.3.0
[/powershell]

 

[csharp]
public override void OnException(ExceptionContext context)
{
    Microsoft.IdentityModel.Logging.LogHelper.LogExceptionMessage(context.Exception);
}
[/csharp]

I am not covering exception handling in depth here, as it deserves a separate post; this is just an insight into the issues you can encounter and the alternatives to fix them.

Using ConfigurationManager in ASP.NET Core

When we retrieve values from configuration files like Web.config or App.config, we generally use ConfigurationManager to read the appSettings section. However, System.Configuration.ConfigurationManager is not supported in .NET Core out of the box, so we cannot use it directly. The workaround here is to install the package

 

[powershell]
Install-Package System.Configuration.ConfigurationManager -Version 4.5.0
[/powershell]

In .NET Core, though, we are supposed to read values from the appsettings.json file instead of web.config or app.config. To do that, we can create a static helper class like ConfigurationResolver.

 

[csharp]
public static class ConfigurationResolver
{
    public static IConfiguration Configuration()
    {
        string basePath = AppContext.BaseDirectory;
        string environmentName = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
        var configuration = new ConfigurationBuilder()
            .SetBasePath(basePath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{environmentName}.json", optional: true, reloadOnChange: true)
            .Build();
        return configuration;
    }
}
[/csharp]

And then you can use the helper class like this

 

[csharp]
private static IConfiguration Configuration;

public static void Main()
{
    Configuration = ConfigurationResolver.Configuration();
    DatabaseId = Configuration.GetSection("AppSettings").GetSection("database").Value;
}
[/csharp]

 

Unavailability of System.Web.Http in ASP.NET Core

The absence of System.Web.Http in .NET Core causes a lot of issues when most of the codebase uses libraries and references belonging to this namespace.

For example, ApiParameterDescription and ApiDescription belong to System.Web.Http.Description and provide metadata describing the inputs of an API.

To get equivalent functionality, we need to install the ApiExplorer package

 

[powershell]
Install-Package Microsoft.AspNetCore.Mvc.ApiExplorer -Version 2.1.2
[/powershell]

Once the package is installed, we can use most of the equivalent functionality, as in the sketch below.
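
For instance, here is a hedged sketch of enumerating the registered actions through ApiExplorer; the controller name and the endpoint shape are illustrative, not part of the original solution:

[csharp]
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ApiExplorer;

[Route("api/[controller]")]
public class ApiDocsController : ControllerBase
{
    private readonly IApiDescriptionGroupCollectionProvider _provider;

    public ApiDocsController(IApiDescriptionGroupCollectionProvider provider)
    {
        _provider = provider;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // Flatten all discovered actions into HTTP method/route pairs.
        var endpoints = _provider.ApiDescriptionGroups.Items
            .SelectMany(group => group.Items)
            .Select(api => new { api.HttpMethod, api.RelativePath });

        return Ok(endpoints);
    }
}
[/csharp]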

 

Defining routes in ASP.NET Core

Configuring routes using MapHttpRoute is not supported in .NET Core. You can define the default routes in the Startup.cs file.

 

[csharp]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseMvc(routes =>
        {
            // New route
            routes.MapRoute(
                name: "about-route",
                template: "about",
                defaults: new { controller = "Home", action = "About" });

            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
[/csharp]

 

System.Web.Mvc in ASP.NET Core

System.Web.Mvc.Controller is not supported in .NET Core. However, you can install the Microsoft.AspNetCore.Mvc package as an alternative, with controllers deriving from its Controller or ControllerBase classes; see the sketch after the install command.

 

[powershell]
Install-Package Microsoft.AspNetCore.Mvc -Version 2.1.2
[/powershell]
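
A minimal sketch of a controller ported to the new base class (the controller and values are illustrative):

[csharp]
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ValuesController : ControllerBase
{
    // GET api/values
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(new[] { "value1", "value2" });
    }
}
[/csharp]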

 

Enabling Swagger capabilities in ASP.NET Core

Swagger is an elegant way to provide API documentation. For that you need to install the Swashbuckle.AspNetCore package. Once installed, update your Startup.cs file to add Swagger generation to the service collection and expose the Swagger endpoint.

 

[csharp]
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "SampleApi", Version = "v1" });
            c.IncludeXmlComments(GetXmlCommentsPath());
            c.ResolveConflictingActions(apiDescriptions => apiDescriptions.First());
            c.DescribeAllEnumsAsStrings();
        });
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseSwagger();
        app.UseStaticFiles();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "Sample API V1");
            c.RoutePrefix = string.Empty;
        });

        app.UseMvc();
    }

    private string GetXmlCommentsPath()
    {
        return System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Sample.Api.xml");
    }
}
[/csharp]

Change your launchSettings.json file to set the launchUrl to index.html, which will open the Swagger endpoint.

 

[csharp]
"profiles": {
  "IIS Express": {
    "commandName": "IISExpress",
    "launchBrowser": true,
    "launchUrl": "index.html",
    "environmentVariables": {
      "ASPNETCORE_ENVIRONMENT": "Development"
    }
  }
}
[/csharp]

In your controller and action methods, you can add the following attributes.

 

[csharp]
[Produces("application/json")]
[Route("api/[controller]/[action]")]
public class ClientController : Controller
{
    // Your block of code
}

[HttpPost()]
[ActionName("GetClient")]
[ProducesResponseType(typeof(ClientRequest), 200)]
[ProducesResponseType(typeof(void), 400)]
[ProducesResponseType(typeof(void), 404)]
public async Task<IActionResult> GetClient(ClientRequest clientRequest)
{
    // Your block of code
}
[/csharp]

 

Dependency Injection using StructureMap in ASP.NET Core

As dependency injection is built into .NET Core, you don't strictly need StructureMap here. If your old code referred to StructureMapDependencyResolver and StructureMapScope, these can no longer be used, since .NET Core doesn't support System.Web.Http and System.Web.Http.Dependencies.

You can use IServiceCollection to register all the required dependencies in the Startup.cs file. If you still want StructureMap as the container, it can populate itself from the same service collection.

 

[csharp]
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IClientProvider, ClientProvider>();
    services.AddSingleton<IProjectProvider, ProjectProvider>();

    var container = new Container();
    container.Configure(config =>
    {
        config.Populate(services);
    });
}
[/csharp]

 

RestSharp library in .NET Core

If you are using the RestSharp library, for .NET Core you need to install the NuGet package RestSharp.NetCore and then create an extension for RestClient.

 

[csharp]
public static class RestClientExtensions
{
    public static async Task<RestResponse> ExecuteAsync(this RestClient client, RestRequest request)
    {
        var taskCompletion = new TaskCompletionSource<IRestResponse>();
        RestRequestAsyncHandle handle = client.ExecuteAsync(request, r => taskCompletion.SetResult(r));
        return (RestResponse)(await taskCompletion.Task);
    }
}
[/csharp]

Change the implementation of RestSharp in your helper class or wherever you are using it.

 

[csharp]
public static async Task<IRestResponse> ExecuteAsync(string apiUrl, string request)
{
    try
    {
        var client = new RestClient(apiUrl);
        var apiRequest = new RestRequest(Method.POST);
        apiRequest.AddHeader("Content-Type", "application/json");
        apiRequest.AddHeader("Accept", "application/json");
        apiRequest.RequestFormat = DataFormat.Json;
        client.Timeout = 120000;
        apiRequest.AddParameter("application/json", request, ParameterType.RequestBody);
        return await client.ExecuteAsync(apiRequest);
    }
    catch (Exception)
    {
        throw;
    }
}
[/csharp]

 

If you are using the StatusCode and Content of the response object held as an un-awaited Task, change response.StatusCode == HttpStatusCode.OK to response.Result.StatusCode == HttpStatusCode.OK and response.Content to response.Result.Content, as in the sketch below.
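
Here is a hedged before/after sketch; RestApiHelper is a hypothetical caller of the ExecuteAsync helper shown above:

[csharp]
// Holding the Task<IRestResponse> without awaiting it:
Task<IRestResponse> response = RestApiHelper.ExecuteAsync(apiUrl, requestBody); // hypothetical helper

// Before: response.StatusCode == HttpStatusCode.OK
// After:
if (response.Result.StatusCode == HttpStatusCode.OK)
{
    string content = response.Result.Content; // was response.Content
}
[/csharp]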

 

DataAnnotations in .NET Core

The System.ComponentModel.DataAnnotations assembly has been replaced by the System.ComponentModel.Annotations NuGet package; the namespace you use in code remains System.ComponentModel.DataAnnotations. Add the package from NuGet, and the attributes then work as before, as shown below.
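
A short sketch, assuming an illustrative request model, showing that the attributes work unchanged once the package is referenced:

[csharp]
using System.ComponentModel.DataAnnotations;

public class ClientRequest
{
    [Required]
    public string ClientId { get; set; }

    [StringLength(100)]
    public string ClientName { get; set; }
}
[/csharp]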

 

Few more unsupported libraries and best practices in .NET Core

Install the NuGet package Microsoft.AspNetCore.Http.Abstractions to use the StatusCodes class in your action methods.

 

[csharp]
[HttpGet]
[ActionName("GetAllClients")]
public async Task<IActionResult> GetAllClients()
{
    try
    {
        var response = await _mediator.Send(new GetAllClientsQuery());
        return StatusCode(response.ResponseStatusCode, response.Value);
    }
    catch (Exception ex)
    {
        return StatusCode(StatusCodes.Status500InternalServerError, ex);
    }
}
[/csharp]

 

Remove JavaScriptSerializer(), since System.Web.Script.Serialization under System.Web.Extensions is no longer supported in .NET Core. A common replacement is shown below.
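
A hedged sketch of swapping JavaScriptSerializer for Newtonsoft.Json (the Employee model is from our solution; the usage is illustrative):

[csharp]
using Newtonsoft.Json;

// Before (.NET Framework):
// var serializer = new JavaScriptSerializer();
// string json = serializer.Serialize(employee);

// After (.NET Core):
string json = JsonConvert.SerializeObject(employee);
Employee roundTripped = JsonConvert.DeserializeObject<Employee>(json);
[/csharp]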

Adding WCF services built with previous versions of the .NET Framework is not supported; you need to port the services to .NET Core.

Also, if you try to add a service reference in a .NET Core 2.1 application, you might encounter an issue stating "An unknown error occurred while invoking the service metadata component. Failed to generate service reference." Retarget the application from .NET Core 2.1 to 2.0 and it will work; an alternative tool is sketched below.
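
As an aside, if you only need to consume an existing WCF service from a .NET Core client, Microsoft's dotnet-svcutil global tool can generate a compatible client proxy; the service URL below is purely illustrative:

[powershell]
dotnet tool install --global dotnet-svcutil
dotnet-svcutil https://example.com/EmployeeService.svc?wsdl
[/powershell]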

 

I have covered only a few of the issues and workarounds encountered during a .NET Core migration. Obviously this does not cover everything, but hopefully these pointers will help you at some stage.

I plan to cover various methodologies and in-depth programming strategies with .NET Core in future posts, so stay tuned.

Introduction to Azure Container Registry and Kubernetes Service

ACR-AKS

In one of the posts of my .NET migration strategy series, we went through the exercise of adding Docker and container orchestration support to our ASP.NET Core API solution. In this post we will publish the Docker images to Azure Container Registry (ACR), then pull the images from ACR and deploy them to container instances using Azure Kubernetes Service (AKS). Before we go further, let us have a general overview of what ACR and AKS are.

A bit of knowledge on ACR and AKS

If you are new to ACR and AKS like me, then this post will most likely help you get started. However, I will not go deep into the inner workings of these services and will cover only the overview and the essential concepts associated with this post.

What is Azure Container Registry (ACR)

ACR allows you to build, store and manage images for all types of container deployments.

ACR has four key concepts which we need to know –

Registry – A stateless, highly scalable server-side application in Azure that stores Docker images and provides a distributed pipeline for them

Repository – A group of one or more container images in the registry

Image – A read-only snapshot of a Docker container that is pushed to the registry

Container – A package containing the application and its dependencies, wrapped in a complete file system that includes the code, runtime, system tools and libraries

What is Azure Kubernetes Service (AKS)

AKS is a managed Kubernetes offering from Microsoft Azure, hosted as a service, that is responsible for managing containerized applications.

Azure Monitor plays a major role here, collecting memory and processor metrics from containers and nodes in order to determine container health.

I recommend going through the documentation of ACR and AKS for more in-depth concepts.

Solution Design

Primary Objective

Our primary objective is to use the capabilities of Azure Container Registry and Azure Kubernetes Services to deploy and publish docker images in Azure.

Having said that, we need to design our solution in such a way that the following tasks can be accomplished –

Phase 1: A source code repository like GitHub holding the source code along with the dockerfile and Docker Compose (.yaml) file. We have that already.

Phase 2: Create an Azure Container Registry (ACR) where the Docker images will be stored.

Phase 3: Create the Azure Kubernetes Service (AKS) that will manage the containers after the Docker images are published.

Phase 4: Publish the Docker images to the Azure Container Registry (ACR).

Phase 5: Deploy the containers to a multi-container AKS cluster application.

Phase 6: Automate the above steps: trigger the creation of the Docker images, publish them to ACR and then deploy to the AKS cluster.

Architecture

image

We have already completed Phase #1, using GitHub as our source control. Next is Phase #2, where we are going to create an Azure Container Registry.

Phase #2 – Create Azure Container Registry (ACR)

ACR can be created through the portal, the Azure CLI or Azure PowerShell. Here I am going to use the Azure CLI to create the ACR.

Step 1: Download and install Azure CLI from here.

Step 2: Open Windows PowerShell ISE and use the command az login, which will redirect you to log in to your Azure account.

Step 3: Once you have successfully logged in, create an Azure Container Registry Instance to your resource group using the below command.
[powershell]
az acr create --resource-group netcoremigration --name acrnetmigration --sku Basic
[/powershell]
image

Step 4:  Verify that the Container Registry has been successfully created in the portal.

image

Simple, isn't it?

Phase #3 – Create Azure Kubernetes Service (AKS)

Now let us create the AKS cluster in Azure. We will do this using the portal.

Step 1: Login to https://portal.azure.com and search for Kubernetes Service in the Azure marketplace.

image

Step 2: Provide the required information, like the Resource group (use the same one that holds all your resources for this post), Region (some machine sizes are not available in every region, hence I selected East US, which is closer to me) and DNS name prefix (this can be the same as the cluster name). Select the number of nodes you want; I have kept it at 1. Click Review and Create.

image

Step 3: Next, select Authentication, where you can create a new service principal and optionally enable RBAC for Kubernetes. I am not enabling RBAC.

image

Step 4: Configure the networking section. Set HTTP application routing to Yes to integrate an ingress controller with automatic public DNS, and set Network configuration to Basic.

image

Step 5: In the Monitoring section, keep Enable container monitoring set to Yes and select an existing or new Log Analytics workspace.

image

Step 6: Click Create. You can also download the template for automation purposes in the future. Wait for the deployment to complete.

image

Awesome. By now we have completed our exercise of creating the ACR and AKS and having them ready. Let us go ahead with Phase #4 and Phase #5.

Phase #4 – Building and pushing docker images to ACR

Let us walk through the steps to publish Docker images to ACR.

Step 1: Run Docker Toolbox

First of all, let me run Docker Toolbox in order to make the Docker environment available for executing Docker commands. You can review my introductory Docker post, which gives some details about Docker and how to create images and containers.

Step 2: Verify the availability of docker images

Run the following command in the command prompt to view the list of available images.

docker images

If you are creating the image for the first time, just follow this post which will give you step-by-step instructions of creating docker images.

image

Step 3: Tag the latest image with ACR

Run the following Docker command to tag the image with the fully qualified path to the registry.

docker tag {Image Name or Image ID} {Azure Container Registry/Image Name}

For example:

docker tag 901561ea73c5 acrnetmigration.azurecr.io/employeemanagementapi

image

Step 4: Push the image to ACR

In order to push the Docker images to the Azure registry, we need to ensure that we are authorized to execute this step.

Run the Azure CLI command below with your container registry name to authenticate to your Azure account.

[powershell]
az acr login --name {container registry name}
[/powershell]
image

Now run the Docker command below, which starts publishing the latest Docker image to ACR.

[powershell]
docker push {azure container registry host}/{image name}
[/powershell]

image

Step 5: Verify the docker image has been successfully uploaded

Log into the Azure portal and open your container registry. Click Repositories to find the Docker image that has been uploaded to the registry.

image

You can also verify that the image has been uploaded to ACR by running the command below.

[powershell]
az acr repository list --name {container registry name} --output table
[/powershell]
image

With this, our task of uploading the Docker image to ACR is done. Our next step is to deploy the application to AKS.

Phase #5 – Deploy application to AKS

In this phase we are going to deploy our application to AKS. We have already created Kubernetes Service Cluster in Phase #3.

Step 1: Install Kubernetes CLI and set environment variables

Run the command below to install kubectl (the Kubernetes command-line client).

[powershell]
az aks install-cli
[/powershell]
image

Based on the above message, we need to add the path where kubectl was downloaded and installed to the environment variables.

image

image

Run the command below, which gets the credentials required to connect to the AKS cluster and updates the config file in your .kube directory.

[powershell]
az aks get-credentials --resource-group {resource group name} --name {kubernetes cluster name}
[/powershell]

Here netcoremigration is my resource group name where my cluster netcoremigrationakscluster has been installed.

image

image

Open Windows PowerShell and run the command below to verify the connection to the cluster.

[powershell]
kubectl get nodes
[/powershell]

image

Step 2: Create or validate Kubernetes Manifest file

A Kubernetes manifest file is a YAML file that defines the deployment and management of the application deployed to the cluster. It contains information about images and containers, network ports to publish, and so on. Below is what the manifest file (employeemanagementportal.yaml) looks like. This file accompanies the solution here.

image

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: employeemanagementapi
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: employeemanagementapi
    spec:
      containers:
      - name: employeemanagementapi
        image: acrnetmigration.azurecr.io/employeemanagementapi
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regsecret
---
apiVersion: v1
kind: Service
metadata:
  name: employeemanagementapi
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  selector:
    app: employeemanagementapi
  type: LoadBalancer

As you can see, the container is named employeemanagementapi, and image: acrnetmigration.azurecr.io/employeemanagementapi is the image published in ACR. The manifest deploys the application to the AKS cluster with container port 8080 exposed behind a load-balanced service on port 80.

Verify that the ACR server name matches, in this case acrnetmigration.azurecr.io.

Run the following command in the Azure CLI or PowerShell to get the server name.

[powershell]
az acr list --resource-group {resource group name} --query "[].{acrLoginServer:loginServer}" --output table
[/powershell]

image

Step 3: Deploy the application using the manifest file

In PowerShell, run the following command to create the Kubernetes objects in the cluster as specified in the file

[powershell]
kubectl apply -f {kubernetes manifest YAML file}
[/powershell]

For example:
[powershell]
kubectl apply -f employeemanagementportal.yaml
[/powershell]

image

Step 4: Verify the application

This is the final step where you would like to check that the application is up and running from AKS.

First check the status of the app by running the following command in PowerShell

[powershell]
kubectl get service {application name} --watch
[/powershell]

image

Browse to the external IP produced by the command, which in my case is 40.87.94.16. If everything looks good, you should be able to view the app in the browser.

Some issues found and resolutions

Unfortunately, I was unable to reach the app. So I executed a few steps that I would like to share here, which might help you if you are facing similar issues.

Run the following command in PowerShell to see if you are able to pull the container image.

[powershell]
kubectl get pods
[/powershell]

image

The status of my pod is ImagePullBackOff, which means something is wrong. The Microsoft documentation says it might be due to invalid registry permissions or a wrong container image.

The following links, Authenticate with Azure Container Registry from Azure Kubernetes Service and 10 Most Common Reasons Kubernetes Deployments Fail, might be really handy for debugging this issue.

Ok, so now my next step is to find out what exactly is going wrong.

I used the command below in PowerShell to review what is causing the issue. Here employeemanagementapi-59484bdc4-2z745 is the pod name I got from the previous command.

[powershell]
kubectl describe pod employeemanagementapi-59484bdc4-2z745
[/powershell]

image

Checking the event log, I can see that pulling the image is failing.

image

The next step is to check whether I can pull the image locally from ACR. I ran the command below, and it gives an authentication error.

[powershell]
docker pull acrnetmigration.azurecr.io/employeemanagementapi
[/powershell]

image

I authenticated against ACR and found that I could then download the image, which suggests that I need to grant AKS permission to pull the image from ACR. This is just my assumption, and I might be wrong.

image

Run the following command to get the Id of the service principal configured for AKS

[powershell]
az aks show --resource-group netcoremigration --name netcoremigrationakscluster --query "servicePrincipalProfile.clientId" --output tsv
[/powershell]

image

Run the following command to get ACR registry resource id

[powershell]
az acr show --name acrnetmigration --resource-group netcoremigration --query "id" --output tsv
[/powershell]

image

Use the service principal ID and the ACR registry resource id in the command below to create the role assignment.

[powershell]
az role assignment create --assignee 8e58a2a4-f629-4a83-975f-8d32339d7a23 --role Reader --scope /subscriptions/94e96215-c87d-4548-819f-8c29436c44db/resourceGroups/netcoremigration/providers/Microsoft.ContainerRegistry/registries/acrnetmigration
[/powershell]

image

Create a Reader role assignment with a scope of the ACR resource by running the following command. Here netcoreadmin is my service principal name.

[powershell]
az ad sp create-for-rbac --name netcoreadmin --role Reader --scopes /subscriptions/94e96215-c87d-4548-819f-8c29436c44db/resourceGroups/netcoremigration/providers/Microsoft.ContainerRegistry/registries/acrnetmigration --query password --output tsv
[/powershell]

image

Get the service principal client id by running the below command.

[powershell]
az ad sp show --id http://netcoreadmin --query appId --output tsv
[/powershell]

image

Use the following command in PowerShell, where efaa4271-5783-4044-ad41-02e0b9db8220 is the service principal client id and 5f828976-ed29-435f-ad86-efbc40b74fe8 is the service principal password generated by the statements above. acrauth is the Kubernetes secret being created; you can give it any name you want.

[powershell]
kubectl create secret docker-registry acrauth --docker-server acrnetmigration.azurecr.io --docker-username efaa4271-5783-4044-ad41-02e0b9db8220 --docker-password 5f828976-ed29-435f-ad86-efbc40b74fe8
[/powershell]

image

Update the Kubernetes manifest YAML file so that the imagePullSecrets name is acrauth.

image

After the manifest modification, run the apply command again, which will update the Kubernetes objects that were created.

[powershell]
kubectl apply -f employeemanagementportal.yaml
[/powershell]

image

Run the command below again to check the status of the pod. It should now be in the Running state.

[powershell]
kubectl get pods
[/powershell]

image

Take the name of the pod and run the command to ensure that the container is running.

[powershell]
kubectl describe pod employeemanagementapi-65cb5764f6-nkhb8
[/powershell]

image

Once verified, run the command below to get the external IP address that you can browse to reach the app. Keep it running.

[powershell]
kubectl get service employeemanagementapi --watch
[/powershell]

image

This should solve the problem.

Well, that's it for now. In our next post we will cover Phase #6, which deals with implementing proper CI/CD processes and automating the deployment using Jenkins and Azure DevOps.

Building Docker images for ASP.NET Core Application

docker_dotnetcore

This is one of the crucial parts of my DotNet Core Migration strategy series, which I have been working on for a long time. In the initial posts of this series, we were introduced to Cosmos DB, created a sample Web API in .NET Framework 4.6.1, and then migrated the same API to .NET Core 2.1. Now we would like to deploy the Core app.

We all know that every solution or design has its own deployment strategy, which differs based on requirements and technical debt. My strategy is purely for educational purposes and might not be applicable to your case. However, you can take it as a reference if you want to implement containerization using Docker.

Ideally, I want to deploy my Core app to Azure, and I know we could simply do that by deploying the API as an Azure API service. However, my intention for this series is to deploy the API to Azure Container Instances as Docker images. I want to do this because I am new to containerization and I know there are many benefits to it. If you would like to know what containerization is, I recommend checking out some of the good tutorials and online documentation; I will share some links at the bottom of this post for your reference.

Let’s get started.


Step 1: Assessing options for containerizing our ASP.NET Core app

Note that Visual Studio 2017 does provide built-in Docker and containerization support for your .NET Core app.

Docker support for either the Linux or the Windows platform can be enabled by right-clicking the .NET Core project and choosing Docker support from the context menu, as shown below.

vs_docker_support

This creates a dockerfile at the root of the application containing all the commands to be executed to create the Docker image. We will go through the details of the dockerfile a bit later.

 

VS 2017 also supports container orchestration using either Docker Compose or Service Fabric. For this post we are going to use Docker Compose, a YAML (a human-friendly data serialization standard) file that defines the configuration of services for running multi-container Docker applications. You can add orchestration support by right-clicking the project and selecting Container Orchestrator Support from the context menu.

image

 

However, I have decided not to use VS 2017 to generate the dockerfile or add container orchestration support, because I faced a few issues while doing so. Instead, I am going to create the dockerfile and docker-compose.yaml file separately, and I will also share some input on how these files work and where to place them in your solution.

 

Before that, keep in mind that in order to run Docker, you need to install Docker for either Windows or Mac.

Considering the prerequisites for installing Docker on my Windows machine, I do not meet the requirements, as Docker can only be installed on Windows 10 Professional or Enterprise edition, and unfortunately I have Windows 10 Home. If you are facing similar constraints, there is nothing to worry about, since we do have a solution here.


Step 2: My solution for running docker in Windows 10 Home

Docker Toolbox is a solution for Mac and Windows machines that don't meet the requirements for the Docker Community or Enterprise editions. It launches the Docker environment on incompatible versions of Windows and Mac. Here are the installers for Windows and Mac. The installer adds a Docker GUI named Kitematic, an Oracle VirtualBox and the Docker Quickstart terminal for running Docker commands.

image_thumb81

Before you start Docker Quickstart, ensure that virtualization is enabled in the BIOS, and don't forget to uncheck the Windows hypervisor feature in Windows Features (a PowerShell equivalent is shown after the screenshot below).

image_thumb41
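
If you prefer the command line, a hedged PowerShell equivalent (run as administrator; note that this disables the whole Hyper-V feature set) is:

[powershell]
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
[/powershell]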

The reason we need to do this is that if the hypervisor is enabled, you might encounter an error while running Docker Quickstart.

image_thumb101

After all these checks are done, restart the machine and then run Docker Quickstart.

image_thumb5

Once you get the interactive Docker shell, run docker run hello-world to see if Docker commands are working.

image

All good. Now our next step is to add the dockerfile and docker-compose.yaml to our solution.


Step 3: Adding docker files to solution

Alright, here I want to share some insights about where these two files should be placed in your solution. If you use the VS 2017 capability to add Docker support or container orchestration support, by default the dockerfile is placed inside the API project folder and the docker-compose.yaml in the solution folder. At a later stage I am going to use Jenkins to build the Docker images from my GitHub repo, and that layout creates issues when building and running the Docker images. Hence, you should ideally place the docker-compose.yaml in the root folder of your GitHub repo and the dockerfile in the solution folder.

image

Here is what the dockerfile looks like:

 
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app

COPY EmployeeManagement.Core/*.sln .
COPY EmployeeManagement.Core/EmployeeManagement.Api/*.csproj ./EmployeeManagement.Api/
COPY EmployeeManagement.Core/EmployeeManagement.Model/*.csproj ./EmployeeManagement.Model/
COPY EmployeeManagement.Core/EmployeeManagement.Provider/*.csproj ./EmployeeManagement.Provider/
COPY EmployeeManagement.Core/EmployeeManagement.Repository/*.csproj ./EmployeeManagement.Repository/
RUN dotnet restore

COPY EmployeeManagement.Core/EmployeeManagement.Api/. ./EmployeeManagement.Api/
COPY EmployeeManagement.Core/EmployeeManagement.Model/. ./EmployeeManagement.Model/
COPY EmployeeManagement.Core/EmployeeManagement.Provider/. ./EmployeeManagement.Provider/
COPY EmployeeManagement.Core/EmployeeManagement.Repository/. ./EmployeeManagement.Repository/
WORKDIR /app/EmployeeManagement.Api
RUN dotnet publish -c Release -o /app

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "EmployeeManagement.Api.dll"]

Let me explain what this file does.

FROM microsoft/dotnet:2.1-sdk AS build 

Start the build stage from the dotnet 2.1 SDK image, which is required to build .NET Core applications

WORKDIR /app  

Set the working directory against which the commands specified in the dockerfile, like RUN, COPY, ENTRYPOINT, CMD and ADD, are executed

COPY EmployeeManagement.Core/*.sln .
COPY EmployeeManagement.Core/EmployeeManagement.Api/*.csproj ./EmployeeManagement.Api/
COPY EmployeeManagement.Core/EmployeeManagement.Model/*.csproj ./EmployeeManagement.Model/
COPY EmployeeManagement.Core/EmployeeManagement.Provider/*.csproj ./EmployeeManagement.Provider/
COPY EmployeeManagement.Core/EmployeeManagement.Repository/*.csproj ./EmployeeManagement.Repository/

This copies the solution file and the .csproj files from the source into the current working directory

RUN dotnet restore

Run the dotnet CLI command to restore all the dependencies into the container file system.

COPY EmployeeManagement.Core/EmployeeManagement.Api/. ./EmployeeManagement.Api/
COPY EmployeeManagement.Core/EmployeeManagement.Model/. ./EmployeeManagement.Model/
COPY EmployeeManagement.Core/EmployeeManagement.Provider/. ./EmployeeManagement.Provider/
COPY EmployeeManagement.Core/EmployeeManagement.Repository/. ./EmployeeManagement.Repository/
WORKDIR /app/EmployeeManagement.Api

This copies the remaining project files and sets the working directory to the API project

RUN dotnet publish -c Release -o /app

This builds and publishes everything to /app using the Release configuration

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "EmployeeManagement.Api.dll"]

Here we use the aspnetcore-runtime base image from Microsoft to run the compiled app: we set the working directory, copy the compiled output into it, and then define the entry point, which instructs Docker how to start the app using the ASP.NET Core runtime.

Now we need to add the docker-compose.yaml file to our root folder. Here is what the YAML file looks like:

 

 

 

version: '3.4'

services:
  employeemanagement.api:
    image: employeemanagementapi
    container_name: acrnetmigration.azurecr.io
    build:
      context: .
      dockerfile: EmployeeManagement.Core/Dockerfile

Here we provide the path to the Dockerfile, the container name (which in our case matches the Azure Container Registry where the images will be stored) and the name of the image that will eventually be uploaded to the registry.

We are almost done with the Docker configuration. We must not forget to add a .dockerignore file to the root folder, which helps improve build performance by excluding files that are not required for the build.


.dockerignore
.env
.git
.gitignore
.vs
.vscode
docker-compose.yml
docker-compose.*.yml
*/bin
*/obj
*/.vs


Step 4: Run Docker Compose to build the image and container

In the previous steps we did two major things –

Dockerfile – Created the file that stores the definition of our app environment

docker-compose.yaml – Created the file that defines the services of the app that will run together in an isolated environment

Our next step is to run the Docker command that will compose, build and create the images and container, start the container and keep it running in the background. Open a command prompt in administrator mode and go to the directory where the docker-compose.yaml file is stored.

Run the Docker command docker-compose up -d

image

If everything goes well, you should see that the images and the container have been created.

 

In order to view the images created, you can use the Docker command docker images

image

 

In order to view the container created, use the docker command docker ps

image

 

Every time you run the docker-compose up -d command, it checks whether there are any existing containers; if the image has been changed or modified, it stops and recreates the container.


Step 5: Get the App/website running

To run the app, we need to execute the following Docker command: docker run -p 8888:80 employeemanagementapi. Here "employeemanagementapi" is the latest image.

image

 

Verify that you can browse the app by pasting the URL http://192.168.99.100:8888 into the browser window.

image


Few Tips and Tricks

Based on my experience performing this exercise, I faced a few issues that I would like to share here. If you are new to Docker, this might be a good starting point.

Ensure that your dockerfile and docker-compose.yaml are placed in the proper locations. If you have a complex solution with multiple projects, place the docker-compose.yaml file in the root folder and the dockerfile in the solution folder, not in an individual project folder.

Add .dockerignore file to the root folder too.

If you are using XML documentation for your API project, ensure that you have set the XML documentation file path for both DEBUG and RELEASE. When the files are published to build the image, if the XML documentation output file is not present, the API documentation will not work. This issue happens if you have missed setting up XML documentation for RELEASE mode (a project-file sketch follows the screenshot below).

image
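
For reference, a hedged sketch of the corresponding project-file setting; the output path shown is illustrative and depends on your target framework:

<PropertyGroup Condition="'$(Configuration)'=='Release'">
  <DocumentationFile>bin\Release\netcoreapp2.1\EmployeeManagement.Api.xml</DocumentationFile>
</PropertyGroup>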

 

You might encounter various issues while running the dockerfile, similar to the screenshots below. If you are new to Docker, remember to follow the format of the dockerfile code that I have shared in this post.

image

image

image

image

 

If you want to delete images and containers, use the Docker command docker rmi -f <image id>. You can also use docker image prune to remove unused images or docker container prune to remove stopped containers

image

 

To view the IP address of the container, use the Docker command docker inspect <container name or id>

image
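
A hedged shortcut, assuming the container runs on the default bridge network, extracts just the IP address with a Go-template filter:

[powershell]
docker inspect -f "{{ .NetworkSettings.IPAddress }}" {container name or id}
[/powershell]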

In our next post, we are going to provision a Jenkins resource that will be used to build our CI/CD pipeline.

Later in this series, we will learn how to publish Docker images to ACR (Azure Container Registry) and deploy them to container instances through Azure Kubernetes Service (AKS), both manually and through automation.

Introduction to Azure Cosmos DB

cosmos

In accordance with my series on .NET Core migration, this post is the initial phase, where I am going to demonstrate how Azure Cosmos DB works. It also serves as an introduction to Azure Cosmos DB even if you are not intending to perform any migration and would simply like to use Cosmos DB as part of your solution design.

 

Why Azure Cosmos DB

Apart from the various benefits that Azure Cosmos DB offers, which you can find here, my reasons for choosing Cosmos DB were the following.

  • It is a NoSQL database that supports a SQL-like query language and ACID transactions.
  • Following a PaaS model, it supports global distribution of the database in a much more scalable way.
  • It is fast, with minimal latency and 99.9% availability.
  • With automatic indexing of data, it provides access through APIs for SQL, Azure Table storage, MongoDB, etc.
  • It provides the database model in various formats like key-value store, graph DBMS and document store.
  • It is serverless and geo-distributed, which makes it an ideal candidate and an alternative to Redis Cache.

There are two ways you can approach the development stage with Cosmos DB.

  • Create a Cosmos DB resource through the Azure portal and use it throughout your development lifecycle.
  • Use the Cosmos DB emulator on-premises or locally throughout your development lifecycle, and once you are comfortable with the usage, create the Cosmos DB resource in Azure.

I prefer the on-premises Cosmos DB emulator for development, since I will use Cosmos DB heavily as my data storage and the consumption might incur some cost that I don't think is necessary during development. You can check the pricing details here. That said, I am going to demonstrate both how to create a Cosmos DB resource through the Azure portal and how to use the emulator for development.

 

Creating an Azure Cosmos DB in Portal

Search for Azure Cosmos DB in Azure Marketplace and select it to start the creation process.

image

In the DB account creation page, create a new resource group or select an existing one where the resource will live. Create an account name, which will be interpreted as {Account name}.documents.azure.com.

Select the type of API you want to use. Currently, it supports five API types – SQL, MongoDB, Gremlin, Cassandra and Azure Table storage. You can select the preferred one based on your requirements. For this post, and as a continuation of my DotNet Core Migration Strategy series, I am going to use the SQL API.

By default, Geo-Redundancy and Multi-region Writes are disabled. I will keep them like that for now since I don't need them here.

 

image

 

Next we need to set up the network for our Cosmos DB account. In this section, create a new VNet or use an existing one. I am going to create a new VNet here.

 

image

 

image

 

image

 

This section asks you to provide some tags that help you view consolidated billing. I am going to keep it empty for now.

image

 

Once everything is complete, the Cosmos DB Account will be created successfully.

image

 

You can review the overview of the deployment that has completed.

image

 

Now, if you select the Cosmos DB account, you might see an error message saying that it has failed to get the collection list. (You can compare a collection to a model like Order, Product, etc.) The same issue shows up when you open the Data Explorer, which says that retrieval of data is blocked due to the firewall configuration.

image

 

image

 

All you need to do is click the arrow in the Data Explorer that asks for review. This opens the firewall configuration of the Data Explorer. Check the checkbox that says Allow access from Azure Portal. That will fix both issues.

image

All good till now. Our Azure Cosmos DB in the portal is ready to use. Let's have a look at how to use the Azure Cosmos DB emulator.

 

Using Azure Cosmos DB Emulator

As I mentioned earlier, the emulator provides a local, on-premises environment that emulates Azure Cosmos DB for development purposes. I prefer to use the emulator during the development stage; for staging or production, I will switch to the Azure Cosmos DB account in the cloud. This keeps the development lifecycle cost effective.

You can download the emulator from the following link. Once the MSI installer is downloaded, proceed with the installation steps.

Upon successful installation, run the emulator, which opens the local instance of the service at https://localhost:8081/_explorer/index.html

image

In the Quickstart window, the URI and Primary Key fields are the crucial ones. The URI is used by the DocumentDB client to connect to Azure Cosmos DB, and the Primary Key is used to authenticate the request and establish a connection. These two values go in as key/value pairs in the AppSettings section of the application configuration or web.config file.

When you select the Explorer window, you will find options to create a new database and add a collection to it. Every time you want to add a collection to an existing database, just select the option Use existing Database id and add the collection to it.

image

image

 

Just for demonstration purposes, I would like to show what it looks like when you add a collection (a sample document follows below). After adding a new database and collections, you will see the database and collection created. When you select Documents within a specific collection, you can click New Document to insert a new record. The request here is in JSON format. Remember, the "id" field in the request always takes a string and identifies the record. You can also associate a unique key while creating the collection.
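
For illustration, a minimal document of the kind you might insert; every field besides id is hypothetical:

[csharp]
{
  "id": "1",
  "name": "John Doe",
  "department": "Engineering"
}
[/csharp]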

image

Clicking the record id, you can view the individual record. The SQL query used to display the records can be extended with additional filters based on your requirements.

image

However, we are going to write a utility to generate databases and collections dynamically on an as-needed basis. Let's go ahead with the next step: programming.

 

Programmatically access and perform CRUD operation to Cosmos DB

In order to programmatically create a database and collections in Cosmos DB using C#, we need to do the following steps (a combined sketch follows the list).

  • Add the Microsoft.Azure.DocumentDB NuGet package to the project.
  • Modify your Web.config file to add the Cosmos DB authentication key, the Cosmos DB endpoint and the database name

image

  • Create the DocumentClient which will take the endpoint and authentication key

image

  • Create the database

image

  • Create the Collection

image
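
The screenshots above come from the accompanying source code; a minimal combined sketch of those steps, reusing the database and collection names from earlier in this series (the class name here is illustrative), looks like this:

[csharp]
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class CosmosDbBootstrap // illustrative name, not from the source
{
    public static async Task<DocumentClient> InitializeAsync(string endpoint, string authKey)
    {
        // Create the DocumentClient with the endpoint and authentication key.
        var client = new DocumentClient(new Uri(endpoint), authKey);

        // Create the database if it does not exist yet.
        await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "empmanagement" });

        // Create the collection under that database if it does not exist yet.
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri("empmanagement"),
            new DocumentCollection { Id = "employee" });

        return client;
    }
}
[/csharp]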

You need to create a helper class that takes input from the various GET, PUT, POST and DELETE operations of the API and performs them using the DocumentClient. I have included the source code library in this post so you can see how this works. The DocumentDBRespository.cs file is the helper class called from the various action methods of the API controller to get, insert, update or delete records in a collection. The helper class checks whether the database exists and creates it if not; the same goes for the collection.
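
A hedged outline of the shape of that helper; the member names here are illustrative rather than the exact source:

[csharp]
public static class DocumentDBRepository<T> where T : class
{
    private const string DatabaseId = "empmanagement";
    private const string CollectionId = "employee";
    private static DocumentClient client; // initialized at startup, e.g. by the bootstrap sketch above

    public static async Task<Document> CreateItemAsync(T item)
    {
        return await client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), item);
    }

    public static async Task<T> GetItemAsync(string id)
    {
        Document document = await client.ReadDocumentAsync(
            UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
        return (T)(dynamic)document;
    }

    public static async Task DeleteItemAsync(string id)
    {
        await client.DeleteDocumentAsync(
            UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
    }
}
[/csharp]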

The endpoint and authentication key in the Web.config file point to the emulator. For staging, testing or production deployments, we need to set these keys to the values of the Azure Cosmos DB service from the portal.

To learn more about Cosmos DB and how to work with it programmatically, follow the link here.

Hope this post helps you.

 

Source Code:

sourcecode

Create a Data Storage using Azure Cosmos DB – DotNet Core Migration Strategy (Part 1)

cosmos

In accordance to my series on .NET Core migration, this one is the initial phase where I am going to demonstrate how Azure Cosmos DB works. However, this post do cover an introduction of Azure Cosmos DB even if you not intending to perform any migration and would like to use Cosmos DB as part of your solution design.

 

Why Azure Cosmos DB

Apart from the various benefits that Azure Cosmos DB offers, which you can find here, my reasons for choosing Cosmos DB were the following.

  • It is a NoSQL database that supports a SQL-like query language and ACID transactions.
  • Following a PaaS model, it supports global distribution of databases in a highly scalable way.
  • It offers low latency and a 99.99% availability SLA.
  • With automatic indexing of data, it is accessible through a choice of APIs: SQL, Azure Table Storage, MongoDB, etc.
  • It supports multiple database models: key-value store, graph DBMS, document store, etc.
  • It is serverless and geo-distributed, which makes it an ideal candidate and an alternative to Redis Cache.

There are two ways to approach the development stage with Cosmos DB.

  • Create a Cosmos DB resource through the Azure portal and use it throughout your development lifecycle.
  • Use the Cosmos DB emulator locally (on-premise) throughout your development lifecycle and, once you are satisfied with the usage, create the Cosmos DB resource in Azure.

I prefer the on-premise Cosmos DB emulator because, during development, I will use Cosmos DB heavily as my data storage, and that consumption would incur cost that I don't think is necessary at this stage. You can check the pricing details here. However, I am going to demonstrate both how we can create a Cosmos DB resource through the Azure portal and how we can use the emulator for development.

 

Creating an Azure Cosmos DB in Portal

Search for Azure Cosmos DB in Azure Marketplace and select it to start the creation process.

image

In the DB Account creation page, create a new Resource Group or select an existing one where the resource will be available. Create an Account Name, which will be interpreted as {Account name}.documents.azure.com.

Select the type of API you want to use. Currently, it supports five types of API – SQL, MongoDB, Gremlin, Cassandra and Azure Table Storage. Based on your requirements, you can select the preferred one. For this post, and in continuation of my DotNet Core Migration Strategy series, I am going to use the SQL API.

By default, Geo-Redundancy and Multi-region Writes are disabled. I will keep them that way for now since I don't need them here.

 

image

 

Next, we need to set up the network for our Cosmos DB account. In this section, you can create a new VNet or use an existing one. I am going to create a new VNet here.

 

image

 

image

 

image

 

This section asks you to provide Tags that help you view consolidated billing. I am going to leave them empty for now.

image

 

Once everything is complete, the Cosmos DB Account will be created successfully.

image

 

You can review the details of the completed deployment.

image

 

Now, if you select the Cosmos DB account, you might see an error message saying that it has failed to get the collection list. (You can compare a collection to a model like Order, Product, etc.) The same issue appears when you open the Data Explorer, which says that retrieval of data is blocked due to the firewall configuration.

image

 

image

 

All you need to do is click the arrow in the Data Explorer that asks for review. This opens the firewall configuration of the Data Explorer. Check the checkbox that says Allow access from Azure Portal. That will fix both issues.

image

All good till now. Our Azure Cosmos DB in the portal is ready to use. Let's have a look at how to use the Azure Cosmos DB emulator.

 

Using Azure Cosmos DB Emulator

As I mentioned earlier, the emulator provides a local, on-premise environment that emulates Azure Cosmos DB for development purposes. I prefer to use the emulator during the development stage; for staging or production, I will switch to the Azure Cosmos DB account in the cloud. This makes the development lifecycle cost effective.

You can download the emulator from the following link. Once the MSI installer is downloaded, proceed with the installation steps.

Upon successful installation, run the emulator, which will open the local instance of the service at https://localhost:8081/_explorer/index.html.

image

In the Quickstart window, the URI and Primary Key fields are the crucial ones. The URI will be used by the DocumentDB client to connect to Azure Cosmos DB, and the Primary Key will be used to authenticate the request and establish a connection to Cosmos DB. These two fields are going to be used as key-value pairs in the AppSettings section of the application configuration or Web.config file.

When you select the Explorer window, you will find options to create a new database and add a collection to it. Every time you want to add a collection to an existing database, you just need to select the option Use existing Database id and add the collection to it.

image

image

 

Just for demonstration purposes, I would like to show what it looks like when you add a collection. After adding a new database and collections, you will find the database and collection getting created. When you select Documents from a specific collection, you can click New Document to insert a new record. The request is taken in JSON format (a sample document is shown below). Remember, the "id" field in the request always takes a string and represents the record. You can also associate a unique key while creating the collection.

image
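For illustration, a minimal document request might look like the following; every field except "id" is a hypothetical example.

    {
      "id": "1",
      "name": "Sample Product",
      "category": "Electronics",
      "price": 49.99
    }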

By clicking the record id, you can view an individual record. The SQL query used to display the records can be extended with additional filters based on your requirements, as in the example below.

image
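For example, the default query can be narrowed with a WHERE clause; the field name here is a hypothetical one from the sample document above.

    SELECT * FROM c WHERE c.category = "Electronics"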

However, we are going to write a utility that generates databases and collections dynamically, on an as-needed basis. Let's go ahead with the next step: programming.

 

Programmatically access and perform CRUD operations on Cosmos DB

In order to programmatically create a database and collections in Cosmos DB using C#, we need to perform the following steps.

  • Add the Microsoft.Azure.DocumentDB NuGet package to the project.
  • Modify your Web.config file to add the Cosmos DB authentication key, the Cosmos DB endpoint and the database name.

image
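A minimal sketch of what those appSettings entries might look like; the key names and the database name are assumptions, and the endpoint and key values come from the emulator's Quickstart window.

    <appSettings>
      <add key="CosmosDBEndpoint" value="https://localhost:8081/" />
      <add key="CosmosDBAuthKey" value="{primary key from the Quickstart window}" />
      <add key="DatabaseName" value="SampleDB" />
    </appSettings>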

  • Create the DocumentClient, which takes the endpoint and the authentication key.

image
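A minimal sketch of creating the client, assuming the key names from the Web.config sketch above:

    using System;
    using System.Configuration;
    using Microsoft.Azure.Documents.Client;

    // Reads the emulator endpoint and key from appSettings and creates the client.
    DocumentClient client = new DocumentClient(
        new Uri(ConfigurationManager.AppSettings["CosmosDBEndpoint"]),
        ConfigurationManager.AppSettings["CosmosDBAuthKey"]);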

  • Create the database

image
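A sketch using the SDK's create-if-not-exists call, assuming the DatabaseName key above:

    using Microsoft.Azure.Documents;

    // Creates the database only if it does not already exist.
    string databaseName = ConfigurationManager.AppSettings["DatabaseName"];
    await client.CreateDatabaseIfNotExistsAsync(new Database { Id = databaseName });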

  • Create the Collection

image
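A sketch of creating the collection under that database; the collection name and throughput value are assumptions:

    // Creates the collection only if it does not already exist, with a minimal throughput.
    await client.CreateDocumentCollectionIfNotExistsAsync(
        UriFactory.CreateDatabaseUri(databaseName),
        new DocumentCollection { Id = "Products" },
        new RequestOptions { OfferThroughput = 400 });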

You need to create a helper class that takes input from the various GET, PUT, POST and DELETE operations of the API and performs them using the DocumentClient. I have included the source code with this post, which will help you see how this works. The DocumentDBRepository.cs file is the helper class called from the various action methods of the API controller to get, insert, update or delete records in a collection. The helper class checks whether the database exists and, if not, creates it; the same goes for the collection.
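A simplified sketch of the kind of create/read pair such a repository might expose; the method, field and collection names are illustrative, not necessarily the exact ones from the attached source.

    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    // Illustrative repository methods built on the DocumentClient created earlier.
    public static async Task<Document> CreateItemAsync<T>(T item)
    {
        // Inserts a new document into the collection.
        return await client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri(databaseName, "Products"), item);
    }

    public static async Task<T> GetItemAsync<T>(string id)
    {
        // Reads a single document by its id and converts it to the requested type.
        Document document = await client.ReadDocumentAsync(
            UriFactory.CreateDocumentUri(databaseName, "Products", id));
        return (T)(dynamic)document;
    }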

The endpoint and authentication key in the Web.config file point to the emulator. For staging/testing or production deployment, we need to set the values of these keys to the Azure Cosmos DB service from the portal.

To learn more about Cosmos DB and how to work with it programmatically, follow the link here.

Hope this post helps you.

 

Source Code:

sourcecode