Prerequisites for microservices

To understand this better, let's take up an imaginary example: FlixOne Inc. With this example as our base, we'll discuss all the concepts in detail and see what it takes to be ready for microservices.

FlixOne is an e-commerce player (selling books) spread all over India. The company is growing at a very fast pace and diversifying its business at the same time. It has built its existing system on the .NET Framework as a traditional three-tier architecture. It has a massive database that is central to this system, with peripheral applications in its ecosystem. One such application, used by the sales and logistics team, is an Android app. These applications connect to the centralized data center and face performance issues. FlixOne has an in-house development team supported by external consultants. Refer to the following figure:

The preceding figure depicts a broad view of our current application, which is a single .NET assembly. It contains the user interfaces used for search, order, products, order tracking, and checkout. Now check out the following diagram:

The preceding figure depicts our Shopping cart module only. The application is built with C#, MVC5, and Entity Framework as a single-project application, and the figure is just a pictorial overview of its architecture. The application is web-based and can be accessed from any browser. Any request over the HTTP protocol first lands on the user interface, which is developed using MVC5 and jQuery. For cart activities, the UI interacts with the Shopping cart module, which is nothing but a business logic layer that in turn talks to the database layer (written in C#); data is persisted in the database (SQL Server 2008 R2).
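To make that coupling concrete, here is a hedged sketch of how the layers call each other in such a monolith; the class and method names (CartController, ShoppingCart, CartDataAccess) are illustrative, not from the original source. Each layer instantiates the next directly, which is exactly what makes the application hard to test and hard to deploy piecemeal:

using System;
using System.Collections.Generic;
using System.Web.Mvc;

namespace FlixOne.BookStore.Controllers
{
    public class CartController : Controller
    {
        // GET: Cart
        public ActionResult Index()
        {
            // The controller news up the business logic layer directly;
            // every layer is hard-wired to the concrete type below it.
            var cart = new ShoppingCart();
            var items = cart.GetItems(Guid.NewGuid());
            return View(items);
        }
    }

    public class ShoppingCart
    {
        public List<string> GetItems(Guid customerId)
        {
            // Direct instantiation again: replacing the database layer
            // (or unit testing this class) means editing this code.
            var dataAccess = new CartDataAccess();
            return dataAccess.LoadItems(customerId);
        }
    }

    public class CartDataAccess
    {
        public List<string> LoadItems(Guid customerId)
        {
            // In the real application, this would query the central
            // SQL Server 2008 R2 database.
            return new List<string>();
        }
    }
}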

Functional overview of the application

Here we are going to walk through a functional overview of the FlixOne bookstore application, purely to help visualize it. The following is a simplified functional overview of the application, up to the Shopping cart:

In the current application, the customer lands on the home page, where they see featured/highlighted books. They have the option to search for a book if they do not find the one they want. After getting the desired result, the customer can choose book items and add them to the shopping cart. Customers can review the book items before the final checkout. As soon as the customer decides to check out, the existing cart system redirects them to an external payment gateway for the amount payable for the items in the shopping cart.

As discussed previously, our application is a monolithic application: it is structured to be developed and deployed as a single unit. It has a large code base that is still growing, and even a small update requires the whole application to be deployed at once.

Solutions for current challenges

Business is growing rapidly, so we decide to open our e-commerce website in 20 more cities; however, we are still facing challenges with the existing application and struggling to serve the existing user base properly. In this case, before we start the transition, we should make our monolithic application ready for its transition to microservices.

In the very first approach, the Shopping cart module will be segregated into smaller modules, which will then be able to interact with each other as well as with external or third-party software:

This proposed solution is not sufficient for our existing application. Developers would be able to divide the code and reuse it, but the internal processing of the business logic would remain the same, with no change in the way it interacts with the UI or the database. The new code would interact with the UI and the database layer with the same old single database behind them. With the database undivided and the layers still tightly coupled, the problem of having to update and deploy the whole code base remains. So this solution is not suitable for resolving our problem.

Handling deployment problems

In the preceding section, we discussed the deployment challenges we face with the current .NET monolithic application. In this section, let's take a look at how we can overcome these challenges by adopting a few practices within the same .NET stack.

With our .NET monolithic application, deployment is done with xcopy deployments: a process in which all the files are simply copied to the server, used mostly for web projects. After dividing our modules into submodules, we can adopt deployment strategies that take advantage of this division; for example, we can deploy only the business logic layer or some common functionality. We can also adopt continuous integration and deployment.

Making a better monolithic application

We understand all the challenges with our existing monolithic application, and we have to serve our growing user base better. As we expand, we can't afford to miss the opportunity to win new customers, and any challenge we leave unfixed will cost us business. Let's discuss a few approaches to solving these problems.

Introducing dependency injection

Our modules are interdependent, so we face issues such as poor reusability of code and bugs introduced when a change in one module ripples into others; these, in turn, create deployment challenges. To tackle these issues, let's segregate our application in such a way that we can divide modules into submodules. We can restructure our Order module so that it implements an interface and receives its dependencies through the constructor. Here is a small code snippet that shows how we can apply this in our existing monolithic application.

Here is a code example that shows our Order class, where we use constructor injection:

using System;

namespace FlixOne.BookStore.Common
{
    public class Order : IOrder
    {
        private readonly IOrderRepository _orderRepository;

        // The repository is injected through the constructor rather
        // than instantiated here, so Order never depends on a
        // concrete repository type.
        public Order(IOrderRepository orderRepository)
        {
            _orderRepository = orderRepository;
        }

        public OrderModel GetBy(Guid orderId)
        {
            return _orderRepository.Get(orderId);
        }
    }
}
Inversion of Control (IoC) is a pattern in which objects do not themselves create the other objects they rely on to do their work; instead, those dependencies are supplied from the outside.

In the preceding code snippet, we abstracted our Order module behind the IOrder interface: the Order class implements IOrder, and with inversion of control its IOrderRepository dependency is resolved automatically by the container rather than being created by the class itself.
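The IOrder interface and the OrderModel class are referenced above but not shown; a minimal sketch consistent with those calls (the exact shape is an assumption, based on the repository code that follows) would be:

using System;

namespace FlixOne.BookStore.Common
{
    // Contract for the Order module; consumers depend on this
    // abstraction rather than on the concrete Order class.
    public interface IOrder
    {
        OrderModel GetBy(Guid orderId);
    }

    // Simple data carrier returned to the presentation layer.
    public class OrderModel
    {
        public Guid OrderId { get; set; }
        public DateTime OrderDate { get; set; }
        public string OrderStatus { get; set; }
    }
}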

Furthermore, the code snippets of IOrderRepository and OrderRepository are as follows:

using System;

namespace FlixOne.BookStore.Common
{
    public interface IOrderRepository
    {
        OrderModel Get(Guid orderId);
    }
}

namespace FlixOne.BookStore.Common
{
    public class OrderRepository : IOrderRepository
    {
        public OrderModel Get(Guid orderId)
        {
            // Call the data access method here; for now we return a
            // stubbed order to demonstrate the shape of the solution.
            return new OrderModel
            {
                OrderId = Guid.NewGuid(),
                OrderDate = DateTime.Now,
                OrderStatus = "In Transit"
            };
        }
    }
}

Here we are showcasing how our Order module is abstracted. In the preceding code snippet, we return default values for the order just to demonstrate the shape of the solution to the actual problem.

Finally, our presentation layer (the MVC controller) will use the available methods, as shown in the following code snippet:

using System;
using System.Web.Mvc;
using FlixOne.BookStore.Common;

namespace FlixOne.BookStore.Controllers
{
    public class OrderController : Controller
    {
        private readonly IOrder _order;

        // The controller receives IOrder from the container; it never
        // instantiates the Order class itself.
        public OrderController(IOrder order)
        {
            _order = order;
        }

        // GET: Order
        public ActionResult Index()
        {
            return View();
        }

        // GET: Order/Details/5
        public ActionResult Details(string id)
        {
            var orderId = Guid.Parse(id);
            var orderModel = _order.GetBy(orderId);
            return View(orderModel);
        }
    }
}

The following is a class diagram that depicts how our interfaces and classes are associated with each other and how they expose their methods, properties, and so on:

Here again, we use constructor injection: an IOrder instance is passed in and used to initialize the controller; hence, all of the Order module's methods are available within our controller.
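The snippets above leave out where the concrete types are wired up. As a hedged sketch, with MVC5 this is commonly done with an IoC container such as Unity; the Unity.Mvc5 package and the namespaces below are assumptions for illustration, as the original text does not name a container:

using System.Web.Mvc;
using Microsoft.Practices.Unity;
using Unity.Mvc5;
using FlixOne.BookStore.Common;

namespace FlixOne.BookStore
{
    public static class UnityConfig
    {
        public static void RegisterComponents()
        {
            var container = new UnityContainer();

            // Map each abstraction to its implementation. The container
            // can now resolve OrderController's IOrder parameter (and,
            // transitively, Order's IOrderRepository parameter).
            container.RegisterType<IOrder, Order>();
            container.RegisterType<IOrderRepository, OrderRepository>();

            DependencyResolver.SetResolver(new UnityDependencyResolver(container));
        }
    }
}

RegisterComponents would typically be called once from Application_Start in Global.asax.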

By achieving this, we would overcome a few problems such as:

  • Reduced module dependency: With the introduction of IOrder in our application, we reduce the interdependency of the Order module. This way, if we need to add anything to or remove anything from this module, other modules are not affected, as IOrder is implemented only by the Order module. Say we want to enhance our Order module; doing so would not affect our Stock module. This is how we reduce module interdependency.
  • Introducing code reusability: If any module of the application needs to get order details, it can easily do so using the IOrder type.
  • Improvements in code maintainability: We have now divided our modules into submodules, or classes and interfaces. We can structure our code so that all the interfaces are placed in one folder, following suit for the repositories. With this structure, it becomes easier to arrange and maintain the code.
  • Unit testing: Our current monolithic application does not have any kind of unit testing. With the introduction of interfaces, we can now easily perform unit testing and adopt test-driven development with ease.

Database refactoring

As discussed in the preceding section, our application database is huge and everything hangs off a single schema. This huge database should be considered as part of the refactoring. We will approach it as follows:

  • Schema correction: In general practice (though it is not required), the schema depicts the modules. As discussed in previous sections, our huge database has a single schema, dbo, and not every table should be tied to dbo. Several modules interact with specific tables; for example, the tables of our Order module should live under a related schema name, such as Order. Whenever we need to use those tables, we can then refer to them via their own schema instead of the general dbo schema. This does not affect how data is retrieved from the database, but it structures and arranges our tables in such a way that we can identify and correlate each table with its specific module. This exercise will be very helpful when we transition the monolithic application to microservices. Refer to the following image:

In the preceding figure, we see how the database schema is separated logically; it is not separated physically, as our Order schema and Stock schema belong to the same database. So here we separate the database schema logically, not physically.

We can also take the example of our users: not all users are admins, and users belong to different zones, areas, or regions. Our user tables should be structured in such a way that we can identify users by the table name or by how the tables are structured. Here we can structure the user tables on the basis of regions, mapping the user table to a region table in such a way that it does not impact or require any changes in the existing codebase.
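With Entity Framework, which the application already uses, moving a table out of dbo and into a module-specific schema can be expressed in the model mapping. The following is a minimal sketch; the context name is hypothetical, and the entity reuses the OrderModel class from earlier purely for illustration:

using System.Data.Entity;

namespace FlixOne.BookStore.Common
{
    public class FlixOneContext : DbContext
    {
        public DbSet<OrderModel> Orders { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<OrderModel>().HasKey(o => o.OrderId);

            // Map the Orders table into the module-specific "Order"
            // schema instead of the default "dbo" schema; the data and
            // the queries stay the same, only the table's home changes.
            modelBuilder.Entity<OrderModel>().ToTable("Orders", "Order");
        }
    }
}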

  • Moving business logic to code from stored procedures: The current database has thousands of lines of stored procedures carrying a lot of business logic. We should move that business logic into our codebase. In our monolithic application we are already using Entity Framework, so we can avoid creating stored procedures and incorporate all of our business logic into code, as sketched below.
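As a hedged illustration of that move, suppose a stored procedure computed an order total with a discount rule; the same rule expressed in C# (the entity shape and the 5% discount threshold are hypothetical) becomes ordinary, testable code:

using System.Linq;

namespace FlixOne.BookStore.Common
{
    public static class OrderPricing
    {
        // Hypothetical line-item shape; in the real application this
        // would be an Entity Framework entity.
        public class OrderLine
        {
            public decimal UnitPrice { get; set; }
            public int Quantity { get; set; }
        }

        // Business rule moved out of a stored procedure: orders over
        // 1,000 get a 5% discount. In C#, the rule is versioned,
        // reviewable, and unit-testable like any other code.
        public static decimal CalculateTotal(IQueryable<OrderLine> lines)
        {
            var subtotal = lines.Sum(l => l.UnitPrice * l.Quantity);
            return subtotal > 1000m ? subtotal * 0.95m : subtotal;
        }
    }
}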

Database sharding and partitioning

Between database sharding and partitioning, we can go with database sharding, where we break the database into smaller databases, each deployed on a separate server:

In general, database sharding is simply defined as a shared-nothing partitioning scheme for large databases. This way, we can achieve new levels of performance and scalability. The term comes from shard: we divide a database into chunks (shards) and spread them across different servers.

The preceding diagram is a pictorial overview of how our database is divided into smaller databases. Take a look at the following diagram:
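On the application side, sharding needs a routing rule that maps a key to a shard. Here is a minimal sketch assuming the customer ID is the shard key; the connection strings and the modulo-hash scheme are illustrative assumptions, not the only way to shard:

using System;

namespace FlixOne.BookStore.Common
{
    public static class ShardMap
    {
        // Hypothetical connection strings, one per shard server.
        private static readonly string[] Shards =
        {
            "Server=shard0;Database=FlixOne0;Trusted_Connection=True;",
            "Server=shard1;Database=FlixOne1;Trusted_Connection=True;",
            "Server=shard2;Database=FlixOne2;Trusted_Connection=True;"
        };

        // Route a customer to a shard by hashing the shard key. A
        // production scheme would use an explicit, version-stable hash
        // rather than GetHashCode, and a lookup layer so that shards
        // can be added without rehashing every customer.
        public static string GetConnectionString(Guid customerId)
        {
            var index = (int)((uint)customerId.GetHashCode() % (uint)Shards.Length);
            return Shards[index];
        }
    }
}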

DevOps culture

In the preceding sections, we discussed the challenges and problems the team faces. Here, we propose adopting a DevOps culture as a solution: collaboration between the development team and the operations team should be emphasized, and we should set up a system where development, QA, and the infrastructure teams work in collaboration.

Automation

Infrastructure setup can be a very time-consuming job, and developers would remain idle while the infrastructure is readied for them; a new developer can take considerable time after joining the team before contributing. The process of infrastructure setup should not stop a developer from becoming productive, as that reduces overall productivity. It should be automated: with the use of Chef or PowerShell, we can easily create our virtual machines and quickly ramp up the developer count as and when required. This way, a developer can be ready to start work from day one of joining the team.

Chef is a DevOps tool that provides a framework to automate and manage your infrastructure.

PowerShell can be used to create our Azure virtual machines and to set up TFS.

Testing

We are going to introduce automated testing as a solution to the problems we faced while testing during deployment. In this part of the solution, we divide our testing approach as follows:

  • Adopt Test-Driven Development (TDD). With TDD, developers test their own code. A test is nothing but another piece of code that validates whether the functionality works as intended. If any functionality does not satisfy the test code, the corresponding unit test fails. The functionality can then be easily fixed, as you know exactly where the problem is. To achieve this, we can utilize frameworks such as MSTest or NUnit; a minimal example follows this list.
  • The QA team can use scripts to automate their tasks. They can create scripts by utilizing QTP or the Selenium framework.
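As a hedged example of the TDD point above, here is a minimal MSTest that exercises the Order class from earlier through a hand-rolled fake repository (a mocking library could be used instead; the test and fake names are illustrative):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using FlixOne.BookStore.Common;

namespace FlixOne.BookStore.Tests
{
    // Hand-rolled fake so the test runs without a database.
    internal class FakeOrderRepository : IOrderRepository
    {
        public Guid LastRequestedId { get; private set; }

        public OrderModel Get(Guid orderId)
        {
            LastRequestedId = orderId;
            return new OrderModel { OrderId = orderId, OrderStatus = "In Transit" };
        }
    }

    [TestClass]
    public class OrderTests
    {
        [TestMethod]
        public void GetBy_PassesOrderIdToRepository()
        {
            var repository = new FakeOrderRepository();
            var order = new Order(repository);
            var orderId = Guid.NewGuid();

            var result = order.GetBy(orderId);

            // The interface-based design lets us assert on behavior
            // without touching SQL Server.
            Assert.AreEqual(orderId, repository.LastRequestedId);
            Assert.AreEqual("In Transit", result.OrderStatus);
        }
    }
}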

Versioning

The current system does not have any kind of versioning, so there is no way to revert if something goes wrong during a change. To resolve this, we need to introduce a version control mechanism; in our case, this should be either TFS or Git. With version control, we can revert a change if it is found to break functionality or introduce unexpected behavior in the application. We also gain the ability to track, at an individual level, the changes made by the team members working on the application, a capability we did not have before.

Deployment

In our application, deployment is a huge challenge, and to resolve it we introduce Continuous Integration (CI). In this process, we need to set up a CI server. With CI in place, the entire process is automated: as soon as code is checked in by any team member, using our version control system (TFS or Git, in our case), the CI process kicks into action. It ensures that the new code is built and that the unit tests and integration tests are run. In the case of either a successful build or a failure, the team is alerted to the outcome, enabling it to respond quickly to any issue.

Next, we move to continuous deployment. Here we introduce various environments: a development environment, a staging environment, a QA environment, and so on. Now, as soon as code is checked in by any team member, CI kicks into action; it invokes the unit/integration test suites, builds the system, and pushes it out to the environments we have set up. This way, the turnaround time for the development team to provide a suitable build to QA is reduced to a minimum.
