Hands-On Microservices with Kotlin: Build reactive and cloud-native microservices with Kotlin using Spring 5 and Spring Boot 2.0
Medina Iglesias
eBook | Jan 2018 | 414 pages | 1st Edition

Hands-On Microservices with Kotlin

Understanding Microservices

Microservices and their continuously evolving architecture have become one of the most widely used approaches in enterprise applications. In this book, we will build an understanding of what they really are and the principles that they are based on. Using Domain-Driven Design, we will reinforce those principles to maintain a clean architecture that can evolve with our applications.

Since microservices do not have a static architecture, we will discover how the new reactive paradigm could change the way we create them. Finally, we will take an overview of cloud architecture and see why we should create Cloud Native microservices.

In this chapter, you will learn about:

  • What a microservice really is
  • Understanding microservices principles
  • Using Domain-Driven Design for a clean architecture
  • Non-blocking reactive microservices
  • Cloud Native microservices and their benefits

What is a microservice?

Microservices are modular, loosely-coupled services that provide a fine-grained protocol. They physically separate concerns and allow us to design, develop, test, and deploy them independently.

Due to their modular capabilities, they can be created by small cross-functional teams that are embracing the benefits of agile methodologies and the DevOps culture. They are also an ideal candidate for continuous delivery and deployment.


DevOps is a software development and delivery process that emphasizes communication and collaboration between product management, software development, and operations professionals.

They are easy to understand and connect well with other services, making the integration of complex applications a lightweight task. They can be scaled, monitored, and controlled independently, so they take full advantage of cloud architectures.

Understanding SoA

Microservices are an evolution of the Service Oriented Architecture (SoA). So, if we want to understand what a microservice is, we need to understand what SoA is. SoA is based on having application components communicating through a set of services, that is, discrete units of functionality that can be accessed remotely. Services are the foundation stone of SoA, and the same applies to microservices as well.

As described in SoA, a service has four properties:

  • It logically represents a business activity with a specified outcome
  • It is self-contained
  • It is a black box for its consumers
  • It may consist of other underlying services

To understand these properties, let's look at an example of an application using SoA:

SoA application example

In this typical n-tier architecture, the application is divided into three layers:

  • Presentation layer: Holds the UI for our customer
  • Business layer: Has services implementing the domain logic for our business capabilities
  • Data layer: Persists our domain model

Each component includes the logic to interact with the customer in a specific business activity and, to do so, uses the services provided by the business layer. Each service represents the realization of a business activity: for example, you log in to the application via the login service, check offers via the offers service, or create orders via the orders service. These services are self-contained in the business layer, and they act as a black box for their consumers: the components don't know how the services are implemented, nor do they know how the domain model is persisted. All the services depend on the customer service to obtain customer data or return customer information, but the client does not know about these details.

This approach provides several benefits to any architecture that uses it:

  • Standardized service contract, allowing easy integration with components
  • Reusability, allowing services to delegate responsibilities to each other
  • Business value, implementing the business capabilities
  • Hides complexity, so if we need to change our database, the clients are unaffected
  • Autonomy, as each of the layers can be separated and accessed remotely

Differentiating microservices from SoA

Microservices architecture evolves from SoA, but it has key differences that we need to understand. Let's recreate the previous SoA example with a microservices architecture and review the differences and benefits for this type of architecture:

Microservice application example

In this architecture, the layers are not bound together, as they are purely divided logically. Each microservice is completely decoupled from the other services, so even our UI components could be completely separate, deployable modules. These microservices own their own data and they could be altered without affecting each other. This is a capability that will stand out when facing continuous integration or delivery, as we could provision data for our tests and pipeline, without affecting other parts of the application.

Because of their independence, they could be very specific. We could use this benefit to build that expertise into our teams. An expert team that controls the domain logic of a business capability could effectively deliver better value to our products.

We could vary the range of development languages, platforms, or technologies to build each microservice. As they are completely independent, we could use a different database for each different business need, or perhaps use certain technologies that will give us the agility required to adapt to certain requirements more easily.

Since they are modular, we could deploy them independently and have different release cycles. When we need to monitor them, we could create different alerts or KPIs based on the nature of what they do and how they do it; the alerts for a microservice used in our accounting process may not be the same as for one that just provides content for our marketing banners. For similar reasons, we could scale them separately; we could have more servers for some microservices, or just more CPU or resources for others.


Taking advantage of how we can control and monitor microservices independently will grant us the ability to optimize scaling.

The infrastructure for microservices is usually simpler: there are not as many complex servers to manage, configure, monitor, and control, there are no huge database schemas, and, since the expertise within the teams is higher, more things can be easily automated. This makes the DevOps culture a common practice within microservices teams, and with it we gain even more flexibility in our products.

As microservice teams are usually small, there is a common understanding within the industry that the optimal size for a microservice team is one that could be fed with two pizzas. Whether or not this is the reality, keeping your team small will help to maximize the value of this type of architecture.

If we look at SoA and then microservices, what we can see is a natural evolution. Microservice architecture takes the advantages of SoA and then defines additional steps in that same direction. So, we can definitely say that:

"Microservices are SoA but all SoA are not microservices."

From the monolith to microservices

So, why did SoA evolve into microservices? Perhaps one of the reasons was the monolith. There was a point in time when applications were small, and the presentation logic was usually coupled with the business logic. Then the domain model got complex and many software patterns arose. Most of them focused on one thing: Separation of Concerns.

Separation of Concerns (SoC) is a design principle for separating software into distinct sections so that each section addresses only one concern. But software is not the only thing that needs separation; the architecture needs it as well. Approaches like SoA are designed for that, as they allow us to hide our complexity behind black boxes, making our architecture more modular and giving it the ability to handle the complexity that we require.

We may create a complex data store in the mainframe based on detailed business rules, or in a powerful database with a deep schema, complex stored procedures, views, and relationships. We can choose frameworks and tools to easily orchestrate all these parts. We probably also need a powerful Enterprise Service Bus (ESB).

An ESB is a software component that is in charge of the coordination, mapping, and routing of services. The overall idea is to have a very powerful component to easily orchestrate messages. In order to create complex applications, services were designed using most of these elements, creating complex relationships.

Services call each other, views query several tables and pull data from different business domains, and finally several of those elements are merged in our ESB with business rules to produce new services.

Complex SoA application

Changing one service, or a table in the schema, produces a knock-on effect in the whole application, and those relationships and dependencies need to be changed, whether they are services, mappings, or even screens, as they are all bundled together. In many cases, this causes long release cycles, as handling that level of complexity is not an easy task; nor are the development, configuration, testing, or deployment.

Even scaling the application could be affected, whether that means a bigger database, more servers for services, or a bigger ESB for handling more messages. Since they all depend on each other, it is not easy to scale them separately. All of this means that our architecture is coupled and we have created a monolithic application.

Monolithic applications existed before SoA and, in fact, this was one of the things that SoA helped to handle, decoupling the clients from the business domain. Unfortunately, trying to implement SoA drove many applications back to it.

Does this mean that doing SoA will produce a monolith? No. In fact, before the concept of microservices, many architects and developers started to adopt patterns and architectures to handle this problem. This evolved into what we call microservices today.

There were people doing microservices before that name existed; they just called it SoA.

Microservices principles

Defining microservices principles will allow us to build scalable, easy-to-maintain enterprise applications. We will focus on benefits and downsides when we review them. We understand that sometimes there could be some disagreement in some of them; however, we encourage you to review them all. Finally, we know that there are probably dozens or more principles that could be included, but we chose the ones that made most sense in the context of this book.

Defining design principles

We need to choose a set of principles when we design microservices. Each of them has its own advantages, which will be reviewed later in this chapter, but defining them will also allow us to have a consistent approach to different kinds of problems, and will help others understand our architecture.

The key principles that we are going to define are:

  • Modelled around business capabilities
  • Loosely coupled
  • Single responsibility
  • Hiding implementation
  • Isolation
  • Independently deployable
  • Build for failure
  • Scalability
  • Automation

Modelled around business capabilities

A well-designed microservice should be modeled around the business capabilities that it is meant to implement. Designing software has a component of abstraction and we are used to getting requirements and implementing them, but we must consider how everyone, including us, will understand the solution, now and in the future.

When we need to update, or even modify, our microservices, we need to abstract back to the original concept that defined them. In that process, we could realize that something was not as we originally understood, or that our design cannot evolve. We may even discover that we have broken the boundaries of our business domain and no longer implement the original capability, or that it is actually implemented across a set of unrelated microservices. We could end up coupling our microservices together, and that is something that we want to avoid.

The domain experts of these business capabilities have a clear understanding of how they operate and how those capabilities are combined and used. Working with them could make our microservices understandable for everyone, including our future selves, and will move our services to become not just an abstraction, but a mapping of the original business capability.

Work as closely as you can with your domain experts; it will always benefit you.

We will dive deeper into this topic in the Domain-Driven Design section of this chapter.

Loosely coupled

No microservice exists on its own, as any system needs to interact with others, including other microservices, but we need to do this in as loosely coupled a way as we can. Let's say that we are designing a microservice that returns the available offers for a given customer. We may need a relation to our customer, for example, a customer ID, and that should be the maximum coupling that we accept.

Imagine a scenario in which a component that uses our offers needs to display the customer name alongside those offers. We could modify our implementation to use the customer microservice to add that information to our response, but in doing so, we are coupling ourselves to the customer microservice. In that case, if the customer name field changes, for example, if it is separated into surname and forename, we need to change the output of our microservice. That type of coupling needs to be avoided; our microservices should return only the information that is really under their domain.

Remember that our domain experts could help us understand whether a business capability owns a function; the experts in customer offers will probably know that the customer name is something that is handled in another business capability.

We need to take care of how we are coupling, not only between microservices, but with everything in our architecture, including external systems. That is one of the reasons why every microservice should own its own data, including the database where that data is persisted.
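
As a minimal sketch of this idea (the names here are illustrative, not from the book), the response model of an offers microservice would carry only the customer identifier and the data that the offers domain itself owns:

// Hypothetical response model for an offers microservice.
// It references the customer only by ID; the customer's name and any other
// customer details stay inside the customer microservice's own domain.
data class Offer(
    val id: String,
    val customerId: String,   // the only link to the customer domain
    val description: String,
    val discountPercent: Int
)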

Single responsibility

Every microservice should have responsibility over a single part of the functionality provided by the application, and that responsibility should be entirely encapsulated by the microservice. The design of the microservice should be narrowly aligned with that responsibility.

We could adopt Robert C. Martin's definition of the principle applied to OOP that said: "A class should have only one reason to change"; for this principle, we can say: a microservice should have only one reason to change.

If we realize that changing a business function within our application requires modifying several microservices, or that a change cascades into unrelated microservices, it is time to reconsider how we designed them.

This does not mean that we should make microservices that do only one operation. It is probably a good idea to have a microservice that handles the customer operations, such as create, find, and delete, but it probably shouldn't handle operations such as adding offers to a customer.
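
As a hypothetical sketch of that boundary (the names are invented for illustration), a customer microservice would expose only customer operations, while anything offer-related would live in another service:

data class Customer(val id: String, val name: String)

// Hypothetical interface of a customer microservice: it has only one
// reason to change, the customer domain itself.
interface CustomerService {
    fun create(customer: Customer): Customer
    fun find(id: String): Customer?
    fun delete(id: String)
    // Deliberately no addOfferToCustomer(...): offers belong to another service.
}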

Hiding implementation

Microservices usually have a clear and easy to understand interface that must hide the implementation details. We shouldn't expose the internal details, neither technical implementation nor the business rules that drive it.

Applying this principle, we reduce our coupling to others and prevent changes in our details from affecting them. We also prevent technical changes or improvements from impacting the overall architecture. We should always be able to change when needed, from where we persist our business model to the programming languages or frameworks that we use.

But we also need to be able to modify our logic and rules, to adapt to any change within our domain without affecting the overall application. Helping to handle change is one of the benefits of a well-designed microservice architecture.

Isolation

A microservice should be physically and/or logically isolated from the infrastructure of the systems that it depends on. If we use a database, it must be our database; if we are running on a server, it should be our server, and so on. With this, we guarantee that nothing external is affecting us and that we are not affecting anything external.

This will help in everything from deployments to performance and monitoring, and even in building our continuous delivery pipeline. It will make it easier for our microservices to be controlled and scaled independently, and will help the ops function within our team to manage them.

We should move away from the days when a failure in some part of the architecture affected other parts. Containers are one of the key technologies for effectively achieving this principle. We will learn more about this in the Cloud Native microservices section of this chapter.

Independently deployable

Microservices should be independently deployable; if they are not, it probably means that there is some kind of coupling within our architecture that needs to be solved. If we meet the other principles but fail at this one, we are probably diminishing the benefits of this architecture.

Having the ability to deliver constantly is one of the advantages of the microservices architecture; any constraints should be removed, as much as we remove bugs from our applications.

We should take care of deployments from the beginning of the design of our microservices and architecture; finding a constraint in this area at a late stage could have a big impact on the overall application.

Build for failure

It doesn't matter how many tests we do in our microservice, how many controls are in place, how many alerts could be triggered; if our microservice is going to fail, we need to design for that failure, to handle it as gracefully as possible, and define how we could recover from it.

"Anything that can go wrong will go wrong."
– Murphy

When we approach the initial design of a microservice, we need to start working on the more basic errors that we need to handle. As the design grows, we should think of all the edge scenarios, and finally what could go really wrong. Then, we need to assess how we are going to notify, monitor, and control those situations, how we could recover, and if we have the right information and tools for solving them.

Think of these areas when you design a microservice:

  • Upstream
  • Downstream
  • Logging
  • Monitoring
  • Alerting
  • Recovery
  • Fallbacks

Upstream

Upstream means understanding how we are going to notify our consumers of errors, or whether we are not going to, always remembering to avoid coupling.

Downstream

Downstream refers to how we are going to handle the failure of something that we depend on, such as another microservice, or even a system like a database.

Logging

Logging is about taking care of how we are going to log any failure: whether we are doing it too often or too infrequently, how much information we record, and how it can be accessed. We should take special care with sensitive information and performance implications.

Monitoring

Monitoring needs to be designed thoughtfully. Handling a failure without the right information in the monitoring systems is a very problematic situation; we should consider which elements of the application give us meaningful information.

Alerting

Alerting is about understanding which signals could indicate that something is going wrong, and how they link to our monitoring and probably to our logging. For any well-designed application, it is not enough to just alert on anything strange; we require a deeper analysis of the signals and how they are related.

Recovery

Recovery is about designing how we are going to act on those failures to get back to a normal state. Automatic recovery should be our target, but manual recovery should not be neglected, since automatic recovery could fail.

Fallbacks

Think about how, even if our microservices are failing, we can still respond to whoever uses them. For example, if we design a microservice that retrieves the offers for a customer but encounters a problem accessing the data layer, maybe it could return a default set of offers so that the application at least has some meaningful information. In the same way, if we consume an external service, we may have a fallback mechanism for when that service is not available.

Fallbacks are a common pattern to prevent a problem within your architecture affecting other parts of the system. If we have a good fallback, our application could work until that problem is fixed.
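
A minimal sketch of such a fallback, assuming a hypothetical repository that may fail when the data layer is unavailable; instead of propagating the error, the service answers with a safe default set of offers:

data class Offer(val id: String, val description: String)

// Hypothetical data-layer dependency that may fail (for example, database down).
interface OfferRepository {
    fun findByCustomerId(customerId: String): List<Offer>
}

class OfferService(private val repository: OfferRepository) {
    // A static, safe set of offers used while the data layer is unavailable.
    private val defaultOffers = listOf(Offer("default", "Welcome offer"))

    fun offersFor(customerId: String): List<Offer> =
        try {
            repository.findByCustomerId(customerId)
        } catch (ex: Exception) {
            defaultOffers   // degrade gracefully instead of failing the caller
        }
}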

Scalability

Microservices should be designed to be independently scalable. If we need to increase how many requests we can handle or how many records we can hold, we should do it in isolation. We should avoid the situation where, due to coupling in the architecture, the only way to scale our application is to scale several components together, or the system as a whole.

Let's go back to the original SoA application example and handle a scenario where we need to scale our offers capability:

Example of scaling a coupled SoA application

Even though what we need to scale is our offers capability, due to the coupling of the system we need to scale it as a whole. We will increase how many instances of the presentation and business layers we have, and we will grow our database, either with more instances or with a bigger database. We may also need to upgrade some of those servers, as the resources they require will increase. In a microservices architecture, we could scale just the elements that are needed. Let's see how we could scale the same application using microservices:

Example of scaling a microservice application

We have increased only what was required for the offers capability, keeping the rest of the architecture intact. We also need to consider that, in microservices, those servers are smaller and don't need as many resources, due to their limited scope.


In a well-designed microservice architecture, we could effectively have more capacity with less infrastructure since it could be optimized for more accurate use and be scaled independently.

We will review more about this topic in the Cloud Native microservices section of this chapter.

Automation

Our microservices should be designed with automation in mind, from building and testing to deployment and monitoring. Since our services are going to be small and isolated, the cost of automating them should be low and the benefits should be high.

With this principle, we improve the agility of our application and prevent unnecessary manual tasks from having an impact on the system. For those reasons, Continuous Integration and Continuous Delivery should be designed in from the beginning of our architecture.

Domain-Driven Design

Using Domain-Driven Design (DDD) in our microservices will help us meet our microservices principles, but DDD will also help organize our product teams in ways that increase the value we get from this type of architecture. But first, we need to understand what DDD actually is, and how we can apply it to create a clean architecture.

What is Domain-Driven Design?

Domain-Driven Design is a software development approach that connects the implementation to an evolving model of a complex core business domain.

The term Domain-Driven Design was coined by Eric Evans in his book of the same title.

When we approach a complex system, we usually abstract it to a model that describes the different selected aspects of the system, and how we could use it to solve problems. When multiple models are in play, and the code base of different models is combined, the software becomes buggy, unreliable, and difficult to understand. It is often unclear in what context a model should not be applied. The domain is the sphere of knowledge that the users of our system understand, and what they use to interact with our software; they are the domain experts.

In DDD, we define the context within which a model applies; explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas, keeping the model strictly consistent within these bounds.

Ubiquitous language

In DDD, we should build a common and rigorous language between developers and users. This language should be based on the domain model and will help us have a ubiquitous and fluid conversation with the domain experts, and will prove to be essential when approaching testing.

Since our domain model is part of our software, we should be precise in order to avoid any ambiguity, and we should evolve both model and language as our knowledge of the domain grows. When creating software, the ubiquitous language should be used not only in our domain model, but also in our domain logic and even our architecture. It allows a ubiquitous understanding by any team member.

Creating tests that use the domain language helps any team member to understand our domain logic.
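
For example, here is a sketch of such a test (assuming JUnit 5, which we use later in the book, and a made-up offers domain): the test name is written in the ubiquitous language, so a domain expert can read it and confirm the rule it encodes:

import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

// Minimal domain stubs so the sketch is self-contained.
data class Customer(val id: String)
data class Offer(val name: String)
class OfferService {
    fun offersFor(customer: Customer) = listOf(Offer("Welcome offer"))
}

class CustomerOffersTest {
    @Test
    fun `a new customer receives the welcome offer`() {
        val offers = OfferService().offersFor(Customer(id = "42"))
        assertTrue(offers.any { it.name == "Welcome offer" })
    }
}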

Bounded context

When a domain model grows, it becomes complicated to keep a unified domain model. Sometimes we face a situation where we see two different representations of a concept; for example, let's examine the concept of family in a large model.

In a shopping platform, we may have the concept of product families; for example, our fabulous 32" LCD screen and the classic 24" CRT screen are part of the screen family. On the other hand, our speed offers and last-day offers are part of our limited-time offer family.

We can see that family may not be exactly the same thing for products and offers; both probably have a name in their model, but in each context they may have a totally different model and logic.

In DDD, we separate them into bounded contexts, a boundary that surrounds a model. This keeps the knowledge inside the boundary consistent, ignoring the outside world, so we can still have our ubiquitous language for our domain model.
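
As an illustrative sketch (the package and field names are invented here), each bounded context keeps its own Family model in its own module, consistent inside its boundary even though both share the name:

// File: products/Family.kt (the product context's view of a family)
package com.example.products

data class Family(val name: String, val category: String)

// File: offers/Family.kt (the offers context's view of a family)
package com.example.offers

import java.time.LocalDate

data class Family(val name: String, val offerEndsOn: LocalDate)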

Context mapping

In a large application designed for several bounded contexts, we can lose sight of the global view. It is inevitable that the various bounded contexts will need to share or communicate data between each other. A context map is a global view of the system as a whole, showing how our bounded contexts should communicate with each other.

Context map example

This is an oversimplified example that shows three bounded contexts and how they are mapped. In the product context, we have our product and the family that it belongs to. Here we have all the operations for this context, and it does not have a direct dependency on any other context.

Our offers bounded context has a dependency on the product domain context, but this is a weak relation that should purely reflect the ID of the product that a particular offer belongs to. This context will define the operations that contain the domain logic for this context.

In our shopping bounded context, we have a weak relation with the product that belongs to a shopping list, and we have the operations for this context. Finally, both the offers and shopping contexts have a relation with the customer, which probably belongs to a separate bounded context.

Using DDD in microservices

Now that we have a clearer understanding of what DDD is, we need to review how we are going to apply it to our microservices. We could start with the following points:

  • Bounded Context: We should never create a microservice that includes more than one bounded context: it is better if we can map that whole context to a single microservice, something that indicates that our context is really bounded
  • Ubiquitous Language: We need to ensure that the language that our microservice speaks with is ubiquitous, so the operations and interfaces that are exposed are expressed in the context domain language
  • Context Model: The model that our microservice uses should be defined within the bounded context and use the ubiquitous language, even for entities that are not exposed in any of the interfaces that the microservices provide
  • Context Mapping: Finally, we need to review the context mapping of the whole system to understand the dependencies and coupling of our microservices

After reviewing these points, we will notice that we are in fact fulfilling the main principles defined before. Our microservices are modelled around business capabilities (our context domains), are loosely coupled, as our context mapping shows, and have a single responsibility, as a bounded context should. Microservices that implement a bounded context can easily hide their implementation, and they will be naturally isolated, so we can deploy them independently. Having those principles in place will make it easier to build for failure, and to achieve scalability and automation. Finally, having a microservice architecture that follows DDD will deliver a clean architecture that any team member can understand.


The ubiquitous language of a well-designed bounded context will make many tasks easy in a microservice life cycle, from working with the domain experts to tests or any tasks for the ops function of our team.

Reactive microservices

Reactive programming is currently a trending topic, mainly because of the benefits of implementing software using this new paradigm. Spring Framework 5.0 includes numerous changes to take advantage of this programming model, and many new components of the Spring family have evolved to support it. In fact, new Spring libraries have been created to add additional support for applications interested in what is called the reactive revolution. Additionally, Spring has rewritten the core of the framework using reactive technologies that benefit the applications built on it. In this section, we will understand the basics and principles of reactive programming and how we can apply it to create reactive microservices.

Reactive programming

We are quite familiar with imperative programming: in our software, we ask for something to be done and expect a result; meanwhile, we wait, and our action is blocked while it expects that result. Consider this small piece of pseudo code as an example:

var someVariable = getData()
print(someVariable)

In this couple of instructions, we set the content of a variable from the output of a function that returns data; when the data is available, we print it out. The really important part of this small piece of code is that our program stops until we completely get the data; this is what is called a blocking operation.

There is a way to bypass this limitation that has been used extensively in the past: we could create a separate thread to get our data, but that thread will effectively be blocked until completion. If different requests need to be handled, we end up creating a thread for each one, possibly using a pool, but we will eventually reach a limit on how many of those threads we can handle. This is how most traditional applications work, but now this can be improved, as the sketch below illustrates.
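
A rough sketch of that traditional approach (getData() is a made-up blocking call): each request borrows a thread from a pool, and that thread stays blocked until the data arrives, so the pool size caps how many requests we can handle at once:

import java.util.concurrent.Executors

fun getData(): String = "some data"                  // hypothetical blocking call

private val pool = Executors.newFixedThreadPool(10)  // one blocked thread per in-flight request

fun handleRequest() {
    pool.submit {
        val data = getData()                          // the worker thread blocks here
        println(data)
    }
}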

Let's see some pseudo code for this in reactive programming:

subscribe(::getData).whenDone(::print)

What we are trying to do here is to subscribe to an operation and, when that operation is complete, send the result to another operation. In this example, when we get the data, we will print the results. The important part is that after that statement our program continues, so it can process other things; this is what is called a non-blocking operation. And this can be applied not just to a single result: we could subscribe to a reactive stream of data, and when the stream starts to flow, it will call our function, which will progressively print the data that we receive.

A reactive stream is a collection of data that will continuously flow as soon as it is ready, so imagine that instead of querying a database for some results and waiting, the database starts sending results as soon as they are ready. Many modern database drivers support these concepts.
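
As a small illustration using Project Reactor, one of the reactive libraries we will look at later in this chapter (assuming reactor-core is on the classpath; the data here is hard-coded): we subscribe, the program carries on, and each element is printed as it is emitted:

import reactor.core.publisher.Flux
import java.time.Duration

fun main() {
    // A reactive stream that emits one element per second.
    Flux.just("first", "second", "third")
        .delayElements(Duration.ofSeconds(1))
        .subscribe { println(it) }         // non-blocking: we just subscribe

    println("subscribed, the program keeps running")
    Thread.sleep(4_000)                    // only to keep the demo JVM alive
}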

This new programming model allows us to build high-performance applications that can process far more requests than in a more traditional blocking model. This approach utilizes resources more effectively, which could reduce the amount of infrastructure required for our applications. But now we need to understand what the real principles of reactive programming are.

Reactive Manifesto

In 2013, a working group of experts from some of the biggest software companies in the world published the Reactive Manifesto, which set the basis for how reactive systems are understood and built. The manifesto is available at https://github.com/reactivemanifesto/reactivemanifesto.

Let's review what the Manifesto said:

First, the Manifesto introduces the current landscape of modern applications, focusing on how it demands a new kind of system that needs to respond to far more data, faster than before, and has to be scalable, resilient, and fault tolerant. The intention of the Manifesto is to provide a coherent approach to those problems and to define reactive systems and the benefits we get from them. Many of those topics were discussed in our Microservices principles section, so it is probably a good idea to review them, but now we need to take a deep dive into how reactive systems are defined in the Manifesto.

If you would like to sign the manifesto or get a PDF version in any language, you can go to http://www.reactivemanifesto.org/.

Responsive

Modern applications should respond in a timely manner, not only to the users of the system but also when responding to problems and errors; we are a long way from the days when our applications would freeze because something was taking longer to answer or failing for unknown reasons.

We need to help the users have a seamless and predictable experience so that they can work progressively in our system with consistent quality; this will encourage them to use the system and will remove the past stigma of unpredictable user experiences.

Resilient

We covered much of this topic under our build for failure and isolation principles, but the Manifesto also indicates that if we fail to have resilience, we tend to affect our responsiveness, something that we should handle.

Some of these issues can be handled by correctly applying our scalability principle as well, since we can achieve resilience through replication, and replication depends on our scalability.

Elastic

Reactive systems should be elastic, so they effectively apply the scalability principle to stay responsive under varying workloads; but, more internally, the system itself may have the capability of increasing or decreasing the resources that it allocates.

In older architectures, planning resources was part of our architecture; we designed thread pools to handle our requests with a certain capacity, and we prepared our servers to be able to manage them.

In reactive systems, our services could dynamically fetch more resources if required and free them when they are not needed.

Message-driven

Reactive systems use asynchronous messaging to flow information through the different components with very loose coupling, which allows us to interconnect those systems in isolation. We could think of this as connecting streams through pipes: one service could subscribe to another to get some information, and the second service could be subscribed to a couple of additional services to combine the data and return it back to the original service.

Connecting streams

Each of those services does not know why or how that information is used, so they have little information about their dependencies. This allows us to replace those pieces easily, and also to handle errors in case of failure: we could simply create a stream of errors with other receivers that will handle and process them.

But the manifesto speaks about applying back pressure; we need to understand that concept further.

Back pressure

Back pressure is produced when a reactive system publishes at a rate higher than the subscriber can handle. In other words, this is how a consumer of a reactive service says: please, I am not able to deal with the demand at the moment, so stop sending data and do not waste resources (for example, buffer memory).

There is a range of mechanisms for handling this, and they are usually close to the reactive implementation, from batching the messages to dropping them, but right now we don't need to get into the details; just understand that any reactive system must deal with back pressure.
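
As a small, illustrative sketch using Project Reactor (an assumption on our part; reactor-core on the classpath): the producer ticks every millisecond, the subscriber requests small batches and processes them slowly, and the declared strategy simply drops whatever the subscriber has not asked for:

import reactor.core.publisher.Flux
import reactor.core.scheduler.Schedulers
import java.time.Duration

fun main() {
    Flux.interval(Duration.ofMillis(1))             // fast producer: a tick every millisecond
        .onBackpressureDrop { println("dropped tick $it") }
        .publishOn(Schedulers.single(), 16)         // downstream requests in small batches
        .doOnNext { Thread.sleep(100) }             // simulate a slow consumer
        .subscribe { println("processed tick $it") }

    Thread.sleep(3_000)                             // keep the demo running briefly
}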

Reactive frameworks

There are several reactive frameworks that we could use to create reactive applications.

Let's list the most important frameworks:

  • Reactive Extensions (ReactiveX or Rx)
  • Project Reactor
  • Java Reactive Streams
  • Akka

Reactive Extensions

Reactive Extensions is probably one of the most popular frameworks for creating reactive systems, and it supports a wide set of platforms and programming languages, from JavaScript using RxJS, to Java using RxJava, or even .NET platforms using Rx.NET.

It uses the observable pattern to perform non-blocking operations; most of the major reactive systems have been built using Rx.

More details can be found at: http://reactivex.io/.

Project Reactor

Project Reactor is a JVM reactive library that follows the reactive streams specification and provides a high-level library to easily create reactive applications. Spring Framework 5.0 uses Project Reactor extensively.

More details can be found at https://projectreactor.io/.

Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. You can refer to http://www.reactive-streams.org/.

Java reactive streams

Since Java 9, we have an implementation of reactive streams in the Java platform, and some projects are migrating existing Rx code to the new Java 9 libraries.

More details can be found in: https://community.oracle.com/docs/DOC-1006738.

Akka

Akka is a toolkit created by Jonas Bonér, one of the main authors of the Reactive Manifesto, to build concurrent and distributed applications on the JVM using Scala. Akka emphasizes the actor-based model and has been proven to support highly scalable distributed applications.

More details can be found in: https://akka.io/.

Reactive microservices

Now that we have a better understanding of reactive systems, we need to consider why we should create reactive microservices. If we look at microservices and remember what drove SoA towards microservices, we can see that the need to create more complex applications and to produce a better system for our users is what drives the architecture. With the new reactive programming model, we can create fast, non-blocking software that makes better use of the resources of our infrastructure. We can provide better responsiveness, and we can simplify our development to create highly reusable services that can be connected loosely with each other. Considering how aligned reactive systems are with our principles, and the extensive framework support that they have, we can conclude that the way forward for modern microservices is to become reactive.

We will explore more of this topic in Chapter 4, Creating Reactive Microservices.

Cloud Native microservices

Cloud Native microservices is an approach to building microservices using the advantages of the cloud computing model, focusing on building our microservices and allowing our cloud to deploy, manage, and scale them.

Cloud architecture focuses on the how, and not the where; it gives us the agility to deliver value to our products constantly, but first we need to understand what cloud computing is.

Cloud computing

Traditionally, organizations need to take care of provisioning specific infrastructure for their services. Whenever our applications need to scale, we need to buy more servers for them, many times using costly hardware provided by different vendors, and taking a considerable amount of time to configure them within our systems.

That infrastructure approach is usually tied to a static capacity, so if we have a peak of load in our app, we need to buy more servers, and after that peak is gone, part of our infrastructure is underutilized, sometimes producing more costs just to maintain it or recycle it into new services; and because configuring it isn't easy, we probably keep it as-is until the next peak comes. Cloud computing is about using common and cheap hardware to create resources that can be used to dynamically deploy and run multiple applications that can be scaled either automatically or manually.

In a traditional architecture, we may have a database that has a certain capacity and is running on a particular server; if we need to scale it, we either upgrade the server resources, or we buy another server and configure it. In a cloud, we create a database server and scale the number of instances dynamically, getting rid of unused instances if we don't need them anymore. The resources freed on the cloud can be used for creating or scaling other applications, and the overall cloud capacity can grow just by adding more conventional servers to it.

This approach allows organizations to move to a pay-as-you-go model for their infrastructure: instead of buying servers up front, they pay for whatever resources they need for a specific period of time. Cloud applications are designed to be easily configured, since the overall idea is to have services that can be spawned easily in a short time. Cloud Native applications will use some kind of system, either provided by the cloud platform or by the application itself, to be configured when new instances are created. Since we need services running in our cloud that can easily be spawned and destroyed, the majority of clouds use some kind of containerized applications.
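
As a trivial, hypothetical sketch of that idea (the variable names are invented): a cloud-friendly service reads its configuration from the environment that the platform injects when it spawns a new instance, instead of from a hand-edited file on a specific server:

// Hypothetical configuration read from environment variables, so any freshly
// spawned instance picks up whatever values the cloud platform provides.
val databaseUrl: String = System.getenv("DATABASE_URL") ?: "jdbc:postgresql://localhost/app"
val maxConnections: Int = System.getenv("MAX_CONNECTIONS")?.toInt() ?: 10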

Containers

Containers are a virtualization method that allows an operating system to run an application in an isolated user space, controlling and limiting the resources for each contained application. An application that runs in a container works as if it were running in its own operating system, and most containers will not know that they are hosted on another operating system. This allows the host operating system to spawn or destroy those applications without affecting any other part of the system, and prevents one container from impacting another.

Since these containers run on a host system, they start faster than a normal virtual machine, which requires a new operating system to boot. However, this implies that we cannot spawn a container for a different system than the running host, so we cannot run a Windows application as a container process on a Linux host. Docker is probably the most widely used container system for cloud applications; however, different cloud providers may choose different systems to run their applications. We will learn more about this topic in Chapter 7, Creating Dockers.

Deployment models

Organizations can choose different deployment models when creating cloud applications; let's review the most common models:

  • Private cloud
  • Public cloud
  • Hybrid cloud

Private cloud

A private cloud is cloud infrastructure for a single organization, usually hosted internally in a self-hosted data center. Private clouds are usually capital intensive and require the allocation of space, hardware, and environmental controls. Their assets have to be refreshed periodically, requiring additional costs. They usually need to be built and managed internally, so they do not fully benefit from the focus on the how, not where concept.

Public cloud

Public cloud services are delivered over a network that is open to a public audience by a service provider that operates the infrastructure and its data centers. Providers manage access control and security for the organizations that use these services and usually allow connection through the internet, but organizations can choose to use direct connections if required. Public clouds may seem more expensive than a private cloud in the pay-as-you-go model; however, considering the cost of building, upgrading, and maintaining a private cloud, that is not always true.

Keeping your servers upgraded and security patched, with resilience and reliability, is neither easy nor cheap; think about the overall benefits of public clouds.

Hybrid cloud

Hybrid clouds try to take the benefits of mixing private and public clouds. An organization could choose to have a private cloud linked to a public cloud to handle peaks of capacity or additional resources. Some organizations may choose this approach because a critical part of their business needs to be managed in-house in a private cloud, while going to a public cloud for other matters.

Service models

There are several service models that can be offered to organizations by different cloud providers, and sometimes more than one is targeted by different products from the same provider.

Here are the most common:

  • Infrastructure as a Service (IaaS)
  • Platform as a Service (PaaS)
  • Software as a Service (SaaS)

Infrastructure as a Service

The most basic cloud service model offers computing infrastructure, virtual machines, and other resources as a service to its users. These platforms usually provide a high-level API or frontend to take care of low-level details such as networking, data partitioning, scaling, security, backup, and so on. All of these are usually delivered as raw elements, so the cloud users need to maintain, patch, and configure the different servers created by the platform.

Examples of these platforms are:

  • Amazon AWS
  • Google Compute Engine
  • Microsoft Azure Virtual Machines
  • Red Hat OpenStack

Platform as a Service

In this service model, the cloud platform provides services that allow customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure. Often this is facilitated via an application template system that allows the easy creation of new services, for example, by providing a standard template for an application type, framework, or even programming language.

With this, we can reduce complexity, the overall development of the application can be more effective, and the maintenance and enhancement of the application are easier. Usually, the cloud will provide capabilities to patch and configure the different servers.

Examples of these platforms are:

  • Google App Engine
  • IBM Bluemix
  • Microsoft Azure Cloud Services
  • Pivotal Cloud Foundry
  • Red Hat OpenShift

Software as a Service

In the SaaS model, users gain access to application software, which is why this model is sometimes called on-demand software. All the elements required for that software to run are managed internally on the platform. The cloud users do not need to take care of anything on the platform, or of the cloud itself, since everything is managed by the provider, usually on a pay-per-user basis.

Examples of these platforms are:

  • Google G Suite
  • Microsoft Office 365
  • Salesforce

Cloud Native microservices

Now that we have a better understanding of cloud computing, we need to think about why we should build Cloud Native microservices. If we have been following our microservices principles, we can easily deploy them in a cloud and take advantage of those platforms to benefit further from the microservices architecture. Our microservices can easily be scaled and managed, and since they are isolated and loosely coupled, they fit easily into containers.

When we create microservices, we can make them cloud aware and try to benefit from not just being microservices, but becoming Cloud Native applications, so they can fully benefit from the cloud computing model. Spring Cloud provides an easy-to-use framework to make our cloud services independent of the cloud platform that they are hosted on, while taking full advantage of the platform.
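
As a tiny sketch of that idea (assuming a Spring Cloud starter and a discovery service are available; the details come in Chapter 6, Creating Cloud-Native Microservices), a Kotlin Spring Boot application can opt into the platform's service discovery with a single annotation:

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.client.discovery.EnableDiscoveryClient

@SpringBootApplication
@EnableDiscoveryClient   // register with whatever discovery service the cloud provides
class OffersApplication

fun main(args: Array<String>) {
    runApplication<OffersApplication>(*args)
}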

We will expand this further in Chapter 6, Creating Cloud-Native Microservices.

At the time this book was written, the current snapshot version of Spring Boot 2 was Spring Boot 2.0.0 M7. The code bundle and examples are up to date with that version. Eventually, Spring Boot 2.0.0 will be released and the code bundle will be updated accordingly.

Summary

In this chapter, we got a clear understanding of what microservices are, their benefits, and how they evolved from SoA. We now have a set of principles that we can use to create them, and an overview of how Domain-Driven Design will allow us to evolve our applications as our requirements change. Following these designs, we can have a clean architecture that will help throughout our microservices' life cycle, from development to scaling or monitoring. We should also be familiar with the benefits of reactive systems and the cloud computing model, which will allow us to take microservices to the next level of industry standards.

But next, we need to start from the basics, so in the next chapter we will focus on how to get started with microservices in Kotlin using Spring Boot 2.0; along the way, we will learn about the tools that we will use to create them.


Key benefits

  • Write easy-to-maintain lean and clean code with Kotlin for developing better microservices
  • Scale your microservices in your own cloud with Docker and Docker Swarm
  • Explore Spring 5 functional reactive web programming with Spring WebFlux

Description

With Google's inclusion of first-class support for Kotlin in their Android ecosystem, Kotlin's future as a mainstream language is assured. Microservices help design scalable, easy-to-maintain web applications; Kotlin allows us to take advantage of modern idioms to simplify our development and create high-quality services. With 100% interoperability with the JVM, Kotlin makes working with existing Java code easier. Well-known Java systems such as Spring, Jackson, and Reactor have included Kotlin modules to exploit its language features. This book guides the reader in designing and implementing services, and producing production-ready, testable, lean code that's shorter and simpler than a traditional Java implementation. Reap the benefits of using the reactive paradigm and take advantage of non-blocking techniques to take your services to the next level in terms of industry standards. You will consume NoSQL databases reactively to allow you to create high-throughput microservices. Create cloud-native microservices that can run on a wide range of cloud providers, and monitor them. You will create Docker containers for your microservices and scale them. Finally, you will deploy your microservices in OpenShift Online.

Who is this book for?

If you are a Kotlin developer with a basic knowledge of microservice architectures and now want to effectively implement these services on enterprise-level web applications, then this book is for you.

What you will learn

  • Understand microservice architectures and principles
  • Build microservices in Kotlin using Spring Boot 2.0 and Spring Framework 5.0
  • Create reactive microservices that perform non-blocking operations with Spring WebFlux
  • Use Spring Data to get data reactively from MongoDB
  • Test effectively with JUnit and Kotlin
  • Create cloud-native microservices with Spring Cloud
  • Build and publish Docker images of your microservices
  • Scale microservices with Docker Swarm
  • Monitor microservices with JMX
  • Deploy microservices in OpenShift Online

Product Details

Publication date: Jan 29, 2018
Length: 414 pages
Edition: 1st
Language: English
ISBN-13: 9781788473491
Vendor: JetBrains


Table of Contents

13 Chapters
Understanding Microservices
Getting Started with Spring Boot 2.0
Creating RESTful Services
Creating Reactive Microservices
Reactive Spring Data
Creating Cloud-Native Microservices
Creating Dockers
Scaling Microservices
Testing Spring Microservices
Monitoring Microservices
Deploying Microservices
Best Practices
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.4 (8 Ratings)
5 star: 75%
4 star: 0%
3 star: 12.5%
2 star: 12.5%
1 star: 0%

Top Reviews

Richard Hedin May 16, 2018
5 stars
I recently accepted a job that, it turned out, involves lots of Kotlin, Spring, and microservices. Of those, the only one I have had experience with in the past is: microservices! This book takes me through the entire landscape of the "chatter" I have been hearing -- Docker, Hystrix, Zuul -- and explains where each piece fits in the landscape and how to use each piece. The book is written at exactly my pace. (I don't know how that happened.) Neither so much detail I can't find the thread, nor so sparse I can't find the thread. There are some complaints, but they are quibbles. Sometimes, the book reads like it was dictated, not typed. Homophones abound. And I don't think the writer's first language is English. But you know how you get into a conversation with someone who speaks heavily accented English and after an hour, you don't notice the accent? It's like that. All the ideas are clear. Also, I enjoy and appreciate a paragraph or two on how this solution came to be, historically, and what solutions were tried before the industry came to this point. I prefer to make my decisions from a position of depth of knowledge.
Amazon Verified review
Metin Öztürk Feb 04, 2020
5 stars
As a starter in Kotlin backend development, this book helped me a great deal. Clear, focused and practical.
Amazon Verified review
Kindle Customer Aug 14, 2018
5 stars
Very comprehensive and easy to read
Amazon Verified review
Michael Jones Aug 18, 2018
5 stars
Most teams would love to be where this book describes. I knew my services could be better; this book gives me a destination and a roadmap for getting there.
Amazon Verified review
Noel A. Hahn Jun 12, 2019
5 stars
As others have stated, there are a decent amount of typos, but I really have to give the writer props. I've read a decent amount of Spring books and online material that have never been put together in such a complete and practical way. I bought this to learn Reactive Programming with Spring and best practices with Kotlin, and the couple of chapters that actually covered this material were very informative. However, the entirety of this book has you go from learning the very basics to creating real production-ready microservices and deploying them using modern tools. And for the small amount that is not covered, there are websites referenced. Best read on Spring by far, whether it be a book or online tutorials.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing

When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as Publishers and those of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print + eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, please contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9 or later.