The 12-factor app

In order to build a distributed, microservices-based application that can be deployed across cloud providers, engineers at Heroku came up with 12 factors that need to be implemented by any modern cloud-native application:

  • Single codebase: Each application (read: microservice) has exactly one codebase, tracked in revision control, from which it can be deployed many times (to development, test, staging, and production environments). Two microservices never share the same codebase. This model gives the flexibility to change and deploy a service without impacting other parts of the application.
  • Dependencies: The application must explicitly declare its code dependencies and package them with the application or microservice, as part of the microservice JAR/WAR file. This helps isolate dependencies across microservices and reduces side effects from multiple versions of the same JAR.
  • Config: The application configuration data is moved out of the application or microservice and externalized through a configuration management tool. The application or microservice picks up its configuration based on the environment in which it is running, allowing the same deployment unit to be promoted across environments (a minimal sketch of this factor, together with port binding, appears after this list).
  • Backing services: Access to every external resource should be through an addressable URL, for example an SMTP URL, database URL, service HTTP URL, queue URL, or TCP URL. This allows the URLs to be externalized to the config and managed per environment.
  • Build, release, and run: Building, releasing, and running are treated as three separate steps. The build produces an immutable artifact; at run time, this artifact picks up the configuration relevant to the environment it runs in (development, testing, staging, or production).
  • Processes: The microservice is built on, and follows, the shared-nothing model. The services are stateless, and state is externalized to either a cache or a data store. This allows seamless scalability and lets a load balancer or proxy send requests to any instance of the service.
  • Port binding: The microservice is built within a container. The service will export and bind all its interfaces through ports (including HTTP).
  • Concurrency: The microservice process is scaled out, meaning that, to handle increased traffic, more microservice processes are added to the environment. Within the microservice process, one can use the reactive model to optimize resource utilization.
  • Disposability: The idea is to build a microservice as immutable, with a single responsibility, to maximize robustness and achieve faster boot-up times. Immutability also lends itself to service disposability.
  • Dev/prod parity: The environments across the application life cycle—DEV, TEST, STAGING, and PROD—are kept as similar as possible to avoid any surprises later.
  • Logs: Within the immutable microservice instance, the logs generated as part of the service processing are candidates for state. These logs should be treated as event streams and pushed out to a log aggregator infrastructure.
  • Admin processes: The microservice instances are long-running processes that continue unless they are killed or replaced with newer versions. All other admin and management tasks are treated as one-off processes:
Figure: The 12-factor app
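As a minimal sketch of the config and port binding factors, the following code reads its settings from environment variables and binds its own HTTP port using the JDK's built-in HttpServer. The variable names (PORT, DB_URL) and the /health endpoint are illustrative assumptions rather than anything prescribed by the 12 factors:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Config comes from the environment; the service binds and exposes its own
// HTTP port instead of relying on an external application server.
public class TwelveFactorService {

    public static void main(String[] args) throws Exception {
        // Config: externalized via environment variables, with safe defaults for local runs
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        String dbUrl = System.getenv().getOrDefault("DB_URL", "jdbc:h2:mem:local");

        // Port binding: the service itself creates and binds the listener
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = ("{\"status\":\"UP\",\"db\":\"" + dbUrl + "\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        System.out.println("Listening on port " + port);
    }
}
```

The same JAR can be run unchanged in every environment; only the environment variables differ.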

Applications that follow the 12 factors make no assumptions about the external environment, allowing them to be deployed on any cloud provider platform. This also allows the same set of tools, processes, and scripts to be run across environments, so distributed microservice applications can be deployed in a consistent manner.

Microservices-enabling service ecosystem

In order to successfully run microservices, certain enabling components/services are needed. These enabling services can be categorized as PaaS services that support the building, releasing, deployment, and running of microservices.

In the case of the cloud-native model, these services are available as PaaS services from the cloud provider itself:

  • Service discovery: When the application is decomposed into a microservices model, a typical application may be composed of hundreds of microservices. With each microservice running multiple instances, we soon have thousands of microservice instances running. In order to discover a service endpoint, it is essential to have a service registry that can be queried for all of the instances of the microservice. In addition, the service registry tracks the heartbeat of every service instance to make sure that all services are up and running.
    Further, the service registry helps in load balancing requests across the service instances. We can have two models for load balancing (a minimal client-side sketch follows the figure below):
    • Client-side load balancing:
      • A service consumer asks the registry for the instances of a service
      • The service registry returns the list of instances where the service is running
      • The consumer picks one of the instances and calls it directly
    • Server-side load balancing:
      • The service endpoint is hidden from the consumer behind Nginx, an API gateway, or another reverse proxy

    Typical products in this space are Consul and ZooKeeper:

Figure: The service registry
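The following is a minimal sketch of the client-side load-balancing model described above: the consumer asks a registry for the live instances of a service and rotates requests across them. The ServiceRegistry interface is hypothetical; in practice it would wrap a Consul or ZooKeeper client:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side load balancing: look up live instances, then round-robin over them.
public class ClientSideLoadBalancer {

    public interface ServiceRegistry {
        // Returns the base URLs of all healthy instances of the named service
        List<String> lookup(String serviceName);
    }

    private final ServiceRegistry registry;
    private final AtomicInteger counter = new AtomicInteger();

    public ClientSideLoadBalancer(ServiceRegistry registry) {
        this.registry = registry;
    }

    // Round-robin over whatever instances the registry currently reports
    public String chooseInstance(String serviceName) {
        List<String> instances = registry.lookup(serviceName);
        if (instances.isEmpty()) {
            throw new IllegalStateException("No live instances of " + serviceName);
        }
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}
```

With a stubbed registry such as name -> List.of("http://10.0.0.1:8080", "http://10.0.0.2:8080"), repeated calls to chooseInstance("catalog-service") would alternate between the two instances.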
  • Config server: The microservice needs to be initialized with multiple parameters (for example, database URL, queue URL, functional parameters, and dependency flags). Managing properties in files or environment variables becomes unwieldy beyond a certain number. To manage these properties across environments, all such configuration is managed externally in a configuration server. At boot time, microservices load their properties by invoking the API on the config server (a minimal client sketch follows the figure below).
    Microservices can also register listeners for changes to the properties on the config server, so any runtime change of a property is picked up immediately. The properties are typically categorized at multiple levels:
    • Service-specific properties: Hold all properties tied to the microservice
    • Shared properties: Hold properties that might be shared between services
    • Common properties: Hold properties that are common across services

    The config server can back these properties with a source-control system. Typical products in this space are Consul, Netflix Archaius, and Spring Cloud Config Server:

Figure: The config server
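A minimal sketch of a microservice pulling its properties from a config server at boot time might look as follows. The endpoint layout (/config/{service}/{env}) and the key=value response format are assumptions for illustration; a Spring Cloud Config or Consul client would replace this hand-rolled call in practice:

```java
import java.io.StringReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Properties;

// Loads externalized configuration over HTTP at service start-up.
public class ConfigClient {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String configServerUrl;

    public ConfigClient(String configServerUrl) {
        this.configServerUrl = configServerUrl;
    }

    public Properties load(String serviceName, String environment) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(configServerUrl + "/config/" + serviceName + "/" + environment))
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

        Properties props = new Properties();
        props.load(new StringReader(response.body()));  // assumes key=value lines
        return props;
    }
}
```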
  • Service management/monitoring: A typical business application may get decomposed into around 400 microservices. Even running just two to three instances of each of these microservices, we would be managing over 1,000 microservice instances. Without an automated model, managing and monitoring these services becomes an operational challenge. The following are the key metrics that need to be managed and monitored:
    • Service health: Each service needs to publish its health status. These need to be managed/tracked to identify slow or dead services.
    • Service metrics: Each service also publishes throughput metrics data, such as the number of HTTP requests/responses, the request/response size, and the response latency.
    • Process info: Each service will publish JVM metrics data (such as heap utilization, the number of threads, and the process state), the kind of data typically available in Java VisualVM (a minimal sketch follows the figure below).
    • Log events as stream: Each service can also publish log events as a set of streaming events.

    All of this information is pulled from the services and tied together to manage and monitor the application services landscape. Two types of analysis need to be done: event correlation and correction decisions. Alerts and actuation services are built as part of the service monitoring system. For example, if a certain number of service instances needs to be maintained and the count drops (an instance becomes unavailable due to a failed health check), an actuation service can take that event as a trigger to add another instance of the same service.

    Further, in order to track the service call flow through the microservices model, there is third-party software available that can create a request identifier and track how the service call flows through the microservices. This software typically deploys agents onto the containers, which weave themselves into the services and track the service metrics:

Figure: Service metrics
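As a minimal sketch of the process info metrics mentioned above, the following collects heap utilization and thread count from the standard JVM management beans. In a real service these values would be exposed over a metrics endpoint or pushed to the monitoring pipeline; here they are simply collected and printed:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Collects basic JVM process metrics via the platform MXBeans.
public class ProcessMetrics {

    public static String snapshotJson() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        long heapUsed = memory.getHeapMemoryUsage().getUsed();
        long heapMax = memory.getHeapMemoryUsage().getMax();
        int threadCount = threads.getThreadCount();

        return String.format(
                "{\"heapUsedBytes\":%d,\"heapMaxBytes\":%d,\"threadCount\":%d}",
                heapUsed, heapMax, threadCount);
    }

    public static void main(String[] args) {
        System.out.println(snapshotJson());
    }
}
```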
  • Container management/orchestration: Another key infrastructure piece of the microservice environment is container management and orchestration. The services are typically bundled in a container and deployed in a PaaS environment. The environment can be based on an OpenShift model, a Cloud Foundry model, or a pure VM-based model, depending on whether the services are deployed on a private or a public cloud. To deploy and manage the dependencies between the containers, there is a need for container management and orchestration software. Typically, it should be able to understand the interdependencies between the containers and deploy the containers as an application. For example, if the application has four pieces (one for the UI, two for business services, and one for the data store), all of these containers should be tagged together and deployed as a single unit, with the interdependencies and the right order of instantiation injected.
  • Log aggregation: One of the 12 factors is treating logs as event streams. The containers are meant to be stateless, while log statements are stateful events that need to be persisted beyond the life of the containers. As a result, all logs from the containers are treated as event streams that can be pushed or pulled onto a centralized log repository, where they are aggregated and various models can be run on them to generate alerts. One can track security and failure events through these logs and feed them into the service management/monitoring system for further action (a minimal sketch of stdout event-stream logging follows the figure below):
Figure: Log aggregation
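A minimal sketch of treating logs as an event stream, assuming the platform (for example, a Fluentd or Logstash shipper) collects the container's stdout: the service emits one structured event per line and never writes to local log files. The field names are illustrative:

```java
import java.time.Instant;

// Emits one JSON log event per line on stdout; aggregation is the platform's job.
public class EventStreamLogger {

    public static void log(String level, String service, String message) {
        // One JSON object per line so the log aggregator can parse the stream
        System.out.println(String.format(
                "{\"timestamp\":\"%s\",\"level\":\"%s\",\"service\":\"%s\",\"message\":\"%s\"}",
                Instant.now(), level, service, message));
    }

    public static void main(String[] args) {
        log("INFO", "order-service", "Order 42 accepted");
        log("ERROR", "order-service", "Payment gateway timed out");
    }
}
```

A real service would use a logging framework with a JSON encoder; the point is only that the destination is the stream, not a file inside the container.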
  • API Gateway/management: The services are meant to be simple and follow the single responsibility model. The question arises: who handles other concerns, such as service authentication, service metering, service throttling, service load balancing, and service freemium/premium models? This is where API Gateway or management software comes into the picture. The API Gateway handles all such concerns on behalf of the microservice; it provides multiple options for managing the service endpoints and can also provide transformation, routing, and mediation capabilities. The API Gateway is more lightweight than a typical enterprise service bus (a minimal throttling sketch follows the figure below):
Figure: API Management Gateway
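As a minimal sketch of one concern a gateway takes off the microservice, the following fixed-window throttle decides whether a client request may proceed. It is deliberately simplistic; a production gateway product would use a more robust algorithm and shared state across gateway nodes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Per-client request throttling using a simple fixed-window counter.
public class ThrottlingFilter {

    private final int limitPerWindow;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    public ThrottlingFilter(int limitPerWindow, long windowMillis) {
        this.limitPerWindow = limitPerWindow;
        this.windowMillis = windowMillis;
    }

    // Returns true if the request may proceed, false if the client is over its quota
    public boolean allow(String clientId) {
        long now = System.currentTimeMillis();
        Window window = windows.compute(clientId, (id, current) ->
                (current == null || now - current.start >= windowMillis)
                        ? new Window(now) : current);
        return window.count.incrementAndGet() <= limitPerWindow;
    }

    private static final class Window {
        final long start;
        final AtomicInteger count = new AtomicInteger();
        Window(long start) { this.start = start; }
    }
}
```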
  • DevOps: Another key aspect is the continuous integration/deployment pipeline, coupled with the automated operations needed to set up microservice-based applications. As developers write code, it goes through a series of steps that are automated and mapped to gating criteria before regression-tested code is released:
Figure: Development life cycle