You're reading from Containers for Developers Handbook: A practical guide to developing and delivering applications using software containers, by Francisco Javier Ramírez Urea (Packt, November 2023, 1st edition, ISBN-13 9781805127987, 490 pages).

From monoliths to distributed microservice architectures

Application architectures evolve continuously alongside the technology that supports them. Throughout the history of computing, every time hardware and software engineering closes a technical gap, software architects rethink how applications can take advantage of the new capabilities. For example, increases in network speed made it possible to distribute application components across different servers, and nowadays it is common to spread those components across data centers in multiple countries.

To understand how computers were adopted by enterprises, we must go back to the old mainframe days (before the 1990s). This can be considered the basis of what we call unitary architecture – one big computer with all the processing functionality, accessed by users through terminals. Following this, the client-server model became very popular as technology also advanced on the user side. Server technologies improved while clients gained more and more functionality, offloading work from the servers that published the applications. We consider both models monolithic because all application components run on one server; even if the database is decoupled from the rest of the components, running all the important components on a single dedicated server is still considered monolithic. Both of these models were very difficult to upgrade when performance started to drop; newer hardware with higher specifications was always required. They also suffered from availability issues: any maintenance task on either the server or the application layer would probably lead to a service outage, affecting normal system uptime.

Exploring monolithic applications

Monolithic applications are those in which all functionality is provided by just one component, or by a set of components so tightly integrated that they cannot be decoupled from one another. This makes them hard to maintain. They weren't designed with reusability or modularity in mind, so every time developers need to fix an issue, add new functionality, or change the application's behavior, the entire application is affected because, for example, the whole application's code has to be recompiled.

Providing high availability for monolithic applications required duplicated hardware, quorum resources, and continuous visibility between application nodes. This hasn't changed much today, although we now have many more options for providing high availability. As applications grew in complexity and gained responsibility for many tasks and functionalities, we started to decouple them into a few smaller components (with specific functions such as the web server, the database, and so on), although core components were kept immutable. Running all application components together on the same server was still preferable to distributing them into smaller pieces because network communication speeds weren't high enough, and local filesystems were usually used for sharing information between application processes. These applications were difficult to scale (more hardware resources were required, usually leading to acquiring newer servers) and difficult to upgrade (the testing, staging, and certification environments used before production required the same, or at least compatible, hardware). In fact, some applications could run only on specific hardware and operating system versions, and developers needed workstations or servers with the same hardware or operating system to be able to develop fixes or new functionality for these applications.

Now that we know how applications were designed in the early days, let’s introduce virtualization in data centers.

Virtual machines

The concept of virtualization – providing a set of physical hardware resources for specific purposes – was already present in the mainframe days before the 1990s, but back then, it was closer to time-sharing at the compute level. The concept we commonly associate with virtualization comes from the hypervisor, a technology introduced in the late 1990s that allowed the creation of complete virtual servers running their own virtualized operating systems. The hypervisor is a software component able to virtualize and share host resources among virtualized guest operating systems. In the 1990s, the adoption of Microsoft Windows and the emergence of Linux as a server operating system in the enterprise world established x86 servers as the industry standard, and virtualization helped the growth of both in our data centers, improving hardware usage and simplifying server upgrades. The virtualization layer made virtual hardware upgrades easy when applications required more memory or CPU and also improved the process of providing services with high availability. Data centers became smaller as newer servers could run dozens of virtual servers, and as physical servers' hardware capabilities increased, so did the number of virtualized servers per node.

In the late 1990s, servers started to become services. This means that companies started to think about the services they provided rather than the way they provided them. Cloud providers arrived to serve small businesses that didn't want to acquire and maintain their own data centers, and a new architecture model was created that became very popular: the cloud computing infrastructure model. Amazon later launched Amazon Web Services (AWS), providing storage, computation, databases, and other infrastructure resources, and soon after that, Elastic Compute Cloud (EC2) entered the virtualization arena, allowing you to run your own servers with a few clicks. Cloud providers also exposed well-documented application programming interfaces (APIs) for automation, and the concept of Infrastructure as Code (IaC) was introduced: we could now create our virtualization instances using programmatic, reusable code. This model also changed the service/hardware relationship, and what seemed like a good idea at first – using cloud platforms for every enterprise service – became a problem for big enterprises, which quickly saw costs increase due to network bandwidth usage and insufficient control over their use of cloud resources. Controlling cloud service costs soon became a priority for many enterprises, and many open source projects started with the premise of providing cloud-like infrastructure, with elasticity and easy provisioning as their key goals. OpenStack was the first of these, split into smaller projects, each focused on a different functionality (storage, networking, compute, provisioning, and so on). The idea of having on-premises cloud infrastructure led software and infrastructure vendors into new alliances, ultimately providing data center technologies with the required flexibility and resource distribution. These platforms also provide APIs for quickly deploying and managing provisioned infrastructure, and nowadays, we can provision either cloud resources or resources in our own data centers using the same code with few changes.
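
As a simple illustration of the IaC idea described above, the following sketch provisions a virtual server programmatically through a cloud provider's API (here AWS, via the boto3 Python library). The AMI ID, region, and instance type are placeholders invented for the example, not values from this book.

```python
# Minimal IaC-style sketch: provisioning a virtual server through a cloud API.
# The AMI ID, region, and instance type are illustrative placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Request a single small virtual machine from the provider.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()  # block until the VM is reported as running
instance.reload()              # refresh cached attributes after the state change
print(f"Provisioned instance {instance.id} in state {instance.state['Name']}")
```

Because the same script can be versioned and re-run, the infrastructure it describes becomes reproducible code rather than a manual procedure.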

Now that we have a good idea of how server infrastructures work today, let’s go back to applications.

Three-tier architecture

Even with these decoupled infrastructures, applications can still be monoliths if we don't prepare them to be separated into different components. Elastic infrastructures let us distribute resources, so it makes sense to distribute application components as well. Network communication is essential here: technological evolution has increased speeds to the point where we can consume network-provided services as if they were local, which makes distributed components practical.

Three-tier architecture is a software application architecture in which the application is decoupled into three logical and physical computing tiers. We have the presentation tier, or user interface; the application tier, or backend, where data is processed; and the data tier, where the data used by the application is stored and managed, such as in a database. This model was in use even before virtualization arrived on the scene, but you can imagine the improvement of being able to distribute application components across different virtual servers instead of adding more physical servers to your data center.
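
To make the separation concrete, here is a minimal sketch of the application and data tiers; the presentation tier would be any browser or HTTP client. It assumes Flask is available and uses SQLite as a stand-in database, and the endpoint and table names are purely illustrative.

```python
# Minimal three-tier sketch: an application tier (HTTP API) backed by a data tier (SQLite).
# Any browser or HTTP client acts as the presentation tier.
import sqlite3

from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "catalog.db"  # illustrative database file


def init_db() -> None:
    # Data tier: a single illustrative table with one seeded row.
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT OR IGNORE INTO products (id, name) VALUES (1, 'example product')")


@app.route("/products")
def list_products():
    # Application tier: process the request and query the data tier.
    with sqlite3.connect(DB_PATH) as conn:
        rows = conn.execute("SELECT id, name FROM products").fetchall()
    return jsonify([{"id": r[0], "name": r[1]} for r in rows])


if __name__ == "__main__":
    init_db()
    app.run(port=8080)  # the presentation tier consumes http://localhost:8080/products
```

Each tier can now be scaled, maintained, and upgraded on its own server (virtual or physical) without touching the others.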

Just to recap before continuing our journey: the evolution of infrastructure and network communications has allowed us to run applications distributed into components, but in the three-tier model, we still only have a few components per application. Note that different roles are involved in maintaining such applications, as different software technologies are usually employed; for example, we need database administrators, middleware administrators, and infrastructure administrators for systems and network communications. Although we are still tied to servers (virtual or physical), application component maintenance, scalability, and availability are significantly improved. We can manage each component in isolation, applying maintenance tasks and fixes and adding new functionality decoupled from the application core. Developers can focus on either frontend or backend components, and some coding languages are specialized for each layer – for example, JavaScript was the language of choice for frontend developers (although it later evolved to cover backend services too).

As Linux systems grew in popularity in the late 1990s, applications were distributed into different components, and eventually, different applications working together and running on different operating systems became a new requirement. Shared files, provided by network filesystems backed by network-attached storage (NAS) or more complex storage area network (SAN) systems, were used at first, but Simple Object Access Protocol (SOAP) and other message queueing technologies helped applications exchange data between components and manage their information without filesystem interactions. This helped decouple applications into more and more distributed components running on top of different operating systems.

Microservices architecture

The microservices architecture model goes a step further, decoupling applications into smaller pieces, each with just enough functionality to be considered a component. This model gives each component a completely independent life cycle and frees us to choose whichever coding language fits the functionality in question best. Application components are kept light in terms of functionality and content, which should mean they use fewer host resources and respond faster to start and stop commands. Fast restarts are key to resilience and help us keep our applications up with fewer outages. Application health should not depend on infrastructure external to the component; we should improve each component's logic and resilience so that it can start and stop as fast as possible. This means that changes to an application can be applied quickly and, in the case of failure, the required processes will be up and running again in seconds. It also simplifies managing the application components' life cycle, as we can upgrade components very quickly and prepare circuit breakers to manage stopped dependencies.

Microservices follow the stateless paradigm; therefore, application components should be stateless. This means that a microservice's state must be abstracted away from its logic and execution. This is key to being able to run multiple replicas of an application component, distributed across different nodes from a pool.
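
A common way to achieve this is to keep all state in an external store so that any replica of the service can handle any request. The following is a hedged sketch using Flask and the redis Python client; the Redis host name and the key format are assumptions made for the example, not prescriptions.

```python
# Minimal sketch of a stateless microservice: no state lives in the process itself,
# so any replica behind a load balancer can serve any request.
# The Redis host name and key naming are illustrative assumptions.
import redis
from flask import Flask, jsonify

app = Flask(__name__)
store = redis.Redis(host="redis.internal", port=6379, decode_responses=True)


@app.route("/visits/<user_id>", methods=["POST"])
def record_visit(user_id: str):
    # State is written to the external store, not kept in process memory.
    count = store.incr(f"visits:{user_id}")
    return jsonify({"user": user_id, "visits": count})


if __name__ == "__main__":
    app.run(port=8080)
```

Because every replica reads and writes the same external store, replicas can be added, removed, or restarted on any node without losing the application's state.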

This model also introduced the concept of run everywhere, where an application should be able to run its components on either cloud or on-premise infrastructures, or even a mix of both (for example, the presentation layer for components could run on cloud infrastructure while the data resides in our data center).

Microservices architecture provides the following helpful features:

  • Applications are decoupled into different smaller pieces that provide different features or functionalities; thus, we can change any of them at any time without impacting the whole application.
  • Decoupling applications into smaller pieces lets developers focus on specific functionalities and allows them to use the most appropriate programming language for each component.
  • Interaction between application components is usually provided via Representational State Transfer (REST) API calls over HTTP, as sketched after this list. RESTful systems aim for fast performance and reliability, and they scale well.
  • Developers describe which methods, actions, and data they provide in their microservice, which are then consumed by other developers or users. Software architects must standardize how application components talk with each other and how microservices are consumed.
  • Distributing application components across different nodes allows us to group microservices onto nodes for the best performance, closer to data sources and with better security. We can create nodes with different features to provide the best fit for our application components.
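
The following minimal sketch shows one component consuming another's REST API over HTTP, using Python's requests library; the service URL and the JSON response fields are placeholders invented for the example.

```python
# Minimal sketch: one microservice consuming another's REST API over HTTP.
# The service URL and JSON fields are illustrative placeholders.
import requests

ORDERS_API = "http://orders-service.internal:8080/api/v1"  # hypothetical service endpoint


def get_order_status(order_id: int) -> str:
    # A simple GET request to another component; the response format is agreed
    # upon in advance by the teams owning each microservice.
    response = requests.get(f"{ORDERS_API}/orders/{order_id}", timeout=5)
    response.raise_for_status()
    return response.json()["status"]


if __name__ == "__main__":
    print(get_order_status(42))
```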

Now that we’ve learned what microservices architecture is, let’s take a look at its impact on the development process.
