The Art of Micro Frontends: Build websites using compositional UIs that grow naturally as your application scales

Chapter 1: Why Micro frontends?

Every journey needs to start somewhere. Quite often, we'll find ourselves looking for a solution to a particular problem. Knowing the problem in depth is usually part of the solution. Without a detailed understanding of the various challenges, finding a proper way to handle the situation seems impossible.

After almost three decades of web development, we've reached a point where anything is possible. The boundaries between classical desktop applications, mobile native applications, and websites have been torn apart. These days, many websites are actually web applications and provide a tool-like character.

As developers, we find ourselves in a quite difficult situation. Very often, the cognitive load to handle the underlying complexities of code bases is way too high. Consequently, inefficiencies and bugs haunt us.

One possible way out of this dilemma is the architectural pattern of micro frontends. In this chapter, we'll discover the reasons why micro frontends have become quite a popular choice for projects of any kind.

We will cover the following key topics in this chapter:

  • Evolution of web applications
  • Everything becomes micro
  • Emerging web standards
  • Faster time to market (TTM)

Without further ado, let's get right into the topic.

Evolution of web applications

Before looking for reasons to use micro frontends, we should look at why micro frontends came to exist at all. How did the web evolve from a small proof of concept (POC) running on a NeXT computer in a small office at Conseil Européen pour la Recherche Nucléaire/The European Organization for Nuclear Research (CERN) to becoming a central piece of the information age?

Programming the web

My first contact with web development was in the middle of the 1990s. Back then, the web was mostly composed of static web pages. While some people were experienced enough to bring in dynamic websites using the Common Gateway Interface (CGI) technology, as shown in Figure 1.1, most webmasters (the term commonly used back then for somebody in charge of a website) either lacked this knowledge or did not want to spend money on server-side rendering (SSR). Instead, everything was crafted by hand upfront:

Figure 1.1 – The web changes from static to dynamic pages

To avoid duplication and potential inconsistencies, a new technology was used: frames declared in <frameset> tags. Frames allowed websites to be displayed within websites. Effectively, this enabled the reuse of things such as a menu, header, or footer on different pages. While frames have been removed from the HyperText Markup Language 5 (HTML5) specification, they are still available in all browsers. Their successor still lives on today: the inline frame (iframe) <iframe> tag.
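
As a small sketch of the (now obsolete) approach, a frameset reusing a shared menu might have looked like this; the file names and frame names are just placeholders:

<!-- index.html: the menu is maintained once and reused on every page -->
<frameset cols="200,*">
  <frame src="menu.html" name="menu">
  <frame src="content.html" name="content">
  <noframes>Your browser does not support frames.</noframes>
</frameset>

Links inside the menu would then carry target="content" so that the selected page loads only into the content frame.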

One of the downsides of frames was that link handling became increasingly difficult. To get the best performance, the right target had to be selected explicitly. Another difficulty was encountered in correct Uniform Resource Locator (URL) handling. Since navigation was only performed on a given frame, the displayed page address did not change.

Consequently, people started to look for alternatives. One way was to use SSR without all the complexity. By introducing some special HTML comments as placeholders containing some server instructions, a generic layout could be added that would be dynamically resolved. The name for this technique was Server Side Includes, more commonly known as SSI.
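
As a rough sketch, such an SSI-enabled page could pull in shared fragments through these comment-style directives; the fragment paths are only illustrative:

<!-- page.shtml: the web server resolves the directives before sending the page -->
<!--#include virtual="/fragments/header.html" -->
<p>Welcome! Today is <!--#echo var="DATE_LOCAL" -->.</p>
<!--#include virtual="/fragments/footer.html" -->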

The Apache web server was among the first to introduce SSI support through the mod_include module. Other popular web servers, such as Microsoft's Internet Information Services (IIS), followed quickly afterward. The progressive nature and Turing completeness of the allowed instructions made SSI an instant success.

It was around that time that websites became more and more dynamic. To tame the CGI, many solutions were implemented. However, only when a new programming language called PHP: Hypertext Preprocessor (PHP) was introduced did SSR become mainstream. The cost of running PHP was so low that SSI was almost forgotten.

The social web

With the rise of Web 2.0 and the capabilities of JavaScript—especially for dynamic data loading at runtime, known as Asynchronous JavaScript and XML (AJAX)—the web community faced another challenge. SSR was no longer a silver bullet. Instead, the dynamic part of websites had to live on the client—at least partially. The complexity of dividing an application into multiple areas (build, server, client), along with their testing and development, skyrocketed.

As a result, frameworks for client-side rendering (CSR) emerged. The first-generation frameworks such as Backbone.js, Knockout.js, and AngularJS all came with architecture choices similar to the popular frameworks for SSR. They did not intend to place the full application on the client, and they did not intend to grow indefinitely.

In practice, this looked quite different. The application size increased and a lot of code was served to the client that was never intended to be used by the current user. Images and other media were served without any optimizations, and the web became slower and slower.

Of course, we've seen the rise of tools to mitigate this. While JavaScript minification is nearly as old as JavaScript itself, other tools for image optimization and Cascading Style Sheets (CSS) minification came about to improve the situation too.

The missing link was to combine these tools into a single pipeline. Thanks to Node.js, the web community received a truly magnificent gem. Now, we had a runtime that could not only bring JavaScript to the server side, but also allowed the use of cross-platform tooling. New task runners such as Grunt or Gulp started making frontend code easier to develop efficiently.

The Web 2.0 movement also popularized the reuse of web services directly from the UI running in the browser, as shown in the following figure:

Figure 1.2 – With the Web 2.0 movement, services and AJAX enter common architectures

Having dedicated backend services makes sense for multiple use cases. First, we can leverage AJAX in the frontend to do partial reloads, as outlined in Figure 1.2. Another use case is to allow other systems to access the information, too. This way, useful data can be monetized as well. Finally, the separation between representation (usually using HTML) and structure (using formats such as Extensible Markup Language (XML) or JavaScript Object Notation (JSON)) allows reuse across multiple applications.
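
As a minimal sketch of such a partial reload, written with today's fetch API for brevity, the following could update just one fragment of the page; the /api/products endpoint and the products element are assumptions made up for this example:

// Ask a dedicated backend service for structured data (JSON)
// and update only a fragment of the page instead of reloading it.
async function refreshProducts() {
  const response = await fetch('/api/products');
  const products = await response.json();
  const list = document.querySelector('#products');
  list.innerHTML = products.map((p) => `<li>${p.name}</li>`).join('');
}

refreshProducts();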

Separation of frontend and backend

The enhanced rendering capabilities on the frontend also accelerated the separation between backend and frontend. Suddenly, there was no giant monolith that handled user activities, page generation, and database queries in one code base. Instead, the data handling was put into an application programming interface (API) layer that could be used for page generation in SSR scenarios or directly from code running in the user's browser. In particular, applications that handle all rendering in the client have been labeled single-page applications (SPAs).

Such a separation did not only bring benefits—the design of these APIs became an art of its own. Providing suitable security settings and establishing a great performance baseline became more difficult. Ultimately, this also became a challenge from a deployment perspective.

From a user experience perspective, the capability of doing partial page updates is not without challenges either. Here, we rely on indicators such as loading spinners, skeleton styles, or other methods to transport the right signals to the user. Another thing to take care of here is correct error handling. Should we retry? Inform the user? Use a fallback? Multiple possibilities exist; however, besides making a good decision here, we also need to spend some time implementing and testing it.

Nevertheless, for many applications, the split into a dedicated frontend and a dedicated backend part is definitely a suitable choice. One reason is that dedicated teams can work on both sides of the story, thus making the development of larger web applications more efficient.

The gain in development efficiency, as we will see, is a driving force behind the move toward micro frontends. Let's see how the trend toward modularization got started in web development.

Everything becomes micro

We already discovered that the eventual split into dedicated parts for the frontend and backend of an application may be quite useful. Staying on the backend, we've seen the rise of smaller services that need to be orchestrated and joined together: so-called microservices.

From SOA to microservices

While the idea of a service-oriented architecture (SOA) is not new, what microservices brought to the table is freedom of choice. The term SOA was introduced in the early 2000s to attempt exactly this kind of decomposition. However, SOA came with a broad palette of requirements and constraints: from the communication protocol, to the discoverability of the API, up to the consuming applications, everything was either predetermined—or at least strongly recommended.

With the ultimate failure of SOA, the community tried to stay away from building too many services too quickly. In the end, it was not only about the constraints imposed by SOA, but also about the orchestration struggle of deploying multiple services with potential interdependencies. How could we go ahead and reliably deploy multiple services into—most likely—two or three environments?

When Docker was introduced, it solved a couple of problems. Most importantly, it brought about a way to reliably get deployments working right. Combined with tools such as Ansible, releasing multiple services seemed feasible and a possible improvement. Consequently, multiple teams around the world started breaking existing boundaries. The common design choices used by these teams would then be taken to form a general architecture style, known as microservices.

When using microservices, there was only a single recommendation: build services that are responsible for one thing. The method of communication (usually REpresentational State Transfer (REST) with JSON) was not determined. Whether services should have their own databases or not was left to the developer. Whether services should communicate with each other directly or indirectly could be decided by the architect. Ultimately, best practices for many situations emerged, cementing the success of microservices even more.

While some purists see microservices as "fine-grained SOA," most people believe in the freedom of choice of applying only the single-responsibility principle (SRP). The following figure shows the current state of the art for web applications using microservices with a SPA:

Figure 1.3 – Current state-of-the-art web applications using microservices with a SPA

With people busy building backends using microservices, the frontend needed to follow this dynamic approach too. Consequently, a lot of applications are heavily built on top of JavaScript, leveraging SPAs to avoid the complexities of defining both SSR and CSR. This is also shown in Figure 1.3.

Advantages of microservices

Why should a book on micro frontends spend any time looking at the history of microservices or their potential advantages? Well, for one, both start with micro, which could be a coincidence but actually is not. Another reason is that, as we will see, the background and history of both architectures are actually quite analogous. Since microservices are older than micro frontends, they can be used to give us a view into the future, which can only be helpful when making design decisions.

How did microservices become the de facto standard for doing backends these days? Sure—a good monolith is still the right choice for many projects but is not commonly regarded as a "popular choice." Most larger projects immediately head for the microservice route, independent of any evaluations or other constraints they could encounter.

Besides their obvious popularity, microservices also come with some intrinsic advantages, outlined as follows:

  • Failures only directly concern a single service.
  • Multiple teams can work independently.
  • Deployments are smaller.
  • Frameworks and programming languages can be chosen freely.
  • The initial release time is shorter.
  • They have clear architectural boundaries.

Every project willingly choosing microservices hopes to benefit from one or more of the advantages from this list, but there are—of course—some disadvantages too.

Disadvantages of microservices

For every advantage, there is also a disadvantage. What really matters is our perspective and how we weigh the arguments. For instance, if failures only concern a single service, this results in increased complexity for debugging. A service that depends on another service that just failed will also behave strangely. Finding the root cause for why another service failed and working out how to mitigate this situation is increasingly difficult.

Nevertheless, if we go for microservices, we should value their advantages over their disadvantages. If we cannot live with the disadvantages, then we need to either look for a different pattern or try our best to make the disadvantage in question go away as much as possible.

Another example of where an advantage can lead to a disadvantage is in the freedom of choice in terms of frameworks and programming languages. Depending on the number of developers, a certain number of languages may be supported as well. Anything that exceeds that number will be unmaintainable. It's certainly quite fashionable to try new languages and frameworks; however, if no one else is capable of maintaining the new service written in some niche language, we'll have a hard time scaling the development.

Some of the most commonly cited disadvantages are outlined as follows:

  • Increased orchestration complexity
  • Multiple points of failure
  • Debugging and testing get more difficult
  • Lack of responsibility
  • Eventual inconsistencies between the different services
  • Versioning hell

This list may look intimidating at first, but we need to remember that people already addressed these issues with best practices and enhanced tooling. In the end, it's our call to make the best of the situation and choose wisely.

Micro and frontend

Microservices sound great, so applying their principles to the frontend seems to be an obvious next step. The easiest solution would be to just implement microservices that serve HTML instead of JSON. Problem solved, right? Not so fast…

Just assembling the HTML would not be so difficult, but we need to think bigger than that. We need assets such as JavaScript, CSS, and image files. How can we interleave JavaScript content? If two scripts define global variables with the same name, which one wins? Should the URLs of all the files resolve to the individual services serving them?
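
The global variable question alone is easy to demonstrate. In the following sketch, two independently served fragments both attach a value to the same global name (config is just an invented example), and the one loaded last silently wins:

<!-- fragment served by micro frontend A -->
<script>
  window.config = { theme: 'light' };
</script>

<!-- fragment served by micro frontend B -->
<script>
  window.config = { locale: 'de' }; // overwrites A's settings entirely
  console.log(window.config.theme); // undefined - A is now broken
</script>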

There are more questions than answers here, but one thing is sure: while doing so would be possible in theory, in practice the experience would be limited, and we would lose the separation between backend and frontend, which—as we found out—is fundamental to today's mindset of web development.

Consequently, micro frontends did not exist until quite recently, when some of these barriers could be broken. As we'll see, there are multiple ways to establish a micro frontend solution. As with microservices, there are no fixed constraints. In the end, a project uses micro frontends when independent teams can freely deploy smaller chunks of the frontend without having to update the main application.

Besides some technical reasons, there is also another argument why micro frontends were established later than microservices: they were not required beforehand.

As outlined, the amount of frontend code on the web has exploded only recently. Beforehand, either separate pages with no uniform user experience were sufficient or a monolith did the job sufficiently well. With applications growing massively in size, micro frontends came to the rescue.

Server-side solutions such as SSI or dedicated logic can help a fair bit to establish a micro frontend solution, but what about bringing this dynamic composition to the client? As it turns out, emerging web standards make this possible too.

Emerging web standards

There are a couple of problems that make developing micro frontends that run on the client difficult. Among other considerations, we need to be able to isolate code as much as possible to avoid conflicts. However, in contrast to backend microservices, we may also want to share resources such as a runtime framework; otherwise, exhausting the available resources on the user's computer is possible.

While this sounds exotic at first, performance is one of the bigger challenges for micro frontends. On the backend, we can give every service a suitable amount of hardware. On the frontend, we need to live with the browser running code on a machine that the user chooses. This could be a larger desktop machine, but it could also be a Raspberry Pi or a smartphone.

One area where recent web standards help quite a bit is with style isolation. Without style isolation, the Document Object Model (DOM) would treat every style as global, resulting in accidental style leaks or overwrites due to conflicts.

Isolation via Web Components

Styles may be isolated using the technique of a shadow DOM. A shadow DOM enables us to write components that are projected into the parent DOM without being part of the parent DOM. Instead, only a carrier element (referred to as the host) is directly mounted. Not only do style rules from the parent not leak into the shadow DOM—style definitions in general are not applied either.

Consequently, a shadow DOM requires style sheets from the parent to be loaded again—if we want to use these styles. This isolation thus only makes sense if we want to be completely autonomous regarding styling in the shadow DOM.

The same isolation does not work for the JavaScript parts. Through global variables, we are still able to access the exports of the parent's scripts. Here, another drawback of the Web Components standard shines through. Usually, the way to transport shadow DOM definitions is by using custom elements. However, custom elements require a unique name that cannot be redefined.
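
A minimal sketch of a custom element with an attached shadow DOM is shown below; the element name my-widget is purely illustrative, and registering the same name a second time would throw an error:

class MyWidget extends HTMLElement {
  constructor() {
    super();
    // Styles defined inside the shadow root stay inside it;
    // likewise, the parent's style sheets do not apply in here.
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>p { color: rebeccapurple; }</style>
      <p>Hello from an isolated fragment!</p>
    `;
  }
}

// The name must be unique - a second define() call with 'my-widget' throws.
customElements.define('my-widget', MyWidget);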

A way around many of the JavaScript drawbacks is to fall back to the mentioned <iframe> element. An iframe comes with style and script isolation, but how can we communicate between the parent and the content living in the frame?

Frame communication

Originally introduced with HTML5, the window.postMessage function has proven to be quite useful when it comes to micro frontends. It was adopted early for reusable frontend pieces. Today, we can find reusable frontend pieces relying on frame communication—for instance—in most chatbot or cookie consent services.

While sending a message is one part of the story, the other part is to receive messages. For this, a message event was introduced on the window object. The difference from many other events is that this event also gives us an origin property, which allows us to verify where the message came from.

A target origin also needs to be specified when sending a message to a particular frame. Let's see an example of such a process.

Here is the HTML code of the parent document:

<!doctype html>
<iframe src="iframe.html" id="iframe"></iframe>
<script>
  setTimeout(() => {
    iframe.contentWindow.postMessage('Hello!', '*');
  }, 1000);
</script>

After a second, we'll send a message containing the string Hello! to the document loaded from iframe.html. For the target origin, we use the * wildcard string, which allows any origin to receive the message.

The iframe can be defined like so:

<!doctype html>
<script>
  window.addEventListener('message', (event) => {
    const text = document.body.appendChild(document.createElement('div'));
    text.textContent = `Received "${event.data}" from ${event.origin}`;
  });
</script>

This will display the posted message in the frame.

Important note

The access and manipulation of frames is only forbidden when the embedded document comes from a different origin. The definition of an origin is one of the security fundamentals that the web is currently built upon. An origin consists of the protocol (for example, HTTP Secure (HTTPS)), the domain, and the port. Different subdomains also correspond to different origins. Many HTTP requests can only be performed if the cross-origin rules are positively evaluated by the browser. Technically, this is referred to as cross-origin resource sharing (CORS). See the following MDN Web Docs page for more on this: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS.

While these communication patterns are quite fruitful, they do not allow the sharing of any resources. Remember that data passed via the postMessage function is always copied, either serialized as a string or via structured cloning. The receiver therefore never obtains true object references; functions or other live references cannot be transported.
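
A small sketch illustrates the limitation; trying to post an object that carries a function fails, because only cloneable data gets through:

const payload = { label: 'refresh', run: () => location.reload() };

try {
  window.postMessage(payload, '*');
} catch (error) {
  console.log(error.name); // "DataCloneError" - functions cannot be cloned
}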

An alternative is to give up on direct isolation and instead work on indirect isolation. A wonderful possibility is to leverage web workers to do this.

Web workers and proxies

A web worker represents an easy way to break out of the single-threaded model of the JavaScript engine, without all the usual hassle of multithreading.

As with iframes, the only method of communication between the main thread and the worker thread is to post messages. A key difference, however, is that the worker runs in another global context that is different from the current window object. While some APIs still exist, many parts are either different or not available at all.

One example is in the way of loading additional scripts. While standard code may just append another <script> element, the web worker needs to use the importScripts function. It is synchronous and accepts not just one URL, but multiple URLs. The given URLs are loaded and evaluated in order.
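
A rough sketch of the two sides could look as follows; worker.js and the loaded script URLs are placeholders for this example:

// main.js - runs in the window context
const worker = new Worker('worker.js');
worker.addEventListener('message', (event) => {
  console.log('Worker said:', event.data);
});
worker.postMessage({ type: 'start' });

// worker.js - runs in a separate global context without window or DOM access
importScripts('/scripts/helpers.js', '/scripts/module.js'); // loaded in order
self.addEventListener('message', (event) => {
  if (event.data.type === 'start') {
    self.postMessage('ready');
  }
});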

So far so good, but how can we use the web worker if it comes with a different global context? Any frontend-related code that tries to do DOM manipulation will fail here. This is where proxies come to the rescue.

A proxy can be used to capture desired object accesses and function calls. This allows us to forward certain behavior from the web worker to the host. The only drawback is that the postMessage interface is asynchronous in nature, which can be a challenge when synchronous APIs should be mimicked.

One of the simplest proxies is actually the I can handle everything proxy. This small amount of code, shown in the following snippet, provides a solid basis for 90% of all stubs:

// A function is used as the proxy target so that the proxy itself is callable;
// calling it (there is no apply trap) simply returns the proxy again.
const generalProxy = new Proxy(() => generalProxy, {
  get(target, name) {
    if (name === Symbol.toPrimitive) {
      // Support implicit string conversion, for example, in template literals
      return () => ({}).toString();
    } else {
      // Every other property access yields the (callable) proxy again
      return generalProxy();
    }
  },
});

The trick here is that expressions such as generalProxy.foo.bar().qxz can be evaluated without any problems; there is never an undefined property access or an invalid function call.

Using a proxy, we can mock necessary DOM APIs, which are then forwarded to the parent document. Of course, we would only define and forward API calls that can be safely used. Ultimately, the trick is to filter against a safe list of acceptable APIs.

When facing the problem of transporting object references such as callbacks from the web worker to the parent, we may fall back to wrapping them. A custom marker may be used so that the parent can later ask the web worker to run the callback at the right point in time.
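
The following is only a sketch of the general idea, with invented names and message formats; the worker asks the host to perform an allow-listed DOM operation, and the host translates the plain message back into a real call:

// worker.js: a tiny allow list decides what may reach the host
const allowed = ['setTitle'];

function callHost(name, ...args) {
  if (allowed.includes(name)) {
    // Only plain data can be posted - no functions or DOM nodes.
    self.postMessage({ type: 'dom-call', name, args });
  }
}

callHost('setTitle', 'Hello from the worker');

// main.js: translate allow-listed messages back into real DOM operations
const worker = new Worker('worker.js');
worker.addEventListener('message', (event) => {
  const { type, name, args } = event.data;
  if (type === 'dom-call' && name === 'setTitle') {
    document.title = args[0];
  }
});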

We will go into the details of such a solution later—for now, it's enough to know that the web offers a fair share of security and performance measures that can be utilized to allow strong modularization.

For the final section, let's close out the original question with a look at the business reasons for choosing micro frontends.

Faster TTM

As already written, there are technical and business reasons why micro frontends have been a bit more difficult to fully embrace than their backend counterparts. My personal opinion is that the technical reasons are significant, but not as crucial as the business reasons.

When implementing micro frontends, we should never look for technical reasons alone. Ultimately, our applications are created for the end users. As a result, end users should not be impacted negatively by our technical choices.

However, the user experience is not the only driver from a business perspective—the productivity level and the development process should also be taken into consideration. In the end, these impact the user experience just as much and represent the basis for any implementation: how fast we can ship new features or adjust to the results of user surveys.

Decreasing onboarding time

Twenty years ago, most developers who joined a company stayed for quite a while. Over time, the average tenure decreased to around 2 years at most companies (source: https://hackerlife.co/blog/san-francisco-large-corporation-employee-tenure). The average employment duration is shown here:

Figure 1.4 – Average time of developers at a single company

If a standard developer only gets productive in a larger code base after 6 months, this means that a large fraction of the investment (about 25% of an average two-year tenure) is not really fulfilled. Being productive from day one should be the goal, even if it is unrealistic.

One way to achieve faster onboarding time is by decreasing the required cognitive load, and a strong modularization with independent repositories can help a lot with this. After all, if a new developer fixes a single bug, a smaller repository can be debugged and understood much faster than a large repository.

Also, the documentation can be kept up to date much more easily with a lot of smaller repositories than a few larger ones. Together with a smaller (and more focused) commit history, the entry level is far smaller.

The downside is that complex bugs may be more problematic to debug. To understand the system fully, most likely a lot more expertise may be required. Knowing which code resides in which repository becomes a virtue and gives some senior developers and architects a special status that makes it difficult to remove them safely from a project.

A solution is to think in terms of feature teams that have main responsibilities but may share common infrastructure expertise.

Multiple teams

If we are capable of onboarding new developers quickly, we can also leverage multiple sources for recruiting. Recruiting great talent in information technology (IT) has become absurdly difficult. The top players already attract a fair share of the available top developers, which is felt especially in smaller ventures. How can new features be developed quickly when it takes months to acquire a single new developer?

A solution to this challenge is to consider the help of IT agencies or service providers who can offer on-demand development help. Historically, bringing in externals can be a source of trouble too. Non-disclosure agreements (NDAs) need to be signed; access needs to be granted. The biggest pain point may be procedural differences.

By using modularization as a key architecture feature, different teams may work in their own repositories. They can use their own processes and have their own release schedules. This can come in handy internally too, but obviously has the greatest benefit when working with external developers and teams.

One of the best advantages of being able to modularize the frontend is that we can come up with new ways to cut the teams. For instance, we can introduce true full-stack teams, where each team is responsible for one backend service and one frontend module. While this boundary may not always make sense, it can be very helpful in a number of scenarios. Giving the same team that made a backend API its frontend counterpart reduces alignment requirements and enables more stable releases.

Combining the ideas of domain-driven design (DDD) with multiple teams is an efficient way to reduce cognitive load and establish clear architectural boundaries.

DDD is a set of techniques to formalize how modularization can be performed without ending up in unmaintainable spaghetti code. In Chapter 4, Domain Decomposition, we'll have a closer look at these techniques, which can help us to derive a modularization referred to as domain decomposition. Deriving the right domain decomposition is, however, not easy at all. Keeping the different functional domains separated is key here.

Isolated features

My favorite business driver for introducing micro frontends is the ability to ship independent features using their own schedules.

Let's say one of the product owners (POs) has an idea for a new feature. Right now, it is not really known how this feature will be accepted by the users. What can be done here? Surely, we want to start with a small POC presented to the PO. If this is well accepted, we can refine it and publish a minimum viable product (MVP), which would serve to gather some end user feedback.

Historically, a lot of alignment would be needed just to get this feature into the application in the desired way. Using micro frontends, we can roll out the feature progressively. We would first only make it available to the PO, and then adjust the rollout to the target end users too. Finally, we could release it in full regions or worldwide.

A selective rollout is only possible when a feature is isolated. If we have two features—feature A and feature B, with A requiring B—then feature B can never have an independent progressive rollout. Its rollout would always need to be a superset of the rollout rules of feature A.

In general, such dependencies may still be possible (or even desired), but they should never be direct in nature. As such, all available features may only weakly reference other features. We will see later that weak references form a basis for scalability and are crucial for sound micro frontend solutions.

A good test to verify whether a given micro frontend is well designed is to turn it off. Is the overall application still working? It may not be as useful anymore, but that is not the key point. If the application is still technically working, then the micro frontend solution is sound. Always ask yourself: Does it—in principle—work without this module?

Being able to turn off micro frontends consequently allows us to also swap individual micro frontends. As the inner workings are not required to be available, we can also fully replace them. This can be used to our advantage to gather useful user feedback.
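
As a hedged sketch (all names invented for illustration), the application shell could mount only the micro frontends that are currently registered and enabled, so switching one off or swapping it out never breaks the rest:

// A hypothetical registry - entries can be enabled, disabled, or replaced freely
const microFrontends = [
  { name: 'catalog', enabled: true, mount: (el) => { el.textContent = 'Catalog'; } },
  { name: 'recommendations', enabled: false, mount: (el) => { el.textContent = 'Recommendations'; } },
];

for (const mfe of microFrontends) {
  const host = document.querySelector(`[data-mfe="${mfe.name}"]`);
  // Weak reference: if the module is disabled or its host element is missing,
  // the application simply renders without it.
  if (mfe.enabled && host) {
    mfe.mount(host);
  }
}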

A/B testing

We may find at least a dozen more good reasons to use micro frontends from a business perspective; however, one—unfortunately, often forgotten—reason is to simplify gathering user feedback. A great way to do this is by introducing A/B testing.

In A/B testing, we introduce two variants of a feature. One is variant A, while the other is labeled variant B. While A may be the current state of the feature (usually called the baseline), B may be a new way of approaching the problem. The following diagram illustrates this:

Figure 1.5 – A/B testing of a feature results in different experiences for different users

In scenarios where the feature is new anyway, an A/B test may be served to all users interested in taking part. If we work with a baseline, it may make sense to survey only users that are new to the feature to avoid existing habits contaminating the data.

Most frontend monoliths require quite a few code changes to include the ability to A/B test specific features. In the worst case, branching from variant A to variant B can be found in dozens—if not hundreds—of places. By introducing a suitable modularization, we can add A/B testing without any code change. This is an ideal scenario, where only the part shipping the micro frontends needs to know about the rules of the active A/B test.
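
A small sketch of this idea follows; the bucketing rule, paths, and the user ID are purely illustrative assumptions:

// Only the part that ships the micro frontends knows about the experiment.
function pickVariant(userId) {
  // Stable 50/50 split: variant B for odd user IDs, baseline A otherwise.
  return userId % 2 === 1 ? 'b' : 'a';
}

function scriptUrlFor(feature, userId) {
  return `/mfe/${feature}/${pickVariant(userId)}/main.js`;
}

// The rest of the application just loads whatever URL it is given.
const script = document.createElement('script');
script.src = scriptUrlFor('checkout', 42);
document.head.appendChild(script);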

Actually gathering the feedback and evaluating the results is then, of course, a completely different story. Here, micro frontends don't really help us. The connection to dedicated tools supporting the data analysis remains the same as with a monolithic solution.

Summary

In this chapter, we investigated how the web evolved to reach a point where micro frontends could be presented as a viable solution. We discussed that the need for application scaling hit a point where alignment efforts had a massive impact on efficiency. Together with some new technical possibilities, the architectural pattern of micro frontends can now be implemented successfully.

You now know that micro frontends have not only a technical but also a business background. However, besides some advantages, they also come with disadvantages. Knowing from the start what to expect is crucial when aiming for a solid solution.

In the next chapter, we'll have a more detailed look at the challenges and pitfalls of micro frontends. Our goal is to be able to successfully tackle all challenges with sound solutions. We will cover some of the most difficult problems, together with a path leading to proper implementation.


Key benefits

  • Cut through the complexities of designing a monolithic web architecture using micro frontend architecture
  • Explore architecture patterns for building large-scale applications
  • Learn how to build, test, and secure your micro frontends efficiently

Description

Micro frontend is a web architecture for frontend development borrowed from the idea of microservices in software development, where each module of the frontend is developed and shipped in isolation to avoid complexity and a single point of failure for your frontend. Complete with hands-on tutorials, projects, and self-assessment questions, this easy-to-follow guide will take you through the patterns available for implementing a micro frontend solution. You’ll learn about micro frontends in general, the different architecture styles and their areas of use, how to prepare teams for the change to micro frontends, as well as how to adjust the UI design for scalability. Starting with the simplest variants of micro frontend architectures, the book progresses from static approaches to fully dynamic solutions that allow maximum scalability with faster release cycles. In the concluding chapters, you'll reinforce the knowledge you’ve gained by working on different case studies relating to micro frontends. By the end of this book, you'll be able to decide if and how micro frontends should be implemented to achieve scalability for your user interface (UI).

Who is this book for?

This book is for software/solution architects or (mostly lead) developers as well as web developers and frontend engineers. Beginner-level knowledge of HTML and CSS along with a solid understanding of JavaScript programming and its ecosystem, including Node.js and NPM, is assumed.

What you will learn

  • Understand how to choose the right micro frontend architecture
  • Design screens for compositional UIs
  • Create a great developer experience for micro frontend solutions
  • Achieve enhanced user experiences with micro frontends
  • Introduce governance and boundary checks for managing distributed frontends
  • Build scalable modular web applications from scratch or by migrating an existing monolith

Product Details

Publication date: Jun 21, 2021
Length: 310 pages
Edition: 1st
Language: English
ISBN-13: 9781800565609


Table of Contents

20 Chapters
Section 1: The Hive - Introducing Frontend Modularization
Chapter 1: Why Micro frontends?
Chapter 2: Common Challenges and Pitfalls
Chapter 3: Deployment Scenarios
Chapter 4: Domain Decomposition
Section 2: Dry Honey - Implementing Micro frontend Architectures
Chapter 5: Types of Micro Frontend Architectures
Chapter 6: The Web Approach
Chapter 7: Server-Side Composition
Chapter 8: Edge-Side Composition
Chapter 9: Client-Side Composition
Chapter 10: SPA Composition
Chapter 11: Siteless UIs
Section 3: Busy Bees - Scaling Organizations
Chapter 12: Preparing Teams and Stakeholders
Chapter 13: Dependency Management, Governance, and Security
Chapter 14: Impact on UX and Screen Design
Chapter 15: Developer Experience
Chapter 16: Case Studies
Other Books You May Enjoy

Customer reviews

Top Reviews

Rating distribution: 4 out of 5 stars (9 ratings)
5 star: 66.7%
4 star: 0%
3 star: 11.1%
2 star: 11.1%
1 star: 11.1%

Daniel Correa, Jul 25, 2021 (5 stars)
Have you thought about how to split a frontend application (in a similar way as we split backend applications through microservices)? That is what "The Art of Micro Frontends" book is about.

Micro frontends are a trending strategy to build scalable web applications. It is a strategy that, in my opinion, all front-end developers should consider, and should learn. Instead of having a monolithic frontend application, you will split it into several frontend applications, which are merged by different strategies.

Florian Rappl (the author of this book) takes us through a journey to learn about the world of the micro frontends. At the beginning, we will investigate how the web evolved to reach a point where micro frontends could be presented as a viable solution. We will learn how we moved from static applications, to dynamic applications, to the social web, to single-page applications, to micro frontends. Then, the book discusses some challenges and deployment scenarios of how to implement micro frontends.

Later, the book becomes very practical. We will design and codify six different micro frontend architectures. You will get some excellent simple examples of these architectures. For example, you will implement a server-side composition micro frontend architecture (for a tractor store application). In this example, you will code four different applications. (i) a micro frontend application made with Node.JS, Express, and EJS, which displays the tractors information. (ii) a micro frontend application made with Node.JS, Express, Express-session, and EJS, which displays the basket info and a buy button. (iii) a micro frontend application made with Node.JS, Express, and EJS, which displays product recommendations. And (iv) a gateway application which composes the micro frontends.

Another example is the development of a SPA composition micro frontend architecture. In this case, we will design and implement an accounting application that comes with three micro frontends. (i) a balance micro frontend written in React. (ii) a tax micro frontend developed with Svelte. And (iii) a settings micro frontend with Vue. All those applications are composed through a shell application which mounts and unmounts components.

At the end, the book teaches you about some recommendations for security, team management, communication with stakeholders, impact on UX, and developer experience.

Two important things to highlight (i) I recommend having some experience with JavaScript and frontend development before reading this book. And (ii) the examples presented in this book are not detailed step by step (the author only presents some important excerpts of the main code), so, you will have to follow the repository instructions, and take a detailed look at the source code. I would have liked to see a step-by-step guide to code at least the first project. Because that way, you get a better understanding about how it works.
Mark Dunn, Jul 19, 2021 (5 stars)
I really enjoyed this book. I'm familiar with compositional patterns but I didn't have any experience with this architectural approach to building user interfaces. This material is approachable to anyone that has web development experience with HTML, JavaScript, Node.js, and NPM. The Micro Frontend approach is inspired by a similar way of building microservices. You build modules in isolation to avoid complexity and a single point of failure. This book will be of most use to architects, lead developers, and lead designers.

I own a training company and often look for books that can be used as courseware. I really like the fact that the author has several projects/case studies, easy-to-follow hands-on tutorials, and even self-assessment questions making this a natural fit for training material.

Scalability is a big payoff to this approach and the author does a great job of explaining different styles and how they are put to use. After building a foundation for how things work you'll have the benefit of case studies to reinforce what you have learned.

I really found all the topics in the book useful. You'll learn how to choose the correct micro frontend architecture, design compositional UIs, and how to build frontends that are scalable and modular in your applications.
Vishal Mishra, Sep 22, 2021 (5 stars)
This is a book aimed at web developers that have already some experience with web technologies and node. I really enjoyed this book, it is extremely clear and detailed, it covers the theory behind micro frontends and why you want to use them for your web apps, the author covers different kinds of implementations, going from the easier to the most robust and complex ones.
Boris, Sep 22, 2021 (5 stars)
With more than 4 years of experience in micro frontend development, where one would think that it's a lot, after reading this book I realized how much I didn't know. The author really took the time and analyzed many different ways on successful micro frontend development taking into account, not just the tech aspects, but also other POVs like business, team structure, or UX/UI. All this is witnessing a high level of experience and deep understanding of the given topic, which just infuses an extra dose of confidence that we can trust and rely on the examples from the book as a solid foundation to build upon our next Micro Frontend application.

What I find especially useful, are the Youtube videos accompanying every chapter together with GitHub repositories, where the author is taking the opportunity, in his own words, additionally bring closer the examples to the readers.

This read would be strongly recommended for most web developers because it can be a source of great ideas that a lot of “modern” architects and SW engineers are not even aware of.
Web Developer, Dec 20, 2022 (5 stars)
I've read essentially all of them and while the book of M.Geers is a good practical book for server-side composition and the one from L.Mezzalira contains a lot of content on the organizational aspects, this one is definitely the best alrounder. What distinguishes this book from the others is that it really emphasizes the importance of loosely coupled solutions.

I cannot agree with one review that indicates that the book does not contain module federation (maybe the author of that review should have read the book before posting a review) - it does. However, since module federation is just one of the possible tools and the book is about microfrontends as a whole I think its fair that its only touched twice and not the center piece of the whole book. Personally I would have loved to see more content on single-spa, but then again the idea of the book is not to just introduce some frameworks but rather to teach how microfrontends work and how a good solution should and can be built.

All in all I surely give this book 5 stars. Not only did it really go into details on very important topics it also provided tips and tricks that can be used with any kind of distributed system. Wonderful!
