Building Serverless Microservices in Python

Serverless Microservices Architectures and Patterns

Microservice architectures are based on services. You could think of microservices as a lightweight version of SOA, enriched with more recent architectural styles, such as event-driven architecture, where an event is defined as a change in state that is of interest. In this chapter, you will learn about the monolithic multi-tier architecture and monolithic service-oriented architecture (SOA). We will discuss the benefits and drawbacks of both architectures. We will also look at the background of microservices to understand the rationale behind their growth, and compare different architectures.

We will cover the design patterns and principles and introduce the serverless microservice integration patterns. We will then cover the communication styles and decomposition microservice patterns, including synchronous and asynchronous communication.

You will then learn how serverless computing in AWS can be used to quickly deploy event-driven computing and microservices in the cloud. We conclude the chapter by setting up your serverless AWS and development environment.

In this chapter we will cover the following topics:

  • Understanding different architecture types and patterns
  • Virtual machines, containers, and serverless computing
  • Overview of microservice integration patterns
  • Communication styles and decomposition microservice patterns
  • Serverless computing in AWS
  • Setting up your serverless environment

Understanding different architecture types and patterns

In this section, we will discuss different architectures, such as monolithic and microservices, along with their benefits and drawbacks.

The monolithic multi-tier architecture and the monolithic service-oriented architecture

At the start of my career, while I was working for global Fortune 500 clients at Capgemini, we tended to use a multi-tier architecture, where you create different physically separate layers that you can update and deploy independently. For example, as shown in the following three-tier architecture diagram, you can use Presentation, Domain logic, and Data Storage layers:

In the presentation layer, you have the user-interface elements and any presentation-related applications. In the domain-logic layer, you have all the business logic and anything to do with passing data from the presentation layer; elements here also deal with passing data to the storage or data layer, which holds the data-access components and any database or filesystem elements. For example, if you want to change the database technology from SQL Server to MySQL, you only have to change the data-access components rather than modifying elements in the presentation or domain-logic layers. Decoupling the type of storage from the presentation and business logic lets you change the database technology by swapping out only the data-storage layer.
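The decoupling described above can be sketched in Python: the domain logic depends only on an abstract data-access interface, so swapping SQL Server for MySQL means swapping one class. The stores below are in-memory stand-ins for real database connections, and all names are illustrative:

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Data-access component: the only layer that knows the database technology."""
    @abstractmethod
    def save(self, key, value): ...
    @abstractmethod
    def load(self, key): ...

class SqlServerStore(DataStore):
    def __init__(self):
        self._rows = {}  # stand-in for a real SQL Server connection
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class MySqlStore(DataStore):
    def __init__(self):
        self._rows = {}  # stand-in for a real MySQL connection
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class OrderService:
    """Domain-logic layer: depends only on the DataStore interface."""
    def __init__(self, store: DataStore):
        self._store = store
    def place_order(self, order_id, item):
        self._store.save(order_id, item)
    def get_order(self, order_id):
        return self._store.load(order_id)
```

Because `OrderService` never mentions a concrete store, changing database technology is a one-line change at construction time.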

A few years later at Capgemini, we implemented our clients' projects using SOA, which is much more granular than the multi-tier architecture. It is basically the idea of having standardized service contracts and a registry that allow for automation and abstraction:

There are four important service properties related to SOA:

  • Each service needs to have a clear business activity that is linked to an activity in the enterprise.
  • Anybody consuming the service does not need to understand the inner workings.
  • All the information and systems are self-contained and abstracted.
  • To support its composability, the service may consist of other underlying services.

Here are some important SOA principles:

  • Standardized
  • Loosely coupled
  • Abstract
  • Stateless
  • Granular
  • Composable
  • Discoverable
  • Reusable

The first principle is that there is a standardized service contract. This is basically a communication agreement that's defined at the enterprise level, so that when you consume a service, you know exactly which service it is, the contract for passing in messages, and what you are going to get back. These services are loosely coupled, meaning they can work autonomously, but also that you can access them from any location within the enterprise network. They are also abstract, meaning that each service is a black box whose inner logic is hidden away, and that they can work independently of other services.

Some services will also be stateless: if you call a service and pass in a request, you will get a response, and you will get an exception if there is a problem with the service or the payload. Granularity is also very important within SOA: a service needs to be granular enough that it is not called inefficiently or many times over, so we want to normalize the level of granularity of the service. Services can be decomposed when parts of them are reused by other services, or joined together and normalized to minimize redundancy. Services also need to be composable, so that you can merge them into larger services or split them up.

There's a standardized set of contracts, but the service also needs to be discoverable: there must be a way to automatically discover what services and endpoints are available, and a way to interpret them. Finally, services should be reusable: reuse is really important for SOA, where logic can be reused in other parts of the code base.
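As a toy illustration of a standardized contract, a message can be checked against an agreed set of field names and types. The schema below is invented for illustration, not taken from any real registry:

```python
# A hypothetical standardized service contract: the enterprise-level agreement
# on the shape of the request you pass in and the response you get back.
CUSTOMER_LOOKUP_CONTRACT = {
    "request": {"customer_id": str},
    "response": {"customer_id": str, "name": str},
}

def conforms(message, schema):
    """Return True if the message has exactly the contract's fields and types."""
    return (set(message) == set(schema)
            and all(isinstance(message[k], t) for k, t in schema.items()))
```

A consumer that validates its messages against the published contract knows exactly what it will send and receive, which is the point of the first principle.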

Benefits of monolithic architectures

In SOA, the architecture is loosely coupled and all the services for the enterprise are defined in one repository, which gives us good visibility of the services available. In addition, there is a global data model: usually one data store holds all the data sources, and each individual service reads from and writes to it, centralizing the data at a global level.

Another benefit is that there is usually a small number of large services, which are driven by a clear business goal. This makes them easy to understand and consistent for our organization. In general, the communication between the services is decoupled via either smart pipelines or some kind of middleware.

Drawbacks of monolithic architectures

The main drawback of the monolithic architecture is that there is usually a single technology stack: the application server, web server, and database frameworks are consistent throughout the enterprise. Obsolete libraries and code can be difficult to upgrade, because everything depends on that single stack and all the services effectively need to be aligned on the same library versions.

Another drawback is that the code base is usually very large on a single stack, which means long build and test times to build and deploy the code. The services are deployed on a single application server or on a large cluster of application and web servers, so in order to scale, you need to scale the whole server: there is no ability to deploy and scale applications independently. To scale out an application, you need to scale out the web server or application server that hosts it.

Another drawback is that there's generally a middleware orchestration layer or centralized integration logic. For example, services would use a Business Process Management (BPM) framework to control the workflow, you would use an Enterprise Service Bus (ESB), which allows you to route messages centrally, or you'd have some kind of middleware that would deal with the integration between the services themselves. A lot of this logic is tied up centrally, and you have to be very careful not to break any inter-service communication when you change the configuration of that centralized logic.

Overview of microservices

The term microservice arose from a workshop in 2011, where different teams described an architecture style they were using. In 2012, Adrian Cockcroft of Netflix described microservices as fine-grained SOA, having pioneered this approach at web scale.

Microservices are often event-driven. For example, if we have sensors on an Internet of Things (IoT) device and there is a change in temperature, we would emit an event as a possible warning further downstream. This is what's called event-stream processing or complex-event processing; essentially, everything throughout the whole architecture is driven by events.
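That emission step can be sketched in a few lines; the threshold and event fields below are assumptions for illustration, and the `emit` callback stands in for a real message bus or stream:

```python
TEMPERATURE_THRESHOLD = 30.0  # assumed warning level in Celsius (illustrative)

def process_reading(reading, emit):
    """Turn a raw sensor reading into a downstream warning event, if needed."""
    if reading["temperature"] > TEMPERATURE_THRESHOLD:
        event = {
            "type": "TemperatureWarning",
            "sensor_id": reading["sensor_id"],
            "temperature": reading["temperature"],
        }
        emit(event)  # in production: publish to a stream or message bus
        return event
    return None
```

Downstream services subscribe to the emitted events rather than being called directly, which is what makes the whole architecture event-driven.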

The other type of design used in microservices is called domain-driven design (DDD). This is essentially where there is a common language between the domain experts and the developers. The other important component in DDD is the bounded context, where a strict model of consistency applies within its bounds for each service. For example, if a service deals with customer invoicing, that service will be the central and only place where customer invoicing can be processed, written, or updated. The benefit is that there won't be any confusion around responsibilities for data access with systems outside the bounded context.

You could think of a microservice as centered around a REST endpoint or application programming interface using JSON standards. A lot of the logic is built into the service: this is what is called a dumb pipeline with a smart endpoint, and you can see why in the diagram. We have a service that deals with customer support, as follows:

For example, the endpoint would update customer support details, add a new ticket, or get customer support details with a specific identifier. We have a local customer support data store, so all the information around customer support is stored in that data store and you can see that the microservice emits customer-support events. These are sent out on a publish-subscribe mechanism or using other publishing-event frameworks, such as Command Query Responsibility Segregation (CQRS). You can see that this fits within the bounded context. There's a single responsibility around this bounded context. So, this microservice controls all information around customer support.
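A toy sketch of such a bounded context in Python: a dict stands in for the local customer-support data store, a list stands in for the publish-subscribe mechanism, and names like `add_ticket` are illustrative rather than from any real service:

```python
import uuid

# Local data store: only this microservice reads or writes it (bounded context).
tickets = {}
event_log = []  # stand-in for a publish-subscribe mechanism

def publish(event_type, payload):
    """Emit a customer-support event for downstream subscribers."""
    event_log.append({"type": event_type, "payload": payload})

def add_ticket(customer_id, description):
    """Create a support ticket and announce it as an event."""
    ticket_id = str(uuid.uuid4())
    tickets[ticket_id] = {"customer_id": customer_id, "description": description}
    publish("TicketCreated", {"ticket_id": ticket_id})
    return ticket_id

def get_ticket(ticket_id):
    return tickets.get(ticket_id)

def update_ticket(ticket_id, description):
    tickets[ticket_id]["description"] = description
    publish("TicketUpdated", {"ticket_id": ticket_id})
```

All customer-support state lives inside this one service; everything outside the bounded context only ever sees the emitted events.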

Benefits and drawbacks of microservice architectures

The bounded context, and the fact that each service has a very small code base, allow you to build and deploy very frequently. In addition, you can scale these services independently: there is usually one application server or web server per microservice, so you can scale out very quickly, just for the specific service you want. You can also have frequent builds that you test more frequently, and you can use any type of language, database, or web app server per service, making it a polyglot system. The bounded context is very important as it lets you model one domain. Features can be released very quickly because, for example, the customer-services microservice controls all changes to its own data, so you can deploy these components a lot faster.

However, there are some drawbacks to using a microservices architecture. First, there's a lot of complexity in terms of distributed development and testing. In addition, the services talk a lot more, so there's more network traffic; latency and networking become very important in microservices, and the DevOps team has to maintain and monitor the time it takes to get a response from another service. The changing of responsibilities is another complication: for example, if you split one bounded context into several sub-bounded contexts, you need to think about how that works across teams. A dedicated DevOps team is also generally needed to support and maintain the much larger number of services and machines throughout the organization.

SOA versus microservices

Now that we have a good understanding of both, we will compare the SOA and microservices architectures. In terms of communication itself, both SOA and microservices can use synchronous and asynchronous communication. SOA typically relied on the Simple Object Access Protocol (SOAP) or web services. Microservices tend to be more modern and widely use REpresentational State Transfer (REST) Application Programming Interfaces (APIs).

We will start with the following diagram, which compares SOA and microservices:

The orchestration is where there's a big differentiation. In SOA, everything is centralized around a BPM, ESB, or some kind of middleware. All the integration between services and data flowing is controlled centrally. This allows you to configure any changes in one place, which has some advantages.

The microservices approach has been to use a more choreography-based approach. This is where an individual service is smarter, that is, a smart endpoint but a dumb pipeline. That means that the services know exactly who to call and what data they will get back, and they manage that process within the microservice. This gives us more flexibility in terms of the integration for microservices. In the SOA world or the three-tier architecture, there's less flexibility as it's usually a single code base and the integration is a large set of monolith releases and deployments of user interface or backend services. This can limit the flexibility of your enterprise. For microservices, however, these systems are much smaller and can be deployed in isolation and much more fine-grained.

Finally, on the architecture side, SOA works at the enterprise level, where an enterprise architect or solutions architect models and controls the release of all the services in a central repository. Microservices are much more flexible and work at the project level, where the team is composed of a small number of developers, small enough to sit around and share a pizza. This gives you much more flexibility to make decisions rapidly at the project level, rather than having to get everything agreed at the enterprise level.

Virtual machines, containers, and serverless computing

Now that we have a better understanding of the monolithic and microservice architectures, let's look at the Amazon Web Services (AWS) building blocks for creating serverless microservices.

But first we'll cover virtual machines, containers, and serverless computing, which are the basic building blocks behind any application or service hosted in the public cloud.

Virtual machines are the original offering in the public cloud and web hosting sites, containers are lightweight standalone images, and serverless computing is when the cloud provider fully manages the resources. You will understand the benefits and drawbacks of each approach and we will end on a detailed comparison of all three.

Virtual machines

In traditional data centers, you would have to buy or lease physical machines and have spare capacity to deal with additional web or user traffic. In the new world, virtual machines were one of the first public cloud offerings. You can think of them as similar to physical boxes, where you can install an operating system, remotely connect via SSH or RDP, and install applications and services. I would say that virtual machines have been one of the key building blocks for start-up companies to be successful. They gave start-ups the ability to go to market with only small capital investments and to scale out with an increase in their web traffic and user volumes. This was something that previously only large organizations could afford, given the big upfront costs of physical hardware.

The advantages of virtual machines are pay-per-usage, a choice of instance types, and dynamic allocation of storage, giving your organization full flexibility to rent hardware within minutes rather than wait for physical hardware to be purchased. Virtual machines also provide security, which is managed by the cloud provider. In addition, they provide multi-region auto-scaling and load balancing, again managed by the cloud provider and available almost at the click of a button. There are many virtual machine offerings available, for example, Amazon EC2, Azure VMs, and Google Compute Engine.

However, they do have some drawbacks. The main drawback is that it takes a few minutes to scale: any new machine takes a few minutes to spin up, making it impossible to scale quickly on request. There is also a configuration effort, where the likes of Chef or Puppet are required for configuration management; for example, the operating system needs to be kept up to date.

Another drawback is that you still need to write the logic to poll or subscribe to other managed services, such as streaming analytics services. In addition, you still pay for idle machine time. For example, when your services are not running, the virtual machines are still up and you're still paying for that time even if they're not being actively used.
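The polling logic you must own on a VM can be sketched like this; `fetch_batch` and `handle` are hypothetical callables standing in for an SDK client call and your processing code:

```python
import time

def poll_stream(fetch_batch, handle, interval_s=1.0, max_polls=3):
    """A polling loop you write and run yourself on a virtual machine:
    repeatedly ask the managed service for new records and hand each
    one to your handler."""
    for _ in range(max_polls):
        for record in fetch_batch():
            handle(record)
        # The machine stays up (and billed) even while the loop sleeps idle.
        time.sleep(interval_s)
```

Note the cost implication in the comment: between batches the VM is idle but still running, which is exactly the idle-time charge described above.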

Containers

The old way with virtual machines was to deploy applications on a host operating system with configuration-management tools such as Chef or Puppet. This has the disadvantage that the application artifacts, libraries, and life cycles are coupled with each other and with the specific host operating system, whether Linux or Windows. Containers came out of this limitation, with the idea of shipping your code and dependencies in a portable container where you have full operating-system-level virtualization. You essentially make better use of the available resources on the machine.

These containers can be spun up very fast and they are essentially immutable, that is, the OS, library versions, and configurations cannot be changed. The basic idea is that you ship the code and dependencies in this portable container and the environments can be recreated locally or on a server from a configuration. Another important aspect is the orchestration engine, which is the key to managing containers: you'd have Docker images that are managed, deployed, and scaled by Kubernetes or Amazon EC2 Container Service (ECS).

The drawbacks are that containers generally scale within seconds, which is still too slow to spin up a new container per request. So you need them pre-warmed and already available, which has a cost. In addition, the cluster and image configuration involves some DevOps effort.

Recently AWS introduced AWS Fargate and Elastic Kubernetes Service (EKS), which have helped to relieve some of this configuration-management and support effort, but you would still need a DevOps team to support them.

The other drawback is that there's an integration effort with the managed services. For example, if you're dealing with a streaming analytics service, you still need to write the polling and subscription code to pull the data into your application or service.

Finally, as with virtual machines, you still pay for running containers even when they are idle. Kubernetes assists with this, but the containers run on EC2 instances, so you still pay for the machine's running time even if it is not being used.

Serverless computing

You can think of serverless computing as focusing on business logic rather than on all the infrastructure configuration, management, and integration around the service. In serverless computing, there are still servers; it's just that you don't manage the servers themselves, the operating system, or the hardware, and all the scalability is managed by the cloud provider. You don't have access to the raw machine, that is, you can't SSH onto the box.

The benefit is that you can really focus on the business-logic code rather than on infrastructure or inbound-integration code; the business logic is the value you are adding as an organization for your customers and clients.

In addition, security is again managed by the cloud provider, and auto-scaling and high-availability options are also managed by the cloud provider. More instances can be spun up dynamically based on the number of requests, for example, and the cost is per execution time rather than per idle time.

There are different public cloud serverless offerings. Google, Azure, AWS, and Alibaba cloud have the concept of Functions as a Service (FaaS). This is where you deploy your business logic code within a function and everything around it, such as the security and the scalability, is managed by the cloud provider.

The drawback is that these functions are stateless and have a very short lifetime: once it is over, any state maintained within the function is lost, so state has to be persisted outside. They are not suitable for long-running processes, and they have limited instance types and durations too. For example, AWS Lambda functions have a maximum duration of 15 minutes before they are terminated. There are also constraints on the size of the external or custom libraries you package together, since these functions need to spin up very quickly.
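A sketch of persisting state outside a stateless function: `FakeTable` is a local stand-in so the example runs without AWS, mirroring the `put_item`/`get_item` shape of a boto3 DynamoDB `Table` resource (table and field names are illustrative assumptions):

```python
class FakeTable:
    """Local stand-in for boto3's DynamoDB Table resource."""
    def __init__(self):
        self.items = {}
    def put_item(self, Item):
        self.items[Item["session_id"]] = Item
    def get_item(self, Key):
        item = self.items.get(Key["session_id"])
        return {"Item": item} if item else {}

# In AWS this would be: boto3.resource("dynamodb").Table("sessions")
table = FakeTable()

def lambda_handler(event, context):
    """Count requests per session. Nothing is kept in the function between
    invocations; the counter lives entirely in the external store."""
    session_id = event["session_id"]
    previous = table.get_item(Key={"session_id": session_id}).get("Item")
    count = (previous["count"] if previous else 0) + 1
    table.put_item(Item={"session_id": session_id, "count": count})
    return {"session_id": session_id, "count": count}
```

Because every invocation reads and writes the external table, the counter survives even when the execution environment is recycled between calls.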

Comparing virtual machines, containers, and serverless

Let's compare Infrastructure as a Service (IaaS), Containers as a Service (CaaS), and Functions as a Service (FaaS). Think of virtual machines as IaaS, a pool of Docker containers as CaaS, and Lambda functions as an example of FaaS. This is a comparison between IaaS, CaaS, and FaaS:

The green elements are managed by the user, and the blue elements by the cloud service provider. On the left, you can see that IaaS, as used with virtual machines, places a lot of the responsibility on the user. In CaaS, the operating-system level is managed by the provider, but the container and the runtime are managed by the user. Finally, on the right with FaaS, only the core business-logic code and application configuration are managed by the user.

So, how do you choose between AWS Lambda, containers, and EC2 instances in the AWS world? Check out the following chart:

If we compare virtual machines against containers and Lambda functions on the top row, you can see that there is some configuration effort required in terms of maintenance, building for high availability, and management. For Lambda functions, this is actually done on a per-request basis. That is, it's request-driven: AWS will spin up more Lambdas if more traffic hits your site, to keep it highly available (HA), for example.

In terms of flexibility, you have full access in virtual machines and containers, but with AWS Lambda, you have default hardware, default operating system, and no graphics processing units (GPU) available. The upside is that there is no upgrade or maintenance required on your side for Lambdas.

In terms of scalability, you need to plan ahead for virtual machines and containers. You need to provision the containers or instances and decide how you are going to scale. In AWS Lambda functions, scaling is implicit based on the number of requests or data volumes, as you natively get more or fewer lambdas executing in parallel.

The launch of virtual machines is usually in minutes and they can stay on perhaps for weeks. Containers can spin up within seconds and can stay on for minutes or hours before they can be disposed of. Lambda functions, however, can spin up in around 100 milliseconds and generally live for seconds or maybe a few minutes.

In terms of state, virtual machines and containers can maintain state, even if it's generally not best practice for scaling. Lambda functions are always stateless: when they terminate their execution, anything in memory is disposed of, unless it is persisted outside, in a DynamoDB table or an S3 bucket, for example.

Custom integration with AWS services is required for virtual machines and Docker containers. In Lambda functions, however, event sources can push data to a Lambda function using built-in integration with the other AWS services, such as Kinesis, S3, and API Gateway. All you have to do is subscribe the Lambda event source to a Kinesis Stream and the data will get pushed to your Lambda with its business logic code, which allows you to decide how you process and analyze that data. However, for EC2 virtual machines and ECS containers, you need to build that custom inbound integration logic using the AWS SDK, or by some other means.
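A minimal handler for such a push-based event source; the event shape below follows the Kinesis record format delivered to Lambda, where each payload arrives base64-encoded under `Records[].kinesis.data` (the processing itself is a placeholder):

```python
import base64
import json

def lambda_handler(event, context):
    """Process a micro-batch of records pushed by a Kinesis event source."""
    results = []
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded; decode then parse.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        results.append(payload)  # your business logic would go here
    return {"processed": len(results)}
```

No polling or subscription code is needed: once the event source mapping is configured, AWS invokes this handler with each micro-batch.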

Finally, in terms of pricing, EC2 instances are priced per second. There are also spot instances, which use market rates and are a lot cheaper than on-demand instances. The same goes for containers, except that you can have many containers on one EC2 instance, which makes better use of resources and is a lot cheaper, as you have the flexibility to spread different containers among the EC2 instances. For AWS Lambda functions, the pricing is based on 100-millisecond execution blocks, the number of invocations, and the amount of random-access memory (RAM) allocated.
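As a rough illustration of that pricing model, here is a small cost estimator. The default rates are placeholder assumptions, not current AWS prices, and real bills include free tiers and other factors this sketch ignores:

```python
import math

def lambda_cost(invocations, avg_duration_ms, memory_mb,
                price_per_request=2.0e-7, price_per_gb_second=1.66667e-5):
    """Rough Lambda cost model: a per-request charge plus a charge per
    GB-second, with duration billed in 100 ms increments.
    The default rates are illustrative assumptions."""
    billed_s = math.ceil(avg_duration_ms / 100) * 100 / 1000.0
    gb_seconds = invocations * billed_s * (memory_mb / 1024.0)
    return invocations * price_per_request + gb_seconds * price_per_gb_second
```

For example, a 120 ms function is billed as 200 ms, so halving your code's runtime only helps once it crosses a 100 ms boundary.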

Overview of microservice integration patterns

In this section, we'll discuss design patterns, design principles, and how microservice architectural patterns relate to traditional microservice patterns and can be applied to serverless microservices. These topics will help you gain an overview of different integration patterns.

Design patterns

Patterns are reusable blueprints that are a solution to a similar problem others have faced, and that have widely been reviewed, tested, and deployed in various production environments.

Following them means that you will benefit from best practices and the wisdom of the technical crowd. You will also speak the same language as other developers or architects, which allows you to exchange your ideas much faster, integrate with other systems more easily, and run staff handovers more effectively.

Why are patterns useful?

Useful applications almost never exist in isolation. They are almost always integrated in a wider ecosystem, which is especially true for microservices. In other words, the integration specification and requirements need to be communicated and understood by other developers and architects.

When using patterns, you have a common language that is spoken among the technical crowds, allowing you to be understood. It's really about better collaborating, working with other people, exchanging ideas, and working out how to solve problems.

The main aim of patterns is to save you time and effort when implementing new services, as you have a standard terminology and blueprint to build something. In some cases, they help you avoid pitfalls as you can learn from others' experience and also apply the best practices, software, design patterns, and principles.

Software design patterns and principles

You will probably be using object-oriented (OO) or functional programming in your microservices or Lambda code, so let's briefly talk about the patterns linked to them.

In OO programming, there are many best practice patterns or principles you can use when coding, such as GRASP or SOLID. I will not go into too much depth as it would take a whole book, but I would like to highlight some principles that are important for microservices:

  • SOLID: This has five principles. One example is the Single Responsibility Principle (SRP), where you define classes that each have a single responsibility and hence a single reason for change, reducing the size of the services and increasing their stability.
  • Package cohesion: For example, common closure-principle classes that change together belong together. So when a business rule changes, developers only need to change code in a small number of packages.
  • Package coupling: For example, the acyclic dependencies principle, which states that dependency graphs of packages or components should have no cycles.
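The SRP from the list above can be shown in a few lines: invoice arithmetic and invoice persistence each get one class and therefore one reason to change (the class and method names are illustrative):

```python
class InvoiceCalculator:
    """Single responsibility: only the business arithmetic lives here."""
    def total(self, line_items):
        # line_items: iterable of (quantity, unit_price) pairs
        return sum(qty * price for qty, price in line_items)

class InvoiceRepository:
    """Single responsibility: only persistence lives here, so swapping the
    data store is a reason to change that never touches the calculation."""
    def __init__(self):
        self._saved = {}
    def save(self, invoice_id, total):
        self._saved[invoice_id] = total
    def load(self, invoice_id):
        return self._saved.get(invoice_id)
```

Splitting responsibilities this way keeps each class small and stable, which is exactly what makes small services easier to evolve.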

Let's briefly go into some of the useful design patterns for microservices:

  • Creational patterns: For example, the factory method creates an instance of several derived classes.
  • Structural patterns: For example, the decorator adds additional responsibilities to an object dynamically.
  • Behavioral patterns: For example, the command pattern encapsulates a request as an object, making it easier to extract parameters, queuing, and logging of requests. Basically, you decouple the parameter that creates the command from the one that executes it.
  • Concurrency patterns: For example, the reactor object provides an asynchronous interface to resources that must be handled synchronously.

Depending on your coding experience, you may be familiar with these. If not, it's worth reading about them to improve your code's readability, manageability, and stability, as well as your productivity. Here are some references where you can find out more:

  • SOLID Object-Oriented Design, Sandi Metz (2009)
  • Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides (1995)
  • Head First Design Patterns, Eric T Freeman, Elisabeth Robson, Bert Bates, Kathy Sierra (2004)
  • Agile Software Development, Principles, Patterns, and Practices, Robert C. Martin (2002)

Serverless microservices pattern categories

On top of the software design patterns and principles we just discussed sit the microservices patterns. From my experience, there are many microservices patterns that I recommend as relevant for serverless microservices, as shown in the following diagram:

I created this diagram to summarize and illustrate the serverless microservices patterns we will be discussing in this book:

  • Communication styles: How services communicate together and externally.
  • Decomposition pattern: Creating a service that is loosely coupled by business capability or bounded context.
  • Data management: Deals with local and shared data stores.
  • Queries and messaging: Looks at events and messages that are sent between microservices, and how services are queried efficiently.
  • Deployment: Ideally, we would like uniform and independent deployments; you also don't want developers to re-create a new pipeline for each bounded context or microservice.
  • Observability and discovery: Being able to understand whether a service is functioning correctly, and to monitor and log activity, allows you to drill down if there are issues. You also want to know and monitor what is currently running, for cost and maintenance reasons, for example.
  • Security: This is critical for compliance, data integrity, data availability, and potential financial damage. It's important to have different encryption, authentication, and authorization processes in place.

Next, we will look at the communication styles and the decomposition pattern.

Communication styles and decomposition microservice patterns

In this section, we will discuss two microservice patterns, called communication styles and decomposition, with a sufficient level of detail that you will be able to discuss them with other developers, architects, and DevOps.

Communication styles

Microservice applications are distributed by nature, so they rely heavily on the underlying network. This makes it important to understand the different communication styles available, both for services communicating with each other and for communicating with the outside world. Here are some examples:

  • Remote procedure calls: It used to be popular for Java applications to use Remote Method Invocation (RMI), but this tightly couples client and server and uses a non-standard protocol, which is one limitation. In addition, the network is not reliable, so traditional RMI should be avoided. Others, such as a SOAP interface with a client generated from the Web Service Definition Language (WSDL), are better, but are seen as heavyweight compared to REpresentational State Transfer (REST) APIs, which have been widely adopted in microservices.
  • Synchronous communication: It is simpler to understand and implement; you make a request and get a response. However, while waiting for the response, you may also be blocking a connection slot and resources, limiting calls from other services:

  • Asynchronous communication: With asynchronous communication, you make the request and get the response later, sometimes out of order. These can be implemented using callbacks, promises, or async/await in Node.js or Python. However, there are many design considerations when using async, especially if there are failures that need monitoring. Unlike most synchronous calls, these are non-blocking:
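To make the blocking versus non-blocking distinction concrete, here is a minimal, hypothetical Python sketch (the service names are invented) contrasting a synchronous call with asynchronous calls using `asyncio`:

```python
import asyncio
import time

def fetch_sync(service: str) -> str:
    """Synchronous call: blocks the caller until the response arrives."""
    time.sleep(0.1)  # simulate network latency
    return f"response from {service}"

async def fetch_async(service: str) -> str:
    """Asynchronous call: yields control while waiting, so other
    requests can make progress on the same thread."""
    await asyncio.sleep(0.1)  # simulate a non-blocking I/O wait
    return f"response from {service}"

async def main() -> list:
    # Three sync calls would take ~0.3s in sequence; the three async
    # calls below overlap, so the total wait is ~0.1s.
    return await asyncio.gather(
        fetch_async("users"), fetch_async("orders"), fetch_async("billing")
    )

print(fetch_sync("users"))   # one request, one blocking response
results = asyncio.run(main())
print(results)
```

The point of the sketch is only the shape of the two styles: the synchronous caller holds its connection slot while it waits, whereas the asynchronous calls free the thread to serve other work.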

When dealing with communications, you also need to think about whether your call is blocking or non-blocking. For example, writing metrics from web clients to a NoSQL database using blocking calls could slow down your website.

You also need to think about dealing with receiving too many requests, throttling them so as not to overwhelm your service, and handling failures such as retries, delays, and errors.

When using Lambda functions, you benefit from AWS-built event sources, which spin up a Lambda per request or per micro-batch of data. In most cases, synchronous code is sufficient even at scale, but it's important to understand the architecture and communication between services when designing a system, as it is limited by bandwidth, and network connections can fail.

One-to-one communication microservice patterns

At an individual microservice level, the data management pattern is composed of a suite of small services, each with its own local data store, communicating with a REST API or via publish/subscribe:

API Gateway is a single entry point for all clients, and is tailored for them, allowing changes to be decoupled from the main microservice API, which is especially useful for external-facing services.

One-to-one request/response can be sync or async. If sync, there is a response for each request. If the communication is async, there can be an async response or an async notification. Async is generally preferred and much more scalable, as it does not hold an open connection (non-blocking), and makes better use of the central processing unit (CPU) and input/output (I/O) operations.

We will go into further detail on the data-management patterns later in the book, where we will be looking at how microservices integrate in a wider ecosystem.

Many-to-many communication microservice patterns

For many-to-many communication, we use publish/subscribe, which is a messaging pattern. This is where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers; rather, the receiver needs to subscribe to the messages. It's a highly scalable pattern as the two are decoupled:

Asynchronous messaging allows a service to consume and act upon the events, and is a very scalable pattern as you have decoupled two services: the publisher and the subscriber.
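The decoupling at the heart of publish/subscribe can be sketched with a minimal in-process broker in Python. This is a toy stand-in for a managed service such as SNS; the topic and handler names are invented:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    """Toy message broker: publishers and subscribers only know topic
    names, never each other, which is what makes the pattern scalable."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # Every subscriber of the topic receives a copy of the message;
        # the publisher never addresses a receiver directly.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received = []
broker.subscribe("orders", lambda m: received.append(("email", m)))
broker.subscribe("orders", lambda m: received.append(("audit", m)))
broker.publish("orders", {"order_id": 1})
print(received)
```

In AWS, the broker role is played by SNS (often fanned out to SQS queues), and the handlers would be independent microservices rather than in-process callbacks.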

Decomposition pattern by business capability

How do you create and design microservices? If you are migrating existing systems, you might look at decomposing a monolith or application into microservices. Even for a new green-field project, you will want to think about the microservices that are required:

First, you identify the business capability, that is, what an organization does in order to generate value, rather than how. That is, you need to analyze purpose, structure, and business processes. Once you identify the business capabilities, you define a service for each capability or capability group. You then need to add more details to understand what the service does by defining the available methods or operations. Finally, you need to architect how the services will communicate.

The benefit of this approach is that it is relatively stable, as it is linked to what your business offers. In addition, it is linked to processes and structure.

The drawbacks are that the data can span multiple services, communication between services might not be optimal, code may need to be shared, and a centralized enterprise language model is needed.

Decomposition pattern by bounded context

There are three steps to apply the decomposition pattern by bounded context: first, identify the domain, which is what an organization does. Then identify the subdomain, which is to split intertwined models into logically-separated subdomains according to their actual functionality. Finally, find the bounded context to mark off where the meaning of every term used by the domain model is well understood. Bounded context does not necessarily fall within only a single subdomain. The three steps are as follows:

The benefits of this pattern are as follows:

  • Use of Ubiquitous Language where you work with domain experts, which helps with wider communication.
  • Teams own, deploy, and maintain services, giving them flexibility and a deeper understanding within their bounded context. This is good because services within it are most likely to talk to each other.
  • The domain is understood by the team with a representative domain expert. There is an interface that abstracts away a lot of the implementation details for other teams.

There are a few drawbacks as well:

  • It needs domain expertise.
  • It is iterative and needs continuous integration (CI) to be in place.
  • It is overly complex for a simple domain, and depends on the Ubiquitous Language and domain experts.
  • If a polyglot approach was used, it's possible no one knows the tech stack any more. Luckily, microservices should be smaller and simpler, so these can be rewritten.

More details can be found in the following books:

  • Building Microservices, Sam Newman (2015)
  • Domain-Driven Design: Tackling Complexity in the Heart of Software, Eric Evans (2003)
  • Implementing Domain-Driven Design, Vaughn Vernon (2013)

Serverless computing in AWS

Serverless computing in AWS allows you to quickly deploy event-driven computing in the cloud. With serverless computing, there are still servers, but you don't have to manage them. AWS automatically manages all the computing resources for you, as well as any trigger mechanisms. For example, when an object gets written to a bucket, that could trigger an event. If another service writes a new record to an Amazon DynamoDB table, that could trigger an event or an endpoint to be called.

The main idea of using event-driven computing is that it easily allows you to transform data as it arrives in the cloud, or to perform data-driven auditing, analysis, notifications, and transformations, or to parse Internet of Things (IoT) device events. Serverless also means that you don't need an always-on running service to do this; you can trigger the code based on the event.

Overview of some of the key serverless services in AWS

Some key serverless services in AWS are explained in the following list:

  • Amazon Simple Storage Service (S3): A distributed web-scale object store that is highly scalable, highly secure, and reliable. You only pay for the storage that you actually consume, which makes it beneficial in terms of pricing. It also supports encryption, where you can provide your own key or you can use a server-side encryption key provided by AWS.
  • Amazon DynamoDB: A fully-managed NoSQL database service that allows you to focus on writing the data to the data store. It's highly durable and available. It has been used in gaming and other high-performance activities that require low latency. It uses SSD storage under the hood and also provides partitioning for high availability.
  • Amazon Simple Notification Service (SNS): A push-notification service that allows you to send notifications to subscribers. These subscribers could be email addresses, SMS recipients, or queues. The messages get pushed to any subscriber of the SNS topic.
  • Amazon Simple Queue Service (SQS): A fully-managed and scalable distributed message queue that is highly available and durable. SQS queues are often subscribed to SNS topics to implement the distributed publish-subscribe pattern. You pay for what you use based on the number of requests.
  • AWS Lambda: The main idea is you write your business logic code and it gets triggered based on the event sources you configure. The beauty is that you only pay for when the code is actually executed, down to 100-millisecond increments. It automatically scales and is highly available. It is one of the key components of the AWS serverless ecosystem.
  • Amazon API Gateway: A managed API service that allows you to build, publish, and manage APIs. It performs at scale and allows you to perform traffic throttling and caching in edge locations, which means responses are localized based on where the user is located, minimizing overall latency. In addition, it integrates natively with AWS Lambda functions, allowing you to focus on the core business logic code to parse that request or data.
  • AWS Identity and Access Management (IAM): The central component of all security is IAM roles and policies, which are basically a mechanism managed by AWS for centralizing security and federating it to other services. For example, you can allow a Lambda to only read a specific DynamoDB table, but not to write to the same DynamoDB table, and deny read/write access to any other tables.
  • Amazon CloudWatch: A central system for monitoring services. You can, for example, monitor the utilization of various resources, record custom metrics, and host application logs. It is also very useful for creating rules that trigger a notification when specific events or exceptions occur.
  • AWS X-Ray: A service that allows you to trace service requests and analyze latency and traces from various sources. It also generates service maps, so you can see the dependency and where the most time is spent in a request, and do root cause analysis of performance issues and errors.
  • Amazon Kinesis Streams: A streaming service that allows you to capture millions of events per second that you can analyze further downstream. The main idea is you would have, for example, thousands of IoT devices writing directly to Kinesis Streams, capturing that data in one pipe, and then analyzing it with different consumers. If the number of events goes up and you need more capacity, you can simply add more shards, each with a capacity of 1,000 writes per second. Adding shards is simple, involves no downtime, and doesn't interrupt the event capture.
  • Amazon Kinesis Firehose: A system that allows you to persist and load streaming data. It allows you to write to an endpoint that would buffer up the events in memory for up to 15 minutes, and then write it into S3. It supports massive volumes of data and also integrates with Amazon Redshift, which is a data warehouse in the cloud. It also integrates with the Elasticsearch service, which allows you to query free text, web logs, and other unstructured data.
  • Amazon Kinesis Analytics: Allows you to analyze data that is in Kinesis Streams using structured query language (SQL). It also has the ability to discover the data schema so that you can use SQL statements on the stream. For example, if you're capturing web analytics data, you could count the daily page view data and aggregate them up by specific pageId.
  • Amazon Athena: A service that allows you to directly query S3 using a schema on read. It relies on the AWS Glue Data Catalog to store the table schemas. You can create a table and then query the data straight off S3, there's no spin-up time, it's serverless, and allows you to explore massive datasets in a very flexible and cost-effective manner.

Among all these services, AWS Lambda is the most widely used serverless service in AWS. We will discuss more about that in the next section.

AWS Lambda

The key serverless component in AWS is called AWS Lambda. A Lambda is basically some business logic code that can be triggered by an event source:

A data event source could be the put or get of an object in an S3 bucket. A streaming event source could be new records that have been written to a DynamoDB table, triggering a Lambda function. Other streaming event sources include Kinesis Streams and SQS.

One example of requests to endpoints is Alexa skills, from Amazon Echo devices. Another popular one is Amazon API Gateway: when you call an endpoint, it invokes a Lambda function. In addition, you can use changes in AWS CodeCommit or Amazon CloudWatch.

Finally, you can trigger different events and messages based on SNS or different cron events. These would be regular events or they could be notification events.

The main idea is that the integration between the event source and the Lambda is managed fully by AWS, so all you need to do is write the business logic code, which is the function itself. Once you have the function running, you can run either a transformation or some business logic code to actually write to other services on the right of the diagram. These could be data stores or invoke other endpoints.

In the serverless world, you can implement sync/async requests, messaging, or event stream processing much more easily using AWS Lambda. This includes the microservice communication styles and data-management patterns we just talked about.

Lambda has two types of event sources, non-stream event sources and stream event sources:

  • Non-stream event sources: Lambdas can be invoked asynchronously or synchronously. For example, SNS/S3 are asynchronous, but API Gateway is synchronous. For sync invocations, the client is responsible for retries; for async, AWS will retry many times before sending the event to a Dead Letter Queue (DLQ), if one is configured. It's great to have this retry logic and integration built in and supported by AWS event sources, as it means less code and a simpler architecture:

  • Stream event sources: The Lambda is invoked with micro-batches of data. In terms of concurrency, one Lambda is invoked in parallel per shard for Kinesis Streams, or one Lambda per partition for DynamoDB Streams. Within the Lambda, you just need to iterate over the Kinesis Streams, DynamoDB, or SQS data passed in as JSON records. In addition, you benefit from the AWS built-in streams integration, where the Lambda will poll the stream and retrieve the data in order, and will retry upon failure until the data expires, which can be up to seven days for Kinesis Streams. It's also great to have that retry logic built in without having to write a line of code. It would be much more effort to build this yourself as a fleet of EC2 instances or containers using the AWS Consumer or Kinesis SDK:

In essence, AWS is responsible for the invocation and for passing the event data into the Lambda; you are responsible for the processing and the response of the Lambda.
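As a sketch of the stream pattern, here is a hypothetical handler for a Kinesis event source. The event shape follows the standard Kinesis record structure, but the payload fields and processing logic are invented, and the handler is invoked locally with a sample event rather than by AWS:

```python
import base64
import json

def handler(event, context):
    """Hypothetical Lambda handler for a Kinesis Streams event source.
    AWS invokes it with a micro-batch of records; we decode each one."""
    processed = []
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        processed.append(payload)
    # A real handler would transform/store the payloads here.
    return {"records_processed": len(processed)}

# Local invocation with a sample event, no AWS required:
sample_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(
            json.dumps({"device_id": "sensor-1", "temp": 21}).encode()
        ).decode()}}
    ]
}
print(handler(sample_event, None))
```

Testing the handler locally with a hand-built event like this is a useful habit before wiring up the real event source mapping.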

Serverless computing to implement microservice patterns

Here is an overview diagram of some of the serverless and managed services available on AWS:

Leveraging AWS-managed services does mean additional vendor lock-in, but it helps you reduce non-business-differentiating support and maintenance costs, and lets you deploy your applications faster, as the infrastructure can be provisioned or destroyed in minutes. In some cases, when using AWS-managed services to implement microservice patterns, very little code is needed, only configuration.

We have services for the following:

  • Events, messaging, and notifications: For async publish/subscribe and coordinating components
  • API and web: To create APIs for your serverless microservices and expose them to the web
  • Data and analytics: To store, share, and analyze your data
  • Monitoring: Making sure your microservices and stack are operating correctly
  • Authorization and security: To ensure that your services and data are secure, and only accessed by those authorized

At the center is AWS Lambda, the glue for connecting services, but also one of the key places for you to deploy your business logic source code.

Example use case – serverless file transformer

Here is an example use case, to give you an idea of how different managed AWS systems can fit together as a solution. The requirements are that a third-party vendor is sending us a small 10 MB file daily at random times, and we need to transform the data and write it to a NoSQL database so it can be queried in real time. If there are any issues with the third-party data, we want to send an alert within a few minutes. Your boss tells you that they don't want to have an always-on machine just for this task, the third party has no API development experience, and there is a limited budget. The head of security also finds out about this project and adds another constraint: they don't want to give the third party access to your AWS account beyond one locked-down S3 bucket:

This can be implemented as an event-driven serverless stack. On the left, we have an S3 bucket where the third party has access to drop their file. When a new object is created, that triggers a Lambda invocation via the built-in event source mapping. The Lambda executes code to transform the data, for example, extracting key records such as user_id, date, and event type from the object, and writes them to a DynamoDB table. The Lambda sends summary custom metrics of the transformation, such as the number of records transformed and written, to CloudWatch metrics. In addition, if there are transformation errors, the Lambda sends an SNS notification with a summary of the transformation issues, which could generate an email to the administrator and third-party provider for them to investigate the issue.
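A hedged sketch of the transformer Lambda might look like the following. The CSV column names are invented, and the S3 download and DynamoDB write are stubbed out with a hard-coded body so the handler can be exercised locally:

```python
import csv
import io

def transform(csv_text: str) -> list:
    """Extract the key fields from the third party's CSV (hypothetical
    column names); in the real Lambda these rows would be written to
    DynamoDB, with summary metrics sent to CloudWatch."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"user_id": row["user_id"], "date": row["date"], "event": row["event"]}
        for row in reader
    ]

def handler(event, context):
    # The S3 event source mapping passes the bucket and object key.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    # In AWS you would fetch the object with the SDK here; stubbed so
    # the sketch runs locally without credentials.
    body = "user_id,date,event\n42,2019-03-29,login\n"
    rows = transform(body)
    return {"source": f"{bucket}/{key}", "records_written": len(rows)}

sample_event = {"Records": [{"s3": {"bucket": {"name": "vendor-drop"},
                                    "object": {"key": "daily.csv"}}}]}
print(handler(sample_event, None))
```

Keeping the pure transformation separate from the handler, as above, makes the business logic easy to unit test without any AWS services.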

Setting up your serverless environment

If you already have an AWS account and have configured it locally, you can skip this section, but for security reasons, I recommend you enable Multi-Factor Authentication (MFA) for console access and do not use the root user account keys for this book's exercises.

There are three ways to access resources in AWS:

  • The AWS Management Console is a web-based interface to manage your services and billing.
  • The AWS Command Line Interface is a unified tool to manage and automate all your AWS services.
  • The software development kits (SDKs), available in Python, JavaScript, Java, .NET, and Go, allow you to programmatically interact with AWS.

Setting up your AWS account

It's very simple to set up an account; all you need is about five minutes, a smartphone, and a credit card:

  1. Create an account. AWS accounts include 12 months of Free Tier access: https://aws.amazon.com/free/.
  2. Enter your name and address.
  3. Provide a payment method.
  4. Verify your phone number.

This will create a root account; I recommend you use it only for billing and not for development.

Setting up MFA

I recommend you use MFA as it adds an extra layer of protection on top of your username and password. It's free using your mobile phone as a Virtual MFA Device (https://aws.amazon.com/iam/details/mfa/). Perform the following steps to set it up:

  1. Sign in to the AWS Management Console: https://console.aws.amazon.com.
  2. Choose Dashboard on the left menu.
  3. Under Security Status, expand Activate MFA on your root account.
  4. Choose Activate MFA or Manage MFA.
  5. In the wizard, choose Virtual MFA device, and then choose Continue.
  6. Install an MFA app such as Authy (https://authy.com/).
  7. Choose Show QR code, then scan the QR code with your smartphone. Click on the account and generate an Amazon six-digit token.
  8. Type the six-digit token in the MFA code 1 box.
  9. Wait for your phone to generate a new token, which is generated every 30 seconds.
  10. Type the six-digit token into the MFA code 2 box.
  11. Choose Assign MFA:

Setting up a new user with keys

For security reasons, I recommend you use the root account only for billing! So, the first thing is to create another user with fewer privileges:

Create a user with the following steps:

  1. Sign in to the AWS Management Console (https://console.aws.amazon.com/).
  2. Choose Security, Identity, & Compliance > IAM, or search for IAM under Find services.
  3. On the IAM page, choose Add User.
  4. For User name, type newuser on the Set user details pane.
  5. For Select AWS access type, select the checkboxes next to Programmatic access and AWS Console access. Optionally select Autogenerated password and Require password reset.
  6. Choose Next: Permissions:

Follow these steps to set the permission for the new user:

  1. Choose Create group.
  2. In the Create group dialog box, type Administrator for the new group name.
  3. In the policy list, select the checkbox next to AdministratorAccess (note that, for non-proof-of-concept or non-development AWS environments, I recommend using more restricted access policies).
  4. Select Create group.
  5. Choose Refresh and ensure the checkbox next to Administrator is selected.
  6. Choose Next: Tags.
  7. Choose Next: Review.
  8. Choose Create user.
  9. Choose Download .csv and take a note of the keys and password. You will need these to access the account programmatically and to log on as this user.
  10. Choose Close.

As with the root account, I recommend you enable MFA:

  1. In the Management Console, choose IAM | User and choose the newuser.
  2. Choose the Security Credentials tab, then choose Manage next to Assigned MFA device Not assigned.
  3. Choose a virtual MFA device and choose Continue.
  4. Install an MFA application such as Authy (https://authy.com/).
  5. Choose Show QR code, then scan the QR code with your smartphone. Click on the Account and generate an Amazon six-digit token.
  6. Type the six-digit token in the MFA code 1 box.
  7. Wait for your phone to generate a new token, which is generated every 30 seconds.
  8. Type the six-digit token into the MFA code 2 box.
  9. Choose Assign MFA.

Managing your infrastructure with code

A lot can be done with the web interface in the AWS Management Console. It's a good place to start and help you to understand what you are building, but most often it is not recommended for production deployments as it is time-consuming and prone to human error. Best practice is to deploy and manage your infrastructure using code and configuration only. We will be using the AWS Command-line Interface (CLI), bash shell scripts, and Python 3 throughout this book, so let's set these up now.

Installing bash on Windows 10

Please skip this step if you are not using Windows.

Using bash (Unix shell) makes your life much easier when deploying and managing your serverless stack. I think all analysts, data scientists, architects, administrators, database administrators, developers, DevOps, and technical people should know some basic bash and be able to run shell scripts, which are typically used on Linux and Unix (including the macOS Terminal).

Alternatively, you can adapt the scripts to use MS-DOS or PowerShell, but it's not something I recommend, given that bash can now run natively on Windows 10 as an application, and there are many more examples online in bash.

Note that I have stripped off the \r or carriage returns, as they are illegal in shell scripts. You can use something such as Notepad++ (https://notepad-plus-plus.org/) on Windows if you want to view the carriage returns in your files properly. If you use traditional Windows Notepad, the new lines may not be rendered at all, so use Notepad++, Sublime (https://www.sublimetext.com/), Atom (https://atom.io/), or another editor.

A detailed guide on how to install Linux Bash shell on Windows 10 can be found at https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/. The main steps are as follows:

  1. Navigate to Control Panel | Programs | Turn Windows Features On Or Off.
  2. Choose the checkbox next to the Windows Subsystem for Linux option in the list, and then choose OK.
  3. Navigate to Microsoft Store | Run Linux on Windows and select Ubuntu.
  4. Launch Ubuntu and set up a root account with a username and password. The Windows C:\ and other drives are already mounted, and you can access them with the following command in the Terminal:
$ cd /mnt/c/

Well done, you now have full access to Linux on Windows!

Updating Ubuntu, installing Git and Python 3

Git will be used later on in this book:

$ sudo apt-get update
$ sudo apt-get -y upgrade
$ sudo apt-get -y install git-core

The Lambda code is written in Python 3.6. pip is a tool for installing and managing Python packages. Other popular Python package and dependency managers are available, such as Conda (https://conda.io/docs/index.html) or Pipenv (https://pipenv.readthedocs.io/en/latest/), but we will be using pip as it is the recommended tool for installing packages from the Python Package Index PyPI (https://pypi.org/) and is the most widely supported:

$ sudo apt -y install python3.6
$ sudo apt -y install python3-pip

Check the Python version:

$ python3 --version

You should get Python version 3.6+.

The dependent packages required for running, testing, and deploying the serverless microservices are listed in requirements.txt under each project folder, and can be installed using pip:

$ sudo pip install -r /path/to/requirements.txt

This will install the dependent libraries for local development, such as Boto3, which is the Python AWS Software Development Kit (SDK).

In some projects, there is a file called lambda-requirements.txt, which contains the third-party packages that are required by the Lambda when it is deployed. We have created this other requirements file as the Boto3 package is already included when the Lambda is deployed to AWS, and the deployed Lambda does not need testing-related libraries, such as nose or locust, which increase the package size.
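As an illustration of this split (the exact package lists vary; check each project folder for the actual files and pins), the two requirements files might look like:

```
# requirements.txt -- local development and testing
boto3
nose
locust

# lambda-requirements.txt -- only what the deployed Lambda needs;
# boto3 is omitted because the Lambda runtime already includes it,
# and test tools such as nose and locust would only bloat the package
```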

Installing and setting up the AWS CLI

The AWS CLI is used to package and deploy your Lambda functions, as well as to set up the infrastructure and security in a repeatable way:

$ sudo pip install awscli --upgrade

You created a user called newuser earlier and have a credentials.csv file with the AWS keys. Enter them by running aws configure:

$ aws configure
AWS Access Key ID: <the Access key ID from the csv>
AWS Secret Access Key: <the Secret access key from the csv>
Default region name: <your AWS region such as eu-west-1>
Default output format: <optional>

More details on setting up the AWS CLI are available in the AWS docs (https://docs.aws.amazon.com/lambda/latest/dg/welcome.html).

To choose your AWS Region, refer to AWS Regions and Endpoints (https://docs.aws.amazon.com/general/latest/gr/rande.html). Generally, those in the USA use us-east-1 and those in Europe use eu-west-1.

Summary

In this chapter, we got an overview of monolithic and microservices architectures. We then talked about the design patterns and principles and how they relate to serverless microservices. We also saw how to set up the AWS and development environment that will be used in this book.

In the next chapter, we will create a serverless microservice that exposes a REST API and is capable of querying a NoSQL store built using API Gateway, Lambda, and DynamoDB.

Left arrow icon Right arrow icon
Download code icon Download Code

Key benefits

  • Create a secure, cost-effective, and scalable serverless data API
  • Use identity management and authentication for a user-specific and secure web application
  • Go beyond traditional web hosting to explore the full range of cloud hosting options

Description

Over the last few years, there has been a massive shift from monolithic architecture to microservices, thanks to their small and independent deployments that allow increased flexibility and agile delivery. Traditionally, virtual machines and containers were the principal mediums for deploying microservices, but they involved a lot of operational effort, configuration, and maintenance. More recently, serverless computing has gained popularity due to its built-in autoscaling abilities, reduced operational costs, and increased productivity. Building Serverless Microservices in Python begins by introducing you to serverless microservice structures. You will then learn how to create your first serverless data API and test your microservice. Moving on, you'll delve into data management and work with serverless patterns. Finally, the book introduces you to the importance of securing microservices. By the end of the book, you will have gained the skills you need to combine microservices with serverless computing, making their deployment much easier thanks to the cloud provider managing the servers and capacity planning.

Who is this book for?

If you are a developer with basic knowledge of Python and want to learn how to build, test, deploy, and secure microservices, then this book is for you. No prior knowledge of building microservices is required.

What you will learn

  • Discover what microservices offer above and beyond other architectures
  • Create a serverless application with AWS
  • Gain secure access to data and resources
  • Run tests on your configuration and code
  • Create a highly available serverless microservice data API
  • Build, deploy, and run your serverless configuration and code
Estimated delivery fee Deliver to France

Premium delivery 7 - 10 business days

€10.95
(Includes tracking information)

Product Details

Country selected
Publication date, Length, Edition, Language, ISBN-13
Publication date : Mar 29, 2019
Length: 168 pages
Edition : 1st
Language : English
ISBN-13 : 9781789535297
Languages :
Concepts :
Tools :

What do you get with Print?

Product feature icon Instant access to your digital eBook copy whilst your Print order is Shipped
Product feature icon Paperback book shipped to your preferred address
Product feature icon Download this book in EPUB and PDF formats
Product feature icon Access this title in our online reader with advanced features
Product feature icon DRM FREE - Read whenever, wherever and however you want
Product feature icon AI Assistant (beta) to help accelerate your learning
OR
Modal Close icon
Payment Processing...
tick Completed

Shipping Address

Billing Address

Shipping Methods
Estimated delivery fee Deliver to France

Premium delivery 7 - 10 business days

€10.95
(Includes tracking information)

Product Details

Publication date : Mar 29, 2019
Length: 168 pages
Edition : 1st
Language : English
ISBN-13 : 9781789535297
Languages :
Concepts :
Tools :

Packt Subscriptions

See our plans and pricing

€18.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

€189.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or video every month to keep
  • PLUS own as many other DRM-free eBooks or videos as you like for just €5 each
  • Exclusive print discounts

€264.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or video every month to keep
  • PLUS own as many other DRM-free eBooks or videos as you like for just €5 each
  • Exclusive print discounts

Frequently bought together

  • Python Microservices Development: €41.99
  • Hands-On Docker for Microservices with Python: €32.99
  • Building Serverless Microservices in Python: €24.99

Total: €99.97

Table of Contents

7 Chapters
  1. Serverless Microservices Architectures and Patterns
  2. Creating Your First Serverless Data API
  3. Deploying Your Serverless Stack
  4. Testing Your Serverless Microservice
  5. Securing Your Microservice
  6. Summary and Future Work
  7. Other Books You May Enjoy

Customer reviews

Rating distribution
Average rating: 3 out of 5 (5 ratings)
5 star: 20%
4 star: 20%
3 star: 20%
2 star: 20%
1 star: 20%
User Sep 16, 2019
5 out of 5 stars
Found this is a really good book to get me up and running with Serverless in AWS using Python, wish it were bit longer. Really like how the author introduces microservice architecture theory, with practical examples I can use and the in-depth and hands-on section on many forms of testing.
Amazon Verified review Amazon
Amazon Customer May 06, 2019
4 out of 5 stars
best for aws users trying to learn python scripting
Amazon Verified review Amazon
Andres G. T. Jul 25, 2020
3 out of 5 stars
It has a good intro to microservices concepts and good practice with lambda. However, if you plan to follow the exercises you may find it difficult. For example, sometimes the code it's repetitive and the author doesn't explain why, or he doesn't tell you that, after writing some python scripts, you have to run them locally first before running a test for a lambda function in AWS. Also, the GitHub site for the book is hard to find the code for the corresponding section in the book (at the least in the Kindle edition).
Amazon Verified review Amazon
Rob Shearer Apr 08, 2021
2 out of 5 stars
This can't really be called a "book". It's a tutorial, where you follow the directions given to create an AWS serverless service. There is a brief description of what each command does, but very little discussion of why or what other approaches are possible. Many of the steps are literally "cut and paste this code into the box".The real issue is that even as a tutorial it's atrocious. The quality of the code itself wouldn't pass muster in any serious programming team. There seems to be no rhyme or reason to what is abstracted or parameterized, leading to code that is overcomplicated and verbose while providing no real value for reuse. But it's also terribly written for a tutorial. When code from one step is expanded with a little bit of additional functionality in the next step, that isn't added as a few extra lines of code; instead the entire code is arbitrarily rewritten with slightly different function names for no obvious reason. There was clearly no intention of readers ever actually "working through" the tutorial by coding it up themselves (and experimenting with variations), only cutting and pasting.I wish I had the knowledge to comment on whether the tutorial genuinely offers modern best practice for developing serverless applications. I don't. But the author's obvious lack of any production software engineering expertise or experience does not inspire confidence.That this "book" was pushed out the door in this state must be taken as a reflection of its publisher. Based on this book, I will make sure not to waste money on any other Packt releases in the future.
Amazon Verified review Amazon
utente Sep 09, 2019
1 out of 5 stars
Terrible. Very badly written. It reads like the work of someone who was told they should write a book and jotted down a few rough notes. Terms are used without being defined, or defined only after their first use. Sentences with neither head nor tail. I stopped when I saw a black-and-white image with the caption: "The rectangles in green indicate the services...". Books like this destroy trust in an author and in a publisher. I didn't know them before; I will avoid them from now on. A pity, because I was genuinely interested in the topic :-(
Amazon Verified review Amazon

FAQs

What is the delivery time and cost of a print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM UK time start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM UK time (in our internal systems) on a business day, or at any time over the weekend, begin printing on the business day after next. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.
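The cutoff rule above can be sketched in code. This is an illustrative sketch only, not Packt's actual system: it assumes the order timestamp is already in UK local time and ignores public holidays.

```python
from datetime import datetime, timedelta

def print_start_date(order_placed: datetime):
    """Sketch of the cutoff rule: orders before 5 PM UK time on a business
    day start printing the next business day; orders after 5 PM, or placed
    on a weekend, start printing the business day after next."""
    def add_business_days(d, days):
        # Step forward one calendar day at a time, counting only Mon-Fri.
        while days > 0:
            d += timedelta(days=1)
            if d.weekday() < 5:  # 0-4 are Monday-Friday
                days -= 1
        return d

    is_weekend = order_placed.weekday() >= 5
    before_cutoff = order_placed.hour < 17  # 5 PM cutoff
    skip = 1 if (not is_weekend and before_cutoff) else 2
    return add_business_days(order_placed, skip).date()
```

For example, an order placed on a Monday at 11 AM starts printing on Tuesday, while one placed at 9 PM the same day starts printing on Wednesday, matching the example in the disclaimer.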


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders: a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

Customs duty or localized taxes may apply to shipments to recipient countries outside the EU27. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount or dimensions like weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • If you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
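The arithmetic behind those two examples can be sketched as follows. This is a hypothetical illustration only: it assumes the duty is the country's rate applied to the declared value once the threshold is reached, and the boundary handling (value exactly at the threshold) is an assumption matching the FAQ's figures.

```python
def import_duty(declared_value: float, threshold: float, rate: float) -> float:
    """Hypothetical duty calculation: no duty below the country's
    threshold; otherwise rate * declared value, rounded to cents."""
    if declared_value < threshold:
        return 0.0
    return round(declared_value * rate, 2)

# FAQ figures: Mexico at 19% on $50 gives $9.50;
# Turkey at 18% on €22 gives €3.96.
```

Actual duty rules vary by country and may use different bases (e.g., value plus shipping), so treat this only as a way to read the two examples above.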
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact [email protected] with your order details or payment transaction ID. If your order has already entered the shipment process, we will do our best to stop it. However, if it is already on its way to you, you can return it once you receive it by contacting us at [email protected] via the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel orders except in the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at [email protected] with the order number and issue details, as explained below:

  1. If you ordered an eBook, video, or print book incorrectly or accidentally, please contact the Customer Relations Team at [email protected] within one hour of placing the order and we will replace or refund the item cost.
  2. If your eBook or video file is faulty, or a fault occurs while the eBook or video is being made available to you (i.e., during download), contact the Customer Relations Team at [email protected] within 14 days of purchase, and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms your refund, you should receive it within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at [email protected] within 14 days of receipt with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal