Hands-On Serverless Computing

What serverless computing is not

So far, I have talked about serverless computing and the benefits it provides over traditional infrastructure architectures. Before we look at its drawbacks, let's spend some time understanding what is not serverless.

Comparison with PaaS

Given that serverless or FaaS applications tend to resemble Twelve-Factor applications, people often compare them to PaaS offerings. Look closely, though, and the difference is clear: PaaS offerings are not built to bring your entire application up and down for every request, whereas FaaS applications are built to do exactly that. For example, AWS Elastic Beanstalk, a popular PaaS offering from AWS, is not designed to bring up the entire infrastructure every time a request is processed. It does, however, provide developers with plenty of capabilities that make deploying and managing web applications very easy.
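
To make the difference concrete, here is a minimal sketch of what the FaaS unit of deployment looks like: a single handler function that the platform brings up on demand and invokes per request, rather than a long-running web application. The handler signature follows the standard AWS Lambda Python runtime convention; the event shape is illustrative.

```python
# A minimal AWS Lambda handler. The function is the whole unit of deployment:
# the platform creates an execution environment for it on demand and can scale
# it down to zero between requests, which is not how PaaS platforms such as
# Elastic Beanstalk operate.
import json


def lambda_handler(event, context):
    # 'event' carries the request payload (for example, from API Gateway);
    # 'context' exposes runtime metadata such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```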

The other key difference between PaaS and FaaS is scaling. With PaaS offerings, you still need a solution for scaling your application to match the load; with FaaS offerings, this is completely transparent. Many PaaS solutions, such as AWS Elastic Beanstalk, do offer autoscaling based on parameters such as CPU, memory, or network, but this is still not tailored to individual requests. That makes FaaS offerings much more efficient at scaling and gives you the best bang for your buck in terms of cost.
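
To illustrate how little scaling configuration FaaS exposes, here is a small sketch using the boto3 SDK; the function name is hypothetical. The only knob you typically touch is a concurrency limit, as opposed to instance counts and CPU-based scaling policies on a PaaS such as Elastic Beanstalk.

```python
# Sketch: capping per-function concurrency on AWS Lambda with boto3.
# Scaling itself is handled per request by the platform; this call only
# reserves (and thereby limits) concurrent executions for one function.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

lambda_client.put_function_concurrency(
    FunctionName="my-serverless-api",        # hypothetical function name
    ReservedConcurrentExecutions=100,
)

# Inspect the current setting.
print(lambda_client.get_function_concurrency(FunctionName="my-serverless-api"))
```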

Comparison with containers

Another popular comparison for FaaS offerings is with containers.

Let's look at what containers are, as we haven't discussed them so far. Containerization of infrastructure has been around for a long time, but it was popularized in 2013 by Docker, which combined operating system-level virtualization with filesystem images on Linux. This made it easy to build and deploy containerized applications. The biggest benefit of Docker containers is that, unlike virtual machines (VMs), they share the operating system kernel of the host, which dramatically reduces the size of the image required to run a containerized application. Because the footprint of Docker containers is so small, you can run many of them on the same host without a significant impact on the host system.
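
As a small illustration of how lightweight containers are to start, the following sketch uses the Docker SDK for Python (the docker package), assuming Docker is installed and running locally.

```python
# Sketch: running a tiny Alpine-based container from Python with docker-py.
# Because the container shares the host's kernel, it starts in a fraction of
# a second and its image is far smaller than a full virtual machine image.
import docker

client = docker.from_env()                  # connect to the local Docker daemon

output = client.containers.run(
    "alpine:latest",
    ["echo", "hello from a container"],
    remove=True,                            # clean up the container afterwards
)
print(output.decode())
```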

As container runtimes gained popularity, container platforms started to evolve. Some of the notable ones are:

  • Cloud Foundry
  • Apache Mesos

The next step in the evolution of containerization was providing an API that lets developers deploy Docker images across a fleet of compute infrastructure. The most popular container orchestration and scheduling platform today is Kubernetes. It grew out of Borg, an internal project that Google started around 2004; Google later open sourced it as Kubernetes, and it is now the most commonly used container platform. There are other solutions as well, such as Mesosphere.
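
As a sketch of what "an API for deploying images across a fleet" looks like in practice, the following uses the Kubernetes Python client to create a small Deployment; it assumes a kubeconfig is available locally, and the names and image are illustrative.

```python
# Sketch: creating a three-replica Deployment with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()                   # credentials from ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                           # run three pods
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:latest",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```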

A further advancement in container solutions came with cloud-hosted container platforms, such as Amazon EC2 Container Service (ECS) and Google Container Engine (GKE), which, like serverless offerings, eliminate the need to provision and manage the master nodes of the cluster. However, you still have to provision and manage the nodes on which the containers are scheduled to run, and many solutions exist to help maintain those nodes. AWS has introduced a new container offering called Fargate, which is completely serverless: you don't have to provision the nodes in the cluster ahead of time. It is still early days for Fargate, but if it delivers what it promises, it will help a lot of developers by freeing them from provisioning and managing nodes within the cluster.
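
To show what "no nodes to provision" means in practice, here is a hedged sketch of launching a task on Fargate with boto3; the cluster name, task definition, subnet, and security group IDs are all hypothetical placeholders.

```python
# Sketch: running a container task on AWS Fargate, where AWS manages the
# underlying compute, so no EC2 container instances need to exist beforehand.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.run_task(
    cluster="demo-cluster",                   # hypothetical cluster name
    taskDefinition="hello-web:1",             # hypothetical task definition
    launchType="FARGATE",                     # serverless data plane
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```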

Fundamentally, the same argument I made for PaaS holds true for containers as well. With serverless or FaaS offerings, scaling is managed automatically and transparently, and it is controlled at a fine-grained, per-request level. Container platforms do not yet offer an equivalent. There are scaling solutions within container platforms, such as Kubernetes Horizontal Pod Autoscaling, which scale based on the load on the system, but at the moment they don't offer the same level of control that FaaS offerings provide.
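
For comparison, here is a sketch of what container-platform autoscaling looks like: a CPU-based Horizontal Pod Autoscaler for the hypothetical Deployment from the earlier sketch, created with the Kubernetes Python client. Note that it reacts to average CPU utilization across pods, not to individual requests.

```python
# Sketch: a Horizontal Pod Autoscaler (autoscaling/v1) targeting a Deployment.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="hello-web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="hello-web"),
        min_replicas=1,
        max_replicas=10,
        target_cpu_utilization_percentage=70,   # scale out when average CPU > 70%
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```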

As the gap in scaling and management between FaaS offerings and hosted container solutions narrows, the choice between the two may come down to the type of application. For example, you might choose FaaS for applications that are event driven and containers for request-driven components with many entry points. I expect FaaS and container offerings to converge in the coming years.

#NoOps

Serverless computing, or FaaS, doesn't mean no Ops at all. Certainly, no system administration is required to provision and manage infrastructure to run your applications. However, some aspects of operations (Ops) are still involved in making sure your application runs the way it is supposed to. You need to monitor your application using the monitoring data that FaaS offerings provide and ensure that it is running optimally.
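
As one concrete example of the Ops work that remains, the sketch below pulls basic health metrics for a Lambda function from CloudWatch with boto3; the function name is hypothetical, and the metrics used (Invocations, Errors) are standard AWS/Lambda metrics.

```python
# Sketch: summing a Lambda function's CloudWatch metrics over the last hour.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")


def lambda_metric(metric_name, function_name="my-serverless-api"):
    """Sum a CloudWatch metric for one Lambda function over the last hour."""
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=metric_name,                 # e.g. "Invocations" or "Errors"
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in response["Datapoints"])


print("Invocations:", lambda_metric("Invocations"))
print("Errors:", lambda_metric("Errors"))
```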

Ops doesn't just mean system administration. It also includes monitoring, deploying, securing, and production debugging. With serverless apps you still have to do all of these, and in some cases Ops may be harder for serverless applications because the tooling is relatively new.
