The pros and cons of serverless

In this section, we will go through the various pros and cons associated with serverless computing.

Pros

We can list the following strengths:

  • Fast scalability
  • High availability
  • Efficient usage of resources
  • Reduced operational costs
  • Focus on business, not on infrastructure
  • System security is outsourced
  • Continuous delivery
  • Microservices friendly
  • Cost model is startup friendly

Let's skip the first three benefits, since they were covered in the previous pages, and take a look at the others.

Reduced operational costs

As the infrastructure is fully managed by the cloud vendor, operational costs are reduced: you don't need to worry about hardware failures, applying security patches to the operating system, or fixing network issues. In practice, this means spending fewer sysadmin hours to keep your application running.

It also helps to reduce risk. If you invest in deploying a new service and it ends up a failure, you don't need to worry about selling machines or dismantling a data center that you have built.

Focus on business

Lean software development states that you should spend time on what adds value to the final product. In a serverless project, the focus is on business; infrastructure is a second-class citizen.

Configuring a large infrastructure is a costly and time-consuming task. If you want to validate an idea through a Minimum Viable Product (MVP) without losing time to market, consider using serverless to save time. There are tools that automate deployment; we will use them throughout this book and see how they help developers launch a prototype with minimum effort. If the idea fails, infrastructure costs are minimized, since no payments are made in advance.

System security

The cloud vendor is responsible for managing the security of the operating system, runtime, physical access, networking, and all related technologies that enable the platform to operate. The developer still needs to handle authentication, authorization, and code vulnerabilities, but the rest is outsourced to the cloud provider. That's a positive feature when you consider that a large team of specialists is focused on implementing security best practices and patching vulnerabilities as soon as possible to serve their hundreds of customers. That's the definition of economies of scale.

Continuous delivery

Serverless is based on breaking a big project into dozens of packages, each one represented by a top-level function that handles requests. Deploying a new version of a function means uploading a ZIP file to replace the previous one and updating the event configuration that specifies how this function can be triggered.

Executing this task manually, for dozens of functions, is exhausting. Automation is a must-have when working on a serverless project. In this book, we'll use the Serverless Framework, which helps developers manage and organize solutions, making deployment as simple as executing a one-line command. With automation, continuous delivery brings many benefits, such as the ability to deploy at any time, short development cycles, and easier rollbacks.
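
To give a taste of what this looks like, here is a minimal sketch of a Serverless Framework service definition; the service and function names are just placeholders for illustration:

    # serverless.yml: a minimal service definition (illustrative names)
    service: online-store

    provider:
      name: aws
      runtime: nodejs6.10

    functions:
      products:
        handler: handlers.products   # the products function exported by handlers.js
        events:
          - http:                    # expose the function through API Gateway
              path: products
              method: get

With this file in place, deploying the whole service is a single command:

    serverless deploy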

Another related benefit of automated deployment is the ease of creating different environments. You can create a new test environment that is an exact duplicate of the development environment using simple commands. The ability to replicate an environment is very important for building acceptance tests and for progressing from development to production.
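
With the Serverless Framework, for example, an isolated copy of the service can be deployed just by naming a different stage; the stage name below is only an example:

    # deploy a separate, isolated instance of the service for testing
    serverless deploy --stage test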

Microservices friendly

Microservices is a topic that will be discussed in more detail later in this book. In short, a Microservices architecture is encouraged in a serverless project. As your functions are single units of deployment, you can have different teams working concurrently on different use cases. You can also use different programming languages in the same project and take advantage of emerging technologies or team skills, as sketched below.
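
As an illustration of this flexibility, the Serverless Framework lets a function override the provider's default runtime, so two teams could ship functions in different languages within the same service; the function names here are hypothetical, and this assumes both runtimes are available on your platform:

    functions:
      checkout:
        handler: checkout.handler      # JavaScript, using the provider's default runtime
      recommendations:
        handler: recommender.handler
        runtime: python3.6             # this function alone runs on Python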

Cost model

Suppose that you have built an online store with serverless. The average user will make some requests to see a few products and a few more requests to decide whether they will buy something or not. In serverless, a single unit of code has a predictable time to execute for a given input. After collecting some data, you can predict how much a single user costs on average, and this unit cost will remain almost constant as your application grows in usage.

Knowing how much a single user costs, and keeping this number fixed, is very important for a startup. It helps you decide how much you need to charge for a service, or earn through ads or sales, to make a profit.
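
As a rough sketch of this reasoning, the following back-of-the-envelope calculation uses AWS Lambda's pricing at the time of writing ($0.20 per million requests plus $0.00001667 per GB-second), ignoring the free tier and the 100-millisecond billing granularity; the traffic figures are invented for illustration:

    // Estimate the average cost of a single user (illustrative numbers)
    const requestsPerUser = 20;     // average requests per visit
    const avgDurationSec = 0.2;     // average execution time per request
    const memoryGb = 0.5;           // 512 MB reserved by the function

    const pricePerRequest = 0.20 / 1e6;    // USD per request
    const pricePerGbSecond = 0.00001667;   // USD per GB-second of execution

    const costPerUser =
      requestsPerUser * (pricePerRequest + avgDurationSec * memoryGb * pricePerGbSecond);

    console.log(costPerUser.toFixed(8));   // ~0.00003734 USD per user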

In a traditional infrastructure, you need to make payments in advance, and scaling your application means increasing your capacity in steps. So, calculating the unit cost of a user is more difficult, and the number varies as you grow.

In short, a traditional infrastructure has stepped costs, because capacity is added in increments, whereas a serverless infrastructure has linear costs that follow usage.

Cons

Serverless is great, but no technology is a silver bullet. You should be aware of the following issues:

  • Higher latency
  • Constraints
  • Hidden inefficiencies
  • Vendor dependency
  • Debugging difficulties
  • Atomic deploys
  • Uncertainties

We will address these drawbacks in detail now.

Higher latency

Serverless is event-driven, so your code is not running all the time. When a request is made, it triggers a service that finds your function, unzips the package, loads it into a container, and makes it available for execution. The problem is that those steps take time: up to a few hundred milliseconds. This issue is called a cold start delay, and it's the trade-off between the cost-effective serverless model and the lower latency of traditional hosting.

There are some solutions available to minimize this performance problem. For example, you can configure your function to reserve more RAM, which gives it a faster start and better overall performance. The programming language also matters: Java has a higher cold start time than JavaScript (Node.js).
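
With the Serverless Framework, for instance, the memory reservation is a one-line setting per function; the 1,024 MB below is just an example value:

    functions:
      products:
        handler: handlers.products
        memorySize: 1024   # in MB; more memory also means proportionally more CPU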

Another solution is to take advantage of the fact that the cloud provider may cache the loaded code, which means that the first execution will have a delay but subsequent requests will benefit from lower latency. You can exploit this by aggregating a large number of functionalities into a single function: that package will be executed more frequently and will often skip the cold start issue. The downside is that a big package takes more time to load, provoking a higher first start time.

As a last resort, you could schedule another service to ping your functions periodically, such as once every 5 minutes, to prevent them from going to sleep. This adds cost, but it removes the cold start problem.
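
One way to implement this ping, sketched here with the Serverless Framework, is a scheduled event that invokes the function at a fixed rate:

    functions:
      products:
        handler: handlers.products
        events:
          - schedule: rate(5 minutes)   # periodic invocation keeps the container warm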

There is also the concept of serverless databases: services where the database is fully managed by the vendor and you pay only for the storage and the time spent executing the database engine. Those solutions are wonderful, but they add a second layer of delay to your requests.

Constraints

If you go serverless, you need to know what the vendor's constraints are. For example, on AWS, you can't run a Lambda function for more than 5 minutes. This makes sense: if you spend a long time running code, you are using the service the wrong way. Serverless was designed to be cost-efficient in short bursts. For constant and predictable processing, it will be expensive.

Another constraint on AWS Lambda is the number of concurrent executions across all functions within a given region. Amazon limits this to 1,000. Suppose that your functions need 100 milliseconds on average to execute. In this scenario, you can handle up to 10,000 requests per second. The reasoning behind this restriction is to avoid excessive costs due to programming errors that may create runaway or recursive executions.

AWS Lambda has a default limit of 1,000 concurrent executions. However, you can file a case with the AWS Support Center to raise this limit. If you state that your application is ready for production and that you understand the risks, they will probably increase this value.

When monitoring your Lambda functions using Amazon CloudWatch (more on this in Chapter 10, Testing, Deploying, and Monitoring), there is a metric called Throttles. Each invocation that exceeds the safety limit of concurrent calls is counted as one throttle. You can configure a CloudWatch alarm to receive an e-mail if this scenario occurs.
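
Such an alarm can be created with the AWS CLI, for example; the alarm name and the SNS topic ARN (which handles the e-mail delivery) are placeholders:

    aws cloudwatch put-metric-alarm \
      --alarm-name lambda-throttles \
      --namespace AWS/Lambda \
      --metric-name Throttles \
      --statistic Sum \
      --period 300 \
      --evaluation-periods 1 \
      --threshold 1 \
      --comparison-operator GreaterThanOrEqualToThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts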

Hidden inefficiencies

Some people see serverless as a NoOps solution. That's not true. DevOps is still necessary. You don't need to worry much about servers, because they are second-class citizens and the focus is on your business, but adding metrics and monitoring your applications will always be good practice. It's so easy to scale that a specific function may be deployed with poor performance, taking much more time than necessary, and remain unnoticed forever because no one is monitoring the operation.

Over- or under-provisioning is also possible (on a smaller scale), since you need to configure your function by setting the amount of RAM it reserves and the threshold at which its execution times out. It's a very different scale of provisioning, but you need to keep it in mind to avoid mistakes.

Vendor dependency

When you build a serverless solution, you trust your business to a third-party vendor. You should be aware that companies fail, and you can suffer downtime, security breaches, and performance issues. The vendor may also change the billing model, increase costs, introduce bugs into their services, provide poor documentation, modify an API in a way that forces you to upgrade, or terminate services. A whole bunch of bad things may happen.

What you need to weigh is whether it's worth trusting another company or making a big investment to build everything yourself. You can mitigate these problems by doing market research before selecting a vendor, but you still need to count on luck. For example, Parse was a vendor that offered managed services with really nice features. It was bought by Facebook in 2013, which suggested more reliability, since it was backed by a big company. Unfortunately, Facebook decided to shut down all of Parse's servers in 2016, giving customers one year of notice to migrate to other vendors.

Vendor lock-in is another big issue. When you use cloud services, it's very likely that a specific service has a completely different implementation from its equivalent at another vendor, resulting in two different APIs, and you need to rewrite code if you decide to migrate. This is already a common problem: if you use a managed service to send e-mails, you need to rewrite part of your code before migrating to another vendor. What raises a red flag here is that a serverless solution is entirely based on one vendor, so migrating the entire codebase can be much more troublesome.

To mitigate this problem, some tools, such as the Serverless Framework, support multiple vendors, making it easier to switch between them. Multi-vendor support represents safety for your business and fosters competition.

Debugging difficulties

Unit testing a serverless solution is fairly simple, because any code that your functions rely on can be separated into modules and unit tested. Integration tests are a little more complicated, because you need to be online to test against external services.
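
For example, keeping the business logic in a plain module makes it testable without touching any cloud service; the file and function names below are hypothetical:

    // lib/pricing.js: pure business logic, with no AWS dependencies
    exports.applyDiscount = (price, percentage) => price * (1 - percentage / 100);

    // handler.js: a thin Lambda wrapper around the module
    const pricing = require('./lib/pricing');

    exports.checkout = (event, context, callback) => {
      const total = pricing.applyDiscount(event.price, event.discount);
      callback(null, { total });
    };

    // test/pricing.test.js: the unit test runs entirely offline
    const assert = require('assert');
    const pricing = require('../lib/pricing');

    assert.strictEqual(pricing.applyDiscount(200, 50), 100);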

When it comes to debugging, to test a feature or fix an error, it's a whole different problem. You can't hook into an external service to see how your code behaves step by step. Also, these serverless services are not open source, so you can't run them in-house for testing. All you can do is log steps, which is a slow debugging approach, or extract the code and adapt it to run on your own servers so you can make local calls.
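
The Serverless Framework softens this a little by letting you invoke a function on your own machine with a sample event, although calls to external services still leave your control:

    # run the function locally, feeding it a JSON event file
    serverless invoke local --function checkout --path event.json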

Atomic deploys

Deploying a new version of a serverless function is easy. You update the code and, the next time a trigger requests this function, your newly deployed code will be selected to run. This means that, for a brief moment, two instances of the same function may execute concurrently with different implementations. Usually, that's not a problem, but when you deal with persistent storage and databases, you should be aware that a new piece of code may insert data in a format that an old version can't understand.

Also, if you want to deploy a function that relies on a new implementation of another function, you need to be careful about the order in which you deploy those functions. Correct ordering is often not guaranteed by the tools that automate the deployment process.

The problem here is that current serverless implementations consider deployment to be an atomic process for each individual function, so you can't deploy a group of functions atomically. You can mitigate this issue by disabling the event source while you deploy a specific group, but that means introducing downtime into the deployment process. Another option would be to use a Monolith approach instead of a Microservices architecture for serverless applications.

Uncertainties

Serverless is still a pretty new concept. Early adopters are braving this field, testing what works and which kinds of patterns and technologies can be used. Emerging tools are defining the development process, and vendors are releasing and improving new services. There are high expectations for the future, but the future isn't here yet. Some uncertainties still worry developers when it comes to building large applications. Being a pioneer can be rewarding, but it is also risky.

Technical debt is a concept that compares software development with finance: the easiest solution in the short run is not always the best overall solution, and when you make a bad decision at the beginning, you pay later with extra hours to fix it. Software is not perfect. Every architecture has pros and cons that add technical debt in the long run. The question is: how much technical debt does serverless add to the software development process? Is it more, less, or about the same as the kind of architecture that you are using today?
