Kubernetes for Serverless Applications

The Serverless Landscape

Welcome to the first chapter of Kubernetes for Serverless Applications. In this chapter, we are going to be looking at and discussing the following:

  • What do we mean by serverless and Functions as a Service?
  • What services are out there?
  • An example of Lambda by Amazon Web Services
  • An example of Azure Functions
  • Using the serverless toolkit
  • What problems can we solve using serverless and Functions as a Service?

I think it is important we start by addressing the elephant in the room, and that is the term serverless.

Serverless and Functions as a Service

When you say serverless to someone, the first conclusion they jump to is that you are running your code without any servers.

This can be quite a valid conclusion if you are using one of the public cloud services we will be discussing later in this chapter. However, when it comes to running in your own environment, you can't avoid having to run on a server of some sort.

Before we discuss what we mean by serverless and Functions as a Service, we should discuss how we got here. As people who work with me will no doubt tell you, I like to use the pets versus cattle analogy a lot, as it is quite an easy way to explain the differences between modern cloud infrastructure and a more traditional approach.

Pets, cattle, chickens, insects, and snowflakes

I first came across the pets versus cattle analogy back in 2012 from a slide deck published by Randy Bias. The slide deck was used during a talk Randy Bias gave on architectures for open and scalable clouds. Towards the end of the talk, he introduced the concept of pets versus cattle, which Randy attributes to Bill Baker, who at the time was an engineer at Microsoft.

The slide deck primarily talks about scaling out and not up; let's go into this in a little more detail and discuss some of the additions that have been made since the presentation was first given five years ago.

Pets

Pets are typically what we, as system administrators, spend our time looking after. They are traditional bare metal servers or virtual machines:

  • We name each server as we would a pet. For example, app-server01.domain.com and database-server01.domain.com.
  • When our pets are ill, we take them to the vet. This is much like a system administrator rebooting a server, checking the logs, and replacing faulty components to ensure that it is running healthily.
  • We pay close attention to our pets for years, much as we do with these servers. We monitor them for issues, patch them, back them up, and ensure they are fully documented.

There is nothing much wrong with running pets. However, you will find that the majority of your time is spent caring for them—this may be alright if you have a few dozen servers, but it does start to become unmanageable if you have a few hundred servers.

Cattle

Cattle are more representative of the instance types you should be running in public clouds such as Amazon Web Services (AWS) or Microsoft Azure, where you have auto scaling enabled.

  • You have so many cattle in your herd you don't name them; instead they are given numbers and tagged so you can track them. In your instance cluster, you can also have too many to name so, like cattle, you give them numbers and tag them. For example, an instance could be called ip123067099123.domain.com and tagged as app-server.
  • When a member of your herd gets sick, you shoot it, and if your herd requires it you replace it. In much the same way, if an instance in your cluster starts to have issues it is automatically terminated and replaced with a replica.
  • You do not expect the cattle in your herd to live as long as a pet typically would, likewise you do not expect your instances to have an uptime measured in years.
  • Your herd lives in a field and you watch it from afar, much like you don't monitor individual instances within your cluster; instead, you monitor the overall health of your cluster. If your cluster requires additional resources, you launch more instances and when you no longer require a resource, the instances are automatically terminated, returning you to your desired state.

Chickens

In 2015, Bernard Golden added to the pets versus cattle analogy by introducing chickens to the mix in a blog post titled Cloud Computing: Pets, Cattle and Chickens? Bernard suggested that chickens were a good term for describing containers alongside pets and cattle:

  • Chickens are more efficient than cattle; you can fit a lot more of them into the same space your herd would use. In the same way, you can fit a lot more containers into your cluster as you can launch multiple containers per instance.
  • Each chicken requires fewer resources than a member of your herd when it comes to feeding. Likewise, containers are less resource-intensive than instances, they take seconds to launch, and can be configured to consume less CPU and RAM.
  • Chickens have a much lower life expectancy than members of your herd. While cluster instances can have an uptime of a few hours to a few days, it is more than possible that a container will have a lifespan of minutes.
Unfortunately, Bernard's original blog post is no longer available. However, The New Stack have republished a version of it. You can find the republished version at https://thenewstack.io/pets-and-cattle-symbolize-servers-so-what-does-that-make-containers-chickens/.

Insects

Keeping in line with the animal theme, Eric Johnson wrote a blog post for Rackspace which introduced insects as a term to describe serverless and Functions as a Service.

Insects have a much lower life expectancy than chickens; in fact, some insects only have a lifespan of a few hours. This fits in with serverless and Functions as a Service as these have a lifespan of seconds.

Later in this chapter, we will be looking at public cloud services from AWS and Microsoft Azure which are billed in milliseconds, rather than hours or minutes.

Snowflakes

Around the time Randy Bias gave his talk which mentioned pets versus cattle, Martin Fowler wrote a blog post titled SnowflakeServer. The post described every system administrator's worst nightmare:

  • Every snowflake is unique and impossible to reproduce. Just like that one server in the office that was built and not documented by that one guy who left several years ago.
  • Snowflakes are delicate. Again, just like that one server—you dread it when you have to log in to it to diagnose a problem and you would never dream of rebooting it as it may never come back up.

Summing up

Once I have explained pets, cattle, chickens, insects, and snowflakes, I sum up by saying:

"Organizations who have pets are slowly moving their infrastructure to be more like cattle. Those who are already running their infrastructure as cattle are moving towards chickens to get the most out of their resources. Those running chickens are going to be looking at how much work is involved in moving their application to run as insects by completely decoupling their application into individually executable components."

Then finally I say this:

"No one wants to or should be running snowflakes."

In this book, we will be discussing insects, and I will assume that you know a little about the services and concepts that cover cattle and chickens.

Serverless and insects

As already mentioned, using the word serverless gives the impression that servers will not be needed. Serverless is a term used to describe an execution model.

When executing this model you, as the end user, do not need to worry about which server your code is executed on, as all of the decisions on placement, server management, and capacity are abstracted away from you; it does not mean that you literally do not need any servers.

Now, there are some public cloud offerings which abstract so much of the management of servers away from the end user that it is possible to write an application which does not rely on any user-deployed services; the cloud provider manages the compute resources needed to execute your code.

Typically these services, which we will look at in the next section, are billed for the resources used to execute your code in per second increments.

So how does that explanation fit in with the insect analogy?

Let's say I have a website that allows users to upload photos. As soon as the photos are uploaded they are cropped, creating several different sizes which will be used to display as thumbnails and mobile-optimized versions on the site.

In the pets and cattle world, this would be handled by a server which is powered on 24/7 waiting for users to upload images. Now this server probably is not just performing this one function; however, there is a risk that if several users all decide to upload a dozen photos each, then this will cause load issues on the server where this function is being executed.

We could take the chickens approach, which has several containers running across several hosts to distribute the load. However, these containers would more than likely be running 24/7 as well, watching for uploads to process. This approach could allow us to horizontally scale the number of containers out to deal with an influx of requests.

Using the insects approach, we would not have any services running at all. Instead, the function should be triggered by the upload process. Once triggered, the function will run, save the processed images, and then terminate. As the developer, you should not have to care how the service was called or where the service was executed, so long as you have your processed images at the end of it.
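To make this concrete, below is a minimal sketch of what such an upload-triggered function might look like. It is purely illustrative and not taken from this book's examples: the S3-style event shape, the bucket layout, the thumbnail sizes, and the use of the boto3 and Pillow libraries are all assumptions. The important point is the lifecycle: the function is handed an event describing the upload, produces the resized copies, and then exits.

import io

import boto3              # AWS SDK for Python; assumed to be available in the runtime
from PIL import Image     # Pillow; assumed to be packaged alongside the function

# Assumed output sizes for the thumbnail and mobile-optimized versions
THUMBNAIL_SIZES = [(1024, 1024), (320, 320), (100, 100)]

s3 = boto3.client("s3")

def handler(event, context):
    # Assume an S3-style notification: one record per uploaded object
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original image into memory
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        for width, height in THUMBNAIL_SIZES:
            image = Image.open(io.BytesIO(original)).convert("RGB")
            image.thumbnail((width, height))  # resize in place, preserving aspect ratio

            buffer = io.BytesIO()
            image.save(buffer, format="JPEG")
            buffer.seek(0)

            # Store the processed copy alongside the original
            s3.put_object(
                Bucket=bucket,
                Key="resized/{0}x{1}/{2}".format(width, height, key),
                Body=buffer,
            )

    # Once the handler returns, the function terminates; nothing is left running
    return {"status": "processed"}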

Public cloud offerings

Before we delve into the core subject of this book and start working with Kubernetes, we should have a look at the alternatives; after all, the services we are going to be covering in upcoming chapters are nearly all loosely based off these services.

The three main public cloud providers all offer a serverless service:

  • AWS Lambda from Amazon Web Services
  • Azure Functions from Microsoft
  • Cloud Functions from Google

Each of these services is supported by several different code frameworks. For the purposes of this book, we will not be looking at the code frameworks in too much detail, as using them is a design decision which has to be based on your code.

We are going to be looking at two of these services, Lambda from AWS and Functions by Microsoft Azure.

AWS Lambda

The first service we are going to look at is Lambda by AWS. The tagline for the service is quite a simple one:

"Run code without thinking about servers."

Now those of you who have used AWS before might be thinking the tagline makes it sound a lot like the AWS Elastic Beanstalk service. This service inspects your code base and then deploys it in a highly scalable and redundant configuration. Typically, this is the first step for most people in moving from pets to cattle as it abstracts away the configuration of the AWS services which provide the scalability and high availability.

Before we work through launching a hello world example, which we will be doing for all of the services, we will need an AWS account and its command-line tools installed.

Prerequisites 

First of all, you need an AWS account. If you don't have an account, you can sign up for an account at https://aws.amazon.com/:

While clicking on the Create a Free Account button and then following the onscreen instructions will give you 12 months' free access to several services, you will still need to provide credit or debit card details, and it is possible that you could incur costs.

For more information on the AWS free tier, please see https://aws.amazon.com/free/. This page lets you know which instance sizes and services are covered by the 12 months of free service, as well as letting you know about non-expiring offers on other services, which include AWS Lambda.

Once you have your AWS account, you should create a user using the AWS Identity and Access Management (IAM) service. This user can have administrator privileges and you should use that user to access both the AWS Console and the API.

For more details on creating an IAM user, see the following pages:

Using your AWS root account to launch services and access the API is not recommended; if the credentials fall into the wrong hands you can lose all access to your account. Using an IAM user rather than your root account, which you should also lock down using multi-factor authentication, means that you will always have access to your AWS account.

The final prerequisite is access to the AWS command-line client. I will be using macOS, but the client is also available for Linux and Windows. For information on how to install and configure the AWS command-line client, please see:

When configuring the AWS CLI, make sure you configure the default region as the one you will be accessing in the AWS web console, as there is nothing more confusing than running a command using the CLI and then not seeing the results in the web console.
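Configuration itself is a single interactive command which prompts for your IAM user's access keys, the default region, and the output format. The values shown below are placeholders (the access keys are AWS's documented example keys):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-1
Default output format [None]: json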

Once installed, you can test that you can access AWS Lambda from the command-line client by running:

$ aws lambda list-functions

This should return an empty list of functions like the one shown in the following screenshot:
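If no functions have been created yet, the command simply returns an empty JSON list:

{
    "Functions": []
}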

Now that we have an account set up, created, and logged in using a non-root user, and we have the AWS CLI installed and configured, we can look at launching our first serverless function.

Creating a Lambda function

In the AWS Console, click on the Services menu in the top-left of the screen and select Lambda by either using the filter box or clicking on the service in the list. When you first go to the Lambda service page within the AWS Console, you will be presented with a welcome page:

Clicking on the Create a function button will take us straight to the process of launching our first serverless function.

There are four steps to creating a function; the first thing we need to do is select a blueprint:

For the basic hello world function, we are going to be using a pre-built template called hello-world-python; enter this into the filter and you should be presented with two results, one is for Python 2.7 and the second uses Python 3.6:

Selecting hello-world-python and then clicking Export will give you the option of downloading the code used in the function in the lambda_function.py file and the template which is used by Lambda during step 3. This can be found in the template.yaml file.

The code itself, as you would imagine, is pretty basic. It does nothing other than return a value it is passed. If you are not following along, the contents of the lambda_function.py file are:

from __future__ import print_function

import json

print('Loading function')

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    print("value1 = " + event['key1'])
    print("value2 = " + event['key2'])
    print("value3 = " + event['key3'])
    return event['key1']  # Echo back the first key value
    #raise Exception('Something went wrong')
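The handler expects the incoming event to contain key1, key2, and key3, so any test payload needs to provide those keys. The default test event offered by the Lambda console is along these lines; this is the input we will edit when we test the function later:

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}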

The template.yaml file contains the following:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: A starter AWS Lambda function.
Resources:
  helloworldpython:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: lambda_function.lambda_handler
      Runtime: python2.7
      CodeUri: .
      Description: A starter AWS Lambda function.
      MemorySize: 128
      Timeout: 3
      Role: !<tag:yaml.org,2002:js/undefined> ''

As you can see, the template file configures both the Runtime, which in our case is python2.7, and some sensible settings for the MemorySize and Timeout values.

To continue to step 2, click on the function name, which for us is hello-world-python, and you will be taken to the page where we can choose how the function is triggered:

We are not going to be using a trigger just yet and we will look at these in a little more detail in the next function we launch; so for now, click on Next.

Step 3 is where we configure our function. There is quite a bit of information to enter here, but luckily a lot of the detail we need to enter has been pre-populated from the template we looked at earlier, as you can see from the following screenshot:

The details we need to enter are as follows: anything with a * is required and the information in italics was pre-populated and can be left as-is.

The following list shows all of the form fields and what should be entered:

  • Basic information:
    • Name*: myFirstFunction
    • Description: A starter AWS Lambda function
    • Runtime: Python 2.7
  • Lambda function code:
    • Code entry type: This contains the code for the function, there is no need to edit this
    • Enable encryption helpers: Leave unticked
    • Environment variables: Leave empty
  • Lambda function handler and role:
    • Handler*: lambda_function.lambda_handler
    • Role*: Leave Create new role from template(s) selected
    • Role name*: myFirstFunctionRole
    • Policy templates: We do not need a policy template for this function, leave blank

Leave the Tags and Advanced settings at the default values. Once the preceding information has been entered, click on the Next button to take us to step 4, which is the final step before our function is created.

Review the details on the page. If you are happy that everything has been entered correctly, click on the Create function button at the bottom of the page; if you need to change any information, click on the Previous button.

After a few seconds, you will receive a message confirming that your function has been created:

In the preceding screenshot, there is a Test button. Clicking this will allow you to invoke your function. Here you will be able to customize the values posted to the function. As you can see from the following screenshot, I have changed the values for key1 and key2:

Once you have edited the input, clicking on Save and test will store your updated input and then invoke the function:

Clicking on Details in the Execution result message will show you both the results of the function being invoked and also the resources used:

START RequestId: 36b2103a-90bc-11e7-a32a-171ef5562e33 Version: $LATEST
value1 = hello world
value2 = this is my first serverless function
value3 = value3
END RequestId: 36b2103a-90bc-11e7-a32a-171ef5562e33

The report for the request with the 36b2103a-90bc-11e7-a32a-171ef5562e33 ID looks like this:

  • Duration: 0.26 ms
  • Billed Duration: 100 ms
  • Memory Size: 128 MB
  • Max Memory Used: 19 MB

As you can see, it took 0.26 ms for the function to run and we were charged the minimum duration of 100 ms for this. The function could consume up to 128 MB of RAM, but we only used 19 MB during the execution.

Returning to the command line, running the following command again shows that our function is now listed:

$ aws lambda list-functions

The output of the preceding command is as follows:

We can also invoke our function from the command line by running the following command:

$ aws lambda invoke \
--invocation-type RequestResponse \
--function-name myFirstFunction \
--log-type Tail \
--payload '{"key1":"hello", "key2":"world", "key3":"again"}' \
outputfile.txt

As you can see from the preceding command, the aws lambda invoke command requires several flags:

  • --invocation-type: There are three types of invocation:
    • RequestResponse: This is the default option; it sends the request, which in our case is defined in the --payload section of the command. Once the request has been made, the client waits for a response.
    • Event: This sends the request and triggers an event. The client does not wait for a response and instead you receive an event ID back.
    • DryRun: This calls the function, but never actually executes it—this is useful when testing that the details used to invoke the function actually have the correct permissions.
  • --function-name: This is the name of the function we want to invoke.
  • --log-type: There is currently a single option here, Tail. This includes the last part of the execution log, base64 encoded, in the response.
  • --payload: The data we want to send to the function; typically this will be JSON.
  • outputfile.txt: The final part of the command defines where we want to store the output of the command; in our case it is a file called outputfile.txt which is being stored in the current working directory.

When invoking the command from the command line, you should get something like the following result:
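The exact values will differ, but because we passed --log-type Tail, the JSON response includes the status code and a base64-encoded copy of the execution log, along these lines (the encoded log is truncated here):

{
    "StatusCode": 200,
    "LogResult": "U1RBUlQgUmVxdWVzdElk..."
}

The value returned by the function, the quoted string "hello" echoed back from key1, is written to outputfile.txt:

$ cat outputfile.txt
"hello"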

Returning to the AWS Console and remaining on the myFirstFunction page, click on Monitoring and you will be presented with some basic statistics about your function:

As you can see from the preceding graphs, there are details on how many times your function has been invoked, how long it takes, and also if there are any errors.

Clicking on View logs in CloudWatch will open a new tab which lists the log streams for myFirstFunction. Clicking on the name of the log stream will then take you to a page which gives you the results for each time the function has been invoked both as testing in the AWS Console and also from the command-line client:

Both the Monitoring page and logs are extremely useful when it comes to debugging your Lambda functions.

Microsoft Azure Functions

Next up, we are going to take a look at Microsoft's serverless offering, Azure Functions. Microsoft describes this service as:

"Azure Functions is a solution for easily running small pieces of code, or "functions," in the cloud. You can write just the code you need for the problem at hand, without worrying about a whole application or the infrastructure to run it."

Like Lambda, there are several ways your Function can be invoked. In this quick walkthrough, we will be deploying a Function which is called using an HTTP request.

Prerequisites 

You will need an Azure account to follow along with this example. If you don't have an account, you can sign up for a free account at https://azure.microsoft.com/:

At the time of writing, Microsoft is crediting all new accounts with $200 to spend on Azure services, and like AWS, several services have a free tier.

While you are credited with $200, you will still need to provide credit card details for verification purposes. For more information on the services and limits in the free tier, please see https://azure.microsoft.com/en-gb/free/pricing-offers/.

Creating a Function app

All of the work we are going to be doing to create our first Function app will be using the web-based control panel. Once you have your account, you should see something like the following page:

One thing you should note about the Microsoft Azure control panel is that it scrolls horizontally, so if you lose where you are on a page, you can typically find your way back to where you need to be by scrolling to the right.

As you can see from the preceding screenshot, there are quite a few options. To make a start creating your first Function, you should click on + New at the top of the left-hand side menu.

From here, you will be taken to the Azure Marketplace. Click on Compute and then in the list of featured marketplace items you should see Function App. Click on this and you will be taken to a form which asks for some basic information about the Function you want to create:

  • App name: Call this what you want; in my case I called it russ-test-version. This has to be a unique name and, if your desired App name has already been used by another user, you will receive a message that your chosen App name is not available.
  • Subscription: Choose the Azure subscription you would like your Function to be launched in.
  • Resource Group: This will be automatically populated as you type in the App name.
  • Hosting Plan: Leave this at the default option.
  • Location: Choose the region which is closest to you.
  • Storage: This will automatically be populated based on the App name you give, for our purpose leave Create New selected.
  • Pin to dashboard: Tick this as it will allow us to quickly find our Function once it has been created.

If you are not following along in your account, my completed form looks like the following screenshot:

Once you have filled out the form, click on the Create button at the bottom of the form and you will be taken back to your Dashboard. You will receive a notification that your Function is being deployed as you can see from the box at the right-hand side in the following screenshot:

Clicking on the square in the Dashboard or on the notification in the top menu (the bell icon with the 1 on it) will take you to an Overview page; here you can view the status of the deployment:

Once deployed, you should have an empty Function app ready for you to deploy your code into:

To deploy some test code, you need to click on the + icon next to Functions in the left-hand side menu; this will take you to the following page:

With Webhook + API and CSharp selected, click on Create this function; this will add the following code to your Function app:

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // parse query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    // Get request body
    dynamic data = await req.Content.ReadAsAsync<object>();

    // Set name to query string or body data
    name = name ?? data?.name;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string or in the request body")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}

This code simply reads in the variable name, which is passed in via the URL query string or the request body, and then returns it to the user as Hello <name>.

We can test this by clicking on the Run button at the top of the page. This will execute our Function as well as giving you the output and logs:

The logs for the test run look like this:

2017-09-09T15:28:08 Welcome, you are now connected to log-streaming service.
2017-09-09T15:29:07.145 Function started (Id=4db505c2-5a94-4ab4-8e12-c45d29e9cf9c)
2017-09-09T15:29:07.145 C# HTTP trigger function processed a request.
2017-09-09T15:29:07.176 Function completed (Success, Id=4db505c2-5a94-4ab4-8e12-c45d29e9cf9c, Duration=28ms)

You can also view more information on your Function app by clicking on Monitor in the inner left-hand side menu. As you can see from the following screenshot, we have details on how many times your Function has been called, as well as the status of each execution and the duration for each invocation:

For more detailed information on the invocation of your Function app, you can enable Azure Application Insights, and for more information on this service, please see https://azure.microsoft.com/en-gb/services/application-insights/.

Being able to test within the safety of the Azure Dashboard is all well and good, but how do you directly access your Function app?

If you click on HttpTriggerCSharp1, you will be taken back to your code; above the code block there is a button labelled Get function URL. Clicking on this will pop up an overlay box containing a URL. Copy this:

For me, the URL was:

https://russ-test-function.azurewebsites.net/api/HttpTriggerCSharp1?code=2kIZUVH8biwHjM3qzNYqwwaP6O6gPxSTHuybdNZaD36cq3HptD5OUw==

The preceding URL will no longer work as the Function has been removed; it has been provided for illustration purposes only, and you should replace it with your URL.

To interact with URLs on the command line, I am going to be using HTTPie, which is a command-line HTTP client. For more detail on HTTPie, see the project's homepage at https://httpie.org/.

Call that URL on the command line using HTTPie with the following command:

$ http "https://russ-test-function.azurewebsites.net/api/HttpTriggerCSharp1?code=2kIZUVH8biwHjM3qzNYqwwaP6O6gPxSTHuybdNZaD36cq3HptD5OUw=="

This gives us the following result:
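Because we have not passed a name, the Function returns the message from the code we saw earlier with a 400 status; using HTTPie it looks something like this (response headers trimmed):

HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8

"Please pass a name on the query string or in the request body"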

As you can see from what is returned, our Function app has returned the HttpStatusCode BadRequest message. This is because we are not passing the name variable. To do this, we need to update our command to:

$ http "https://russ-test-function.azurewebsites.net/api/HttpTriggerCSharp1?code=2kIZUVH8biwHjM3qzNYqwwaP6O6gPxSTHuybdNZaD36cq3HptD5OUw==&name=kubernetes_for_serverless_applications"

As you would expect, this returns the correct message:
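Again with the response headers trimmed, the result now comes back with a 200 status and our greeting:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

"Hello kubernetes_for_serverless_applications"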

You can also enter the URL in your browser and see the message:

The serverless toolkit

Before we finish this chapter, we are going to take a look at the serverless toolkit. This is an application that aims to provide a consistent experience when it comes to deploying your serverless functions across different cloud providers. You can find the service's homepage at https://serverless.com/.

As you can see from the home page, it supports both AWS and Microsoft Azure, as well as Google Cloud Platform and IBM OpenWhisk. You will also notice that there is a Sign Up button; click on this and follow the onscreen prompts to create your account.

Once signed up, you will receive some very simple instructions on how to install the tool and also deploy your first application; let's follow these now. First of all, we need to install the command-line tool by running:

$ npm install serverless -g

The installation will take a few minutes, and once it is installed you should be able to run:

$ serverless version

This will confirm the version that was installed by the previous command:

Now that the command-line tool is installed and we have confirmed that we can get the version number without any errors, we need to log in. To do this, run:

$ serverless login

This command will open a browser window and take you to a login page where you will need to select which account you wish to use:

As you can see in the preceding screenshot, it knows I last logged into serverless using my GitHub account, so clicking this will generate a Verification Code:

Pasting the code into your Terminal at the prompt and pressing Enter on your keyboard will then log you in:

Now that we are logged in, we can create our first project, which is going to be another hello-world application.

To launch our hello-world function in AWS, we must first create a folder to hold the artifacts created by the serverless toolkit and change to it; I created mine on my Desktop using:

$ mkdir ~/Desktop/Serverless
$ cd ~/Desktop/Serverless

To generate the files needed to launch our hello-world application, we need to run:

$ serverless create --template hello-world

This will return the following message:

Opening serverless.yml in my editor, I can see the following (I have removed the comments):

service: serverless-hello-world
provider:
  name: aws
  runtime: nodejs6.10
functions:
  helloWorld:
    handler: handler.helloWorld
    # The `events` block defines how to trigger the handler.helloWorld code
    events:
      - http:
          path: hello-world
          method: get
          cors: true

I updated the service to be russ-test-serverless-hello-world; you should choose something unique as well. Once I had saved my updated serverless.yml file, I ran:

$ serverless deploy

This, as you may have already guessed, deployed the hello-world application to AWS:

Access the endpoint URL using HTTPie:

$ http --body "https://5rwwylyo4k.execute-api.us-east-1.amazonaws.com/dev/hello-world"

This returns the following JSON:

{
    "input": {
        "body": null,
        "headers": {
            "Accept": "*/*",
            "Accept-Encoding": "gzip, deflate",
            "CloudFront-Forwarded-Proto": "https",
            "CloudFront-Is-Desktop-Viewer": "true",
            "CloudFront-Is-Mobile-Viewer": "false",
            "CloudFront-Is-SmartTV-Viewer": "false",
            "CloudFront-Is-Tablet-Viewer": "false",
            "CloudFront-Viewer-Country": "GB",
            "Host": "5rwwylyo4k.execute-api.us-east-1.amazonaws.com",
            "User-Agent": "HTTPie/0.9.9",
            "Via": "1.1 dd12e7e803f596deb3908675a4e017be.cloudfront.net (CloudFront)",
            "X-Amz-Cf-Id": "bBd_ChGfOA2lEBz2YQDPPawOYlHQKYpA-XSsYvVonXzYAypQFuuBJw==",
            "X-Amzn-Trace-Id": "Root=1-59b417ff-5139be7f77b5b7a152750cc3",
            "X-Forwarded-For": "109.154.205.250, 54.240.147.50",
            "X-Forwarded-Port": "443",
            "X-Forwarded-Proto": "https"
        },
        "httpMethod": "GET",
        "isBase64Encoded": false,
        "path": "/hello-world",
        "pathParameters": null,
        "queryStringParameters": null,
        "requestContext": {
            "accountId": "687011238589",
            "apiId": "5rwwylyo4k",
            "httpMethod": "GET",
            "identity": {
                "accessKey": null,
                "accountId": null,
                "apiKey": "",
                "caller": null,
                "cognitoAuthenticationProvider": null,
                "cognitoAuthenticationType": null,
                "cognitoIdentityId": null,
                "cognitoIdentityPoolId": null,
                "sourceIp": "109.154.205.250",
                "user": null,
                "userAgent": "HTTPie/0.9.9",
                "userArn": null
            },
            "path": "/dev/hello-world",
            "requestId": "b3248e19-957c-11e7-b373-8baee2f1651c",
            "resourceId": "zusllt",
            "resourcePath": "/hello-world",
            "stage": "dev"
        },
        "resource": "/hello-world",
        "stageVariables": null
    },
    "message": "Go Serverless v1.0! Your function executed successfully!"
}

Entering the endpoint URL in your browser (in my case, Safari) shows you the raw output:

Going to the URL mentioned at the very end of the serverless deploy command gives you an overview of the function you have deployed to Lambda using serverless:

Open the AWS Console by going to https://console.aws.amazon.com/, select Lambda from the Services menu, and then change to the region your function was launched in; this should show you your function:

At this point, you might scratch your head thinking, How was it launched in my account? I didn't provide any credentials! The serverless tool is designed to use the same credentials as the AWS CLI we installed before we launched our first Lambda function—these can be found at ~/.aws/credentials on your machine.
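If you have run aws configure as described earlier in the chapter, that file contains an INI-style default profile along these lines (the values shown are placeholder example keys):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY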

To remove the function, simply run:

$ serverless remove

And this will remove everything in your AWS account that the serverless toolkit has created.

For more information on how to use the serverless toolkit to launch an Azure Function, please see the quick-start guide which can be found at https://serverless.com/framework/docs/providers/azure/guide/quick-start/.

Problems solved by serverless and Functions as a Service

Even though we have only been launching the most basic applications so far, I hope you are starting to see how using serverless could help with the development of your applications.

Imagine you have a JavaScript application which is being hosted in an object store such as Amazon's S3 service. Your application could be written in, say, React (https://facebook.github.io/react/) or Angular (https://angular.io/), and both of these technologies allow you to load external data using JSON. This data can be requested and delivered using a serverless function; combining these technologies allows you to create an application that not only has no single point of failure, but also, when using public cloud offerings, is a true "you only pay for what you use" application.

As the serverless function is executed and then immediately terminated, you should not have to worry about where or how it is executed, just that it is. This means that your application, in theory, should be scalable and also more fault-tolerant than a more traditional server-based application.

For example, if something goes wrong when one of your functions is called, for instance it crashes or runs into resource issues, you know that the next time the function is called it will be launched afresh, so you don't need to worry about your code being executed on a server which is having issues.

Summary

In this chapter, we have taken a very quick look at what is meant by serverless. We have launched and interacted with serverless functions in AWS and Microsoft Azure, and we have used a third-party tool, which just happens to be called serverless, to create a serverless function in AWS.

You will have noticed that so far we haven't mentioned Kubernetes at all, which you may be thinking for a book entitled Kubernetes for Serverless Applications is a little strange. Don't worry though; in the next chapter we will be looking at Kubernetes in more detail and all will become clear.

Key benefits

  • Get hands-on experience in installing, configuring, and using services such as Kubeless, Funktion, OpenWhisk, and Fission
  • Learn how to launch Kubernetes both locally and in public clouds
  • Explore the differences between using services such as AWS Lambda and Azure Functions and running your own

Description

Kubernetes has established itself as the standard platform for container management, orchestration, and deployment. It has been adopted by companies such as Google, its original developers, and Microsoft as an integral part of their public cloud platforms, so that you can develop for Kubernetes and not worry about being locked into a single vendor. This book starts by introducing serverless functions. Then you will configure tools such as Minikube to run Kubernetes. Once you are up and running, you will install and configure Kubeless, your first step towards running Functions as a Service (FaaS) on Kubernetes. Then you will gradually move towards running Fission, a framework used for managing serverless functions on Kubernetes environments. Towards the end of the book, you will also work with Kubernetes functions on public and private clouds. By the end of this book, you will have mastered using Functions as a Service on Kubernetes environments.

Who is this book for?

If you are a DevOps engineer, cloud architect, or a stakeholder keen to learn about serverless functions in Kubernetes environments, then this book is for you.

What you will learn

  • Get a detailed analysis of serverless/Functions as a Service
  • Get hands-on with installing and running tasks in Kubernetes using Minikube
  • Install Kubeless locally and launch your first function
  • Launch Kubernetes in the cloud and move your applications between your local machine and your cloud cluster
  • Deploy applications on Kubernetes using Apache OpenWhisk
  • Explore topics such as Funktion and Fission installation on the cloud followed by launching applications
  • Monitor a serverless function and master security best practices and Kubernetes use cases

Product Details

Publication date : Jan 18, 2018
Length: 318 pages
Edition : 1st
Language : English
ISBN-13 : 9781788626125

Table of Contents

12 Chapters
The Serverless Landscape
An Introduction to Kubernetes
Installing Kubernetes Locally
Introducing Kubeless Functioning
Using Funktion for Serverless Applications
Installing Kubernetes in the Cloud
Apache OpenWhisk and Kubernetes
Launching Applications Using Fission
Looking at OpenFaaS
Serverless Considerations
Running Serverless Workloads
Other Books You May Enjoy

