Building Serverless Architectures

Getting Started with Serverless

If you are reading this book, you have probably already heard the term serverless on more than one occasion. As with every buzzword, you might have read more than one definition of it. I define serverless computing as a new and efficient software development approach that abstracts the infrastructure away from the functionality itself, letting developers focus on their business instead of the infrastructure constraints.

I remember my team and I struggling with these infrastructure constraints at one of my previous ventures in the mid-2000s. Born as a pet project during our college years, Instela had suddenly grown from hundreds of visits per day to thousands, and we were hosting it on a shared hosting provider. Our website was eating all the CPU available in those poor Xeon servers, and our hosting provider unilaterally decided to shut it down to keep the neighboring websites on the same server up. The local plumber and the coffee shop were online, and we were homeless in the cyber world. We had no remedy other than running out to buy a cheap desktop computer, making it our first server, and bringing Instela up again. Our visitor count kept increasing day by day, our ATX server was resetting itself a couple of times per day because of overheating, and we ended up buying our first DELL PowerEdge box, which was like a space station for us back in 2005.

Everything was cool in the beginning, but as more visitors came, our site started to respond more and more slowly. Sometimes it was rather fast, and sometimes it was as slow as molasses: sometimes there was viral content attracting thousands of people, and sometimes only 100 people were online. For our data center, it was exactly the same: they were charging us a fixed price and enjoying it. When we needed a new server, we had to spend at least a week asking whether the local dealer had one in stock, waiting for the delivery, and installing the network and the operating system. And what if one of the machines had a hardware issue? We had to wait for the technician and handle the traffic with one machine less. It was a real pain, and there was no other way to run a web platform.

Virtual servers had already existed since the early 2000s, but one can say that real cloud computing started in 2006 with the launch of AWS EC2. It is worth noting that when this service was launched, it offered very limited options, and for many companies it was not a production-ready solution.

Nowadays, this horror story is just a memory for many companies. Public clouds provide us with dedicated compute power carved out of their large machine pools. Cloud computing introduced many new concepts and drastically changed how we build and deploy software. We do not have to worry about maintaining an on-premise SAN that we mount via NFS: there are S3, Azure Blob Storage, and Google Cloud Storage, which give us the space we really need. We do not monitor free space or repair the storage when it breaks; within the SLA levels (99.999999999% durability for AWS S3 [1]), you always know that your storage engine is just there, working. Do you need a queue service such as RabbitMQ? You have AWS Simple Queue Service or Windows Azure Queue Service. Do you need to implement search functionality and are planning to deploy an Elasticsearch cluster? There is a managed one: CloudSearch. AWS offers a managed service even if you are developing a platform that needs to transcode video: you upload your jobs and get the results.

So far, we have spoken about the supporting services that an application of any size might need. Leveraging the managed service offerings of public cloud providers, we have become able to shut down some of the servers we previously needed in an on-premise infrastructure. We might say that this is the first part of serverless architecture; some authors call this type of service Backend as a Service, or BaaS. However, so far, our software is still running on virtual machines, called instances on AWS and Google Cloud Platform or VMs on Windows Azure. We have to prepare virtual machine images with our application code, spin up instances from them, and configure the auto-scaling rules for cost optimization and scalability. More importantly, we have to pay for these servers by the hour, even when we do not really use the reserved compute capacity.

As an alternative to this paradigm, cloud providers came up with the Functions as a Service (FaaS) idea. With FaaS, the bulk of the business logic is still written by the application developer, but it is deployed to fully managed, ephemeral containers that are live only during the invocation of the functions. These functions respond to specific events. For example, the application developer can author a function that takes binary image data as input and returns its compressed version. This function can be deployed as an independent unit of work and invoked with image data to get the compressed version. It would run in an isolated container managed by the cloud provider itself, and the application developer would only be concerned with the input the function receives and the output it returns. Obviously, this function alone does not make much sense, but cloud providers also offer a mechanism to make these small functions respond to specific cloud events. For instance, you can configure a function to be invoked automatically whenever a new file is added to an S3 bucket. In this way, the function is notified whenever a user uploads a new image and can save a compressed version of it to another bucket. You can deploy another function that returns plain JSON objects and configure it to respond to HTTP requests via API Gateway. You would then have a fully scalable web service that you pay for as you go.

Sounds good? Then we warmly welcome you to the serverless computing world!

For a good theoretical study on serverless computing, I recommend that you read Mike Roberts' Serverless Architectures. He paints a big picture of the topic and carefully analyzes the advantages and drawbacks of a serverless approach. You can find information about this article in the bibliography section.

In this book, we will learn how to build a midsize serverless application with AWS Lambda and the Java language. Although Google Cloud Platform and Windows Azure offer similar functionality, I picked AWS Lambda because, at the time of writing, AWS is the provider offering the most mature solution. I picked Java because, despite its power and popularity, I believe Java has always been underestimated in the serverless computing community. In my opinion, this is because AWS started by offering JavaScript, so the trend began with that language and carried on with it. However, AWS Lambda has native support for Java, offering developers a fully functional Java 8 VM. In this book, we will look at how to apply the most common techniques from the Java world, such as Dependency Injection, and try to apply OOP design patterns to our functions. Unlike their JavaScript equivalents, our functions will be more sophisticated, and we will create a great build system thanks to Gradle, a Maven-like build tool whose Groovy-based DSL lets you compose sophisticated build configurations.

In this journey, we will begin with the following:

  • We will create a fully serverless forum application on the AWS platform.
  • We will use Java 8 as the language. Google's Guice will be our dependency injection framework.
  • We will use AWS CloudFormation to deploy our application. We will write small Gradle tasks that will help us achieve a painless deployment process. Gradle will also manage our dependencies.
CloudFormation is AWS's tool for automated provisioning of cloud resources. With CloudFormation, you can define your whole cloud platform in a single JSON file, without having to deal with the CLI or the AWS Console, and deploy your application with one command to any AWS account. It is a very powerful tool, and I advise against using any other method to build AWS-based applications. With CloudFormation, you have a solid definition of your application that works everywhere in the same way. Besides the benefits of such solidity in the production environment, CloudFormation also lets us define our infrastructure as code, so we can leverage source control and track the evolution of our infrastructure along with our code. Therefore, in this book you will not find CLI commands or AWS Console screenshots for building the application, but CloudFormation template files.
  • We will create only REST endpoints and test them using the REST-assured testing tool. We will not create any frontend, as that is out of the scope of this book. For the REST endpoints, we will use API Gateway. For some backend services, we will also develop standalone Lambda functions that respond to cloud events, such as S3 events.
  • We will use AWS S3 to store static files.
  • We will use DynamoDB as the data layer. For the search feature, we will learn how to use AWS CloudSearch. We will use SQS (Simple Queue Service) and SNS (Simple Notification Service) for some backend services.
  • You can use any IDE you want. We will operate on the CLI, mostly with Gradle commands, which makes the project totally IDE-agnostic.

You may think that there are many unfamiliar terms in this list, especially if you are not familiar with the AWS ecosystem. No worries! We expect you to be familiar only with the Java language and common patterns such as Dependency Injection. Knowledge of Gradle is a plus but not mandatory. We do not expect you to know the services that AWS offers; we will cover most of the details, referring to the relevant documentation whenever needed, and after completing this book, you will know what these abbreviations mean. You are, of course, free to go to the AWS documentation and learn what those services offer.

The forum application we will implement will be a very basic but deliberately over-engineered application. It will include a REST API through which users can register, create topics and posts under existing topics, update their profiles, and perform some other operations. The application will have some supporting services, such as sending mobile notifications to users when someone replies to their posts, an image resizer, and so on. As it is a very typical web application, and we assume that the audience of this book is already familiar with the business requirements of such an application, we omit the definition of all the subsystems at this stage. Instead, we will adopt an iterative agile methodology and define the specifications of these subsystems when we need them in the upcoming chapters.

In this chapter, we will cover the following topics:

  • A brief theoretical introduction to AWS Lambda
  • Setting up an AWS account
  • Creating the Gradle project for our project and configuring dependencies
  • Developing the base Lambda handler class that will be shared with all Lambda functions in the future
  • Testing this implementation locally using JUnit
  • Creating and deploying a basic Lambda function

Introducing AWS Lambda

As stated earlier, AWS Lambda is the core AWS offering we will be busy with throughout this book. While other services offer us important functionalities such as data storage, message queues, search, and so on, AWS Lambda is the glue that combines all this with our business logic.

In the simplest words, AWS Lambda is a computing service where we can upload our code, create independent functions, and tie them to specific events in the cloud infrastructure. AWS manages all of the infrastructure where our functions run and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring, and logging. When our function is in high demand, AWS automatically increases the underlying machine count so that the function keeps performing consistently. AWS Lambda natively supports the JavaScript (Node.js), Java, and Python languages.

You can write AWS Lambda functions in one of the languages supported natively. Regardless of the chosen language, there is a common pattern that includes the following core concepts:

  • Handler: The handler is the method that the Lambda runtime calls whenever your function is invoked. You configure the name of this method when you create your Lambda function. When your function is invoked, the Lambda runtime injects the event data into this method. After this entry point, your method can call other methods in your deployment package. In Java, the class that includes the handler method must implement a specific interface provided by the AWS Lambda runtime dependency. We will look at the details later in this chapter.
  • Context: A special context object is also passed to the handler method. Using this object, you can access some AWS Lambda runtime values, such as the request ID, the execution time remaining before AWS Lambda terminates your Lambda function, and so on (see the sketch after this list).
  • Event: Events are JSON payloads that the Lambda runtime injects into your Lambda function upon execution. You can invoke a Lambda function from many sources, such as HTTP requests, messaging systems, and so on. The structure of the JSON differs for each execution type. In the Node.js environment, events are passed to handler functions in string format. In the Java runtime, you have two options: receive the event as an InputStream and parse it yourself, or create a POJO that can be deserialized from the expected JSON. In the latter case, the Lambda runtime uses the Jackson library to convert the event into that POJO. In this book, we will create our own deserializer, because the default Jackson configuration does not meet our requirements.
  • Logging: Within your Lambda function, you can log to CloudWatch, the built-in logging service offered by AWS. In this book, we will use log4j to generate log entries and leverage the custom log4j appender offered by AWS to write our logs to CloudWatch.
  • Exceptions: After a successful execution, Lambda functions return a result in JSON format. It is also possible to signal an execution error by throwing a Java exception. We will make heavy use of exceptions to tell the AWS runtime about failed executions, which will be especially useful for returning different HTTP codes in our REST API.
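
To make these core concepts concrete, here is a minimal, hypothetical handler sketch. It is not part of our project code: it uses the built-in RequestHandler interface from aws-lambda-java-core, and the GreetingEvent POJO with its name field is invented purely for illustration:

package com.serverlessbook.lambda.example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class GreetingHandler implements RequestHandler<GreetingHandler.GreetingEvent, String> {

    // Event POJO deserialized from the incoming JSON payload, for example {"name":"World"}
    public static class GreetingEvent {
        public String name;
    }

    @Override
    public String handleRequest(GreetingEvent event, Context context) {
        // The Context object exposes runtime metadata and a basic logger
        context.getLogger().log("Request ID: " + context.getAwsRequestId());
        context.getLogger().log("Remaining time: " + context.getRemainingTimeInMillis() + " ms");
        // Throwing an exception would mark this invocation as failed
        if (event.name == null) {
            throw new IllegalArgumentException("name is required");
        }
        return "Hello, " + event.name + "!";
    }
}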

AWS Lambda functions can be invoked manually or in response to different events. They are normal functions: you give them an event object and you get a result back. During execution, Lambda functions are totally agnostic about who is calling them. Invoking them manually is useful when we test our functions with different types of input, and we will actually do that when we test our functions by hand. However, the real power of Lambda functions appears when their invocation is out of our control. In this book, we will configure our functions to respond to different cloud events. Here are some examples:

  • REST Endpoints: We will develop Lambda functions that are invoked by HTTP requests via API Gateway. This service accepts HTTP requests, converts the HTTP request parameters into a Lambda event that our function understands, and finally converts the output of the Lambda function into the desired JSON output. We will create three to four endpoints using this technology and have a fully scalable API for our application.
  • Resizing Images: For many use cases, we do not even need to develop a REST API. In this scenario, our users will upload their profile photos to AWS S3. We will not write a special endpoint for that; instead, the client application will use AWS Cognito to temporarily obtain IAM credentials that only allow uploading files to the S3 bucket. Once an image is uploaded, S3 will invoke our Lambda function, and our function will resize the image and save it to the resized-images bucket. From that point, users will be able to access the resized images via the CloudFront CDN. In other words, we will have built an image service without developing any REST API endpoints.

In the following chapters, you will understand much better how Lambda functions work with practical examples.

After this introduction, it is time to get our hands dirty and write some code.

Preparing the environment

Before we start digging into our project, we have to have an AWS account and the AWS CLI installed on our system. Even if you already have an AWS account, it is recommended that you open a new one because every new AWS account will come with a free tier available for 12 months following your AWS sign-up date. With the free tier, you will not have to pay for most of the resources we will install throughout the book. To set up a new account, perform the following steps:

  1. Open http://aws.amazon.com/ and then choose Create an AWS Account.
  2. Follow the online instructions.

Once you create your account, you will have to create security credentials for yourself. IAM (Identity and Access Management) is the service where you manage the security configuration of your AWS account. Here, you can create more than one user and grant them granular access to specific cloud resources. For every user, you can create up to two security credentials that can be used to access the AWS APIs via the different SDKs or the AWS CLI tool.

When you sign up for a new AWS account, a root user is created, but using this account's security credentials should be avoided. The root user has unlimited access to your account, and if you accidentally expose its security credentials to the public domain, such as in a public Git repository, your account can be compromised. For the sake of simplicity, we will create a new IAM user with administrator access.

The Internet is full of stories of stolen AWS keys. It is known that some malicious software scans every commit published to GitHub, and when it detects AWS credentials accidentally pushed to a public repository, it spins up lots of virtual machines using those credentials to mine Bitcoins or for other purposes. While the attackers make money with that, the owner of the AWS account faces an excessive bill. Therefore, you should be very protective of your access keys: do not share them with anyone, and restrict the access rights of your AWS users using IAM policies. The credentials of the user we create here will not be hardcoded in any code and will merely be used to configure the AWS CLI. Even though the risk of granting administrator access to this user is relatively low in this case, we recommend that you be aware of the potential issues.
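
As an illustration of such a restriction, a minimal IAM policy that grants nothing but upload access to a single bucket would look like the following (the bucket name here is hypothetical):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}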

To create the user, perform the following steps:

  1. Navigate to https://console.aws.amazon.com/iam.
  2. In the navigation pane, choose Users and then choose Create New Users.
  3. Type the user name for the user to be created. You can create up to five users at the same time, but we need only one for now.
  4. Make sure that the Generate an access key for each user checkbox is selected.
  5. Click on Create.
  6. On the next screen, you will be given the security credentials of the user you just created. This is the only opportunity to view the credentials. If you do not save them, you will need to create new access keys for the user. That's why it's important to save the Access Key ID and Secret Access Key now.

The user you just created does not have any access to AWS resources yet; AWS users gain access rights through the IAM policies attached to them. Now we will attach the AdministratorAccess policy to this user. To accomplish that, perform the following steps:

  1. In the Users section, click on the user that you created.
  2. On the Permissions tab, click on the Attach Policy button.
  3. Check the AdministratorAccess policy and click on the Attach Policy button in the bottom-right section.

We have completed creating a user with administrator rights.

Installing AWS CLI

We are now going to install the AWS CLI (Command Line Interface), a tool for managing your AWS services. It is a very powerful tool that can control all AWS services, and it is the preferred method for programmatic access to AWS APIs via the command line. Although we will use Gradle to control our deployment and cloud resource creation process, it is useful to have the AWS CLI installed on our system.

Prerequisites

  • Linux, OS X, or Unix
  • Python 2 version 2.6.5+ or Python 3 version 3.3+

For Mac OS X and Linux, these three commands will install the AWS CLI on your system:

    $ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o
"awscli-bundle.zip"
$ unzip awscli-bundle.zip $ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

Once you have the AWS CLI installed, you can configure it with the security credentials you obtained previously. Type aws configure and follow the instructions. After you complete this step, your credentials will be saved in ~/.aws/credentials, and the different programming platform SDKs as well as the AWS CLI tool will use them when they invoke AWS APIs.
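
For reference, the saved default profile should look roughly like this (the values below are the well-known example keys from the AWS documentation, not real credentials):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY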

Gradle

We must also have Gradle installed on our system. Gradle is a modern build tool that gained popularity with Android. It uses a Groovy-based DSL instead of XML files and mixes declarative and imperative styles of build configuration. With Gradle, you can define dependencies and project properties, and you can also write functions. We will leverage Gradle to build our deployment system, so with only one command, we will be able to deploy all our software to the cloud.

Throughout the book, we will use the Gradle wrapper, which locks the Gradle version for the project, thus providing consistency across different teams. However, in order to run the wrapper task, which will create the Gradle wrapper files in our project, we have to have at least one Gradle version installed locally on our system.

If you do not have it already, execute the following:

    $ curl -s https://get.sdkman.io | bash  

Then, open a new terminal and type this:

    $ sdk install gradle 2.14  

This will install the Gradle 2.14 version.

Creating the project

Finally, we can start creating our project. We will create our project in our home directory, so we can start with these commands:

    $ mkdir -p ~/serverlessbook
    $ cd ~/serverlessbook 

Once we create the working directory, we can create the build.gradle file, which will be the main build file of our project:

    $ touch build.gradle

We can start with the Gradle wrapper task, which will generate Gradle files in our project. Write this block into the build.gradle file:

task wrapper(type: Wrapper) { 
  gradleVersion = '2.14' 
} 

And then execute the command:

    $ gradle wrapper 

This will create the Gradle wrapper files in our project. This means that in the root directory of the project, ./gradlew can be called instead of the local gradle. It is a nice feature of Gradle: let's assume that you distributed your project to other team members and you are not sure whether they have Gradle installed on their systems (or which version, if they do). With the Gradle wrapper, you make sure that everybody who checks out the project will run Gradle 2.14 when they run ./gradlew. If they do not have any Gradle version on their system, the script will download it.
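
If everything went well, the root directory of the project should now contain roughly the following files:

.
├── build.gradle
├── gradlew
├── gradlew.bat
└── gradle
    └── wrapper
        ├── gradle-wrapper.jar
        └── gradle-wrapper.properties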

We can now proceed to add the declarations needed for all projects. Add this code block to the build.gradle file:

// allprojects means this configuration
// will be inherited by the root project itself and subprojects
allprojects {
   // Group Id of the project
   group 'com.serverlessbook'
   // Version of the project
   version '1.0'
   // The Gradle Java plugin, needed for Java support
   apply plugin: 'java'
   // We will be using Java 8, hence 1.8
   sourceCompatibility = 1.8
}

With this code block, we tell Gradle that we are building a Java 8 project with the group ID com.serverlessbook and version 1.0.

We also need to create the settings.gradle file, which will include some generic settings about the project and, later, the subproject names. In the root project, create a new file with the name settings.gradle and type this line:

rootProject.name = 'forum' 

Actually, this line is optional. When the root project is not explicitly given a name, Gradle assigns the name of the directory in which the project is placed as the project name. For consistency, however, it is a good idea to name the project explicitly, because other developers may check out our code into a directory with a different name, and we would not want our project to take on another name in that case.

In our Gradle build script, we get access to important values about the project with variables such as project.name and project.version.

Now we should add repositories to fetch the dependencies for the project itself and the build script. In order to accomplish this, first, we have to add this block to the build.gradle file:

allprojects {
   repositories {
      mavenCentral()
      jcenter()
      maven {
         url "https://jitpack.io"
      }
   }
}

Here, we defined Maven Central, Bintray JCenter, and JitPack, the three most popular repositories. We need the same repositories for the build script, so we add the following block to the same file:

buildscript {
   repositories {
      mavenCentral()
      jcenter()
      maven {
         url "https://jitpack.io"
      }
   }
}

Repositories and dependencies defined in buildscript are used only by the Gradle build script itself. We will make extensive use of build script dependencies because our Gradle script will manage the deployment process; therefore, it is important to have these repositories for the build script as well.

Implementing the Lambda Dependency

In the previous section, we already finished the generic Gradle setup. In this section, we will learn how to write Lambda functions and create the very core part of our project that will be the entry point for all our Lambda functions.

In our project, we will have more than one AWS Lambda function, one for each REST endpoint and several more for auxiliary services. These functions will share some common code and dependencies; therefore, it is convenient to create a subproject under our root project. In Gradle, subprojects act like different projects, but they can inherit the build configuration from their root project. In any case, these projects will be compiled independently and produce separate JAR files in their respective build directories.

In our project structure, one subproject will include the common code we will need for every single Lambda function, and this project will be required as a dependency by the other subprojects that implement the Lambda functions. As a naming convention, the core Lambda subproject will be called lambda, while the individual Lambda functions to be deployed will be named with the lambda- prefix.

We can start implementing this core AWS Lambda subproject by creating a new directory with that name under our root directory:

    $ mkdir lambda  

Then, let's create a new build.gradle file for the newly created subproject:

    $ touch lambda/build.gradle

By default, Gradle will not recognize the new subproject just because we created a new directory under the root directory. To make Gradle recognize it as a subproject, we must add a new include directive to the settings.gradle file. This command will add the new line to settings.gradle:

    $ echo $"include 'lambda'" >> settings.gradle

After this point, our subproject can inherit the directives from the root project, so we will not have to repeat most of them.

Now we can define the required dependencies for our main Lambda library. At this point, we will need only the aws-lambda-java-core and jackson-databind packages. While the former is the standard AWS library for Lambda functions, the latter is used for JSON serialization and deserialization purposes, which we will be using heavily. In order to add these dependencies, just add these lines in the lambda/build.gradle file:

dependencies {
   compile 'com.amazonaws:aws-lambda-java-core:1.1.0'
   compile 'com.fasterxml.jackson.core:jackson-databind:2.6.+'
}

Previously, we mentioned that AWS Lambda invokes a specific method for every Lambda function to inject the event data and accepts this method's response as the Lambda response. To determine which method to invoke, AWS Lambda leverages interfaces. aws-lambda-java-core includes the RequestStreamHandler interface in the com.amazonaws.services.lambda.runtime package. In our base Lambda package, we will create a class that implements this interface.
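
For reference, this interface is essentially the following (simplified from the library source):

public interface RequestStreamHandler {
    void handleRequest(InputStream input, OutputStream output, Context context) throws IOException;
}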

Now let's create our first package and implement the LambdaHandler<I, O> class inside it:

    $ mkdir -p lambda/src/main/java/com/serverlessbook/lambda
    $ touch lambda/src/main/java/com/serverlessbook/lambda/LambdaHandler.java

Let's start implementing our class:

package com.serverlessbook.lambda; 

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.reflect.ParameterizedType;

public abstract class LambdaHandler<I, O> implements RequestStreamHandler {

    @Override
    public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    }

    public abstract O handleRequest(I input, Context context);
}

As you may have noted, this class uses generics. The abstract handleRequest method, implemented by inheriting classes, is expected to accept one POJO (Plain Old Java Object) and return another POJO. The overridden handleRequest method, on the other hand, receives the AWS Lambda event data as an InputStream and must write the output JSON to the given OutputStream. Our base LambdaHandler class will therefore implement helper methods that deserialize the incoming JSON into the input POJO and serialize the output POJO back to JSON. The I and O type references are the key point here: using this information, our base class will know which POJO classes to use when it carries out the transformation.

If you have ever read the AWS Lambda documentation, you might have seen the RequestHandler class in the AWS Lambda library, which does exactly what we are about to do in our base class. However, Lambda's built-in JSON serialization does not meet the requirements of our project because it does not support the advanced features of the Jackson JSON library. That's why we are implementing our own JSON serializer. If you are building a simple Lambda function that does not require these advanced options, you can check out https://docs.aws.amazon.com/lambda/latest/dg/java-handler-io-type-pojo.html and use the built-in serializer.

Before we go on implementing the base Lambda handler, I suggest that you take the TDD (Test-Driven Development) approach and write a test class for the planned implementation. Having the test class will better explain which kind of implementation we need and will draw a clear picture of the next step.

Before we start implementing the test, we first have to add JUnit as a dependency to our project. Open build.gradle in the root project and add these lines at the end:

allprojects { 
  dependencies { 
    testCompile group: 'junit', name: 'junit', version: '4.11' 
  } 
} 

Then, let's create our first test file:

    $ mkdir -p lambda/src/test/java/com/serverlessbook/lambda
    $ touch lambda/src/test/java/com/serverlessbook/lambda/LambdaHandlerTest.java

We can then start implementing it by writing the following code in the LambdaHandlerTest file we've just created. First of all, inside the test class, we will create two stub POJOs and a TestLambdaHandler class to run the test against:

public class LambdaHandlerTest {
  protected static class TestInput {
    public String value;
  }

  protected static class TestOutput {
    public String value;
  }

  protected static class TestLambdaHandler extends LambdaHandler<TestInput, TestOutput> {
    @Override
    public TestOutput handleRequest(TestInput input, Context context) {
      TestOutput testOutput = new TestOutput();
      testOutput.value = input.value;
      return testOutput;
    }
  }
}

Here, we have the sample TestInput and TestOutput classes, which are simple POJO classes with one field each, and a TestLambdaHandler class that extends the LambdaHandler class with type references to these POJO classes. As you may have noted, the stub class does not do much: it simply returns a TestOutput object carrying the same value it receives.

Finally, we can add the test method, which will emulate the AWS Lambda runtime exactly and carry out a black-box test of our TestLambdaHandler class:

@Test
public void handleRequest() throws Exception {
   String jsonInputAndExpectedOutput = "{\"value\":\"testValue\"}";
   InputStream exampleInputStream = new ByteArrayInputStream(
         jsonInputAndExpectedOutput.getBytes(StandardCharsets.UTF_8));
   OutputStream exampleOutputStream = new OutputStream() {
      private final StringBuilder stringBuilder = new StringBuilder();

      @Override
      public void write(int b) {
         stringBuilder.append((char) b);
      }

      @Override
      public String toString() {
         return stringBuilder.toString();
      }
   };
   TestLambdaHandler lambdaHandler = new TestLambdaHandler();
   lambdaHandler.handleRequest(exampleInputStream, exampleOutputStream, null);
   assertEquals(jsonInputAndExpectedOutput, exampleOutputStream.toString());
}

To run the test, we can execute this command:

    $ ./gradlew test

Once you run the command, you will see that the test fails. It is normal for our test to fail, because we have not yet completed the implementation of our LambdaHandler class, and this is how Test-Driven Development works: first write the test, then implement the code until the test goes green.

I think it is time to move on to the implementation. Open the LambdaHandler class again, add a field of Jackson's ObjectMapper type, and create the default constructor that initializes this object. You can add the following code to the beginning of the class:

final ObjectMapper mapper; 
 
protected LambdaHandler() { 
    mapper = new ObjectMapper(); 
} 

AWS Lambda does not create a new object from the handler class for every request. Instead, it creates an instance of the class for the first request (the so-called 'heat up' stage) and reuses the same instance for subsequent requests. This object stays in memory for about 20 minutes if no subsequent request arrives for that Lambda function. It is good to know about this undocumented behavior because it means we can cache objects across requests using object fields, as we do here with ObjectMapper: it is not created anew for every request but is 'cached' in memory. However, you should think of the handler object like a servlet and pay attention to thread safety before deciding to use object fields.
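
A tiny, hypothetical handler (not part of our project code) can demonstrate this reuse: deployed and invoked repeatedly, it keeps counting as long as the same container serves the requests.

package com.serverlessbook.lambda.example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.concurrent.atomic.AtomicInteger;

public class InvocationCounter implements RequestHandler<Object, Integer> {

    // Instance state survives between invocations while the container is warm
    private final AtomicInteger invocationCount = new AtomicInteger();

    @Override
    public Integer handleRequest(Object input, Context context) {
        // Returns 1 on a cold start, then 2, 3, ... on subsequent warm invocations
        return invocationCount.incrementAndGet();
    }
}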

Now we need helper methods in the handler for serialization and deserialization. First, we need a method to get the Class object for the I type reference:

@SuppressWarnings("unchecked") 
private Class<I> getInputType() { 
  return (Class<I>) ((ParameterizedType)
getClass().getGenericSuperclass()).getActualTypeArguments()[0]; }

Then we can add the deserializer and serializer methods:

private I deserializeEventJson(InputStream inputStream, Class<I> clazz) throws IOException {
   return mapper.readerFor(clazz).readValue(inputStream);
}

private void serializeOutput(OutputStream outputStream, O output) throws IOException {
   mapper.writer().writeValue(outputStream, output);
}

Finally, we can implement the handler method:

@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
   I inputObject = deserializeEventJson(input, getInputType());
   O handlerResult = handleRequest(inputObject, context);
   serializeOutput(output, handlerResult);
}

It seems we are good to go. Let's run the test again:

    $ ./gradlew test 

Congratulations! We completed an important step and built the base class for our Lambda functions.

Hello Lambda!

We are now ready to implement our first Lambda function, which we will simply upload to the cloud via the AWS CLI and invoke manually.

First, we have to create a new subproject, as we did earlier. This time, the subproject will be called lambda-test. We can easily do that with these commands:

    $ mkdir -p lambda-test/src/main/java/com/serverlessbook/lambda/test
    $ echo $"include 'lambda-test'" >> settings.gradle
    $ touch lambda-test/src/main/java/com/serverlessbook/lambda/test/Handler.java

We can create a blank class in Handler.java like this:

package com.serverlessbook.lambda.test; 
public class Handler {} 

Note that we've already chosen a naming convention for packages: while our base Lambda package sits in the com.serverlessbook.lambda package, individual Lambda functions live in packages named with the com.serverlessbook.lambda.{function-name} format. We will also call the handler classes Handler, because it sounds perfect in English: Handler extends LambdaHandler. This naming convention is, of course, up to you and your team, but it is convenient to keep things organized.

If you are already familiar with the Gradle build mechanism, you might have realized that before we proceed to implement the Lambda handler function, we have to add the lambda subproject to lambda-test as a dependency, and that is a very valid point. The easiest way to do that would be to create a build.gradle file for the lambda-test subproject, add the dependency in its dependencies {} block, and move on. On the other hand, we know that our project will include more than one Lambda function and that all of them will share the same build configuration. Putting this configuration in a central location is a very good idea for clean organization and maintainability. Fortunately, Gradle is a very powerful tool that allows such scenarios: we can create a build configuration block in our root project and apply it only to the subprojects whose names start with lambda-, in accordance with our subproject naming convention. Let's edit the root build.gradle and add this block to the end of the file:

configure(subprojects.findAll()) {
  if (it.name.startsWith("lambda-")) {
  }
}

This tells Gradle to apply the configuration only to the Lambda projects. Inside this block, we will later add more configuration, but for now, we can start with the most important dependency and edit the block to look like this:

configure(subprojects.findAll()) { 
  if (it.name.startsWith("lambda-")) { 
    dependencies { 
      compile project(':lambda') 
    } 
  } 
} 

In this step, we have to add another important build configuration: the Shadow plugin. The Shadow plugin creates an uber-JAR (also known as a fat JAR, or a JAR with dependencies), which is the format AWS Lambda requires. After each build phase, this plugin will bundle all the dependencies along with the project's own classes into a second, bigger JAR file, which will be our deployment package for AWS Lambda. To install this plugin, we first have to edit the buildscript configuration of the root build.gradle file. After editing, the buildscript section should look like this:

buildscript { 
  repositories { 
    mavenCentral() 
    jcenter() 
    maven { 
      url "https://jitpack.io" 
    } 
  } 
 
  dependencies { 
    classpath "com.github.jengelman.gradle.plugins:shadow:1.2.3" 
  } 
} 

We now have to apply the plugin to all the Lambda functions. Let's add two lines to the Lambda subprojects' configuration, so that the final version looks like this:

configure(subprojects.findAll()) { 
  if (it.name.startsWith("lambda-")) { 
     dependencies { 
        compile project(':lambda') 
     } 
 
     apply plugin: "com.github.johnrengelman.shadow" 
     build.finalizedBy shadowJar 
  } 
} 

The first line applies the Shadow plugin, which adds the shadowJar task to every lambda- subproject. The second directive ensures that the shadowJar task is automatically executed after every build task, so that an uber-JAR is placed in the build directory.

You can try our basic build configuration by running this command in the root directory:

    $ ./gradlew build

You can see the uber-JAR file lambda-test-1.0-all.jar in the lambda-test/build/libs directory.
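
If you are curious, you can verify that the dependencies were merged into the archive using the JDK's jar tool (the exact listing will vary with dependency versions):

    $ jar tf lambda-test/build/libs/lambda-test-1.0-all.jar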

Now we are going to implement the handler function with very basic functionality, similar to what we did previously to test the base handler. For the sake of simplicity, we will define the input and output classes as static inner classes, although this is not the recommended way of creating classes in Java. Now open the Handler class and edit it like this:

package com.serverlessbook.lambda.test; 
 
import com.amazonaws.services.lambda.runtime.Context; 
import com.serverlessbook.lambda.LambdaHandler; 
 
public class Handler extends LambdaHandler<Handler.TestInput, Handler.TestOutput> { 
    static class TestInput { 
        public String value; 
    } 
    static class TestOutput { 
        public String value; 
    } 
    @Override 
    public TestOutput handleRequest(TestInput input, Context context) { 
        TestOutput testOutput = new TestOutput(); 
        testOutput.value = input.value; 
        return testOutput; 
    } 
} 

That's it; we now have a very basic Lambda function, ready to be deployed to the cloud. In the next section, we will deploy and run it on the AWS Lambda runtime.

Deploying to the Cloud

Approaching the end of this chapter, we have one last step: deploying our code to the cloud. In the next chapters, we will learn how to use CloudFormation for a production-ready deployment process. However, nothing prevents us from using the CLI to play a bit with Lambda at this stage.

Previously, we mentioned that AWS resources are protected by IAM policies, and we created a user and attached a policy to it. IAM has another entity type called a role. Roles are very similar to users: they are also identities and can access resources allowed by the policies attached to them. However, while a user is associated with one person, a role can be assumed by whoever needs it. Lambda functions use roles to access other AWS resources: every Lambda function should be associated with a role (the execution role), and the function can call any resource that the policies attached to that role allow.

In the following chapters, while we create our CloudFormation stack, we will create more advanced role definitions. However, at this stage, our test Lambda function does not need to access any AWS resources, so a basic role with minimal access rights will be sufficient to run the example. In this section, we will create an IAM role using the following predefined role type and access policy:

  • The AWS service role of the AWS Lambda type. This role grants AWS Lambda permission to assume the role.
  • The AWSLambdaBasicExecutionRole access policy that you attach to the role. This managed policy grants permissions for Amazon CloudWatch actions that your Lambda function needs for logging and monitoring.
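
For reference, choosing the AWS Lambda service role type attaches a trust policy to the role that allows the Lambda service to assume it; the generated policy should look like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}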

To create the IAM role:

  1. Sign in to the Identity and Access Management (IAM) console at https://console.aws.amazon.com/iam/.
  2. In the navigation pane, choose Roles and then choose Create New Role.
  3. Enter a role name, say, lambda-execution-role, and then choose Next Step.
  4. On the next screen, select AWS Lambda in the AWS Service Roles section.
  5. In Attach Policy, choose AWSLambdaBasicExecutionRole and then proceed.
  6. Take down the ARN of the role you just created.

Now we are ready to deploy our first Lambda function. First, let's build our project again using the build command:

    $ ./gradlew build

Check whether the uber-JAR file is created in the build folder. Then, create the function using AWS CLI:

    $ aws lambda create-function \
      --region us-east-1 \
      --function-name book-test \
      --runtime java8 \
      --role ROLE_ARN_YOU_CREATED \
      --handler com.serverlessbook.lambda.test.Handler \
      --zip-file fileb://${PWD}/lambda-test/build/libs/lambda-test-1.0-all.jar

If everything goes well, you will see an output like the following:

{
   "CodeSha256": "6cSUk4g8GdlhvApF6LfpT1dCOgemO2LOtrH7pZ6OATk=",
   "FunctionName": "book-test",
   "CodeSize": 1481805,
   "MemorySize": 128,
   "FunctionArn": "arn:aws:lambda:us-east-1:YOUR_ACCOUNT_ID:function:book-test",
   "Version": "$LATEST",
   "Role": "arn:aws:iam::YOUR_ACCOUNT_ID:role/lambda-execution-role",
   "Timeout": 3,
   "LastModified": "2016-08-22T22:12:30.419+0000",
   "Handler": "com.serverlessbook.lambda.test.Handler",
   "Runtime": "java8",
   "Description": ""
}

This means that your function has been created. You can navigate to https://console.aws.amazon.com/lambda (with the us-east-1 region selected) to check whether your function is there. To execute the function, you can use the following command:

    $ aws lambda invoke --invocation-type RequestResponse \
                        --region us-east-1 \
                        --function-name book-test \
                        --payload '{"value":"test"}' \
                        --log-type Tail \
                        /tmp/test.txt

You can see the output value in the /tmp/test.txt file; try the command with different values to see different outputs. Note that the first invocation is always slower, while subsequent calls are significantly faster. This is because of the heat-up mechanism of AWS Lambda, which we will discuss later in the book.
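
Since our test function simply echoes its input, the output file should contain the same JSON we sent as the payload:

    $ cat /tmp/test.txt
    {"value":"test"}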

Congratulations, and welcome to the world of AWS Lambda officially!

Summary

In this chapter, we described serverless computing and learned about the use cases where it can be useful. We set up an AWS account, created a skeleton Gradle project for our book, and wrote a basic library. Finally, we implemented a very basic Lambda function on top of our work, then deployed and executed it.

In the next chapter, we will learn how to use CloudFormation for a more automated deployment process and add the dependency injection framework to our project, which will orchestrate the different services we will implement.

Bibliography


Key benefits

  • Design a real-world serverless application from scratch
  • Learn about AWS Lambda functions and how to use them to glue other AWS services together
  • Use the Java programming language and well-known design patterns. Although Java is used for the examples in this book, the concepts are applicable across all languages
  • Learn to migrate your JAX-RS application to AWS Lambda and API Gateway

Description

Over the past years, all kinds of companies, from start-ups to giant enterprises, have started moving to public cloud providers in order to save costs and reduce the operational effort needed to keep their shops open. It is now even possible to craft a complex software system consisting of many independent micro-functions that run only when they are needed, without maintaining individual servers. The focus of this book is designing serverless architectures and weighing the advantages and disadvantages of this approach, along with the decision factors to consider. You will learn how to design a serverless application, get to know the key points of the services that serverless applications are based on, and learn about known issues and solutions. The book addresses key challenges, such as how to slice the core functionality of the software so that it can be distributed across different cloud services and cloud functions. It covers basic and advanced usage of these services, testing and securing serverless software, automating deployment, and more. By the end of the book, you will be equipped with the knowledge of new tools and techniques needed to keep up with this evolution in the IT industry.

Who is this book for?

This book is for developers and software architects who are interested in designing back-end systems. Since the book uses Java to teach the concepts, knowledge of Java is required.

What you will learn

  • Learn to carve microservices out of bigger software systems
  • Orchestrate and scale microservices
  • Design and set up the data flow between cloud services and custom business logic
  • Get to grips with cloud providers' APIs, limitations, and known issues
  • Migrate existing Java applications to a serverless architecture
  • Acquire deployment strategies
  • Build a highly available and scalable data persistence layer
  • Unravel cost optimization techniques

Product Details

Publication date: Jul 19, 2017
Length: 242 pages
Edition: 1st
Language: English
ISBN-13: 9781787129191





Table of Contents

9 Chapters
  1. Getting Started with Serverless
  2. Infrastructure as a Code
  3. Hello Internet
  4. Applying Enterprise Patterns
  5. Persisting Data
  6. Building Supporting Services
  7. Searching Data
  8. Monitoring, Logging, and Security
  9. Lambda Framework