Mastering Apache Cassandra - Second Edition

Chapter 1. Quick Start

Welcome to Cassandra, and congratulations on choosing a database that outperforms most NoSQL databases. Cassandra is a powerful database built on solid fundamentals of distributed computing and fail-safe design, and it is battle-tested by companies such as Facebook, Twitter, and Netflix. Unlike conventional databases, and some modern databases that use the master-slave pattern, Cassandra treats all nodes the same, which frees the system from any single point of failure. This chapter is an introduction to Cassandra. The aim is to walk you through a proof-of-concept project that sets the right state of mind for the rest of the book.

With version 2, Cassandra has evolved into a mature database system. It is now easier to manage and more developer-friendly than previous versions. With CQL 3 and the removal of super columns, it is much harder for a developer to go wrong with Cassandra. In the upcoming sections, we will model, program, and execute a simple blogging application to see Cassandra in action. If you already have beginner-level experience with Cassandra, you may opt to skip this chapter.

Introduction to Cassandra

Quoting from Wikipedia:

"Apache Cassandra is an open source distributed database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients."

Let's try to understand what this means in detail.

A distributed database

In computing, distributed means splitting data or tasks across multiple machines. In the context of Cassandra, it means that the data is distributed across multiple machines: no single node (a machine in a cluster is usually called a node) holds all the data, just a chunk of it. So, you are not limited by the storage and processing capabilities of a single machine. If the data gets larger, add more machines. If you need more parallelism (the ability to access data concurrently), add more machines. It also means that one node going down does not mean that all the data is lost (we will cover this issue soon).

If a distributed mechanism is well designed, it will scale with the number of nodes. Cassandra is one of the best examples of such a system; it scales almost linearly in performance as new nodes are added. This means that Cassandra can handle huge volumes of data without flinching.

Note

Check out an excellent paper comparing NoSQL databases, Solving Big Data Challenges for Enterprise Application Performance Management, at http://vldb.org/pvldb/vol5/p1724_tilmannrabl_vldb2012.pdf.

High availability

We will discuss availability in the next chapter. For now, assume that availability is the probability that, when we issue a query, the system just works. A high-availability system is one that is ready to serve any request at any time. High availability is usually achieved by adding redundancy, so that if one part fails, another part of the system can serve the request. To a client, it seems as if everything worked fine.

Cassandra is robust software. Nodes joining and leaving the cluster are taken care of automatically. With proper settings, Cassandra can be made failure-resistant, meaning that if some of the servers fail, the data loss will be zero. So, you can deploy Cassandra on cheap commodity hardware or in a cloud environment, where hardware and infrastructure failures are bound to occur.
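To make this concrete, here is a hedged sketch (the keyspace name is hypothetical) of how redundancy is requested at the schema level. A replication factor of 3 asks Cassandra to keep three copies of every row, so losing one or two of those replicas loses no data:

CREATE KEYSPACE resilient_app WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 3};

Later in this chapter, we will use a replication factor of 1, simply because our test cluster has only one node.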

Replication

Continuing from the previous two points, Cassandra has a pretty powerful replication mechanism (we will see more details in the next chapter). Cassandra treats every node in the same manner. Data need not be written to a specific server (a master), and you need not wait until the data is written to all the nodes that replicate it (the slaves). So, there is no master or slave in Cassandra, and replication happens asynchronously. This means that the client can be returned a success response as soon as the data is written on at least one server. We will see how to tweak these settings to control how many servers must acknowledge a write before the call returns to the client.

From this, we can derive that when there is no master or slave, any node can serve any operation. Since we can choose how many nodes to read from or write to, we can tune this for very low latency (read from or write to just one server).
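As a hedged taste of this tunability, cqlsh exposes a CONSISTENCY command that controls how many replicas must respond before your subsequent queries return (the exact levels available depend on your version):

cqlsh> CONSISTENCY ONE;
cqlsh> CONSISTENCY QUORUM;

With ONE, a read or write returns as soon as a single replica responds; with QUORUM, a majority of replicas must respond.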

Multiple data centers

Expanding from a single machine to a single data center cluster, or to multiple data centers, is very simple compared to traditional databases, where you need to make a plethora of configuration changes and babysit replication. If you are planning to shard, it becomes a developer's nightmare. We will see later in this book that we can use these data center settings to build a real-time replicating system across data centers, and that we can dedicate each data center to different tasks without overloading the others. This is powerful: you do not have to worry about whether users in Japan, served by a data center in Tokyo, and users in the US, served by a data center in Virginia, are in sync.
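As a hedged sketch of what this looks like at the schema level (the keyspace and data center names are hypothetical; real data center names come from your snitch configuration), the NetworkTopologyStrategy replication class lets you pin a replica count per data center:

CREATE KEYSPACE global_app WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'tokyo': 3, 'virginia': 3};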

These are just broad strokes of Cassandra's capabilities. We will explore more in the upcoming chapters. This chapter is about getting you excited about learning Cassandra.

A brief introduction to a data model

Cassandra has three containers, one within another. The outermost container is the keyspace. You can think of a keyspace as a database in the RDBMS world. Tables reside within a keyspace. A table can be thought of as a relational database table, except that it is more flexible. A table is basically a sorted map of sorted maps (refer to the following figure). Each table must have a primary key. This primary key is called the row key or partition key. (We will later see that in a CQL table, the row key is the same as the primary key. If the primary key is made up of more than one column, the first component of this composite key is equivalent to the row key.) Each partition is associated with a set of cells. Each cell has a name and a value. These cells may be thought of as the columns of a traditional database system. The CQL engine interprets a group of cells with the same cell-name prefix as a row. The following figure shows the Cassandra data model:

[Figure: The Cassandra data model]

Note that if you come from a Cassandra Thrift background, it might be hard to see how Cassandra 1.2 and newer versions have changed the terminology. Before CQL, tables were called column families. A column family holds a group of rows, and a row is a sorted set of columns.

One obvious benefit of having such a flexible storage mechanism is that you can have an arbitrary number of cells with customized names and have a partition key store data as a list of tuples (a tuple is an ordered set; in this case, a key-value pair). This comes in handy when you have to store things such as time series: for example, if you want to use Cassandra to store your Facebook Timeline or your Twitter feed, or if you want the partition key to be a sensor ID, each cell name to be the timestamp of a reading, and each cell value to be the data sent by the sensor. Also, within a partition, cells are by default ordered by cell name, so in the sensor case you get the data sorted for free. The other difference is that, unlike an RDBMS, Cassandra does not have relations. This means relational logic needs to be handled at the application level. It also means that you may want to denormalize the database, because there are no joins and you want to avoid running multiple queries against multiple tables. Denormalization is the process of adding redundancy to data to achieve high read performance. For more information, visit http://en.wikipedia.org/wiki/Denormalization.
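As a hedged sketch of the sensor example (the table and column names are hypothetical and not part of the blogging application we build later), a CQL table whose partition key is the sensor ID and whose clustering column is the reading time gives you this ordering for free:

CREATE TABLE sensor_readings (
  sensor_id uuid,
  reading_time timestamp,
  value double,
  PRIMARY KEY (sensor_id, reading_time)
);

SELECT * FROM sensor_readings WHERE sensor_id = 6ab09bec-e68e-48d9-a5f8-97e6fb4c9b47 ORDER BY reading_time DESC LIMIT 10;

The SELECT returns the latest ten readings for that (made-up) sensor, presorted by Cassandra.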

Partitions are distributed across the cluster, providing effective auto-sharding. Each server holds one or more ranges of keys, so, if the cluster is balanced, adding nodes means fewer rows per node. All these concepts will be repeated in detail in later chapters.

Note

Types of keys

In the context of Cassandra, you may find the concept of keys a bit confusing. There are five terms that you may encounter. Here is what they generally mean:

  • Primary key: This is the column or a group of columns that uniquely defines a row of the CQL table.
  • Composite key: This is a type of primary key that is made up of more than one column. Sometimes, the composite key is also referred to as the compound key.
  • Partition key: Cassandra's internal data representation is large rows with a unique key called the row key. It uses these row key values to distribute data across cluster nodes, and since the row keys are used to partition data, they are called partition keys. When you define a table with a simple key, that key is the partition key. If you define a table with a composite key, the first component of that composite key works as the partition key. This means that all the CQL rows with the same partition key live on one machine.
  • Clustering key: This is the column that tells Cassandra how the data within a partition is ordered (or clustered). It essentially provides presorted retrieval if you know what order you want your data to be retrieved in.
  • Composite partition key: Optionally, CQL lets you define a composite partition key (the first part of a composite key). This key helps you distribute data across nodes if any part of the composite partition key differs. Let's take a look at the following example:
CREATE TABLE customers (
  id uuid,
  email text,
  PRIMARY KEY (id)
);

In the preceding example, id is the primary key and also the partition key, and there is no clustering column; it is a simple key. Let's add a twist to the primary key:

CREATE TABLE country_states (
  country text,
  state text,
  population int,
  PRIMARY KEY (country, state)
);

In the preceding example, we have a composite key that uses country and state to uniquely define a CQL row. The country column is the partition key, so all the rows with the same country will live on the same node/machine. The rows within a partition are sorted by the state name, so when you query for the states in the US, you will encounter the row for California before the one for New York. What if we want to partition by a combination of columns? Let's take a look at the following example:

CREATE TABLE country_chiefs (
  country text,
  prez_name text,
  num_states int,
  capital text,
  ruling_year int,
  PRIMARY KEY ((country, prez_name), num_states, capital)
);

The preceding example has a composite key involving four columns: country, prez_name, num_states, and capital, with country and prez_name constituting the composite partition key. This means that rows with the same country but a different president will live in different partitions. Within a partition, rows will be ordered by the number of states and then by the capital name.
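To make the partitioning and ordering behavior concrete, here is a hedged sketch of queries against the two preceding tables (the WHERE values are hypothetical):

SELECT state, population FROM country_states WHERE country = 'USA';

SELECT capital, ruling_year FROM country_chiefs WHERE country = 'USA' AND prez_name = 'Washington';

The first query returns rows sorted by the clustering column, state. The second must supply both components of the composite partition key to locate the partition.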

Installing Cassandra locally

Installing Cassandra on your local machine for experimental or development purposes is as easy as downloading and unzipping the tarball (the compressed .tar.gz file). For development purposes, Cassandra does not have any extreme requirements: any modern computer with 1 GB of RAM and a dual-core processor is good enough to test the waters. All the examples in this chapter were performed on a laptop with 4 GB of RAM, a dual-core processor, and the Ubuntu 14.04 operating system. Cassandra is supported on all major platforms; after all, it's Java. Here are the steps to install Cassandra locally:

  1. Install Oracle Java 1.6 (Java 6) or higher. Installing the Java Runtime Environment (JRE) is sufficient, but you may need the Java Development Kit (JDK) if you are planning to code in Java:
    # Check whether you have Java installed in your system
    $ java -version
    java version "1.7.0_21"
    Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
    Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)
    

    If you do not have Java, you may want to follow the installation details for your machine from the Oracle Java website (http://www.oracle.com/technetwork/java/javase/downloads/index.html).

  2. Download Cassandra 2.0.0 or a newer version from the Cassandra website (http://archive.apache.org/dist/cassandra/ or http://cassandra.apache.org/download/). This book uses Cassandra 2.1.2, which was the latest version at the time of writing this book. Decompress this file to a suitable directory:
    # Download Cassandra
    wget http://archive.apache.org/dist/cassandra/2.1.2/apache-cassandra-2.1.2-bin.tar.gz
    
    # Untar to your home directory
    tar xzf apache-cassandra-2.1.2-bin.tar.gz -C $HOME
    

    The unzipped file location is $HOME/apache-cassandra-2.1.2. Let's call this location CASSANDRA_HOME. Wherever we refer to CASSANDRA_HOME in this book, always assume it to be the location where Cassandra is installed.

  3. You may want to edit $CASSANDRA_HOME/conf/cassandra.yaml to configure Cassandra. It is advisable to change the data directory to a writable location when you start Cassandra as a nonroot user.
  4. To change the data directory, change the data_file_directories attribute in cassandra.yaml, as follows (here, the data directory is chosen as /mnt/cassandra/data; set it to wherever you want the data to live):
    data_file_directories:
        - /mnt/cassandra/data
  5. Set the commit log directory:
    commitlog_directory: /mnt/cassandra/commitlog
  6. Set the saved caches directory:
    saved_caches_directory: /mnt/cassandra/saved_caches
  7. Set the logging location. Edit $CASSANDRA_HOME/conf/log4j-server.properties as follows:
    log4j.appender.R.File=/tmp/cassandra.log

With this, you are ready to start Cassandra. Fire up your shell and type $CASSANDRA_HOME/bin/cassandra -f. In this command, -f stands for foreground; you can keep viewing the logs and press Ctrl + C to shut the server down. If you want to run it in the background, omit the -f option. If your data or log directories are not set up with the appropriate user permissions, you may need to start Cassandra as the superuser using the sudo command. The server is ready when you see statistics in the startup log:

[Figure: The Cassandra startup log, ending with the statistics that indicate the server is ready]

Cassandra in action

There is no better way to learn a technology than by building a proof of concept with it. In this section, we will work on a very simple application to familiarize you with Cassandra. We will build the backend of a simple blogging application, where a user can perform the following tasks:

  • Create a blogging account
  • Publish posts
  • Tag posts, so that posts can be searched by tag
  • Have people comment on those posts
  • Have people upvote or downvote a post or a comment

Modeling data

In the RDBMS world, you would glance over the entities and think about relations while modeling the application, and then you would join tables to get the required data. There is no join in Cassandra, so we will have to denormalize things. Looking at the previously mentioned specification, we can say that:

  • We need a blogs table to store the blog name and other global information, such as the blogger's username and password
  • We will have to pull posts for the blog, ideally, sorted in reverse chronological order
  • We will also have to pull all the comments for each post, when we see the post page
  • We will have to maintain tags in such a way that tags can be used to pull all the posts with the same tag
  • We will also need counters for the upvotes and downvotes on posts and comments

With the preceding details, let's see the tables we need:

  • blogs: This table will hold the global blog metadata and user information, such as the blog name, username, and password.
  • posts: This table will hold the individual posts. At first glance, posts seems like an ordinary table whose primary key is the post ID, plus a reference to the blog it belongs to. The problem arises when we add the requirement of sorting by timestamp. Unlike an RDBMS, you cannot just perform an ORDER BY operation across partitions. The workaround is to use a composite key, which consists of a partition key plus one or more further columns. The partition key determines where a row's cells are stored, while the remaining columns of the composite key determine the relative ordering of the cells inserted under that key.

    Remember that a partition is stored entirely on one node. The benefit of this is that fetches are faster, but at the same time a partition is limited in the total number of cells it can hold, which is 2 billion. The other downside of having everything in one partition is that it may cause lots of requests to go to only a couple of nodes (the replicas), making them hotspots in the cluster, which is not good. You can avoid this by using some sort of bucketing, such as including the month and year in the partition key. This makes sure that the partition changes every month and that each partition holds only one month's worth of records, which solves both problems: the cap on the number of records and the hotspot issue. However, we would still need a way to order the buckets. For this example, we will keep all the posts in one partition just to keep things simple; we will tackle the bucketing issues in Chapter 3, Effective CQL, and there is a small sketch of the idea right after this list. The following figure shows how to write time-series grouped data using composite columns:

    [Figure: Writing time-series grouped data using composite columns]
  • comments: These have similar properties to posts, except that each comment is linked to a post instead of a blog.
  • tags: These are part of a post. We use the set data type to represent the tags on a post. One of the features we mentioned earlier is the ability to search posts by tag. The natural way to do this would be to create an index on the tags column and make it searchable. Unfortunately, indexes on collection data types are not supported until Cassandra version 2.1 (https://issues.apache.org/jira/browse/CASSANDRA-4511). In our case, we will have to create and manage this sort of index manually, so we will create a tags table with a compound primary key whose components are the tag and the blog ID.
  • counters: Ideally, you would want to put the upvote and downvote counters in the column definitions of the posts and comments tables, but Cassandra does not support mixing counter columns and non-counter columns in the same table, unless the non-counter columns are part of the primary key definition. So, in our case, we will create two new tables just to keep track of votes.
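Here is the promised hedged sketch of the bucketing idea (the table and column names are hypothetical and are not part of the schema we are about to build). Folding the year and month into the partition key caps each partition at one month's worth of posts:

CREATE TABLE posts_by_month (
  blog_id uuid,
  year_month text,  -- for example, '2014-08'
  id timeuuid,
  title text,
  content text,
  PRIMARY KEY ((blog_id, year_month), id)
);

The application would then iterate over year_month values, newest first, to page through the buckets in order.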

With this, we are done with data modeling. The next step is inserting and getting data back.

[Figure: Schema based on the discussion]

Writing code

Time to start something tangible! In this section, we will create the schema, insert data, and make interesting queries to retrieve it. In a real application, you will have a GUI with buttons and links to log in, post, comment, upvote and downvote, and navigate. Here, we will stick to what happens in the backend when you perform those actions; this keeps the discussion free of the clutter introduced by other software components. This section uses the Cassandra Query Language (CQL), a SQL-like query language for Cassandra, so you can just copy these statements and paste them into your CQL shell ($CASSANDRA_HOME/bin/cqlsh) to see them working. If you want to build an application using these statements, you should be able to use them in your favorite language via the CQL driver libraries at http://www.datastax.com/download#dl-datastax-drivers. You can also download a simple Java application built on these statements from my GitHub account (https://github.com/naishe/mastering-cassandra-v2).

Setting up

Setting up a project involves creating a keyspace and tables. This can be done via the CQL shell or from your favorite programming language.

Here are the statements to create the schema:

cqlsh> CREATE KEYSPACE weblog WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};

cqlsh> USE weblog;

cqlsh:weblog> CREATE TABLE blogs (id uuid PRIMARY KEY, blog_name varchar, author varchar, email varchar, password varchar);

cqlsh:weblog> CREATE TABLE posts (id timeuuid, blog_id uuid, posted_on timestamp, title text, content text, tags set<varchar>, PRIMARY KEY(blog_id, id));

cqlsh:weblog> CREATE TABLE categories (cat_name varchar, blog_id uuid, post_id timeuuid, post_title text, PRIMARY KEY(cat_name, blog_id, post_id));

cqlsh:weblog> CREATE TABLE comments (id timeuuid, post_id timeuuid, title text, content text, posted_on timestamp, commenter varchar, PRIMARY KEY(post_id, id));

cqlsh:weblog> CREATE TABLE post_votes(post_id timeuuid PRIMARY KEY, upvotes counter, downvotes counter);

cqlsh:weblog> CREATE TABLE comment_votes(comment_id timeuuid PRIMARY KEY, upvotes counter, downvotes counter);

Note

Universally unique identifiers: uuid and timeuuid

In the preceding CQL statements, there are two interesting data types: uuid and timeuuid. uuid stands for universally unique identifier; there are five versions of them. The timeuuid data type is essentially a version 1 uuid, which takes a timestamp as its first component. This means it can be used to sort things by time, which is exactly what we want in this example: to sort posts by the time they were published.

A uuid column, on the other hand, accepts any of the five versions, as long as the value follows the standard uuid format.

In Cassandra, if you have chosen the uuid type for a column, you will need to pass a uuid while inserting the data. With timeuuid, a timestamp is enough to generate one, for example, via the now(), minTimeuuid(), or maxTimeuuid() functions.
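As a hedged illustration (these helper functions exist in Cassandra 2.x CQL, but check your version's documentation; the blog uuid used here is the one generated in the next section), dateOf() recovers the timestamp embedded in a timeuuid, and minTimeuuid() builds a boundary timeuuid from a timestamp for range queries:

cqlsh:weblog> SELECT id, dateOf(id) FROM posts LIMIT 5;

cqlsh:weblog> SELECT title FROM posts WHERE blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 AND id > minTimeuuid('2014-08-01 00:00+0530');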

The first statement requests Cassandra to create a keyspace named weblog with a replication factor of 1, because we are running a single-node Cassandra cluster on the local machine. Here are a couple of things to notice:

  • The column tags in the posts table is a set of strings.
  • The primary key for posts, categories, and comments has more than one component. The first of these components is the partition key. Data with the same partition key in a table resides on the same machine, so all the post records that belong to one blog stay on one machine (or, if the replication factor is more than one, on that many machines). This is true for all the tables with composite keys.
  • The categories table has three components in its primary key: the category name, which is the partition key; the blog ID; and the post ID. One could argue that including the post ID in the primary key is unnecessary, and that the category name and blog ID would suffice. The reason to include the post ID is to enable sorting by post ID.
  • Note that some of the IDs in the table definitions are of type timeuuid. The timeuuid data type is an interesting ID-generation mechanism: it generates a uuid based on a timestamp (provided by you) that is unique, and you can use it in applications where you want things ordered chronologically.

Inserting records

This section demonstrates inserting records into the schema. Unlike an RDBMS, you will find that there is some redundancy in the system, and you may notice that Cassandra does not enforce many rules for you. It is up to the developer to make sure that records are inserted, updated, and deleted in all the appropriate places.

Note

The CQL shown here is just for instruction purposes and is only a snippet; your output may vary.

We will see a simple INSERT example now:

cqlsh:weblog> INSERT INTO blogs (id, blog_name, author, email, password) VALUES ( blobAsUuid(timeuuidAsBlob(now())), 'Random Ramblings', 'JRR Rowling', '[email protected]', 'someHashed#passwrd');


cqlsh:weblog> SELECT * FROM blogs;

 id       | author      | blog_name        | email             | password
----------+-------------+------------------+-------------------+--------------------
 83cec... | JRR Rowling | Random Ramblings | [email protected] | someHashed#passwrd

(1 rows)

Tip

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

The application would generate the uuid, or you would get it from an existing record in the blogs table based on the user's e-mail address or some other criterion. Here, just to be concise, uuid generation is left to Cassandra, and the value is retrieved by running a SELECT statement. Let's insert some posts into this blog:

# First post
cqlsh:weblog> INSERT INTO posts (id, blog_id, title, content, tags, posted_on) VALUES (now(), 83cec740-22b1-11e4-a4f0-7f1a8b30f852, 'first post', 'hey howdy!', {'random','welcome'}, 1407822921000);

cqlsh:weblog> SELECT * FROM posts;

 blog_id  | id       | content    | posted_on                | tags                  | title
----------+----------+------------+--------------------------+-----------------------+------------
 83cec... | 04722... | hey howdy! | 2014-08-12 11:25:21+0530 | {'random', 'welcome'} | first post

(1 rows)

cqlsh:weblog> INSERT INTO categories (cat_name, blog_id, post_id, post_title) VALUES ( 'random', 83cec740-22b1-11e4-a4f0-7f1a8b30f852, 047224f0-22b2-11e4-a4f0-7f1a8b30f852, 'first post');

cqlsh:weblog> INSERT INTO categories (cat_name, blog_id, post_id, post_title) VALUES ( 'welcome', 83cec740-22b1-11e4-a4f0-7f1a8b30f852, 047224f0-22b2-11e4-a4f0-7f1a8b30f852, 'first post');


# Second post
cqlsh:weblog> INSERT INTO posts (id, blog_id, title, content, tags, posted_on) VALUES (now(), 83cec740-22b1-11e4-a4f0-7f1a8b30f852, 'Fooled by randomness...', 'posterior=(prior*likelihood)/evidence', {'random','maths'}, 1407823189000);

cqlsh:weblog> select * from posts;

 blog_id  | id       | content                               | posted_on                | tags                  | title
----------+----------+---------------------------------------+--------------------------+-----------------------+-------------------------
 83cec... | 04722... | hey howdy!                            | 2014-08-12 11:25:21+0530 | {'random', 'welcome'} | first post
 83cec... | c06a4... | posterior=(prior*likelihood)/evidence | 2014-08-12 11:29:49+0530 | {'maths', 'random'}   | Fooled by randomness...

(2 rows)

cqlsh:weblog> INSERT INTO categories (cat_name, blog_id, post_id, post_title) VALUES ( 'random', 83cec740-22b1-11e4-a4f0-7f1a8b30f852, c06a42f0-22b2-11e4-a4f0-7f1a8b30f852, 'Fooled by randomness...');

cqlsh:weblog> INSERT INTO categories (cat_name, blog_id, post_id, post_title) VALUES ( 'maths', 83cec740-22b1-11e4-a4f0-7f1a8b30f852, c06a42f0-22b2-11e4-a4f0-7f1a8b30f852, 'Fooled by randomness...');

Note

You may want to insert more rows so that we can experiment with pagination in the upcoming sections.
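The filler posts that appear in the pagination examples below were inserted in exactly this way. Here is a hedged sketch of one such statement (the title, content, tags, and timestamp merely mirror the output shown later):

cqlsh:weblog> INSERT INTO posts (id, blog_id, title, content, tags, posted_on) VALUES (now(), 83cec740-22b1-11e4-a4f0-7f1a8b30f852, 'random post8', 'random content8', {'random','garbage'}, 1407952980000);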

You may notice that the primary key, which is of type timeuuid, is created using Cassandra's built-in now() function, and that we repeated the title in the categories table. The rationale behind the repetition is that we may want to display the titles of all the posts that match a tag the user clicked, with each title linking to its post (a post can be retrieved by blog ID and post ID). Cassandra does not support relational joins between two tables, so you cannot join categories and posts to display the title. The other option would be to use the blog ID and post ID to retrieve each post's title, but that is more work and somewhat inefficient.

Let's insert some comments and upvote and downvote some posts and comments:

# Insert some comments

cqlsh:weblog>  INSERT INTO comments (id, post_id, commenter, title, content, posted_on) VALUES (now(), c06a42f0-22b2-11e4-a4f0-7f1a8b30f852, '[email protected]', 'Thoughful article but...', 'It is too short to describe the complexity.', 1407868973000);

cqlsh:weblog> INSERT INTO comments (id, post_id, commenter, title, content, posted_on) VALUES (now(), c06a42f0-22b2-11e4-a4f0-7f1a8b30f852, '[email protected]', 'Nice!', 'Thanks, this is good stuff.', 1407868975000);

cqlsh:weblog> INSERT INTO comments (id, post_id, commenter, title, content, posted_on) VALUES (now(), c06a42f0-22b2-11e4-a4f0-7f1a8b30f852, '[email protected]', 'Follow my blog', 'Please follow my blog.', 1407868979000);

cqlsh:weblog> INSERT INTO comments (id, post_id, commenter, title, content, posted_on) VALUES (now(), 047224f0-22b2-11e4-a4f0-7f1a8b30f852, '[email protected]', 'New blogger?', 'Welcome to weblog application.', 1407868981000);

# Insert some votes

cqlsh:weblog> UPDATE comment_votes SET upvotes = upvotes + 1 WHERE comment_id = be127d00-22c2-11e4-a4f0-7f1a8b30f852;

cqlsh:weblog> UPDATE comment_votes SET upvotes = upvotes + 1 WHERE comment_id = be127d00-22c2-11e4-a4f0-7f1a8b30f852;


cqlsh:weblog> UPDATE comment_votes SET downvotes = downvotes + 1 WHERE comment_id = be127d00-22c2-11e4-a4f0-7f1a8b30f852;

cqlsh:weblog> UPDATE post_votes SET downvotes = downvotes + 1 WHERE post_id = d44e0440-22c2-11e4-a4f0-7f1a8b30f852;

cqlsh:weblog> UPDATE post_votes SET upvotes = upvotes + 1 WHERE post_id = d44e0440-22c2-11e4-a4f0-7f1a8b30f852;

Counters are always inserted or updated using the UPDATE statement.
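A hedged aside: there is no INSERT for counters and no way to set one to an absolute value. The UPDATE creates the row if it is absent and applies the delta, and decrementing works the same way:

cqlsh:weblog> UPDATE post_votes SET downvotes = downvotes - 1 WHERE post_id = d44e0440-22c2-11e4-a4f0-7f1a8b30f852;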

Retrieving data

Now that we have data inserted for our application, we need to retrieve it. For blog applications, the blog name usually serves as the primary key in the database: when you request cold-caffein.blogspot.com, a blog metadata table is looked up using the blog ID cold-caffein. We, on the other hand, will use the blog uuid to serve content. So, we assume that the blog ID is at hand.

Let's display some posts. We should not load all the posts for the user upfront; it is not a good idea from a usability point of view, it demands more bandwidth, and it is probably a lot of reads for Cassandra. So first, let's pull the two most recent posts:

cqlsh:weblog> select * from posts where blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 order by id desc limit 2;

 blog_id  | id       | content                               | posted_on                | tags                  | title
----------+----------+---------------------------------------+--------------------------+-----------------------+-------------------------
 83cec... | c2240... | posterior=(prior*likelihood)/evidence | 2014-08-12 11:29:49+0530 | {'maths', 'random'}   | Fooled by randomness...
 83cec... | 965a2... | hey howdy!                            | 2014-08-12 11:25:21+0530 | {'random', 'welcome'} | first post

(2 rows)

This was the first page. For the next page, we can use an anchor. The last post's ID works well as one, since timeuuid values increase monotonically with time: posts older than the last one shown will have smaller post IDs, so that ID serves as our anchor:

cqlsh:weblog> select * from posts where blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 and id < 8eab0c10-2314-11e4-bac7-3f5f68a133d8 order by id desc limit 2;

 blog_id  | id       | content         | posted_on                | tags                  | title
----------+----------+-----------------+--------------------------+-----------------------+--------------
 83cec... | 83f16... | random content8 | 2014-08-13 23:33:00+0530 | {'garbage', 'random'} | random post8
 83cec... | 76738... | random content7 | 2014-08-13 23:32:58+0530 | {'garbage', 'random'} | random post7

(2 rows)

You can retrieve the posts on the next page as follows:

cqlsh:weblog> select * from posts where blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 and id < 76738dc0-2314-11e4-bac7-3f5f68a133d8 order by id desc limit 2;

 blog_id  | id       | content         | posted_on                | tags                  | title
----------+----------+-----------------+--------------------------+-----------------------+--------------
 83cec... | 6f85d... | random content6 | 2014-08-13 23:32:56+0530 | {'garbage', 'random'} | random post6
 83cec... | 684c5... | random content5 | 2014-08-13 23:32:54+0530 | {'garbage', 'random'} | random post5

(2 rows)

Now for each post, we need to perform the following tasks:

  • Pull the list of comments
  • Pull its upvotes and downvotes
  • Load the comments as follows:
    cqlsh:weblog> select * from comments where post_id = c06a42f0-22b2-11e4-a4f0-7f1a8b30f852 order by id desc;

     post_id  | id       | commenter     | content                                     | posted_on                | title
    ----------+----------+---------------+---------------------------------------------+--------------------------+--------------------------
     c06a4... | cd5a8... | [email protected] | Please follow my blog.                      | 2014-08-13 00:12:59+0530 | Follow my blog
     c06a4... | c6aff... | [email protected] | Thanks, this is good stuff.                 | 2014-08-13 00:12:55+0530 | Nice!
     c06a4... | be127... | [email protected] | It is too short to describe the complexity. | 2014-08-13 00:12:53+0530 | Thoughful article but...
  • Individually fetch the counters for each post and comment as follows:
    cqlsh:weblog> select * from comment_votes where comment_id = be127d00-22c2-11e4-a4f0-7f1a8b30f852;

     comment_id | downvotes | upvotes
    ------------+-----------+---------
     be127...   |         1 |       6

    (1 rows)

    cqlsh:weblog> select * from post_votes where post_id = c06a42f0-22b2-11e4-a4f0-7f1a8b30f852;

     post_id  | downvotes | upvotes
    ----------+-----------+---------
     c06a4... |         2 |       7

    (1 rows)

Now, we want to give the users of our blogging website the ability to click on a tag and see a list of all the posts with that tag. Here is what we do:

cqlsh:weblog> select * from categories where cat_name = 'maths' and blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 order by blog_id desc;

 cat_name | blog_id  | post_id  | post_title
----------+----------+----------+-------------------------
 maths    | 83cec... | a865c... | YARA
 maths    | 83cec... | c06a4... | Fooled by randomness...

(2 rows)

We can obviously use pagination and sorting here as well; you get the idea.

Sometimes, it is nice to see what people generally comment, and it would be great if we could find all the comments by a given user. To make a non-primary-key column searchable in Cassandra, you need to create an index on that column. So, let's do that:

cqlsh:weblog> CREATE INDEX commenter_idx ON comments (commenter);

cqlsh:weblog> select * from comments where commenter = '[email protected]';

 post_id  | id       | commenter     | content                                     | posted_on                | title
----------+----------+---------------+---------------------------------------------+--------------------------+--------------------------
 04722... | d44e0... | [email protected] | Welcome to weblog application.              | 2014-08-13 00:13:01+0530 | New blogger?
 c06a4... | be127... | [email protected] | It is too short to describe the complexity. | 2014-08-13 00:12:53+0530 | Thoughful article but...

(2 rows)

This completes all the requirements we stated. We did not cover the update and delete operations; they follow the same pattern as inserting records. The developer needs to make sure that the data is updated or deleted from all the places it lives. So, if you want to update a post's title, it needs to be changed in both the posts and categories tables, as sketched below.
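For instance, here is a hedged sketch of such a dual update (the new title is hypothetical; the uuids are the ones generated earlier in this session), wrapped in a logged batch so both writes are applied together:

cqlsh:weblog> BEGIN BATCH
    UPDATE posts SET title = 'first post (revised)' WHERE blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 AND id = 047224f0-22b2-11e4-a4f0-7f1a8b30f852;
    UPDATE categories SET post_title = 'first post (revised)' WHERE cat_name = 'random' AND blog_id = 83cec740-22b1-11e4-a4f0-7f1a8b30f852 AND post_id = 047224f0-22b2-11e4-a4f0-7f1a8b30f852;
    APPLY BATCH;

Remember to repeat the categories update for every tag the post carries ('welcome' as well, in this case).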

Writing your application

Cassandra provides APIs for almost all mainstream programming languages. Developing an application for Cassandra is little more than executing CQL through an API and collecting the result set or an iterator for the query. This section gives you a glimpse of the Java code for the example we discussed earlier. It uses the DataStax Java driver for Cassandra. The full code is available at https://github.com/naishe/mastering-cassandra-v2.

Getting the connection

An application creates a single instance of the Cluster object and keeps it for its life cycle. Every time you want to execute a query or a group of queries, you ask the Cluster object for a Session object; in a way, it is like a connection pool. Let's take a look at the following example:

public class CassandraConnection {
  private static Cluster cluster = getCluster();
  public static final Session getSession(){
    if ( cluster == null ){
      cluster = getCluster();
    }
    return cluster.connect();
  }

  private static Cluster getCluster(){
    Cluster clust = Cluster
        .builder()
        .addContactPoint(Constants.HOST)
        .build();
    return clust;
  }
[-- snip --]

Executing queries

Query execution is barely different from what we did at the command prompt earlier:

  private static final String BLOGS_TABLE_DEF =
      "CREATE TABLE IF NOT EXISTS "
      + Constants.KEYSPACE + ".blogs "
      + "("
      + "id uuid PRIMARY KEY, "
      + "blog_name varchar, "
      + "author varchar, "
      + "email varchar, "
      + "password varchar"
      + ")";
[-- snip --]
    Session conn = CassandraConnection.getSession();
[--snip--]
    conn.execute(BLOGS_TABLE_DEF);
[-- snip --]
    conn.close();

Object mapping

The DataStax Java driver provides an easy-to-use, annotation-based object mapper, which can save you a lot of code bloat and marshalling effort. Here is an example of the Blog object that maps to the blogs table:

@Table(keyspace = Constants.KEYSPACE, name = "blogs")
public class Blog extends AbstractVO<Blog> {
  @PartitionKey
  private UUID id;
  @Column(name = "blog_name")
  private String blogName;
  private String author;
  private String email;
  private String password;

  public UUID getId() {
    return id;
  }
  public void setId(UUID id) {
    this.id = id;
  }
  public String getBlogName() {
    return blogName;
  }
  public void setBlogName(String blogName) {
    this.blogName = blogName;
  }
  public String getAuthor() {
    return author;
  }
  public void setAuthor(String author) {
    this.author = author;
  }
  public String getEmail() {
    return email;
  }
  public void setEmail(String email) {
    this.email = email;
  }
  public String getPassword() {
    return password;
  }
  public void setPassword(String password) {
/* Ideally, you'd use a unique salt with this hashing */
    this.password = Hashing
      .sha256()
      .hashString(password, Charsets.UTF_8)
      .toString();
  }

  @Override
  public boolean equals(Object that) {
    return this.getId().equals(((Blog)that).getId());
  }

  @Override
  public int hashCode() {
    return Objects.hashCode(getId(), getEmail(), getAuthor(), getBlogName());
  }

  @Override
  protected Blog getInstance() {
    return this;
  }

  @Override
  protected Class<Blog> getType() {
    return Blog.class;
  }

  // ----- ACCESS VIA QUERIES -----

  public static Blog getBlogByName(String blogName, SessionWrapper sessionWrapper)
      throws BlogNotFoundException {
    AllQueries queries = sessionWrapper.getAllQueries();
    Result<Blog> rs = queries.getBlogByName(blogName);
    if (rs.isExhausted()) {
      throw new BlogNotFoundException();
    }
    return rs.one();
  }

}

For now, forget about the AbstractVO superclass; it is just some abstraction where common functionality lives. You can see the annotations that state which keyspace and table this class maps to. Each instance variable maps to a column in the table; for any column whose name differs from the attribute name in the class, you have to state the mapping explicitly. Getters and setters do not have to be dumb, and you can get creative in there: for example, the setPassword setter takes a plain-text password and hashes it before storing it. Note that you must mark which field acts as the partition key. You do not have to annotate every component of the primary key, just the first one. Now you can use DataStax's mapper to create, retrieve, update, and delete an object without having to marshal the results into the object yourself. Here is an example:

Blog blog = new MappingManager(session)
    .mapper(Blog.class)
    .get(blogUUID);

You can also execute arbitrary queries and map the results to an object. To do that, you write an interface that contains a method whose signature describes what the query consumes as its arguments and what it returns as the method's return type, as follows:

@Accessor
public interface AllQueries {
[--snip--]

  @Query("SELECT * FROM " + Constants.KEYSPACE + ".blogs WHERE blog_name = :blogName")
  public Result<Blog> getBlogByName(@Param("blogName") String blogName);
[-- snip --]

Tip

This interface is annotated with Accessor, and its methods satisfy the Query annotations they carry. The earlier snippet of the Blog class uses this method to retrieve a blog by its name.

Summary

We have started learning about Cassandra: you can now set up a local machine, play with CQL 3 in cqlsh, and write a simple program that uses Cassandra as its backend. It may seem as if we are all done, but it's not so. Cassandra is not about ease of modeling or being simple to code against (unlike an RDBMS); it is all about speed, availability, and reliability. The only thing that matters in a production setup is how quickly and reliably your application can serve a fickle-minded user. It does not matter whether you have an elegant database architecture in third normal form, or whether you use a functional programming language and follow the Don't Repeat Yourself (DRY) principle religiously. Cassandra and many other modern databases, especially in the NoSQL space, are there to provide you with speed. Cassandra's performance increases almost linearly with the addition of new nodes, which makes it suitable for high-throughput applications without committing a lot of expensive infrastructure to begin with. For more information, visit http://vldb.org/pvldb/vol5/p1724_tilmannrabl_vldb2012.pdf. The rest of the book aims to give you a solid understanding of the following aspects of Cassandra, one chapter at a time:

  • You will learn the internals of Cassandra and the general programming patterns for it
  • Setting up a cluster and tweaking Cassandra and Java settings to get the most out of Cassandra for your use case
  • Infrastructure maintenance: nodes going down, scaling up and down, backing up data, keeping a vigilant watch through monitoring, and getting notified about interesting events in your Cassandra setup
  • Cassandra is easy to use with the Apache Hadoop and Apache Pig tools, and we will see simple examples of this

The best thing about these chapters is that there are no prerequisites. Most of them start from the basics to get you familiar with a concept and then take you to an advanced level. So, if you have never used Hadoop, do not worry: you can still get a simple setup up and running with Cassandra.

In the next chapter, we will look at Cassandra's internals and see what makes it so fast.
