Mastering Apache Cassandra 3.x: An expert guide to improving database scalability and availability without compromising performance, Third Edition

By Malepati and Ploetz

5 (2 Ratings) | Paperback | Oct 2018 | 348 pages | 3rd Edition

Mastering Apache Cassandra 3.x

Quick Start

Welcome to the world of Apache Cassandra! In this first chapter, we will briefly introduce Cassandra, along with a quick, step-by-step process to get your own single-node cluster up and running. Even if you already have experience working with Cassandra, this chapter will help you confirm that you are building everything properly. If this is your first foray into Cassandra, then get ready to take your first steps into a larger world.

In this chapter, we will cover the following topics:

  • Introduction to Cassandra
  • Installation and configuration
  • Starting up and shutting down Cassandra
  • Cassandra Cluster Manager (CCM)

By the end of this chapter, you will have built a single-node cluster of Apache Cassandra. This will be a good exercise to help you start to see some of the configuration and thought that goes into building a larger cluster. As this chapter progresses and the material gets more complex, you will start to connect the dots and understand exactly what is happening between installation, operation, and development.

Introduction to Cassandra

Apache Cassandra is a highly available, distributed, partitioned row store. It is one of the more popular NoSQL databases used by both small and large companies all over the world to store and efficiently retrieve large amounts of data. While there are licensed, proprietary versions available (which include enterprise support), Cassandra is also a top-level project of the Apache Software Foundation, and has deep roots in the open source community. This makes Cassandra a proven and battle-tested approach to scaling high-throughput applications.

High availability

Cassandra's design is premised on the points outlined in the Dynamo: Amazon's Highly Available Key-value Store paper (https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf). Specifically, when you have large networks of interconnected hardware, something is always in a state of failure. In reality, every piece of hardware being in a healthy state is the exception, rather than the rule. Therefore, it is important that a data storage system is able to deal with (and account for) issues such as network or disk failure.

Depending on the Replication Factor (RF) and the required consistency level, a Cassandra cluster is capable of sustaining operations with one or two nodes in a failure state. For example, let's assume that a cluster with a single data center has a keyspace configured for an RF of three. This means that the cluster contains three copies of each row of data. If an application queries with a consistency level of one, then it can still function properly with one or two nodes in a down state.
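
To make the relationship between RF and consistency level concrete, here is a minimal CQL sketch; the keyspace and data center names are purely illustrative, and the per-session consistency level is set with the cqlsh CONSISTENCY command:

CREATE KEYSPACE example_ks WITH replication =
{'class': 'NetworkTopologyStrategy', 'dc1': '3'};

CONSISTENCY ONE;

With an RF of three and a consistency level of one, each read or write needs an acknowledgment from only one of the three replicas, which is why the cluster can keep serving that query with up to two replicas down.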

Distributed

Cassandra is known as a distributed database. A Cassandra cluster is a collection of nodes (individual instances running Cassandra) all working together to serve the same dataset. Nodes can also be grouped together into logical data centers. This is useful for providing data locality for an application or service layer, as well as for working with Cassandra instances that have been deployed in different regions of a public cloud.

Cassandra clusters can scale to suit both an expanding disk footprint and higher operational throughput. Essentially, this means that each node becomes responsible for a smaller percentage of the total data size. Assuming that the 500 GB disks of a six-node cluster (RF of three) start to reach their maximum capacity, adding three more nodes (for a total of nine) accomplishes the following:

  • Brings the total disk available to the cluster up from 3 TB to 4.5 TB
  • Drops the percentage of data that each node is responsible for from 50% down to 33%

Additionally, let's assume that before the expansion of the cluster (from the prior example), the cluster was capable of supporting 5,000 operations per second. Cassandra's operational throughput scales linearly with the number of nodes. After increasing the cluster from six nodes to nine, the cluster should therefore be expected to support 7,500 operations per second.

Partitioned row store

In Cassandra, rows of data are stored in tables based on the hashed value of the partition key, called a token. Each node in the cluster is assigned multiple token ranges, and rows are stored on nodes that are responsible for their tokens.

Each keyspace (collection of tables) can be assigned an RF. The RF designates how many copies of each row should be stored in each data center. If a keyspace has an RF of three, then each node is assigned primary, secondary, and tertiary token ranges. As data is written, it is written to all of the nodes that are responsible for its token.
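
As a quick illustration (using the hi_scores table that we will build later in this chapter), cqlsh can show the token that a given partition key hashes to; the actual value depends on the partitioner in use:

SELECT name, token(name) FROM hi_scores WHERE name='Dad';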

Installation

To get started with Cassandra quickly, we'll step through a single-node, local installation.

The following are the requirements to run Cassandra locally:

  • A flavor of Linux or macOS
  • A system with between 4 GB and 16 GB of random access memory (RAM)
  • A local installation of the Java Development Kit (JDK) version 8, latest patch
  • A local installation of Python 2.7 (for cqlsh)
  • Your user must have sudo rights to your local system

While you don't need sudo rights to run Apache Cassandra, they are required for some of the operating system configurations.

Apache Cassandra 3.11.2 breaks with JDK 1.8.0_161. Make sure to use either an older or newer version of the JDK.
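
Before continuing, you can quickly confirm which JDK and Python versions are on your path; the java -version output includes the patch level, so you can verify that you are not on 1.8.0_161:

java -version
python --version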

Head to the Apache download site for the Cassandra project (http://cassandra.apache.org/download/), choose 3.11.2, and select a mirror to download that version of Cassandra. When the download completes, copy the .tar.gz file to a location that your user has read and write permissions for. This example will assume that this is going to be the ~/local/ directory:

mkdir ~/local
cd ~/local
cp ~/Downloads/apache-cassandra-3.11.2-bin.tar.gz .

Untar the file to create your cassandra directory:

tar -zxvf apache-cassandra-3.11.2-bin.tar.gz

Some people prefer to rename this directory, like so:

mv apache-cassandra-3.11.2/ cassandra/

Configuration

At this point, you could start your node with no further configuration. However, it is good to get into the habit of checking and adjusting the properties indicated below.

cassandra.yaml

It is usually a good idea to rename your cluster. Inside the conf/cassandra.yaml file, specify a new cluster_name property, overwriting the default Test Cluster:

cluster_name: 'PermanentWaves'

The num_tokens property default of 256 has proven to be too high for the newer, 3.x versions of Cassandra. Go ahead and set that to 24:

num_tokens: 24

To enable user security, change the authenticator and authorizer properties (from their defaults) to the following values:

authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer

Cassandra installs with all security disabled by default. Even if you are not concerned with security on your local system, it makes sense to enable it to get used to working with authentication and authorization from a development perspective.

By default, Cassandra will come up bound to localhost or 127.0.0.1. For your own local development machine, this is probably fine. However, if you want to build a multi-node cluster, you will want to bind to your machine's IP address. For this example, I will use 192.168.0.101. To configure the node to bind to this IP, adjust the listen_address and rpc_address properties:

listen_address: 192.168.0.101
rpc_address: 192.168.0.101

If you set listen_address and rpc_address, you'll also need to adjust your seed list (which defaults to 127.0.0.1):

- seeds: "192.168.0.101"

I will also adjust my endpoint_snitch property to use GossipingPropertyFileSnitch:

endpoint_snitch: GossipingPropertyFileSnitch

cassandra-rackdc.properties

In terms of NoSQL databases, Apache Cassandra handles multi-data center awareness better than any other. To configure this, each node must use GossipingPropertyFileSnitch (as set in the preceding cassandra.yaml configuration) and must have its local data center (and rack) settings defined. Therefore, I will set the dc and rack properties in the conf/cassandra-rackdc.properties file:

dc=ClockworkAngels
rack=R40

Starting Cassandra

To start Cassandra locally, execute the cassandra script in the bin directory. By default, it starts in the background; pass the -f flag to keep it in the foreground. To record the process ID (PID), pass the -p flag with a destination file:

cd cassandra
bin/cassandra -p cassandra.pid

This will store the PID of the Cassandra process in a file named cassandra.pid in the ~/local/cassandra directory. Several messages will be dumped to the screen. The node is running successfully when you see this message:

Starting listening for CQL clients on localhost/192.168.0.101:9042 (unencrypted).
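
If the startup messages have already scrolled by, the same line can be found in the node's log (the tarball install writes it to logs/system.log under the install directory):

grep "Starting listening" logs/system.log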

This can also be verified with the nodetool status command:

bin/nodetool status

Datacenter: ClockworkAngels
===========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 192.168.0.101 71.26 KiB 24 100.0% 0edb5efa... R40

Cassandra Cluster Manager

If you want an even faster way to install Cassandra, you can use an open source tool called CCM. CCM installs Cassandra for you, with very minimal configuration. In addition to ease of installation, CCM also allows you to run multiple Cassandra nodes locally.

First, let's clone the CCM repository from GitHub, and cd into the directory:

git clone https://github.com/riptano/ccm.git
cd ccm

Next, we'll run the setup program to install CCM:

sudo ./setup.py install
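
With CCM installed, a cluster needs to be created and started before anything can be checked. A typical invocation looks like the following; the cluster name, Cassandra version, and node count here are illustrative, and the -s flag starts the nodes immediately (CCM downloads the requested Cassandra version the first time it is used):

ccm create demo -v 3.11.2 -n 3 -s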

To verify that my local cluster is working, I'll invoke nodetool status via node1:

ccm node1 status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 127.0.0.1 100.56 KiB 1 66.7% 49ecc8dd... rack1
UN 127.0.0.2 34.81 KiB 1 66.7% 404a8f97... rack1
UN 127.0.0.3 34.85 KiB 1 66.7% eed33fc5... rack1
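
CCM wraps other node-level tools in the same way. For example, you can open cqlsh on a specific node without needing to know which loopback address it is bound to:

ccm node1 cqlsh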

To shut down your cluster, go ahead and send the stop command to each node:

ccm stop node1
ccm stop node2
ccm stop node3

Note that CCM requires a working installation of Python 2.7 or later, as well as a few additional libraries (pyYAML, six, ant, and psutil), and local IPs 127.0.0.1 through 127.0.0.3 to be available. Visit https://github.com/riptano/ccm for more information.

Using CCM actually changes many of the commands that we will follow in this book. While it is a great tool for quickly spinning up a small cluster for demonstration purposes, it can complicate the process of learning how to use Cassandra.

A quick introduction to the data model

Now that we have a Cassandra cluster running on our local machine, we will demonstrate its use with some quick examples. We will start with cqlsh, and use that as our primary means of working with the Cassandra data model.

Using Cassandra with cqlsh

To start working with Cassandra, let's start the Cassandra Query Language (CQL) shell. The shell interface will allow us to execute CQL commands to define, query, and modify our data. As this is a new cluster and we have turned on authentication and authorization, we will use the default username and password of cassandra/cassandra, as follows:

bin/cqlsh 192.168.0.101 -u cassandra -p cassandra

Connected to PermanentWaves at 192.168.0.101:9042.
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cassandra@cqlsh>

First, let's tighten up security by creating a new superuser to work with.

New users can only be created if authentication and authorization are properly set in the cassandra.yaml file:

cassandra@cqlsh> CREATE ROLE cassdba WITH PASSWORD='flynnLives' AND LOGIN=true AND SUPERUSER=true;

Now, set the default cassandra user's password to something long and indecipherable. You shouldn't need to use it ever again:

cassandra@cqlsh> ALTER ROLE cassandra WITH PASSWORD='dsfawesomethingdfhdfshdlongandindecipherabledfhdfh';
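
You can verify that both roles now exist, and that cassdba is flagged as a superuser, with the LIST ROLES command:

cassandra@cqlsh> LIST ROLES;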

Then, exit cqlsh using the exit command and log back in as the new cassdba user:

cassandra@cqlsh> exit
bin/cqlsh 192.168.0.101 -u cassdba -p flynnLives

Connected to PermanentWaves at 192.168.0.101:9042.
[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cassdba@cqlsh>

Now, let's create a new keyspace where we can put our tables, as follows:

cassdba@cqlsh> CREATE KEYSPACE packt WITH replication =
{'class': 'NetworkTopologyStrategy', 'ClockworkAngels': '1'}
AND durable_writes = true;

For those of you who have used Cassandra before, you might be tempted to build your local keyspaces with SimpleStrategy. SimpleStrategy has no benefits over NetworkTopologyStrategy, and is limited in that it cannot be used in a multi-data center environment. Therefore, it is a good idea to get used to using NetworkTopologyStrategy on your local instance as well.
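
To double-check the keyspace definition and its replication settings, describe it (output not shown here):

cassdba@cqlsh> DESC KEYSPACE packt;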

With the newly created keyspace, let's go ahead and use it:

cassdba@cqlsh> use packt;
cassdba@cqlsh:packt>

The cqlsh prompt changes depending on the user and keyspace currently being used.

Now, let's assume that we have a requirement to build a table for video game scores. We will want to keep track of each player by name, as well as each score and the game on which it was achieved. A table to store this data would look something like this:

CREATE TABLE hi_scores (name TEXT, game TEXT, score BIGINT,
PRIMARY KEY (name,game));

Next, we will INSERT data into the table, which will help us understand some of Cassandra's behaviors:

INSERT INTO hi_scores (name, game, score) VALUES ('Dad','Pacman',182330);
INSERT INTO hi_scores (name, game, score) VALUES ('Dad','Burgertime',222000);
INSERT INTO hi_scores (name, game, score) VALUES ('Dad','Frogger',15690);
INSERT INTO hi_scores (name, game, score) VALUES ('Dad','Joust',48150);
INSERT INTO hi_scores (name, game, score) VALUES ('Connor','Pacman',182330);
INSERT INTO hi_scores (name, game, score) VALUES ('Connor','Monkey Kong',15800);
INSERT INTO hi_scores (name, game, score) VALUES ('Connor','Frogger',4220);
INSERT INTO hi_scores (name, game, score) VALUES ('Connor','Joust',48850);
INSERT INTO hi_scores (name, game, score) VALUES ('Avery','Galaga',28880);
INSERT INTO hi_scores (name, game, score) VALUES ('Avery','Burgertime',1200);
INSERT INTO hi_scores (name, game, score) VALUES ('Avery','Frogger',1100);
INSERT INTO hi_scores (name, game, score) VALUES ('Avery','Joust',19520);

Now, let's execute a CQL query to retrieve the scores of the player named Connor:

cassdba@cqlsh:packt> SELECT * FROM hi_scores WHERE name='Connor';
name | game | score
--------+-------------+--------
Connor | Frogger | 4220
Connor | Joust | 48850
Connor | Monkey Kong | 15800
Connor | Pacman | 182330
(4 rows)
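
Since game is the clustering column, we can also narrow the result down to a single game for that player by adding it to the WHERE clause; this returns just Connor's Joust row:

cassdba@cqlsh:packt> SELECT * FROM hi_scores WHERE name='Connor' AND game='Joust';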

That works pretty well. But what if we want to see how all of the players did at the Joust game? Let's try the following query:

cassdba@cqlsh:packt> SELECT * FROM hi_scores WHERE game='Joust';

InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING"

As stated in the preceding error message, this query could be made to run by adding the ALLOW FILTERING directive. Queries using ALLOW FILTERING are notorious for performing poorly, so it is a good idea to build your data model so that you do not need it.

Evidently, Cassandra has some problems with that query. We'll discuss more about why that is the case later on. But, for now, let's build a table that specifically supports querying high scores by game:

CREATE TABLE hi_scores_by_game (name TEXT, game TEXT, score BIGINT,
PRIMARY KEY (game,score)) WITH CLUSTERING ORDER BY (score DESC);

Now, we will duplicate our data into our new query table:

INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Dad','Pacman',182330);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Dad','Burgertime',222000);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Dad','Frogger',15690);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Dad','Joust',48150);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Connor','Pacman',182330);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Connor','Monkey Kong',15800);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Connor','Frogger',4220);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Connor','Joust',48850);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Avery','Galaga',28880);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Avery','Burgertime',1200);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Avery','Frogger',1100);
INSERT INTO hi_scores_by_game (name, game, score) VALUES ('Avery','Joust',19520);

Now, let's try to query while filtering on game with our new table:

cassdba@cqlsh:packt> SELECT * FROM hi_scores_by_game
WHERE game='Joust';
game | score | name
-------+-------+--------
Joust | 48850 | Connor
Joust | 48150 | Dad
Joust | 19520 | Avery
(3 rows)

As mentioned previously, the following chapters will discuss why and when Cassandra only allows certain PRIMARY KEY components to be used in the WHERE clause. The important thing to remember at this point is that in Cassandra, tables and data structures should be modeled according to the queries that they are intended to serve.
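
Because score is a clustering column in hi_scores_by_game, we can also apply a range predicate to it once the partition (game) is specified; for example, to see only the Joust scores above 20,000. This returns only Connor's and Dad's scores, already sorted in descending order thanks to the CLUSTERING ORDER BY clause:

cassdba@cqlsh:packt> SELECT * FROM hi_scores_by_game WHERE game='Joust' AND score > 20000;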

Shutting down Cassandra

Before shutting down your cluster instances, there are some additional commands that should be run. Again, with your own, local node(s), these are not terribly necessary. But it is a good idea to get used to running these, should you ever need to properly shut down a production node that may contain data that people actually care about.

First, we will disable gossip. This keeps other nodes from communicating with the node while we are trying to bring it down:

bin/nodetool disablegossip

Next, we will disable the native binary protocol to keep this node from serving client requests:

bin/nodetool disablebinary

Then, we will drain the node. This will prevent it from accepting writes, and force all in-memory data to be written to disk:

bin/nodetool drain

With the node drained, we can stop the process using the PID we saved earlier:

kill $(cat cassandra.pid)

We can verify that the node has stopped by tailing the log:

tail logs/system.log

INFO [RMI TCP Connection(2)-127.0.0.1] 2018-03-31 17:49:05,789 StorageService.java:2292 - Node localhost/192.168.0.101 state jump to shutdown
INFO [RMI TCP Connection(4)-127.0.0.1] 2018-03-31 17:49:49,492 Server.java:176 - Stop listening for CQL clients
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:11,312 StorageService.java:1449 - DRAINING: starting drain process
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:11,313 HintsService.java:220 - Paused hints dispatch
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:11,314 Gossiper.java:1540 - Announcing shutdown
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:11,314 StorageService.java:2292 - Node localhost/192.168.0.101 state jump to shutdown
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:13,315 MessagingService.java:984 - Waiting for messaging service to quiesce
INFO [ACCEPT-localhost/192.168.0.101] 2018-03-31 17:50:13,316 MessagingService.java:1338 - MessagingService has terminated the accept() thread
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:14,764 HintsService.java:220 - Paused hints dispatch
INFO [RMI TCP Connection(6)-127.0.0.1] 2018-03-31 17:50:14,861 StorageService.java:1449 - DRAINED

Summary

In this chapter, we introduced Apache Cassandra and some of its design considerations and components. A high-level description of each was given, as well as how each affects things such as cluster layout and data storage. Additionally, we built our own local, single-node cluster. CCM was also introduced, with minimal discussion. Some basic commands with Cassandra's nodetool were introduced and put to use.

With a single-node cluster running, the cqlsh tool was introduced. We created a keyspace that will work in a multi-data center configuration. The concept of query tables was also introduced, along with some simple read and write operations.

In the next chapter, we will take an in-depth look at Cassandra's underlying architecture, and understand what is key to making good decisions about cluster deployment and data modeling. From there, we'll discuss various aspects to help fine-tune a production cluster and its deployment process. That will bring us to monitoring and application development, and put you well on your way to mastering Cassandra!


Key benefits

  • Write programs more efficiently using Cassandra's features with the help of examples
  • Configure Cassandra and fine-tune its parameters depending on your needs
  • Integrate Cassandra with Apache Spark to build a strong data analytics pipeline

Description

With ever-increasing rates of data creation, the ability to store data quickly and reliably becomes a necessity. Apache Cassandra is the perfect choice for building fault-tolerant and scalable databases. Mastering Apache Cassandra 3.x teaches you how to build and architect your clusters, configure and work with your nodes, and program in a high-throughput environment, helping you understand the power of Cassandra's new features. Once you’ve covered a brief recap of the basics, you’ll move on to deploying and monitoring a production setup and optimizing and integrating it with other software. You’ll work with the advanced features of CQL and the new storage engine in order to understand how they function on the server side. You’ll explore the integration and interaction of Cassandra components, followed by discovering features such as the token allocation algorithm, CQL3, vnodes, lightweight transactions, and data modelling in detail. Last but not least, you will get to grips with Apache Spark. By the end of this book, you’ll be able to analyse big data, and build and manage high-performance databases for your application.

Who is this book for?

Mastering Apache Cassandra 3.x is for you if you are a big data administrator, database administrator, architect, or developer who wants to build a high-performing, scalable, and fault-tolerant database. Prior knowledge of core concepts of databases is required.

What you will learn

  • Write programs more efficiently using Cassandra's features
  • Exploit the given infrastructure, improve performance, and tweak the Java Virtual Machine (JVM)
  • Use CQL3 in your application in order to simplify working with Cassandra
  • Configure Cassandra and fine-tune its parameters depending on your needs
  • Set up a cluster and learn how to scale it
  • Monitor a Cassandra cluster in different ways
  • Use Apache Spark and other big data processing tools

Product Details

Publication date: Oct 31, 2018
Length: 348 pages
Edition: 3rd
Language: English
ISBN-13: 9781789131499
Vendor: Apache



Table of Contents

11 Chapters
  • Quick Start
  • Cassandra Architecture
  • Effective CQL
  • Configuring a Cluster
  • Performance Tuning
  • Managing a Cluster
  • Monitoring
  • Application Development
  • Integration with Apache Spark
  • References
  • Other Books You May Enjoy

Customer reviews

Rating distribution: 5 (2 Ratings)
5 star: 100%
4 star: 0%
3 star: 0%
2 star: 0%
1 star: 0%

Nov 11, 2020 (5 stars): Concepts were covered in real depth, along with metrics, and the Spark integration for Docker was really cool. (Amazon Verified review)

Tasmaniedemon, Nov 30, 2018 (5 stars): Very good book on the subject of C*. (Amazon Verified review)
