Caffe2 Quick Start Guide: Modular and scalable deep learning made easy

eBook: $13.98 (list price $19.99)
Paperback: $26.99
Subscription: free trial, renews at $19.99 p/m

What do you get with Print?

  • Instant access to your digital eBook copy whilst your print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM free - read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning

Caffe2 Quick Start Guide

Introduction and Installation

Welcome to the Caffe2 Quick Start Guide. This book aims to provide you with a quick introduction to the Caffe2 deep learning framework and show how to use it for the training and deployment of deep learning models. It uses code samples to create, train, and run inference on actual deep learning models that solve real problems, so that you can quickly apply the code to your own applications.

This chapter provides a brief introduction to Caffe2 and shows you how to build and install it on your computer. In this chapter, we will cover the following topics:

  • Introduction to deep learning and Caffe2
  • Building and installing Caffe2
  • Testing the Caffe2 Python API
  • Testing the Caffe2 C++ API

Introduction to deep learning

Terms such as artificial intelligence (AI), machine learning (ML), and deep learning (DL) are popular right now. This popularity can be attributed to significant improvements that deep learning techniques have brought about in the last few years in enabling computers to see, hear, read, and create. First and foremost, we'll introduce these three fields and how they intersect:

Figure 1.1: Relationship between deep learning, ML, and AI

AI

Artificial intelligence (AI) is a general term used to refer to the intelligence of computers, specifically their ability to reason, sense, perceive, and respond. It refers to any non-biological system that has intelligence, where that intelligence is the consequence of a set of rules. In AI, it does not matter whether those rules were created manually by a human or learned automatically by a computer from analyzing data. Research into AI started in 1956, and it has been through many ups and a couple of downs, called AI winters, since then.

ML

Machine learning (ML) is a subset of AI that uses statistics, data, and learning algorithms to teach computers to learn from given data. This data, called training data, is specific to the problem being solved, and contains examples of input and the expected output for each input. ML algorithms learn models or representations automatically from training data, and these models can be used to obtain predictions for new input data.

There are many popular types of models in ML, including artificial neural networks (ANNs), Bayesian networks, support vector machines (SVMs), and random forests. The ML model of interest to us in this book is the ANN. The structure of ANNs is inspired by the connections in the brain. These neural network models were initially popular in ML, but later fell out of favor because they required enormous computing power that was not available at that time.

Deep learning

Over the last decade, utilization of the parallel processing capability of graphics processing units (GPUs) to solve general computation problems became popular. This type of computation came to be known as general-purpose computing on GPU (GPGPU). GPUs were quite affordable and were easy to use as accelerators by using GPGPU programming models and APIs such as Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL). Starting in 2012, neural network researchers harnessed GPUs to train neural networks with a large number of layers and started to generate breakthroughs in solving computer vision, speech recognition, and other problems. The use of such deep neural networks with a large number of layers of neurons gave rise to the term deep learning. Deep learning algorithms form a subset of ML and use multiple layers of abstraction to learn and parameterize multi-layer neural network models of data.

Introduction to Caffe2

The popularity and success of deep learning have been motivated by the creation of many popular and open source deep learning frameworks that can be used for training and inference of neural networks. Caffe was one of the first popular deep learning frameworks. It was created by Yangqing Jia at UC Berkeley for his PhD thesis and released to the public at the end of 2013. It was primarily written in C++ and provided a C++ API. Caffe also provided a rudimentary Python API wrapped around the C++ API. The Caffe framework created networks using layers. Users created a network by listing and describing its layers in a text file commonly referred to as a prototxt.

Following the popularity of Caffe, universities, corporations, and individuals created and launched many deep learning frameworks. Some of the popular ones today are Caffe2, TensorFlow, MXNet, and PyTorch. TensorFlow is driven by Google, MXNet has the support of Amazon, and PyTorch was primarily developed by Facebook.

Caffe's creator, Yangqing Jia, moved to Facebook, where he created a follow-up to Caffe called Caffe2. Compared to the other deep learning frameworks, Caffe2 was designed to focus on scalability, high performance, and portability. Written in C++, it has both a C++ API and a Python API.

Caffe2 and PyTorch

Caffe2 and PyTorch are both popular DL frameworks, maintained and driven by Facebook. PyTorch originates from the Torch DL framework. It is characterized by a Python API that makes it easy to design different network structures and to experiment with training parameters and regimens on them. While PyTorch can be used for inference in production applications in the cloud and at the edge, it is not as efficient at this.

Caffe2 has a Python API and a C++ API. It is designed for practitioners who tinker with existing network structures, take pre-trained models from PyTorch, Caffe, and other DL frameworks, and ready them for deployment inside applications, on local workstations, on low-power devices at the edge, on mobile devices, and in the cloud.

Having observed the complementary features of PyTorch and Caffe2, Facebook plans to merge the two projects. As we will see later, the Caffe2 source code is already organized as a subdirectory under the PyTorch Git repository. In the future, expect more intermingling of these two projects, with the final goal of fusing them into a single DL framework that is easy to experiment and tinker with, efficient to train and deploy, and able to scale from the cloud to the edge and from general-purpose processors to special-purpose accelerators.

Hardware requirements

Working with deep learning models, especially training, requires a lot of computing power. While you could train a popular neural network on a CPU, it would typically take many hours or days, depending on the complexity of the network. Using GPUs for training is highly recommended, since they typically reduce the training time by an order of magnitude or more compared to CPUs. Caffe2 uses CUDA, an API that enables developers to use the parallel computation capabilities of NVIDIA GPUs, so you will need an NVIDIA GPU. You can either install an NVIDIA GPU in your local computer or use a cloud service provider, such as Amazon AWS, that offers instances with NVIDIA GPUs. Please take note of the running costs of such cloud instances before you use them for extended periods of training.

Once you have trained a model using Caffe2, you can use CPUs, GPUs, or many other processors for inference. We will explore a few such options in Chapter 6, Deploying Models to Accelerators for Inference, and Chapter 7, Caffe2 at the Edge and in the cloud, later in the book.

Software requirements

A major portion of deep learning research and development is currently taking place on Linux computers. Ubuntu is a distribution of Linux that happens to be very popular for deep learning research and development. We will be using Ubuntu as the operating system of choice in this book. If you are using a different flavor of Linux, you should be able to search online for commands similar to Ubuntu commands for most of the operations described here. If you use Windows or macOS, you will need to replace the Linux commands in this book with equivalent commands. All the code samples should work on Linux, Windows, and macOS with zero or minimal changes.

Building and installing Caffe2

Caffe2 can be built and installed from source code quite easily. Installing Caffe2 from source gives us more flexibility and control over our application setup. The build and install process has four stages:

  1. Installing dependencies
  2. Installing acceleration libraries
  3. Building Caffe2
  4. Installing Caffe2

Installing dependencies

We first need to install packages that Caffe2 is dependent on, as well as the tools and libraries required to build it.

  1. First, obtain information about the newest versions of Ubuntu packages by querying their online repositories using the apt-get tool:
$ sudo apt-get update
  2. Next, using the apt-get tool, install the libraries that are required to build Caffe2, and that Caffe2 requires for its operation:
$ sudo apt-get install -y --no-install-recommends \
build-essential \
cmake \
git \
libgflags2 \
libgoogle-glog-dev \
libgtest-dev \
libiomp-dev \
libleveldb-dev \
liblmdb-dev \
libopencv-dev \
libopenmpi-dev \
libsnappy-dev \
libprotobuf-dev \
openmpi-bin \
openmpi-doc \
protobuf-compiler \
python-dev \
python-pip

These packages include tools required to download Caffe2 source code (Git) and to build Caffe2 (build-essential, cmake, and python-dev). The rest are libraries that Caffe2 is dependent on, including Google Flags (libgflags2), Google Log (libgoogle-glog-dev), Google Test (libgtest-dev), LevelDB (libleveldb-dev), LMDB (liblmdb-dev), OpenCV (libopencv-dev), OpenMP (libiomp-dev), OpenMPI (openmpi-bin and openmpi-doc), Protobuf (libprotobuf-dev and protobuf-compiler), and Snappy (libsnappy-dev).

  3. Finally, install the Python Pip tool and use it to install other Python libraries, such as NumPy and the Protobuf Python API, which are useful when working with Caffe2 from Python:
$ sudo apt-get install -y --no-install-recommends python-pip                   

$ pip install --user \
future \
numpy \
protobuf
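
If you would like to confirm that these Python packages installed correctly, a quick sanity check is to import them. This assumes that the python on your PATH is the same interpreter that pip installed into:

$ python -c "import numpy, future, google.protobuf; print(numpy.__version__)"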

Installing acceleration libraries

Training DL networks with Caffe2 and using them for inference involves a lot of mathematical computation. Acceleration libraries of math routines and deep learning primitives speed up these training and inference tasks. Vendors of CPUs and GPUs typically offer such libraries, and Caffe2 can use them if they are available on your system.

Intel Math Kernel Library (MKL) is key to faster training and inference on Intel CPUs. This library is free for personal and community use, and can be downloaded by registering here: https://software.seek.intel.com/performance-libraries. Installation involves uncompressing the downloaded package and running the install.sh installer script as a superuser. The library files are installed by default to the /opt/intel directory. The Caffe2 build step, described in the next section, finds and uses the BLAS and LAPACK routines of MKL automatically if MKL was installed in the default directory.

CUDA and CUDA Deep Neural Network (cuDNN) libraries are essential for faster training and inference on NVIDIA GPUs. CUDA is free to download after registering here: https://developer.nvidia.com/cuda-downloads. cuDNN can be downloaded from here: https://developer.nvidia.com/cudnn. Note that you need to have a modern NVIDIA GPU and an NVIDIA GPU driver already installed. As an alternative to the GPU driver, you could use the driver that is installed along with CUDA. Files of the CUDA and cuDNN libraries are typically installed in the /usr/local/cuda directory on Linux. The Caffe2 build step, described in the next section, finds and uses CUDA and cuDNN automatically if installed in the default directory.
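
Before building Caffe2, you may want to confirm that the GPU driver and the CUDA toolkit are visible. A quick check is shown below; it assumes nvcc is on your PATH, which typically means adding /usr/local/cuda/bin to it:

$ nvidia-smi
$ nvcc --version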

Building Caffe2

Using Git, we can clone the Git repository containing Caffe2 source code and all the submodules it requires:

$ git clone --recursive https://github.com/pytorch/pytorch.git && cd pytorch

$ git submodule update --init

Notice how the Caffe2 source code now exists in a subdirectory inside the PyTorch source repository. This is because of Facebook's cohabitation plan for these two popular DL frameworks as it endeavors to merge the best features of both frameworks over a period of time.

Caffe2 uses CMake as its build system. CMake enables Caffe2 to be easily built for a wide variety of compilers and operating systems.

To build Caffe2 source code using CMake, we first create a build directory and invoke CMake from within it:

$ mkdir build
$ cd build

$ cmake ..

CMake checks the available compilers, operating system, libraries, and packages, and figures out which Caffe2 features to enable and which compilation options to use. These options are listed in the CMakeLists.txt file in the root directory of the repository, in the form option(USE_FOOBAR "Use Foobar library" OFF). You can enable or disable an option by setting it to ON or OFF in CMakeLists.txt.

These options can also be configured when invoking CMake. For example, if your Intel CPU supports AVX/AVX2/FMA and you wish to use those features to speed up Caffe2 operations, enable the USE_NATIVE_ARCH option as follows:

$ cmake -DUSE_NATIVE_ARCH=ON ..
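
If you want to see the full list of available options and their current values without opening CMakeLists.txt, you can ask CMake to print its cached variables along with their help strings. Note that this re-runs the configuration step:

$ cmake -LH ..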

Installing Caffe2

At the end of its run, CMake produces a Makefile. We can build Caffe2 and install it on our system using the following command:

$ sudo make install

This step involves building a large number of CUDA files and can be very slow. It is recommended to use make's parallel execution feature to employ all of your CPU cores for a faster build. We can do this by using the following command:

$ sudo make -j install
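
Note that -j without a number lets make start as many parallel jobs as it can, which may exhaust memory on large builds. If that happens, you can cap the number of jobs at the number of CPU cores, for example:

$ sudo make -j"$(nproc)" install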

Using the make install method to build and install makes it difficult to update or uninstall Caffe2 later. Instead, I prefer to create a Debian package of Caffe2 and install that; it can then be updated or uninstalled conveniently. We can do this using the checkinstall tool.

To install checkinstall, and to use it to build and install Caffe2, use the following commands:

$ sudo apt-get install checkinstall
$ sudo checkinstall --pkgname caffe2

This command also produces a Debian .deb package file that you can use to install Caffe2 on other computers or share with others. For example, on my computer, this command produced a file named caffe2_20181207-1_amd64.deb.
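
On another computer, the package can then be installed with dpkg; the filename below is the one from my example and will differ on your system:

$ sudo dpkg -i caffe2_20181207-1_amd64.deb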

If you need a faster build, use the parallel execution feature of make along with checkinstall:

$ sudo checkinstall --pkgname caffe2 make -j install

If you need to uninstall Caffe2 in the future, you can now do that easily using the following command:

$ sudo dpkg -r caffe2

Testing the Caffe2 Python API

We have now installed Caffe2, but we need to make sure it is correctly installed and that its Python API is working. An easy way to do that is to return to your home directory and check that the Caffe2 Python module can be imported and executes correctly. This can be done using the following commands:

$ cd ~
$ python -c "from caffe2.python import core"

Do not run the preceding command from within the Caffe2 directories. This avoids the ambiguity of Python having to choose between your installed Caffe2 files and those in the source or build directories.

If Caffe2 is not installed correctly, you may see an error of some kind, such as the one shown in the following example:

$ python -c "from caffe2.python import core"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named caffe2.python

If Caffe2 has been installed correctly, then you will not see an error. However, you may still see a warning if you do not have a GPU:

$ python -c "from caffe2.python import core"
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
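
Beyond this import check, you can also run a small smoke test that builds and executes a one-operator network using the standard core and workspace modules. The following is a minimal sketch; the blob and net names are arbitrary, and it assumes NumPy, which we installed earlier with pip, is available:

$ python << 'EOF'
from caffe2.python import core, workspace
import numpy as np

# Feed a small random tensor into the workspace
workspace.FeedBlob("data", np.random.rand(2, 3).astype(np.float32))

# Build a one-operator network that applies ReLU to it
net = core.Net("smoke_test")
net.Relu(["data"], ["relu_out"])

# Run the network once and fetch the result
workspace.RunNetOnce(net)
print(workspace.FetchBlob("relu_out"))
EOF

If this prints a 2 x 3 array without raising an error, the Python API is working.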

Testing the Caffe2 C++ API

We have now installed Caffe2, but we need to make sure it is correctly installed and that its C++ API is working. An easy way to do that is to create a small C++ program that initializes the global environment of Caffe2. This is done by calling a function named GlobalInit and passing it the program's arguments. This is typically the first call in a Caffe2 C++ application.

Create a C++ source file named ch1.cpp with this code:

// ch1.cpp
#include "caffe2/core/init.h"

int main(int argc, char** argv)
{
    caffe2::GlobalInit(&argc, &argv);
    return 0;
}

We can compile this C++ source file using the following command:

$ g++ ch1.cpp -lcaffe2

We ask the linker to link with the libcaffe2.so shared library file by using the -lcaffe2 option. The compiler uses the default include file locations, and the linker uses the default shared library file locations, so we do not need to specify those.

By default, Caffe2 header files are installed to a caffe2 subdirectory in /usr/local/include. This location is usually automatically included in a C++ compilation. Similarly, the Caffe2 shared library files are installed to /usr/local/lib by default. If you installed Caffe2 to a different location, you would need to specify the include directory location using the -I option and the shared library file location using the -L option.
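
For example, if Caffe2 were installed under a hypothetical prefix such as /opt/caffe2 (this path is only an illustration; substitute your actual install location), the compile command might look like this, and at run time you would also point the dynamic linker at the library directory:

$ g++ ch1.cpp -I/opt/caffe2/include -L/opt/caffe2/lib -lcaffe2
$ LD_LIBRARY_PATH=/opt/caffe2/lib ./a.out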

We can now execute the compiled binary:

$ ./a.out

If it executes successfully, then your Caffe2 installation is fine. You are now ready to write Caffe2 C++ applications.

Summary

Congratulations! This chapter provided a brief introduction to deep learning and Caffe2. We examined the process of building and installing Caffe2 on our system. We are now ready to explore the world of deep learning by building our own networks, training our own models, and using them for inference on real-world problems.

In the next chapter, we will learn about Caffe2 operators and learn how to compose them to build simple computation graphs. We will then proceed to build a neural network that can recognize handwritten digits.

Key benefits

  • Migrate models trained with other deep learning frameworks to Caffe2
  • Integrate Caffe2 with Android or iOS, and implement deep learning models for mobile devices
  • Leverage the distributed capabilities of Caffe2 to build models that scale easily

Description

Caffe2 is a popular deep learning library used for fast and scalable training and inference of deep learning models on different platforms. This book introduces you to the Caffe2 framework and demonstrates how you can leverage its power to build, train, and deploy efficient neural network models at scale. The Caffe2 Quick Start Guide will help you install Caffe2, compose networks using its operators, train models, and deploy models to different architectures. The book will also guide you on how to import models from Caffe and other frameworks using the ONNX interchange format. You will then cover deep learning accelerators such as the CPU and GPU and learn how to deploy Caffe2 models for inference on accelerators using inference engines. Finally, you'll understand how to deploy Caffe2 to a diverse set of hardware, using containers in the cloud and resource-constrained hardware such as the Raspberry Pi. By the end of this book, you will not only be able to compose and train popular neural network models with Caffe2, but also deploy them on accelerators, to the cloud, and on resource-constrained platforms such as mobile and embedded hardware.

Who is this book for?

Data scientists and machine learning engineers who wish to create fast and scalable deep learning models in Caffe2 will find this book very useful. Some understanding of basic machine learning concepts and prior exposure to programming languages such as C++ and Python will be useful.

What you will learn

  • Build and install Caffe2
  • Compose neural networks
  • Import deep learning models from other frameworks
  • Train neural networks on a CPU or GPU
  • Deploy models at the edge and in the cloud
  • Import a neural network from Caffe
  • Deploy models on CPU or GPU accelerators using inference engines

Product Details

Publication date: May 31, 2019
Length: 136 pages
Edition: 1st
Language: English
ISBN-13: 9781789137750

Table of Contents

8 Chapters

  1. Introduction and Installation
  2. Composing Networks
  3. Training Networks
  4. Working with Caffe
  5. Working with Other Frameworks
  6. Deploying Models to Accelerators for Inference
  7. Caffe2 at the Edge and in the cloud
  8. Other Books You May Enjoy

Customer reviews

Rating distribution: 5.0 out of 5 (2 ratings)
5 star: 100%, 4 star: 0%, 3 star: 0%, 2 star: 0%, 1 star: 0%

Merrill (Aug 27, 2019), 5 stars, Amazon verified review:
Well written, highly informational.

Shruti (Aug 27, 2019), 5 stars, Amazon verified review:
Great book for a beginner like me :)