Deployment with Docker

Deployment with Docker: Apply continuous integration models, deploy applications quicker, and scale at large by putting Docker to work



Containers - Not Just Another Buzzword

In technology, the jumps in progress are sometimes small, but in the case of containerization, the jump has been massive, turning long-held practices and teachings completely upside down. With this book, we will take you from running a tiny service to building elastically scalable systems using containerization with Docker, the cornerstone of this revolution. We will ramp up steadily but consistently through the basic building blocks, with a focus on the inner workings of Docker, and, as we continue, we will spend the majority of our time in the world of complex deployments and their considerations.

Let’s take a look at what we will cover in this chapter:

  • What are containers and why do we need them?

  • Docker’s place in the container world

  • Thinking with a container mindset

The what and why of containers

We can’t start talking about Docker without actually covering the ideas that make it such a powerful tool. A container, at the most basic level, is an isolated user-space environment for a given discrete set of functionality. In other words, it is a way to modularize a system (or a part of one) into pieces that are much easier to manage and maintain while often also being very resilient to failures.

In practice, this net gain is never free and requires some investment in the adoption and implementation of new tooling (such as Docker), but the change pays heavy dividends to its adopters through a drastic reduction in development, maintenance, and scaling costs over the system's lifetime.

At this point, you might ask this: how exactly are containers able to provide such huge benefits? To understand this, we first need to take a look at deployments before such tooling was available.

In the early days, the process for deploying a service would go something like this:

  1. A developer would write some code.
  2. Operations would deploy that code.
  3. If there were any problems in the deployment, the operations team would tell the developer to fix something, and we would go back to step 1.

A simplification of this process would look something like this:

dev machine => code => ops => bare-metal hosts

The developer would have to wait for the whole process to bounce back before they could try to write a fix anytime there was a problem. What is even worse, operations groups would often have to use various arcane forms of magic to ensure that the code developers gave them could actually run on the deployment machines, as differences in library versions, OS patches, and language compilers/interpreters all carried a high risk of failure and could burn huge amounts of time in this long cycle of break-patch-deploy attempts.

The next step in the evolution of deployments improved this workflow by virtualizing the bare-metal hosts, since manually maintaining a heterogeneous mix of machines and environments is a complete nightmare even when they number in the single digits. Early tools such as chroot came out in the late '70s but were later replaced (though not fully) by hypervisors such as Xen, KVM, Hyper-V, and a few others, which not only reduced the management complexity of larger systems but also provided both Ops and developers with a deployment environment that was more consistent as well as more computationally dense:

dev machine => code => ops => n hosts * VM deployments per host

This helped reduce failures at the end of the pipeline, but the path from the developer to the deployment was still a risk, as the VM environments could very easily drift out of sync with the developers' machines.

From here, if we really try to figure out how to make this system better, we can already see how Docker and other container technologies are the organic next step. By making the developer's sandbox environment as close as we can get to the one in production, a developer with an adequately functional container system can practically bypass the ops step, be sure that the code will work in the deployment environment, and prevent any lengthy rewrite cycles caused by the overhead of interactions between multiple groups:

dev machine => container => n hosts * container deployments per host

With Ops now needed primarily in the early stages of system setup, developers can be empowered to take their code from idea all the way to the user, confident that the majority of issues they find will be ones they can fix themselves.

If you consider this the new model of deploying services, it becomes very reasonable to understand why we have DevOps roles nowadays, why there is such a buzz around Platform as a Service (PaaS) setups, and how so many tech giants can apply a change to a service used by millions within 15 minutes through something as simple as a developer running git push origin, without any other interaction with the system.

But the benefits don't stop there either! If you have many little containers everywhere, then as demand for a service increases or decreases, you can add or remove a portion of your host machines, and if the container orchestration is done properly, there will be zero downtime and zero user-noticeable change as you scale. This comes in extremely handy for providers of services that need to handle variable loads at different times; think of Netflix and their peak viewership hours as an example. In most cases, this can also be automated on almost all cloud platforms (for example, AWS Auto Scaling Groups, Google Cluster Autoscaler, and Azure Autoscale) so that, when certain triggers fire or resource consumption changes, the service will automatically scale the number of hosts up or down to handle the load. By automating all these processes, your PaaS can pretty much be a fire-and-forget flexible layer, on top of which developers can worry about the things that really matter instead of wasting time on questions such as whether some system library is installed on the deployment hosts.

Now don't get me wrong; making one of these amazing PaaS services is not an easy task by any stretch of the imagination, and the road is covered in countless hidden traps. But if you want to be able to sleep soundly through the night without phone calls from angry customers, bosses, or coworkers, you must strive to get as close as you can to these ideal setups, regardless of whether you are a developer or not.

Docker's place

So far, we have talked a lot about containers but haven't mentioned Docker yet. While Docker has been emerging as the de facto standard in containerization, it is currently one of many competing technologies in this space, and what is relevant today may not be tomorrow. For this reason, we will cover a little bit of the container ecosystem so that, if you see shifts occurring in this space, you won't hesitate to try another solution; picking the right tool for the job almost always beats trying to, as the saying goes, fit a square peg in a round hole.

While most people know Docker as the Command-line Interface (CLI) tool, the Docker platform extends above and beyond that to include tooling that creates and manages clusters, handles persistent storage, and builds and shares Docker containers, among many other things. For now, though, we will focus on the most important part of that ecosystem: the Docker container.

Introduction to Docker containers

Docker containers are, in essence, a grouping of filesystem layers stacked on top of each other in sequence to create a final layout, which is then run in an isolated environment by the host machine's kernel. Each layer describes which files have been added, modified, and/or deleted relative to its parent layer. For example, say you have a base layer with a file /foo/bar, and the next layer adds a file /foo/baz. When the container starts, it combines the layers in order, and the resulting container has both /foo/bar and /foo/baz. This process is repeated for each new layer, ending with a fully composed filesystem on which to run the specified service or services.
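
As a minimal sketch of this mechanism (the image tag and file names here are just illustrative), each instruction in a Dockerfile produces one such layer:

    # Base OS layers from a small public image
    FROM alpine:3.6
    # Layer 1: adds /foo/bar relative to the base
    RUN mkdir -p /foo && touch /foo/bar
    # Layer 2: adds /foo/baz relative to layer 1
    RUN touch /foo/baz

Building and running this shows the combined result of all layers:

    $ docker build -t layer-demo .
    $ docker run --rm layer-demo ls /foo
    bar
    baz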

Think of the arrangement of the filesystem layers in an image as the intricate layering of sounds in a symphony: you have the percussion instruments in the back to provide the base of the sound, wind instruments a bit closer to drive the movements, and, in the front, the string instruments with the lead melody. Together, they create a pleasing end result. In the case of Docker, the base layers generally set up the main OS and its configuration, the service infrastructure layers (interpreter installation, compilation of helpers, and so on) go on top of that, and the image you finally run is topped with the actual service code. For now, this is all you will need to know, but we will cover this topic in much more detail in the next chapter.
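
Mapped onto a concrete (purely hypothetical) Dockerfile, with app.py standing in for the service code, the analogy reads bottom to top like this:

    # Percussion: the base OS layers and configuration
    FROM ubuntu:16.04
    # Winds: service infrastructure (interpreter installation)
    RUN apt-get update && apt-get install -y python3
    # Strings: the actual service code tops the image
    COPY app.py /srv/app.py
    CMD ["python3", "/srv/app.py"]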

In essence, Docker in its current form is a platform that allows easy and fast development of isolated (or not, depending on how the service is configured) Linux and Windows services within containers that are scalable, easily interchangeable, and easily distributable.

The competition

Before we get too deep into Docker itself, let us also cover some of its current competitors in broad strokes and see how they differ from Docker. The curious thing about almost all of them is that they are generally a form of abstraction around Linux control groups (cgroups) and namespaces, which respectively limit the use of the Linux host's physical resources and isolate groups of processes from each other. While almost every tool mentioned here provides some sort of containerization of resources, they can differ greatly in the depth of isolation, the security of the implementation, and/or the container distribution model.
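
If you are curious what these primitives look like without any containerization tooling on top, here is a rough sketch using the util-linux unshare tool and the legacy (pre-cgroups-v2) cgroup filesystem; exact paths and availability vary by distribution, so treat this as illustrative only:

    # New PID and mount namespaces: inside, ps sees only this shell's tree
    $ sudo unshare --fork --pid --mount-proc bash
    # Limit the shell's memory via a throwaway cgroup (~100 MB)
    $ mkdir /sys/fs/cgroup/memory/demo
    $ echo 100000000 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    $ echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs

Docker, rkt, and the rest essentially automate exactly this kind of setup, plus image handling, for you.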

rkt

rkt, often written as Rocket, is the closest competing application containerization platform, developed by CoreOS and started as a more secure application container runtime. Over time, Docker has closed a number of its security failings, but unlike rkt, which runs with limited privileges as a user service, Docker's main service runs as root. This means that if someone manages to break out of a Docker container, they will automatically have full root access to the host, which is obviously a really bad thing from an operations perspective, whereas with rkt, the hacker would also need to escalate their privileges from the limited user. While this comparison doesn't paint Docker in a great light from a security standpoint, if its development trajectory is extrapolated, it is possible and even likely that this issue will be heavily mitigated and/or fixed in the future.

Another interesting difference is that, unlike Docker, which is designed to run a single process within the container, rkt can run multiple processes within one container, making it much easier to deploy multiple services in a single container. Now, having said that, you actually can run multiple processes within a Docker container (we will cover this at a later point in the book), but it is a great pain to set up properly. In practice, though, I have found that the pressure to keep services and containers based on a single process really pushes developers to create containers as true microservices instead of treating them as mini VMs, so don't necessarily consider this a problem.

While there are many other, smaller reasons to choose Docker over rkt and vice versa, one massive thing cannot be ignored: the rate of adoption. While rkt is the younger of the two, Docker has been adopted by almost all the big tech giants, and there doesn't seem to be any sign of that trend stopping. With this in mind, if you need to work on microservices today, the choice is probably very clear, but as with any tech field, the ecosystem may look very different in a year or even just a couple of months.

System-level virtualization

On the opposite side, we have platforms such as LXD, OpenVZ, KVM, and a few others for working with full system images instead of applications. Unlike Docker and rkt, they are designed to provide you with full support for all of the virtualized system's services, but by definition at the cost of much higher resource usage. While having separate system containers on a host is needed for things like better security, isolation, and possibly compatibility, in my experience almost all use of these containers can be moved to an application-level virtualization system with a bit of work, yielding a better resource-use profile and higher modularity at a slight increase in the cost of creating the initial infrastructure. A sensible rule to follow here is that if you are writing applications and services, you should probably use application-level virtualization, but if you are providing VMs to end users or want much stronger isolation between services, you should use system-level virtualization.

Desktop application-level virtualization

Flatpak, AppImage, Snaps, and other similar technologies also provide isolation and packaging for single-application containers, but unlike Docker, they all target the deployment of desktop applications, do not have as precise a control over the container life cycle (that is, starting, stopping, forced termination, and so on), and generally do not provide layered images. Instead, most of these tools have nice wrapper Graphical User Interfaces (GUIs) and provide a significantly better workflow for installing, running, and updating desktop applications. While most overlap heavily with Docker due to the same underlying reliance on the cgroups and namespaces mentioned earlier, these application-level virtualization platforms do not traditionally handle server applications (applications that run without UI components), and vice versa. Since this field is still young and the space they all cover is relatively small, you can probably expect consolidations and crossovers, in which case either Docker will enter the desktop application delivery space and/or one or more of these competing technologies will try to support server applications.

When should containerization be considered?

We've covered a lot of ground so far, but there is one extremely important aspect left to evaluate: containers do not make sense as the end deployment target in a large array of circumstances, regardless of how much buzz there is around the concept, so we will cover some general cases where this type of platform should really be considered (or not). While containerization should be the end goal in most cases from an operations perspective, and offers huge dividends with minimal effort when injected into the development process, turning deployment machines into a containerized platform is a pretty tricky process, and if you will not gain tangible benefits from it, you might as well dedicate that time to something that will bring real and tangible value to your services.

Let's start by covering scaling thresholds. If your services as a whole can completely fit and run well on a relatively small or medium virtual machine or bare-metal host, and you don't anticipate sudden scaling needs, containerizing the deployment machines will lead you down a path of pain that really isn't warranted in most cases. The high front-loaded cost of setting up even a basic but robust containerized setup is usually better spent on developing service features at that scale.

If you see increases in demand for a service backed by a VM or bare-metal host, you can always scale up to a larger host (vertical scaling) and refocus your team, but for anything short of that, you probably shouldn't go down that route. There have been many cases of businesses spending months trying to implement container technology because it is so popular, only to lose their customers due to a lack of development resources and have to shut their doors.

Now that your system is maxing out the limits of vertical scalability, is it a good time to add things such as Docker clusters to the mix? The real answer is "maybe". If your services are homogeneous and consistent across hosts, such as sharded or clustered databases or simple APIs, in most cases this still isn't the right time, as you can scale such a system easily with host images and some sort of load balancer. If you're opting for a bit more fanciness, you can use a cloud-based Database as a Service (DBaaS) such as Amazon RDS, Microsoft DocumentDB, or Google BigQuery, and auto-scale the service hosts up or down through the same provider (or even a different one) based on the required level of performance.

If there is ample foreshadowing of service variety beyond this, of the need for a much shorter pipeline from developer to deployment, of rising complexity, or of exponential growth, you should treat each of these as a trigger to re-evaluate your pros and cons, but there is no threshold that makes for a clear cut-off. A good rule of thumb here, though, is that if your team has a slow period, it won't hurt to explore containerization options or gear up your skills in this space, but be very careful not to underestimate the time it takes to properly set up such a platform, regardless of how easy the Getting Started instructions for many of these tools make it look.

With all this in mind, what are the clear signs that you need to get containers into your workflow as soon as you can? There can be many subtle hints, but the following list covers the ones that should immediately bring the containers topic up for discussion if the answer to any of them is yes, as the benefits greatly outweigh the time invested in your service platform:

  • Do you have more than 10 unique, discrete, and interconnected services in your deployment?
  • Do you have three or more programming languages you need to support on the hosts?
  • Are your ops resources constantly deploying and upgrading services?
  • Do any of your services require "four 9s" (99.99%) or better availability?
  • Do you have a recurring pattern of services breaking in deployments because developers are not considerate of the environment that the services will run in?
  • Do you have a talented Dev or Ops team that's sitting idle?
  • Does your project have a burning hole in the wallet?

Okay, maybe the last one is a bit of a joke, but it is in the list to illustrate, in a somewhat sarcastic tone, that at the time of writing, getting a PaaS platform operational, stable, and secure is neither easy nor cheap, regardless of whether your currency is time or money. Many will try to sell you on the idea that you should always use containers and Dockerize everything, but keep a skeptical mindset and make sure you evaluate your options with care.

The ideal Docker deployment

Now that we have the real talk out of the way, let us say that we are truly ready to tackle containers and Docker for an imaginary service. We covered bits and pieces of this earlier in the chapter, but here we will concretely define what our ideal requirements would look like if we had ample time to work on them:

  • Developers should be able to deploy a new service without any need for ops resources
  • The system can auto-discover new instances of services running
  • The system is flexibly scalable both up and down
  • On desired code commits, the new code will automatically get deployed without Dev or Ops intervention
  • You can seamlessly handle degraded nodes and services without interruption
  • You are capable of using the full extent of the resources available on hosts (RAM, CPUs, and so on)
  • Nodes should almost never need to be accessed individually by developers

If these are the requirements, you will be happy to know that almost all of them are feasible to a large extent, and we will cover almost all of them in detail in this book. For many of them, we will need to get into Docker far deeper than most of the material you will find elsewhere, but there is no point in teaching you deployments that only print "Hello World" and that you cannot take into the field.

As we explore each topic in the following chapters, we will be sure to cover any pitfalls, as these complex systems have many such interactions. Some will be obvious to you, but many probably will not (for example, the PID 1 issue), as the tooling in this space is relatively young and many tools critical to the Docker ecosystem are not even at version 1.0, or have reached version 1.0 only recently.

Thus, you should consider this technology space to still be in its early stages of development, so be realistic, don't expect miracles, and expect a healthy dose of little gotchas. Also keep in mind that some of the biggest tech giants (Red Hat, Microsoft, Google, IBM, and so on) have been using Docker for a long time now, so don't get scared either.

To get started and really begin our journey, we need to first reconsider the way we think about services.

The container mindset

As we somewhat covered earlier in the chapter, the vast majority of services deployed today are a big mess of ad hoc, manually connected and configured pieces that tend to break apart as soon as a single piece is changed or moved. It is easy to imagine this as a tower of cards, where the piece that needs changing is often in the middle, risking bringing the whole structure down. Small-to-medium projects with talented Dev and Ops teams can mostly manage this level of complexity, but it is really not a scalable methodology.

The developer workflow

Even if you're not working on a PaaS system, it is good to consider each piece of a service as something that should have a consistent environment between the developer's machine and the final deployment hosts, be able to run anywhere with minimal changes, and be modular enough to be swapped out for an API-compatible analogue if needed. For many of these cases, even local Docker usage can go far in making deployments easier, as you can isolate each component into small pieces that don't change as your development environment does.

To illustrate this, imagine a practical case in which we are writing a simple web service that talks to a database, developed on a system based on the latest Ubuntu, while our deployment environment is some iteration of CentOS. In this case, because of their vastly different support-cycle lengths, coordinating versions and libraries will be extremely difficult, so as a developer, you can use Docker to provide the same version of the database that CentOS would have, and test your service in a CentOS-based container to ensure that all the libraries and dependencies will work when it gets deployed. This process improves the development workflow even if the real deployment hosts have no containerization.
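
A rough sketch of what this might look like on the developer's machine, assuming for illustration a PostgreSQL-backed service and a run_tests.sh script of your own (both hypothetical here):

    # Run the same database version the deployment OS would provide
    $ docker run -d --name dev-db postgres:9.2
    # Exercise the service inside a CentOS-based container
    $ docker run --rm -v "$PWD":/src -w /src centos:7 ./run_tests.sh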

Now let's take this example in a slightly more realistic direction: what if you need to run your service, without code modifications, on all currently supported versions of CentOS?

With Docker, you can have a container for each version of the OS to test the service against, ensuring that you won't get any surprises. For additional points, you can automate a test-suite runner to launch each of the OS-version containers one by one (or, even better, in parallel) and run your whole test suite against them automatically on any code change. With just these few small tweaks, we have taken an ad hoc service that would constantly break in production and turned it into something you almost never have to worry about, as you can be confident it will work when deployed, which is really powerful tooling to have.
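
A minimal sketch of such a runner in shell, with the version list and test script as placeholders:

    # Run the whole test suite against every supported OS version in parallel
    for version in 6 7; do
        docker run --rm -v "$PWD":/src -w /src \
            centos:"$version" ./run_tests.sh &
    done
    wait  # block until all the runs finish; failures surface in the logs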

If you extend this process, you can locally create Docker recipes (Dockerfiles), which we will get into in detail in the next chapter, with the exact set of steps needed to take a vanilla CentOS installation to one fully capable of running the service. These steps can be taken, with minimal changes, by the Ops team as input to their automated configuration management (CM) system, such as Ansible, Salt, Puppet, or Chef, to ensure that the hosts have the exact baseline required for things to run properly. This codified transfer of the exact steps needed on the end target, written by the service developer, is exactly why Docker is such a powerful tool.
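
Sketched as such a recipe (the package names and paths below are hypothetical), every instruction doubles as a documented step the Ops team can mirror in their CM tooling:

    # Start from a vanilla CentOS installation
    FROM centos:7
    # Exact baseline steps the hosts need, codified by the developer
    RUN yum install -y epel-release && yum install -y python34
    COPY service/ /opt/service/
    CMD ["/opt/service/run.sh"]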

As is hopefully becoming apparent, Docker as a tool not only improves your development process when it reaches the deployment machines, but can also be used throughout the pipeline to standardize your environments and thus increase the efficiency of almost every part of the deployment process. With Docker, you will most likely forget the infamous phrase that instills dread in every Ops person: "it works fine on my machine!". This, by itself, should be enough to make you consider splicing in container-based workflows even if your deployment infrastructure doesn't support containers.

The bottom line, which we've been dancing around and which you should always keep in mind, is that with the current tooling available, turning your whole deployment infrastructure into a container-based one is rather difficult, but adding containers to any other part of your development process is generally not too difficult and can provide exponential workflow improvements for your team.

Summary

In this chapter, we followed the history of deployments and looked at how containers with Docker bring us closer to the new world of microservices. We examined Docker at a high level, focusing on the parts we are most interested in. We covered the competition and where Docker fits into the ecosystem, along with some use cases. Lastly, we also covered when you should, and more importantly when you shouldn't, consider containers in your infrastructure and development workflow.

In the next chapter, we will finally get our hands dirty and look into how to install and run Docker images along with creating our first Docker image, so be sure to stick around.


Key benefits

  • Use Docker containers, horizontal node scaling, modern orchestration tools (Docker Swarm, Kubernetes, and Mesos), and Continuous Integration/Continuous Delivery to manage your infrastructure
  • Increase service density by turning often-idle machines into hosts for numerous Docker services
  • Learn what it takes to build a true container infrastructure that is scalable, reliable, and resilient in the face of the increased complexity of container infrastructures
  • Find out how to identify, debug, and mitigate most real-world, undocumented issues when deploying your own Docker infrastructure
  • Learn tips and tricks of the trade from existing Docker infrastructures running in production environments

Description

Deploying Docker into production is considered one of the major pain points in developing large-scale infrastructures, and the documentation available online leaves a lot to be desired. With this book, you will learn everything you wanted to know to effectively scale your deployments globally and build a resilient, scalable, and containerized cloud platform for your own use. The book starts by introducing you to the containerization ecosystem with some concrete and easy-to-digest examples; after that, you will delve into examples of launching multiple instances of the same container. From there, you will cover orchestration, multi-node setups, volumes, and almost every other relevant component of this new approach to deploying services. Using intertwined approaches, the book covers battle-tested tooling and the issues likely to be encountered in real-world scenarios in detail. You will also learn about the other supporting components required for a true PaaS deployment and discover common options for tying the whole infrastructure together. At the end of the book, you will build a small but functional PaaS (to appreciate the power of the containerized service approach) and go on to explore real-world approaches to implementing even larger global-scale services.

Who is this book for?

This book is aimed at system administrators, developers, DevOps engineers, and software engineers who want to get concrete, hands-on experience deploying multi-tier web applications and containerized microservices using Docker. This book is also for anyone who has worked on deploying services in some fashion and wants to take their small-scale setups to the next level (or simply to learn more about the process).

What you will learn

  • Set up a working development environment and create a simple web service to demonstrate the basics
  • Learn how to make your service more usable by adding a database and an app server to process logic
  • Add resilience to your services by learning how to horizontally scale with a few containers on a single node
  • Master layering isolation and messaging to simplify and harden the connectivity between containers
  • Learn about numerous issues encountered at scale and their workarounds, from the kernel up to code versioning
  • Automate the most important parts of your infrastructure with continuous integration

Product Details

Publication date: Nov 22, 2017
Length: 298 pages
Edition: 1st
Language: English
ISBN-13: 9781786463227
Vendor: Docker





Table of Contents

9 Chapters

  1. Containers - Not Just Another Buzzword
  2. Rolling Up the Sleeves
  3. Service Decomposition
  4. Scaling the Containers
  5. Keeping the Data Persistent
  6. Advanced Deployment Topics
  7. The Limits of Scaling and the Workarounds
  8. Building Our Own Platform
  9. Exploring the Largest-Scale Deployments

Customer reviews

Rating: 5.0 out of 5 (1 rating)

Rick S, Jun 30, 2018 (5 stars, Amazon verified review):
The author does a good job of explaining in a way that people who aren't familiar with Docker can still understand. He also has a very strong grasp of what he's explaining. Some of the bad security procedures that he explains how to sidestep are very easy to fall into without guidance. If the aim of this book is to bridge from beginner to expert, then the author did a great job. Five stars!
