vSphere High Performance Cookbook

vSphere High Performance Cookbook: A cookbook is the ideal way to learn a tool as complex as vSphere. Working through the real-world recipes in this tutorial will give you deep insight into vSphere's unique attributes and help you reach a high level of proficiency.

eBook
R$80 R$271.99
Paperback
R$339.99
Subscription
Free Trial
Renews at R$50p/m

What do you get with Print?

  • Instant access to your digital eBook copy whilst your print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want

vSphere High Performance Cookbook

Chapter 1. CPU Performance Design

In this chapter, we will cover the tasks related to CPU performance design. You will learn about the following aspects of CPU performance design:

  • Critical performance consideration – VMM scheduler

  • CPU scheduler – processor topology/cache aware

  • Ready time – warning sign

  • Hyperthreaded core sharing

  • Spotting CPU overcommitment

  • Fighting guest CPU saturation in SMP VMs

  • Controlling CPU resources using resource settings

  • What is most important to monitor in CPU performance

  • CPU performance best practices

Introduction


Ideally, a performance problem should be defined within the context of an ongoing performance management process. Performance management refers to the process of establishing performance requirements for applications, in the form of a service-level agreement (SLA), and then tracking and analyzing the achieved performance to ensure that those requirements are met. A complete performance management methodology includes collecting and maintaining baseline performance data for applications, systems, and subsystems, for example, storage and network.

In the context of performance management, a performance problem exists when an application fails to meet its predetermined SLA. Depending on the specific SLA, the failure might be in the form of excessively long response times or throughput below some defined threshold.

ESX/ESXi and virtual machine performance tuning is complicated because virtual machines share underlying physical resources, and in particular the CPU.

Finally, configuration issues or inadvertent user errors might lead to poor performance. For example, a user might use a symmetric multiprocessing (SMP) virtual machine when a single processor virtual machine would work well. You might also see a situation where a user sets shares but then forgets about resetting them, resulting in poor performance because of the changing characteristics of other virtual machines in the system.

If you overcommit any of these resources, you might see performance bottlenecks. For example, if too many virtual machines are CPU intensive, you might see slow performance because all of the virtual machines need to share the underlying physical CPU.

Critical performance consideration – VMM scheduler


The virtual machine monitor (VMM) is a thin layer that provides a virtual x86 hardware environment to the guest operating system on a virtual machine. This hardware includes a virtual CPU, virtual I/O devices, and timers. The VMM leverages key technologies in the VMkernel, such as scheduling, memory management, and the network and storage stacks.

Each VMM is devoted to one virtual machine. To run multiple virtual machines, the VMkernel starts multiple VMM instances, also known as worlds. Each VMM instance partitions and shares the CPU, memory, and I/O devices to successfully virtualize the system. The VMM can be implemented by using hardware virtualization, software virtualization (binary translation), or paravirtualization (which is deprecated) techniques.

Paravirtualization refers to the communication between the guest operating system and the hypervisor to improve performance and efficiency. The value proposition of paravirtualization is in the lower virtualization overhead, but the performance advantage of paravirtualization over hardware or software virtualization can vary greatly depending on the workload. Because paravirtualization cannot support unmodified operating systems (for example, Windows 2000/XP), its compatibility and portability are poor.

Paravirtualization can also introduce significant support and maintainability issues in production environments because it requires deep modifications to the operating system kernel and for this reason it was most widely deployed on Linux-based operating systems.

Getting ready

To step through this recipe, you need a running ESXi Server, a Virtual Machine, vCenter Server, and a working installation of the vSphere Client. No other prerequisites are required.

How to do it...

Let's get started:

  1. Open up VMware vSphere Client.

  2. Log in to the vCenter Server.

  3. In the virtual machine inventory, right-click on the virtual machine, and then click on Edit Settings. The Virtual Machine Properties dialog box appears.

  4. Click on the Options tab.

  5. Change the CPU/MMU Virtualization option under Advanced to one of the following options:

    • Automatic

    • Use software for instruction set and MMU virtualization

    • Use Intel VT-x/AMD-V for instruction set virtualization and software for MMU virtualization

    • Use Intel VT-x/AMD-V for instruction set virtualization and Intel EPT/AMD RVI for MMU virtualization

  6. Click on OK to save your changes.

  7. For the change to take effect, perform one of these actions:

    • Reset the virtual machine

    • Suspend and then resume the virtual machine

    • vMotion the virtual machine

How it works...

The VMM determines a set of possible monitor modes to use, and then picks one to use as the default monitor mode, unless something other than Automatic has been specified. The decision is based on:

  • The physical CPU's features and guest operating system type

  • Configuration file settings

There are three valid combinations for the monitor mode, as follows:

  • BT: Binary translation and shadow page tables

  • HV: AMD-V or Intel VT-x and shadow page tables

  • HWMMU: AMD-V with RVI, or Intel VT-x with EPT (RVI is inseparable from AMD-V, and EPT is inseparable from Intel VT-x)

BT, HV, and HWMMU are abbreviations used by ESXi to identify each combination.

When a virtual machine is powering on, the VMM inspects the physical CPU's features and the guest operating system type to determine the set of possible execution modes. The VMM first finds the set of modes allowed. Then it restricts the allowed modes by configuration file settings. Finally, among the remaining candidates, it chooses the preferred mode, which is the default monitor mode. This default mode is then used if you have left Automatic selected.

For the majority of workloads, the default monitor mode chosen by the VMM works best. The default monitor mode for each guest operating system on each CPU has been carefully selected after a performance evaluation of available choices. However, some applications have special characteristics that can result in better performance when using a non-default monitor mode. These should be treated as exceptions, not the rule.

The chosen settings are honored by the VMM only if the settings are supported on the intended hardware. For example, if you select Use software instruction set and MMU virtualization for a 64-bit guest operating system running on a 64-bit Intel processor, the VMM will choose Intel VT-x for CPU virtualization instead of BT. This is because BT is not supported by the 64-bit guest operating system on this processor.
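The selection logic above can be summarized in a short sketch. This is not ESXi code; it is a minimal Python model, with simplified feature flags and a single optional override standing in for the configuration file settings, that follows the same precedence: hardware capabilities and guest type first, then the configuration override, then the preferred default.

# Minimal sketch of the monitor-mode decision described above.
# Not ESXi code: the feature flags and the override knob are simplifying assumptions.

def candidate_modes(has_vt_or_amdv, has_ept_or_rvi, guest_is_64bit):
    """Monitor modes this hardware/guest combination could support."""
    modes = []
    if has_vt_or_amdv and has_ept_or_rvi:
        modes.append("HWMMU")   # VT-x/AMD-V with EPT/RVI
    if has_vt_or_amdv:
        modes.append("HV")      # VT-x/AMD-V with shadow page tables
    if not guest_is_64bit:
        modes.append("BT")      # binary translation (assumed 32-bit guests only)
    return modes

def choose_monitor_mode(has_vt_or_amdv, has_ept_or_rvi, guest_is_64bit, config_override=None):
    allowed = candidate_modes(has_vt_or_amdv, has_ept_or_rvi, guest_is_64bit)
    if config_override in allowed:          # configuration settings restrict the choice
        return config_override
    return allowed[0] if allowed else None  # otherwise the preferred (default) mode wins

# A 64-bit guest on a VT-x + EPT host defaults to HWMMU; a BT override that the
# hardware/guest combination cannot honor falls back to a supported mode.
print(choose_monitor_mode(True, True, True))                        # HWMMU
print(choose_monitor_mode(True, True, True, config_override="BT"))  # HWMMU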

There's more...

The virtual CPU consists of the virtual instruction set and the virtual memory management unit (MMU). An instruction set is a list of instructions that a CPU executes. The MMU is the hardware that maintains the mapping between the virtual addresses and the physical addresses in the memory.

The combination of techniques used to virtualize the instruction set and memory determines the monitor execution mode (also called the monitor mode). The VMM identifies the VMware ESXi hardware platform and its available CPU features, and then chooses a monitor mode for a particular guest operating system on that hardware platform. The VMM might choose a monitor mode that uses hardware virtualization techniques, software virtualization techniques, or a combination of hardware and software techniques.

Virtualizing the x86 architecture has always been a challenge. x86 operating systems are designed to run directly on bare-metal hardware, so they assume that they have full control of the computer hardware. The x86 architecture offers four levels of privilege to operating systems and applications to manage access to the computer hardware: ring 0, ring 1, ring 2, and ring 3. User-level applications typically run in ring 3, while the operating system needs direct access to the memory and hardware and must execute its privileged instructions in ring 0.

Binary translation allows the VMM to run in ring 0 for isolation and performance, while moving the guest operating system to ring 1. Ring 1 is a higher privilege level than ring 3 and a lower privilege level than ring 0.

VMware can virtualize any x86 operating systems by using a combination of binary translation and direct execution techniques. With binary translation, the VMM dynamically translates all guest operating system instructions and caches the results for future use. The translator in the VMM does not perform a mapping from one architecture to another; that would be emulation not translation. Instead, it translates from the full unrestricted x86 instruction set issued by the guest operating system to a subset that is safe to execute inside the VMM. In particular, the binary translator replaces privileged instructions with sequences of instructions that perform the privileged operations in the virtual machine rather than on the physical machine. This translation enforces encapsulation of the virtual machine while preserving the x86 semantics as seen from the perspective of the virtual machine.

Meanwhile, user-level code is directly executed on the processor for high-performance virtualization. Each VMM provides each virtual machine with all of the services of the physical system, including a virtual BIOS, virtual devices, and virtualized memory management.

In addition to software virtualization, there is support for hardware virtualization. This allows some of the work of running virtual CPU instructions to be offloaded onto the physical hardware. Intel has the Intel Virtualization Technology (Intel VT-x) feature. AMD has the AMD Virtualization (AMD-V) feature. Intel VT-x and AMD-V are similar in aim but different in detail. Both designs aim to simplify virtualization techniques.

CPU scheduler – processor topology/cache aware


ESXi Server has an advanced CPU scheduler geared towards providing high performance, fairness, and isolation of virtual machines running on Intel/AMD x86 architectures.

The ESXi CPU scheduler is designed with the following objectives:

  • Performance isolation: Multi-VM fairness.

  • Co-scheduling: The illusion that all vCPUs are concurrently online.

  • Performance: High throughput, low latency, high scalability, and low overhead.

  • Power efficiency: Saving power without losing performance.

  • Wide adoption: Enabling all the optimizations on diverse processor architectures.

Only one world can be active on a CPU at any given instant; multiple vCPUs can run on the same pCPU, just not at the same instant. Because there are often more runnable worlds than CPUs, queuing will occur, and the scheduler is responsible for controlling the queue, handling priorities, and preempting the use of the CPU.

The main task of the CPU scheduler is to choose which world is scheduled onto a processor. In order to give each world a chance to run, the scheduler allocates a time slice (the duration for which a world is allowed to execute, usually 10-20 ms, 50 ms for VMkernel worlds by default) to each world and then transitions the world between the run, wait, costop, and ready states.

ESXi implements a proportional share-based algorithm: it associates each world with a share of the CPU resource across all virtual machines. This is called entitlement, and it is calculated from the user-provided resource specifications, such as shares, reservations, and limits.
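As a rough illustration of the proportional share idea, and not the actual ESXi scheduler algorithm, the following Python sketch divides a host's CPU capacity among worlds in proportion to their shares, after first granting reservations and while honoring limits. The world names, MHz figures, and the simple redistribution loop are assumptions for illustration only.

# Simplified illustration of share-proportional CPU entitlement with
# reservation and limit clamping. This is NOT the ESXi scheduler algorithm,
# just a sketch of the proportional-share idea described above.

def entitlements(host_mhz, worlds):
    """worlds: dict name -> {'shares': int, 'reservation': MHz, 'limit': MHz or None}"""
    alloc = {name: w['reservation'] for name, w in worlds.items()}
    remaining = host_mhz - sum(alloc.values())       # capacity left after reservations
    active = set(worlds)
    while remaining > 1e-6 and active:
        total_shares = sum(worlds[n]['shares'] for n in active)
        distributed = 0.0
        for n in list(active):
            give = remaining * worlds[n]['shares'] / total_shares
            cap = worlds[n]['limit'] if worlds[n]['limit'] is not None else float('inf')
            take = min(give, cap - alloc[n])
            alloc[n] += take
            distributed += take
            if take < give:                          # world hit its limit; stop giving it CPU
                active.discard(n)
        remaining -= distributed
        if distributed <= 1e-6:                      # everything left is limited
            break
    return alloc

demo = {
    'web':  {'shares': 2000, 'reservation': 500,  'limit': None},
    'db':   {'shares': 1000, 'reservation': 1000, 'limit': None},
    'test': {'shares': 1000, 'reservation': 0,    'limit': 800},
}
print(entitlements(6000, demo))   # roughly {'web': 2967, 'db': 2233, 'test': 800}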

Getting ready

To step through this recipe, you need a running ESXi Server, a Virtual Machine, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

Let's get started:

  1. Log in to the VMware vSphere Client.

  2. In the virtual machine inventory, right-click on the virtual machine, and click on Edit Settings. The Virtual Machine Properties dialog box appears.

  3. Click on the Options tab.

  4. Under the Advanced section, click on the General row.

  5. Now on the right-hand side click on the Configuration Parameters button.

  6. Now click on the Add Row button at the bottom, add the parameter sched.cpu.vsmpConsolidate, and in the Value column type TRUE.

  7. The final screen should look like the following screenshot. Click on OK to save the setting.

How it works...

The CPU scheduler uses processor topology information to optimize the placement of vCPUs onto different sockets.

The CPU scheduler spreads the load across all the sockets to maximize the aggregate amount of cache available.

Cores within a single socket typically use a shared last-level cache. Use of a shared last-level cache can improve vCPU performance if the CPU is running memory-intensive workloads.

By default, the CPU scheduler spreads the load across all sockets in undercommitted systems. This improves performance by maximizing the aggregate amount of cache available to the running vCPUs. However, for workloads that benefit from sharing a last-level cache, such as the memory-intensive workloads mentioned above, it can be beneficial to schedule all of the vCPUs on the same socket, with a shared last-level cache, even when the ESXi host is undercommitted. In such scenarios, you can override the default behavior of spreading vCPUs across packages by including the following configuration option in the virtual machine's VMX configuration file: sched.cpu.vsmpConsolidate=TRUE. However, it is usually better to stick with the default behavior.

Ready time – warning sign


To achieve the best performance in a consolidated environment, you must consider ready time.

Ready time is the time a vCPU spends waiting in the queue for a pCPU (physical core) to become available to execute its instructions. The scheduler manages this queue; when there is contention and the processing resources are stressed, the queue can become long.

The ready time describes how much of the last observation period a specific world (for example, a vCPU) spent waiting in the queue to get access to a pCPU. It can be expressed as a percentage per vCPU over the observation time, and, statistically, it cannot be zero on average.

The value of the ready time, therefore, is an indicator of how long the VM was denied access to the pCPU resources which it wanted to use. This makes it a good indicator of performance.

When multiple processes are trying to use the same physical CPU, that CPU might not be immediately available, and a process must wait before the ESXi host can allocate a CPU to it.

The CPU scheduler manages access to the physical CPUs on the host system. A short spike in CPU used or CPU ready indicates that you are making the best use of the host resources. However, if both values are constantly high, the hosts are probably overloaded and performance is likely poor.

Generally, if the CPU used value for a virtual machine is above 90 percent and the CPU ready value is above 20 percent per vCPU (pay particular attention to VMs with a high number of vCPUs), performance is negatively affected.

This latency may impact the performance of the guest operating system and the running applications within a virtual machine.
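As a quick, hedged illustration of the preceding rule of thumb, the following Python helper flags a VM when both conditions hold. The thresholds come from the paragraph above; the function and argument names are made up for illustration and are not part of any vSphere API.

# Rule-of-thumb check from the text: CPU used above ~90 percent together with
# CPU ready above ~20 percent per vCPU suggests performance is being affected.

def cpu_pressure_warning(used_pct, ready_pct_total, num_vcpus,
                         used_threshold=90.0, ready_threshold_per_vcpu=20.0):
    ready_per_vcpu = ready_pct_total / max(num_vcpus, 1)
    return used_pct > used_threshold and ready_per_vcpu > ready_threshold_per_vcpu

print(cpu_pressure_warning(used_pct=95.0, ready_pct_total=50.0, num_vcpus=2))  # True
print(cpu_pressure_warning(used_pct=60.0, ready_pct_total=10.0, num_vcpus=2))  # False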

Getting ready

To step through this recipe, you need a running ESXi Server, a couple of CPU-hungry virtual machines, VMware vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

Let's get started:

  1. Open up vSphere Client.

  2. Log in to the VMware vCenter Server.

  3. On the home screen, navigate to Hosts and Clusters.

  4. Expand the left-hand navigation list.

  5. Navigate to one of the CPU-hungry virtual machines.

  6. Navigate to the Performance screen.

  7. Navigate to the Advanced view.

  8. Click on Chart Options.

  9. Navigate to CPU from the Chart metrics.

  10. Navigate to the VM object.

    1. Select only Demand, Ready, and Usage in MHz.

      The key metrics when investigating a potential CPU issue are:

    • Demand: Amount of CPU that the virtual machine is trying to use.

    • Usage: Amount of CPU that the virtual machine is actually being allowed to use.

    • Ready: Amount of time for which the virtual machine was ready to run (it had work it wanted to do) but was unable to run because vSphere could not find physical resources to run the virtual machine on.

  11. Click on OK.

In the following screenshot you will see the high ready time for the virtual machine:

Notice the amount of CPU this virtual machine is demanding and compare that to the amount of CPU usage the virtual machine is actually being able to get (usage in MHz). The virtual machine is demanding more than it is currently being allowed to use.

Notice that the virtual machine is also seeing a large amount of ready time.

Note

Ready time greater than 10 percent could be a performance concern. However, some less CPU-sensitive applications and virtual machines can have much higher values of ready time and still perform satisfactorily.

How it works...

Bad performance is when the users are unhappy. But that's subjective and hard to measure. We can measure other metrics easily, but they don't correlate perfectly with whether users' expectations are met. We want to find metrics that correlate well (though never perfectly) with user satisfaction. The final answer to "Is there a performance problem?" is always subjective, but we can use objective metrics to make reasonable bets and decide when it's worth asking the users whether they're satisfied with the performance.

A vCPU is in the ready state when it is ready to run (that is, it has a task it wants to execute) but is unable to because the vSphere scheduler cannot find physical host CPU resources to run the virtual machine on. One potential reason for elevated ready time is that the virtual machine is constrained by a user-set CPU limit or resource pool limit; the amount of CPU denied because of a limit is measured by the max limited (MLMTD) metric.

Ready time is reported in two different values between resxtop/esxtop and vCenter Server. In resxtop/esxtop, it is reported in an easily-understood percentage format. A figure of 5 percent means that the virtual machine spent 5 percent of its last sample period waiting for available CPU resources (only true for 1-vCPU VMs). In vCenter Server, ready time is reported as a time measurement. For example, in vCenter Server's real-time data, which produces sample values every 20,000 milliseconds, a figure of 1,000 milliseconds is reported for a 5 percent ready time. A figure of 2,000 milliseconds is reported for a 10 percent ready time.

Tip

As you may know, vCenter reports ready time in milliseconds (ms). Use the following formula to convert the millisecond value to a percentage:

Metric Value (in percent) = (Metric Value in milliseconds / Total time of the sample period in milliseconds) x 100

By default, the sample period is 20,000 ms in vCenter for real-time graphs.
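The same conversion can be done in code. The following small Python helper (the function name and the optional per-vCPU averaging are illustrative assumptions) turns the millisecond value reported by vCenter into the percentage format used by resxtop/esxtop:

# Convert vCenter ready time (ms per sample interval) into a percentage,
# using the formula above. 20,000 ms is the default real-time sample period.

def ready_ms_to_percent(ready_ms, sample_period_ms=20000, num_vcpus=1):
    # Divide by the number of vCPUs to get an average per-vCPU figure.
    return (ready_ms / sample_period_ms) * 100.0 / num_vcpus

print(ready_ms_to_percent(1000))               # 5.0  -> 5 percent ready
print(ready_ms_to_percent(2000))               # 10.0 -> 10 percent ready
print(ready_ms_to_percent(4000, num_vcpus=2))  # 10.0 percent per vCPU on a 2-vCPU VM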

Although high ready time typically signifies CPU contention, the condition does not always warrant corrective action. If the value for ready time is close in value to the amount of time used on the CPU, and if the increased ready time occurs with occasional spikes in CPU activity but does not persist for extended periods of time, this might not indicate a performance problem. The brief performance hit is often within the accepted performance variance and does not require any action on the part of the administrator.

Hyperthreaded core sharing


The Hyperthreaded (HT) core sharing option enables us to define how a virtual machine's vCPUs may share physical cores on a hyperthreaded system.

A hyperthreaded core has the same number of functional units as an older, non-hyperthreaded core, but it presents two logical CPUs (lCPUs). HT offers two execution contexts, so that it can achieve better functional unit utilization by letting more than one thread execute concurrently. On the other hand, if you are running two programs that compete for the same functional units, there is no advantage at all in having both run concurrently: when one is running, the other is necessarily waiting for the same functional units.

A dual-core processor has twice as many functional units as a single-core processor and can really run two programs concurrently with no competition for functional units. A CPU socket can contain multiple cores, and each core can do CPU-type work, so twice as many cores will be able to do (roughly) twice as much work. If a core also has hyperthreading enabled, each core has two logical processors. However, two lCPUs on the same core cannot do twice as much work as one core.

Getting ready

To step through this recipe, you need a running ESXi Server, a running virtual machine, VMware vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

Let's get started:

  1. Open up VMware vSphere Client.

  2. Log in to the vCenter Server.

  3. From the home screen, navigate to Hosts and Clusters.

  4. Expand the left-hand navigation list.

  5. Navigate to any one of the virtual machines.

  6. Right-click on the virtual machine and select Edit Settings.

  7. Click on the Resources tab.

  8. Click on Advanced CPU.

  9. Under Hyperthreaded Core Sharing, use the drop-down list to select any one of the available options.

There are three different HT sharing methods, as follows:

  • Any

  • None

  • Internal

How it works...

The following table elaborates the three methods of core sharing:

  • Any: The default for all virtual machines on a Hyperthreaded system. The virtual CPUs of a virtual machine with this setting can freely share cores with other virtual CPUs from this or any other virtual machine at any time.

  • None: Virtual CPUs of a virtual machine should not share cores with each other or with virtual CPUs from other virtual machines. That is, each virtual CPU from this virtual machine should always get a whole core to itself, with the other logical CPUs on that core being placed into the halted state.

  • Internal: This option is similar to None. Virtual CPUs from this virtual machine cannot share cores with virtual CPUs from other virtual machines. They can share cores with the other virtual CPUs from the same virtual machine. You can select this option only for SMP virtual machines. If applied to a uniprocessor virtual machine, the system changes this option to None.

These options have no effect on the fairness or CPU time allocation. Regardless of a virtual machine's hyperthreading settings, it still receives CPU time proportional to its CPU shares, and constrained by its CPU reservation and CPU limit values.

There's more...

If VMs with different numbers of vCPUs (for example, one vCPU and two vCPUs) run in the same virtual infrastructure cluster, there is a good chance that one vCPU of your dual-vCPU VM can work alone on one physical CPU while the other vCPU has to share a physical CPU with another VM. This causes tremendous synchronization overhead between the two vCPUs (you don't have this in physical multi-CPU machines because this synchronization is hardware based), which can cause the system process within the VM to go from 50 percent up to 100 percent CPU load.

Spotting CPU overcommitment


CPU overcommitment is when the number of vCPUs provisioned to the running virtual machines is greater than the number of physical cores on a host.

CPU overcommitment is a normal practice in many situations because it increases the consolidation ratio; however, you need to monitor it closely.

CPU overcommitment is not recommended when you need to satisfy or guarantee the workload of a tier-1 application with a tight SLA. It may, however, be successfully leveraged to highly consolidate light workloads and reduce power consumption on modern, multi-core systems.
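Numerically, overcommitment is just the vCPU-to-physical-core ratio implied by the definition above. The following Python sketch assumes you already know the vCPU counts of the running VMs and the host's core count; the example values are made up:

# CPU overcommitment as defined above: total vCPUs of running VMs
# greater than the number of physical cores on the host.

def overcommitment_ratio(vcpus_per_vm, physical_cores):
    total_vcpus = sum(vcpus_per_vm)
    return total_vcpus / physical_cores

vms = [2, 2, 4, 1]   # vCPU counts of the running VMs (example values)
ratio = overcommitment_ratio(vms, physical_cores=8)
print(f"vCPU:pCore ratio = {ratio:.2f}")              # 1.12
print("Overcommitted" if ratio > 1.0 else "Not overcommitted")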

Getting ready

To step through this recipe, you need a running ESXi Server, a couple of running CPU-hungry virtual machines, an SSH client (for example, PuTTY), vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

The following table elaborates on Esxtop CPU Performance Metrics:

  • %RDY

    Description: Percentage of time a vCPU in a run queue is waiting for the CPU scheduler to let it run on a physical CPU.

    Implication: A high %RDY time (use 20 percent as a starting point) may indicate the virtual machine is under resource contention. Monitor this; if the application speed is ok, a higher threshold may be tolerated.

  • %USED

    Description: Percentage of possible CPU processing cycles which were actually used for work during this time interval.

    Implication: The %USED value alone does not necessarily indicate that the CPUs are overcommitted. However, high %RDY values, plus high %USED values, are a sure indicator that your CPU resources are overcommitted.

How to do it...

To spot CPU overcommitment, you should closely monitor the CPU resource metrics described in the preceding table. Let's get started:

  1. Log in to the ESXi Server through the SSH client.

  2. Type esxtop and hit enter.

  3. Monitor the preceding values to understand CPU overcommitment.

This example uses esxtop to detect CPU overcommitment. Looking at the PCPU line near the top of the screen, you can determine that this host's two CPUs are 100 percent utilized. Four active virtual machines are shown, Res-Hungry-1 to Res-Hungry-4. These virtual machines are active because they have relatively high values in the %USED column. The values in the %USED column alone do not necessarily indicate that the CPUs are overcommitted. In the %RDY column, however, you see that these active virtual machines also have relatively high values. High %RDY values, plus high %USED values, are a sure indicator that your CPU resources are overcommitted.

From the CPU view, navigate to a VM and press the E key to expand the view. This gives a detailed per-vCPU view for the VM. This is important because, at a quick glance, CPU ready as a metric is best used when looking at performance concerns more broadly than for a specific VM. If a high ready percentage is noted, contention could be an issue, particularly if other VMs show high utilization while more vCPUs than physical cores are present. In that case, other VMs could be driving up the ready time of a mostly idle VM. So, long story short, if the CPU ready time is high on the VMs on a host, it's time to verify that no other VMs are seeing performance issues.
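Once you have collected per-VM %USED and %RDY values (for example, from esxtop running in batch mode), the interpretation described above is easy to script. The following Python sketch is illustrative only: the input dictionary format is an assumption, the 20 percent ready threshold comes from the table earlier in this recipe, and the 80 percent used threshold is an arbitrary stand-in for "relatively high".

# Flag likely CPU overcommitment from per-VM esxtop readings.
# Input format is an assumption: {"vm name": {"USED": pct, "RDY": pct}, ...}

def flag_overcommitment(vm_stats, rdy_threshold=20.0, used_threshold=80.0):
    suspects = []
    for vm, s in vm_stats.items():
        if s["RDY"] >= rdy_threshold and s["USED"] >= used_threshold:
            suspects.append(vm)
    return suspects

sample = {
    "Res-Hungry-1": {"USED": 95.0, "RDY": 28.0},
    "Res-Hungry-2": {"USED": 91.0, "RDY": 25.0},
    "File-Server":  {"USED": 12.0, "RDY": 2.0},
}
print(flag_overcommitment(sample))   # ['Res-Hungry-1', 'Res-Hungry-2']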

You can also use vCenter performance chart to spot the CPU overcommitment, as follows:

  1. Log in to the vCenter Server using vSphere Client.

  2. On the home screen, navigate to Hosts and Clusters.

  3. Go to the ESXi host.

  4. Click on the Performance tab.

  5. Navigate to the CPU from the Switch To drop-down menu on the right-hand side.

  6. Navigate to the Advanced tab and click on the Chart Options.

  7. Navigate to the ESXi host in the Objects section.

  8. Select only Used and Ready in the Counters section and click on OK.

Now you will see the ready time and the used time in the graph and you can spot the overcommitment. The following screenshot is an example output:

The following example shows that the host has high used time.

How it works...

Although high ready time typically signifies CPU contention, the condition does not always warrant corrective action. If the high ready time is accompanied by high used time, it might signify that the host is overcommitted.

So high used time and ready time for a host might signal contention. However, the host might not be overcommitted all the time: there might be periods of activity and periods that are idle, so the CPU is not overcommitted continuously.

Another very common source of high ready time for VMs, even when pCPU utilization is low, is slow storage. A vCPU that occupies a pCPU can issue a storage I/O and then sit in the WAIT state on the pCPU, blocking other vCPUs. The other vCPUs accumulate ready time, while this vCPU and this pCPU accumulate wait time (which is not part of the used or utilized time).

Fighting guest CPU saturation in SMP VMs


Guest CPU saturation happens when the application and operating system running in a virtual machine use all of the CPU resources that the ESXi host is providing for that virtual machine. However, this guest CPU saturation does not necessarily indicate that a performance problem exists.

Compute-intensive applications commonly use all of the available CPU resources, but this is expected and might be acceptable (as long as the end user thinks that the job is completing quickly enough). Even less-intensive applications might experience periods of high CPU demand without experiencing performance problems. However, if a performance problem exists when guest CPU saturation is occurring, steps should be taken to eliminate the condition.

When a virtual machine is configured with more than one vCPU but actively uses only one of them, resources that could be used to perform useful work are being wasted. In this case you may see a potential performance problem, at least from the perspective of the most active vCPU.

Getting ready

To step through this recipe, you need a running ESXi Server, a couple of running CPU-hungry virtual machines, vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

To spot CPU saturation in the guest OS, there are two CPU metrics that you should monitor closely:

  • The ready time

  • The usage percentage

  1. Log in to the vCenter Server using vSphere Client.

  2. From the home screen, navigate to Hosts and Clusters.

  3. Expand the ESXi host and go to the CPU hungry VM.

  4. Click on the Performance tab.

  5. Navigate to the CPU from the Switch To drop-down menu on the right-hand side.

  6. Navigate to the Advanced tab and click on the Chart Options.

  7. Select only Usage Average in Percentage, Ready, and Used in the Counters section and click on OK.

The preceding example shows high usage and used values; we can see that they are at 100 percent.

The preceding example shows that, after the CPU increase in the VM, the CPU usage percentage dropped to 52 percent.

How it works...

So, for an SMP VM that shows high CPU demand, it may be that either the application is single-threaded or the guest operating system is configured with a uniprocessor HAL.

Many applications are written with only a single thread of control. These applications cannot take advantage of more than one processor core.

In order for a virtual machine to take advantage of multiple vCPUs, the guest operating system running on the virtual machine must be able to recognize and use multiple processor cores. If the virtual machine is doing all of its work on vCPU0, the guest operating system might be configured with a kernel or a HAL that can recognize only a single processor core.
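Before choosing a fix, it can help to confirm the pattern described above, that is, an SMP VM doing nearly all of its work on one vCPU. Both esxtop (with the expanded per-vCPU view) and the vCenter charts can give you per-vCPU usage figures; the following Python sketch, with an assumed input list and an arbitrary dominance threshold, shows the kind of check you might apply to them.

# Detect an SMP VM doing nearly all of its work on a single vCPU,
# which may indicate a single-threaded application or a uniprocessor HAL/kernel.

def single_vcpu_bound(per_vcpu_usage_pct, dominance=0.8):
    total = sum(per_vcpu_usage_pct)
    if total == 0:
        return False
    return max(per_vcpu_usage_pct) / total >= dominance

print(single_vcpu_bound([97.0, 3.0, 2.0, 1.0]))    # True: almost everything on one vCPU
print(single_vcpu_bound([45.0, 50.0, 40.0, 48.0])) # False: work is spread out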

You have two possible approaches to solving performance problems related to guest CPU saturation:

  • Increase the CPU resources provided to the application.

  • Increase the efficiency with which the virtual machine uses CPU resources.

Adding CPU resources is often the easiest choice, particularly in a virtualized environment. If a virtual machine continues to experience CPU saturation even after adding CPU resources, the tuning and behavior of the application and operating system should be investigated.

Controlling CPU resources using resource settings


If you cannot rebalance the CPU load or increase processor efficiency even after applying the recipes discussed earlier, then something else might be keeping the host CPU saturated.

That something could be a resource pool and the way it allocates resources to the virtual machines.

Many applications, such as batch jobs, respond to a lack of CPU resources by taking longer to complete but still produce correct and useful results. Other applications might experience failure or might be unable to meet the critical business requirements when denied sufficient CPU resources.

The resource controls available in vSphere can be used to ensure that the resource-sensitive applications always get sufficient CPU resources, even when host CPU saturation exists. You need to make sure that you understand how shares, reservations, and limits work when applied to resource pools or to individual VMs. The default values ensure that ESXi will be efficient and fair to all VMs. Change from the default settings only when you understand the consequences.

Getting ready

To step through this recipe, you need a running ESXi Server, a couple of running CPU hungry virtual machines, vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...

Let's get started:

  1. Log in to the vCenter Server using vSphere Client.

  2. From the home screen, navigate to Hosts and Clusters.

  3. Expand the ESXi host and navigate to the CPU hungry virtual machine.

  4. Click on the Performance tab.

  5. Go to CPU from the Switch To drop-down menu on the right-hand side.

  6. Go to Advanced tab and click on the Chart Options.

  7. Select only Ready and Used in the Counters section and click on OK.

Now, if a low limit is configured on the VM while it is craving CPU resources, you will see a high ready time and a low used metric. An example of what this may look like is given in the following image:

Look at the preceding example: when the VM is craving more CPU resources and a limit is placed on top of it, it experiences a high ready time and a low used time. In this example, the VM is set with a limit of 500 MHz.
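To make the relationship concrete, here is a hedged sketch of the arithmetic: demand above the configured limit cannot be used, so it shows up as time the vCPU was ready (or limited, MLMTD) rather than running. The 500 MHz limit matches the example above; the demand and vCPU speed values are assumptions.

# Illustration of the effect of a CPU limit: demand above the limit
# cannot be used, so it accumulates as ready/limited time instead.

def limited_vm(demand_mhz, limit_mhz, vcpu_speed_mhz=2000, period_ms=20000):
    used_mhz = min(demand_mhz, limit_mhz)
    starved_mhz = max(demand_mhz - limit_mhz, 0)
    # Fraction of the sample period spent unable to run because of the limit.
    limited_ms = period_ms * starved_mhz / vcpu_speed_mhz
    return used_mhz, limited_ms

used, limited = limited_vm(demand_mhz=1800, limit_mhz=500)
print(used)     # 500     -> low used value, capped by the limit
print(limited)  # 13000.0 -> ms of the 20,000 ms sample spent limited/ready rather than running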

Now to rectify this, we can change the limit value and the VM should perform better with a low ready time and a high used value.

  1. Right-click on the CPU-hungry virtual machine and select Edit Settings.

  2. Click on the Resources tab.

  3. Click on CPU.

  4. Change the Share Value to High (2000 Shares).

  5. Change the Limit value to 2000MHz and Reservation to 2000MHz.

  6. Click on OK.

Now the VM should look and perform as shown in the following screenshot:

What is most important to monitor in CPU performance


Before you jump to conclusions about what to monitor for CPU performance, you need to make sure that you know what affects it. Things that can affect CPU performance include:

  • CPU affinity: When you pin a virtual CPU to a physical CPU, your resources may become imbalanced, so this is not advised unless you have a strong reason to do it.

  • CPU prioritization: When CPU contention happens, the CPU scheduler will be forced to prioritize VMs based on entitlement and queue requests.

  • SMP VMs: If your application is not multithreaded, then there is no benefit in adding more vCPUs to a VM. In fact, the extra idle vCPUs add overhead that prevents more useful work from being done.

  • Idle VMs: You may have too many idle VMs, which you think should not eat up resources. In reality, however, they still incur CPU interrupt overhead, and shares, reservations, and especially limit settings still apply to those VMs if these were changed from their default settings.

So, now that you know what affects CPU performance, you can look at what it takes to monitor it.

You can categorize the factors that should be monitored for the CPU performance into three main sections:

  • Host CPU usage

  • VM CPU usage

  • VM CPU ready time

To monitor these sections, you need to know the following esxtop counters:

  • PCPU Used (%)

  • Per group statistics

    • %Used

    • %Sys

    • %RDY

    • %Wait

    • %CSTP

    • %MLMTD

Getting ready

To step through this recipe, you need a running ESXi Server, a couple of running CPU-hungry virtual machines, and an SSH client (for example, PuTTY). No other prerequisites are required.

How to do it...

Let's get started:

  1. Log in to the ESXi host using an SSH client (for example, PuTTY).

  2. Run esxtop and monitor the statistics. The following screenshot is an example output:

  3. Now look at the performance counters as mentioned previously. In the following example output, look at the different metrics.

In the preceding example, you can see that our pCPU 0 and pCPU 1 are being heavily used (100 percent and 73 percent UTIL, respectively), as shown in the following figure:

Now, in the preceding example, you see that the %USED values for the four CPU-hungry virtual machines are pretty high.

Also look at the %RDY screen, and you will see a high ready time, which indicates a performance problem.

The following list is a quick explanation for each of these metrics:

  • PCPU USED (%): This is the CPU utilization per physical CPU.

  • %USED: This is the physical CPU usage per group.

  • %SYS: This is the VMkernel system activity time.

  • %RDY: This is the ready time, that is, the amount of time that the group spent ready to run but waiting for the CPU to become available. Note that this is not adjusted for the number of vCPUs. You should expand the group to see %RDY for each vCPU, or at least divide this value by the number of vCPUs to get an average per vCPU (see the sketch after this list).

  • %WAIT: This is the percentage of time spent in a blocked or busy-wait state. This includes idle time and also the time spent waiting for I/O from the disk or network.

  • %CSTP: This is the percentage of time the vCPUs of the group spent in a co-stopped state. %CSTP for a vCPU is how much time the vCPU spent not running in order to allow the extra vCPUs in the same VM to catch up. High values suggest that this VM has more vCPUs than it needs and the performance might be suffering.

  • %MLMTD: This is the amount of time spent ready to run, but not scheduled because of a CPU limit.
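Here is the sketch referenced in the %RDY bullet: a small Python helper that normalizes a group-level %RDY value to an average per vCPU and compares it against the 10 percent-per-vCPU concern level mentioned earlier in this chapter. The function names and sample values are illustrative.

# Normalize a group-level %RDY (which sums across all vCPUs of the group)
# to an average per-vCPU figure, then compare against a concern threshold.

def per_vcpu_ready(group_rdy_pct, num_vcpus):
    return group_rdy_pct / max(num_vcpus, 1)

def ready_concern(group_rdy_pct, num_vcpus, threshold_pct=10.0):
    return per_vcpu_ready(group_rdy_pct, num_vcpus) > threshold_pct

print(per_vcpu_ready(48.0, 4))   # 12.0 percent per vCPU
print(ready_concern(48.0, 4))    # True: above the 10 percent starting point
print(ready_concern(48.0, 8))    # False: 6 percent per vCPU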

CPU performance best practices


CPU virtualization adds a varying amount of overhead. Because of this, you may need to fine-tune CPU performance, and you need to know the standard best practices.

The following are the standard CPU performance best practices:

  • You need to avoid using SMP VMs unless they are required by the application running inside the guest OS. That means that if the application is not multithreaded, there is no benefit in using an SMP VM.

  • You should prioritize VM CPU usage with the proportional share algorithm.

  • Use DRS (Distributed Resource Scheduler) and vMotion to redistribute VMs and reduce contention.

  • Use the latest available virtual hardware for the VMs.

  • Reduce the number of VMs running inside a single host. This way, you not only reduce contention, but also reduce the size of the fault domain.

  • You should leverage the application tuning guide from the vendor to tune your VMs for best performance.

Getting ready

To step through this recipe, you need a running ESXi Server, a couple of running virtual machines, and a working installation of vSphere Client. No other prerequisites are required.

How to do it…

Let's get started:

  1. For the first best practice, you need to check whether the application is single threaded or multi-threaded. If it is single threaded, then avoid running SMP VM.

  2. You need to log in to vCenter using vSphere Client, then go to the Home tab. Once there, go to the VM and look at the Summary tab.

  3. Now you can see whether the VM has one vCPU or multiple vCPUs. You can see whether it is actually using them by looking at %Utilization or a similar metric for each vCPU. The Summary tab alone does not tell you whether the application is single-threaded or multithreaded.

  4. For the second best practice, you need to prioritize the VM CPU using shares and reservation. Depending on the customer SLA, this has to be defined.

  5. You need to log in to the vCenter using vSphere Client, then go to the Home tab. Once there, go to the VM, right-click on it, and then select Edit Settings.

  6. Now go to the Resources tab and select CPU. Here, you need to define the Shares and Reservation values depending on your SLA and the performance factors. By default, ESXi is efficient and fair. It does not waste physical resources. If all the demands can be met, all will be. If not all demands can be satisfied, the deprivation is shared equitably among VMs, by default.

    Determine what the VMs can use, and then adjust the shares, reservations, or limits settings, but be sure that you know how they work first.

  7. For the third best practice, you need to have a vSphere Cluster and have DRS enabled for this. DRS would load balance the VMs across the ESXi hosts using vMotion.

    The first screenshot shows that the DRS is enabled on this vSphere Cluster:

    The second screenshot shows the automation level and migration threshold.

  8. For the fourth best practice, you first need to see which virtual hardware version the VM is running on, and if it is not current, you need to upgrade it. The virtual hardware version can limit the number of vCPUs.

  9. You need to log in to the vCenter using vSphere Client, then go to the Home tab. Once there, go to VM and look at the Summary tab.

  10. In the following example, the VM is at hardware Version 8, which is old, and we can upgrade it to hardware Version 9.

  11. Now, to upgrade the virtual hardware of a VM, it has to be powered off. Then right-click on the VM and go to Upgrade Virtual Hardware. It should give you a warning.

    Tip

    Take a snapshot prior to upgrading in order to mitigate the rare occurrence of a failure to boot the Guest Operating System after upgrading.

  12. Once you click on OK, the virtual hardware version will be upgraded.

  13. For the fifth recommendation, you need to balance the number of vCPUs required by the VMs that will run on the host against the number of sockets/cores available in each physical host. Remember that how far to take the golden rule of "don't keep all your eggs in one basket" depends on fault domain tolerance and the customer SLA. There is no simple answer to this. Monitor the VMs for performance and adjust as necessary.

  14. For the last recommendation, you need to get the vendor application tuning guide and follow that to tune your virtual environment. A typical example is Exchange 2010 Best Practices guide on VMware.


Key benefits

  • Troubleshoot real-world vSphere performance issues and identify their root causes
  • Design and configure CPU, memory, networking, and storage for better and more reliable performance
  • Comprehensive coverage of performance issues and solutions including vCenter Server design and virtual machine and application tuning

Description

VMware vSphere is the key virtualization technology in today's market. vSphere is a complex tool, and incorrect design and deployment can create performance-related problems. vSphere High Performance Cookbook is focused on solving those problems, as well as providing best practices and performance-enhancing techniques. It offers a comprehensive understanding of the different components of vSphere and of the interaction of these components with the physical layer, which includes the CPU, memory, network, and storage.

If you want to improve or troubleshoot vSphere performance, then this book is for you! vSphere High Performance Cookbook will teach you how to tune and grow a VMware vSphere 5 infrastructure, focusing on tuning, optimizing, and scaling the infrastructure using the vSphere Client graphical user interface. It will give you the knowledge, skills, and abilities to build and run a high-performing VMware vSphere virtual infrastructure. You will learn how to configure and manage ESXi CPU, memory, networking, and storage for sophisticated, enterprise-scale environments. You will also learn how to manage changes to the vSphere environment and optimize the performance of all vSphere components.

This book also covers high-value and often overlooked performance-related topics such as the NUMA-aware CPU scheduler, the VMM scheduler, core sharing, the virtual memory reclamation technique, checksum offloading, VM DirectPath I/O, queuing on the storage array, command queuing, vCenter Server design, and virtual machine and application tuning. By the end of this book, you will be able to identify, diagnose, and troubleshoot operational faults and critical performance issues in vSphere.

Who is this book for?

The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

What you will learn

  • Understand VMM Scheduler, Cache aware CPU Scheduler, NUMA Aware CPU Scheduler, and so on during CPU Performance Design
  • Learn about the virtual memory reclamation technique, monitoring host ballooning, and swapping activity
  • Get to grips with different vSwitch load balancing, considerations for checksum offloading, VMDirectPath I/O, and so on
  • Understand DRS algorithms, resource pool guidelines, SIOC threshold consideration, SDRS and its affinity/anti-affinity rules in DRS, SDRS, and resource control design
  • Scale up and scale out cluster design for performance, FT and its caveats, application monitoring, DPM, host affinity/anti-affinity rules
  • Design your vSphere storage based on various workloads and FC storage for best performance
  • Choose the right platform while designing your vCenter Server, redundant vCenter design, vCenter SSO and its deployment
Estimated delivery fee Deliver to Brazil

Standard delivery 10 - 13 business days

R$63.95

Premium delivery 3 - 6 business days

R$203.95
(Includes tracking information)

Product Details

Publication date : Jul 26, 2013
Length: 240 pages
Edition : 1st
Language : English
ISBN-13 : 9781782170006
Vendor : VMware




Table of Contents

8 Chapters
  1. CPU Performance Design
  2. Memory Performance Design
  3. Networking Performance Design
  4. DRS, SDRS, and Resource Control Design
  5. vSphere Cluster Design
  6. Storage Performance Design
  7. Designing vCenter and vCenter Database for Best Performance
  8. Virtual Machine and Application Performance Design

Customer reviews

Rating distribution: 4.7 out of 5 (7 Ratings)
5 star 71.4%
4 star 28.6%
3 star 0%
2 star 0%
1 star 0%
J. Walker Dec 23, 2013
5 out of 5 stars
I am working on a environment that is under extremely high stress in terms of CPU, network, memory, and IO performance. We moved it from all physical machines to VMs recently. Initially, the VMs could not handle the stress. The suggestions in this book, especially for IO, CPU, and Memory improved the performance enough to allow the VMs to handle the stress adequately.
Amazon Verified review
Lee Marzke Oct 31, 2013
5 out of 5 stars
Packt provided me a copy of this new book to review on my blog site [...] In summary the book provides a lot of good information I've not seen elsewhere on setting up Performance graphs and improving memory and network performance. To read the entire review see: [...]
Amazon Verified review
SLJ Johnson Oct 24, 2014
5 out of 5 stars
Personally I really enjoy cookbooks as they're condensed specific books which focus on the stuff you're after. Less platitudinal vendor nonsense and just the facts on how to make stuff a bit better than OOTB.
Amazon Verified review
Larry Karnis Aug 22, 2013
5 out of 5 stars
Had a quick look through the book and am impressed. The scope and depth of detail in the book is great. Gets right to the point, explains the issue and tells you what to do to make things better/faster/etc. Explains trade offs (when choices available) as well as things to watch out for. Lots of actionable suggestions.You could probably find most/all of the same information by finding/reading VMware blogs. This book saves a lot of time (and filters out a lot of blog 'noise') so it is worth the money.FYI - this is not a book for beginners. You should be a solid VMware administrator (VCP or equivalent skills/experience).
Amazon Verified review
VirtuallyMikeB Oct 28, 2013
5 out of 5 stars
Good day,

Packt Publishing was gracious enough to give me a free e-copy of the book to review. I reviewed it on my blog at VirtuallyMikeBrown dot com. I've included the full text of my review below.

I was graciously given the opportunity to read and review vSphere High Performance Cookbook, written by Prasenjit Sarkar (@stretchcloud) and published by Packt Publishing, whose subtitle states it has Over 60 recipes to help you improve vSphere performance and solve problems before they arise. Gulping down its chapters was easy after seeing that Prasenjit's recipes included fixes for such common, and some not so common, misconfigurations or lack thereof.

The book states its audience includes technical professionals with vSphere administration experience that want to use advanced options and configurations to optimize their environments. The vSphere platform used in the book is 5.1. As I was reading, I kept wanting to give the book to the VMware admins I've come across to help them improve their deployments because I know how much they could use the recipes inside. In my varied VMware experiences, I've come across many of the topics presented in the book. I know first-hand how useful they can be and how often they go unnoticed or are left unconfigured.

The chapter list includes the following topics:
CPU Performance Design
Memory Performance Design
Networking Performance Design
DRS, SDRS, and Resource Control Design
vSphere Cluster Design
Storage Performance Design
Designing vCenter and vCenter Database for Best Performance
Virtual Machine and Application Performance Design

These topics are foundational in building out a vSphere environment for the best performance. I'm reminded of a live-blog post by Scott Lowe (@scott_lowe) during the VMworld 2010 timeframe, if I remember right, where then-VMware CEO Paul Maritz stated there were about 800,000 VMware Administrators and about 60,000 of them were VCPs. I know these numbers have changed since then, but what this says to me is that the large majority of IT folks with their hands in a vSphere infrastructure have not taken the formal VCP training which happens to cover a lot of the topics in this book. In my experience, most VMware administrators are not virtualization folks; they're traditional Microsoft server folks that have been forced to work in a virtualized environment because that's how the technology train has rolled. They're not dumb, of course, but they sure could use some pointers in how to better manage and optimize a vSphere infrastructure. This book focuses on optimization and does a fine job.

Common topics such as understanding %RDY, memory reclamation, swapping, vSwitch load balancing, multi-NIC vMotion, resource pool guidelines, affinity/anti-affinity rules, scale up vs. scale out, considerations for iSCSI and FC storage, which platforms to choose for a vCenter Server, SSO, and NUMA considerations are just a few of those covered in this cookbook. There are also more advanced topics covered I wasn't even aware of, such as modifying CPU scheduler options for processor topology and cache awareness.

As we study and gain experience with vSphere, we hear about these topics in different capacities, but this book brings the topics together to focus on how to improve performance. Each topic includes an introduction to the concept followed by a section on what you need in a test lab to follow the recipe in the cookbook style. Through screenshots, graphs and tables, you're then shown how to perform the task. And finally, how the concept works is explained, perhaps with additional material to round out the topic.

In addition, attention grabbing performance-enhancing topics include:
Spotting CPU over commitment
What is most important to monitor in CPU performance
Key memory performance metrics to monitor
Identifying when memory is the problem
Memory performance best practices
Improving network performance using network I/O control
Using resource pool guidelines
Designing a highly available and high-performance iSCSI SAN
Designing a highly available and high-performance FC SAN

If you're like me, you know a VMware admin or two that could benefit from reading this book. Thanks again to Packt Publishing for the opportunity to review this book. A free, digital copy was provided to me for doing so.
Amazon Verified review

FAQs

What is the delivery time and cost of print books?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days.
Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and Islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time over the weekend, will begin printing two business days later. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is customs duty/charge?

Customs duties are charges levied on goods when they cross international borders; they are a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to recipient countries outside the EU27, a customs duty or localized taxes may be applicable and would be charged by the recipient country. These duties must be paid by the customer and are not included in the shipping charges applied to the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions like weight, and other criteria applicable in your country.

For example (a worked sketch follows this list):

  • If you live in Mexico and the declared value of your ordered items is over $50, then to receive the package you will have to pay an additional import tax of 19% (here, $9.50) to the courier service.
  • If you live in Turkey and the declared value of your ordered items is over €22, then to receive the package you will have to pay an additional import tax of 18% (here, €3.96) to the courier service.
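The figures above are a simple rate-times-value calculation. For illustration only, here is a minimal sketch, assuming the duty is just the local import-tax rate applied to the declared value once it reaches the stated threshold (actual customs rules are set by the recipient country and may differ):

```python
# Illustrative sketch only: assumes duty = import-tax rate x declared value
# once the declared value reaches the threshold. Real customs rules vary by
# country and may use different bases or exemptions.

def import_duty(declared_value: float, rate: float, threshold: float) -> float:
    """Estimated import duty; zero if the declared value is below the threshold."""
    return declared_value * rate if declared_value >= threshold else 0.0

# Figures from the examples above:
print(import_duty(50.00, 0.19, threshold=50.00))  # Mexico: 19% of $50 -> 9.5
print(import_duty(22.00, 0.18, threshold=22.00))  # Turkey: 18% of €22 -> 3.96
```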
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact [email protected] with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then once you receive it you can contact us at [email protected] using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e. where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com, and we will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at [email protected] with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact our Customer Relations Team at [email protected] within one hour of placing the order and we will replace the item or refund the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e. during download), you should contact our Customer Relations Team at [email protected] within 14 days of purchase, and they will be able to resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund for one book from a multi-item order, we will refund you for the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at [email protected] within 14 days of receiving the book, with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on applicable laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal