
Distributed Computing in Java 9: Leverage the latest features of Java 9 for distributed computing

eBook
$27.98 $39.99
Paperback
$48.99

What do you get with Print?

  • Instant access to your digital eBook copy whilst your print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want

Distributed Computing in Java 9

Quick Start to Distributed Computing

Distributed computing is the process of accomplishing a bigger task by splitting it into multiple subtasks, which are performed by multiple components located in a network of computers termed a distributed system. These components have the capability to communicate and coordinate their activities by exchanging information and/or the status of their individual processes. Such distributed systems allow organizations to maintain comparatively smaller and cheaper computers in a network rather than having to maintain one large server with bigger capacity.

In this chapter, we will cover the following topics:

  • Evolution of computing models
  • Parallel computing
  • Amdahl's law
  • Distributed computing
  • Parallel versus distributed computing
  • Design considerations for distributed systems
  • Java support

Let's begin our discussion by remembering the great Charles Babbage, considered to be the "father of the computer", who originated the concept of a programmable computer. An English mechanical engineer and polymath, he conceptualized and invented the first mechanical computer in the early 19th century. Alan Turing introduced the principle of the modern computer in 1936, and modern digital computers were heralded to the world in the 1940s; the Electronic Numerical Integrator and Computer (ENIAC) was among the earliest electronic general-purpose computers made. From there on, computers have evolved to become faster and cheaper at an astonishing rate, along with their operating systems, programming languages, and so on. Computers with such fast processing capacity were called supercomputers and used to occupy more than one big room. Today, we have multicore computers such as minicomputers and mobiles/smartphones, which can be carried in a pocket and are able to do most of the jobs humans need in day-to-day life.

While a computer may be regarded as executing one gigantic program stored in its main memory, in some computers it is necessary to execute several programs concurrently. This is achieved through multitasking; that is, the computer switches rapidly between multiple executing programs to give the appearance that they are running simultaneously.

Next-generation computers are designed to distribute their processing across numerous CPUs in a multiprocessing configuration. This technique was earlier available only in large, powerful computers such as supercomputers, servers, and mainframes. Nowadays, such multiprocessor and multicore capabilities are extensively available on personal computers and laptops.

Although such high-speed computers demonstrate impressive processing abilities, the next serious invention that transformed the world of processing was high-speed computer networking. This technique permitted an enormous number of computers to interact and established the next level of processing. The incredible fact about networked computers is that they can be placed geographically within the same location, connected as a Local Area Network (LAN), or be situated across continents and connected as a Wide Area Network (WAN).

Today, a new computer/smartphone is expected to have multiprocessor/multicore capacity at an affordably low cost. Besides, the trend has shifted from the CPU to the Graphics Processing Unit (GPU), also called the Visual Processing Unit (VPU), which can be installed in personal computers, mobile phones, workstations, embedded systems, and gaming consoles. Recent GPUs are very capable at computer graphics manipulation and image processing, and they are more efficient than general-purpose CPUs for such workloads due to their highly parallel structure.

The following diagram represents the evolution of computing models from mainframe to cloud, showing how concerns such as availability SLA, scaling, hardware, HA type, software, and consumption have varied over time with technology.

Early computing was uniprocessor computing, performed on a single processor, which can be called centralized computing. Later, parallel computing, with more than one processor simultaneously executing a single program, helped middleware processing. Parallel processing was achieved either through a single computer with multiple CPUs or through multiple network-connected computers (with the help of software).

Let us now learn in detail about parallel computing and how the trend moved toward distributed computing.

Parallel computing

A parallel system contains more than one processor with direct memory access to a shared memory that can form a common address space. Usually, a parallel system has a Uniform Memory Access (UMA) architecture, in which the access latency (processing time) for any particular memory location is the same from every processor. The processors are configured in close proximity and are connected through an interconnection network. Conventionally, interprocessor communication happens through read and write operations on the shared memory, even though message passing is also possible (emulated on the shared memory). Moreover, the hardware and software are tightly coupled, and the processors in such a network usually run the same operating system. In general, the processors are homogeneous and are installed within the same container as the shared memory. A multistage switch/bus with a regular and symmetric design is used for greater efficiency.

The following diagram represents a UMA parallel system with multiple processors connected to multiple memory units through an interconnection network.


A multicomputer parallel system is another type of parallel system, containing multiple processors configured without direct access to a shared memory. The memory of the multiple processors may or may not form a common address space, and such computers usually do not contain a common clock in practice. The processors are placed in close proximity, are generally tightly coupled with homogeneous software and hardware, and are connected through an interconnection network. The processors communicate either through a common address space or via message passing. This is represented in the following diagram.

A multicomputer system in a Non-Uniform Memory Access (NUMA) architecture is usually configured with a common address space. In NUMA architecture, accessing different memory locations in the shared memory from different processors shows different latency times.

Array processors exchange information by passing it as messages. Array processors have a very small market, as they perform closely synchronized data processing in lock-step for applications such as digital signal processing and image processing. Such applications can also involve large iterations on the data.

Compared to UMA and array processor architectures, NUMA as well as message-passing multicomputer systems are less preferred when shared data access and communication are frequent. The primary benefit of parallel systems is better throughput through sharing the computational load between multiple processors. The tasks that can easily be partitioned into multiple subtasks, and that need little communication for synchronization, are the most efficient to execute on parallel systems; such subtasks can be executed as large vector or matrix computations, which are common in scientific applications (see the sketch after the following list). Though parallel computing was much appreciated in research and was beneficial on legacy architectures, it is no longer considered efficient/economic in recent times for the following reasons:

  • They need specially configured compilers
  • The market for applications that can attain efficiency through parallel processing is very small
  • The evolution of more powerful and efficient computers at lower costs made it less likely that organizations would choose parallel systems
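
The following is the sketch referenced above: a minimal Java example (the class name ParallelVectorAdd and the array sizes are our own illustrative choices) of the ideal parallel task, an element-wise vector addition whose subtasks are fully independent and need no communication:

import java.util.stream.IntStream;

public class ParallelVectorAdd {
    public static void main(String[] args) {
        int n = 1_000_000;
        double[] a = new double[n], b = new double[n], c = new double[n];
        IntStream.range(0, n).forEach(i -> { a[i] = i; b[i] = 2.0 * i; });

        // Each index is an independent subtask; the runtime partitions the
        // range across the available cores with no coordination required.
        IntStream.range(0, n).parallel().forEach(i -> c[i] = a[i] + b[i]);

        System.out.println("c[42] = " + c[42]); // prints c[42] = 126.0
    }
}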


Amdahl's law

Amdahl's law is frequently used in parallel computing to forecast the theoretical improvement in speedup when using multiple processors. The law is named after the famous computer scientist Gene Amdahl; it was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.

The standard formula for Amdahl's law is as follows:

S_latency(s) = 1 / ((1 - p) + p/s)

where:

  • S_latency is the calculated improvement of the latency (execution time) of the complete task
  • s is the speedup of the part of the task that benefits from the improved system resources
  • p is the proportion of the total execution time that the part benefiting from improved resources originally occupied

Let's consider an example of a single task that can be partitioned into four subtasks, with execution time percentages p1 = 0.11, p2 = 0.18, p3 = 0.23, and p4 = 0.48, respectively. The first subtask is not sped up, so s1 = 1. The second subtask is sped up by five times, so s2 = 5. The third subtask is sped up by 20 times, so s3 = 20. Finally, the fourth subtask is sped up by 1.6 times, so s4 = 1.6.

By using Amdahl's law, the overall speedup is as follows:

S_latency = 1 / (p1/s1 + p2/s2 + p3/s3 + p4/s4)
          = 1 / (0.11/1 + 0.18/5 + 0.23/20 + 0.48/1.6)
          = 1 / 0.4575 ≈ 2.19

Notice how the 5 times and 20 times speedups on the second and third parts, respectively, don't have much effect on the overall speedup when the fourth part (48% of the execution time) is sped up only 1.6 times.
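
As a quick check of this arithmetic, here is a minimal Java sketch (the class and method names are our own) that evaluates Amdahl's formula for the four subtasks:

public class AmdahlExample {
    // Overall speedup for a task split into parts with time proportions p[i]
    // and per-part speedups s[i]: 1 / (p1/s1 + p2/s2 + ... + pn/sn).
    static double overallSpeedup(double[] p, double[] s) {
        double remainingTime = 0;
        for (int i = 0; i < p.length; i++) {
            remainingTime += p[i] / s[i]; // fraction of the original time left for part i
        }
        return 1.0 / remainingTime;
    }

    public static void main(String[] args) {
        double[] p = {0.11, 0.18, 0.23, 0.48};
        double[] s = {1.0, 5.0, 20.0, 1.6};
        System.out.println(overallSpeedup(p, s)); // prints roughly 2.19
    }
}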

The formula demonstrates that the theoretical speedup of the entire program improves with the increase in the number/capacity of resources in the system and that, regardless of the magnitude of the improvement, the speedup of the entire program is always limited by the particular part that cannot benefit from the resource improvement.

Consider a program that needs about 20 hours to complete on a single processor. A specific subtask of the program that takes one hour to execute cannot be parallelized, while the remaining 19 hours of processing (p = 0.95 of the total execution time) can be executed in parallel. In this scenario, regardless of how many additional processors are dedicated to the parallel part, the execution time of the program can never be reduced below that minimum of 1 hour. Hence, the calculated improvement of the execution speed is limited to, at most, 20 times (calculated as 1/(1 - p) = 1/0.05 = 20). Parallel computing is therefore applicable only to those programs that have more scope for being split into subtasks/parallel parts, as observed in the following diagram.

However, Amdahl's law is applicable only to scenarios where the program is of a fixed size. In general, on larger problems (larger datasets), more computing resources tend to be used if they are available, and the overall time spent in the parallelizable part usually grows much faster than the inherently serial part.


Distributed computing

Distributed computing is the concurrent usage of more than one connected computer to solve a problem over a network connection. The computers that take part in distributed computing appear as a single machine to their users.

Distributing computation across multiple computers is a great approach when these computers interact with each other over a network to solve a bigger problem in reasonably low latency. In many respects, this is a generalization of the concepts of parallel computing that we discussed in the previous section. The purpose of enabling distributed systems includes the ability to confront a problem that is either too big or takes too long for an individual computer to process.

Distributed computing, the latest trend, is performed on a distributed system, which is considered to be a group of computers that do not share a common physical clock or a shared memory, and that interact by exchanging information over a communication (inter/intra) network, with each computer having its own memory and running its own operating system. Usually, the computers are semi-autonomous, loosely coupled, and cooperate to address a problem collectively.

Examples of distributed systems include the Internet, an intranet, and a Network of Workstations (NOW), which is a group of networked personal workstations connected to server machines, as represented in the preceding diagram. Modern-day internet connections include a home hub with multiple devices connected and operating on the network; search engines such as Google and services such as Amazon's are famous distributed systems. Three-dimensional animation movies from Pixar and DreamWorks are other trendy examples of distributed computing.

Given the number of frames to render for a full-length feature (at 30 frames per second, a 2-hour movie runs to more than 200,000 frames!), movie studios have the requirement of spreading the full rendering job across many computers.

In the preceding diagram, we can observe a web application, another illustration of a distributed application, where multiple users connect to the web application over the Internet/intranet. In this architecture, the web application is deployed on a web server, which interacts with a DB server for data persistence.

Other applications that require a distributed system configuration are instant messaging and video conferencing. The ability to solve such problems, along with improved performance, is the reason for choosing distributed systems.

The devices that can take part in distributed computing include server machines, workstations, and personal handheld devices.

Capabilities of distributed computing include integrating heterogeneous applications that are developed and run on different technologies and operating systems, multiple applications sharing common resources, a single instance of a service being reused by multiple clients, and a common user interface for multiple applications.


Parallel versus distributed computing

While both distributed computing and parallel systems are widely available these days, the main difference between the two is that a parallel computing system consists of multiple processors that communicate with each other through a shared memory, whereas a distributed computing system contains multiple processors connected by a communication network.

In parallel computing systems, as the number of processors increases, and with enough parallelism available in applications, such systems easily beat sequential systems in performance through the shared memory. In such systems, the processors can also have their own locally allocated memory, which is not available to any other processor.

In distributed computing systems, multiple system processors communicate with each other using messages sent over the network. Such systems are increasingly available these days because of the low price of computer processors and the availability of high-bandwidth links to connect them.
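
To make this message-passing style concrete, the following is a minimal, self-contained Java sketch (the port number 5555 and the message text are arbitrary assumptions) in which two endpoints exchange a single message over a TCP socket:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class MessagePassingDemo {
    public static void main(String[] args) throws Exception {
        // Bind the listening socket first so the client cannot connect too early.
        ServerSocket listener = new ServerSocket(5555);

        // The receiving endpoint: accept one connection and print the message.
        Thread server = new Thread(() -> {
            try (Socket conn = listener.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                System.out.println("Server received: " + in.readLine());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();

        // The sending endpoint: connect and send a single message.
        try (Socket socket = new Socket("localhost", 5555);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println("Hello over the network");
        }
        server.join();
        listener.close();
    }
}

In a real distributed system, the two endpoints would run in separate processes on separate machines; they are placed in one program here only to keep the sketch runnable.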

The following reasons explain why a system should be built distributed, not just parallel:

  • Scalability: As distributed systems do not have the problems associated with shared memory, they are regarded as more scalable than parallel systems as the number of processors increases.
  • Reliability: The reliability of such a connected system is defined by the impact that the failure of any single subsystem or computer has on the network of computers. Distributed systems have a clear advantage over parallel systems in this area.
  • Data sharing: The data sharing provided by distributed systems is similar to that provided by distributed databases. Thus, multiple organizations can have distributed systems with integrated applications for data exchange.
  • Resource sharing: If an expensive, special-purpose resource or processor cannot be dedicated to each processor in the system, such a resource can easily be shared across a distributed system.
  • Heterogeneity and modularity: A system should be flexible enough to accept a new heterogeneous processor, and to have a processor replaced or removed, without affecting the overall processing capability. Distributed systems are more flexible in this respect.
  • Geographic construction: The geographic placement of the different subsystems of an application may be inherently distributed. Low communication bandwidth, particularly within a wireless network, may force local processing.
  • Economic: With the evolution of modern computers, high-bandwidth networks and workstations are available at low cost, which also favors distributed computing for economic reasons.


Design considerations for distributed systems

Following are some of the characteristics of distributed systems that should be considered when designing a project in a distributed environment:

No global clock: Being distributed across the world, distributed systems cannot be expected to have a common clock, and this creates intrinsic asynchrony between the processors performing the computing. Distributed system coordination usually depends on a shared notion of the time at which a program or business state change occurs. However, with no global clock, it is a challenge to attain the accuracy with which the computers in the network can synchronize their clocks to reflect the time at which an expected program execution happened. This limitation requires the systems in the network to communicate through messages instead of time-based events.
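
A classic message-based technique for ordering events without a global clock is the logical clock. The following is a minimal sketch of a Lamport clock (an illustration of the idea, not code from this book):

public class LamportClock {
    private long time = 0;

    // Any local event advances the clock.
    public synchronized long tick() {
        return ++time;
    }

    // When sending a message, advance the clock and send the timestamp with it.
    public synchronized long onSend() {
        return ++time;
    }

    // On receipt, merge the sender's timestamp with our own, then advance.
    // This guarantees that a message's send event is ordered before its receipt.
    public synchronized long onReceive(long senderTimestamp) {
        time = Math.max(time, senderTimestamp) + 1;
        return time;
    }
}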

Geographical distribution: The individual systems taking part in a distributed system are expected to be connected through a network, previously through a Wide Area Network (WAN), and now through a Network of Workstations/Cluster of Workstations (NOW/COW). An in-house distributed system is expected to be configured within LAN connectivity. NOW is becoming widespread in the market with its low-cost, high-speed, off-the-shelf processing capability; the Google search engine and Amazon are popular examples built on such architectures.

No shared memory: A key feature of distributed computing and the message-passing model of communication is the absence of shared memory, which also implies the absence of a common physical clock.

Independence and heterogeneity: The distributed system processors are loosely coupled; they have their own individual capabilities in terms of speed and method of execution, with versatile operating systems. They are not expected to be part of a dedicated system; rather, they cooperate with one another by exposing services and/or executing tasks together as subtasks.

Fail-over mechanism: Computer systems fail, and it is a design responsibility to set the expected behavior for the consequences of possible failures. Distributed systems can fail as a whole as well as in their individual subsystems. A fault in the network can isolate an individual computer or a group of computers in the distributed system; however, they might still be executing the programs they are expected to execute, and the individual programs may not be able to detect such network failures or timeouts. Similarly, the failure of a particular computer, or a system being terminated abruptly by a program or system failure, may not immediately be known by the other systems/components with which the failed computer usually communicates. The consequences of this characteristic of distributed systems have to be captured in the system design.

Security concerns: Distributed systems set up on a shared Internet are prone to more unauthorized attacks and vulnerabilities.

Distributed systems are becoming increasingly popular with their ability to allow the pooling of resources, including CPU cycles, data storage, devices, and services, becoming increasingly economical. Distributed systems are more reliable, as the replication of resources and services reduces service outages due to individual system failures. The cost, speed, and availability of the Internet make it a decent platform on which to maintain distributed systems.


Java support

From standalone applications to web applications to the sophisticated cloud integration of enterprises, Java has kept updating itself to accommodate various features that support this change. In particular, frameworks like Spring have come up with modules like Spring Boot, Batch, and Integration, which comply with most cloud integration features. As a language, Java has great support for programs written using multithreaded distributed objects. In this model, an application contains numerous heavyweight processes that communicate using messaging or Remote Method Invocation (RMI). Each heavyweight process contains several lightweight processes, which are termed threads in Java. Threads can communicate through shared memory. Such a software architecture reflects the hardware that is configured to be extensively accessible.
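
To make the RMI style concrete, here is a minimal sketch of a remote object (the GreetingService interface, its name binding, and registry port 1099 are illustrative assumptions, not code from this book):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: every remotely callable method declares RemoteException.
interface GreetingService extends Remote {
    String greet(String name) throws RemoteException;
}

public class GreetingServer implements GreetingService {
    public String greet(String name) throws RemoteException {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws Exception {
        // Export the object so it can receive remote calls, then bind it by name.
        GreetingService stub = (GreetingService)
                UnicastRemoteObject.exportObject(new GreetingServer(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeting", stub);
        System.out.println("GreetingService bound and ready");
        // A client in another JVM would obtain a proxy with:
        // GreetingService svc = (GreetingService) LocateRegistry
        //         .getRegistry("localhost", 1099).lookup("greeting");
    }
}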

By assuming that there is, at most, one thread per process, or by ignoring the parallelism within one process, we get the usual model of a distributed system. Keeping the model logically simple in this way makes the distributed program more object-oriented, because data in a remote object can be accessed only through explicit messaging or a remote procedure call (RPC).

The object-oriented model promotes reusability as well as design simplicity. Furthermore, a large shared data structure requires shared processing, which is possible through object orientation and letting the process of execution be multithreaded. The program carries the responsibility of splitting the larger data structure across multiple heavyweight processes.

A programming language that wants to support concurrent programming should be able to express the process structure, and how several processes communicate with each other and synchronize. There are many ways a program can specify the process structure or create a new process. For example, UNIX processes are tree-structured, with a unique process ID (pid) for each process; fork and wait are the calls to create and synchronize processes. The fork call creates a child process with a copy of the parent process's address space:

#include <iostream>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();                      // create a child process
    if (pid != 0) {
        std::cout << "This is a parent process\n";
        wait(NULL);                          // wait for the child to finish
    } else {
        std::cout << "This is a child process\n";
    }
    return 0;
}

Java has a predefined class called Thread to enable concurrency through creating thread objects. A class that should be executed in a separate thread can extend the Thread class, override the run() method, and call the start() method to launch that thread:

public class NewThread extends Thread {
    public void run() {
        System.out.println("New Thread executing!");
    }

    public static void main(String[] args) {
        Thread t1 = new NewThread();
        t1.start();
    }
}

In cases where a class has to extend another class and still execute as a new thread, Java supports this behavior through the Runnable interface, as shown in the following example:

// Animal.java
public class Animal {
    String name;

    public Animal(String name) {
        this.name = name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return this.name;
    }
}

// Mammal.java
public class Mammal extends Animal implements Runnable {
    public Mammal(String name) {
        super(name);
    }

    public void run() {
        for (int i = 0; i < 100; i++) {
            System.out.println("The name of the Animal is : " + this.getName());
        }
    }

    public static void main(String[] args) {
        Animal firstAnimal = new Mammal("Tiger");
        Thread threadOne = new Thread((Runnable) firstAnimal);
        threadOne.start();
        Animal secondAnimal = new Mammal("Elephant");
        Thread threadTwo = new Thread((Runnable) secondAnimal);
        threadTwo.start();
    }
}

In the following example of Fibonacci numbers, a thread waits for the completion of other threads using the join() mechanism. Threads can also carry a priority, to set the importance of one thread over another for execution:


public class Fib extends Thread {
    private int x;
    public int answer;

    public Fib(int x) {
        this.x = x;
    }

    public void run() {
        if (x <= 2) {
            answer = 1;
        } else {
            try {
                Fib f1 = new Fib(x - 1);
                Fib f2 = new Fib(x - 2);
                f1.start();
                f2.start();
                f1.join(); // wait for both subtask threads to finish
                f2.join();
                answer = f1.answer + f2.answer;
            } catch (InterruptedException ex) {
            }
        }
    }

    public static void main(String[] args) throws Exception {
        try {
            Fib f = new Fib(Integer.parseInt(args[0]));
            f.start();
            f.join();
            System.out.println(f.answer);
        } catch (Exception ex) {
            System.err.println("usage: java Fib NUMBER");
        }
    }
}

Since Java 8, the Callable interface carries the @FunctionalInterface annotation. With the help of this feature, we can create Callable objects using lambda expressions, as follows:

Callable<Integer> callableObject = () -> { return 5 + 9; };

The preceding expression is equivalent to the following code:

Callable<Integer> callableObject = new Callable<Integer>() {
    @Override
    public Integer call() throws Exception {
        return 5 + 9;
    }
};

Following is a complete example using the Callable and Future interfaces and a lambda expression for handling concurrent processing in Java 9:

package threads;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JavaCallableThreads {

    public static void main(String[] args) {
        final List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // Sum the list inside a Callable, executed on a worker thread.
        Callable<Integer> callableObject = () -> {
            int sum = numbers.stream().mapToInt(i -> i.intValue()).sum();
            return sum;
        };

        ExecutorService exService = Executors.newSingleThreadExecutor();
        Future<Integer> futureObj = exService.submit(callableObject);
        Integer futureSum = 0;
        try {
            futureSum = futureObj.get(); // blocks until the result is ready
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        exService.shutdown(); // release the worker thread so the JVM can exit
        System.out.println("Sum returned = " + futureSum);
    }
}

Modern Java enterprise applications have evolved through messaging (via message queues), web services, and microservices-based distributed applications packaged in containers like Docker, with applications deployed on cloud computing services like RedHat OpenShift, Amazon Web Services (AWS), and Google App Engine, and orchestrated with Kubernetes.

We will discuss the Java 9 support for such application development and deployment in detail in the coming chapters.


Summary

In this chapter, you learnt about the essential computing mechanisms, including centralized, parallel, and distributed computing. You also learnt about the types of parallel systems and their efficiency through Amdahl's law. We finished the chapter with an understanding of distributed computing, its strengths, and the aspects that should be considered in designing a distributed system.

In the next chapter, you will learn about communication between distributed systems, including sockets and streams, URLs, class loaders, and message-based systems, in detail.


Key benefits

  • Make the best of Java 9 features to write succinct code
  • Handle large amounts of data using HPC
  • Make use of AWS and Google App Engine along with Java to establish a powerful remote computation system

Description

Distributed computing is the concept by which a bigger computation process is accomplished by splitting it into multiple smaller logical activities performed by diverse systems, resulting in maximized performance with lower infrastructure investment. This book will teach you how to improve the performance of traditional applications through the usage of parallelism and optimized resource utilization in Java 9. After a brief introduction to the fundamentals of distributed and parallel computing, the book moves on to explain different ways of communicating with remote systems/objects in a distributed architecture. You will learn about asynchronous messaging with enterprise integration and related patterns, and how to handle large amounts of data using HPC and implement distributed computing for databases. Moving on, it explains how to deploy distributed applications on different cloud platforms and self-contained application development. You will also learn about big data technologies and understand how they contribute to distributed computing. The book concludes with detailed coverage of testing, debugging, troubleshooting, and security aspects of distributed applications so the programs you build are robust, efficient, and secure.

Who is this book for?

This book is for basic to intermediate level Java developers who are aware of object-oriented programming and basic Java concepts.

What you will learn

  • Understand the basic concepts of parallel and distributed computing/programming
  • Achieve performance improvement using parallel processing, multithreading, concurrency, memory sharing, and HPC cluster computing
  • Get an in-depth understanding of Enterprise Messaging concepts with Java Messaging Service and Web Services in the context of Enterprise Integration Patterns
  • Work with Distributed Database technologies
  • Understand how to develop and deploy a distributed application on different cloud platforms, including Amazon Web Services and Docker CaaS concepts
  • Explore big data technologies
  • Effectively test and debug distributed systems
  • Gain thorough knowledge of security standards for distributed applications, including two-way Secure Socket Layer
Product Details

Publication date : Jun 30, 2017
Length: 304 pages
Edition : 1st
Language : English
ISBN-13 : 9781787126992
Vendor : Oracle



Table of Contents

10 Chapters
  1. Quick Start to Distributed Computing
  2. Communication between Distributed Applications
  3. RMI, CORBA, and JavaSpaces
  4. Enterprise Messaging
  5. HPC Cluster Computing
  6. Distributed Databases
  7. Cloud and Distributed Computing
  8. Big Data Analytics
  9. Testing, Debugging, and Troubleshooting
  10. Security

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders; they are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

A customs duty or localized tax may be applicable on shipments to recipient countries outside the EU27. It would be charged by the recipient country, should be paid by the customer, and is not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $50, then to receive the package you will have to pay an additional import tax of 19%, which will be $9.50, to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over €22, then to receive the package you will have to pay an additional import tax of 18%, which will be €3.96, to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact [email protected] with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then when you receive it you can contact us at [email protected] using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect in the book); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team on [email protected] with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact the Customer Relations Team on [email protected] within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you, i.e. during download, then you should contact the Customer Relations Team within 14 days of purchase on [email protected], who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged, or with a book material defect, contact our Customer Relations Team on [email protected] within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal