Concurrent Patterns and Best Practices

Of patterns and paradigms

Moving away from explicit state management is a prominent theme in programming. We need a higher level of abstraction over the shared-state model, and, as explained earlier, explicit locking does not cut it.

The various concurrency patterns that we will study in this book try to stay away from explicit locking. For example, immutability is a major theme, giving us persistent data structures. A persistent data structure performs a smart copy on write, thereby avoiding mutation altogether, as shown in the following diagram:

As shown in the preceding diagram, the original linked list has three elements, {1, 2, 3}. The head element of the list has the value 1. Thread T1 starts counting the number of elements in the list.

At any point in time, thread T2 can prepend an element to the original list. This should not disturb the world of thread T1; it should still see the original list as it is. In other words, the version of the list that T1 sees is preserved. Any change to the list creates a new version of the data structure, and as all the versions live as long as they are needed (that is, they are persistent), we don't need any locking.

Similarly, thread T2 can remove the first two elements. This is achieved by just setting its head to the third element; again, this doesn't disturb the state as seen by T1.

This is essentially copy-on-write. Immutability is a cornerstone of functional programming languages.        
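
Scala's immutable List behaves exactly this way. Here is a minimal sketch of the structural sharing just described; the variable names follow the diagram:

    // "Modifying" a persistent list returns a new version that
    // shares structure with the old one; nothing is mutated.
    val original = List(1, 2, 3)     // the list T1 is counting

    val prepended = 0 :: original    // T2 prepends: one new cell,
                                     // the three old cells are shared

    val dropped = original.drop(2)   // T2 drops two elements: just a
                                     // reference into the old list

    println(original)    // List(1, 2, 3) -- T1's view is undisturbed
    println(prepended)   // List(0, 1, 2, 3)
    println(dropped)     // List(3)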

A typical concurrency pattern is an active object. For example, how would you consume a legacy code base from multiple threads? The code base was written without any parallelism in mind; the state is strewn around, and it is almost impossible to reason about.

A brute-force approach could be to just wrap up the code in a big God object. Each thread could lock this object, use it, and relinquish the lock. However, this design would hurt concurrency, as it means that other threads would simply have to wait! Instead, we could use an active object, as shown in the following diagram:

In this design, a proxy sits between the caller threads and the actual code base. It converts each invocation of the API into a runnable and puts it on a blocking queue (a thread-safe FIFO queue).

There is just one thread running in the God object. It executes the runnables on the queue one by one, in contrast to how a typical Java object method is invoked (passively). Here, the object itself executes the work placed on the queue, hence the term active object. 
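
Here is a bare-bones sketch of the pattern; the class and method names are illustrative, not from a real code base:

    import java.util.concurrent.LinkedBlockingQueue

    // The legacy, non-thread-safe code, wrapped up whole.
    class LegacyGodObject {
      private var counter = 0          // mutable state, strewn around
      def doWork(n: Int): Unit = {
        counter += n
        println(s"counter = $counter")
      }
    }

    // The proxy converts each API call into a runnable and queues it.
    class ActiveObject {
      private val legacy = new LegacyGodObject
      private val queue  = new LinkedBlockingQueue[Runnable]()

      // The single thread that lives inside the active object: it
      // drains the queue and executes the runnables one by one.
      private val worker = new Thread(() => while (true) queue.take().run())
      worker.setDaemon(true)
      worker.start()

      // Callers on any thread enqueue work instead of locking.
      def doWork(n: Int): Unit = queue.put(() => legacy.doWork(n))
    }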

The rest of this chapter describes the many patterns and paradigms that have evolved over the years to avoid explicit locking of shared state.

Event-driven architecture 

Event-driven programming is a programming style in which code executes in response to an event, such as a keypress or a mouse click. In short, the flow of a program is driven by events.

GUI programming is an example of event-driven programming. For example, the X Window System (which drives most Linux GUIs) processes a series of XEvents. Every keypress, mouse button press or release, and mouse movement generates a series of events. If you are on Linux, there is a command called xev; running it from a terminal spawns a window, and moving the mouse over that window or pressing keys shows you the events that are generated.

Here is a capture of the xev program on my Linux laptop: 

You can plug in a callback that gets triggered upon the reception of such an event. For example, an editor program could use keypress events to update its state (resulting in its document being edited). Traditional event-driven programming can create complex callback flows, making the control flow hard to follow in the code.

Event-driven architecture (EDA) helps in decoupling a system's modules. Components communicate using events, which are encapsulated in messages. A component that emits an event knows nothing about its consumers, which makes EDA extremely loosely coupled. The architecture is inherently asynchronous: the producer is oblivious to the consumers of its event messages. This process is shown in the following diagram:

Given one thread and an event loop, with the callbacks executing quickly, we have a simple and responsive architecture. How does all this relate to concurrency? There could be multiple event loops running on a pool of threads. Thread pooling is an essential concept, as we will see in the upcoming chapters.

As we have seen, an event loop manages events. Each event is passed on to an installed handler, where it is processed. The handler can react in one of two ways: it either succeeds or it fails. A failure is passed back to the event loop as just another event, and the handler for that failure event reacts accordingly.
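
The following toy sketch models such a loop, with a failure fed back into the queue as just another event; the event types are made up for illustration:

    import java.util.concurrent.LinkedBlockingQueue

    // Events are plain values; a failure is just another event.
    sealed trait Event
    case class KeyPress(key: Char)       extends Event
    case class Failure(error: Throwable) extends Event

    val events = new LinkedBlockingQueue[Event]()

    def handle(e: Event): Unit = e match {
      case KeyPress('!') => throw new IllegalArgumentException("bad key")
      case KeyPress(k)   => println(s"key: $k")
      case Failure(err)  => println(s"recovered from: ${err.getMessage}")
    }

    // One turn of the event loop: take an event and run its handler;
    // if the handler fails, post a Failure event back onto the queue.
    def loopOnce(): Unit =
      try handle(events.take())
      catch { case err: Exception => events.put(Failure(err)) }

    events.put(KeyPress('a'))
    events.put(KeyPress('!'))
    loopOnce(); loopOnce(); loopOnce()   // key: a, then the recovery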

 

Reactive programming

Reactive programming is a related programming paradigm. A spreadsheet is an excellent example of a reactive application. If we set a formula and change any column value, the spreadsheet program reacts and computes the new result columns.

A message-driven architecture is the foundation of Reactive applications. A message-driven application may be event-driven, actor-based, or a combination of the two.

The following is a diagram of observable composition:

Composable event streams make event handling easier to understand. Reactive Extensions (Rx) is a framework that provides composable observables. At the heart of this framework is the observer pattern, with a functional flavor. The framework allows us to compose multiple observables. The observers are given the resulting event stream in an asynchronous fashion. For more information, see http://reactivex.io/intro.html.  
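
The following toy sketch mimics that idea in plain Scala; it is not the Rx API, just the shape of it:

    // A minimal "observable": something a callback can subscribe to.
    trait Observable[A] { self =>
      def subscribe(observer: A => Unit): Unit

      // Composition: build new observables out of existing ones.
      def map[B](f: A => B): Observable[B] = new Observable[B] {
        def subscribe(observer: B => Unit): Unit =
          self.subscribe(a => observer(f(a)))
      }

      def filter(p: A => Boolean): Observable[A] = new Observable[A] {
        def subscribe(observer: A => Unit): Unit =
          self.subscribe(a => if (p(a)) observer(a))
      }
    }

    def just[A](as: A*): Observable[A] = new Observable[A] {
      def subscribe(observer: A => Unit): Unit = as.foreach(observer)
    }

    // Two composed stages, then an observer at the end of the stream:
    just(1, 2, 3, 4).filter(_ % 2 == 0).map(_ * 10).subscribe(println)
    // prints 20 and 40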

Function composition is shown in the following code, a minimal Scala sketch along the lines described below (the method bodies are illustrative):
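
    def m1(s: String): String = s + " one"
    def m2(s: String): String = s + " two"
    def m3(s: String): String = s + " three"
    def m4(s: String): String = s + " four"
    def m5(s: String): String = s + " five"

    // Convert each method to a function value and collect them in a list.
    val list: List[String => String] = List(m1 _, m2 _, m3 _, m4 _, m5 _)

    // Compose all five functions into one bigger function, f.
    val f: String => String = list.reduceRight(_ andThen _)

    println(f("hello"))   // hello one two three four five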

This Scala code shows five standalone methods. Each method is converted to a function and then collected in a variable, list. The reduceRight call iterates over this list and composes all the functions into a bigger one, f.

The f("hello") call shows that the composition has worked!

The actor paradigm

All this concurrent programming is tricky: how do we get synchronization and visibility right? What if we could go back to our simpler sequential programming model and let the platform handle concurrency for us?

Look at the following diagram:

Actors are an abstraction over threads. We write our code using the message-passing model only: the only way of talking to an actor is by sending it a message.

Looking back at our UNIX shell model, the concurrency is there, but we don't deal with it directly. Using actors, we write code as if it were for a sequential message processor.

We need to be aware of the underlying threading model, though. For example, we should always prefer the tell pattern to the ask pattern. The tell pattern is where we send a message to an actor and then forget about it; that is, we don't block for an answer. This is essentially the asynchronous way of doing things.
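
Here is a minimal sketch using Akka's classic actor API (this assumes Akka is on the classpath; the actor itself is illustrative):

    import akka.actor.{Actor, ActorSystem, Props}

    class Printer extends Actor {
      def receive: Receive = {
        case msg: String => println(s"processing: $msg")
      }
    }

    val system  = ActorSystem("demo")
    val printer = system.actorOf(Props[Printer], "printer")

    // tell (!) is fire-and-forget: we don't block for an answer.
    printer ! "hello"
    printer ! "world"

    // (a real program would call system.terminate() on shutdown)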

An actor is a lightweight entity (threads are heavyweight). In terms of cost, creating and destroying actors is similar to creating and destroying Java objects. Just as we don't think of the cost while designing a UNIX pipeline (we focus largely on getting our job done), actors give us the same freedom.

Actors also allow us to add supervision and restart capabilities, thereby allowing us to write robust, resilient systems. This is the let it crash philosophy. 

Actors are a pretty old design; the paradigm was tried and tested in the telecom domain using the Erlang language.

We will be looking at the actor model and the Akka library in detail in an upcoming chapter.   

Message brokers

A message broker is an architectural pattern for enabling application integration via a message-driven paradigm. You can, for example, write a Python application and integrate it with another that is written in C (or Java). Integrations are vital in an enterprise, where different applications are made to cooperate with each other.

Concurrent processing is obviously implied here. As the producers and consumers are completely decoupled (they don't even know if the others exist), the producer and consumer applications could even run on different machines, thereby overlapping the processing and increasing the overall throughput: 

Decoupling is really a central concept when you start thinking about concurrent systems. Designing systems consisting of loosely coupled component systems gives us many benefits. For example, we could reuse the components, which allows us to cut down on development and maintenance costs. It also paves the way for enabling greater concurrency. 

What happens when a producer emits messages too fast? The messages will be buffered in the broker, which means there is an inherent flow control mechanism at work here. A slow consumer can consume at its own pace; likewise, the producer can produce messages at its own (faster) pace. As both are oblivious of each other, the overall system works smoothly.
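
In-process, a bounded blocking queue gives the same flow control; here is a sketch with the queue standing in for the broker:

    import java.util.concurrent.ArrayBlockingQueue

    val broker = new ArrayBlockingQueue[String](100)   // a bounded buffer

    val producer = new Thread(() => {
      var i = 0
      while (true) {
        broker.put(s"message-$i")   // blocks when the buffer is full,
        i += 1                      // so the fast producer is throttled
      }
    })

    val consumer = new Thread(() => {
      while (true) {
        val msg = broker.take()     // blocks when the buffer is empty
        Thread.sleep(100)           // a slow consumer, at its own pace
        println(s"consumed $msg")
      }
    })

    producer.start()
    consumer.start()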

Software transactional memory

The idea of database transactions is also based around concurrent reads and writes. A transaction embodies an atomic operation, which means that either all or none of the steps in the operation are completed. If all the operations are completed, the transaction succeeds; otherwise, the transaction aborts. Software transactional memory (STM) is a concurrency control mechanism along similar lines. It, again, is a different paradigm: an alternative to lock-based synchronization.

Just like a database transaction, a thread makes modifications and then tries to commit the changes. If some other transaction wins, we roll back and retry; likewise, if there is an error, the transaction aborts and we retry.

This scheme is called optimistic locking: we don't worry about other possible concurrent transactions. We just make our changes and hope the commit succeeds. If it fails, we keep retrying until it eventually succeeds.
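
At its smallest, this read-modify-try-to-commit loop can be sketched with a compare-and-set; this is bare optimistic retry, not a full STM:

    import java.util.concurrent.atomic.AtomicLong

    val balance = new AtomicLong(100)

    // Optimistic update: snapshot, compute, try to commit, retry on conflict.
    def deposit(amount: Long): Unit = {
      var done = false
      while (!done) {
        val seen = balance.get()                        // our snapshot
        done = balance.compareAndSet(seen, seen + amount)
        // if another thread committed in between, compareAndSet
        // fails and we retry with a fresh snapshot
      }
    }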

What are the benefits? We get increased concurrency, as there is no explicit locking, and all threads keep progressing; only in the case of a conflict will they retry.      

STM simplifies our understanding of multithreaded programs. This, in turn, helps make programs more maintainable. Each transaction can be expressed as a single-threaded computation, as shown in the following diagram. We don't have to worry about locking at all:

Composability is a big theme: lock-based programs do not compose. You cannot take two atomic operations and create a bigger atomic operation out of them; you need to specifically program a critical section around them. STM, on the other hand, can wrap these two operations inside a transaction block, as shown in the preceding diagram.
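
For example, with the ScalaSTM library (assuming scala-stm is on the classpath), two atomic account operations compose into one atomic transfer:

    import scala.concurrent.stm._

    val from = Ref(100)
    val to   = Ref(0)

    def withdraw(acct: Ref[Int], n: Int)(implicit txn: InTxn): Unit =
      acct() = acct() - n

    def deposit(acct: Ref[Int], n: Int)(implicit txn: InTxn): Unit =
      acct() = acct() + n

    // The two operations compose: the whole transfer commits (or
    // retries) as one atomic unit, with no hand-rolled critical section.
    def transfer(n: Int): Unit = atomic { implicit txn =>
      withdraw(from, n)
      deposit(to, n)
    }

    transfer(30)
    println((from.single(), to.single()))   // (70,30)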

Parallel collections 

Say that I am describing some new and exciting algorithm to you, and I start telling you about how the algorithm exploits hash tables. We typically think of such data structures as residing entirely in memory, locked (if required), and worked upon by one thread.

For example, take a list of numbers. Say that we want to sum all these numbers. This operation could be parallelized on multiple cores by using threads.

Now, we need to stay away from explicit locking. An abstraction that works concurrently on our list would be nice: it would split the list, run the function on each sublist, and collate the results at the end, as shown in the following diagram. This is the typical MapReduce paradigm in action:

The preceding diagram shows a Scala collection that has been parallelized in order to use concurrency internally.  
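
In Scala, this is one method call away. A sketch (in Scala 2.13+ the parallel collections live in the separate scala-parallel-collections module; in 2.12, .par is built in):

    // Scala 2.13+: add org.scala-lang.modules:scala-parallel-collections
    import scala.collection.parallel.CollectionConverters._

    val numbers = (1L to 1000000L).toVector

    // .par splits the collection, sums the chunks on a thread pool,
    // and collates the partial sums -- no explicit locking anywhere.
    val total = numbers.par.sum
    println(total)   // 500000500000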

What if the data structure is so large that it cannot all fit in the memory of a single machine? We could split the collection across a cluster of machines instead.

The Apache Spark framework does this for us. Spark's Resilient Distributed Dataset (RDD) is a partitioned collection that spreads the data structure across cluster machines, and thus can work on huge collections, typically to perform analytical processing.   
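
A small sketch of the same summation over an RDD, assuming a local Spark setup:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("rdd-sum")
      .master("local[*]")          // use all cores on this machine
      .getOrCreate()

    // The collection is partitioned; each partition is summed where its
    // data lives, and the partial results are then combined.
    val rdd = spark.sparkContext.parallelize(1L to 1000000L)
    println(rdd.reduce(_ + _))     // 500000500000

    spark.stop()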
