Learning Concurrent Programming in Scala


Concurrent programming

In concurrent programming, we express a program as a set of concurrent computations that execute during overlapping time intervals and coordinate in some way. Implementing a concurrent program that functions correctly is usually much harder than implementing a sequential one. All the pitfalls present in sequential programming lurk in every concurrent program, but there are many other things that can go wrong, as we will learn in this book. A natural question arises: why bother? Can't we just keep writing sequential programs?

Concurrent programming has multiple advantages. First, increased concurrency can improve program performance. Instead of executing the entire program on a single processor, different subcomputations can be performed on separate processors, making the program run faster. With the spread of multicore processors, this is the primary reason why concurrent programming is receiving so much attention today.

A concurrent programming model can result in faster I/O operations. A purely sequential program must periodically poll I/O to check if there is any data input available from the keyboard, the network interface, or some other device. A concurrent program, on the other hand, can react to I/O requests immediately. For I/O-intensive operations, this results in improved throughput, and is one of the reasons why concurrent programming support existed in programming languages even before the appearance of multiprocessors. Thus, concurrency can ensure the improved responsiveness of a program that interacts with the environment.

Finally, concurrency can simplify the implementation and maintainability of computer programs. Some programs can be represented more concisely using concurrency. It can be more convenient to divide the program into smaller, independent computations than to incorporate everything into one large program. User interfaces, web servers, and game engines are typical examples of such systems.

In this book, we adopt the convention that concurrent programs communicate through the use of shared memory, and execute on a single computer. By contrast, a computer program that executes on multiple computers, each with its own memory, is called a distributed program, and the discipline of writing such programs is called distributed programming. Typically, a distributed program must assume that each of the computers can fail at any point, and provide some safety guarantees if this happens. We will mostly focus on concurrent programs, but we will also look at examples of distributed programs.

A brief overview of traditional concurrency

In a computer system, concurrency can manifest itself in the computer hardware, at the operating system level, or at the programming language level. We will focus mainly on programming-language-level concurrency.

The coordination of multiple executions in a concurrent system is called synchronization, and it is a key part of successfully implementing concurrency. Synchronization includes mechanisms used to order concurrent executions in time. Furthermore, synchronization specifies how concurrent executions communicate, that is, how they exchange information. In concurrent programs, different executions interact by modifying the shared memory subsystem of the computer. This type of synchronization is called shared memory communication. In distributed programs, executions interact by exchanging messages, so this type of synchronization is called message-passing communication.

At the lowest level, concurrent executions are represented by entities called processes and threads, covered in Chapter 2, Concurrency on the JVM and the Java Memory Model. Processes and threads traditionally use entities such as locks and monitors to order parts of their execution. Establishing an order between the threads ensures that the memory modifications done by one thread are visible to a thread that executes later.
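To illustrate this low-level style, the following minimal sketch (the object and variable names are ours, chosen for illustration) starts a worker thread with a custom run method; synchronizing on a shared lock, together with join, ensures that the main thread sees the worker's write:

    object ThreadVisibility extends App {
      private val lock = new AnyRef
      private var result: Option[String] = None

      // A thread with a custom run method; the synchronized block on the
      // shared lock orders this write with the later read in the main thread.
      val worker = new Thread {
        override def run(): Unit = lock.synchronized {
          result = Some("computed concurrently")
        }
      }

      worker.start()
      worker.join() // wait until the worker thread completes
      lock.synchronized {
        println(result) // reliably prints Some(computed concurrently)
      }
    }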

Often, expressing concurrent programs using threads and locks is cumbersome. More complex concurrent facilities have been developed to address this, such as communication channels, concurrent collections, barriers, countdown latches, and thread pools. These facilities are designed to more easily express specific concurrent programming patterns, and some of them are covered in Chapter 3, Traditional Building Blocks of Concurrency.
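As a small taste of these facilities, the following sketch (the names are illustrative) submits tasks to a thread pool from java.util.concurrent and gathers the results in a concurrent collection, without creating or joining threads by hand:

    import java.util.concurrent.{ConcurrentLinkedQueue, Executors, TimeUnit}

    object ThreadPoolTaste extends App {
      val results = new ConcurrentLinkedQueue[Int]()
      val pool = Executors.newFixedThreadPool(4)

      // Submit eight small tasks; the pool decides which worker threads run them.
      for (i <- 1 to 8) pool.execute(new Runnable {
        def run(): Unit = results.add(i * i)
      })

      pool.shutdown()
      pool.awaitTermination(5, TimeUnit.SECONDS)
      println(results)
    }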

Traditional concurrency is relatively low-level and prone to various kinds of errors, such as deadlocks, starvation, data races, and race conditions. You will usually not use low-level concurrency primitives when writing concurrent Scala programs. Still, a basic knowledge of low-level concurrent programming will prove invaluable in understanding high-level concurrency concepts later.

Modern concurrency paradigms

Modern concurrency paradigms are more advanced than traditional approaches to concurrency. Here, the crucial difference lies in the fact that a high-level concurrency framework expresses which goal to achieve, rather than how to achieve that goal.

In practice, the difference between low-level and high-level concurrency is less clear, and different concurrency frameworks form a continuum rather than two distinct groups. Still, recent developments in concurrent programming show a bias towards declarative and functional programming styles.

As we will see in Chapter 2, Concurrency on the JVM and the Java Memory Model, computing a value concurrently requires creating a thread with a custom run method, invoking the start method, waiting until the thread completes, and then inspecting specific memory locations to read the result. Here, what we really want to say is compute some value concurrently, and inform me when you are done. Furthermore, we would prefer to use a programming model that abstracts over the coordination details of the concurrent computation, to treat the result of the computation as if we already have it, rather than having to wait for it and then reading it from the memory. Asynchronous programming using futures is a paradigm designed to specifically support these kinds of statements, as we will learn in Chapter 4, Asynchronous Programming with Futures and Promises. Similarly, reactive programming using event streams aims to declaratively express concurrent computations that produce many values, as we will see in Chapter 6, Concurrent Programming with Reactive Extensions.
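To make the contrast concrete, here is a minimal sketch in the future-based style, using Scala's standard Future API (the object name and the artificial computation are only illustrative); the computation starts concurrently and we react to its result, instead of joining a thread and reading shared memory:

    import scala.concurrent.Future
    import scala.concurrent.ExecutionContext.Implicits.global

    object FutureTaste extends App {
      // Start the computation concurrently; the Future stands for its eventual result.
      val sum: Future[Int] = Future {
        (1 to 1000000).sum
      }

      // "Inform me when you are done" - react to the result instead of polling memory.
      sum.foreach(result => println(s"Sum computed: $result"))

      Thread.sleep(1000) // crude wait so that the JVM does not exit before the callback runs
    }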

The declarative programming style is increasingly common in sequential programming too. Languages such as Python, Haskell, Ruby, and Scala express operations on their collections in terms of functional operators and allow statements such as filter all negative integers from this collection. This statement expresses a goal rather than the underlying implementation, making it easy to parallelize such an operation behind the scenes. Chapter 5, Data-Parallel Collections, describes the data-parallel collections framework available in Scala, which is designed to accelerate collection operations using multicores.
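For instance, the following short sketch expresses the filtering statement above in Scala; calling par (built into the standard library up to Scala 2.12, and provided by the separate scala-parallel-collections module in later versions) turns the same declarative operation into a data-parallel one:

    val numbers = (-1000000 to 1000000).toVector

    // The statement describes the goal, not the execution strategy.
    val negatives = numbers.filter(_ < 0)

    // The same goal, executed by the data-parallel collections framework.
    val negativesPar = numbers.par.filter(_ < 0)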

Another trend seen in high-level concurrency frameworks is specialization towards specific tasks. Software transactional memory technology is specifically designed to express memory transactions and does not deal with how to start concurrent executions at all. A memory transaction is a sequence of memory operations that appear as if they either execute all at once or do not execute at all. This is similar to the concept of database transactions. The advantage of using memory transactions is that they avoid many of the errors typically associated with low-level concurrency. Chapter 7, Software Transactional Memory, explains software transactional memory in detail.
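The following sketch hints at this style with the ScalaSTM library (the account names and amounts are illustrative); other threads can never observe a state in which only one of the two updates inside the transaction has taken effect:

    import scala.concurrent.stm._

    object StmTaste extends App {
      val checking = Ref(100)
      val savings  = Ref(0)

      // A memory transaction: both updates appear to happen at once, or not at all.
      def transfer(amount: Int): Unit = atomic { implicit txn =>
        checking() = checking() - amount
        savings()  = savings() + amount
      }

      transfer(40)
      println((checking.single(), savings.single())) // prints (60,40)
    }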

Finally, some high-level concurrency frameworks aim to transparently provide distributed programming support as well. This is especially true for data-parallel frameworks and message-passing concurrency frameworks, such as the actors described in Chapter 8, Actors.
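As a brief preview, the following minimal sketch defines an Akka actor (the class name and messages are illustrative); because the actor communicates only through messages, frameworks of this kind can transparently place such actors on different machines:

    import akka.actor._

    // An actor keeps its state private and reacts to messages one at a time.
    class Greeter extends Actor {
      def receive = {
        case name: String => println(s"Hello, $name!")
      }
    }

    object ActorTaste extends App {
      val system = ActorSystem("greetings")
      val greeter = system.actorOf(Props[Greeter], "greeter")
      greeter ! "concurrent world" // asynchronous, fire-and-forget message send
      Thread.sleep(500)            // give the actor a moment to process the message
      system.terminate()
    }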
