Effective Concurrency in Go

You're reading from Effective Concurrency in Go: Develop, analyze, and troubleshoot high performance concurrent applications with ease

Product type: Paperback
Published in: Apr 2023
Publisher: Packt
ISBN-13: 9781804619070
Length: 212 pages
Edition: 1st Edition
Author: Burak Serdar

Mutex

Mutex is short for mutual exclusion. It is a synchronization mechanism that ensures only one goroutine at a time can enter a critical section while the others wait.

A mutex is ready to be used when declared. Once declared, a mutex offers two basic operations: lock and unlock. A mutex can be locked only once, so if a goroutine locks a mutex, all other goroutines attempting to lock it will block until the mutex is unlocked. This ensures only one goroutine enters a critical section.

Typical uses of mutexes are as follows:

var m sync.Mutex
func f() {
    m.Lock()
    // Critical section
    m.Unlock()
}
func g() {
    m.Lock()
    defer m.Unlock()
    // Critical section
}

To ensure mutual exclusion for a critical section, the mutex must be a shared object. That is, the mutex protecting a particular critical section must be shared by all the goroutines that enter it.
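To make the sharing concrete, here is a minimal, self-contained sketch (the Counter type and the count of 100 goroutines are illustrative): every goroutine increments the same counter through the same mutex, so the final value is always 100.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter guards its count with a mutex. All goroutines share the same
// mutex because they all operate on the same *Counter value.
type Counter struct {
	mu sync.Mutex
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++ // critical section
}

// run launches 100 goroutines that all increment the shared counter.
func run() int {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	return c.n
}

func main() {
	fmt.Println(run()) // always prints 100
}
```

Without the mutex, the increments would race and the result would be unpredictable; with it, each `c.n++` is serialized.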

We will illustrate the use of mutexes with a realistic example. A common problem that has been solved many times is the caching problem: certain operations, such as expensive computations, I/O operations, or working with databases, are slow, so it makes sense to cache the results once you obtain them. But by definition, a cache is shared among many goroutines, so it must be thread-safe. The following example is a cache implementation that loads objects from a database and puts them in a map. If the object does not exist in the database, the cache also remembers that:

type Cache struct {
    mu sync.Mutex
    m  map[string]*Data
}

func (c *Cache) Get(ID string) (Data, bool) {
    c.mu.Lock()
    data, exists := c.m[ID]
    c.mu.Unlock()
    if exists {
        if data == nil {
            return Data{}, false
        }
        return *data, true
    }
    data, loaded := retrieveData(ID)
    c.mu.Lock()
    defer c.mu.Unlock()
    d, exists := c.m[ID]
    if exists {
        if d == nil {
            return Data{}, false
        }
        return *d, true
    }
    if !loaded {
        c.m[ID] = nil
        return Data{}, false
    }
    c.m[ID] = data
    return *data, true
}

The Cache structure includes a mutex. The Get method starts with locking the cache. This is because Cache.m is shared between goroutines, and all read or write operations involving Cache.m must be done by only one goroutine. If there are other cache requests ongoing at that moment, this call will block until the other goroutines are done.

The first critical section simply reads the map to see whether the requested object is already in the cache. Note the cache is unlocked as soon as the critical section is completed to allow other goroutines to enter their critical sections. If the requested object is in the cache, or if the nonexistence of that object is recorded in the cache, the method returns. Otherwise, the method retrieves the object from the database. Since the lock is not held during this operation, other goroutines may continue using the cache. This may cause other goroutines to load the same object as well.

Once the object is loaded, the cache is locked again because the loaded object must be put in the cache. This time, we can use defer c.mu.Unlock() to ensure the cache is unlocked once the method returns. There is a second check to see whether the object was already placed in the cache by another goroutine. This is possible because multiple goroutines can ask for the object using the same ID at the same time, and many goroutines may proceed to load the object from the database. Checking this again after acquiring the lock makes sure that if another goroutine has already put the object into the cache, it will not be overwritten with a new copy.
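To exercise the cache concurrently, here is a self-contained sketch. The Data type and retrieveData are hypothetical stand-ins (retrieveData simulates a database lookup that misses for the ID "missing"), and the second lookup explicitly handles a nil entry cached by another goroutine:

```go
package main

import (
	"fmt"
	"sync"
)

type Data struct {
	ID    string
	Value string
}

// retrieveData is a stand-in for a slow database lookup (hypothetical).
func retrieveData(ID string) (*Data, bool) {
	if ID == "missing" {
		return nil, false
	}
	return &Data{ID: ID, Value: "value-for-" + ID}, true
}

type Cache struct {
	mu sync.Mutex
	m  map[string]*Data
}

func (c *Cache) Get(ID string) (Data, bool) {
	c.mu.Lock()
	data, exists := c.m[ID]
	c.mu.Unlock()
	if exists {
		if data == nil {
			return Data{}, false // a miss was recorded earlier
		}
		return *data, true
	}
	data, loaded := retrieveData(ID) // lock not held during the slow call
	c.mu.Lock()
	defer c.mu.Unlock()
	if d, ok := c.m[ID]; ok { // another goroutine may have filled the entry
		if d == nil {
			return Data{}, false
		}
		return *d, true
	}
	if !loaded {
		c.m[ID] = nil // remember the miss
		return Data{}, false
	}
	c.m[ID] = data
	return *data, true
}

func main() {
	c := &Cache{m: make(map[string]*Data)}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Get("a")       // all goroutines ask for the same ID
			c.Get("missing") // the miss is cached as a nil entry
		}()
	}
	wg.Wait()
	d, ok := c.Get("a")
	fmt.Println(d.Value, ok) // value-for-a true
}
```

Several goroutines may race to load "a", but the double-check under the write lock guarantees only one copy ends up in the map.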

An important point to note here is that mutexes should not be copied. When you copy a mutex, you end up with two mutexes, the original and the copy, and locking the original does not prevent the copy from being locked as well. The go vet tool catches such mistakes. For instance, declaring the cache Get method with a value receiver instead of a pointer receiver copies the cache struct, and with it the mutex:

func (c Cache) Get(ID string) (Data, bool) {…}

This copies the mutex at every call, so all concurrent Get calls enter the critical section with no mutual exclusion.

A mutex does not keep track of which goroutine locked it. This has some implications. First, locking a mutex twice from the same goroutine will deadlock that goroutine. This is a common problem with multiple functions that can call each other and also lock the same mutex:

var m sync.Mutex
func f() {
    m.Lock()
    defer m.Unlock()
    // process
}
func g() {
    m.Lock()
    defer m.Unlock()
    f() // Deadlock
}

Here, the g() function calls the f() function, but the m mutex is already locked, so f deadlocks. One way to correct this problem is to declare two versions of f, one with a lock and one without:

func f() {
    m.Lock()
    defer m.Unlock()
    fUnlocked()
}
func fUnlocked() {
    // process
}
func g() {
    m.Lock()
    defer m.Unlock()
    fUnlocked()
}

Second, there is nothing preventing an unrelated goroutine from unlocking a mutex locked by another goroutine. Such bugs tend to appear after refactoring an algorithm and forgetting to update the mutex names in the process, and they can be very subtle.
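This ownerless behavior is easy to demonstrate. In the sketch below (the handoff function is illustrative), one goroutine locks the mutex and a different goroutine unlocks it; the subsequent Lock then succeeds without complaint:

```go
package main

import (
	"fmt"
	"sync"
)

// handoff shows that sync.Mutex has no notion of an owner: the goroutine
// that unlocks need not be the one that locked.
func handoff() string {
	var m sync.Mutex
	m.Lock()
	released := make(chan struct{})
	go func() {
		m.Unlock() // legal in Go, but a frequent source of subtle bugs
		close(released)
	}()
	<-released
	m.Lock() // succeeds: an unrelated goroutine released the lock
	m.Unlock()
	return "relocked"
}

func main() {
	fmt.Println(handoff())
}
```

Deliberate lock handoff between goroutines is occasionally useful, but an accidental unlock from the wrong goroutine silently destroys mutual exclusion.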

The functionality of a mutex can be replicated using a channel with a buffer size of 1:

var mutexCh = make(chan struct{}, 1)
func Lock() {
    mutexCh <- struct{}{}
}
func Unlock() {
    select {
    case <-mutexCh:
    default:
    }
}
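A sketch of this idea in use follows. Here the channel-backed lock guards a shared counter; note that this variant's Unlock does a plain receive, so unlike the select-based Unlock above, releasing an unlocked mutex blocks instead of being a no-op (the chanMutex type is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// chanMutex replicates a mutex with a channel of capacity 1:
// a send acquires the lock, a receive releases it.
type chanMutex struct{ ch chan struct{} }

func newChanMutex() chanMutex { return chanMutex{ch: make(chan struct{}, 1)} }

func (m chanMutex) Lock()   { m.ch <- struct{}{} }
func (m chanMutex) Unlock() { <-m.ch }

// run increments a shared counter from 100 goroutines, all serialized
// by the channel-backed lock.
func run() int {
	mu := newChanMutex()
	n := 0
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			n++ // critical section guarded by the channel
			mu.Unlock()
		}()
	}
	wg.Wait()
	return n
}

func main() {
	fmt.Println(run()) // always prints 100
}
```

Channel sends and receives also establish the happens-before edges the Go memory model requires, so the increments are properly synchronized, not merely serialized.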

Many times, such as in the preceding cache example, there are two types of critical sections: one for the readers and one for the writers. The critical section for the readers allows multiple readers to enter but does not allow a writer in until all readers are done. The critical section for the writers excludes all other writers and all readers. This means there can be many concurrent readers of a structure, but only one writer. For this, sync.RWMutex can be used: it allows either multiple readers or a single writer to hold the lock. The modified cache is shown as follows:

type Cache struct {
    mu sync.RWMutex // Use a read/write mutex
    m  map[string]*Data
}

func (c *Cache) Get(ID string) (Data, bool) {
    c.mu.RLock()
    data, exists := c.m[ID]
    c.mu.RUnlock()
    if exists {
        if data == nil {
            return Data{}, false
        }
        return *data, true
    }
    data, loaded := retrieveData(ID)
    c.mu.Lock()
    defer c.mu.Unlock()
    d, exists := c.m[ID]
    if exists {
        if d == nil {
            return Data{}, false
        }
        return *d, true
    }
    if !loaded {
        c.m[ID] = nil
        return Data{}, false
    }
    c.m[ID] = data
    return *data, true
}

Note that the first lock is a reader lock. It allows many reader goroutines to execute concurrently. Once it is determined that the cache needs to be updated, a writer lock is used.
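That readers really do share the lock can be shown deterministically. In the sketch below (the choreography with channels is illustrative), goroutine B acquires a read lock while goroutine A is still holding one; if RLock were exclusive, the two would deadlock instead:

```go
package main

import (
	"fmt"
	"sync"
)

// demo returns true after two goroutines have held the read lock at the
// same time. With an exclusive lock, the same choreography would deadlock.
func demo() bool {
	var mu sync.RWMutex
	aHolds := make(chan struct{})
	bHolds := make(chan struct{})
	done := make(chan bool)

	go func() {
		mu.RLock()
		close(aHolds) // A now holds a read lock
		<-bHolds      // wait until B holds one too
		mu.RUnlock()
		done <- true
	}()
	go func() {
		<-aHolds
		mu.RLock() // does not block: readers share the lock
		close(bHolds)
		mu.RUnlock()
	}()
	return <-done
}

func main() {
	fmt.Println(demo()) // true
}
```

A writer calling mu.Lock() during this window would block until both RUnlock calls complete, which is exactly the reader/writer exclusion described above.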
