Introduction
In concurrent programming, managing shared resources and preventing race conditions is crucial. Go's sync package provides powerful primitives for exactly that. Let's dive into how to use these tools to write thread-safe concurrent programs.
What are Race Conditions?
A race condition occurs when multiple goroutines access shared resources concurrently, and at least one of them is modifying the data. Here's a simple example:
package main

func main() {
    counter := 0

    // This will likely produce inconsistent results
    for i := 0; i < 1000; i++ {
        go func() {
            counter++ // Race condition!
        }()
    }
}
The sync Package: Core Components
1. Mutex (Mutual Exclusion)
type SafeCounter struct {
    mu    sync.Mutex
    value int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}
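A quick usage sketch (it assumes the SafeCounter above plus the fmt and sync imports; the WaitGroup, covered below, is only there to wait for the goroutines): 1000 goroutines increment the counter, and because every access goes through the mutex, the final value is always 1000.

func demoSafeCounter() {
    var (
        c  SafeCounter
        wg sync.WaitGroup
    )
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Increment()
        }()
    }
    wg.Wait()
    fmt.Println(c.Value()) // Always 1000.
}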
2. RWMutex (Reader/Writer Mutex)
type SafeDataStore struct {
    mu   sync.RWMutex
    data map[string]string
}

func (s *SafeDataStore) Get(key string) string {
    s.mu.RLock()
    defer s.mu.RUnlock()
    return s.data[key]
}

func (s *SafeDataStore) Set(key, value string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.data[key] = value
}
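One detail to watch: as written, the data map is never initialized, so the first Set would panic with a nil map write (reads from a nil map are fine). A minimal constructor fixes that:

func NewSafeDataStore() *SafeDataStore {
    return &SafeDataStore{
        data: make(map[string]string),
    }
}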
3. WaitGroup
func processItems(items []int) {
    var wg sync.WaitGroup
    for _, item := range items {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            processItem(i)
        }(item)
    }
    wg.Wait()
}
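processItem is not defined above; here is a hypothetical stand-in and a call, just to make the snippet concrete (assuming fmt is imported):

// processItem is a placeholder for whatever per-item work you need.
func processItem(i int) {
    fmt.Println("processing", i)
}

func main() {
    processItems([]int{1, 2, 3, 4, 5})
}

Note that each goroutine receives item as an argument rather than capturing the loop variable; before Go 1.22 the loop variable was shared across iterations, so this detail matters.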
4. Once
type Singleton struct {
    data string
}

var (
    instance *Singleton
    once     sync.Once
)

func GetInstance() *Singleton {
    once.Do(func() {
        instance = &Singleton{data: "initialized"}
    })
    return instance
}
5. Pool
var bufferPool = sync.Pool{
    New: func() interface{} {
        return make([]byte, 1024)
    },
}

func processRequest() {
    buf := bufferPool.Get().([]byte)
    defer bufferPool.Put(buf)
    // Use buffer...
}
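Two caveats worth flagging: because Pool stores interface{} values, putting a plain []byte into it allocates on every Put, and a recycled buffer may still hold the previous user's data. A common variant (a sketch; the names are illustrative, and it assumes the bytes import) pools *bytes.Buffer values and resets them before reuse:

var bufPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func handleRequest() {
    buf := bufPool.Get().(*bytes.Buffer)
    buf.Reset() // Drop whatever the previous user left behind.
    defer bufPool.Put(buf)
    // Use buf...
}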
Advanced Synchronization Patterns
1. Multi-Resource Locking
type Account struct {
    id      int
    mu      sync.Mutex
    balance int
}

func transfer(from, to *Account, amount int) {
    // Prevent deadlocks by always locking both accounts in a consistent
    // order (here: ascending id). Pointers can't be compared with < in Go,
    // so an explicit id field serves as the ordering key.
    if from.id < to.id {
        from.mu.Lock()
        to.mu.Lock()
    } else {
        to.mu.Lock()
        from.mu.Lock()
    }
    defer func() {
        from.mu.Unlock()
        to.mu.Unlock()
    }()

    // Check the balance only after both locks are held; checking it
    // earlier would itself race with concurrent transfers.
    if from.balance < amount {
        return
    }
    from.balance -= amount
    to.balance += amount
}
2. Condition Variables (sync.Cond)
type Queue struct {
    cond    *sync.Cond
    items   []interface{}
    maxSize int
}

func NewQueue(size int) *Queue {
    return &Queue{
        cond:    sync.NewCond(&sync.Mutex{}),
        maxSize: size,
    }
}

func (q *Queue) Put(item interface{}) {
    q.cond.L.Lock()
    defer q.cond.L.Unlock()
    for len(q.items) == q.maxSize {
        q.cond.Wait()
    }
    q.items = append(q.items, item)
    q.cond.Signal()
}

func (q *Queue) Get() interface{} {
    q.cond.L.Lock()
    defer q.cond.L.Unlock()
    for len(q.items) == 0 {
        q.cond.Wait()
    }
    item := q.items[0]
    q.items = q.items[1:]
    q.cond.Signal()
    return item
}
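A minimal producer/consumer sketch using the Queue above (assuming fmt and sync are imported):

func demoQueue() {
    q := NewQueue(2)
    var wg sync.WaitGroup
    wg.Add(2)

    go func() { // producer
        defer wg.Done()
        for i := 0; i < 5; i++ {
            q.Put(i)
        }
    }()
    go func() { // consumer
        defer wg.Done()
        for i := 0; i < 5; i++ {
            fmt.Println(q.Get())
        }
    }()
    wg.Wait()
}

With a single producer and a single consumer, Signal is sufficient; if many producers and consumers wait on the same Cond, Broadcast is the safer choice, since Signal may wake a goroutine whose condition is still false.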
Race Detector
Go provides a built-in race detector. Enable it by adding the -race flag:

go test -race mypkg      # test the package
go run -race mysrc.go    # compile and run the program
go build -race mycmd     # build the command
Example of detecting a race condition:
func TestRace(t *testing.T) {
    data := make(map[int]int)
    var wg sync.WaitGroup
    wg.Add(2)
    // This will trigger the race detector
    go func() {
        defer wg.Done()
        data[1] = 1
    }()
    go func() {
        defer wg.Done()
        _ = data[1]
    }()
    wg.Wait()
}
Best Practices
1. Lock Granularity
// Bad: Too coarse-grained
type BadCache struct {
    mu   sync.Mutex
    data map[string]string
}

// Good: Fine-grained locking
type CacheEntry struct {
    mu    sync.RWMutex
    value string
}

type GoodCache struct {
    data map[string]*CacheEntry
}
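The fine-grained version shifts the locking onto individual entries. One assumption to make explicit: the outer data map in GoodCache is not protected, so a sketch like the following only works if the map is populated before concurrent access begins; otherwise the map itself needs its own lock as well.

func (c *GoodCache) Get(key string) (string, bool) {
    entry, ok := c.data[key] // Safe only if the map is read-only at this point.
    if !ok {
        return "", false
    }
    entry.mu.RLock()
    defer entry.mu.RUnlock()
    return entry.value, true
}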
2. Defer Unlock
// Always use defer for unlocking
func (c *Cache) Get(key string) string {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.data[key]
}
3. Composition with sync.Locker
type ThreadSafeQueue struct {
    sync.Mutex
    items []interface{}
}

func (q *ThreadSafeQueue) Push(item interface{}) {
    q.Lock()
    defer q.Unlock()
    q.items = append(q.items, item)
}
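For completeness, a matching Pop might look like this (a sketch, not part of the original example):

// Pop removes and returns the oldest item, reporting whether the queue
// was non-empty.
func (q *ThreadSafeQueue) Pop() (interface{}, bool) {
    q.Lock()
    defer q.Unlock()
    if len(q.items) == 0 {
        return nil, false
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item, true
}

Note the trade-off with embedding: sync.Mutex becomes part of the type's public API (callers can Lock it from outside), which is why many codebases prefer an unexported mu field instead.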
Common Patterns and Use Cases
1. Safe Lazy Initialization
type Resource struct {
    once sync.Once
    data *heavyData
}

func (r *Resource) getData() *heavyData {
    r.once.Do(func() {
        r.data = loadHeavyData()
    })
    return r.data
}
2. Concurrent Map Access
type ConcurrentMap struct {
    sync.RWMutex
    data map[string]interface{}
}

func (m *ConcurrentMap) Store(key string, value interface{}) {
    m.Lock()
    defer m.Unlock()
    m.data[key] = value
}

func (m *ConcurrentMap) Load(key string) (interface{}, bool) {
    m.RLock()
    defer m.RUnlock()
    val, ok := m.data[key]
    return val, ok
}
3. Worker Pool with WaitGroup
func processWorkItems(items []WorkItem) error {
    var (
        wg      sync.WaitGroup
        errOnce sync.Once
        err     error
    )
    for _, item := range items {
        wg.Add(1)
        go func(item WorkItem) {
            defer wg.Done()
            if e := processItem(item); e != nil {
                errOnce.Do(func() {
                    err = e
                })
            }
        }(item)
    }
    wg.Wait()
    return err
}
Performance Considerations
- Lock Contention
  - Use RWMutex when reads are more common than writes
  - Keep critical sections as small as possible
  - Consider using atomic operations for simple counters
- Memory Usage
  - Use sync.Pool for frequently allocated objects
  - Be cautious with buffer sizes in pools
  - Clean up resources properly
- Scalability
  - Consider sharding for highly concurrent access (see the sketch after this list)
  - Use buffered channels when appropriate
  - Profile your application under load
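To make the sharding point concrete, here is a minimal sketch (the shard count and FNV hash are illustrative choices, not requirements; it assumes sync and hash/fnv are imported): each key maps to one shard, so goroutines working on different shards never contend for the same lock.

const shardCount = 16

type shard struct {
    mu   sync.RWMutex
    data map[string]interface{}
}

type ShardedMap struct {
    shards [shardCount]*shard
}

func NewShardedMap() *ShardedMap {
    m := &ShardedMap{}
    for i := range m.shards {
        m.shards[i] = &shard{data: make(map[string]interface{})}
    }
    return m
}

func (m *ShardedMap) shardFor(key string) *shard {
    h := fnv.New32a()
    h.Write([]byte(key))
    return m.shards[h.Sum32()%shardCount]
}

func (m *ShardedMap) Store(key string, value interface{}) {
    s := m.shardFor(key)
    s.mu.Lock()
    defer s.mu.Unlock()
    s.data[key] = value
}

func (m *ShardedMap) Load(key string) (interface{}, bool) {
    s := m.shardFor(key)
    s.mu.RLock()
    defer s.mu.RUnlock()
    v, ok := s.data[key]
    return v, ok
}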
Common Pitfalls
- Copying Mutex
// BAD: mutex should not be copied
type Bad struct {
    sync.Mutex
    data int
}

func (b Bad) Incorrect() {
    b.Lock() // This locks a copy!
    defer b.Unlock()
    b.data++
}

// GOOD: use pointer receiver
func (b *Bad) Correct() {
    b.Lock()
    defer b.Unlock()
    b.data++
}
- Not Unlocking
// BAD: potential deadlock
func (c *Cache) BadFunc() {
    c.mu.Lock()
    if someCondition {
        return // Oops, forgot to unlock!
    }
    c.mu.Unlock()
}

// GOOD: always use defer
func (c *Cache) GoodFunc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    if someCondition {
        return // Safe, will still unlock
    }
}
Conclusion
The sync package is fundamental to writing correct concurrent programs in Go. Key takeaways:
- Use Appropriate Tools
  - Mutex for simple mutual exclusion
  - RWMutex for read-heavy workloads
  - WaitGroup for goroutine synchronization
  - Once for one-time initialization
  - Pool for resource reuse
- Follow Best Practices
  - Always use the race detector during testing
  - Keep critical sections small
  - Use defer for unlocking
  - Be careful with mutex copying
  - Consider lock ordering to prevent deadlocks
- Think About Performance
  - Profile your application
  - Use appropriate synchronization primitives
  - Consider the trade-offs between different approaches
- Testing and Verification
  - Use the race detector regularly
  - Write concurrent tests
  - Verify thread safety
  - Test edge cases