sync.Map internals and a performance comparison with map + RWMutex

Internals of sync.Map

sync.Map is a thread-safe map implementation in Go, optimized for certain use cases.
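Before looking at the internals, here is a quick example of the public API, which is all most callers ever see (the zero value of sync.Map is ready to use):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var m sync.Map

    m.Store("answer", 42) // insert or update
    if v, ok := m.Load("answer"); ok {
        fmt.Println(v) // 42
    }
    m.Range(func(key, value interface{}) bool {
        fmt.Println(key, value) // iterate over all entries
        return true             // return false to stop early
    })
    m.Delete("answer")
}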

The basic structure of sync.Map looks something like this:

type Map struct {
    mu     Mutex        // protects dirty and misses
    read   atomic.Value // readOnly, read without locking
    dirty  map[interface{}]*entry
    misses int
}

type readOnly struct {
    m       map[interface{}]*entry
    amended bool // true if dirty contains keys not present in m
}

type entry struct {
    p unsafe.Pointer // *interface{}
}

Here we see several key fields:

mu — a mutex protecting access to the dirty map
read — an atomic value holding the readOnly structure
dirty — a regular Go map holding all live entries, including keys not yet present in read
misses — a counter of Load operations that missed in read and had to fall back to dirty

The main idea behind sync.Map is to use two internal maps: read (lock-free, read-mostly) and dirty (for writes and fallback reads). This lets read operations, which are often the majority, complete without locking.

The Load operation

When performing a Load, sync.Map first looks for the value in the read map. If the value is found, it is returned without taking any lock, which makes this the fast path.

If the value is not found in the read map (and read is marked as amended), sync.Map acquires the mutex, looks the key up in the dirty map, and increments the misses counter. If the value is found in the dirty map, it is returned.
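Putting these two steps together, a simplified sketch of Load built on the structures above might look like this (this is not the exact standard-library code: details such as expunged entries are omitted, and missLocked is the helper shown later in this article):

func (m *Map) Load(key interface{}) (interface{}, bool) {
    read, _ := m.read.Load().(readOnly)
    e, ok := read.m[key]
    if !ok && read.amended {
        m.mu.Lock()
        // Re-check read: dirty may have been promoted while we waited for the lock.
        read, _ = m.read.Load().(readOnly)
        e, ok = read.m[key]
        if !ok && read.amended {
            e, ok = m.dirty[key]
            // Count the miss regardless of the outcome, so a frequently
            // consulted dirty map eventually gets promoted to read.
            m.missLocked()
        }
        m.mu.Unlock()
    }
    if !ok {
        return nil, false
    }
    p := atomic.LoadPointer(&e.p)
    if p == nil {
        return nil, false // the entry was deleted
    }
    return *(*interface{})(p), true
}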

The Store operation

When performing a Store, sync.Map first checks whether the key exists in the read map. If so, it attempts to update the entry's value atomically. If that fails (for example, because the entry has been marked as deleted), it falls back to the dirty map.
If the key does not exist in the read map, sync.Map acquires the mutex and updates the dirty map.
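A correspondingly simplified sketch of Store (again, not the real source: the standard library additionally tracks expunged entries and sets the amended flag when a new key first lands in dirty):

func (m *Map) Store(key, value interface{}) {
    // Fast path: the key already lives in read, try to swap the value in with a CAS.
    read, _ := m.read.Load().(readOnly)
    if e, ok := read.m[key]; ok {
        p := atomic.LoadPointer(&e.p)
        if p != nil && atomic.CompareAndSwapPointer(&e.p, p, unsafe.Pointer(&value)) {
            return
        }
    }

    // Slow path: take the mutex and write into dirty.
    m.mu.Lock()
    defer m.mu.Unlock()
    if e, ok := m.dirty[key]; ok {
        atomic.StorePointer(&e.p, unsafe.Pointer(&value))
        return
    }
    if m.dirty == nil {
        m.dirty = make(map[interface{}]*entry)
    }
    m.dirty[key] = &entry{p: unsafe.Pointer(&value)}
}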

When dirty replaces read

An interesting thing happens when the number of misses on the read map (misses) reaches the length of the dirty map. At that point sync.Map promotes dirty to read:

It looks something like this:

func (m *Map) missLocked() {
    m.misses++
    if m.misses < len(m.dirty) {
        return
    }
    // Promote: dirty becomes the new read map; dirty will be rebuilt
    // lazily by the next write that needs it.
    m.read.Store(readOnly{m: m.dirty})
    m.dirty = nil
    m.misses = 0
}

This approach allows adapting to usage patterns: if there are many reads after a series of writes, the dirty map is promoted to read, which speeds up subsequent reads.

Performance comparison with map + RWMutex

Now let's compare the performance of sync.Map with a regular map protected by a sync.RWMutex.
A typical thread-safe map might look like this:


// SafeMap guards a plain map with a single RWMutex: reads take a shared
// lock, writes take an exclusive one.
type SafeMap struct {
    mu sync.RWMutex
    m  map[interface{}]interface{}
}

func (sm *SafeMap) Load(key interface{}) (interface{}, bool) {
    sm.mu.RLock()
    defer sm.mu.RUnlock()
    val, ok := sm.m[key]
    return val, ok
}

func (sm *SafeMap) Store(key, value interface{}) {
    sm.mu.Lock()
    defer sm.mu.Unlock()
    sm.m[key] = value
}

The performance of these two approaches depends on the specific use case. The standard library documentation recommends sync.Map for two scenarios: when a given key is written once but read many times (as in caches that only grow), and when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. Outside of those, a plain map with a Mutex or RWMutex is usually faster and gives better type safety.
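One way to check this on your own workload is a parallel benchmark. The sketch below compares a read-heavy case (the package and benchmark names are illustrative, and it assumes the SafeMap type above is in the same package); run it with go test -bench=. -cpu=1,4,8:

package mapbench

import (
    "sync"
    "testing"
)

func BenchmarkSyncMapLoad(b *testing.B) {
    var m sync.Map
    m.Store("key", 42)
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            m.Load("key")
        }
    })
}

func BenchmarkSafeMapLoad(b *testing.B) {
    sm := &SafeMap{m: map[interface{}]interface{}{"key": 42}}
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            sm.Load("key")
        }
    })
}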

Conclusion

sync.Map is a powerful tool in a Go developer's arsenal, but it is not a silver bullet. Its internals are optimized for specific access patterns, especially workloads with many reads and relatively few writes.

A plain map protected by an RWMutex may be more efficient in scenarios with frequent writes or when the number of competing goroutines is small.

This article is based on a post from the Cross Join channel.
