conc: a new library for managing concurrency in Go

One of Go's main strengths is how convenient it makes concurrent programming. However, in large projects some problems still come up regularly:

  • goroutine leaks

  • incorrect handling of panics inside goroutines

  • poor code readability

  • the need to write the same boilerplate code over and over

As the author of the library points out in his article, he often ran into these errors when working with goroutines, which prompted him to create the conc library.

Library Features

The library provides a set of tools for managing concurrency in Go. It allows you to synchronize access to shared resources, as well as control the execution of goroutines. Among its features are:

  • Its own WaitGroup that removes the usual Add/defer Done boilerplate

  • Its own Pool that simplifies launching tasks with a limited degree of parallelism

  • Helpers for processing slices concurrently

  • Tools for handling panics in child goroutines

Dealing with panics

If you don’t want your program to crash when a child goroutine panics, and you also want to avoid introducing deadlocks or goroutine leaks along the way, doing this with the standard library alone is surprisingly verbose:

package main

import "runtime/debug"

// doSomethingThatMightPanic stands in for any work that may panic.
func doSomethingThatMightPanic() { panic("something went wrong") }

type propagatedPanic struct {
    val   any
    stack []byte
}

func main() {
    done := make(chan *propagatedPanic)
    go func() {
        defer func() {
            if v := recover(); v != nil {
                done <- &propagatedPanic{
                    val:   v,
                    stack: debug.Stack(),
                }
            } else {
                done <- nil
            }
        }()
        doSomethingThatMightPanic()
    }()
    if val := <-done; val != nil {
        panic(val)
    }
}

The conc library does the job much more elegantly:

package main

import "github.com/sourcegraph/conc"

func main() {
    var wg conc.WaitGroup
    wg.Go(doSomethingThatMightPanic)
    // Wait re-panics in the parent goroutine with a readable stack trace
    wg.Wait()
}

Concurrent processing of a slice of data

You often need to process a large slice of data concurrently. The usual approach is to push all slice elements into a channel, from which a set of worker goroutines pick them up and process them:

func process(values []int) {
    feeder := make(chan int, 8)

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for elem := range feeder {
                handle(elem)
            }
        }()
    }

    for _, value := range values {
        feeder <- value
    }
    close(feeder)
    wg.Wait()
}

With the conc library, iter.ForEach does the same job much more concisely:

func process(values []int) {
    iterator := iter.Iterator[int]{
        MaxGoroutines: len(values) / 2,
    }

    // ForEach hands each element to the callback by pointer
    iterator.ForEach(values, func(v *int) { handle(*v) })
}

Or, if you need the output slice to line up with the input so that output[i] = f(input[i]):

func process(
    input []int,
    f func(int) int,
) []int {
    output := make([]int, len(input))
    var idx atomic.Int64

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()

            for {
                i := int(idx.Add(1) - 1)
                if i >= len(input) {
                    return
                }

                output[i] = f(input[i])
            }
        }()
    }
    wg.Wait()
    return output
}

It is much easier and clearer to use iter.Mapper and its Map method:

func process(
    input []int,
    f func(*int) int,
) []int {
    mapper := iter.Mapper[int, int]{
        MaxGoroutines: len(input) / 2,
    }

    return mapper.Map(input, f)
}

Conclusion

Only the basic ways of working with the library were shown above; you can find many more examples directly in the sources. If you are interested in how a particular method works, just look for a usage example in the test files.

It is also worth noting that the current version of the library is pre-1.0. According to the developers, some minor changes are still planned before the 1.0 release: API stabilization and tuning of the defaults. Using the library in large projects is therefore still a little risky, but you can start getting acquainted with it right now, especially since the codebase is small (no more than 2k lines of code).
