Concurrency in Golang

pyramidskeme on June 3, 2025, 03:45 PM

So one of the strengths of Go is that concurrency is pretty much built into the architecture of the language. For anyone not familiar with the term, concurrency basically means structuring your program so that multiple pieces of code can run independently and make progress at the same time. (Pedantic aside: concurrency isn't quite the same thing as parallelism. Concurrency is about structuring code to deal with multiple things at once; parallelism is about literally executing multiple things at once on separate cores. In Go, writing concurrent code is how you unlock parallelism.) This is a huge deal when making software faster, because instead of waiting on each operation to run line by line, many of them can be running side by side.

Concurrency is absolutely necessary if you want your code to take advantage of modern chip architecture. One of the main reasons modern chips are considered more powerful than older ones is that they have multiple 'cores' that can each process instructions simultaneously, rather than one large, powerful core. But if you write code that doesn't take advantage of the multiple cores, it will run at pretty much the same speed as it would on a single-core chip. Code written in a single-threaded manner doesn't magically use the extra cores just because it's running on modern hardware. Concurrency is something you have to manage yourself as the software developer. So let's get into how to take advantage of it with Go.

Goroutines

So basically, the first thing to understand about Go is the fact that you can run a function normally, like this:

doSomething()

And it will run on the main Goroutine just like any old function, BUT, if you want to spawn it off into its own Goroutine (a lightweight thread managed by the Go runtime, not a full OS thread), you just use the magical 'go' keyword and it will do it. It's pretty simple and looks like this:

go doSomething()

It's as simple as that to start taking advantage of the concurrency. No need to import any packages or anything like that. Now you can do something like this:

go doAThing()
go doAnotherThing()
go doOneMoreThing()

Obviously, this can make your program a lot more efficient because doAnotherThing() and doOneMoreThing() don't have to wait for doAThing() to finish before they can get going. One important thing to keep in mind is that when using packages like "net/http", the handler functions you register with, for example, http.HandleFunc() are each run in their own Goroutine by the server. This makes sense because you definitely want your web server to process requests in parallel instead of waiting for one to complete before moving on to the next. Otherwise, your users wouldn't be very happy.

So concurrency is easy then right? Not so fast... there are a few other things to consider. One problem you can have is if some of these Goroutines start trying to access the same data.

Mutex

Mutexes solve the problem of race conditions in concurrent code. The name is a shortened form of 'Mutual Exclusion.' For anyone who doesn't already know what a race condition is and why it's a serious problem, allow me to explain. Say you have a variable in your code, and you want to write some code that updates it like this:

your_variable = your_variable + 1

This is really simple if only one function is running at a time; you'll always get the result you expect. But say there are multiple functions running alongside each other at the same time, each attempting to add 1 to your variable. That single line is actually three steps (read the value, add 1, write it back), so two Goroutines can both read the same old value and one of the updates gets lost. The results become unpredictable (a very bad thing) because it's unclear what the value will be at the moment any one of them does the addition. You can probably see why this is such a big deal with concurrency: if you spawn off some Goroutines that all need to update a variable, you'd run into exactly this problem. So 'mutually excluding' all but one running piece of code from acting on your variable at any one time prevents the problem. How is that accomplished? That part is actually pretty simple. We just 'lock' the block of code we want to protect, then 'unlock' it when it's safe for another Goroutine to act on it. It looks like this:

package main

import (
    "sync" // mutex comes from the sync package
)

type SomeThing struct {
    counter int32
    mu      sync.Mutex
}

func updateCounter(st *SomeThing) { // take a pointer: copying the struct would copy the mutex and break the locking
    st.mu.Lock()
    st.counter = st.counter + 1
    st.mu.Unlock()
}

func main() {
    myThing := &SomeThing{counter: 0}

    for i := 0; i < 100; i++ {
        go updateCounter(myThing)
    }
}

The for loop will spawn off 100 Goroutines that will all attempt to update the counter, but since each one locks the increment until it's finished with it, every other Goroutine will 'wait its turn' and you'll get the result you expect: 100. Two things to watch out for here, though. The mutex only works if every Goroutine shares the same one, which means passing a pointer to the struct rather than a copy (copying a struct copies its mutex, and go vet will warn you about this). And main needs to actually wait for all 100 Goroutines to finish before exiting, which is exactly what the WaitGroups section below covers.

One other thing to note is that you don't have to store the mutex as a struct field like I did in the example above. Another way is to just pass a pointer to a mutex into the updateCounter() function like this:

package main 

import (  
    "sync"
)

var counter = 0

func updateCounter(m *sync.Mutex) {
    m.Lock()
    counter = counter + 1
    m.Unlock()
}

func main() {
    var m sync.Mutex

    for i := 0; i < 100; i++ {
        go updateCounter(&m)
    }
}

This pretty much accomplishes the same thing.

One more thing to note is that there are cases where you want other Goroutines to be able to read from a variable in parallel, and only block them while it's being written. One example is caching. You'd typically only want a single Goroutine updating the cache at a time, but every Goroutine should be able to read from it to get the cached data. In those cases, you'd use a RWMutex. Readers call RLock() and RUnlock(), and any number of them can hold the read lock at once; a writer calls Lock() and Unlock(), which excludes all readers and other writers while the update happens.

WaitGroups

Ok, so I kinda have a confession to make. (And honestly, maybe I'll re-order this in the future.) One other really important thing to know about Goroutines is that nothing waits for them by default. A function that spawns Goroutines returns without waiting for them to finish, and the moment the main function returns, the whole program exits and every Goroutine still running just... evaporates. Gone, just like that. So if main finishes before your Goroutines do (which is easy, since spawning them takes almost no time), they'll never get to complete their work. Here's an example of what I mean:

package main

import (
    "fmt"
)

func doAThing() {
    fmt.Print("I'm doing a thing \n")
}

func doAnotherThing() {
    fmt.Print("I'm doing some other thing \n")
}

func parentFunction() {
    go doAThing()
    go doAnotherThing()
}

func main() {
    parentFunction()
}

If you were to run this, it wouldn't work. Nothing would be output and you'd probably be very confused, because it looks like perfectly valid code and no errors are thrown. The problem is that parentFunction() spawns off these little Goroutines and returns immediately without waiting for them, main() returns right after it, and the program exits before the Goroutines ever get a chance to run. They vanish, so we never see any output from them.

WaitGroups are the solution to this problem. A WaitGroup lets parentFunction() block until all of the Goroutines it has spawned report that they're finished. Of course, one quick and dirty alternative would be to make the parent sleep for a few seconds, but come on, we know that's not a real solution. Here's how WaitGroups really fix it. Basically, before you spawn a Goroutine, you increment the WaitGroup's counter with .Add(), so the counter tracks how many Goroutines the group is waiting on. When each one finishes doing its work, it calls the WaitGroup's .Done() method, which subtracts 1 from that counter. Calling .Wait() then blocks until the counter reaches zero. So to fix our above example, we could implement a WaitGroup and it would look like this:

package main

import (
    "fmt"
    "sync" // WaitGroups come from the sync package
)

func doAThing(wg *sync.WaitGroup) {
    fmt.Print("I'm doing a thing \n")
    wg.Done()
}

func doAnotherThing(wg *sync.WaitGroup) {
    fmt.Print("I'm doing some other thing \n")
    wg.Done()
}

func parentFunction() {
    wg := &sync.WaitGroup{} // instantiate a wait group and get its pointer
    wg.Add(2) // let it know we have two Goroutines to wait on
    go doAThing(wg) // pass in the wait group so the Goroutine can let the group know when it's done
    go doAnotherThing(wg) // same as above
    wg.Wait() // now the parent function will wait until everything in the group has finished
}

func main() {
    parentFunction()
}

Now the parent function will wait exactly as long as its two Goroutines take to finish, and we'll get the output. The above example is pretty simple, but the main thing to remember is that you add 1 to the group before spawning each Goroutine to let it know there's a new member, and call Done() inside that Goroutine so it can tell the group it's completed its task.

Channels

Channels are essentially a way for Goroutines to pass around data. You can think of channels as wormholes opening within the spacetime of your Go runtime. These wormholes allow Goroutines to communicate with each other. This communication allows things like synchronization between Goroutines, creating data pipelines, avoiding having to share memory with mutexes, event notification, and Fan-Out/Fan-In patterns. The first important thing to understand about channels is that there are two types of channels, unbuffered and buffered.

An unbuffered channel has no capacity to store values at all, which means a Goroutine that sends on the channel is blocked until another Goroutine receives the value (and a receiver is likewise blocked until someone sends). You can use this property to sync up two Goroutines. For example, maybe you want one to wait until another is ready so they can do some work together at the same time. In that case, you could have Goroutine A send a message to Goroutine B to let it know that it's ready, and at that moment A and B sync up and start running together.

Buffered channels, just like you might guess, can hold a buffer of messages. The sending Goroutine can send as many messages as the buffer can hold and continue on its way without blocking, whether or not the receiver has read any of them. But once the buffer is full, the sending Goroutine will block until the receiver has read enough messages to bring it back under the limit.

To make things more concrete, here's an example of using an unbuffered channel to sync up two Goroutines:

ch := make(chan int) // it's unbuffered because we don't pass a second param setting a buffer size

go func() {
    ch <- 100
    fmt.Println("Sender: continuing after send")
    time.Sleep(3 * time.Second)
    fmt.Println("Sender: done with other work")
}()

go func() {
    value := <-ch
    fmt.Println("Receiver: got", value)
    time.Sleep(5 * time.Second)  // Long processing
    fmt.Println("Receiver: done processing")
}()

The two Goroutines above could have started at different times, but at the point where the data is sent to and received from the channel, they'll sync up.

Here's an example of using a buffered channel, which won't block execution of the sender until the buffer is full. The receiving Goroutine can process the messages as it is ready.

func producer(ch chan<- int) {
    for i := 1; i <= 10; i++ {
        fmt.Printf("Sending %d\n", i)
        ch <- i  // First 3 sends won't block
        fmt.Printf("Sent %d\n", i)
    }
    close(ch)
}

func consumer(ch <-chan int) {
    time.Sleep(2 * time.Second)  // Simulate slow consumer
    for value := range ch {
        fmt.Printf("Received %d\n", value)
        time.Sleep(500 * time.Millisecond)  // Process slowly
    }
}

func main() {
    ch := make(chan int, 3)  //  Buffer can hold up to 3 messages
    go producer(ch)
    consumer(ch)
}

Closing Channels

One more important thing to know about channels is that they must be closed whenever the receiving Goroutine needs to know that no more values will be sent and it should carry on. For example, if the receiving Goroutine is looping over a channel with range, it will keep waiting for new values to arrive until the channel is closed. If you never close the channel in that case, that Goroutine stalls forever, wasting memory and blocking any code that should run after it finishes reading. Here's an example:

ch := make(chan int, 2)

go func() {
    for value := range ch {
        fmt.Println("Got:", value)
    }
    fmt.Println("This never prints!")
}()

ch <- 10
ch <- 20
// Channel never closed - receiver goroutine waits forever!

time.Sleep(5 * time.Second)
fmt.Println("Main ending, but receiver still blocked")

Note that this isn't specific to buffered channels: an unbuffered channel doesn't close itself once its value has been read, and a range loop over one will block forever too if nobody closes it. The rule is simpler than that: the sender should close the channel whenever the receiver needs to know that no more values are coming (like when it's ranging). If nothing is waiting on that signal, it's fine to never close a channel at all; it'll just get garbage collected. But in the ranging case, make sure to close it to avoid weird bugs!

Context

The next really important concept to understand is Context. I'll go into detail on contexts in another post because there's quite a bit to them, but I do wanna hit on a few main points here since they're necessary to fully work with concurrency in Go.

There are three main things that contexts allow you to do:

  1. Let Goroutines know when they should cancel their work, so you don't waste resources. Consider the case where you have an HTTP request handler that kicks off a bunch of Goroutines to do some work. If the connection to the request is broken, you most likely would want to stop those spawned Goroutines from executing since their work is no longer needed.

  2. You might also have a case where some Goroutines are spawned off to do tasks that could potentially take more time than you would like them to spend. In those cases you can pass in a Context that allows you to set a deadline or time limit by which they should finish and have them cancel if they don't meet it. This can prevent Goroutines from getting 'stuck' spending more compute resources than you would like to allow.

  3. You can also use a Context to carry some scoped data. For example if you'd like to have some data exclusively available to a certain request, like user authentication data, you can put it in a Context. Each of those Goroutines could then behave accordingly to the user account info passed to them.

Again, there's more to all of this concurrency stuff, and you can read the Go docs for more detailed information, but this should give you a solid understanding of the main tools you have available to you to really start implementing parallel processing in your Go programs and take advantage of concurrency.
