# Channels in Go

Channels are an important built-in feature in Go, and one of the features that make Go unique. Along with goroutines, another unique feature, channels make concurrent programming convenient and fun, and they lower the difficulty of concurrent programming.

This article will list all the channel-related concepts, syntax and rules. To understand channels better, the internal structure of channels and some implementation details of the standard Go compiler/runtime are also briefly described.

The information in this article may be slightly challenging for new gophers. Some parts of this article may need to be read several times to be fully understood.

### Concurrent Programming and Concurrency Synchronization

Modern CPUs often have multiple cores, and some CPU cores support hyper-threading. In other words, modern CPUs can process multiple instruction pipelines simultaneously. To fully use the power of modern CPUs, we need to do concurrent programming when coding our programs.

Concurrent computing is a form of computing in which several computations are executed during overlapping time periods. The following picture depicts two concurrent computing cases. In the picture, A and B represent two separate computations. The second case is also called parallel computing, which is a special form of concurrent computing. In the first case, A and B run in parallel only during a small slice of time.

Concurrent computing may happen in a program, a computer, or a network. In Go 101, we only talk about program-scope concurrent computing. Goroutine, which has been introduced before, is the Go way to create concurrent computations.

Concurrent computations may share resources, generally memory. Some circumstances may arise in concurrent computing:

• While one computation is writing data to a memory segment, another computation is reading data from the same memory segment. Then the integrity of the data read by the other computation might not be preserved.

• While one computation is writing data to a memory segment, another computation is also writing data to the same memory segment. Then the integrity of the data stored at the memory segment might not be preserved.

These circumstances are called data races. One of the duties in concurrent programming is to control resource sharing among concurrent computations, so that data races will not happen. The ways to achieve this duty are called concurrency synchronization, or data synchronization. Go supports several data synchronization techniques. The following section will introduce one of them, channel.

Other duties in concurrent programming include

• determine how many computations are needed.

• determine when to start, block, unblock and end a computation.

• determine how to distribute workload among concurrent computations.

Most operations in Go are not synchronized. In other words, they are not concurrency-safe. These operations include value assignments, argument passing and container element manipulations, etc. There are only a few operations which are synchronized, including the channel operations to be introduced below.

In Go, generally, each computation is a goroutine. So later we use goroutines to represent computations.

### Channel Introduction

One suggestion (made by Rob Pike) for concurrent programming is don’t (let computations) communicate by sharing memory, (let them) share memory by communicating (through channels).

Communicating by sharing memory and sharing memory by communicating are two programming styles in concurrent programming. When goroutines communicate by sharing memory, we need to use some traditional concurrency synchronization techniques, such as mutex locks, to protect the shared memory and prevent data races. We can use channels to implement sharing memory by communicating.

Go provides a unique concurrency synchronization technique, channel. Channels make goroutines share memory by communicating. We can view a channel as an internal FIFO (first in, first out) data queue within a program. Some goroutines send values to the queue (the channel) and some other goroutines receive values from the queue.

Along with transferring values (through channels), the ownership of some values may also be transferred between goroutines. When a goroutine sends a value to a channel, we can view it as the goroutine releasing the ownership of some values. When a goroutine receives a value from a channel, we can view it as the goroutine acquiring the ownership of some values.

Surely, it is also possible that no ownership at all is transferred along with a channel communication.

The values whose ownerships are transferred are often referenced (though not required to be referenced) by the transferred value. Please note that, here, we mean ownership from a logical point of view. Unlike the Rust language, Go doesn't ensure value ownership at the syntax level. Go channels can help programmers write data-race-free code easily, but they can't prevent programmers from writing badly designed concurrent code at the syntax level.

Although Go also supports traditional concurrency synchronization techniques, only channels are first-class citizens in Go. Channels are one kind of type in Go, so we can use channels without importing any packages. On the other hand, the traditional concurrency synchronization techniques are provided in the sync and sync/atomic standard packages.

Honestly, each concurrency synchronization technique has its own best use scenarios. But channels have a wider application range and are more versatile. One problem of channels is that the experience of programming with channels is so enjoyable and fun that programmers often prefer to use channels even for scenarios that channels are not the best fit for.

### Channel Types and Values

Like array, slice and map, each channel type has an element type. A channel can only transfer values of the element type of (the type of) the channel.

Channel types can be bi-directional or single-directional. Assume T is an arbitrary type,

• chan T denotes a bidirectional channel type. Compilers allow both receiving values from and sending values to bidirectional channels.

• chan<- T denotes a send-only channel type. Compilers don’t allow receiving values from send-only channels.

• <-chan T denotes a receive-only channel type. Compilers don’t allow sending values to receive-only channels.

T is called the element type of these channel types.

Values of bidirectional channel type chan T can be implicitly converted to both send-only type chan<- T and receive-only type <-chan T, but not vice versa (even explicitly). Values of send-only type chan<- T can't be converted to receive-only type <-chan T, and vice versa. Note that the <- signs in channel type literals are modifiers.

Each channel value has a capacity, which will be explained in the section after next. A channel value with a zero capacity is called an unbuffered channel, and a channel value with a non-zero capacity is called a buffered channel.

The zero values of channel types are represented with the predeclared identifier nil. A non-nil channel value must be created with the built-in make function. For example, make(chan int, 10) will create a channel whose element type is int. The second argument of the make function call specifies the capacity of the newly created channel. This second argument is optional and its default value is zero.

### Channel Value Comparisons

All channel types are comparable types.

From the article value parts, we know that non-nil channel values are multi-part values. After one channel value is assigned to another, the two channels share the same underlying part(s). In other words, the two channels represent the same internal channel object. The result of comparing them is true.

### Channel Operations

There are five channel-specific operations. Assuming the channel is ch, the syntaxes and function calls of these operations are listed here.

1. Close the channel by using the following function call

```go
close(ch)
```

where close is a built-in function. The argument of a close function call must be a channel value, and the channel ch must not be a receive-only channel.

2. Send a value, v, to the channel by using the following syntax

```go
ch <- v
```

where v must be a value which is assignable to the element type of channel ch, and the channel ch must not be a receive-only channel. Note that here <- is a channel-send operator.

3. Receive a value from the channel by using the following syntax

```go
<-ch
```

A channel receive operation always returns at least one result, which is a value of the element type of the channel, and the channel ch must not be a send-only channel. Note that here <- is a channel-receive operator. Yes, its representation is the same as a channel-send operator.

For most scenarios, a channel receive operation is viewed as a single-value expression. However, when a channel receive operation is used as the only source value expression in an assignment, it can result in a second, optional untyped boolean value and thus become a multi-value expression. The untyped boolean value indicates whether or not the first result was sent before the channel was closed. (Below we will learn that we can receive an unlimited number of values from a closed channel.)

Two channel receive operations which are used as source values in assignments:

```go
v = <-ch
v, sentBeforeClosed = <-ch
```
4. Query the value buffer capacity of the channel by using the following function call

```go
cap(ch)
```

where cap is a built-in function which has been introduced before in containers in Go. The return result of a cap function call is an int value.

5. Query the current number of values in the value buffer (or the length) of the channel by using the following function call

```go
len(ch)
```

where len is a built-in function which has also been introduced before. The return value of a len function call is an int value. The result length is the number of elements which have already been sent successfully to the queried channel but haven't been received (taken out) yet.

All these operations are already synchronized, so no further synchronization is needed to perform them safely, except in the case of concurrent send and close operations on the same channel. That case should be avoided in code design, for it is bad design. (The reason will be explained below.)

Like most other operations in Go, channel value assignments are not synchronized. Similarly, assigning the received value to another value is also not synchronized, though any channel receive operation is synchronized.

If the queried channel is a nil channel, both of the built-in cap and len functions return zero. The two query operations are so simple that they will not get further explanations later. In fact, the two operations are seldom used in practice.

Channel send, receive and close operations will be explained in detail in the next section.

### Detailed Explanations for Channel Operations

To make the explanations for channel operations simple and clear, in the remainder of this article, channels will be classified into three categories:

1. nil channels.
2. non-nil but closed channels.
3. not-closed non-nil channels.

The following table simply summarizes the behaviors of all kinds of operations applied to nil, closed and not-closed non-nil channels.

| Operation | A Nil Channel | A Closed Channel | A Not-Closed Non-Nil Channel |
| --- | --- | --- | --- |
| Close | panic | panic | succeed to close (C) |
| Send Value To | block for ever | panic | block or succeed to send (B) |
| Receive Value From | block for ever | never block (D) | block or succeed to receive (A) |

For the five cases shown without superscripts, the behaviors are very clear.

• Closing a nil or an already closed channel produces a panic in the current goroutine.

• Sending a value to a closed channel also produces a panic in the current goroutine.

• Sending a value to or receiving a value from a nil channel makes the current goroutine enter and stay in blocking state forever.

The following will give more explanations for the four cases shown with the superscripts (A, B, C and D).

To better understand channel types and values, and to make some explanations easier, it is very helpful to learn the rough internal structure of channel objects.

We can think of each channel as maintaining three queues (all can be viewed as FIFO queues) internally:

1. the receiving goroutine queue. The queue is a linked list without size limitation. Goroutines in this queue are all in blocking state and waiting to receive values from that channel.

2. the sending goroutine queue. The queue is also a linked list without size limitation. Goroutines in this queue are all in blocking state and waiting to send values to that channel. The value (or the address of the value, depending on compiler implementation) each goroutine is trying to send is also stored in the queue along with that goroutine.

3. the value buffer queue. This is a circular queue. Its size is equal to the capacity of the channel. The values stored in this buffer queue are all of the element type of that channel. If the current number of values stored in the value buffer queue reaches the capacity of the channel, the channel is said to be in full status. If no values are currently stored in the value buffer queue, the channel is said to be in empty status. A zero-capacity (unbuffered) channel is always in both full and empty status.

Each channel internally holds a mutex lock which is used to avoid data races in all kinds of operations.

Channel operation case A: when a goroutine Gr tries to receive a value from a not-closed non-nil channel, the goroutine Gr will first acquire the lock associated with the channel, then perform the following steps until one condition is satisfied.

1. If the value buffer queue of the channel is not empty, in which case the receiving goroutine queue of the channel must be empty, the goroutine Gr will receive (by unshifting) a value from the value buffer queue. If the sending goroutine queue of the channel is also not empty, a sending goroutine will be unshifted out of the sending goroutine queue and resumed to running state again. The value that the just-unshifted sending goroutine was trying to send will be pushed into the value buffer queue of the channel. The receiving goroutine Gr continues running. For this scenario, the channel receive operation is called a non-blocking operation.

2. Otherwise (the value buffer queue of the channel is empty), if the sending goroutine queue of the channel is not empty, in which case the channel must be an unbuffered channel, the receiving goroutine Gr will unshift a sending goroutine from the sending goroutine queue of the channel and receive the value that the just-unshifted sending goroutine was trying to send. The just-unshifted sending goroutine will get unblocked and be resumed to running state again. The receiving goroutine Gr continues running. For this scenario, the channel receive operation is called a non-blocking operation.

3. If the value buffer queue and the sending goroutine queue of the channel are both empty, the goroutine Gr will be pushed into the receiving goroutine queue of the channel and enter (and stay in) blocking state. It may be resumed to running state when another goroutine sends a value to the channel later. For this scenario, the channel receive operation is called a blocking operation.

Channel operation case B: when a goroutine Gs tries to send a value to a not-closed non-nil channel, the goroutine Gs will first acquire the lock associated with the channel, then perform the following steps until one condition is satisfied.

1. If the receiving goroutine queue of the channel is not empty, in which case the value buffer queue of the channel must be empty, the sending goroutine Gs will unshift a receiving goroutine from the receiving goroutine queue of the channel and send the value to the just unshifted receiving goroutine. The just unshifted receiving goroutine will get unblocked and resumed to running state again. The sending goroutine Gs continues running. For this scenario, the channel send operation is called a non-blocking operation.

2. Otherwise (the receiving goroutine queue is empty), if the value buffer queue of the channel is not full, in which case the sending goroutine queue must also be empty, the value the sending goroutine Gs is trying to send will be pushed into the value buffer queue, and the sending goroutine Gs continues running. For this scenario, the channel send operation is called a non-blocking operation.

3. If the receiving goroutine queue is empty and the value buffer queue of the channel is already full, the sending goroutine Gs will be pushed into the sending goroutine queue of the channel and enter (and stay in) blocking state. It may be resumed to running state when another goroutine receives a value from the channel later. For this scenario, the channel send operation is called a blocking operation.

As mentioned above, once a non-nil channel is closed, sending a value to the channel will produce a runtime panic in the current goroutine. Note that sending data to a closed channel is viewed as a non-blocking operation.

Channel operation case C: when a goroutine tries to close a not-closed non-nil channel, once the goroutine has acquired the lock of the channel, both of the following two steps will be performed, in the following order.

1. If the receiving goroutine queue of the channel is not empty, in which case the value buffer of the channel must be empty, all the goroutines in the receiving goroutine queue of the channel will be unshifted one by one, each of them will receive a zero value of the element type of the channel and be resumed to running state.

2. If the sending goroutine queue of the channel is not empty, all the goroutines in the sending goroutine queue of the channel will be unshifted one by one and each of them will produce a panic for sending on a closed channel. The values which have been already pushed into the value buffer of the channel are still there.

Channel operation case D: after a non-nil channel is closed, channel receive operations on the channel will never block. The values in the value buffer of the channel can still be received. Once all the values in the value buffer have been taken out, infinite zero values of the element type of the channel will be received by any following receive operations on the channel. As mentioned above, the optional second result of a channel receive operation is an untyped boolean value which indicates whether or not the first result (the received value) was sent before the channel was closed. If the second result is false, then the first result must be a zero value of the element type of the channel.

Knowing what are blocking and non-blocking channel send or receive operations is important to understand the mechanism of select control flow blocks which will be introduced in a later section.

In the above explanations, if a goroutine is unshifted out of a queue (either the sending goroutine queue or the receiving goroutine queue) of a channel, and the goroutine was blocked for being pushed into the queue at a select control flow code block, then the goroutine will be resumed to running state at step 12 of the select control flow code block execution. It may be dequeued from the corresponding goroutine queues of several channels involved in the select control flow code block.

According to the explanations listed above, we can get some facts about the internal queues of a channel.

• If the channel is closed, both of its sending goroutine queue and receiving goroutine queue must be empty, but its value buffer queue may not be empty.

• At any time, if the value buffer is not empty, then its receiving goroutine queue must be empty.

• At any time, if the value buffer is not full, then its sending goroutine queue must be empty.

• If the channel is buffered, then at any time, one of its sending goroutine queue and receiving goroutine queue must be empty.

• If the channel is unbuffered, then at any time, generally one of its sending goroutine queue and the receiving goroutine queue must be empty, but with an exception that a goroutine may be pushed into both of the two queues when executing a select control flow code block.

### Some Channel Use Examples

Let’s view some channel use examples to enhance our understanding of the last section.

A simple request/response example. The two goroutines in this example talk with each other through an unbuffered channel.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	c := make(chan int) // an unbuffered channel
	go func(ch chan<- int, x int) {
		time.Sleep(time.Second)
		// <-ch    // this operation fails to compile.
		ch <- x * x // blocking here until the result is received
	}(c, 3)
	done := make(chan struct{})
	go func(ch <-chan int) {
		n := <-ch      // blocking here until 9 is sent
		fmt.Println(n) // 9
		// ch <- 123   // this operation fails to compile
		time.Sleep(time.Second)
		done <- struct{}{}
	}(c)
	<-done // blocking here until a value is sent to channel "done"
	fmt.Println("bye")
}
```

The output:

```
9
bye
```

A demo of using a buffered channel. This program is not a concurrent one, it just shows how to use buffered channels.

```go
package main

import "fmt"

func main() {
	c := make(chan int, 2) // a buffered channel
	c <- 3
	c <- 5
	close(c)
	fmt.Println(len(c), cap(c)) // 2 2
	x, ok := <-c
	fmt.Println(x, ok)          // 3 true
	fmt.Println(len(c), cap(c)) // 1 2
	x, ok = <-c
	fmt.Println(x, ok)          // 5 true
	fmt.Println(len(c), cap(c)) // 0 2
	x, ok = <-c
	fmt.Println(x, ok) // 0 false
	x, ok = <-c
	fmt.Println(x, ok)          // 0 false
	fmt.Println(len(c), cap(c)) // 0 2
	close(c) // panic!
	c <- 7   // also panic if the above close call is removed.
}
```

A never-ending football game.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var ball = make(chan string)
	kickBall := func(playerName string) {
		for {
			fmt.Println(<-ball, "kicked the ball.")
			time.Sleep(time.Second)
			ball <- playerName
		}
	}
	go kickBall("John")
	go kickBall("Alice")
	go kickBall("Bob")
	go kickBall("Emily")
	ball <- "referee" // kick off
	var c chan bool   // nil
	<-c               // blocking here for ever
}
```

### Channel Element Values are Transferred by Copy

When a value is transferred from one goroutine to another goroutine, the value will be copied at least one time. If the transferred value ever stayed in the value buffer of a channel, then two copies will happen in the transfer process. One copy happens when the value is copied from the sender goroutine into the value buffer, the other happens when the value is copied from the value buffer to the receiver goroutine. Like value assignments and function argument passing, when a value is transferred, only its direct part is copied.

For the standard Go compiler, the size of a channel's element type must be smaller than 65536 bytes. However, generally, we shouldn't create channels whose element types are large, to avoid a too-large copy cost in the process of transferring values between goroutines. So if the passed value size is too large, it is best to use a pointer element type instead.

### About Channel and Goroutine Garbage Collections

Note, a channel is referenced by all the goroutines in either the sending or the receiving goroutine queue of the channel, so if neither of the two queues of the channel is empty, the channel cannot be garbage collected. On the other hand, if a goroutine is blocked and stays in either the sending or the receiving goroutine queue of a channel, then the goroutine also cannot be garbage collected, even if the channel is referenced only by this goroutine. In fact, a goroutine can only be garbage collected after it has exited.

### Channel Send and Receive Operations are Simple Statements

Channel send operations and receive operations are simple statements. A channel receive operation can always be used as a single-value expression. Simple statements and expressions can be used at certain portions of basic control flow blocks.

An example in which channel send and receive operations appear as simple statements in two for control flow blocks.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	fibonacci := func() chan uint64 {
		c := make(chan uint64)
		go func() {
			var x, y uint64 = 0, 1
			for ; y < (1 << 63); c <- y { // here
				x, y = y, x+y
			}
			close(c)
		}()
		return c
	}
	c := fibonacci()
	for x, ok := <-c; ok; x, ok = <-c { // here
		time.Sleep(time.Second)
		fmt.Println(x)
	}
}
```

### for-range on Channels

The for-range control flow code block applies to channels. The loop will iteratively receive the values sent to a channel, until the channel is closed and its value buffer queue becomes empty. Unlike the for-range syntax on arrays, slices and maps, at most one iteration variable, which is used to store the received values, is allowed to be present in the for-range syntax on channels.

```go
for v = range aChannel {
	// use v
}
```

is equivalent to

```go
for {
	v, ok = <-aChannel
	if !ok {
		break
	}
	// use v
}
```

Surely, here the aChannel value must not be a send-only channel. If it is a nil channel, the loop will block there forever.

For example, the second for loop block in the example shown in the last section can be simplified to

```go
	for x := range c {
		time.Sleep(time.Second)
		fmt.Println(x)
	}
```

### select-case Control Flow Code Blocks

There is a select-case code block syntax which is specially designed for channels. The syntax is much like the switch-case block syntax. For example, there can be multiple case branches and at most one default branch in a select-case code block. But there are also some obvious differences between the two.

• No expressions and statements are allowed to follow the select keyword (before {).

• No fallthrough statements are allowed to be used in case branches.

• Each statement following a case keyword in a select-case code block must be either a channel receive operation or a channel send operation statement. A channel receive operation can appear as the source value of a simple assignment statement. Later, a channel operation following a case keyword will be called a case operation.

• If some of the case operations are non-blocking operations, the Go runtime will randomly select one of the non-blocking operations to execute, then continue to execute the corresponding case branch.

• If all the case operations in a select-case code block are blocking operations, the default branch will be selected and executed if it is present. If the default branch is absent, the current goroutine will be pushed into the corresponding sending goroutine queue or receiving goroutine queue of every channel involved in all the case operations, then enter blocking state.

By the rules, a select-case code block without any branches, select{}, will make the current goroutine stay in blocking state forever.

The following program will enter the default branch for sure.

```go
package main

import "fmt"

func main() {
	var c chan struct{} // nil
	select {
	case <-c:             // blocking operation
	case c <- struct{}{}: // blocking operation
	default:
		fmt.Println("Go here.")
	}
}
```

An example showing how to use try-send and try-receive:

```go
package main

import "fmt"

func main() {
	c := make(chan string, 2)
	trySend := func(v string) {
		select {
		case c <- v:
		default: // go here if c is full.
		}
	}
	tryReceive := func() string {
		select {
		case v := <-c:
			return v
		default:
			return "-" // go here if c is empty.
		}
	}
	trySend("Hello!") // succeed to send
	trySend("Hi!")    // succeed to send
	trySend("Bye!")   // fail to send, but will not block.
	// The following two lines both succeed to receive.
	fmt.Println(tryReceive()) // Hello!
	fmt.Println(tryReceive()) // Hi!
	// The following line fails to receive.
	fmt.Println(tryReceive()) // -
}
```

The following example has a 50% chance of panicking. Both of the two case operations are non-blocking in this example.

```go
package main

func main() {
	c := make(chan struct{})
	close(c)
	select {
	case c <- struct{}{}: // panic if this case is selected.
	case <-c:
	}
}
```

### The Implementation of the Select Mechanism

The select mechanism in Go is an important and unique feature. Here the steps of the select mechanism implementation by the official Go runtime are listed.

There are several steps to execute a select-case block:

1. evaluate all involved channel expressions and the value expressions to potentially be sent in case operations, from top to bottom and left to right. Destination values for receive operations (as source values) in assignments needn't be evaluated at this time.

2. randomize the branch orders for polling in step 5. The default branch is always put at the last position in the resulting order. Channels may be duplicated among the case operations.

3. sort all involved channels in the case operations to avoid deadlock in the next step. There are no duplicate channels within the first N channels of the sorted result, where N is the number of distinct channels involved in the case operations. Below, the channel lock order refers to the order of the first N channels in the sorted result.

4. lock (a.k.a., acquire the locks of) all involved channels by the channel lock order produced in last step.

5. poll each branch in the select block by the randomized order produced in step 2. (If this polling finishes without selecting any branch, then the default branch is absent and all case operations are blocking operations; continue from step 9.)

6. if this is a case branch and the corresponding channel operation is a send-value-to-closed-channel operation, unlock all channels by the inverse channel lock order and make the current goroutine panic. Go to step 17.

7. if this is a case branch and the corresponding channel operation is non-blocking, perform the channel operation and unlock all channels by the inverse channel lock order, then execute the corresponding case branch body. The channel operation may wake up another goroutine in blocking state. Go to step 17.

8. if this is the default branch, then unlock all channels by the inverse channel lock order and execute the default branch body. Go to step 17.

9. push (enqueue) the current goroutine (along with the information of the corresponding case branch) into the receiving or sending goroutine queue of the involved channel in each case operation. The current goroutine may be pushed into the queues of one channel multiple times, for the channels involved in multiple case operations may be the same one.

10. make the current goroutine enter blocking state and unlock all channels by the inverse channel lock order.

11. …, in blocking state, waiting for other channel operations to wake up the current goroutine, …

12. the current goroutine is woken up by another channel operation in another goroutine. The other operation may be a channel close operation or a channel send/receive operation. If it is a channel send/receive operation, there must be a case channel receive/send operation (in the select-case block being explained) cooperating with it (by transferring a value). In that cooperation, the current goroutine will be dequeued from the receiving/sending goroutine queue of the channel.

13. lock all involved channels by the channel lock order.

14. dequeue the current goroutine from the receiving goroutine queue or sending goroutine queue of the involved channel in each case operation.

15. if the current goroutine was woken up by a channel close operation, go to step 5.

16. if the current goroutine was woken up by a channel send/receive operation, the corresponding case branch of the cooperating receive/send operation has already been found in the dequeuing process, so just unlock all channels by the inverse channel lock order and execute the corresponding case branch.

17. done.

From the implementation, we know that

• a goroutine may stay in the sending goroutine queues and the receiving goroutine queues of multiple channels at the same time. It can even stay in the sending goroutine queue and the receiving goroutine queue of the same channel at the same time.

• when a goroutine blocked at a select-case code block gets resumed later, it will be removed from all the sending goroutine queues and receiving goroutine queues of all the channels involved in the channel operations following the case keywords in the select-case code block.

### More

Although channels can help us write correct concurrent code easily, like other data synchronization techniques, channels will not prevent us from writing improper concurrent code.