The Go Blog
Go 1.3 is released
18 June 2014
Today we are happy to announce the release of Go 1.3. This release comes six months after our last major release and provides better performance, improved tools, support for running Go in new environments, and more. All Go users should upgrade to Go 1.3. You can grab the release from our downloads page and find the full list of improvements and fixes in the release notes. What follows are some highlights.
Godoc, the Go documentation server, now performs static analysis. When enabled with the -analysis flag, analysis results are presented in both the source and package documentation views, making it easier than ever to navigate and understand Go programs. See the documentation for the details.
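For example, a local documentation server with analysis enabled can be started along these lines (a sketch: the port is arbitrary, and -analysis accepts the "type" and "pointer" analyses):

% godoc -http=:6060 -analysis=type,pointer

The pointer analysis in particular may take a while to complete on a large source tree, so expect some startup delay.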
The gc toolchain now supports the Native Client (NaCl) execution sandbox on the 32- and 64-bit Intel architectures. This permits the safe execution of untrusted code, useful in environments such as the Playground. To set up NaCl on your system see the NativeClient wiki page.
Also included in this release is experimental support for the DragonFly BSD, Plan 9, and Solaris operating systems. To use Go on these systems you must install from source.
Changes to the runtime have improved the performance of Go binaries, with an improved garbage collector, a new "contiguous" goroutine stack management strategy, a faster race detector, and improvements to the regular expression engine.
As part of the general overhaul of the Go linker, the compilers and linkers have been refactored. The instruction selection phase that was part of the linker has been moved to the compiler. This can speed up incremental builds for large projects.
The garbage collector is now precise when examining stacks (collection of the heap has been precise since Go 1.1), meaning that a non-pointer value such as an integer will never be mistaken for a pointer and prevent unused memory from being reclaimed. This change affects code that uses package unsafe; if you have unsafe code you should read the release notes carefully to see if your code needs updating.
We would like to thank the many people who contributed to this release; it would not have been possible without your help.
So, what are you waiting for? Head on over to the downloads page and start hacking.
GopherCon 2014 Wrap Up
28 May 2014
In April this year 700 gophers descended upon Denver to attend GopherCon, the world's first large-scale Go conference, organized entirely by the community. The three-day event featured 24 talks and one panel discussion in a single track over two days, followed by an informal "hack day" full of code, conversation, and more than 4 hours (!) of lightning talks.

A complete set of video recordings is now available (the slides are here).
Two keynotes framed the conference:
- Rob Pike's opening talk "Hello, Gophers!" (slides) discusses the history of Go by walking through two of the first Go programs.
- Andrew Gerrand's closing talk "Go for gophers" (slides) explains the Go design philosophy through the lens of his personal experience learning the language.
One talk that resonated with members of the Go team was Peter Bourgon's "Best Practices for Production Environments" (notes). From deployment to dependency management, it answers many frequently asked questions about Go use in the real world and is a must-see for anyone serious about building systems in Go.
But, really, you should just watch them all. They're great.
The Go Gopher was everywhere. Each attendee received one of the new pink and purple gophers, which now accompany the original blue one:

The gopher was also seen wearing a cape on the side of the incredible CoreOS bus:

Most of the Go team were at the conference, and we were moved by the passion and dedication of the Go community. It was a thrill to see the many different ways people are using the language. It was also great to put faces to many of the names we know well through their contributions to the project.
We would like to extend our thanks and congratulations to the GopherCon organizers (Brian Ketelsen and Erik St. Martin), the excellent speakers, and the tireless volunteers that pitched in to make the event such a success. We look forward to GopherCon 2015, which promises to be bigger and better still.
But if you really can't wait, we'll see you at dotGo in Paris on the 10th of October!
The Go Gopher
24 March 2014

The Go gopher is an iconic mascot and one of the most distinctive features of the Go project. In this post we'll talk about his origins, evolution, and behavior.
About 15 years ago—long before the Go project—the gopher first appeared as a promotion for the WFMU radio station in New Jersey. Renee French was commissioned to design a T-shirt for an annual fundraiser and out came the gopher.

He next made an appearance at Bell Labs, as Bob Flandrena's avatar in the Bell Labs mail system. Other drawings by Renee became avatars for ken, r, rsc, and others. (Of course, Peter Weinberger's was his own iconic face.)

Another Bell Labs activity led to Renee creating Glenda, the Plan 9 mascot, a distant cousin of the WFMU gopher.

When we started the Go project we needed a logo, and Renee volunteered to draw it. It was featured on the first Go T-shirt and the Google Code site.

For the open source launch in 2009, Renee suggested adapting the WFMU gopher as a mascot. And the Go gopher was born:

(The gopher has no name. He's just the "Go gopher".)
For the launch of the Go App Engine runtime at Google I/O 2011 we engaged Squishable to manufacture the plush gophers. This was the first time the gopher was colored blue and appeared in three dimensions. The first prototype was kinda hairy:

But the second one was just right:

Around the same time, Renee roughed out a gopher in clay. This inspired a refined sculpture that became a vinyl figurine made by Kidrobot. The vinyls were first distributed at OSCON 2011.

The gopher therefore exists in many forms, but he has always been Renee's creation. He stands for the Go project and Go programmers everywhere, and is one of the most popular things in the Go world.
The Go gopher is a character; a unique creation. He's not any old gopher, just as Snoopy is not any old cartoon dog.
The gopher images are Creative Commons Attribution 3.0 licensed. That means you can play with the images but you must give credit to their creator (Renee French) wherever they are used.
Here are a few gopher adaptations that people have used as mascots for user groups and similar organizations.

They're cute and we like them, but by the Creative Commons rules the groups should give Renee credit, perhaps as a mention on the user group web site.
The vinyl and plush gophers are copyrighted designs; accept no substitutes! But how can you get one? Their natural habitat is near high concentrations of Go programmers, and their worldwide population is growing. They may be purchased from the Google Store, although the supply can be irregular. (These elusive creatures have been spotted in all kinds of places.)
Perhaps the best way to get a gopher is to catch one in the wild at a Go conference. There are two big chances this year: GopherCon (Denver, April 24-26) and dotGo (Paris, October 10).

(Photo by Noah Lorang.)
Go Concurrency Patterns: Pipelines and cancellation
13 March 2014
Introduction
Go's concurrency primitives make it easy to construct streaming data pipelines that make efficient use of I/O and multiple CPUs. This article presents examples of such pipelines, highlights subtleties that arise when operations fail, and introduces techniques for dealing with failures cleanly.
What is a pipeline?
There's no formal definition of a pipeline in Go; it's just one of many kinds of concurrent programs. Informally, a pipeline is a series of stages connected by channels, where each stage is a group of goroutines running the same function. In each stage, the goroutines
- receive values from upstream via inbound channels
- perform some function on that data, usually producing new values
- send values downstream via outbound channels
Each stage has any number of inbound and outbound channels, except the first and last stages, which have only outbound or inbound channels, respectively. The first stage is sometimes called the source or producer; the last stage, the sink or consumer.
We'll begin with a simple example pipeline to explain the ideas and techniques. Later, we'll present a more realistic example.
Squaring numbers
Consider a pipeline with three stages.
The first stage, gen, is a function that converts a list of integers to a channel that emits the integers in the list. The gen function starts a goroutine that sends the integers on the channel and closes the channel when all the values have been sent:
func gen(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}
The second stage, sq, receives integers from a channel and returns a channel that emits the square of each received integer. After the inbound channel is closed and this stage has sent all the values downstream, it closes the outbound channel:
func sq(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * n
        }
        close(out)
    }()
    return out
}
The main function sets up the pipeline and runs the final stage: it receives values from the second stage and prints each one, until the channel is closed:
func main() {
    // Set up the pipeline.
    c := gen(2, 3)
    out := sq(c)

    // Consume the output.
    fmt.Println(<-out) // 4
    fmt.Println(<-out) // 9
}
Since sq has the same type for its inbound and outbound channels, we can compose it any number of times. We can also rewrite main as a range loop, like the other stages:
func main() {
    // Set up the pipeline and consume the output.
    for n := range sq(sq(gen(2, 3))) {
        fmt.Println(n) // 16 then 81
    }
}
Fan-out, fan-in
Multiple functions can read from the same channel until that channel is closed; this is called fan-out. This provides a way to distribute work amongst a group of workers to parallelize CPU use and I/O.
A function can read from multiple inputs and proceed until all are closed by multiplexing the input channels onto a single channel that's closed when all the inputs are closed. This is called fan-in.
We can change our pipeline to run two instances of sq, each reading from the same input channel. We introduce a new function, merge, to fan in the results:
func main() {
    in := gen(2, 3)

    // Distribute the sq work across two goroutines that both read from in.
    c1 := sq(in)
    c2 := sq(in)

    // Consume the merged output from c1 and c2.
    for n := range merge(c1, c2) {
        fmt.Println(n) // 4 then 9, or 9 then 4
    }
}
The merge function converts a list of channels to a single channel by starting a goroutine for each inbound channel that copies the values to the sole outbound channel. Once all the output goroutines have been started, merge starts one more goroutine to close the outbound channel after all sends on that channel are done.

Sends on a closed channel panic, so it's important to ensure all sends are done before calling close. The sync.WaitGroup type provides a simple way to arrange this synchronization:
func merge(cs ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    out := make(chan int)

    // Start an output goroutine for each input channel in cs. output
    // copies values from c to out until c is closed, then calls wg.Done.
    output := func(c <-chan int) {
        for n := range c {
            out <- n
        }
        wg.Done()
    }
    wg.Add(len(cs))
    for _, c := range cs {
        go output(c)
    }

    // Start a goroutine to close out once all the output goroutines are
    // done. This must start after the wg.Add call.
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}
Stopping short
There is a pattern to our pipeline functions:
- stages close their outbound channels when all the send operations are done.
- stages keep receiving values from inbound channels until those channels are closed.
This pattern allows each receiving stage to be written as a range loop and ensures that all goroutines exit once all values have been successfully sent downstream.
But in real pipelines, stages don't always receive all the inbound values. Sometimes this is by design: the receiver may only need a subset of values to make progress. More often, a stage exits early because an inbound value represents an error in an earlier stage. In either case the receiver should not have to wait for the remaining values to arrive, and we want earlier stages to stop producing values that later stages don't need.
In our example pipeline, if a stage fails to consume all the inbound values, the goroutines attempting to send those values will block indefinitely:
    // Consume the first value from output.
    out := merge(c1, c2)
    fmt.Println(<-out) // 4 or 9
    return
    // Since we didn't receive the second value from out,
    // one of the output goroutines is hung attempting to send it.
}
This is a resource leak: goroutines consume memory and runtime resources, and heap references in goroutine stacks keep data from being garbage collected. Goroutines are not garbage collected; they must exit on their own.
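To make the leak concrete, here is a minimal standalone sketch (not part of the pipeline code above) that uses runtime.NumGoroutine to show a sender stuck forever on an unbuffered channel:

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    fmt.Println("goroutines at start:", runtime.NumGoroutine())

    c := make(chan int) // unbuffered: a send blocks until someone receives
    go func() {
        c <- 1 // no receiver ever arrives, so this send blocks forever
    }()

    time.Sleep(100 * time.Millisecond) // give the goroutine time to reach the send
    fmt.Println("goroutines now:", runtime.NumGoroutine())
    // The second count is one higher and stays that way: the blocked sender
    // can never exit, and whatever its stack references is never freed.
}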
We need to arrange for the upstream stages of our pipeline to exit even when the downstream stages fail to receive all the inbound values. One way to do this is to change the outbound channels to have a buffer. A buffer can hold a fixed number of values; send operations complete immediately if there's room in the buffer:
c := make(chan int, 2) // buffer size 2
c <- 1                 // succeeds immediately
c <- 2                 // succeeds immediately
c <- 3                 // blocks until another goroutine does <-c and receives 1
When the number of values to be sent is known at channel creation time, a buffer can simplify the code. For example, we can rewrite gen to copy the list of integers into a buffered channel and avoid creating a new goroutine:
func gen(nums ...int) <-chan int {
    out := make(chan int, len(nums))
    for _, n := range nums {
        out <- n
    }
    close(out)
    return out
}
Returning to the blocked goroutines in our pipeline, we might consider adding a buffer to the outbound channel returned by merge:
func merge(cs ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    out := make(chan int, 1) // enough space for the unread inputs
    // ... the rest is unchanged ...
While this fixes the blocked goroutine in this program, this is bad code. The choice of buffer size of 1 here depends on knowing the number of values merge will receive and the number of values downstream stages will consume. This is fragile: if we pass an additional value to gen, or if the downstream stage reads any fewer values, we will again have blocked goroutines.
Instead, we need to provide a way for downstream stages to indicate to the senders that they will stop accepting input.
Explicit cancellation
When main decides to exit without receiving all the values from out, it must tell the goroutines in the upstream stages to abandon the values they're trying to send. It does so by sending values on a channel called done. It sends two values since there are potentially two blocked senders:
func main() {
    in := gen(2, 3)

    // Distribute the sq work across two goroutines that both read from in.
    c1 := sq(in)
    c2 := sq(in)

    // Consume the first value from output.
    done := make(chan struct{}, 2)
    out := merge(done, c1, c2)
    fmt.Println(<-out) // 4 or 9

    // Tell the remaining senders we're leaving.
    done <- struct{}{}
    done <- struct{}{}
}
The sending goroutines replace their send operation with a select statement that proceeds either when the send on out happens or when they receive a value from done. The value type of done is the empty struct because the value doesn't matter: it is the receive event that indicates the send on out should be abandoned. The output goroutines continue looping on their inbound channel, c, so the upstream stages are not blocked. (We'll discuss in a moment how to allow this loop to return early.)
func merge(done <-chan struct{}, cs ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    out := make(chan int)

    // Start an output goroutine for each input channel in cs. output
    // copies values from c to out until c is closed or it receives a value
    // from done, then output calls wg.Done.
    output := func(c <-chan int) {
        for n := range c {
            select {
            case out <- n:
            case <-done:
            }
        }
        wg.Done()
    }
    // ... the rest is unchanged ...
This approach has a problem: each downstream receiver needs to know the number of potentially blocked upstream senders and arrange to signal those senders on early return. Keeping track of these counts is tedious and error-prone.
We need a way to tell an unknown and unbounded number of goroutines to stop sending their values downstream. In Go, we can do this by closing a channel, because a receive operation on a closed channel can always proceed immediately, yielding the element type's zero value.
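As a standalone illustration of that property (a small sketch, not part of the pipeline code):

package main

import "fmt"

func main() {
    done := make(chan struct{})
    close(done)

    // A receive on a closed channel proceeds immediately and yields the
    // element type's zero value; the second result reports whether the
    // value came from a send (false here, since the channel is closed).
    v, ok := <-done
    fmt.Printf("%#v %v\n", v, ok) // struct {}{} false

    // Every later receive behaves the same way, which is why closing a
    // channel works as a broadcast to any number of waiting receivers.
    <-done
    <-done
}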
This means that main can unblock all the senders simply by closing the done channel. This close is effectively a broadcast signal to the senders. We extend each of our pipeline functions to accept done as a parameter and arrange for the close to happen via a defer statement, so that all return paths from main will signal the pipeline stages to exit.
func main() {
    // Set up a done channel that's shared by the whole pipeline,
    // and close that channel when this pipeline exits, as a signal
    // for all the goroutines we started to exit.
    done := make(chan struct{})
    defer close(done)

    in := gen(done, 2, 3)

    // Distribute the sq work across two goroutines that both read from in.
    c1 := sq(done, in)
    c2 := sq(done, in)

    // Consume the first value from output.
    out := merge(done, c1, c2)
    fmt.Println(<-out) // 4 or 9

    // done will be closed by the deferred call.
}
Each of our pipeline stages is now free to return as soon as done is closed. The output routine in merge can return without draining its inbound channel, since it knows the upstream sender, sq, will stop attempting to send when done is closed. output ensures wg.Done is called on all return paths via a defer statement:
func merge(done <-chan struct{}, cs ...<-chan int) <-chan int {
    var wg sync.WaitGroup
    out := make(chan int)

    // Start an output goroutine for each input channel in cs. output
    // copies values from c to out until c or done is closed, then calls
    // wg.Done.
    output := func(c <-chan int) {
        defer wg.Done()
        for n := range c {
            select {
            case out <- n:
            case <-done:
                return
            }
        }
    }
    // ... the rest is unchanged ...
Similarly, sq can return as soon as done is closed. sq ensures its out channel is closed on all return paths via a defer statement:
func sq(done <-chan struct{}, in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            select {
            case out <- n * n:
            case <-done:
                return
            }
        }
    }()
    return out
}
Here are the guidelines for pipeline construction:
- stages close their outbound channels when all the send operations are done.
- stages keep receiving values from inbound channels until those channels are closed or the senders are unblocked.
Pipelines unblock senders either by ensuring there's enough buffer for all the values that are sent or by explicitly signalling senders when the receiver may abandon the channel.
Digesting a tree
Let's consider a more realistic pipeline.
MD5 is a message-digest algorithm that's useful as a file checksum. The command line utility md5sum prints digest values for a list of files.
% md5sum *.go
d47c2bbc28298ca9befdfbc5d3aa4e65 bounded.go
ee869afd31f83cbb2d10ee81b2b831dc parallel.go
b88175e65fdcbc01ac08aaf1fd9b5e96 serial.go
Our example program is like md5sum but instead takes a single directory as an argument and prints the digest values for each regular file under that directory, sorted by path name.
% go run serial.go .
d47c2bbc28298ca9befdfbc5d3aa4e65 bounded.go
ee869afd31f83cbb2d10ee81b2b831dc parallel.go
b88175e65fdcbc01ac08aaf1fd9b5e96 serial.go
The main function of our program invokes a helper function MD5All, which returns a map from path name to digest value, then sorts and prints the results:
func main() {
    // Calculate the MD5 sum of all files under the specified directory,
    // then print the results sorted by path name.
    m, err := MD5All(os.Args[1])
    if err != nil {
        fmt.Println(err)
        return
    }
    var paths []string
    for path := range m {
        paths = append(paths, path)
    }
    sort.Strings(paths)
    for _, path := range paths {
        fmt.Printf("%x %s\n", m[path], path)
    }
}
The MD5All function is the focus of our discussion. In serial.go, the implementation uses no concurrency and simply reads and sums each file as it walks the tree.
// MD5All reads all the files in the file tree rooted at root and returns a map
// from file path to the MD5 sum of the file's contents. If the directory walk
// fails or any read operation fails, MD5All returns an error.
func MD5All(root string) (map[string][md5.Size]byte, error) {
    m := make(map[string][md5.Size]byte)
    err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if !info.Mode().IsRegular() {
            return nil
        }
        data, err := ioutil.ReadFile(path)
        if err != nil {
            return err
        }
        m[path] = md5.Sum(data)
        return nil
    })
    if err != nil {
        return nil, err
    }
    return m, nil
}
Parallel digestion
In parallel.go, we split MD5All into a two-stage pipeline. The first stage, sumFiles, walks the tree, digests each file in a new goroutine, and sends the results on a channel with value type result:
type result struct {
    path string
    sum  [md5.Size]byte
    err  error
}
sumFiles returns two channels: one for the results and another for the error returned by filepath.Walk. The walk function starts a new goroutine to process each regular file, then checks done. If done is closed, the walk stops immediately:
func sumFiles(done <-chan struct{}, root string) (<-chan result, <-chan error) {
    // For each regular file, start a goroutine that sums the file and sends
    // the result on c. Send the result of the walk on errc.
    c := make(chan result)
    errc := make(chan error, 1)
    go func() {
        var wg sync.WaitGroup
        err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.Mode().IsRegular() {
                return nil
            }
            wg.Add(1)
            go func() {
                data, err := ioutil.ReadFile(path)
                select {
                case c <- result{path, md5.Sum(data), err}:
                case <-done:
                }
                wg.Done()
            }()
            // Abort the walk if done is closed.
            select {
            case <-done:
                return errors.New("walk canceled")
            default:
                return nil
            }
        })
        // Walk has returned, so all calls to wg.Add are done. Start a
        // goroutine to close c once all the sends are done.
        go func() {
            wg.Wait()
            close(c)
        }()
        // No select needed here, since errc is buffered.
        errc <- err
    }()
    return c, errc
}
MD5All receives the digest values from c. MD5All returns early on error, closing done via a defer:
func MD5All(root string) (map[string][md5.Size]byte, error) {
    // MD5All closes the done channel when it returns; it may do so before
    // receiving all the values from c and errc.
    done := make(chan struct{})
    defer close(done)

    c, errc := sumFiles(done, root)

    m := make(map[string][md5.Size]byte)
    for r := range c {
        if r.err != nil {
            return nil, r.err
        }
        m[r.path] = r.sum
    }
    if err := <-errc; err != nil {
        return nil, err
    }
    return m, nil
}
Bounded parallelism
The MD5All implementation in parallel.go starts a new goroutine for each file. In a directory with many large files, this may allocate more memory than is available on the machine.
We can limit these allocations by bounding the number of files read in parallel. In bounded.go, we do this by creating a fixed number of goroutines for reading files. Our pipeline now has three stages: walk the tree, read and digest the files, and collect the digests.
The first stage, walkFiles, emits the paths of regular files in the tree:
func walkFiles(done <-chan struct{}, root string) (<-chan string, <-chan error) {
    paths := make(chan string)
    errc := make(chan error, 1)
    go func() {
        // Close the paths channel after Walk returns.
        defer close(paths)
        // No select needed for this send, since errc is buffered.
        errc <- filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
            if err != nil {
                return err
            }
            if !info.Mode().IsRegular() {
                return nil
            }
            select {
            case paths <- path:
            case <-done:
                return errors.New("walk canceled")
            }
            return nil
        })
    }()
    return paths, errc
}
The middle stage starts a fixed number of digester goroutines that receive file names from paths and send results on channel c:
func digester(done <-chan struct{}, paths <-chan string, c chan<- result) {
    for path := range paths {
        data, err := ioutil.ReadFile(path)
        select {
        case c <- result{path, md5.Sum(data), err}:
        case <-done:
            return
        }
    }
}
Unlike our previous examples, digester does not close its output channel, as multiple goroutines are sending on a shared channel. Instead, code in MD5All arranges for the channel to be closed when all the digesters are done:
    // Start a fixed number of goroutines to read and digest files.
    c := make(chan result)
    var wg sync.WaitGroup
    const numDigesters = 20
    wg.Add(numDigesters)
    for i := 0; i < numDigesters; i++ {
        go func() {
            digester(done, paths, c)
            wg.Done()
        }()
    }
    go func() {
        wg.Wait()
        close(c)
    }()
We could instead have each digester create and return its own output channel, but then we would need additional goroutines to fan-in the results.
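For comparison, a sketch of that alternative (not code from bounded.go; the digesterOwned name is hypothetical, and it reuses the result type and the imports shown earlier) might look like this, with each digester owning and closing its own channel:

// digesterOwned is a hypothetical variant in which each digester returns its
// own output channel and is therefore free to close it.
func digesterOwned(done <-chan struct{}, paths <-chan string) <-chan result {
    c := make(chan result)
    go func() {
        defer close(c) // safe: this goroutine is the only sender on c
        for path := range paths {
            data, err := ioutil.ReadFile(path)
            select {
            case c <- result{path, md5.Sum(data), err}:
            case <-done:
                return
            }
        }
    }()
    return c
}

The caller would then need a merge step, like the merge function from earlier in this article adapted to chan result, to fan the per-digester channels back into a single stream; that extra machinery is what the shared-channel approach above avoids.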
The final stage receives all the results from c, then checks the error from errc. This check cannot happen any earlier, since before this point, walkFiles may block sending values downstream:
    m := make(map[string][md5.Size]byte)
    for r := range c {
        if r.err != nil {
            return nil, r.err
        }
        m[r.path] = r.sum
    }
    // Check whether the Walk failed.
    if err := <-errc; err != nil {
        return nil, err
    }
    return m, nil
}
Conclusion
This article has presented techniques for constructing streaming data pipelines in Go. Dealing with failures in such pipelines is tricky, since each stage in the pipeline may block attempting to send values downstream, and the downstream stages may no longer care about the incoming data. We showed how closing a channel can broadcast a "done" signal to all the goroutines started by a pipeline and defined guidelines for constructing pipelines correctly.
Further reading:
- Go Concurrency Patterns (video) presents the basics of Go's concurrency primitives and several ways to apply them.
- Advanced Go Concurrency Patterns (video) covers more complex uses of Go's primitives, especially select.
- Douglas McIlroy's paper Squinting at Power Series shows how Go-like concurrency provides elegant support for complex calculations.
Go talks at FOSDEM 2014
24 February 2014
Introduction
At FOSDEM on the 2nd of February 2014, members of the Go community presented a series of talks in the Go Devroom. The day was a huge success, with 13 great talks presented to a consistently jam-packed room.
Video recordings of the talks are now available, and a selection of these videos are presented below.
The complete series of talks is available as a YouTube playlist. (You can also get them directly at the FOSDEM video archive.)
Scaling with Go: YouTube's Vitess
Google Engineer Sugu Sougoumarane described how he and his team built Vitess in Go to help scale YouTube.
Vitess is a set of servers and tools primarily developed in Go. It helps scale MySQL databases for the web, and is currently used as a fundamental component of YouTube's MySQL infrastructure.
The talk covers some history about how and why the team chose Go, and how it paid off. Sugu also talks about tips and techniques used to scale Vitess using Go.
The slides for the talk are available here.
Camlistore
Camlistore is designed to be "your personal storage system for life, putting you in control, and designed to last." It's open source, under nearly 4 years of active development, and extremely flexible. In this talk, Brad Fitzpatrick and Mathieu Lonjaret explain why they built it, what it does, and talk about its design.
Write your own Go compiler
Elliot Stoneham explains the potential for Go as a portable language and reviews the Go tools that make that such an exciting possibility.
He said: "Based on my experiences writing an experimental Go to Haxe translator, I'll talk about the practical issues of code generation and runtime emulation required. I'll compare some of my design decisions with those of two other Go compiler/translators that build on the go.tools library. My aim is to encourage you to try one of these new 'mutant' Go compilers. I hope some of you will be inspired to contribute to one of them or even to write a new one of your own."
More
There were many more great talks, so please check out the complete series as a YouTube playlist. In particular, the lightning talks were a lot of fun.
I would like to give my personal thanks to the excellent speakers, Mathieu Lonjaret for managing the video gear, and to the FOSDEM staff for making all this possible.
See the index for more articles.