I am trying to read multiple files in parallel, in such a way that each goroutine reading a file writes its data to a channel, and a single goroutine listens on that channel and adds the data to a map. Here is my playground example.
Below is the code from the playground:
package main

import (
    "fmt"
    "sync"
)

func main() {
    var myFiles = []string{"file1", "file2", "file3"}
    var myMap = make(map[string][]byte)
    dataChan := make(chan fileData, len(myFiles))

    wg := sync.WaitGroup{}
    defer close(dataChan)

    // we create a wait group of N
    wg.Add(len(myFiles))
    for _, file := range myFiles {
        // we create N go-routines, one per file, each one will return a struct containing their filename and bytes from
        // the file via the dataChan channel
        go getBytesFromFile(file, dataChan, &wg)
    }

    // we wait until the wait group is decremented to zero by each instance of getBytesFromFile() calling waitGroup.Done()
    wg.Wait()

    for i := 0; i < len(myFiles); i++ {
        // we can now read from the data channel N times.
        file := <-dataChan
        myMap[file.name] = file.bytes
    }

    fmt.Printf("%+v\n", myMap)
}

type fileData struct {
    name  string
    bytes []byte
}

// how to handle error from this method if reading file got messed up?
func getBytesFromFile(file string, dataChan chan fileData, waitGroup *sync.WaitGroup) {
    bytes := openFileAndGetBytes(file)
    dataChan <- fileData{name: file, bytes: bytes}
    waitGroup.Done()
}

func openFileAndGetBytes(file string) []byte {
    return []byte(fmt.Sprintf("these are some bytes for file %s", file))
}
Problem Statement
How can I use golang.org/x/sync/errgroup to wait on and handle errors from goroutines, or is there a better way, such as using a semaphore? For example, if any one of my goroutines fails to read data from its file, I want to cancel all the remaining ones (and that error should be the one bubbled back up to the caller). In the success case, it should automatically wait for all the supplied goroutines to complete successfully.
I also don't want to spawn 100 goroutines if the total number of files is 100. I want to control the parallelism, if there is any way to do that.
How can I use golang.org/x/sync/errgroup to wait on and handle errors from goroutines, or is there a better way, such as using a semaphore? For example [...] I want to cancel all the remaining ones (and that error should be the one bubbled back up to the caller). In the success case, it should automatically wait for all the supplied goroutines to complete successfully.
There are many ways to communicate error states across goroutines. errgroup does a bunch of heavy lifting though, and is appropriate for this case. Otherwise you're going to end up implementing the same thing.
To use errgroup we'll need to handle errors (and for your demo, generate some). In addition, to cancel existing goroutines, we'll use a context from errgroup.WithContext.
From the errgroup reference,
Package errgroup provides synchronization, error propagation, and Context cancelation for groups of goroutines working on subtasks of a common task.
Your playground example doesn't do any error handling. We can't collect and cancel on errors if we don't do any error handling. So I added some code to inject error handling:
func openFileAndGetBytes(file string) (string, error) {
    if file == "file2" {
        return "", fmt.Errorf("%s cannot be read", file)
    }
    return fmt.Sprintf("these are some bytes for file %s", file), nil
}
Then that error had to be passed back from getBytesFromFile as well:
func getBytesFromFile(file string, dataChan chan fileData) error {
    bytes, err := openFileAndGetBytes(file)
    if err == nil {
        dataChan <- fileData{name: file, bytes: bytes}
    }
    return err
}
Now that we've done that, we can turn our attention to how we're going to start up a number of goroutines.
I also don't want to spawn 100 goroutines if the total number of files is 100. I want to control the parallelism, if there is any way to do that.
Written well, the number of tasks, channel size, and number of workers are typically independent values. The trick is to use channel closure - and in your case, context cancellation - to communicate state between the goroutines. We'll need an additional channel for the distribution of filenames, and an additional goroutine for the collection of the results.
To illustrate this point, my code uses 3 workers, and adds a few more files. My channels are unbuffered. This allows us to see some of the files get processed, while others are aborted. If you buffer the channels, the example will still work, but it's more likely for additional work to be processed before the cancellation is handled. Experiment with buffer size along with worker count and number of files to process.
var myFiles = []string{"file1", "file2", "file3", "file4", "file5", "file6"}
fileChan := make(chan string)
dataChan := make(chan fileData)
To start up the workers, instead of starting one for each file, we start the number we desire - here, 3.
for i := 0; i < 3; i++ {
    worker_num := i
    g.Go(func() error {
        for file := range fileChan {
            if err := getBytesFromFile(file, dataChan); err != nil {
                fmt.Println("worker", worker_num, "failed to process", file, ":", err.Error())
                return err
            } else if err := ctx.Err(); err != nil {
                fmt.Println("worker", worker_num, "context error in worker:", err.Error())
                return err
            }
        }
        fmt.Println("worker", worker_num, "processed all work on channel")
        return nil
    })
}
The workers call your getBytesFromFile function. If it returns an err, we return an err. errgroup will cancel our context automatically in this case. However, the exact order of operations is not deterministic, so more files may or may not get processed before the context is cancelled. I'll show several possibilities below.
By ranging over fileChan, the worker automatically picks up the end of work from the channel closure. If we get an error, we can return it to errgroup immediately. Otherwise, if the context has been cancelled, we can return the cancellation error immediately.
You might think that g.Go would automatically cancel our function. But it cannot. There is no way to cancel a running function in Go other than process termination. errgroup.Group.Go's function argument must cancel itself when appropriate based on the state of its context.
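For illustration, here is an equivalent, select-based shape for the worker body (a sketch of my own, using the same ctx, fileChan, dataChan and getBytesFromFile as the rest of this answer); it makes the self-cancellation explicit:
g.Go(func() error {
    for {
        select {
        case <-ctx.Done():
            // the context was cancelled (some other worker failed): stop ourselves
            return ctx.Err()
        case file, ok := <-fileChan:
            if !ok {
                return nil // fileChan was closed: no more work
            }
            if err := getBytesFromFile(file, dataChan); err != nil {
                return err
            }
        }
    }
})
Both shapes work here because the collector in main keeps draining dataChan until it is closed, so a worker blocked on sending to dataChan cannot hang.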
Now we can turn our attention to the thing that puts the files on fileChan. We have 2 options here. We could use a buffered channel the size of myFiles, like you did, and fill the entire channel with pending jobs up front; this is only an option if you know the number of jobs when you create the channel. The other option is to use an additional "distribution" goroutine that can block on writes to fileChan, so that our "main" goroutine can continue.
// dispatch files
g.Go(func() error {
    defer close(fileChan)
    done := ctx.Done()
    for _, file := range myFiles {
        select {
        case fileChan <- file:
            continue
        case <-done:
            break
        }
    }
    return ctx.Err()
})
I'm not sure it's strictly necessary to put this in the same errgroup in this case, because we can't get an error in the distributor goroutine. But this general pattern, drawn from the Pipeline example from errgroup, works regardless of whether the work dispatcher might generate errors.
This function is pretty simple, but the magic is in the select along with the ctx.Done() channel. Either we write to the work channel, or we bail out if our context is done. This allows us to stop distributing work as soon as one worker has failed on a file.
We defer close(fileChan) so that, regardless of why we have finished (either we distributed all work, or the context was cancelled), the workers know there will be no more work on the incoming work queue (ie fileChan).
We need one more synchronization mechanism: once all the work is distributed and all the results are in, or the work has finished being cancelled (i.e., after our errgroup's Wait() returns), we need to close our results channel, dataChan. This signals the results collector that there are no more results to be collected.
var err error // we'll need this later!
go func() {
    err = g.Wait()
    close(dataChan)
}()
We can't - and don't need to - put this in the errgroup.Group. The function can't return an error, and it can't wait for itself to close(dataChan). So it goes into a regular old goroutine, sans errgroup.
Finally we can collect the results. With dedicated worker goroutines, a distributor goroutine, and a goroutine waiting on the work and notifying that there will be no more writes to the dataChan, we can collect all the results right in the "primary" goroutine in main.
for data := range dataChan {
    myMap[data.name] = data.bytes
}

if err != nil { // this was set in our final goroutine, remember
    fmt.Println("errgroup Error:", err.Error())
}
I made a few small changes so that it was easier to see the output. You may already have noticed I changed the file contents from []byte to string. This was purely so that the results were easy to read. To that same end, I am using encoding/json to format the results so that it is easy to read them and paste them into SO. This is a common pattern that I often use to indent structured data:
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
if err := enc.Encode(myMap); err != nil {
    panic(err)
}
Finally we're ready to run. Now we can see a number of different results depending on just what order the goroutines execute. But all of them are valid execution paths.
worker 2 failed to process file2 : file2 cannot be read
worker 0 context error in worker: context canceled
worker 1 context error in worker: context canceled
errgroup Error: file2 cannot be read
{
"file1": "these are some bytes for file file1",
"file3": "these are some bytes for file file3"
}
Program exited.
In this result, the remaining work (file4, file5, and file6) was not added to the channel. Remember, an unbuffered channel stores no data. For those tasks to be written to the channel, a worker would have to be there to read them. Instead, the context was cancelled after file2 failed, and the distribution function followed the <-done path within its select. file1 and file3 were already processed.
Here's a different result (I just ran the playground share a few times to get different results).
worker 1 failed to process file2 : file2 cannot be read
worker 2 processed all work on channel
worker 0 processed all work on channel
errgroup Error: file2 cannot be read
{
"file1": "these are some bytes for file file1",
"file3": "these are some bytes for file file3",
"file4": "these are some bytes for file file4",
"file5": "these are some bytes for file file5",
"file6": "these are some bytes for file file6"
}
In this case, it looks a little like our cancellation failed. But what really happened is that the goroutines just "happened" to queue and finish the rest of the work before errgroup picked up on worker 1's failure and cancelled the context.
What errgroup does
When you use errgroup, you're really getting 2 things out of it:
- easy access to the first error your workers returned;
- a context that errgroup will cancel for you when one of the workers returns an error.
Keep in mind that errgroup does not cancel goroutines. This tripped me up a bit at first. errgroup cancels the context. It's your responsibility to apply the status of that context to your goroutines (remember, the goroutine must end itself, errgroup can't end it).
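A tiny, self-contained demonstration of that point (my own example, not part of the program above): the second goroutine only stops because it chooses to watch ctx.Done() itself, and Wait still returns the first error.
package main

import (
    "context"
    "errors"
    "fmt"

    "golang.org/x/sync/errgroup"
)

func main() {
    g, ctx := errgroup.WithContext(context.Background())
    g.Go(func() error { return errors.New("boom") }) // the first error cancels ctx
    g.Go(func() error {
        <-ctx.Done()     // this goroutine has to notice the cancellation itself...
        return ctx.Err() // ...and decide to return; errgroup cannot stop it
    })
    fmt.Println(g.Wait()) // prints "boom": Wait returns the first error
}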
A final aside about contexts with file operations, and failing outstanding work
Most of your file operations, e.g. io.Copy or os.ReadFile, are actually a loop of subsequent Read operations. But io and os don't support contexts directly. So if you have a worker reading a file, and you don't implement the Read loop yourself, you won't have an opportunity to cancel based on context. That's probably okay in your case - sure, you may have read some more files than you really needed to, but only because you were already reading them when the error occurred. I would personally accept this state of affairs and not implement my own read loop.
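If you did want to bail out of a large file mid-read, you would have to write the read loop yourself, something like this sketch (my own illustration, assuming bytes, context, io and os are imported; readFileCtx is a made-up name):
func readFileCtx(ctx context.Context, path string) ([]byte, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()

    var buf bytes.Buffer
    chunk := make([]byte, 64*1024)
    for {
        if err := ctx.Err(); err != nil {
            return nil, err // context cancelled: abandon the rest of the file
        }
        n, err := f.Read(chunk)
        buf.Write(chunk[:n])
        if err == io.EOF {
            return buf.Bytes(), nil
        }
        if err != nil {
            return nil, err
        }
    }
}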
The code
https://go.dev/play/p/9qfESp_eB-C
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "os"

    "golang.org/x/sync/errgroup"
)

func main() {
    var myFiles = []string{"file1", "file2", "file3", "file4", "file5", "file6"}
    fileChan := make(chan string)
    dataChan := make(chan fileData)

    g, ctx := errgroup.WithContext(context.Background())

    for i := 0; i < 3; i++ {
        worker_num := i
        g.Go(func() error {
            for file := range fileChan {
                if err := getBytesFromFile(file, dataChan); err != nil {
                    fmt.Println("worker", worker_num, "failed to process", file, ":", err.Error())
                    return err
                } else if err := ctx.Err(); err != nil {
                    fmt.Println("worker", worker_num, "context error in worker:", err.Error())
                    return err
                }
            }
            fmt.Println("worker", worker_num, "processed all work on channel")
            return nil
        })
    }

    // dispatch files
    g.Go(func() error {
        defer close(fileChan)
        done := ctx.Done()
        for _, file := range myFiles {
            if err := ctx.Err(); err != nil {
                return err
            }
            select {
            case fileChan <- file:
                continue
            case <-done:
                break
            }
        }
        return ctx.Err()
    })

    var err error
    go func() {
        err = g.Wait()
        close(dataChan)
    }()

    var myMap = make(map[string]string)
    for data := range dataChan {
        myMap[data.name] = data.bytes
    }

    if err != nil {
        fmt.Println("errgroup Error:", err.Error())
    }

    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", " ")
    if err := enc.Encode(myMap); err != nil {
        panic(err)
    }
}

type fileData struct {
    name,
    bytes string
}

func getBytesFromFile(file string, dataChan chan fileData) error {
    bytes, err := openFileAndGetBytes(file)
    if err == nil {
        dataChan <- fileData{name: file, bytes: bytes}
    }
    return err
}

func openFileAndGetBytes(file string) (string, error) {
    if file == "file2" {
        return "", fmt.Errorf("%s cannot be read", file)
    }
    return fmt.Sprintf("these are some bytes for file %s", file), nil
}
I am trying to use gzip.NewWriter to stream data in and write compressed data to a CSV file. Everything works except the footer doesn't seem to get written when I defer the gzip writer's Close(). I get an Unexpected end of data message when I try to open the file with 7-zip.
Note: I have seen this question (and answer) but I feel like my problem is a little different, because I am writing to a file and not returning the bytes.
As I understand it, OP in that question was returning the bytes before writing the footer. But because I am just writing to a file, I shouldn't be running into the same issue.
Here is a snippet of my code. I have removed all error-checking for the sake of brevity.
// Worker that reads invoices and compresses them into a single file.
func compressWorker(c Config, archived <-chan Invoice, done chan<- int) {
    now := time.Now().Format("2006-01")
    path := filepath.Join(c.OutDir, now+".csv.gz")
    readColumns := false

    f, _ := os.Create(path)
    defer closeWriter("csv file", f)

    cw := gzip.NewWriter(f)
    cw.Name = now + ".csv"
    // * This is where I originally had my defer Close call.
    // defer closeWriter("compression stream", cw)

    for i := range archived {
        if !readColumns {
            b := []byte(i.CsvColumns() + ",DateLastSaved\n")
            cw.Write(b)
            readColumns = true
        }

        b := []byte(i.ToCsvString() + "," + time.Now().Format("2006-01-02") + "\n")
        cw.Write(b)
    }

    // Ordinarily I'd say we defer this earlier, but that doesn't work for some
    // reason.
    closeWriter("compression writer", cw)
    done <- 1
}

// Print an error with a prefix string and exit.
func HandleErrorWithPrefix(e error, p string) {
    if e != nil {
        log.Fatalf("Error: %v; %v\n", p, e)
    }
}

func closeWriter(n string, wc io.WriteCloser) {
    log.Printf("closing %v\n", n)
    err := wc.Close()
    HandleErrorWithPrefix(err, fmt.Sprintf("error closing writer '%v'", n))
}
Interestingly, I get this from closeWriter with my original code and deferred closing:
2021/10/13 09:05:42 closing compression writer
2021/10/13 09:05:42 closing csv file
However, I get this when I close cw without deferring:
2021/10/13 09:06:01 closing compression stream
I get no errors from closeWriter, though, so I'm not sure why the file wouldn't be closing. Deferred calls run last-in-first-out, right?
So it appears my problem wasn't much different: my program was ending before my writers had enough time to fully close.
I solved this, with the help of everyone commenting, by moving my done send to a deferred closure at the beginning of the worker function.
func compressWorker(c Config, archived <-chan Invoice, done chan<- int) {
    // Defer here ensures it will always be run last.
    defer func() {
        done <- 1
    }()
    // ...
    defer f.Close()
    defer cw.Close()
    // ...
}
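For anyone hitting the same thing, here is the shape of the fix in isolation (a stripped-down sketch, not my real worker; error handling omitted as in the snippets above). Defers run last-in-first-out, so deferring the done signal first guarantees it fires only after both writers have been closed:
func worker(path string, lines <-chan string, done chan<- int) {
    defer func() { done <- 1 }() // runs third: signal only after everything below is closed

    f, _ := os.Create(path)
    defer f.Close() // runs second

    zw := gzip.NewWriter(f)
    defer zw.Close() // runs first: flushes the gzip footer before the file closes

    for line := range lines {
        zw.Write([]byte(line + "\n"))
    }
}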
I am trying to read data from a file and immediately send each chunk as it is read, without waiting for the other goroutine to finish reading the whole file. I have two functions:
func ReadFile(stream chan []byte, stop chan bool) {
    file.Lock()
    defer file.Unlock()

    dir, _ := os.Getwd()
    file, _ := os.Open(dir + "/someFile.json")

    chunk := make([]byte, 512*4)
    for {
        bytesRead, err := file.Read(chunk)
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
            break
        }
        stream <- chunk[:bytesRead]
    }
    stop <- true
}
The first one is responsible for reading the file and sending data chunks to the "stream" channel it receives from the other function.
func SteamFile() {
    stream := make(chan []byte)
    stop := make(chan bool)
    go ReadFile(stream, stop)

L:
    for {
        select {
        case data := <-stream:
            //want to send data chunk by socket here
            fmt.Println(data)
        case <-stop:
            break L
        }
    }
}
The second function creates the 2 channels, passes them to the first function, and starts listening on both channels.
The problem is that sometimes a data := <-stream chunk goes missing. For example, I don't receive the first part of the file but receive all the others. When I run the program with the -race flag it reports a data race, with two goroutines writing to and reading from the channel simultaneously. To tell the truth, I thought that was the normal way to use channels, but now I see that it brings even more issues.
Can someone please help me to solve this issue?
When I run the program with the -race flag it reports a data race, with two goroutines writing to and reading from the channel simultaneously. To tell the truth, I thought that was the normal way to use channels.
It is, and you are almost certainly misunderstanding the output of the race detector.
You're sharing the slice in ReadFile (and therefore its underlying array), so the data is overwritten in ReadFile while it's being read in SteamFile [sic]. While this shouldn't trigger the race detector, it is definitely a bug. Create a new slice for each Read call:
func ReadFile(stream chan []byte, stop chan bool) {
    file.Lock()
    defer file.Unlock()
    dir, _ := os.Getwd()
    file, _ := os.Open(dir + "/someFile.json")
-   chunk := make([]byte, 512*4)
    for {
+       chunk := make([]byte, 512*4)
        bytesRead, err := file.Read(chunk)
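For completeness, here is the corrected function in full (a sketch; I've dropped the file.Lock/Unlock calls, which aren't defined in the question, and kept the minimal error handling):
func ReadFile(stream chan []byte, stop chan bool) {
    dir, _ := os.Getwd()
    file, _ := os.Open(dir + "/someFile.json")
    defer file.Close()

    for {
        chunk := make([]byte, 512*4) // a fresh slice each iteration, so the receiver owns it
        bytesRead, err := file.Read(chunk)
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
        stream <- chunk[:bytesRead]
    }
    stop <- true
}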
Using Go, I have large log files. Currently I open them, create a new scanner with bufio.NewScanner, and then loop through the lines with scanner.Scan(). Each line is sent through a processing function, which matches it against regular expressions and extracts data. I would like to process this file in chunks simultaneously using goroutines. I believe this may be quicker than looping through the whole file sequentially.
It can take a few seconds per file, and I'm wondering if I can process a single file in, say, 10 pieces at a time. I believe I can sacrifice the memory if needed. I have ~3gb, and the biggest log file is maybe 75mb.
I see that a scanner has a .Split() method, where you can provide a custom split function, but I wasn't able to find a good solution using this method.
I've also tried creating a slice of slices, looping through the scanner with scanner.Scan() and appending scanner.Text() to each slice.
eg:
// pseudocode because I couldn't get this to work either
scanner := bufio.NewScanner(logInfo)
threads := make([][]string, 5)
i := 0
for scanner.Scan() {
    i = i + 1
    if i >= 5 {
        i = 0
    }
    threads[i] = append(threads[i], scanner.Text())
}
fmt.Println(threads)
I'm new to Go and concerned about efficiency and performance. I want to learn how to write good Go code! Any help or advice is really appreciated.
Peter gives a good starting point. If you wanted to do something like a fan-out, fan-in pattern, you could do something like this:
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "sync"
)

func main() {
    file, err := os.Open("/path/to/file.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()

    lines := make(chan string)
    // start four workers to do the heavy lifting
    wc1 := startWorker(lines)
    wc2 := startWorker(lines)
    wc3 := startWorker(lines)
    wc4 := startWorker(lines)

    scanner := bufio.NewScanner(file)
    go func() {
        defer close(lines)
        for scanner.Scan() {
            lines <- scanner.Text()
        }
        if err := scanner.Err(); err != nil {
            log.Fatal(err)
        }
    }()

    merged := merge(wc1, wc2, wc3, wc4)
    for line := range merged {
        fmt.Println(line)
    }
}

func startWorker(lines <-chan string) <-chan string {
    finished := make(chan string)
    go func() {
        defer close(finished)
        for line := range lines {
            // Do your heavy work here
            finished <- line
        }
    }()
    return finished
}

func merge(cs ...<-chan string) <-chan string {
    var wg sync.WaitGroup
    out := make(chan string)

    // Start an output goroutine for each input channel in cs. output
    // copies values from c to out until c is closed, then calls wg.Done.
    output := func(c <-chan string) {
        for n := range c {
            out <- n
        }
        wg.Done()
    }
    wg.Add(len(cs))
    for _, c := range cs {
        go output(c)
    }

    // Start a goroutine to close out once all the output goroutines are
    // done. This must start after the wg.Add call.
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}
If it is acceptable that line N+1 is processed before line N, you can use a simple fan-out pattern to get started. The Go blog explains this and more advanced patterns, such as cancelation and fan-in.
Note that this is just a starting point to keep it simple and on point. You would almost certainly want to wait for the process functions to return before exiting, for instance. This is explained in the mentioned blog post.
package main

import "bufio"

func main() {
    var sc *bufio.Scanner

    lines := make(chan string)
    go process(lines)
    go process(lines)
    go process(lines)
    go process(lines)

    for sc.Scan() {
        lines <- sc.Text()
    }

    close(lines)
}

func process(lines <-chan string) {
    for line := range lines {
        // implement processing here
    }
}
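To make the earlier note about waiting for the process functions concrete, here is one way this starting point could be fleshed out with a sync.WaitGroup (my own sketch; the stdin scanner and the len(line) placeholder stand in for your real input and regex processing):
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

func main() {
    // Placeholder input source; open your log file here instead.
    sc := bufio.NewScanner(os.Stdin)

    lines := make(chan string)

    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            process(lines)
        }()
    }

    for sc.Scan() {
        lines <- sc.Text()
    }
    close(lines) // lets the workers' range loops finish

    wg.Wait() // don't exit main until every worker has returned
}

func process(lines <-chan string) {
    for line := range lines {
        // stand-in for the real regular-expression matching
        fmt.Println(len(line))
    }
}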
I am new to Go and I might be missing the point, but why are Go channels limited to a fixed maximum buffer size? For example, if I make a channel like so
channel := make(chan int, 100)
I cannot add more than 100 elements to the channel without blocking; is there a reason for this? Furthermore, channels cannot be dynamically resized, because the channel API does not support that.
This seems somewhat limiting for the language's support of universal synchronization with a single mechanism, since it lacks the convenience of an unbounded semaphore. For example, a generalized semaphore's value can be increased without bound.
If one component of a program can't keep up with its input, it needs to put back-pressure on the rest of the system, rather than letting it run ahead and generate gigabytes of data that will never get processed because the system ran out of memory and crashed.
There is really no such thing as an unlimited buffer, because machines have limits on what they can handle. Go requires you to specify a size for buffered channels so that you will think about what size buffer your program actually needs and can handle. If it really needs a billion items, and can handle them, you can create a channel that big. But in most cases a buffer size of 0 or 1 is actually what is needed.
This is because channels are designed for efficient communication between concurrent goroutines, but the need you have is something different: the fact that you are blocking indicates that the recipient is not keeping up with the work queue, and "dynamic" is rarely free.
There are a variety of different patterns and algorithms you can use to solve the problem you have: you could change your channel to accept slices of ints, or you could add additional goroutines to better balance or filter work. Or you could implement your own dynamic channel. Doing so is certainly a useful exercise for seeing why dynamic channels aren't a great way to build concurrency.
package main

import "fmt"

func dynamicChannel(initial int) (chan<- interface{}, <-chan interface{}) {
    in := make(chan interface{}, initial)
    out := make(chan interface{}, initial)

    go func() {
        defer close(out)
        buffer := make([]interface{}, 0, initial)

    loop:
        for {
            packet, ok := <-in
            if !ok {
                break loop
            }
            select {
            case out <- packet:
                continue
            default:
            }
            buffer = append(buffer, packet)
            for len(buffer) > 0 {
                select {
                case packet, ok := <-in:
                    if !ok {
                        break loop
                    }
                    buffer = append(buffer, packet)
                case out <- buffer[0]:
                    buffer = buffer[1:]
                }
            }
        }
        for len(buffer) > 0 {
            out <- buffer[0]
            buffer = buffer[1:]
        }
    }()

    return in, out
}

func main() {
    in, out := dynamicChannel(4)
    in <- 10
    fmt.Println(<-out)

    in <- 20
    in <- 30
    fmt.Println(<-out)
    fmt.Println(<-out)

    for i := 100; i < 120; i++ {
        in <- i
    }
    fmt.Println("queued 100-120")
    fmt.Println(<-out)
    close(in)
    fmt.Println("in closed")

    for i := range out {
        fmt.Println(i)
    }
}
Generally, if you are blocking, it indicates your concurrency is not well balanced. Consider a different strategy. For example, a simple tool to look for files with a matching .checksum file and then check the hashes:
func crawl(toplevel string, workQ chan<- string) {
    defer close(workQ)
    filepath.Walk(toplevel, func(path string, info os.FileInfo, err error) error {
        if err == nil && info.Mode().IsRegular() {
            workQ <- path
        }
        return nil
    })
}

// if a file has a .checksum file, compare it with the file's checksum.
func validateWorker(workQ <-chan string) {
    for path := range workQ {
        // If there's a .checksum file, read it.
        expectedSum, err := os.ReadFile(path + ".checksum")
        if err != nil { // ignore files without a checksum
            continue
        }

        file, err := os.Open(path)
        if err != nil {
            continue
        }

        hash := sha256.New()
        _, err = io.Copy(hash, file)
        file.Close()
        if err != nil {
            log.Printf("couldn't hash %s: %v", path, err)
            continue
        }

        actualSum := fmt.Sprintf("%x", hash.Sum(nil))
        if actualSum != strings.TrimSpace(string(expectedSum)) {
            log.Printf("%s: mismatch: expected %s, got %s", path, expectedSum, actualSum)
        }
    }
}
Even without any .checksum files, the crawl function will tend to outpace the worker queue. When .checksum files are encountered, especially if the files are large, the worker could take much, much longer to perform a single checksum.
A better aim here would be to achieve more consistent throughput by reducing the number of things the "validateWorker" does. Right now it is sometimes fast, because it only checks for the checksum file. Other times it is slow, because it also has to read and checksum the files.
type ChecksumQuery struct {
    Filepath  string
    ExpectSum string
}

func crawl(toplevel string, workQ chan<- ChecksumQuery) {
    // Have a worker filter out files which don't have .checksums, and allow it
    // to get a little ahead of the crawl function.
    checkupQ := make(chan string, 4)

    go func() {
        defer close(workQ)
        for path := range checkupQ {
            expected, err := os.ReadFile(path + ".checksum")
            if err == nil && len(expected) > 0 {
                workQ <- ChecksumQuery{path, string(expected)}
            }
        }
    }()

    go func() {
        defer close(checkupQ)
        filepath.Walk(toplevel, func(path string, info os.FileInfo, err error) error {
            if err == nil && info.Mode().IsRegular() {
                checkupQ <- path
            }
            return nil
        })
    }()
}
Run a suitable number of validate workers, assign the workQ a suitable size, but if the crawler or validate functions block, it is because validate is doing useful work.
If your validate workers are all busy, they are all consuming large files from disk and hashing them. Having other workers interrupt this by crawling for more filenames, allocating and passing strings, isn't advantageous.
Other scenarios might involve passing large lists to workers, in which case pass the slices over channels (it's cheap); or dynamically sized groups of things, in which case consider passing channels or closures over channels.
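As a small illustration of the "pass slices over channels" idea, here is a sketch of a batching stage (the names are made up, and it is independent of the checksum example above):
func batch(paths <-chan string, batchQ chan<- []string, size int) {
    defer close(batchQ)
    buf := make([]string, 0, size)
    for p := range paths {
        buf = append(buf, p)
        if len(buf) == size {
            batchQ <- buf                 // hand off the whole slice (cheap: only the slice header is copied)
            buf = make([]string, 0, size) // start a new batch; the worker now owns the old one
        }
    }
    if len(buf) > 0 {
        batchQ <- buf // flush the final partial batch
    }
}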
The buffer size is the number of elements that can be sent to the channel without the send blocking. By default, a channel has a buffer size of 0 (you get this with make(chan int)). This means that every single send will block until another goroutine receives from the channel. A channel of buffer size 1 can hold 1 element until sending blocks, so you'd get
c := make(chan int, 1)
c <- 1 // doesn't block
c <- 2 // blocks until another goroutine receives from the channel
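And it is a receive on the other side that lets a blocked send proceed, for example:
c := make(chan int, 1)
done := make(chan struct{})
go func() {
    defer close(done)
    fmt.Println(<-c) // receiving frees the single buffer slot...
    fmt.Println(<-c)
}()
c <- 1 // fills the buffer without blocking
c <- 2 // blocks until the goroutine has received 1
<-done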
I suggest you look at these for more clarification:
https://rogpeppe.wordpress.com/2010/02/10/unlimited-buffering-with-low-overhead/
http://openmymind.net/Introduction-To-Go-Buffered-Channels/