Replace channel while iterating - Go

I want to replace the channel with a new one under some condition, for example:
package main

import (
    "log"
    "time"
)

func subMsg(s string) chan string {
    ch := make(chan string)
    go func() {
        ticker := time.NewTicker(time.Second * 2)
        for range ticker.C {
            ch <- s
        }
    }()
    return ch
}

func main() {
    chStr := subMsg("hello")
    go func() {
        i := 0
        for s := range chStr {
            log.Print(s)
            i++
            if i > 5 {
                log.Print("new topic")
                i = 0
                chStr = subMsg("world")
            }
        }
    }()
    select {}
}
I expected this snippet to output 5 "hello"s and then "world", but it doesn't work that way. I'm not clear on what happens when I re-assign a channel variable. Any suggestions?

You are using for range, and per Spec: For statements, the range expression is evaluated only once:
The range expression x is evaluated once before beginning the loop, with one exception: if at most one iteration variable is present and len(x) is constant, the range expression is not evaluated.
The chStr variable is not consulted by the loop again later, so changing its value has no effect.
You can't use for range if you want to switch over to another channel.
Simply use a "normal" for loop and receive from the channel inside it. Use the comma-ok form s, ok := <-ch so you know when the channel is closed and can break out of the loop (to mimic how for range works):
for {
    s, ok := <-chStr
    if !ok {
        break
    }
    log.Print(s)
    i++
    if i > 5 {
        log.Print("new topic")
        i = 0
        chStr = subMsg("world")
    }
}
Try it on the Go Playground.


How to always get the latest value from a Go channel?

I'm starting out with Go and I'm now writing a simple program which reads out data from a sensor and puts that into a channel to do some calculations with it. I now have it working as follows:
package main

import (
    "fmt"
    "strconv"
    "time"
)

func get_sensor_data(c chan float64) {
    time.Sleep(1 * time.Second) // wait a second before sensor data starts pouring in
    c <- 2.1                    // sensor data starts being generated
    c <- 2.2
    c <- 2.3
    c <- 2.4
    c <- 2.5
}

func main() {
    s := 1.1
    c := make(chan float64)
    go get_sensor_data(c)
    for {
        select {
        case s = <-c:
            fmt.Println("the next value of s from the channel: " + strconv.FormatFloat(s, 'f', 1, 64))
        default:
            // no new values in the channel
        }
        fmt.Println(s)
        time.Sleep(500 * time.Millisecond) // Do heavy "work"
    }
}
This works fine, but the sensor generates a lot of data and I'm only ever interested in the latest values. With this setup, however, only one item is read out per loop iteration, which means that if the channel at some point contains 20 values, the newest value is only read out after 10 seconds.
Is there a way for a channel to always only contain one value at a time, so that I always only get the data I'm interested in, and no unnecessary memory is used by the channel (although the memory is the least of my worries)?
Channels are best thought of as queues (FIFO). Therefore you can't really skip around. However there are libraries out there that do stuff like this: https://github.com/cloudfoundry/go-diodes is an atomic ring buffer that will overwrite old data. You can set a smaller size if you like.
All that being said, it doesn't sound like you need a queue (or ring buffer). You just need a mutex:
type SensorData struct {
    mu   sync.RWMutex
    last float64
}

func (d *SensorData) Store(data float64) {
    d.mu.Lock()
    defer d.mu.Unlock()
    d.last = data
}

func (d *SensorData) Get() float64 {
    d.mu.RLock()
    defer d.mu.RUnlock()
    return d.last
}
This uses a RWMutex which means many things can read from it at the same time while only a single thing can write. It will store a single entry much like you said.
No. Channels are FIFO buffers, full stop. That is how channels work and their only purpose. If you only want the latest value, consider just using a single variable protected by a mutex; write to it whenever new data comes in, and whenever you read it, you will always be reading the latest value.
Channels serve a specific purpose. Here you might instead want code that updates a variable inside a lock whenever a new value arrives.
This way the receiver will always get the latest value.
You cannot get that from one channel directly, but you can use one channel per value and get notified when there are new values:
package main

import (
    "fmt"
    "strconv"
    "sync"
    "time"
)

type LatestChannel struct {
    n    float64
    next chan struct{}
    mu   sync.Mutex
}

func New() *LatestChannel {
    return &LatestChannel{next: make(chan struct{})}
}

func (c *LatestChannel) Push(n float64) {
    c.mu.Lock()
    c.n = n
    old := c.next
    c.next = make(chan struct{})
    c.mu.Unlock()
    close(old)
}

func (c *LatestChannel) Get() (float64, <-chan struct{}) {
    c.mu.Lock()
    n := c.n
    next := c.next
    c.mu.Unlock()
    return n, next
}

func getSensorData(c *LatestChannel) {
    time.Sleep(1 * time.Second)
    c.Push(2.1)
    time.Sleep(100 * time.Millisecond)
    c.Push(2.2)
    time.Sleep(100 * time.Millisecond)
    c.Push(2.3)
    time.Sleep(100 * time.Millisecond)
    c.Push(2.4)
    time.Sleep(100 * time.Millisecond)
    c.Push(2.5)
}

func main() {
    s := 1.1
    c := New()
    _, hasNext := c.Get()
    go getSensorData(c)
    for {
        select {
        case <-hasNext:
            s, hasNext = c.Get()
            fmt.Println("the next value of s from the channel: " + strconv.FormatFloat(s, 'f', 1, 64))
        default:
            // no new values in the channel
        }
        fmt.Println(s)
        time.Sleep(250 * time.Millisecond) // Do heavy "work"
    }
}
If you do not need to be notified about new values, you can look at the "channels inside channels" pattern in Go.
Try this package https://github.com/subbuv26/chanup
It allows the producer to update the channel with the latest value, replacing the previous one, without blocking; stale values get overwritten.
So, on the consumer side, only the latest item is ever read.
import "github.com/subbuv26/chanup"

ch := chanup.GetChan()

_ = ch.Put(testType{
    a: 10,
    s: "Sample",
})

_ = ch.Update(testType{
    a: 20,
    s: "Sample2",
})

// Continue updating with latest values
...
...

// On consumer end
val := ch.Get()
// val contains latest value
There is another way (a trick) to solve this problem.
The sender works faster than the receiver, so it removes the old message whenever len(channel) > 1:
go func() {
    for {
        msg := strconv.Itoa(int(time.Now().Unix()))
        fmt.Println("make: ", msg, " at:", time.Now())
        messages <- msg
        if len(messages) > 1 {
            // remove old message
            <-messages
        }
        time.Sleep(2 * time.Second)
    }
}()
The receiver works slower:
go func() {
    for {
        channLen := len(messages)
        fmt.Println("len is ", channLen)
        fmt.Println("received", <-messages)
        time.Sleep(10 * time.Second)
    }
}()
Alternatively, we can delete the old message from the receiver side (reading a message effectively deletes it).
There is an elegant channel-only solution. If you're OK with adding one more channel and goroutine, you can introduce an unbuffered channel and a goroutine that tries to send the latest value from your channel to it:
package main

import (
    "fmt"
    "time"
)

func wrapLatest(ch <-chan int) <-chan int {
    result := make(chan int) // important that this one is unbuffered
    go func() {
        defer close(result)
        value, ok := <-ch
        if !ok {
            return
        }
    LOOP:
        for {
            select {
            case value, ok = <-ch:
                if !ok {
                    return
                }
            default:
                break LOOP
            }
        }
        for {
            select {
            case value, ok = <-ch:
                if !ok {
                    return
                }
            case result <- value:
                if value, ok = <-ch; !ok {
                    return
                }
            }
        }
    }()
    return result
}

func main() {
    sendChan := make(chan int, 10) // may be buffered or not
    for i := 0; i < 10; i++ {
        sendChan <- i
    }
    go func() {
        for i := 10; i < 20; i++ {
            sendChan <- i
            time.Sleep(time.Second)
        }
        close(sendChan)
    }()
    recvChan := wrapLatest(sendChan)
    for i := range recvChan {
        fmt.Println(i)
        time.Sleep(time.Second * 2)
    }
}

time.NewTimer doesn't work as I expect

I have a fairly simple program, which is supposed to self-terminate after a specified duration of time (for example, one second)
The code:
package main

import (
    "fmt"
    "time"
)

func emit(wordChannel chan string, done chan bool) {
    defer close(wordChannel)
    words := []string{"The", "quick", "brown", "fox"}
    i := 0
    t := time.NewTimer(1 * time.Second)
    for {
        select {
        case wordChannel <- words[i]:
            i++
            if i == len(words) {
                i = 0
            }
        // Please ignore the following case
        case <-done:
            done <- true
            // fmt.Printf("Got done!\n")
            close(done)
            return
        case <-t.C:
            fmt.Printf("\n\nGot done!\n\n")
            return
        }
    }
}

func main() {
    mainWordChannel := make(chan string)
    // Please ignore mainDoneChannel
    mainDoneChannel := make(chan bool)
    go emit(mainWordChannel, mainDoneChannel)
    for word := range mainWordChannel {
        fmt.Printf("%s ", word)
    }
}
I compile and execute the binary, and you can see the execution here.
That is clearly above 1 second.
Go's documentation on NewTimer reads this:
func NewTimer
func NewTimer(d Duration) *Timer
NewTimer creates a new Timer that will send the current time on its channel after at least duration d.
Can someone kindly help me understand what's happening here? Why isn't the program terminating exactly (or closely at least) after 1 second?
The timer works as intended: it sends a value on its channel after one second. It is useful to read about the select statement.
On each iteration of your loop, the send on wordChannel can proceed, and when more than one communication can proceed, there is no guarantee which one select chooses.
So, if you add a delay to the read loop like this:
for word := range mainWordChannel {
    fmt.Printf("%s ", word)
    time.Sleep(time.Millisecond)
}
then, when the timer fires, the read from t.C can proceed, and the program will end.

Selecting between time interval and length of channel

I'm here to find out the most idiomatic way to do the following task.
Task:
Write data from a channel to a file.
Problem:
I have a channel ch := make(chan int, 100)
I need to read from the channel and write the values I read from the channel to a file. My question is basically how do I do so given that
If channel ch is full, write the values immediately
If channel ch is not full, write every 5s.
So essentially, data needs to be written to the file at least every 5s (assuming that data will be filled into the channel at least every 5s)
What's the best way to use select, for and range to do the above task?
Thanks!
There is no such "event" as "buffer of channel is full", so you can't detect that [*]. This means you can't idiomatically solve your problem with language primitives using only 1 channel.
[*] Not entirely true: you could detect if the buffer of a channel is full by using select with default case when sending on the channel, but that requires logic from the senders, and repetitive attempts to send.
I would use another channel from which I would receive as values are sent on it, and "redirect", store the values in another channel which has a buffer of 100 as you mentioned. At each redirection you may check if the internal channel's buffer is full, and if so, do an immediate write. If not, continue to monitor the "incoming" channel and a timer channel with a select statement, and if the timer fires, do a "regular" write.
You may use len(chInternal) to check how many elements are in the chInternal channel, and cap(chInternal) to check its capacity. Note that this is "safe" because we are the only goroutine handling the chInternal channel. If there were multiple goroutines, the value returned by len(chInternal) could be outdated by the time we used it for something (e.g. comparing it).
In this solution chInternal (as its name says) is for internal use only. Others should only send values on ch. Note that ch may or may not be a buffered channel, solution works in both cases. However, you may improve efficiency if you also give some buffer to ch (so chances that senders get blocked will be lower).
var (
    chInternal = make(chan int, 100)
    ch         = make(chan int) // You may (should) make this a buffered channel too
)

func main() {
    delay := time.Second * 5
    timer := time.NewTimer(delay)
    for {
        select {
        case v := <-ch:
            chInternal <- v
            if len(chInternal) == cap(chInternal) {
                doWrite() // Buffer is full, we need to write immediately
                timer.Reset(delay)
            }
        case <-timer.C:
            doWrite() // "Regular" write: 5 seconds have passed since last write
            timer.Reset(delay)
        }
    }
}
If an immediate write happens (due to a "buffer full" situation), this solution will time the next "regular" write 5 seconds after this. If you don't want this and you want the 5-second regular writes be independent from the immediate writes, simply do not reset the timer following the immediate write.
An implementation of doWrite() may be as follows:
var f *os.File // Make sure to open file for writing

func doWrite() {
    for {
        select {
        case v := <-chInternal:
            fmt.Fprintf(f, "%d ", v) // Write v to the file
        default: // Stop when no more values in chInternal
            return
        }
    }
}
We can't use for ... range as that only returns when the channel is closed, but our chInternal channel is not closed. So we use a select with a default case so when no more values are in the buffer of chInternal, we return.
Improvements
Using a slice instead of 2nd channel
Since the chInternal channel is only used by us, and only on a single goroutine, we may also choose to use a single []int slice instead of a channel (reading/writing a slice is much faster than a channel).
Showing only the different / changed parts, it could look something like this:
var (
    buf = make([]int, 0, 100)
)

func main() {
    // ...
    for {
        select {
        case v := <-ch:
            buf = append(buf, v)
            if len(buf) == cap(buf) {
                // ...
            }
            // ...
        }
    }
}

func doWrite() {
    for _, v := range buf {
        fmt.Fprintf(f, "%d ", v) // Write v to the file
    }
    buf = buf[:0] // "Clear" the buffer
}
With multiple goroutines
If we stick with chInternal being a channel, the doWrite() function may be called on another goroutine so that it doesn't block the first one, e.g. go doWrite(). Since the data to write is read from a channel (chInternal), this requires no further synchronization.
If you only need the 5-second writes, you can increase file-write performance by filling the channel whenever you need while a writer goroutine writes the data through a buffered file.
See this simple and idiomatic sample without a timer, using just for ... range:
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

var wg sync.WaitGroup

func WriteToFile(filename string, ch chan int) {
    f, e := os.Create(filename)
    if e != nil {
        panic(e)
    }
    w := bufio.NewWriterSize(f, 4*1024*1024)
    defer wg.Done()
    defer f.Close()
    defer w.Flush()
    for v := range ch {
        fmt.Fprintf(w, "%d ", v)
    }
}

func main() {
    ch := make(chan int, 100)
    wg.Add(1)
    go WriteToFile("file.txt", ch)
    for i := 0; i < 500000; i++ {
        ch <- i // do the job
    }
    close(ch) // Finish the job and close output file
    wg.Wait()
}
Notice the order of the defers: they run in reverse order, so the buffer is flushed before the file is closed.
If you do need the 5-second interval writes, you may add an interval timer just to flush this file's buffer to the disk, like this:
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
    "time"
)

var wg sync.WaitGroup

func WriteToFile(filename string, ch chan int) {
    f, e := os.Create(filename)
    if e != nil {
        panic(e)
    }
    w := bufio.NewWriterSize(f, 4*1024*1024)
    ticker := time.NewTicker(5 * time.Second)
    quit := make(chan struct{})
    go func() {
        for {
            select {
            case <-ticker.C:
                if w.Buffered() > 0 {
                    fmt.Println(w.Buffered())
                    w.Flush()
                }
            case <-quit:
                ticker.Stop()
                return
            }
        }
    }()
    defer wg.Done()
    defer f.Close()
    defer w.Flush()
    defer close(quit)
    for v := range ch {
        fmt.Fprintf(w, "%d ", v)
    }
}

func main() {
    ch := make(chan int, 100)
    wg.Add(1)
    go WriteToFile("file.txt", ch)
    for i := 0; i < 25; i++ {
        ch <- i // do the job
        time.Sleep(500 * time.Millisecond)
    }
    close(ch) // Finish the job and close output file
    wg.Wait()
}
Here I used time.NewTicker(5 * time.Second) as the interval timer, with a quit channel; you could also use time.AfterFunc(), time.Tick(), or time.Sleep().
With some optimizations (removing the quit channel):
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
    "time"
)

var wg sync.WaitGroup

func WriteToFile(filename string, ch chan int) {
    f, e := os.Create(filename)
    if e != nil {
        panic(e)
    }
    w := bufio.NewWriterSize(f, 4*1024*1024)
    ticker := time.NewTicker(5 * time.Second)
    defer wg.Done()
    defer f.Close()
    defer w.Flush()
    for {
        select {
        case v, ok := <-ch:
            if ok {
                fmt.Fprintf(w, "%d ", v)
            } else {
                fmt.Println("done.")
                ticker.Stop()
                return
            }
        case <-ticker.C:
            if w.Buffered() > 0 {
                fmt.Println(w.Buffered())
                w.Flush()
            }
        }
    }
}

func main() {
    ch := make(chan int, 100)
    wg.Add(1)
    go WriteToFile("file.txt", ch)
    for i := 0; i < 25; i++ {
        ch <- i // do the job
        time.Sleep(500 * time.Millisecond)
    }
    close(ch) // Finish the job and close output file
    wg.Wait()
}
I hope this helps.

Multiplexing Go Routine Output using fanIn function

I was trying to implement some example Go code that uses channels returned from goroutines without any read blocking in the main function. Here, a fanIn function accepts channels from two other routines and returns whatever it receives on them as input.
I expected randomly interleaved output from the two inner routines, but the actual output is always one "ann" followed by one "john" — not random at all.
Why am I not getting random output?
Go Playground: http://play.golang.org/p/46CiihtPwD
Actual output:
you say: ann,0
you say: john,0
you say: ann,1
you say: john,1
......
Code:
package main

import (
    "fmt"
    "time"
)

func main() {
    final := fanIn(boring("ann"), boring("john"))
    for i := 0; i < 100; i++ {
        fmt.Println("you say:", <-final)
    }
    time.Sleep(4 * time.Second)
}

func boring(msg string) chan string {
    c1 := make(chan string)
    go func() {
        for i := 0; ; i++ {
            c1 <- fmt.Sprintf("%s,%d", msg, i)
            time.Sleep(time.Second)
        }
    }()
    return c1
}

func fanIn(input1, input2 <-chan string) chan string {
    c := make(chan string)
    go func() {
        for {
            c <- <-input1
        }
    }()
    go func() {
        for {
            c <- <-input2
        }
    }()
    return c
}
No particular reason, that's just how Go happens to schedule the relevant goroutines (basically, you're getting "lucky" that there's a pattern). You can't rely on it. If you really want an actual reliably random result, you'll have to manually mix in randomness somehow.
There's also the Multiplex function from https://github.com/eapache/channels/ (doc: https://godoc.org/github.com/eapache/channels#Multiplex) which does effectively the same thing as your fanIn function. I don't think it would behave any differently in terms of randomness though.

Go routines started with for-loop - one or many channels?

I would like to load some JSON files (".json") using a goroutine called from a for-loop. I'd like the loading to be parallelized (processing the first files while the others are being loaded).
Q1. Since the number of files may vary (new ones to be added), I would use a (file) list with filenames (autogenerating the names only in this example), therefore I'd like to use a for-loop. Is that optimal?
Q2. What would be the most effective use of channel(s)?
Q3. How would I define the channel(s) if a unique channel for each load operation (as in the example code below) is needed?
Example code (to be compacted & capable of loading the files using a list of file names):
func load_json(aChan chan byte, s string) {
    // load "filename" + s + ".json"
    // confirm to the channel
    aChan <- 0
}

func do_stuff() {
    // .. with the newly loaded json
}

func main() {
    chan_A := make(chan byte)
    go load_json(chan_A, "_classA")
    chan_B := make(chan byte)
    go load_json(chan_B, "_classB")
    chan_C := make(chan byte)
    go load_json(chan_C, "_classC")
    chan_D := make(chan byte)
    go load_json(chan_D, "_classD")
    <-chan_A
    // Now, do stuff with Class A
    <-chan_B
    // etc...
    <-chan_C
    <-chan_D
    fmt.Println("Done.")
}
EDIT:
I designed a simplified test solution based on the ideas suggested by "Tom" (see below). In my case I split the task into three phases, using one channel per phase to control the execution. However, I tend to get deadlocks with this code (see the execution results and the note below the code).
Run this code on the Go Playground.
How can I avoid the deadlocks in this code?:
type TJsonFileInfo struct {
    FileName string
}

type TChannelTracer struct { // Will count & display visited phases A, B, C
    A, B, C int
}

var ChannelTracer TChannelTracer

var jsonFileList = []string{
    "./files/classA.json",
    "./files/classB.json",
    "./files/classC.json",
}

func LoadJsonFiles(aFileName string, aResultQueueChan chan *TJsonFileInfo) {
    var newFileInfo TJsonFileInfo
    newFileInfo.FileName = aFileName
    // file, e := ioutil.ReadFile(newFileInfo.FileName)...
    ChannelTracer.A += 1
    fmt.Printf("A. Loaded file: %s\n", newFileInfo.FileName)
    aResultQueueChan <- &newFileInfo
}

func UnmarshalFile(aWorkQueueChan chan *TJsonFileInfo, aResultQueueChan chan *TJsonFileInfo) {
    FileInfo := <-aWorkQueueChan
    ChannelTracer.B += 1
    fmt.Printf("B. Marshalled file: %s\n", FileInfo.FileName)
    aResultQueueChan <- FileInfo
}

func ProcessWork(aWorkQueueChan chan *TJsonFileInfo, aDoneQueueChan chan *TJsonFileInfo) {
    FileInfo := <-aWorkQueueChan
    ChannelTracer.C += 1
    fmt.Printf("C. Processed file: %s \n", FileInfo.FileName)
    aDoneQueueChan <- FileInfo
}

func main() {
    marshalChan := make(chan *TJsonFileInfo)
    processChan := make(chan *TJsonFileInfo)
    doneProcessingChan := make(chan *TJsonFileInfo)
    for _, fileName := range jsonFileList {
        go LoadJsonFiles(fileName, marshalChan)
        go UnmarshalFile(marshalChan, processChan)
        go ProcessWork(processChan, doneProcessingChan)
    }
    for {
        select {
        case result := <-marshalChan:
            result.FileName = result.FileName // dummy use
        case result := <-processChan:
            result.FileName = result.FileName // dummy use
        case result := <-doneProcessingChan:
            result.FileName = result.FileName // dummy use
            fmt.Printf("Done%s Channels visited: %v\n", ".", ChannelTracer)
        }
    }
}
}
/**
RESULTS (for phases A, B and C):
A. Loaded file: ./files/classA.json
A. Loaded file: ./files/classB.json
A. Loaded file: ./files/classC.json
B. Marshalled file: ./files/classB.json
B. Marshalled file: ./files/classC.json
C. Processed file: ./files/classB.json
C. Processed file: ./files/classC.json
Done. Channels visited: {3 2 2} // ChannelTracer for phase A, B and C
Done. Channels visited: {3 2 2}
fatal error: all goroutines are asleep - deadlock!
*/
Note that this code doesn't access the file system so it should run on the PlayGround.
EDIT2: Apart from the unsafe ChannelTracer, I can avoid deadlocks only by consuming doneProcessingChan the same number of times as there are file tasks.
Run the code here: Playground
func main() {
    marshalChan := make(chan *TJsonFileInfo)
    processChan := make(chan *TJsonFileInfo)
    doneProcessingChan := make(chan *TJsonFileInfo)
    go UnmarshalFiles(marshalChan, processChan)
    go ProcessWork(processChan, doneProcessingChan)
    for _, fileName := range jsonFileList {
        go LoadJsonFiles(fileName, marshalChan)
    }
    // Read doneProcessingChan equal number of times
    // as the spawned tasks (files) above:
    for i := 0; i < len(jsonFileList); i++ {
        <-doneProcessingChan
        fmt.Printf("Done%s Channels visited: %v\n", ".", ChannelTracer)
    }
}
// RIL
Building on the answer by @BraveNewCurrency, I have composed a simplistic example program for you:
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

type Result struct {
    Some    string
    Another string
    AndAn   int
}

func generateWork(work chan *os.File) {
    files := []string{
        "/home/foo/a.json",
        "/home/foo/b.json",
        "/home/foo/c.json",
    }
    for _, path := range files {
        file, e := os.Open(path)
        if e != nil {
            panic(e)
        }
        work <- file
    }
}

func processWork(work chan *os.File, done chan Result) {
    file := <-work
    decoder := json.NewDecoder(file)
    result := Result{}
    decoder.Decode(&result)
    done <- result
}

func main() {
    work := make(chan *os.File)
    go generateWork(work)
    done := make(chan Result)
    for i := 0; i < 100; i++ {
        go processWork(work, done)
    }
    for {
        select {
        case result := <-done:
            // a result is available
            fmt.Println(result)
        }
    }
}
Note that this program won't work on the playground because file-system access is disallowed there.
Edit:
To answer the edition in your question, I've taken the code and changed some small things:
package main

import (
    _ "encoding/json"
    "fmt"
    _ "io/ioutil"
    _ "os"
)

type TJsonMetaInfo struct {
    MetaSystem string
}

type TJsonFileInfo struct {
    FileName string
}

type TChannelTracer struct { // Will count & display visited phases A, B, C
    A, B, C int
}

var ChannelTracer TChannelTracer

var jsonFileList = []string{
    "./files/classA.json",
    "./files/classB.json",
    "./files/classC.json",
}

func LoadJsonFiles(aFileName string, aResultQueueChan chan *TJsonFileInfo) {
    newFileInfo := TJsonFileInfo{aFileName}
    // file, e := ioutil.ReadFile(newFileInfo.FileName)
    // etc...
    ChannelTracer.A += 1
    fmt.Printf("A. Loaded file: %s\n", newFileInfo.FileName)
    aResultQueueChan <- &newFileInfo
}

func UnmarshalFiles(aWorkQueueChan chan *TJsonFileInfo, aResultQueueChan chan *TJsonFileInfo) {
    for {
        FileInfo := <-aWorkQueueChan
        ChannelTracer.B += 1
        fmt.Printf("B. Unmarshalled file: %s\n", FileInfo.FileName)
        aResultQueueChan <- FileInfo
    }
}

func ProcessWork(aWorkQueueChan chan *TJsonFileInfo, aDoneQueueChan chan *TJsonFileInfo) {
    for {
        FileInfo := <-aWorkQueueChan
        ChannelTracer.C += 1
        fmt.Printf("C. Processed file: %s \n", FileInfo.FileName)
        aDoneQueueChan <- FileInfo
    }
}

func main() {
    marshalChan := make(chan *TJsonFileInfo)
    processChan := make(chan *TJsonFileInfo)
    doneProcessingChan := make(chan *TJsonFileInfo)
    go UnmarshalFiles(marshalChan, processChan)
    go ProcessWork(processChan, doneProcessingChan)
    for _, fileName := range jsonFileList {
        go LoadJsonFiles(fileName, marshalChan)
    }
    for {
        select {
        case result := <-doneProcessingChan:
            result.FileName = result.FileName // dummy use
            fmt.Printf("Done%s Channels visited: %v\n", ".", ChannelTracer)
        }
    }
}
Note that this code still deadlocks, but only at the very end, when all work is complete, in the final for loop in main().
Note also that these lines:
ChannelTracer.A += 1
ChannelTracer.B += 1
ChannelTracer.C += 1
are not concurrency-safe. This means that in a multi-threaded environment one goroutine and another might try to increment the same counter at the same time, resulting in a wrong count. To work around this issue, take a look at the following packages:
http://golang.org/pkg/sync/
http://golang.org/pkg/sync/atomic/
You should structure your program this way:
1) the main routine creates a channel for "work to do" and probably one for "done work" (both channels should probably have some buffering)
2) spin off one goroutine to generate the file list and put them in the "work to do" channel.
3) spin up N goroutines (in a for loop) to process files. The routine will read the file from the "work to do" channel, process it, and send the response to the "done work" channel.
4) the main routine waits on "done work" and prints them or whatever.
The optimal "N" above varies depending on the problem
- If your work is CPU bound, the optimal N should be about the number of processors in your system.
- If your work is disk bound, performance may actually go down as you increase N because multiple workers will cause more random I/O.
- If your work pulls files from many remote computers (think webcrawling), then the optimal N might be very high (100 or even 1000).
