I have a case where I want to spin up a goroutine that will fetch some data from a source periodically. If the call fails, it stores the error until the next call succeeds. There are several places in the code where an instance would access the data pulled by the goroutine. How can I implement something like that?
UPDATE
I have had some sleep and coffee and I think I need to rephrase the problem more coherently, using Java-ish semantics.
I have come up with a basic singleton pattern that returns an interface implementation which internally runs a goroutine in a forever loop (let's put the cardinal sin of forever loops aside for a moment). The problem is that this interface implementation is accessed by multiple threads to get the data collected by the goroutine. Essentially, the data is pulled every 10 minutes by the goroutine and then requested an unbounded number of times. How can I implement something like that?
Here's a very basic example of how you can periodically fetch and collect data.
Bear in mind: running this code will do nothing, as main will return before anything really happens, but how you handle that depends on your specific use case. This code is really bare-bones and needs improvements. It is a sketch of a possible solution to part of your problem :)
I didn't handle errors here, but you could handle them the same way the fetched data is handled (i.e. one more chan for errors and one more goroutine to read from it).
package main

import "time"

func main() {
    period := time.Second
    respChan := make(chan string)
    cancelChan := make(chan struct{})
    var dataCollection []string

    // periodically fetch data and send it to respChan
    go func(period time.Duration, respChan chan string, cancelChan chan struct{}) {
        ticker := time.NewTicker(period)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                go fetchData(respChan)
            case <-cancelChan:
                // close respChan to stop the reading goroutine
                close(respChan)
                return
            }
        }
    }(period, respChan, cancelChan)

    // read from respChan and append to dataCollection
    go func(respChan chan string) {
        for data := range respChan {
            dataCollection = append(dataCollection, data)
        }
    }(respChan)

    // close cancelChan to gracefully stop the app
    // close(cancelChan)
}

func fetchData(respChan chan string) {
    data := "fetched data"
    respChan <- data
}
You can use a channel for that, but then you would push data, not pull. I guess that wouldn't be a problem.
var wg sync.WaitGroup
var channelXY = make(chan struct{}, 5000) // change the buffer size to your needs; if pushing is much faster than pulling, you need to size the buffer accordingly

wg.Add(1)
go func(channelXY <-chan struct{}) {
    for range channelXY {
        // DO STUFF
    }
    wg.Done()
}(channelXY)

go func() {
    channelXY <- struct{}{}
}()
Remember to manage all goroutines with a WaitGroup, otherwise your program will end before all goroutines are done.
EDIT: Close the channel to stop the channel-reading goroutine:
close(channelXY)
Related
Hi I'm having a problem with a control channel (of sorts).
The essence of my program:
I do not know how many go routines I will be running at runtime
I will need to restart these go routines at set times, however, they could also potentially error out (and then restarted), so their timing will not be predictable.
These go routines will be putting messages onto a single channel.
So what I've done is create a simple random message generator to put messages onto a channel.
When the timer is up (a random duration for testing) I put a message onto a control channel. The message is a struct payload, so I know there was a close signal and which goroutine it came from; in reality I'd then do some other stuff before starting the goroutines again.
My problem is:
I receive the control message within my reflect.Select loop.
I do not (or am unable to) receive it in my randmsgs() loop.
Therefore I can not stop my randmsgs() goroutine.
I believe I'm right in understanding that multiple goroutines can read from a single channel; therefore I think I'm misunderstanding how reflect.SelectCase fits into all of this.
My code:
package main

import (
    "fmt"
    "math/rand"
    "reflect"
    "time"
)

type testing struct {
    control bool
    market  string
}

func main() {
    rand.Seed(time.Now().UnixNano())
    // explicitly define chanids for tests.
    var chanids []string = []string{"GR I", "GR II", "GR III", "GR IV"}
    stream := make(chan string)
    control := make([]chan testing, len(chanids))
    reflectCases := make([]reflect.SelectCase, len(chanids)+1)
    // MAKE REFLECT SELECTS FOR 4 CONTROL CHANS AND 1 DATA CHANNEL
    for i := range chanids {
        control[i] = make(chan testing)
        reflectCases[i] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(control[i])}
    }
    reflectCases[4] = reflect.SelectCase{Dir: reflect.SelectRecv, Chan: reflect.ValueOf(stream)}
    // START GO ROUTINES
    for i, val := range chanids {
        runningFunc(control[i], val, stream, 1+rand.Intn(30-1))
    }
    // READ DATA
    for {
        o, recieved, ok := reflect.Select(reflectCases)
        if !ok {
            fmt.Println("You really buggered this one up...")
        }
        ty, err := recieved.Interface().(testing)
        if err == true {
            fmt.Printf("Read from: %v, and recieved close signal from: %s\n", o, ty.market)
            // close control & stream here.
        } else {
            ty := recieved.Interface().(string)
            fmt.Printf("Read from: %v, and recieved value from: %s\n", o, ty)
        }
    }
}

// THE GO ROUTINES - TIMER AND RANDMSGS
func runningFunc(q chan testing, chanid string, stream chan string, dur int) {
    go timer(q, dur, chanid)
    go randmsgs(q, chanid, stream)
}

func timer(q chan testing, t int, message string) {
    for t > 0 {
        time.Sleep(time.Second)
        t--
    }
    q <- testing{true, message}
}

func randmsgs(q chan testing, chanid string, stream chan string) {
    for {
        select {
        case <-q:
            fmt.Println("Just sitting by the mailbox. :(")
            return
        default:
            secondsToWait := 1 + rand.Intn(5-1)
            time.Sleep(time.Second * time.Duration(secondsToWait))
            stream <- fmt.Sprintf("%s: %d", chanid, secondsToWait)
        }
    }
}
I apologise for the wall of text, but I'm all out of ideas :(!
K/Regards,
C.
Your channels q in the second half are the same as control[0...3] in the first.
Your reflect.Select that you are running also reads from all of these channels, with no delay.
I think the problem comes down to your reflect.Select simply running too fast and "stealing" all the channel output right away. This is why randmsgs is never able to read the messages.
You'll notice that if you remove the default case from randmsgs, the function is able to (potentially) read some of the messages from q.
select {
case <-q:
    fmt.Println("Just sitting by the mailbox. :(")
    return
}
This is because now that it is running without delay, it is always waiting for a message on q and thus has the chance to beat the reflect.Select in the race.
If you read from the same channel in multiple goroutines, then the data passed will simply go to whatever goroutine reads it first.
This program appears to just be an experiment / learning experience, but I'll offer some criticism that may help.
Again, generally you shouldn't have multiple goroutines reading from the same channel if the goroutines are doing different tasks. You're creating a mostly non-deterministic race as to which goroutine fetches the data first.
Second, this is a common beginner's anti-pattern with select that you should avoid:
for {
    select {
    case v := <-myChan:
        doSomething(v)
    default:
        // Oh no, there wasn't anything! Guess we have to wait and try again.
        time.Sleep(time.Second)
    }
}
This code is redundant because select already behaves in such a way that if no case is initially ready, it will wait until any case is ready and then proceed with that one. This default: sleep is effectively making your select loop slower and yet spending less time actually waiting on the channel (because 99.999...% of the time is spent on time.Sleep).
I'm kinda new to Go and trying to develop a program that uploads images to imgur asynchronously. However, I'm having some difficulties with my code.
So this is my task;
func uploadT(url string, c chan string, d chan string) {
    subtask := upload(url)
    var status string
    if subtask != "" {
        status = "Success!"
    } else {
        status = "Failed!"
    }
    c <- subtask
    d <- status
}
And here is my POST request loop for async uploading;
c := make(chan string, len(js.Urls))
d := make(chan string, len(js.Urls))
wg := sync.WaitGroup{}
for i := range js.Urls {
    wg.Add(1)
    go uploadTask(js.Urls[i], c, d)
    // The commented-out code below slows down the routine, so I commented it out.
    // It needs to work as well; it could work if I put it in the task. I think I'm kinda confused with this one as well.
    // pol = append(pol, retro{Url: <-c, Status: <-d})
}
<-c
<-d
wg.Done()
FinishedTime := time.Now().UTC().Format(time.RFC3339)
qwe = append(qwe, outputURLs{
    jobID:        jobID,
    retro:        pol,
    CreateTime:   CreateTime,
    FinishedTime: FinishedTime,
})
fmt.Println(jobID)
So I think my channels and goroutines do not work: it prints out jobID before the upload tasks finish, and the uploads seem too slow for async uploading.
I know the code is kind of a mess, sorry for that. Any help is highly appreciated! Thanks in advance!
You're actually not using WaitGroup correctly. Every time you call wg.Done(), it subtracts 1 from the count set by previous wg.Add calls, to indicate that a given task is complete. Finally, you'll need a wg.Wait() to synchronously wait for all tasks. WaitGroups are typically used for fan-out: running multiple tasks in parallel.
The simplest way, based on your code example, is to pass the wg into your task uploadT and call wg.Done() inside the task. Note that you'll also want to pass a pointer rather than the struct value.
The next implementation detail is to call wg.Wait() outside of the loop, because you want to block until all the tasks are complete; all your tasks are run with go, which makes them async. If you don't wg.Wait(), it will log the jobID immediately, like you said. Let me know if that is clear.
As a boilerplate, it should look something like this
func task(wg *sync.WaitGroup) {
    defer wg.Done()
}

wg := &sync.WaitGroup{}
for i := 0; i < 10; i++ {
    wg.Add(1)
    go task(wg)
}
wg.Wait()
// do something after the tasks are done
fmt.Println("done")
The other thing I want to note is that in your current code example you're using channels, but you're not doing anything with the values you push into them, so you could technically remove them.
Your code is kinda confusing. But if I understand correctly what you are trying to do: you are processing a list of requests, want to return the URL and status of each request and the time each request completed, and want to process these in parallel.
You don't need to use WaitGroups at all. WaitGroups are good when you just want to run a bunch of tasks without bothering with the results, just want to know when everything is done. But if you are returning results, channels are sufficient.
Here is an example code that does what I think you are trying to do
package main

import (
    "fmt"
    "time"
)

type Result struct {
    URL      string
    Status   string
    Finished string
}

func task(url string, c chan string, d chan string) {
    time.Sleep(time.Second)
    c <- url
    d <- "Success"
}

func main() {
    var results []Result
    urls := []string{"url1", "url2", "url3", "url4", "url5"}
    c := make(chan string, len(urls))
    d := make(chan string, len(urls))
    for _, u := range urls {
        go task(u, c, d)
    }
    for i := 0; i < len(urls); i++ {
        res := Result{}
        res.URL = <-c
        res.Status = <-d
        res.Finished = time.Now().UTC().Format(time.RFC3339)
        results = append(results, res)
    }
    fmt.Println(results)
}
You can try it in the playground https://play.golang.org/p/N3oeA7MyZ8L
That said, this is a bit fragile. You are making channels the same size as your url list. This would work fine for a few urls, but if you have a list of a million urls you will make a rather large channel. You might want to fix the channel buffer size to some reasonable value and check whether or not channel is ready for processing before sending your request. This way you would avoid making a million requests all at once.
I'm trying to create a message hub in Go. Messages come in through different channels that live in a map[uint32]chan []float64. I do an endless loop over the map and check whether a channel has a message. If it does, I write it to the common client's write channel together with the incoming channel's id. It works fine, but it uses all the CPU, and other processes are throttled.
UPD: Items are added to and removed from the map dynamically by another function.
I'm thinking of limiting CPU for this app through Docker, but maybe there is a more elegant path?
My code :
func (c *Client) accumHandler() {
    for !c.stop {
        c.channels.Range(func(key, value interface{}) bool {
            select {
            case message := <-value.(chan []float64):
                mess := map[uint32]interface{}{key.(uint32): message}
                select {
                case c.send <- mess:
                }
            default:
            }
            return !c.stop
        })
    }
}
If I'm reading the cards correctly, it seems like you are trying to pass along an array of floats to a common channel along with a channel identifier. I assume that you are doing this to pass multiple channels out to different publishers, but so that you only have to track a single channel for your consumer.
It turns out that you don't need to loop over channels to see when it's outputting a value. You can chain channels together inside of goroutines. For this reason, no busy wait is necessary. Something like this will suit your purposes (again, if I'm reading the cards correctly). Look for the all caps comment for the way around your busy loop. Link to playground.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

var num_publishes = 3

func main() {
    num_publishers := 10
    single_consumer := make(chan []float64)
    for i := 0; i < num_publishers; i += 1 {
        c := make(chan []float64)
        // connect channel to your single consumer channel
        go func() { for { single_consumer <- <-c } }() // THIS IS PROBABLY WHAT YOU DIDN'T KNOW ABOUT
        // send the channel to the publisher
        go publisher(c, i*100)
    }
    // dumb consumer example
    for i := 0; i < num_publishers*num_publishes; i += 1 {
        fmt.Println(<-single_consumer)
    }
}

func publisher(c chan []float64, publisher_id int) {
    dummy := []float64{
        float64(publisher_id + 1),
        float64(publisher_id + 2),
        float64(publisher_id + 3),
    }
    for i := 0; i < num_publishes; i += 1 {
        time.Sleep(time.Duration(rand.Intn(10000)) * time.Millisecond)
        c <- dummy
    }
}
It is eating all of your CPU because you are continuously cycling around the dictionary checking for messages, so even when there are no messages to process, the CPU (or at least a thread or core) is running flat out. You need blocking sends and receives on the channels!
I assume you are doing this because you don't know how many channels there will be and therefore can't just select on all of the input channels. A better pattern is to start a separate goroutine for each input channel you currently store in the dictionary. Each goroutine should have a loop in which it blocks waiting on the input channel and, on receiving a message, does a blocking send to a single channel to the client, shared by all.
The question isn't complete so can't give exact code, but you're going to have goroutines that look something like this:
type Message struct {
    id      uint32
    message []float64
}

func receiverGoroutine(id uint32, input chan []float64, output chan Message) {
    for {
        message := <-input
        output <- Message{id: id, message: message}
    }
}

func clientGoroutine(c *Client, input chan Message) {
    for {
        message := <-input
        // do stuff
    }
}
(You'll need to add some "done" channels as well though)
Elsewhere you will start them with code like this:
clientChan := make(chan Message)
go clientGoroutine(client, clientChan)
for i := 0; i < max; i++ {
    go receiverGoroutine(uint32(i), make(chan []float64), clientChan)
}
Or you can just start the client routine and then add the others as they are needed rather than in a loop up front - depends on your use case.
I am currently playing around with goroutines, channels and sync.WaitGroup. I know a WaitGroup is used to wait for all goroutines to complete, based on whether wg.Done() has been called enough times to decrement the value set in wg.Add().
I wrote a small bit of code to try to test this in the Go playground, shown below.
var channel chan int
var wg sync.WaitGroup

func main() {
    channel := make(chan int)
    mynums := []int{1, 2, 3, 4, 5, 6, 7, 8, 9}
    wg.Add(1)
    go addStuff(mynums)
    wg.Wait()
    close(channel)
    recieveStuff(channel)
}

func addStuff(mynums []int) {
    for _, val := range mynums {
        channel <- val
    }
    wg.Done()
}

func recieveStuff(channel chan int) {
    for val := range channel {
        fmt.Println(val)
    }
}
I am getting a deadlock error. I'm trying to wait on the goroutine to return with wg.Wait(), then close the channel. Afterwards, I send the channel to the recieveStuff method to output the values in the slice, but it doesn't work. I've also tried moving the close() inside the goroutine after the loop, as I thought I might have been trying to close on the wrong routine in main(). I've found this stuff relatively confusing so far, coming from Java and C#. Any help is appreciated.
The call to wg.Wait() wouldn't return until wg.Done() has been called once.
In addStuff(), you're writing values to a channel when there's no other goroutine to drain those values. Since the channel is unbuffered, the first call to channel <- val would block forever, resulting in a deadlock.
Moreover, the channel in addStuff() remains nil since you're creating a new variable binding inside main, instead of assigning to the package-level variable. Writing to a nil channel blocks forever:
channel := make(chan int) //doesn't affect the package-level variable
Here's a modified example that runs to completion by consuming all values from the channel:
https://play.golang.org/p/6gcyDWxov7
The reading part isn't concurrent but the processing is. I phrased the title this way because I'm most likely to search for this problem again using that phrase. :)
I'm getting a deadlock after trying to go beyond the examples so this is a learning experience for me. My goals are these:
Read a file line by line (eventually use a buffer to do groups of lines).
Pass off the text to a func() that does some regex work.
Send the results somewhere but avoid mutexes or shared variables. I'm sending ints (always the number 1) to a channel. It's sort of silly but if it's not causing problems I'd like to leave it like this unless you folks have a neater option.
Use a worker pool to do this. I'm not sure how to tell the workers to requeue themselves.
Here is the playground link. I tried to write helpful comments, hopefully this makes sense. My design could be completely wrong so don't hesitate to refactor.
package main

import (
    "bufio"
    "fmt"
    "regexp"
    "strings"
    "sync"
)

func telephoneNumbersInFile(path string) int {
    file := strings.NewReader(path)
    var telephone = regexp.MustCompile(`\(\d+\)\s\d+-\d+`)
    // do I need buffered channels here?
    jobs := make(chan string)
    results := make(chan int)
    // I think we need a wait group, not sure.
    wg := new(sync.WaitGroup)
    // start up some workers that will block and wait?
    for w := 1; w <= 3; w++ {
        wg.Add(1)
        go matchTelephoneNumbers(jobs, results, wg, telephone)
    }
    // go over a file line by line and queue up a ton of work
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        // Later I want to create a buffer of lines, not just line-by-line here ...
        jobs <- scanner.Text()
    }
    close(jobs)
    wg.Wait()
    // Add up the results from the results channel.
    // The rest of this isn't even working so ignore for now.
    counts := 0
    // for v := range results {
    //     counts += v
    // }
    return counts
}

func matchTelephoneNumbers(jobs <-chan string, results chan<- int, wg *sync.WaitGroup, telephone *regexp.Regexp) {
    // Decreasing internal counter for wait-group as soon as goroutine finishes
    defer wg.Done()
    // eventually I want to have a []string channel to work on a chunk of lines, not just one line of text
    for j := range jobs {
        if telephone.MatchString(j) {
            results <- 1
        }
    }
}

func main() {
    // An artificial input source. Normally this is a file passed on the command line.
    const input = "Foo\n(555) 123-3456\nBar\nBaz"
    numberOfTelephoneNumbers := telephoneNumbersInFile(input)
    fmt.Println(numberOfTelephoneNumbers)
}
You're almost there, just need a little bit of work on goroutines' synchronisation. Your problem is that you're trying to feed the parser and collect the results in the same routine, but that can't be done.
I propose the following:
Run the scanner in a separate goroutine; close the input channel once everything is read.
Run a separate goroutine waiting for the parsers to finish their job, then close the output channel.
Collect all the results in your main routine.
The relevant changes could look like this:
// Go over a file line by line and queue up a ton of work
go func() {
    scanner := bufio.NewScanner(file)
    for scanner.Scan() {
        jobs <- scanner.Text()
    }
    close(jobs)
}()

// Collect all the results...
// First, make sure we close the result channel when everything was processed
go func() {
    wg.Wait()
    close(results)
}()

// Now, add up the results from the results channel until closed
counts := 0
for v := range results {
    counts += v
}
Fully working example on the playground: http://play.golang.org/p/coja1_w-fY
Worth adding: you don't necessarily need the WaitGroup to achieve the same; all you need to know is when to stop receiving results. This could be achieved, for example, by the scanner advertising (on a channel) how many lines were read, with the collector then reading only that number of results (you would need to send zeros as well, though).
Edit: The answer by @tomasz above is the correct one. Please disregard this answer.
You need to do two things:
use buffered chan's so that sending doesn't block
close the results chan so that receiving doesn't block.
The use of buffered channels is essential because unbuffered channels need a receive for each send, which is causing the deadlock you're hitting.
If you fix that, you'll run into a deadlock when you try to receive the results, because results hasn't been closed.
Here's the fixed playground: http://play.golang.org/p/DtS8Matgi5