New to Go and programming in general. I am currently writing a small quiz program for a learning task and ran into a snag that the tutorial does not address, because I have added features that are not covered in the tutorial.
Code is included below:
func runQuestions(randomize bool) int {
tempqSlice := qSlice //Create temporary set of questions (Don't touch original)
if randomize { //If user has chosen to randomize the question order
tempqSlice = shuffle(tempqSlice) //Randomize
}
var runningscore int
userinputchan := make(chan string) //Create return channel for user input
go getInput(userinputchan) //Constantly wait for user input
for num, question := range tempqSlice { //Iterate over each question
fmt.Printf("Question %v:\t%v\n\t", num+1, question.GetQuestion())
select {
case <-time.After(5 * time.Second): //Time limit for this question has passed
fmt.Println("-time up! next question-")
continue
case input := <-userinputchan:
if question.GetAnswer() == input { //If the answer is correct
runningscore++
continue //Continue to next question
}
}
}
return runningscore
}
func getInput(returnchan chan<- string) {
for {
reader := bufio.NewReader(os.Stdin) //Create reader
input, _ := reader.ReadString('\n') //Read from stdin until newline
returnchan <- strings.TrimSpace(input) //Trim the input and send it
}
}
Because the specification of the problem requires each question to have a time limit, I have set an 'endless for loop goroutine' running that waits for user input and then sends it when it is given. My problem is simple: I would like to stop the reader looking for input once the quiz is over, but since reader.ReadString('\n') is already blocking while awaiting input, I'm not sure how.
I would like to stop the reader looking for input once the quiz is over
While the reader is waiting for input, you can use a goroutine running in the background to check whether the quiz timer has expired.
Suppose your timer for the quiz is 30 seconds. Pass the timer to the getInput goroutine and check for timer expiry.
var runningscore int
userinputchan := make(chan string) //Create return channel for user input
myTime := flag.Int("timer", 30, "time to complete the quiz")
flag.Parse() //Parse the flags so the -timer value is actually read
timer := startTimer(myTime)
go getInput(userinputchan, timer)
func startTimer(myTime *int) *time.Timer {
return time.NewTimer(time.Duration(*myTime) * time.Second)
}
func getInput(returnchan chan<- string, timer *time.Timer) {
reader := bufio.NewReader(os.Stdin) //Create reader once, outside the loop
go checkTime(timer) //Watch for quiz-timer expiry in the background
for {
input, _ := reader.ReadString('\n') //Read from stdin until newline
returnchan <- strings.TrimSpace(input) //Trim the input and send it
}
}
func checkTime(timer *time.Timer) {
<-timer.C
fmt.Println("\nYour quiz is over!!!")
// print the final score
os.Exit(1)
}
You cannot do this out of the box; however, you can work around it by reading character by character from stdin, buffering the input, and only sending the buffered data when the user hits Enter.
Unfortunately, there is no standard package for reading character by character, so I will be using github.com/eiannone/keyboard. (You can get it with go get -u github.com/eiannone/keyboard.)
The following class will be your input system. You can instantiate it by calling NewInput(yourResultChannel). This will start reading from stdin character by character, buffering the characters into buf. Once Enter (char == 0) is hit, it sends the contents of the buffer to the result channel and resets the buffer.
type Input struct {
lock sync.Mutex
buf []rune
resultCh chan string
}
func NewInput(resultCh chan string) *Input {
i := &Input{
resultCh: resultCh,
}
go i.readStdin()
return i
}
func (i *Input) ResetBuffer() {
i.lock.Lock()
defer i.lock.Unlock()
i.resetBuffer()
}
func (i *Input) readStdin() {
for {
char, _, err := keyboard.GetSingleKey()
if err != nil {
panic(err)
}
fmt.Printf("%s", string(char))
i.lock.Lock()
if char == 0 {
i.resultCh <- strings.TrimSpace(string(i.buf))
i.resetBuffer()
} else {
i.buf = append(i.buf, char)
}
i.lock.Unlock()
}
}
func (i *Input) resetBuffer() {
i.buf = []rune{}
}
Your only remaining job is to create an instance of the Input class at the point where you were previously starting your getInput() goroutine.
userinputchan := make(chan string) //Create return channel for user input
input := NewInput(userinputchan)
and reset the buffer when the question times out so that the previous garbage input won't be part of the new question's answer. In the select statement:
case <-time.After(5 * time.Second):
fmt.Println("-time up! next question-")
input.ResetBuffer()
continue
Full code: https://play.golang.org/p/-IXVvY9dOZO
To give you context,
The variable elementInput is dynamic; I do not know its exact length.
It can be 10, 5, and so on.
The channel's *Element type is a struct.
My example works, but my problem is that this implementation is still sequential, because I am waiting for each channel to return before I can append the result.
Can you please help me call the GetElements() function concurrently while preserving the order defined in elementInput (based on index)?
elementInput := []string{FB_FRIENDS, BEAUTY_USERS, FITNESS_USERS, COMEDY_USERS}
wg.Add(len(elementInput))
for _, v := range elementInput {
//create channel
channel := make(chan *Element)
//concurrent call
go GetElements(ctx, page, channel)
//Preserve the order
var elementRes = *<-channel
if len(elementRes.List) > 0 {
el = append(el, elementRes)
}
}
wg.Wait()
Your implementation is not concurrent.
The reason: after every goroutine call you wait for its result, and that makes it serial.
Below is a sample implementation similar to your flow:
the Concurrency method calls the function concurrently for each input
afterwards we loop and collect the response from every call above
the main goroutine sleeps for 2 seconds before exiting
Go Playground with running code -> Sample Application
func main() {
Concurrency()
time.Sleep(2 * time.Second) // 2 seconds (a bare 2000 would be interpreted as nanoseconds)
}
func response(greeter string, channel chan *string) {
reply := fmt.Sprintf("hello %s", greeter)
channel <- &reply
}
func Concurrency() {
events := []string{"ALICE", "BOB"}
channels := make([]chan *string, 0)
// start concurrently
for _, event := range events {
channel := make(chan *string)
go response(event, channel)
channels = append(channels, channel)
}
// collect response
response := make([]string, len(channels))
for i := 0; i < len(channels); i++ {
response[i] = *<-channels[i]
}
// print response
log.Printf("channel response %v", response)
}
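Applied back to the original loop, here is a rough sketch of the same idea using the question's own names (GetElements, ctx, page, Element, and el are taken from the question; error handling omitted): start every call first with its own channel, then receive in index order so the order of elementInput is preserved.
    elementInput := []string{FB_FRIENDS, BEAUTY_USERS, FITNESS_USERS, COMEDY_USERS}
    channels := make([]chan *Element, len(elementInput))
    // start concurrently: one channel per input, no receive inside this loop
    for i := range elementInput {
        channels[i] = make(chan *Element, 1) // buffered so the goroutine never blocks on send
        go GetElements(ctx, page, channels[i])
    }
    // collect in index order, which preserves the order of elementInput
    for _, c := range channels {
        elementRes := *<-c
        if len(elementRes.List) > 0 {
            el = append(el, elementRes)
        }
    }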
I'm writing a program that reads a list of order numbers in a file called orders.csv and compares it with the other csv files that are present in the folder.
The problem is that it goes into deadlock even using waitgroup and I don't know why.
package main
import (
"bufio"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"sync"
)
type Files struct {
filenames []string
}
type Orders struct {
ID []string
}
var ordersFilename string = "orders.csv"
func main() {
var (
ordersFile *os.File
files Files
orders Orders
err error
)
mu := new(sync.Mutex)
wg := &sync.WaitGroup{}
wg.Add(1)
if ordersFile, err = os.Open(ordersFilename); err != nil {
log.Fatalln("Could not open file: " + ordersFilename)
}
orders = getOrderIDs(ordersFile)
files.filenames = getCSVsFromCurrentDir()
var filenamesSize = len(files.filenames)
var ch = make(chan map[string][]string, filenamesSize)
var done = make(chan bool)
for i, filename := range files.filenames {
go func(currentFilename string, ch chan<- map[string][]string, i int, orders Orders, wg *sync.WaitGroup, filenamesSize *int, mu *sync.Mutex, done chan<- bool) {
wg.Add(1)
defer wg.Done()
checkFile(currentFilename, orders, ch)
mu.Lock()
*filenamesSize--
mu.Unlock()
if i == *filenamesSize {
done <- true
close(done)
}
}(filename, ch, i, orders, wg, &filenamesSize, mu, done)
}
select {
case str := <-ch:
fmt.Printf("%+v\n", str)
case <-done:
wg.Done()
break
}
wg.Wait()
close(ch)
}
// getCSVsFromCurrentDir returns a string slice
// with the filenames of csv files inside the
// current directory that are not "orders.csv"
func getCSVsFromCurrentDir() []string {
var filenames []string
err := filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
if path != "." && strings.HasSuffix(path, ".csv") && path != ordersFilename {
filenames = append(filenames, path)
}
return nil
})
if err != nil {
log.Fatalln("Could not read file names in current dir")
}
return filenames
}
// getOrderIDs returns an Orders struct filled
// with order IDs retrieved from the file
func getOrderIDs(file *os.File) Orders {
var (
orders Orders
err error
fileContent string
)
reader := bufio.NewReader(file)
if fileContent, err = readLine(reader); err != nil {
log.Fatalln("Could not read file: " + ordersFilename)
}
for err == nil {
orders.ID = append(orders.ID, fileContent)
fileContent, err = readLine(reader)
}
return orders
}
func checkFile(filename string, orders Orders, ch chan<- map[string][]string) {
var (
err error
file *os.File
fileContent string
orderFilesMap map[string][]string
counter int
)
orderFilesMap = make(map[string][]string)
if file, err = os.Open(filename); err != nil {
log.Fatalln("Could not read file: " + filename)
}
reader := bufio.NewReader(file)
if fileContent, err = readLine(reader); err != nil {
log.Fatalln("Could not read file: " + filename)
}
for err == nil {
if containedInSlice(fileContent, orders.ID) && !containedInSlice(fileContent, orderFilesMap[filename]) {
orderFilesMap[filename] = append(orderFilesMap[filename], fileContent)
// fmt.Println("Found: ", fileContent, " in ", filename)
} else {
// fmt.Printf("Could not find: '%s' in '%s'\n", fileContent, filename)
}
counter++
fileContent, err = readLine(reader)
}
ch <- orderFilesMap
}
// containedInSlice returns true or false
// based on whether the string is contained
// in the slice
func containedInSlice(str string, slice []string) bool {
for _, ID := range slice {
if ID == str {
return true
}
}
return false
}
// readLine returns a line from the passed reader
func readLine(r *bufio.Reader) (string, error) {
var (
isPrefix bool = true
err error = nil
line, ln []byte
)
for isPrefix && err == nil {
line, isPrefix, err = r.ReadLine()
ln = append(ln, line...)
}
return string(ln), err
}
The first issue is that wg.Add must always be called outside of the goroutine(s) it stands for. If it isn't, the wg.Wait call might happen before the goroutine(s) have actually started running (and called wg.Add), and it will therefore "think" that there is nothing to wait for.
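A minimal sketch of the safe shape (process and filenames here are just placeholders, not the poster's functions):
    var wg sync.WaitGroup
    for _, filename := range filenames {
        wg.Add(1) // count the goroutine before starting it
        go func(name string) {
            defer wg.Done()
            process(name) // placeholder for the per-file work
        }(filename)
    }
    wg.Wait() // guaranteed to see every Add made above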
The second issue with the code is that there are multiple ways it waits for the routines to be done: there is the WaitGroup and there is the done channel. Use only one of them. Which one to use also depends on how the results of the goroutines are used, which brings us to the next problem.
The third issue is with gathering the results. Currently the code only prints / uses a single result from the goroutines.
Put a for { ... } loop around the select and use return to break out of the loop if the done channel is closed.
(Note that you don't need to send anything on the done channel, closing it is enough.)
Improved Version 0.0.1
So here is the first version (including some other "code cleanup"), with a done channel used to signal completion and the WaitGroup removed:
func main() {
ordersFile, err := os.Open(ordersFilename)
if err != nil {
log.Fatalln("Could not open file: " + ordersFilename)
}
orders := getOrderIDs(ordersFile)
files := Files{
filenames: getCSVsFromCurrentDir(),
}
var (
mu = new(sync.Mutex)
filenamesSize = len(files.filenames)
ch = make(chan map[string][]string, filenamesSize)
done = make(chan bool)
)
for i, filename := range files.filenames {
go func(currentFilename string, ch chan<- map[string][]string, i int, orders Orders, filenamesSize *int, mu *sync.Mutex, done chan<- bool) {
checkFile(currentFilename, orders, ch)
mu.Lock()
*filenamesSize--
mu.Unlock()
// TODO: This also accesses filenamesSize, so it also needs to be protected with the mutex:
if i == *filenamesSize {
done <- true
close(done)
}
}(filename, ch, i, orders, &filenamesSize, mu, done)
}
// Note: closing a channel is not really needed, so you can omit this:
defer close(ch)
for {
select {
case str := <-ch:
fmt.Printf("%+v\n", str)
case <-done:
return
}
}
}
Improved Version 0.0.2
In your case we have some advantage, however. We know exactly how many goroutines we started and therefore also how many results we expect. (Provided each goroutine returns exactly one result, which this code currently does.) That gives us another option, as we can collect the results with another for loop having the same number of iterations:
func main() {
ordersFile, err := os.Open(ordersFilename)
if err != nil {
log.Fatalln("Could not open file: " + ordersFilename)
}
orders := getOrderIDs(ordersFile)
files := Files{
filenames: getCSVsFromCurrentDir(),
}
var (
// Note: a buffered channel helps speed things up. The size does not need to match the size of the items that will
// be passed through the channel. A fixed, small size is perfect here.
ch = make(chan map[string][]string, 5)
)
for _, filename := range files.filenames {
go func(filename string) {
// orders and channel are not variables of the loop and can be used without copying
checkFile(filename, orders, ch)
}(filename)
}
for range files.filenames {
str := <-ch
fmt.Printf("%+v\n", str)
}
}
A lot simpler, isn't it? Hope that helps!
There is a lot wrong with this code.
You're using the WaitGroup wrong. Add has to be called in the main goroutine, else there is a chance that Wait is called before all Add calls complete.
There's an extraneous Add(1) call right after initializing the WaitGroup that isn't matched by a Done() call, so Wait will never return (assuming the point above is fixed).
You're using both a WaitGroup and a done channel to signal completion. This is redundant at best.
You're reading filenamesSize while not holding the lock (in the if i == *filenamesSize statement). This is a race condition.
The i == *filenamesSize condition makes no sense in the first place. Goroutines execute in an arbitrary order, so you can't be sure that the goroutine with i == 0 is the last one to decrement filenamesSize.
This can all be simplified by getting rid of most of the synchronization primitives and simply closing the ch channel when all goroutines are done:
func main() {
ch := make(chan map[string][]string)
var wg sync.WaitGroup
for _, filename := range getCSVsFromCurrentDir() {
filename := filename // capture loop var
wg.Add(1)
go func() {
checkFile(filename, orders, ch)
wg.Done()
}()
}
go func() {
wg.Wait() // after all goroutines are done...
close(ch) // let range loop below exit
}()
for str := range ch {
// ...
}
}
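For completeness, a sketch of how this could plug into the rest of the question's code (getOrderIDs, getCSVsFromCurrentDir and checkFile unchanged; the question's existing imports already cover everything used here):
    func main() {
        ordersFile, err := os.Open(ordersFilename)
        if err != nil {
            log.Fatalln("Could not open file: " + ordersFilename)
        }
        orders := getOrderIDs(ordersFile)

        ch := make(chan map[string][]string)
        var wg sync.WaitGroup
        for _, filename := range getCSVsFromCurrentDir() {
            filename := filename // capture loop var
            wg.Add(1)
            go func() {
                defer wg.Done()
                checkFile(filename, orders, ch)
            }()
        }
        go func() {
            wg.Wait() // after all goroutines are done...
            close(ch) // ...let the range loop below exit
        }()
        for orderFilesMap := range ch {
            fmt.Printf("%+v\n", orderFilesMap)
        }
    }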
Not an answer, but some comments that do not fit in the comment box.
In this part of the code
func main() {
var (
ordersFile *os.File
files Files
orders Orders
err error
)
mu := new(sync.Mutex)
wg := &sync.WaitGroup{}
wg.Add(1)
The last statement is a call to wg.Add that appears dangling. By that I mean it is hard to see which wg.Done call is its counterpart. Calling wg.Add without a matching wg.Done is a mistake; it is error-prone not to write them in such a way that the pair is immediately visible.
This part of the code is clearly wrong:
go func(currentFilename string, ch chan<- map[string][]string, i int, orders Orders, wg *sync.WaitGroup, filenamesSize *int, mu *sync.Mutex, done chan<- bool) {
wg.Add(1)
defer wg.Done()
Consider that by the time the goroutine is executed and has added 1 to the waitgroup, the parent routine has already continued executing. See this example: https://play.golang.org/p/N9Chaqkv4bd
The main routine does not wait for the waitgroup because the counter has not had time to be incremented.
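To make the timing issue concrete, here is a tiny self-contained sketch of the broken pattern (the same idea as the linked playground, not a copy of it):
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var wg sync.WaitGroup
        go func() {
            wg.Add(1) // too late: main may already be past Wait
            defer wg.Done()
            time.Sleep(100 * time.Millisecond)
            fmt.Println("work done")
        }()
        wg.Wait() // the counter is still 0 here, so this returns immediately
        fmt.Println("main exits, possibly before the work ever ran")
    }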
There is more to say, but I find it hard to understand the purpose of your code, so I am not sure how to help you further without basically rewriting it.
I am working on a personal project that will run on a Raspberry Pi with some sensors attached to it.
The function that read from the sensors and the function that handle the socket connection are executed in different goroutines, so, in order to send data on the socket when they are read from the sensors, I create a chan []byte in the main function and pass it to the goroutines.
My problem is this: if I do multiple writes in a row, only the first piece of data arrives at the client and the others don't. But if I put a little time.Sleep in the sender function, all the data arrives correctly at the client.
Anyway, here is a simplified version of this little program:
package main
import (
"net"
"os"
"sync"
"time"
)
const socketName string = "./test_socket"
// create the socket and launch the accept-client routine
func launchServerUDS(ch chan []byte) {
if err := os.RemoveAll(socketName); err != nil {
return
}
l, err := net.Listen("unix", socketName)
if err != nil {
return
}
go acceptConnectionRoutine(l, ch)
}
// accept incoming connection on the socket and
// 1) launch the routine to handle commands from the client
// 2) launch the routine to send data when the server reads from the sensors
func acceptConnectionRoutine(l net.Listener, ch chan []byte) {
defer l.Close()
for {
conn, err := l.Accept()
if err != nil {
return
}
go commandsHandlerRoutine(conn, ch)
go autoSendRoutine(conn, ch)
}
}
// routine that sends data to the client
func autoSendRoutine(c net.Conn, ch chan []byte) {
for {
data := <-ch
if string(data) == "exit" {
return
}
c.Write(data)
}
}
// handle client connection and calls functions to execute commands
func commandsHandlerRoutine(c net.Conn, ch chan []byte) {
for {
buf := make([]byte, 1024)
n, err := c.Read(buf)
if err != nil {
ch <- []byte("exit")
break
}
// now, for sake of simplicity , only echo commands back to the client
_, err = c.Write(buf[:n])
if err != nil {
ch <- []byte("exit")
break
}
}
}
// write on the channel to the autosend routine so the data are written on the socket
func sendDataToClient(data []byte, ch chan []byte) {
select {
case ch <- data:
// if I put a little sleep here, no problems
// if I remove the sleep, only data1 is sent to the client
// time.Sleep(1 * time.Millisecond)
default:
}
}
func dummyReadDataRoutine(ch chan []byte) {
for {
// read data from the sensors every 5 seconds
time.Sleep(5 * time.Second)
// read first data and send it
sendDataToClient([]byte("dummy data1\n"), ch)
// read second data and send it
sendDataToClient([]byte("dummy data2\n"), ch)
// read third data and send it
sendDataToClient([]byte("dummy data3\n"), ch)
}
}
func main() {
ch := make(chan []byte)
wg := sync.WaitGroup{}
wg.Add(2)
go dummyReadDataRoutine(ch)
go launchServerUDS(ch)
wg.Wait()
}
I don't think it's correct to use a sleep to synchronize writes. How do I fix this while keeping the functions running on different goroutines?
The primary problem was in the function:
func sendDataToClient(data []byte, ch chan []byte) {
select {
case ch <- data:
// if I put a little sleep here, no problems
// if I remove the sleep, only data1 is sent to the client
// time.Sleep(1 * time.Millisecond)
default:
}
}
If the channel ch isn't ready at the moment the function is called, the default case will be taken and the data will never be sent. In this case you should eliminate the function and send to the channel directly.
Buffering the channel is orthogonal to the problem at hand, and should be done for similar reasons as you would use buffered IO, i.e. to provide a "buffer" for writes that can't immediately make progress. If the code were not able to progress without a buffer, adding one only delays possible deadlocks.
You also don't need the exit sentinel value here, as you could range over the channel and close it when you're done. This however still ignores write errors, but again that requires some re-design.
for data := range ch {
c.Write(data)
}
You should also be careful passing slices over channels, as it's all too easy to lose track of which logical process has ownership and is going to modify the backing array. I can't say from the information given if passing the read+write data over channels improves the architecture, but this is not a pattern you will find in most go networking code.
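Putting those points together, a rough sketch of what the sending side could look like under those suggestions (names taken from the question; this assumes a single receiver ranging over ch, and the copy is only needed if the sender reuses its buffer):
    // The reader goroutine owns ch: it sends directly and closes it when done,
    // which lets a `for data := range ch` receiver exit without an "exit" sentinel.
    func dummyReadDataRoutine(ch chan<- []byte) {
        defer close(ch)
        for i := 0; i < 100; i++ {
            time.Sleep(5 * time.Second)
            reading := []byte("dummy data\n")
            // Copy before sending so the receiver owns its own backing array
            // and the sender is free to reuse `reading`.
            out := make([]byte, len(reading))
            copy(out, reading)
            ch <- out
        }
    }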
JimB gave a good explanation, so I think his answer is the better one.
I have included my partial solution in this answer.
I was thinking that my code was clear and simplified, but as JimB said, it can be made simpler and clearer. I am leaving my old code posted so people can see how to write simpler code and not make a mess like I did.
As chmike said, my issue wasn't related to the socket as I thought, but only to the channel. Writing on an unbuffered channel was one of the problems. After changing the unbuffered channel to a buffered one, the issue was resolved. Anyway, this code is still not "good code" and can be improved by following the principles that JimB laid out in his answer.
So here is the new code:
package main
import (
"net"
"os"
"sync"
"time"
)
const socketName string = "./test_socket"
// create the socket and accept clients connections
func launchServerUDS(ch chan []byte, wg *sync.WaitGroup) {
defer wg.Done()
if err := os.RemoveAll(socketName); err != nil {
return
}
l, err := net.Listen("unix", socketName)
if err != nil {
return
}
defer l.Close()
for {
conn, err := l.Accept()
if err != nil {
return
}
// these goroutines are launched when a client is connected
// routine that listens for and echoes commands
go commandsHandlerRoutine(conn, ch)
// routine to send data read from the sensors to the client
go autoSendRoutine(conn, ch)
}
}
// routine that sends data to the client
func autoSendRoutine(c net.Conn, ch chan []byte) {
for {
data := <-ch
if string(data) == "exit" {
return
}
c.Write(data)
}
}
// handle commands received from the client
func commandsHandlerRoutine(c net.Conn, ch chan []byte) {
for {
buf := make([]byte, 1024)
n, err := c.Read(buf)
if err != nil {
// if I can't read, send an exit command to autoSendRoutine and exit
ch <- []byte("exit")
break
}
// now, for sake of simplicity , only echo commands back to the client
_, err = c.Write(buf[:n])
if err != nil {
// if I can't write back, send an exit command to autoSendRoutine and exit
ch <- []byte("exit")
break
}
}
}
// this goroutine reads from the sensors and writes to the channel, so data is sent
// to the client if a client is connected
func dummyReadDataRoutine(ch chan []byte, wg *sync.WaitGroup) {
x := 0
for x < 100 {
// read data from the sensors every second
time.Sleep(1 * time.Second)
// read first data and send it
ch <- []byte("data1\n")
// read second data and send it
ch <- []byte("data2\n")
// read third data and send it
ch <- []byte("data3\n")
x++
}
wg.Done()
}
func main() {
// create a BUFFERED CHANNEL
ch := make(chan []byte, 1)
wg := sync.WaitGroup{}
wg.Add(2)
// launch the goruotines that handle the socket connections
// and read data from the sensors
go dummyReadDataRoutine(ch, &wg)
go launchServerUDS(ch, &wg)
wg.Wait()
}
I'm trying to learn some Go and I am trying to make a script that reads from a CSV file, does some processing, and produces some results.
I am following a pipeline pattern: a goroutine reads the file line by line with a scanner, the lines are sent to a channel, and different goroutines consume the channel's content.
An example of what I am trying to do:
https://gist.github.com/pkulak/93336af9bb9c7207d592
My problem is: in the CSV file there are lots of records, and I want to do some math between consecutive lines. Let's say record 1 is r1, record 2 is r2, and so on.
When I read the file I have r1. In the next scanner loop I have r2. I want to see if r1-r2 is a valid number for me. If yes, do some math between them, then check r3 and r4 the same way. If r1-r2 is not valid, ignore r2 and check r1-r3, and so on.
Should I handle this while reading the file, or after I put the file's lines into the channel and process the channel content?
Any suggestions that do not break concurrency?
I think you should determine whether "r1-r2 is a valid pair of numbers for you" inside the "Read the lines into the work queue" function.
So you should read the current line, then read the next lines one by one until you have a valid pair of numbers. Once you have it, send that pair into the workQueue channel and look for the next pair.
This is your code with changes:
package main
import (
"bufio"
"log"
"os"
"errors"
)
var concurrency = 100
type Pair struct {
line1 string
line2 string
}
func main() {
// It would be better to receive the file path from somewhere (like args or similar)
filePath := "/path/to/file.csv"
// This channel has no buffer, so it only accepts input when something is ready
// to take it out. This keeps the reading from getting ahead of the writers.
workQueue := make(chan Pair)
// We need to know when everyone is done so we can exit.
complete := make(chan bool)
// Read the lines into the work queue.
go func() {
file, e := os.Open(filePath)
if e != nil {
log.Fatal(e)
}
// Close when the function returns
defer file.Close()
scanner := bufio.NewScanner(file)
// Get pairs and send them into "workQueue" channel
for {
line1, e := getNextCorrectLine(scanner)
if e != nil {
break
}
line2, e := getNextCorrectLine(scanner)
if e != nil {
break
}
workQueue <- Pair{line1, line2}
}
// Close the channel so everyone reading from it knows we're done.
close(workQueue)
}()
// Now read them all off, concurrently.
for i := 0; i < concurrency; i++ {
go startWorking(workQueue, complete)
}
// Wait for everyone to finish.
for i := 0; i < concurrency; i++ {
<-complete
}
}
func getNextCorrectLine(scanner *bufio.Scanner) (string, error) {
var line string
for scanner.Scan() {
line = scanner.Text()
if isCorrect(line) {
return line, nil
}
}
return "", errors.New("no more lines")
}
func isCorrect(str string) bool {
// Make your validation here
return true
}
func startWorking(pairs <-chan Pair, complete chan<- bool) {
for pair := range pairs {
doTheWork(pair)
}
// Let the main process know we're done.
complete <- true
}
func doTheWork(pair Pair) {
// Do the work with the pair
}
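The validation and the actual work are deliberately left as stubs above. Purely as an illustration, assuming each CSV line starts with a numeric field (and with strings and strconv added to the imports), isCorrect and doTheWork might look something like this:
    // Hypothetical: a line is valid if its first field parses as a float.
    func isCorrect(str string) bool {
        first := strings.TrimSpace(strings.Split(str, ",")[0])
        _, err := strconv.ParseFloat(first, 64)
        return err == nil
    }

    // Hypothetical: do the math between the two lines, e.g. their difference.
    func doTheWork(pair Pair) {
        a, _ := strconv.ParseFloat(strings.TrimSpace(strings.Split(pair.line1, ",")[0]), 64)
        b, _ := strconv.ParseFloat(strings.TrimSpace(strings.Split(pair.line2, ",")[0]), 64)
        log.Printf("difference between pair: %f", a-b)
    }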
I have a Golang program which makes real time predictions on a machine learning model built using TensorFlow. The data for the prediction needs to be read line by line from Stdin and the prediction has to be performed on each line of data. The flow of data is not constant. I need a system that ensures that every time there is data to be read from Stdin, the prediction method is invoked and if there is no data in Stdin, the program waits for new data and does not terminate.
I tried achieving this using channels and the select, but if there is no data in the Stdin, the program terminates. Below is the code snippet:
func run_the_model(in <-chan string) {
go func(){
...
...
...
//Fetch the model
//Run the prediction
//print the result on StdOut
}()
}
func main() {
data := make(chan string)
// read data from Stdin
go func() {
scan := bufio.NewScanner(os.Stdin)
for scan.Scan() {
data <- scan.Text()
}
}()
time.Sleep(time.Second * 5)
select{
case <-data:
run_the_model(data)
time.Sleep(time.Second * 5)
default:
println("Waiting for data")
time.Sleep(time.Duration(math.MaxInt64))
}
}
When there is no new data on Stdin, the select's default case should be executed, and when there is new data in the data channel, run_the_model should be executed. How can this be achieved?
Put your select in an infinite loop.
for {
select{
case <-data:
run_the_model(data)
time.Sleep(time.Second * 5)
default:
println("Waiting for data")
time.Sleep(time.Duration(math.MaxInt64))
}
}
I think you are using select wrong; for your case this should work:
func runTheModel(in string) {
// do what ever u want
}
func main() {
data := make(chan string)
// read data from Stdin
go func() {
scan := bufio.NewScanner(os.Stdin)
for scan.Scan() {
data <- scan.Text()
}
}()
println("waiting for data:")
for d := range data {
// command to exit program
if d == "q" {
return
}
go runTheModel(d)
}
}
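One detail to be aware of with the snippet above: returning on "q" ends main while predictions started with go runTheModel(d) may still be running. If they need to finish first, a sync.WaitGroup can be wrapped around the worker calls; a minimal sketch of just that part, assuming the rest stays as above:
    var wg sync.WaitGroup
    for d := range data {
        if d == "q" {
            break
        }
        wg.Add(1)
        go func(line string) {
            defer wg.Done()
            runTheModel(line)
        }(d)
    }
    wg.Wait() // let in-flight predictions finish before exiting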