How do I notify another goroutine to stop? [duplicate] - go

This question already has answers here:
How to stop a goroutine
(9 answers)
Closed 3 years ago.
I have two goroutines: g detects the condition under which f should stop, and f checks whether it should stop in each iteration before doing the actual processing. In other languages, such as Java, I would use a thread-safe shared variable, as in the following code:
func g(stop *bool) {
    for {
        if check_condition() {
            *stop = true
            return
        }
    }
}

func f(stop *bool) {
    for {
        if *stop {
            return
        }
        do_something()
    }
}

func main() {
    var stop = false
    go g(&stop)
    go f(&stop)
    ...
}
I know the code above is not safe, but if I use a channel to send stop from g to f, then f would block on reading from the channel, which is what I want to avoid. What is the safe and idiomatic way of doing this in Go?

Use channel close to notify other goroutines of a condition. Use select with a default clause to avoid blocking when checking for the condition.
func g(stop chan struct{}) {
    for {
        if check_condition() {
            close(stop)
            return
        }
    }
}

func f(stop chan struct{}) {
    for {
        select {
        case <-stop:
            return
        default:
            do_something()
        }
    }
}

func main() {
    var stop = make(chan struct{})
    go g(stop)
    go f(stop)
}
Sending a value on a channel with capacity greater than zero also works, but closing the channel extends naturally to notifying multiple goroutines.
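As an illustration of that last point, here is a minimal sketch (the worker function and the timings are made up for the example) in which a single close(stop) is observed by several goroutines, which a single buffered send cannot do:

package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, stop chan struct{}, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case <-stop:
            fmt.Println("worker", id, "stopping")
            return
        default:
            time.Sleep(10 * time.Millisecond) // stand-in for real work
        }
    }
}

func main() {
    stop := make(chan struct{})
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go worker(i, stop, &wg)
    }
    time.Sleep(100 * time.Millisecond)
    close(stop) // one close is seen by all three workers
    wg.Wait()
}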

The way is to use a select statement with a default clause (see this example).
So f would look something like:
func f(stop chan bool) {
    for {
        select {
        case s := <-stop:
            if s {
                return
            }
        default:
            do_something()
        }
    }
}

Related

function with mutex.Lock that returns before unlocking

I need to use a mutex to read a variable and, if the variable is 0, return from the function. That early return would leave the mutex locked, though.
I know that I could simply put a mutex.Unlock() just before the return, but it does not seem nice / correct.
I can't even do a defer mutex.Unlock() at the beginning of the function, because the code after it requires a lot of time to run.
Is there a correct way to do this?
This is the example:
func mutexfunc() {
    mutex.Lock()
    if variable == 0 {
        return
    }
    mutex.Unlock()
    // long execution time (mutex must be unlocked)
}
UPDATE:
this is the solution I prefer:
var mutex = &sync.Mutex{}
var mutexSensibleVar = 0

func main() {
    if withLock(func() bool { return mutexSensibleVar == 1 }) {
        fmt.Println("it's true")
    } else {
        fmt.Println("it's false")
    }
    fmt.Println("end")
}

func withLock(f func() bool) bool {
    mutex.Lock()
    defer mutex.Unlock()
    return f()
}
If you can't use defer, which is the case here, you have to do the obvious:
func mutexfunc() {
    mutex.Lock()
    if variable == 0 {
        mutex.Unlock()
        return
    }
    mutex.Unlock()
    // long execution time (mutex must be unlocked)
}
If the mutex is there only to protect that variable (that is, there isn't other code you're not showing us), you can also use sync/atomic:
func f() {
    if atomic.LoadInt64(&variable) == 0 {
        return
    }
    ...
}
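For completeness, a minimal self-contained sketch of the sync/atomic approach; note that the variable itself must then be an int64 and every writer has to use the atomic store as well (the set helper here is just for illustration):

import "sync/atomic"

var variable int64 // all reads and writes go through sync/atomic

func set(v int64) {
    atomic.StoreInt64(&variable, v)
}

func f() {
    if atomic.LoadInt64(&variable) == 0 {
        return
    }
    // long execution time, no mutex needed for this variable
}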
You can separate the locked part into its own function.
func varIsZero() bool {
    mutex.Lock()
    defer mutex.Unlock()
    return variable == 0
}

func mutexfunc() {
    if varIsZero() {
        return
    }
    ...
}
An alternative would be to use an anonymous function inside mutexfunc rather than a completely independent function, but it's a matter of taste here.
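A sketch of that anonymous-function variant, under the same assumptions (mutex and variable as in the examples above):

func mutexfunc() {
    zero := func() bool {
        mutex.Lock()
        defer mutex.Unlock()
        return variable == 0
    }()
    if zero {
        return
    }
    // long execution time (mutex is already unlocked here)
}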
Also consider the (clumsy but readable) variant with a "need to unlock" boolean:
func f(arg1 argtype1, arg2 argtype2) (ret returntype) {
    var needToUnlock bool
    defer func() {
        if needToUnlock {
            lock.Unlock()
        }
    }()
    // arbitrary amount of code here that runs unlocked
    lock.Lock()
    needToUnlock = true
    // arbitrary amount of code here that runs locked
    lock.Unlock()
    needToUnlock = false
    // arbitrary amount of code here that runs unlocked
    // repeat as desired
}
You can wrap such a thing up in a type:
type DeferableLock struct {
    L        sync.Locker
    isLocked bool
}

func (d *DeferableLock) Lock() {
    d.L.Lock()
    d.isLocked = true
}

func (d *DeferableLock) Unlock() {
    d.L.Unlock()
    d.isLocked = false
}

func (d *DeferableLock) EnsureUnlocked() {
    if d.isLocked {
        d.Unlock()
    }
}

func NewDeferableLock(l sync.Locker) *DeferableLock {
    return &DeferableLock{L: l}
}
You can now wrap any sync.Locker in a DeferableLock. In functions like f, use the wrapper in place of the raw lock and call defer d.EnsureUnlocked() at the top.
(Any resemblance to sync.Cond is entirely deliberate.)
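A usage sketch for the wrapper above, assuming lock is some sync.Locker:

func f() {
    d := NewDeferableLock(lock)
    defer d.EnsureUnlocked() // safe whether or not d is currently locked

    // code that runs unlocked
    d.Lock()
    // code that runs locked; an early return anywhere here is fine
    d.Unlock()
    // more code that runs unlocked
}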

Why does a concurrent read-write have different results depending on the default value?

Why does the panic stop occurring if we initialize the global variable with true?
package main

import (
    "time"
)

// If false is written here, a panic is expected sometimes.
// But if we write true here, the panic will never happen.
// Why?
var value = false

func main() {
    go func() { for { value = true } }()
    time.Sleep(time.Second)
    for {
        if !value {
            panic("FALSE!")
        }
    }
}
It is a race condition. In such a situation you need to use a sync.WaitGroup.
Using time.Sleep is dangerous, as we cannot predict how goroutines are scheduled.
See the following code:
func main() {
    var wait sync.WaitGroup
    wait.Add(1)
    go func() {
        for {
            if !value {
                value = true
                wait.Done()
            }
        }
    }()
    wait.Wait()
    //time.Sleep(time.Second)
    for {
        if !value {
            panic("FALSE!")
        }
    }
}
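Running the original program with the race detector (go run -race) reports the unsynchronized access directly. If the goal is simply a race-free flag, one possible sketch uses sync/atomic (atomic.Bool requires Go 1.19 or later):

package main

import (
    "sync/atomic"
    "time"
)

var value atomic.Bool

func main() {
    go func() {
        for {
            value.Store(true)
        }
    }()
    time.Sleep(time.Second)
    for {
        if !value.Load() {
            panic("FALSE!")
        }
    }
}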

Watch for changes in a queue containing struct

I have two goroutines:
first one adds tasks to the queue
second one cleans up the queue based on status
Add and cleanup might not be simultaneous.
If the status of a task is success, I want to delete the task from the queue; if not, I will retry for the status to become success (with a time limit). If that fails, I will log and delete it from the queue.
We can't communicate between add and delete, because that is not how the real-world scenario works.
I want something like a watcher which monitors additions to the queue and does the cleanup that follows. To increase complexity, Add might be adding even while cleanup is happening (not shown here). I want to implement it without using external packages.
How can I achieve this?
type Task struct {
    name   string
    status string // completed, failed
}

var list []*Task

func main() {
    done := make(chan bool)
    go Add()
    time.Sleep(15)
    go clean(done)
    <-done
}

func Add() {
    t1 := &Task{"test1", "completed"}
    t2 := &Task{"test2", "failed"}
    list = append(list, t1, t2)
}

func clean(done chan bool) {
    for k, v := range list {
        if v.status == "completed" {
            RemoveIndex(list, k)
        } else {
            // for now consider this as retry
            v.status = "completed"
        }
        if len(list) > 0 {
            clean(done)
        }
        <-done
    }
}

func RemoveIndex(s []*Task, index int) []*Task {
    return append(s[:index], s[index+1:]...)
}
So I found a solution which works for me and am posting it here in case it is helpful for anyone else.
In my main I have added a ticker which runs every x seconds and watches whether something has been added to the queue.
type Task struct {
    name   string
    status string // completed, failed
}

var list []*Task

func main() {
    done := make(chan bool)
    c := make(chan os.Signal, 2)
    ticker := time.NewTicker(x * time.Second) // x is the watch interval
    go Add()
    go func() {
        for {
            select {
            // case <-done:
            //     Cleaner(k)
            case <-ticker.C:
                Monitor(done)
            }
        }
    }()
    signal.Notify(c, os.Interrupt, syscall.SIGTERM)
    <-c
    // waiting for interrupt here
}

func Add() {
    t1 := &Task{"test1", "completed"}
    t2 := &Task{"test2", "failed"}
    list = append(list, t1, t2)
}

func Monitor(done chan bool) {
    if len(list) > 0 {
        cleaner()
    }
}

func cleaner() {
    // do cleaning here
    // pop each element from queue and delete
}

func RemoveIndex(s []*Task, index int) []*Task {
    return append(s[:index], s[index+1:]...)
}
So now this solution does not need to depend on communication between goroutines.
In a real-world scenario the program never dies and keeps adding and cleaning based on the use case. You can optimize further by locking and unlocking around additions to and deletions from the queue.
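A rough sketch of that locking suggestion, with a mutex guarding the shared list (the names follow the code above):

var mu sync.Mutex

func Add() {
    mu.Lock()
    defer mu.Unlock()
    list = append(list, &Task{"test1", "completed"}, &Task{"test2", "failed"})
}

func cleaner() {
    mu.Lock()
    defer mu.Unlock()
    // iterate backwards so deleting by index stays valid
    for i := len(list) - 1; i >= 0; i-- {
        if list[i].status == "completed" {
            list = append(list[:i], list[i+1:]...)
        }
    }
}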

Stop for loop by passing empty struct down channel Go

I am attempting to create a poller in Go that spins up and every 24 hours executes a function.
I also want to be able to stop the polling; I'm attempting to do this by having a done channel and passing an empty struct down it to stop the for loop.
In my tests, the for loop just runs infinitely and I can't seem to stop it. Am I using the done channel incorrectly? The ticker case works as expected.
type Poller struct {
    HandlerFunc HandlerFunc
    interval    *time.Ticker
    done        chan struct{}
}

func (p *Poller) Start() error {
    for {
        select {
        case <-p.interval.C:
            err := p.HandlerFunc()
            if err != nil {
                return err
            }
        case <-p.done:
            return nil
        }
    }
}

func (p *Poller) Stop() {
    p.done <- struct{}{}
}
Here is the test that's executing the code and causing the infinite loop.
poller := poller.NewPoller(
    testHandlerFunc,
    time.NewTicker(1*time.Millisecond),
)
err := poller.Start()
assert.Error(t, err)
poller.Stop()
It seems like the problem is in your usage: you are calling poller.Start() in a blocking manner, so poller.Stop() is never called. It's common in Go projects to start a goroutine inside Start/Run methods, so in poller.Start() I would do something like this:
func (p *Poller) Start() <-chan error {
    errc := make(chan error, 1)
    go func() {
        defer close(errc)
        for {
            select {
            case <-p.interval.C:
                err := p.HandlerFunc()
                if err != nil {
                    errc <- err
                    return
                }
            case <-p.done:
                return
            }
        }
    }()
    return errc
}
Also, there's no need to send an empty struct to the done channel. Closing the channel with close(p.done) is more idiomatic in Go.
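A sketch of what Stop could look like with close; the sync.Once field is an addition here, guarding against a panic if Stop were ever called twice:

type Poller struct {
    HandlerFunc HandlerFunc
    interval    *time.Ticker
    done        chan struct{}
    stopOnce    sync.Once
}

func (p *Poller) Stop() {
    p.stopOnce.Do(func() {
        close(p.done)
    })
}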
There is no explicit way in Go to broadcast an event to goroutines for something like cancellation. Instead it's idiomatic to create a channel that, when closed, signifies a message such as cancelling any work to be done. Something like this is a viable pattern:
var done = make(chan struct{})

func cancelled() bool {
    select {
    case <-done:
        return true
    default:
        return false
    }
}
Goroutines can call cancelled to poll for cancellation.
Then your main loop can respond to such an event, but make sure you drain any channels that might cause goroutines to block.
for {
    select {
    case <-done:
        // Drain whatever channels you need to.
        for range someChannel {
        }
        return
    // .. Other cases
    }
}

Why is the following code sample stuck after some iterations?

I am trying to learn Go and I have a little piece of code that I do not understand: why does it get stuck after some time?
package main

import "log"

func main() {
    deliveryChann := make(chan bool, 10000)
    go func() {
        for {
            deliveryChann <- true
            log.Println("Sent")
        }
    }()
    go func() {
        for {
            select {
            case <-deliveryChann:
                log.Println("received")
            }
        }
    }()
    go func() {
        for {
            select {
            case <-deliveryChann:
                log.Println("received")
            }
        }
    }()
    go func() {
        for {
            select {
            case <-deliveryChann:
                log.Println("received")
            }
        }
    }()
    for {
    }
}
A basic pointer on how to investigate would suffice.
The main goroutine (running the for {} loop) is hogging the thread, and none of the other goroutines are able to execute because of it. If you change the end of your main function to:
for {
    runtime.Gosched()
}
then the thread will be released and another goroutine made active.
func Gosched()
Gosched yields the processor, allowing other goroutines to run. It does not suspend the current goroutine, so execution resumes automatically.
-- https://golang.org/pkg/runtime/#Gosched
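If the intent is only to keep main alive while the other goroutines run, blocking instead of spinning also avoids the problem; one possible sketch replaces the final loop with an empty select, which blocks forever without consuming CPU:

func main() {
    // ... start the sender and receiver goroutines as above ...

    // Instead of a busy "for {}" loop, block forever:
    select {}
}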
The order of execution of goroutines is undefined, so code that gets stuck is legal. You can be more deterministic by communicating with main(). For example, place
for {
    deliveryChann <- true
    log.Println("Sent")
}
in main() instead of go func()
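Putting that suggestion together, a sketch where main itself does the sending and the receivers run as goroutines:

package main

import "log"

func main() {
    deliveryChann := make(chan bool, 10000)
    for i := 0; i < 3; i++ {
        go func() {
            for range deliveryChann {
                log.Println("received")
            }
        }()
    }
    // main never blocks forever in a busy loop; it does real work instead
    for {
        deliveryChann <- true
        log.Println("Sent")
    }
}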
