I need to use a mutex to read a variable, and if the variable is 0, return from the function. Returning early would leave the mutex locked, though.
I know I could simply put a mutex.Unlock() just before the return, but that does not seem nice / correct.
I can't just defer mutex.Unlock() at the beginning of the function either, because the code after the check takes a long time to run and must not hold the lock.
Is there a correct way to do so?
This is the example:
func mutexfunc() {
    mutex.Lock()
    if variable == 0 {
        return
    }
    mutex.Unlock()

    // long execution time (mutex must be unlocked)
}
UPDATE:
This is the solution I prefer:
package main

import (
    "fmt"
    "sync"
)

var mutex = &sync.Mutex{}
var mutexSensibleVar = 0

func main() {
    if withLock(func() bool { return mutexSensibleVar == 1 }) {
        fmt.Println("it's true")
    } else {
        fmt.Println("it's false")
    }
    fmt.Println("end")
}

func withLock(f func() bool) bool {
    mutex.Lock()
    defer mutex.Unlock()
    return f()
}
If you can't use defer (and here you can't), you have to do the obvious:
func mutexfunc() {
    mutex.Lock()
    if variable == 0 {
        mutex.Unlock()
        return
    }
    mutex.Unlock()

    // long execution time (mutex must be unlocked)
}
If the mutex is there only to protect that variable (that is, there isn't other code you're not showing us), you can also use sync/atomic:
func f() {
    if atomic.LoadInt64(&variable) == 0 {
        return
    }
    ...
}
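For this to be safe, every access to the variable, writes included, has to go through sync/atomic; a minimal sketch of the writer side, assuming variable is declared as an int64 (setVariable is a hypothetical helper):

// setVariable is a hypothetical writer that pairs with the atomic.LoadInt64
// in the reader above; plain assignments to variable would still race.
func setVariable(v int64) {
    atomic.StoreInt64(&variable, v)
}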
You can separate the locked part into its own function.
func varIsZero() bool {
    mutex.Lock()
    defer mutex.Unlock()
    return variable == 0
}

func mutexfunc() {
    if varIsZero() {
        return
    }
    ...
}
An alternative would be to use an anonymous function inside mutexfunc rather than a completely independent function, but it's a matter of taste here.
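A minimal sketch of the anonymous-function variant, keeping the same lock-scoped check inline (the names are the ones from the question):

func mutexfunc() {
    zero := func() bool {
        mutex.Lock()
        defer mutex.Unlock()
        return variable == 0
    }()
    if zero {
        return
    }
    // long execution time (mutex is already unlocked here)
}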
Also consider the (clumsy but readable) variant with a "need to unlock" boolean:
func f(arg1 argtype1, arg2 argtype2) (ret returntype) {
    var needToUnlock bool
    defer func() {
        if needToUnlock {
            lock.Unlock()
        }
    }()

    // arbitrary amount of code here that runs unlocked
    lock.Lock()
    needToUnlock = true
    // arbitrary amount of code here that runs locked
    lock.Unlock()
    needToUnlock = false
    // arbitrary amount of code here that runs unlocked
    // repeat as desired
}
You can wrap such a thing up in a type:
type DeferableLock struct {
    L        sync.Locker
    isLocked bool
}

func (d *DeferableLock) Lock() {
    d.L.Lock()
    d.isLocked = true
}

func (d *DeferableLock) Unlock() {
    d.L.Unlock()
    d.isLocked = false
}

func (d *DeferableLock) EnsureUnlocked() {
    if d.isLocked {
        d.Unlock()
    }
}

func NewDeferableLock(l sync.Locker) *DeferableLock {
    return &DeferableLock{L: l}
}
You can now wrap any sync.Locker with a DeferableLock. In functions like f, wrap the lock with the deferable wrapper and call defer d.EnsureUnlocked().
(Any resemblance to sync.Cond is entirely deliberate.)
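A minimal usage sketch, assuming lock is a *sync.Mutex (any sync.Locker works):

func f() {
    d := NewDeferableLock(lock)
    defer d.EnsureUnlocked()

    // code that runs unlocked
    d.Lock()
    // code that runs locked; an early return here no longer leaks the lock
    d.Unlock()
    // more code that runs unlocked
}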
I have two goroutines: g detects the condition under which f should stop, and f checks whether it should stop at each iteration before doing the actual processing. In other languages such as Java, I would use a thread-safe shared variable, like in the following code:
func g(stop *bool) {
    for {
        if check_condition() {
            *stop = true
            return
        }
    }
}

func f(stop *bool) {
    for {
        if *stop {
            return
        }
        do_something()
    }
}

func main() {
    var stop = false
    go g(&stop)
    go f(&stop)
    ...
}
I know the code above is not safe, but if I use a channel to send stop from g to f, f would block reading from the channel, which is what I want to avoid. What is the safe and idiomatic way of doing this in Go?
Use channel close to notify other goroutines of a condition. Use select with a default clause to avoid blocking when checking for the condition.
func g(stop chan struct{}) {
    for {
        if check_condition() {
            close(stop)
            return
        }
    }
}

func f(stop chan struct{}) {
    for {
        select {
        case <-stop:
            return
        default:
            do_something()
        }
    }
}

func main() {
    var stop = make(chan struct{})
    go g(stop)
    go f(stop)
}
Sending a value on a channel with capacity greater than zero also works, but closing the channel extends naturally to multiple waiting goroutines.
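A small self-contained sketch of that point: a single close is observed by every receiver, so one signal stops all the workers (worker and the sleeps here are illustrative):

package main

import (
    "fmt"
    "time"
)

func worker(id int, stop chan struct{}) {
    for {
        select {
        case <-stop:
            fmt.Printf("worker %d stopping\n", id)
            return
        default:
            time.Sleep(100 * time.Millisecond) // stand-in for do_something()
        }
    }
}

func main() {
    stop := make(chan struct{})
    for i := 0; i < 3; i++ {
        go worker(i, stop)
    }
    time.Sleep(time.Second)
    close(stop) // a single close is seen by all three workers
    time.Sleep(time.Second)
}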
The way is to use a select statement with a default clause (see this example).
So f would look something like:
func f(stop chan bool) {
    for {
        select {
        case s := <-stop:
            if s {
                return
            }
        default:
            do_something()
        }
    }
}
I have a goroutine that will be run multiple times. But it can only run one at a time (single instance). What is the correct/idiomatic way to make sure a certain goroutine can run only one at a time?
Here is my contrived example code to illustrate the point:
func main() {
    // Contrived example!!!!!!
    // theCaller() may be run at multiple, unpredictable times
    // theJob() must only be run one at a time
    go theCaller()
    go theCaller()
    go theCaller()
}

func theCaller() {
    if !jobIsRunning { // race condition here!
        jobIsRunning = true
        go theJob()
    }
}

var jobIsRunning bool

// Can run multiple times, but only one at a time
func theJob() {
    defer jobDone()
    do_something()
}

func jobDone() {
    jobIsRunning = false
}
Based on the question and other comments from the OP, it looks like the goal is to start a new job if and only if a job is not already running.
Use a boolean variable protected by a sync.Mutex to record the running state of the job. Set the variable to true when starting a job and to false when the job completes. Test this variable to determine whether a job should be started.
var (
    jobIsRunning   bool
    jobIsRunningMu sync.Mutex
)

func maybeStartJob() {
    jobIsRunningMu.Lock()
    start := !jobIsRunning
    jobIsRunning = true
    jobIsRunningMu.Unlock()

    if start {
        go func() {
            theJob()
            jobIsRunningMu.Lock()
            jobIsRunning = false
            jobIsRunningMu.Unlock()
        }()
    }
}

func main() {
    maybeStartJob()
    maybeStartJob()
    maybeStartJob()
}
The lower-level sync/atomic package can also be used and may have better performance than using a mutex.
var jobIsRunning uint32

func maybeStartJob() {
    if atomic.CompareAndSwapUint32(&jobIsRunning, 0, 1) {
        go func() {
            theJob()
            atomic.StoreUint32(&jobIsRunning, 0)
        }()
    }
}
The sync/atomic package documentation warns that the functions in the package require great care to use correctly and that most applications should use the sync package.
I have the following problem:
I have a function that should only allow one caller at a time to execute.
If someone tries to call the function while it is already busy, the second caller should immediately return with an error.
I tried the following:
1. Use a mutex
This would be pretty easy, but the problem is that you cannot check whether a mutex is locked; you can only block on it. Therefore it does not work.
2. Wait on a channel
var canExec = make(chan bool, 1)

func init() {
    canExec <- true
}

func onlyOne() error {
    select {
    case <-canExec:
    default:
        return errors.New("already busy")
    }
    defer func() {
        fmt.Println("done")
        canExec <- true
    }()

    // do stuff
    return nil
}
What I don't like here:
it looks really messy
it's easy to mistakenly block on the channel / mistakenly write to the channel
3. Mixture of mutex and shared state
var open = true
var myMutex = &sync.Mutex{} // must be initialized; a nil *sync.Mutex would panic

func canExec() bool {
    myMutex.Lock()
    defer myMutex.Unlock()
    if open {
        open = false
        return true
    }
    return false
}

func endExec() {
    myMutex.Lock()
    defer myMutex.Unlock()
    open = true
}

func onlyOne() error {
    if !canExec() {
        return errors.New("busy")
    }
    defer endExec()

    // do stuff
    return nil
}
I don't like this either. Using a shared variable with a mutex is not that nice.
Any other idea?
I'll throw my preference out there - use the atomic package.
var (
    locker    uint32
    errLocked = errors.New("Locked out buddy")
)

func OneAtATime(d time.Duration) error {
    if !atomic.CompareAndSwapUint32(&locker, 0, 1) { // <-------------------
        return errLocked                             // All logic in these |
    }                                                // four lines         |
    defer atomic.StoreUint32(&locker, 0)             // <-------------------

    // logic here, but we will sleep
    time.Sleep(d)
    return nil
}
The idea is pretty simple. Set the initial value to 0 (the zero value of uint32). The first thing the function does is check whether locker is currently 0 and, if so, atomically change it to 1. If that fails, simply return an error (or handle the locked state however you like). If it succeeds, immediately defer resetting the value (now 1) back to 0. You don't have to use defer, obviously, but failing to set the value back to 0 before returning would leave the function unable to run ever again.
After you do those 4 lines of setup, you do whatever you would normally.
https://play.golang.org/p/riryVJM4Qf
You can make things a little nicer if desired by using named values for your states.
const (
    stateUnlocked uint32 = iota
    stateLocked
)

var (
    locker    = stateUnlocked
    errLocked = errors.New("Locked out buddy")
)

func OneAtATime(d time.Duration) error {
    if !atomic.CompareAndSwapUint32(&locker, stateUnlocked, stateLocked) {
        return errLocked
    }
    defer atomic.StoreUint32(&locker, stateUnlocked)

    // logic here, but we will sleep
    time.Sleep(d)
    return nil
}
You can use a semaphore for this (go get golang.org/x/sync/semaphore)
package main

import (
    "errors"
    "fmt"
    "sync"
    "time"

    "golang.org/x/sync/semaphore"
)

var sem = semaphore.NewWeighted(1)

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            if err := onlyOne(); err != nil {
                fmt.Println(err)
            }
        }()
        time.Sleep(time.Second)
    }
    wg.Wait()
}

func onlyOne() error {
    if !sem.TryAcquire(1) {
        return errors.New("busy")
    }
    defer sem.Release(1)

    fmt.Println("working")
    time.Sleep(5 * time.Second)
    return nil
}
You could use the standard channel approach with a select statement.
var (
    ch = make(chan bool)
)

func main() {
    i := 0
    wg := sync.WaitGroup{}
    for i < 100 {
        i++
        wg.Add(1)
        go func() {
            defer wg.Done()
            err := onlyOne()
            if err != nil {
                fmt.Println("Error: ", err)
            } else {
                fmt.Println("Ok")
            }
        }()
        go func() {
            // Note: a send that is never matched by a receive blocks this
            // goroutine until the program exits.
            ch <- true
        }()
    }
    wg.Wait()
}

func onlyOne() error {
    select {
    case <-ch:
        // do stuff
        return nil
    default:
        return errors.New("Busy")
    }
}
Do you want the function to be executed exactly once, or once at a time? In the former case take a look at https://golang.org/pkg/sync/#Once.
If you want once at a time solution:
package main

import (
    "fmt"
    "sync"
    "time"
)

// OnceAtATime protects function from being executed simultaneously.
// Example:
//  func myFunc() { time.Sleep(10 * time.Second) }
//  func main() {
//      once := OnceAtATime{}
//      once.Do(myFunc)
//      once.Do(myFunc) // not executed
//  }
type OnceAtATime struct {
    m        sync.Mutex
    executed bool
}

func (o *OnceAtATime) Do(f func()) {
    o.m.Lock()
    if o.executed {
        o.m.Unlock()
        return
    }
    o.executed = true
    o.m.Unlock()

    f()

    o.m.Lock()
    o.executed = false
    o.m.Unlock()
}

// Proof of concept
func f(m int, done chan<- struct{}) {
    for i := 0; i < 10; i++ {
        fmt.Printf("%d: %d\n", m, i)
        time.Sleep(250 * time.Millisecond)
    }
    close(done)
}

func main() {
    done := make(chan struct{})
    once := OnceAtATime{}
    go once.Do(func() { f(1, done) })
    go once.Do(func() { f(2, done) })
    <-done

    done = make(chan struct{})
    go once.Do(func() { f(3, done) })
    <-done
}
https://play.golang.org/p/nZcEcWAgKp
But the problem is, you cannot check if a mutex is locked. You can only block on it. Therefore it does not work
With Go 1.18 (Q1 2022), you should be able to test whether a mutex is locked... without blocking on it.
See (as mentioned by Go 101) issue 45435 from Tye McQueen:
sync: add Mutex.TryLock
This is followed by CL 319769, with the caveat:
Use of these functions is almost (but not) always a bad idea.
Very rarely they are necessary, and third-party implementations (using a mutex and an atomic word, say) cannot integrate as well with the race detector as implementations in package sync itself.
The objections (since retracted) were:
Locks are for protecting invariants.
If the lock is held by someone else, there is nothing you can say about the invariant.
TryLock encourages imprecise thinking about locks; it encourages making assumptions about the invariants that may or may not be true.
That ends up being its own source of races.
Thinking more about this, there is one important benefit to building TryLock into Mutex, compared to a wrapper:
failed TryLock calls wouldn't create spurious happens-before edges to confuse the race detector.
And:
A channel-based implementation is possible, but performs poorly in comparison.
There's a reason we have sync.Mutex rather than just using channel for locking.
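With Go 1.18 or later this makes the OP's first attempt viable; a minimal sketch using (*sync.Mutex).TryLock (onlyOne mirrors the function from the question):

var mu sync.Mutex

func onlyOne() error {
    if !mu.TryLock() {
        return errors.New("already busy")
    }
    defer mu.Unlock()

    // do stuff
    return nil
}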
I came up with the following generic solution for that; it works for me, but do you see any problem with it?
import (
    "sync"
)

const (
    ONLYONECALLER_LOCK = "onlyonecaller"
    ANOTHER_LOCK       = "another"
)

var locks = map[string]bool{}
var mutex = &sync.Mutex{}

func Lock(lock string) bool {
    mutex.Lock()
    defer mutex.Unlock()
    locked, ok := locks[lock]
    if !ok {
        locks[lock] = true
        return true
    }
    if locked {
        return false
    }
    locks[lock] = true
    return true
}

func IsLocked(lock string) bool {
    mutex.Lock()
    defer mutex.Unlock()
    locked, ok := locks[lock]
    if !ok {
        return false
    }
    return locked
}

func Unlock(lock string) {
    mutex.Lock()
    defer mutex.Unlock()
    locked, ok := locks[lock]
    if !ok {
        return
    }
    if !locked {
        return
    }
    locks[lock] = false
}
see: https://play.golang.org/p/vUUsHcT3L-
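A minimal usage sketch of that API, following the onlyOne pattern from the question (the function body is illustrative):

func onlyOne() error {
    if !Lock(ONLYONECALLER_LOCK) {
        return errors.New("already busy")
    }
    defer Unlock(ONLYONECALLER_LOCK)

    // do stuff
    return nil
}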
How about this package: https://github.com/viney-shih/go-lock . It uses channels and semaphores (golang.org/x/sync/semaphore) to solve your problem.
go-lock implements TryLock, TryLockWithTimeout and TryLockWithContext functions in addition to Lock and Unlock. It gives you the flexibility to control the resources.
Examples:
package main

import (
    "context"
    "fmt"
    "time"

    lock "github.com/viney-shih/go-lock"
)

func main() {
    casMut := lock.NewCASMutex()

    casMut.Lock()
    defer casMut.Unlock()

    // TryLock without blocking
    fmt.Println("Return", casMut.TryLock()) // Return false

    // TryLockWithTimeout without blocking
    fmt.Println("Return", casMut.TryLockWithTimeout(50*time.Millisecond)) // Return false

    // TryLockWithContext without blocking
    ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
    defer cancel()
    fmt.Println("Return", casMut.TryLockWithContext(ctx)) // Return false

    // Output:
    // Return false
    // Return false
    // Return false
}
Let's keep it simple:
package main

import (
    "errors"
    "fmt"
    "time"

    "golang.org/x/sync/semaphore"
)

var sem *semaphore.Weighted

func init() {
    sem = semaphore.NewWeighted(1)
}

func doSomething() error {
    if !sem.TryAcquire(1) {
        return errors.New("I'm busy")
    }
    defer sem.Release(1)

    fmt.Println("I'm doing my work right now, then I'll take a nap")
    time.Sleep(10 * time.Second)
    return nil
}

func main() {
    go func() {
        doSomething()
    }()
    // give the goroutine a chance to run before main exits
    time.Sleep(time.Second)
}
I am trying to learn Go and I have a little piece of code that I do not understand: why does it get stuck after some time?
package main

import "log"

func main() {
    deliveryChann := make(chan bool, 10000)

    go func() {
        for {
            deliveryChann <- true
            log.Println("Sent")
        }
    }()

    go func() {
        for {
            select {
            case <-deliveryChann:
                log.Println("received")
            }
        }
    }()

    go func() {
        for {
            select {
            case <-deliveryChann:
                log.Println("received")
            }
        }
    }()

    go func() {
        for {
            select {
            case <-deliveryChann:
                log.Println("received")
            }
        }
    }()

    for {
    }
}
A basic pointer on how to investigate would suffice.
The main goroutine (running the for {} loop) is hogging the thread, and none of the other goroutines are able to execute because of it. If you change the end of your main function to:
for {
    runtime.Gosched()
}
then the thread will be released and another goroutine made active.
func Gosched()
Gosched yields the processor, allowing other goroutines to run. It does not suspend the current goroutine, so execution resumes automatically.
-- https://golang.org/pkg/runtime/#Gosched
The order of execution of goroutines is undefined, so code that gets stuck like this is legal. You can be more deterministic by doing the communication in main(). For example, place
for {
    deliveryChann <- true
    log.Println("Sent")
}
in main() instead of in a go func().
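A minimal sketch of that rearrangement, with the producer loop in main and the receivers in background goroutines (the three receivers are illustrative):

package main

import "log"

func main() {
    deliveryChann := make(chan bool, 10000)

    // three receivers run in the background
    for i := 0; i < 3; i++ {
        go func() {
            for range deliveryChann {
                log.Println("received")
            }
        }()
    }

    // the producer runs in main, so main never busy-waits
    for {
        deliveryChann <- true
        log.Println("Sent")
    }
}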