The author in the video on atomic ops has the following snippet. While the load of the stop flag is relaxed, the store cannot be. My question is: does the reason the store must not be relaxed have to do with it potentially being visible after the join of the threads, or is there some other reason?
Worker threads:
while (!stop.load(std::memory_order_relaxed)) {
    // do something (that's independent of stop flag)
}
Main thread:
int main() {
    launch_workers();
    stop = true; // <-- not relaxed
    join_threads();
    // do something
}
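Since the rest of this thread is Go, here is a rough Go analogue of the same stop-flag shape using sync/atomic. This is structural only: Go's atomics do not expose relaxed orderings, so it cannot reproduce the distinction the question asks about.

package main

import (
    "sync"
    "sync/atomic"
)

func main() {
    var stop atomic.Bool
    var wg sync.WaitGroup

    wg.Add(1)
    go func() { // worker
        defer wg.Done()
        for !stop.Load() {
            // do something (that's independent of the stop flag)
        }
    }()

    stop.Store(true) // the store the question asks about
    wg.Wait()        // join_threads()
    // do something
}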
I have a loop that reads from a channel (indirectly receiving from a socket) but must also send a ping regularly if there is no other traffic. So I create a time.Timer on every loop iteration like this:
var timer *time.Timer
for {
    timer = time.NewTimer(pingFrequency)
    select {
    case message := <-ch:
        ....
    case <-timer.C:
        ping()
    }
    _ = timer.Stop()
}
The documentation for the Stop method implies you should drain the channel if it returns false, but is that really necessary, since the previous timer is no longer used and will be released by the GC? I.e., even if the old timer.C has an unread value, it should not matter, since nothing is using that channel any longer.
The documentation for t.Stop() says:
To ensure the channel is empty after a call to Stop,
check the return value and drain the channel.
My question is: Why would you need to ensure the channel is empty since nothing is using it and it will be freed by the GC?
Please note that I am aware of the proper way to stop a timer in the general case.
if !timer.Stop() {
    select {
    case <-timer.C:
    default:
    }
}
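Applying that pattern, the loop can also reuse a single timer with Reset instead of allocating one per iteration. A sketch, assuming the same ch, ping, and pingFrequency as above (handle is a hypothetical stand-in for the elided message logic):

timer := time.NewTimer(pingFrequency)
for {
    select {
    case message := <-ch:
        handle(message) // hypothetical handler for the elided logic
        if !timer.Stop() {
            <-timer.C // timer already fired: drain so Reset starts clean
        }
    case <-timer.C:
        ping() // the timer value was just received, so its channel is empty
    }
    timer.Reset(pingFrequency)
}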
This specific question has not been answered on SO or elsewhere (apart from the comment from @JimB below).
I'm writing a socket handler, and I thought of two ways to write individual synchronous event handlers (events of the same type must be received in order):
For loop
loop:
    for {
        var packet EventType
        select {
        case packet = <-eventChannel:
        case <-stop:
            break loop // labeled: a bare break would only exit the select, not the loop
        }
        // Logic
    }
go func Recursion
func GetEventType() {
    var packet EventType
    select {
    case packet = <-eventChannel:
    case <-stop:
        return
    }
    // Logic
    go GetEventType()
}
I know that looping is almost always more efficient than recursion, but I couldn't find much on the performance of go func relative to the alternatives. Here's my initial take on each method:
For loop:
Doesn't start new thread each call
Doesn't use call stack
Good pattern
go func Recursion:
Clean
Doesn't require anonymous function to use defer
Isolated access (data-hiding)
Are there any other reasons to use one over the other? Is method #2 an anti-pattern? Could method #2 cause a major slow-down (call stack?) under high throughput?
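One way to answer the throughput question empirically is a micro-benchmark of the two shapes. A sketch with hypothetical names, placed in a _test.go file (real handlers would do more work per event):

package main

import "testing"

func BenchmarkLoop(b *testing.B) {
    events, done := make(chan int), make(chan struct{})
    go func() {
        for {
            select {
            case <-events:
            case <-done:
                return
            }
        }
    }()
    for i := 0; i < b.N; i++ {
        events <- i
    }
    close(done)
}

func BenchmarkGoFunc(b *testing.B) {
    events, done := make(chan int), make(chan struct{})
    var handle func()
    handle = func() {
        select {
        case <-events:
            go handle() // respawn after each event, as in method #2
        case <-done:
        }
    }
    go handle()
    for i := 0; i < b.N; i++ {
        events <- i
    }
    close(done)
}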
For some reason, when I remove the fmt.Println calls, the code blocks.
I've got no idea why this happens. All I want to do is implement a simple concurrency limiter...
I've never experienced such a weird thing. It's as if fmt flushes the variables or something and makes it work.
Also, when I use a regular function instead of a goroutine, it works too.
Here's the code:
package main

import "fmt"

type ConcurrencyLimit struct {
    active int
    Limit  int
}

func (c *ConcurrencyLimit) Block() {
    for {
        fmt.Println(c.active, c.Limit)
        // If should block
        if c.active == c.Limit {
            continue
        }
        c.active++
        break
    }
}

func (c *ConcurrencyLimit) Decrease() int {
    fmt.Println("decrease")
    if c.active > 0 {
        c.active--
    }
    return c.active
}

func main() {
    c := ConcurrencyLimit{Limit: 1}
    c.Block()
    go func() {
        c.Decrease()
    }()
    c.Block()
}
Clarification: even though I've accepted @kaedys's answer (here), a solution was also given by @Kaveh Shahbazian (here).
You're not giving c.Decrease() a chance to run. c.Block() runs an infinite for loop, but it never blocks in that loop; it just calls continue over and over on every iteration. The main thread spins at 100% usage endlessly.
However, when you add an fmt.Print() call, that makes a syscall, which gives the other goroutine a chance to run.
This post has details on how exactly goroutines yield or are pre-empted. Note, however, that it's slightly out of date: entering a function now has a random chance to yield that thread to another goroutine, to prevent exactly this style of thread flooding.
As others have pointed out, Block() will never yield; a goroutine is not a thread. You could use Gosched() in the runtime package to force a yield -- but note that spinning this way in Block() is a pretty terrible idea.
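To make that concrete, a minimal sketch of a forced yield in the question's Block (illustration only: the busy-wait and the unsynchronized access to active are still problems):

// Assumes the ConcurrencyLimit type from the question and import "runtime".
func (c *ConcurrencyLimit) Block() {
    for c.active == c.Limit { // note: still a data race on active
        runtime.Gosched() // yield so other goroutines (e.g. Decrease) can run
    }
    c.active++
}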
There are much better ways to do concurrency limiting. See http://jmoiron.net/blog/limiting-concurrency-in-go/ for one example.
What you are looking for is called a semaphore. You can apply this pattern using channels:
http://www.golangpatterns.info/concurrency/semaphores
The idea is that you create a buffered channel of a desired length. Then you make callers acquire the resource by putting a value into the channel and reading it back out when they want to free the resource. Doing so creates proper synchronization points in your program so that the Go scheduler runs correctly.
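A minimal sketch of that acquire/release pattern (hypothetical names; the channel capacity is the concurrency limit):

package main

import "fmt"

func main() {
    sem := make(chan struct{}, 1) // buffered channel acting as a semaphore

    sem <- struct{}{} // acquire: blocks once the buffer (the limit) is full
    go func() {
        fmt.Println("working")
        <-sem // release: read the value back out to free the resource
    }()
    sem <- struct{}{} // acquire again: blocks until the goroutine releases

    fmt.Println("done")
}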
What you are doing now is spinning the CPU and blocking the Go scheduler. Whether anything else gets to run depends on how many CPUs you have available, the version of Go, and the value of GOMAXPROCS. Given the wrong combination, there may not be another available thread to service other goroutines while you infinitely spin that particular thread.
While other answers have pretty much covered the reason (you are not giving the goroutine a chance to run), and I'm not sure what you intend to achieve here, note that you are also mutating a value concurrently without proper synchronization. A rewrite of the above code with synchronization taken into account would be:
package main

import (
    "fmt"
    "sync"
)

type ConcurrencyLimit struct {
    active int
    Limit  int
    cond   *sync.Cond
}

func (c *ConcurrencyLimit) Block() {
    c.cond.L.Lock()
    for c.active == c.Limit {
        c.cond.Wait()
    }
    c.active++
    c.cond.L.Unlock()
    c.cond.Signal()
}

func (c *ConcurrencyLimit) Decrease() int {
    defer c.cond.Signal()
    c.cond.L.Lock()
    defer c.cond.L.Unlock()
    fmt.Println("decrease")
    if c.active > 0 {
        c.active--
    }
    return c.active
}

func main() {
    c := ConcurrencyLimit{
        Limit: 1,
        cond:  &sync.Cond{L: &sync.Mutex{}},
    }
    c.Block()
    go func() {
        c.Decrease()
    }()
    c.Block()
    fmt.Println(c.active, c.Limit)
}
sync.Cond is a synchronization utility designed for situations where you want to check whether a condition is met while other workers are concurrently mutating the data behind that condition.
The Lock and Unlock functions work as we expect from a lock. When we are done checking or mutating, we can call Signal to wake one goroutine (or Broadcast to wake more than one), so that goroutine knows it is free to act upon the data (or check a condition).
The only part that may seem unusual is the Wait function. It is actually very simple: it is like calling Unlock and immediately calling Lock again, except that Wait does not reacquire the lock until it is woken by a Signal (or Broadcast) from another goroutine, such as a worker that is mutating the data behind the condition.
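In short, the canonical shape of a Wait loop (condition() is a placeholder for whatever check guards the shared data):

c.L.Lock()
for !condition() {
    c.Wait() // atomically unlocks c.L, sleeps, and re-locks when woken
}
// ... read or mutate the shared data ...
c.L.Unlock()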
I have a map which is used by goroutine A and replaced once in a while by goroutine B. By replacement I mean:

var a map[T]N

// uses the map
func goroutineA() {
    for (...) {
        tempA := a
        // ... uses tempA in some way ...
    }
}

// refreshes the map
func goroutineB() {
    for (...) {
        time.Sleep(10 * time.Second)
        otherTempA := make(map[T]N)
        // ... initializes otherTempA ...
        a = otherTempA
    }
}

Do you see any problem in this pseudocode (in terms of concurrency)?
The code isn't safe: assignments to and reads of a pointer-sized value are not guaranteed to be atomic. This can mean that as one goroutine writes the new value, the other may see a mix of bytes from the old and new values, which will cause your program to die in a nasty way. Another thing that may happen is that, since there is no synchronisation in your code, the compiler may notice that nothing can change a in goroutineA and lift the tempA := a statement out of the loop. If it does, you'll never see new map assignments as the other goroutine makes them.
You can use go test -race to find these sorts of problems automatically.
One solution is to lock all access to the map with a mutex.
You may wish to read the Go Memory Model document, which explains clearly when changes to variables are visible inside goroutines.
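For completeness, another way to make the publish safe without locking every read is to publish the map through sync/atomic's Value. A sketch with concrete stand-in types, since T and N are placeholders; readers must treat each loaded map as read-only:

package main

import (
    "sync/atomic"
    "time"
)

var a atomic.Value // holds a map[string]int

// uses the map
func goroutineA() {
    for {
        tempA := a.Load().(map[string]int) // atomic snapshot of the current map
        _ = tempA                          // ... read-only use of tempA ...
    }
}

// refreshes the map
func goroutineB() {
    for {
        time.Sleep(10 * time.Second)
        otherTempA := make(map[string]int)
        // ... initialize otherTempA ...
        a.Store(otherTempA) // atomically publish the replacement
    }
}

func main() {
    a.Store(make(map[string]int)) // seed before any Load
    go goroutineA()
    go goroutineB()
    select {} // block forever (illustration only)
}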
When unsure about data races, run go run -race file.go. That being said: yes, there will be a race.
The easiest way to fix it is with a sync.RWMutex:
var a map[T]N
var lk sync.RWMutex

// uses the map
func goroutineA() {
    for (...) {
        lk.RLock()
        // actions on a
        lk.RUnlock()
    }
}

// refreshes the map
func goroutineB() {
    for (...) {
        otherTempA := make(map[T]N)
        // ... initializes otherTempA ...
        lk.Lock()
        a = otherTempA
        lk.Unlock()
    }
}