Main thread never yields to goroutine - go

Edit: uncommenting the two runtime lines and changing Tick() to Sleep() makes it work as expected, printing one number every second. I'm leaving the code as is so the answer and comments still make sense.
go version go1.4.2 darwin/amd64
When I run the following, I never see anything printed from go Counter().
package main

import (
    "fmt"
    "time"
    //"runtime"
)

var count int64 = 0

func main() {
    //runtime.GOMAXPROCS(2)
    fmt.Println("main")
    go Counter()
    fmt.Println("after Counter()")
    for {
        count++
    }
}

func Counter() {
    fmt.Println("In Counter()")
    for {
        fmt.Println(count)
        time.Tick(time.Second)
    }
}
> go run a.go
main
after Counter()
If I uncomment the runtime stuff, I will get some strange results like the following all printed at once (not a second apart):
> go run a.go
main
after Counter()
In Counter()
10062
36380
37351
38036
38643
39285
39859
40395
40904
43114
What I expect is that go Counter() will print whatever count is, every second, while it is incremented continuously by the main thread. I'm not so much looking for different code that gets the same thing done; my question is more about what is going on and where I am wrong in my expectations. The second set of results in particular doesn't make sense, being printed all at once. As far as I can tell they should never be printed closer than a second apart, right?

There is nothing in your tight for loop that lets the Go scheduler run.
The scheduler considers other goroutines whenever one blocks, or on some (but not all) function calls.
Since you do neither, the easiest solution is to add a call to runtime.Gosched. E.g.:
for {
    count++
    if count%someNum == 0 {
        runtime.Gosched()
    }
}
Also note that writing and reading the same variable from different goroutines without locking or other synchronization is a data race, and there are no benign data races: your program could do anything when reading a value as it's being written. (Synchronization would also let the Go scheduler run.)
For a simple counter you can avoid locks by using atomic.AddInt64(&count, 1) and atomic.LoadInt64(&count).
Further note (as pointed out by @tomasz and completely missed by me): time.Tick doesn't delay anything; you may have wanted time.Sleep or for range time.Tick(…).
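Putting those pieces together, here is a minimal sketch of what a corrected program could look like (my own illustrative version, not code from the question): it uses sync/atomic for the shared counter, time.Sleep instead of time.Tick, and an occasional runtime.Gosched so the tight loop also cooperates on older, non-preemptive schedulers.

package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
    "time"
)

var count int64

func main() {
    go Counter()
    for i := 1; ; i++ {
        atomic.AddInt64(&count, 1) // synchronized write
        if i%100000 == 0 {
            runtime.Gosched() // yield occasionally; matters under a cooperative scheduler
        }
    }
}

func Counter() {
    for {
        fmt.Println(atomic.LoadInt64(&count)) // synchronized read
        time.Sleep(time.Second)               // actually sleeps, unlike time.Tick
    }
}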

Related

Is there a race condition in the golang implementation of mutex? The m.state is read without an atomic function

In golang, if two goroutines read and write a variable without a mutex or atomic operations, that may cause a data race.
Running go run -race xxx.go will detect the race.
Yet the implementation of Mutex in src/sync/mutex.go uses the following code:
func (m *Mutex) Lock() {
    // Fast path: grab unlocked mutex.
    if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
        if race.Enabled {
            race.Acquire(unsafe.Pointer(m))
        }
        return
    }

    var waitStartTime int64
    starving := false
    awoke := false
    iter := 0
    old := m.state // This line confuses me !!!
    ......
The line old := m.state confuses me, because m.state is read and written by different goroutines.
The following function Test obviously has a race condition. But if I put it in mutex.go, no race condition is detected.
// mutex.go
func Test() {
    a := int32(1)
    go func() {
        atomic.CompareAndSwapInt32(&a, 1, 4)
    }()
    _ = a
}
If I put it in another package, like src/os/exec.go, the race condition is detected.
package main

import (
    "os"
    "sync"
)

func main() {
    sync.Test() // race condition will not be detected
    os.Test()   // race condition will be detected
}
First of all, the golang source always changes, so let's make sure we are looking at the same thing. Take release 1.12 at
https://github.com/golang/go/blob/release-branch.go1.12/src/sync/mutex.go
As you said, the Lock function begins:
func (m *Mutex) Lock() {
    // fast path: it sets the locked bit and returns if the mutex was not locked
    if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
        return
    }
    // reads the state to decide on the other bits
    for {
        // if statements involving CompareAndSwaps on those bits
    }
}
What is this CompareAndSwap doing? It looks atomically at that int32; if it is 0, it swaps it to mutexLocked (which is 1, defined as a const above) and returns true, meaning it swapped it.
Then it promptly returns. That is the fast path: the goroutine acquired the lock and can now start running its protected path.
If it is already 1 (mutexLocked), it doesn't swap it and returns false (it didn't swap it).
Then it reads the state and enters a loop that it does atomic compare and swaps to determine how it should behave.
What are the possible states? Combinations of locked, woken and starving, as you see from the const block.
Now depending on how long the goroutine has been waiting on the waitlist it will get priority on when to check again if the mutex is now free.
But also observe that only Unlock() can set the mutexLocked bit back to 0.
In the Lock() CAS loop the only bits that are set are the starving and woken ones. Yes, you can have multiple readers but only one writer at any time, and that writer is the one who is holding the mutex and executing its protected path until calling Unlock(). Check out this article for more details.
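As a tiny standalone illustration of those CompareAndSwap semantics (my own snippet, not taken from the sync sources):

package main

import (
    "fmt"
    "sync/atomic"
)

const mutexLocked = 1 // mirrors the constant used in sync/mutex.go

func main() {
    var state int32

    // state is 0, so the swap happens: state becomes mutexLocked and CAS reports true.
    fmt.Println(atomic.CompareAndSwapInt32(&state, 0, mutexLocked), state) // true 1

    // state is already 1, so nothing is swapped and CAS reports false.
    fmt.Println(atomic.CompareAndSwapInt32(&state, 0, mutexLocked), state) // false 1
}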
Disassembling the binary output file shows that the Test function generates different code in different packages.
The reason is that the compiler refuses to emit race-detector instrumentation in the sync package.
The code is:
var norace_inst_pkgs = []string{"sync", "sync/atomic"} // https://github.com/golang/go/blob/release-branch.go1.12/src/cmd/compile/internal/gc/racewalk.go
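To see the same racy pattern flagged outside those exempt packages, one can move it into an ordinary main package and run it under the race detector; a rough standalone sketch (the read is made visible via Println here, where the original used _ = a):

package main

import (
    "fmt"
    "sync/atomic"
)

func main() {
    a := int32(1)
    done := make(chan struct{})
    go func() {
        atomic.CompareAndSwapInt32(&a, 1, 4) // atomic write
        close(done)
    }()
    fmt.Println(a) // unsynchronized read; `go run -race main.go` reports a data race
    <-done
}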

The variable in Goroutine not changed as expected

The code is as simple as below:
package main

import (
    "fmt"
    // "sync"
    "time"
)

var count = uint64(0)

//var l sync.Mutex

func add() {
    for {
        // l.Lock()
        // fmt.Println("Start ++")
        count++
        // l.Unlock()
    }
}

func main() {
    go add()
    time.Sleep(1 * time.Second)
    fmt.Println("Count =", count)
}
Cases:
Running the code without changes, you will get "Count = 0". Not expected??
Only uncomment fmt.Println("Start ++"); you will get output with lots of "Start ++" and some value for Count like "Count = 11111". Expected??
Only uncomment var l sync.Mutex, l.Lock() and l.Unlock(), keeping the Println commented; you will get output like "Count = 111111111". Expected.
So... is something wrong with my usage of the shared variable? My questions:
Why does case 1 print 0 for Count?
If case 1 is expected, why does case 2 happen?
Env:
1. go version go1.8 linux/amd64
2. 3.10.0-123.el7.x86_64
3. CentOS Linux release 7.0.1406 (Core)
You have a data race on count. The results are undefined.
package main

import (
    "fmt"
    // "sync"
    "time"
)

var count = uint64(0)

//var l sync.Mutex

func add() {
    for {
        // l.Lock()
        // fmt.Println("Start ++")
        count++
        // l.Unlock()
    }
}

func main() {
    go add()
    time.Sleep(1 * time.Second)
    fmt.Println("Count =", count)
}
Output:
$ go run -race racer.go
==================
WARNING: DATA RACE
Read at 0x0000005995b8 by main goroutine:
runtime.convT2E64()
/home/peter/go/src/runtime/iface.go:255 +0x0
main.main()
/home/peter/gopath/src/so/racer.go:25 +0xb9
Previous write at 0x0000005995b8 by goroutine 6:
main.add()
/home/peter/gopath/src/so/racer.go:17 +0x5c
Goroutine 6 (running) created at:
main.main()
/home/peter/gopath/src/so/racer.go:23 +0x46
==================
Count = 42104672
Found 1 data race(s)
$
References:
Benign data races: what could possibly go wrong?
Without any synchronisation you have no guarantees at all.
There could be multiple reasons why you see 'Count = 0' in your first case:
You have a multi-processor or multi-core system, and one unit (CPU or core) is happily churning away at the for loop while the other sleeps for one second and then prints the line you are seeing. It would be completely legal for the compiler to generate machine code which loads the value into some register and only ever increments that register in the for loop. The memory location can be updated when the function is done with the variable; in the case of an infinite for loop, that is never. By omitting any synchronisation you, the programmer, told the compiler that there is no contention on that variable.
In your mutex version the synchronisation primitives tell the compiler that some other thread might take the mutex, so it needs to write the value back from the register to the memory location before unlocking the mutex. At least, one can think about it like that. What really happens is that the unlock and a later lock operation introduce a happens-before relation between the two goroutines, and this guarantees that we will see all writes from before the unlock in one goroutine after the lock operation in the other goroutine, as described in the Go memory model section on locks, however this is implemented.
The Go runtime scheduler doesn't run the for loop at all until the sleep in the main goroutine is done. (This isn't likely, but, if I recall correctly, there is no guarantee that it won't happen.) Sadly there is not much official documentation about how the scheduler works in Go, but it can only schedule a goroutine at certain points; it is not really preemptive. The consequences of this are severe. For example, in some versions of Go you could make your program run forever by firing up as many goroutines as you had cores, each doing an endless for loop that only increments a variable. No core would be left for the main goroutine (which could end the program), and the scheduler can't preempt a goroutine stuck in an endless for loop doing only simple things like incrementing a variable. I don't know if that has changed by now.
As others pointed out, that is a data race; google it and read up on it.
The difference between your versions where only the fmt.Println("Start ++") line is commented/uncommented is likely just a matter of timing, as printing to a terminal can be pretty slow.
For a correct program, you additionally need to lock the mutex after the sleep in your main function and before the fmt.Println, and unlock it afterwards. But there can't be a deterministic expectation about the output, as the result will vary with machine/OS/...
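A minimal sketch of that fix (my own version of the code above, locking around both the increment and the final read; the printed value will still vary from run to run):

package main

import (
    "fmt"
    "sync"
    "time"
)

var (
    count uint64
    l     sync.Mutex
)

func add() {
    for {
        l.Lock()
        count++
        l.Unlock()
    }
}

func main() {
    go add()
    time.Sleep(1 * time.Second)

    l.Lock() // take the lock before reading count
    fmt.Println("Count =", count)
    l.Unlock()
}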

Goroutine only works when fmt.Println is executed

For some reason, when I remove the fmt.Printlns, the code blocks.
I've got no idea why it happens. All I want to do is to implement a simple concurrency limiter...
I've never experienced such a weird thing. It's as if fmt flushes the variables or something and makes it work.
Also, when I use a regular function instead of a goroutine, it works too.
Here's the code:
package main

import "fmt"

type ConcurrencyLimit struct {
    active int
    Limit  int
}

func (c *ConcurrencyLimit) Block() {
    for {
        fmt.Println(c.active, c.Limit)
        // If should block
        if c.active == c.Limit {
            continue
        }
        c.active++
        break
    }
}

func (c *ConcurrencyLimit) Decrease() int {
    fmt.Println("decrease")
    if c.active > 0 {
        c.active--
    }
    return c.active
}

func main() {
    c := ConcurrencyLimit{Limit: 1}
    c.Block()
    go func() {
        c.Decrease()
    }()
    c.Block()
}
Clarification: Even though I've accepted @kaedys's answer (here), a solution was also provided by @Kaveh Shahbazian (here).
You're not giving c.Decrease() a chance to run. c.Block() runs an infinite for loop, but it never blocks in that loop; it just calls continue over and over on every iteration. The main thread spins at 100% usage endlessly.
However, when you add an fmt.Print() call, that makes a syscall, which allows the other goroutine to run.
This post has details on how exactly goroutines yield or are pre-empted. Note, however, that it's slightly out of date, as entering a function now has a random chance to yield that thread to another goroutine, to prevent similar style flooding of threads.
As others have pointed out, Block() will never yield; a goroutine is not a thread. You could use Gosched() in the runtime package to force a yield -- but note that spinning this way in Block() is a pretty terrible idea.
There are much better ways to do concurrency limiting. See http://jmoiron.net/blog/limiting-concurrency-in-go/ for one example
What you are looking for is called a semaphore. You can apply this pattern using channels
http://www.golangpatterns.info/concurrency/semaphores
The idea is that you create a buffered channel of a desired length. Then you make callers acquire the resource by putting a value into the channel and reading it back out when they want to free the resource. Doing so creates proper synchronization points in your program so that the Go scheduler runs correctly.
What you are doing now is spinning the CPU and blocking the Go scheduler. It depends on how many CPUs you have available, the version of Go, and the value of GOMAXPROCS. Given the right combination, there may not be another available thread to service other goroutines while you infinitely spin that particular thread.
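A rough sketch of that buffered-channel semaphore pattern (illustrative names, not the poster's code; the channel capacity is the concurrency limit):

package main

import (
    "fmt"
    "sync"
)

func main() {
    const limit = 1
    sem := make(chan struct{}, limit) // buffered channel used as a semaphore

    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            sem <- struct{}{} // acquire: blocks while `limit` workers hold the semaphore
            fmt.Println("worker", id, "running")
            <-sem // release
        }(i)
    }
    wg.Wait()
}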
While other answers pretty much covered the reason (not giving the goroutine a chance to run) - and I'm not sure what you intend to achieve here - you are also mutating a value concurrently without proper synchronization. A rewrite of the above code with synchronization in mind would be:
package main

import (
    "fmt"
    "sync"
)

type ConcurrencyLimit struct {
    active int
    Limit  int
    cond   *sync.Cond
}

func (c *ConcurrencyLimit) Block() {
    c.cond.L.Lock()
    for c.active == c.Limit {
        c.cond.Wait()
    }
    c.active++
    c.cond.L.Unlock()
    c.cond.Signal()
}

func (c *ConcurrencyLimit) Decrease() int {
    defer c.cond.Signal()
    c.cond.L.Lock()
    defer c.cond.L.Unlock()
    fmt.Println("decrease")
    if c.active > 0 {
        c.active--
    }
    return c.active
}

func main() {
    c := ConcurrencyLimit{
        Limit: 1,
        cond:  &sync.Cond{L: &sync.Mutex{}},
    }
    c.Block()
    go func() {
        c.Decrease()
    }()
    c.Block()
    fmt.Println(c.active, c.Limit)
}
sync.Cond is a synchronization utility designed for times that you want to check if a condition is met, concurrently; while other workers are mutating the data of the condition.
The Lock and Unlock functions work as we expect from a lock. When we are done with checking or mutating, we can call Signal to wake one goroutine (or Broadcast to wake more than one), so that goroutine knows it is free to act upon the data (or check a condition).
The only part that may seem unusual is the Wait function. It is actually very simple: it is like calling Unlock and then immediately Lock again, except that Wait does not try to re-acquire the lock until it is woken by a Signal (or Broadcast) from another goroutine, such as the workers that are mutating the data (of the condition).

Golang: goroutine infinite-loop

When the fmt.Print() line is removed from the code below, the code runs forever. Why?
package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

func main() {
    var ops uint64 = 0
    for i := 0; i < 50; i++ {
        go func() {
            for {
                atomic.AddUint64(&ops, 1)
                fmt.Print()
            }
        }()
    }
    time.Sleep(time.Second)
    opsFinal := atomic.LoadUint64(&ops)
    fmt.Println("ops:", opsFinal)
}
The Go By Example article includes:
// Allow other goroutines to proceed.
runtime.Gosched()
The fmt.Print() plays a similar role, and allows the main() to have a chance to proceed.
An export GOMAXPROCS=2 might help the program finish even in the case of an infinite loop, as explained in "golang: goroute with select doesn't stop unless I added a fmt.Print()".
fmt.Print() explicitly passes control to some syscall stuff
Yes, go1.2+ has pre-emption in the scheduler
In prior releases, a goroutine that was looping forever could starve out other goroutines on the same thread, a serious problem when GOMAXPROCS provided only one user thread.
In Go 1.2, this is partially addressed: The scheduler is invoked occasionally upon entry to a function. This means that any loop that includes a (non-inlined) function call can be pre-empted, allowing other goroutines to run on the same thread.
Notice the emphasis (which I added): it is possible that in your example the atomic.AddUint64(&ops, 1) call in the for loop is inlined. No pre-emption there.
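If you want to check what the compiler actually inlined, its optimization decisions can be printed (an illustrative command; the exact output varies by Go version):

$ go build -gcflags=-m main.go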
Update 2017: Go 1.10 will get rid of GOMAXPROCS.
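Applying the runtime.Gosched() suggestion from the Go by Example snippet above to this program would look roughly like the following sketch (my own variant of the question's code):

package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
    "time"
)

func main() {
    var ops uint64
    for i := 0; i < 50; i++ {
        go func() {
            for {
                atomic.AddUint64(&ops, 1)
                runtime.Gosched() // allow other goroutines (including main) to proceed
            }
        }()
    }
    time.Sleep(time.Second)
    fmt.Println("ops:", atomic.LoadUint64(&ops))
}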

Issue with runtime.LockOSThread() and runtime.UnlockOSThread

I have code like this:
Routine 1 {
    runtime.LockOSThread()
    print something
    send int to routine 2
    runtime.UnlockOSThread
}

Routine 2 {
    runtime.LockOSThread()
    print something
    send int to routine 1
    runtime.UnlockOSThread
}

main {
    go Routine1
    go Routine2
}
I use the runtime lock/unlock because I don't want the printing of Routine 1 to mix with that of Routine 2. However, after executing the above code, the output is the same as without lock/unlock (meaning the printed output is mixed). Can anybody help me understand why this is happening and how to force the behaviour I want?
NB: I give "print something" as an example; in reality there are lots of printing and sending events.
If you want to serialize "print something", i.e. each "print something" should be performed atomically, then just serialize it.
You can surround "print something" with a mutex. That'll work unless the code deadlocks because of it - and it easily can in a non-trivial program.
The easy way in Go to serialize something is to do it with a channel. Collect in a (go)routine everything which should be printed together. When the print unit is complete, send it through a channel to some printing "agent" as a "print job" unit. That agent simply receives its "tasks" and prints each one atomically. One gets that atomicity for free, and as an important bonus the code can no longer deadlock easily in the simple case where there are only non-interdependent "print unit"-generating goroutines.
I mean something like:
func printer(tasks chan string) {
    for s := range tasks {
        fmt.Print(s)
    }
}

func someAgentX(tasks chan string) {
    var printUnit string
    //...
    tasks <- printUnit
    //...
}

func main() {
    //...
    tasks := make(chan string, size)
    go printer(tasks)
    go someAgent1(tasks)
    //...
    go someAgentN(tasks)
    //...
    <-allDone
    close(tasks)
}
What runtime.LockOSThread does is prevent any other goroutine from running on the same thread. It forces the runtime to create a new thread and run Routine2 there. They are still running concurrently but on different threads.
You need to use sync.Mutex or some channel magic instead.
You rarely need to use runtime.LockOSThread, but it can be useful for forcing some higher-priority goroutine to run on a thread of its own.
package main

import (
    "fmt"
    "sync"
    "time"
)

var m sync.Mutex

func printing(s string) {
    m.Lock() // Other goroutines will stop here until m is unlocked
    fmt.Println(s)
    m.Unlock() // Now another goroutine waiting at "m.Lock()" can continue running
}

func main() {
    for i := 0; i < 10; i++ {
        go printing(fmt.Sprintf("Goroutine #%d", i))
    }
    <-time.After(3e9)
}
I think this is because runtime.LockOSThread()/runtime.UnlockOSThread() does not work that way all the time. It totally depends on the CPU, the execution environment, etc. It can't be forced any other way.
