I created this simple app to demonstrate the issue I was having.
package main

import (
    "fmt"
    "sync"
    "unsafe"
)

type loc_t struct {
    count   [9999]int64
    Counter int64
}

func (l loc_t) rampUp(wg *sync.WaitGroup) {
    defer wg.Done()
    l.Counter += 1
}

func main() {
    wg := new(sync.WaitGroup)
    loc := loc_t{}
    fmt.Println(unsafe.Sizeof(loc))
    wg.Add(1)
    go loc.rampUp(wg)
    wg.Wait()
    fmt.Println(loc.Counter)
}
If I run the above I get a fatal error:

fatal error: newproc: function arguments too large for new goroutine

runtime stack:
runtime: unexpected return pc for runtime.systemstack called from 0x0

Now the reason for that is the 2k initial stack size used when go spawns a goroutine. What's interesting is that I'm only passing a pointer to the called function. This issue happened to me in production (with a different struct, obviously): everything had been working for a year, and then all of a sudden it started throwing this error.
Method receivers are passed to method calls just like any other parameter, so if the method has a non-pointer receiver, the whole struct (in your case) will be copied. The easiest solution is to use a pointer receiver, if you can.
If you must use a non-pointer receiver, you can circumvent this by launching not the method call itself but another function as the goroutine, possibly a function literal:
go func() {
    loc.rampUp(wg)
}()
If the loc variable may be modified concurrently (before the launched goroutine would get scheduled and copy it for the rampUp() method), you can create a copy of it manually and use that in the goroutine, like this:
loc2 := loc
wg.Add(1)
go func() {
    loc2.rampUp(wg)
}()
These solutions work because launching the new goroutine does not require a big initial stack, so the initial stack limit does not get in the way. And the stack size is dynamic, so after launch it will grow as needed. Details can be read here: Does Go have an "infinite call stack" equivalent?
The issue with the stack size is, obviously, the size of the struct itself, so as your struct grows organically you may, as I did, cross that 2k limit on goroutine call arguments. The above problem can be fixed by using a pointer to the struct as the receiver in the method declaration.
func (l *loc_t) rampUp(wg *sync.WaitGroup) {
    defer wg.Done()
    l.Counter += 1
}
This makes the receiver a pointer to the struct, so all that goes on the new goroutine's stack is the pointer instead of a copy of the entire struct.
Obviously this can have other implications, including race conditions, if you're making the call from several goroutines at once. But as a solution to an ever-growing struct that suddenly starts causing stack overflows, it works.
Anyway, hope this is helpful to someone else out there.
Related
func (s *server) send(m *message) error {
    go func() {
        s.outgoingMessageChan <- m
    }()
    return nil
}
func main(s *server) {
    for {
        select {
        case <-someChannel:
            // do something
        case msg := <-s.outgoingMessageChan:
            // take message sent from "send" and do something
        }
    }
}
I am reading from s.outgoingMessageChan in another function. Before I wrapped the send in an anonymous function run as a goroutine, a call to this function would usually block: whenever send was called, s.outgoingMessageChan <- m would block until something pulled the value out of the channel. However, after wrapping it like this it doesn't seem to block anymore. I understand that it kind of sends this operation to the background and proceeds as usual, but I'm not able to wrap my head around how this doesn't affect the current function call.
Each time send is called a new goroutine is created, and send returns immediately. (BTW there is no reason to return an error if there can never be an error.) The goroutine (which has its own "thread" of execution) will block if nothing is ready to read from the chan (assuming it's unbuffered). Once the message is read off the chan, the goroutine will continue, but since it does nothing else it will simply end.
I should point out that there is no such thing as an anonymous goroutine. Goroutines have no identifier at all (except for a number that you should only use for debugging purposes). You have an anonymous function which you put the go keyword in front of, causing it to run in a separate goroutine.
For a send function that blocks, as you seem to want, just use:

func (s *server) send(m *message) {
    s.outgoingMessageChan <- m
}
However, I can't see any point in this function (though it would be inlined and just as efficient as not using a function).
I suspect you may be calling send many times before anything is read from the chan. In this case many new goroutines will be created (each time you call send) which will all block. Each time the chan is read from one will unblock delivering its value and that goroutine will terminate. Doing this you are simply creating an inefficient buffering mechanism. Moreover, if send is called for a prolonged period at a faster rate than the values can be read from the chan then you will eventually run out of memory. Better would be to use a buffered chan (and no goroutines) that once it (the chan) became full exerted "back-pressure" on whatever was producing the messages.
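For illustration, here is a minimal sketch of that buffered alternative; the capacity of 64 and the surrounding types are assumptions on my part, not from the original code:

type message struct{ body string }

type server struct {
    outgoingMessageChan chan *message
}

func newServer() *server {
    // The capacity bounds how many unread messages can accumulate.
    return &server{outgoingMessageChan: make(chan *message, 64)}
}

// send returns immediately while the buffer has room, and blocks
// (back-pressure) once the buffer is full, without spawning goroutines.
func (s *server) send(m *message) {
    s.outgoingMessageChan <- m
}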
Another point is that the function name main is used to identify the entry point of a program. Please use another name for your second function above, for example as sketched below. It also seems like it should be a method (using the s *server receiver) rather than a function.
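A hedged sketch of that loop as a method; the name run and the someChannel parameter are my own assumptions:

func (s *server) run(someChannel <-chan struct{}) {
    for {
        select {
        case <-someChannel:
            // do something
        case msg := <-s.outgoingMessageChan:
            _ = msg // take the message sent from send and do something
        }
    }
}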
In Go, a sync.Mutex or chan is used to prevent concurrent access of shared objects. However, in some cases I am just interested in the "latest" value of a variable or field of an object.
Or I'd like to write a value and do not care whether another goroutine overwrites it later or has just overwritten it before.
Update: TLDR; Just don't do this. It is not safe. Read the answers, comments, and linked documents!
Update 2021: The Go memory model is going to be specified more thoroughly and there are three great articles by Russ Cox that will teach you more about the surprising effects of unsynchronized memory access. These articles summarize a lot of the below discussions and learnings.
Here are two variants, good and bad, of an example program, where both seem to produce "correct" output using the current Go runtime:
package main

import (
    "flag"
    "fmt"
    "math/rand"
    "time"
)

var bogus = flag.Bool("bogus", false, "use bogus code")

func pause() {
    time.Sleep(time.Duration(rand.Uint32()%100) * time.Millisecond)
}

func bad() {
    stop := time.After(100 * time.Millisecond)
    var name string
    // start some producers doing concurrent writes (DANGER!)
    for i := 0; i < 10; i++ {
        go func(i int) {
            pause()
            name = fmt.Sprintf("name = %d", i)
        }(i)
    }
    // start consumer that shows the current value every 10ms
    go func() {
        tick := time.Tick(10 * time.Millisecond)
        for {
            select {
            case <-stop:
                return
            case <-tick:
                fmt.Println("read:", name)
            }
        }
    }()
    <-stop
}

func good() {
    stop := time.After(100 * time.Millisecond)
    names := make(chan string, 10)
    // start some producers concurrently writing to a channel (GOOD!)
    for i := 0; i < 10; i++ {
        go func(i int) {
            pause()
            names <- fmt.Sprintf("name = %d", i)
        }(i)
    }
    // start consumer that shows the current value every 10ms
    go func() {
        tick := time.Tick(10 * time.Millisecond)
        var name string
        for {
            select {
            case name = <-names:
            case <-stop:
                return
            case <-tick:
                fmt.Println("read:", name)
            }
        }
    }()
    <-stop
}

func main() {
    flag.Parse()
    if *bogus {
        bad()
    } else {
        good()
    }
}
The expected output is as follows:
...
read: name = 3
read: name = 3
read: name = 5
read: name = 4
...
Any combination of read: and read: name=[0-9] is correct output for this program. Receiving any other string as output would be an error.
When running this program with go run --race bogus.go, the race detector reports nothing.
However, go run --race bogus.go -bogus warns of the concurrent reads and writes.
For map types and when appending to slices I always need a mutex or a similar method of protection to avoid segfaults or unexpected behavior. However, reading and writing literals (atomic values) to variables or field values seems to be safe.
Question: Which Go data types can I safely read and safely write concurrently without a mutex and without producing segfaults and without reading garbage from memory?
Please explain why something is safe or unsafe in Go in your answer.
Update: I rewrote the example to better reflect the original code, where I had the concurrent-writes issue. The important learnings are already in the comments. I will accept an answer that summarizes these learnings with enough detail (esp. on the Go runtime).
However, in some cases I am just interested in the latest value of a variable or field of an object.
Here is the fundamental problem: What does the word "latest" mean?
Suppose that, mathematically speaking, we have a sequence of values Xi, with 0 <= i < N. Then obviously Xj is "later than" Xi if j > i. That's a nice simple definition of "latest" and is probably the one you want.
But when two separate CPUs within a single machine—including two goroutines in a Go program—are working at the same time, time itself loses meaning. We cannot say whether i < j, i == j, or i > j. So there is no correct definition for the word latest.
To solve this kind of problem, modern CPU hardware, and Go as a programming language, gives us certain synchronization primitives. If CPUs A and B execute memory fence instructions, or synchronization instructions, or use whatever other hardware provisions exist, the CPUs (and/or some external hardware) will insert whatever is required for the notion of "time" to regain its meaning. That is, if the CPU uses barrier instructions, we can say that a memory load or store that was executed before the barrier is a "before" and a memory load or store that is executed after the barrier is an "after".
(The actual implementation, in some modern hardware, consists of load and store buffers that can rearrange the order in which loads and stores go to memory. The barrier instruction either synchronizes the buffers, or places an actual barrier in them, so that loads and stores cannot move across the barrier. This particular concrete implementation gives an easy way to think about the problem, but isn't complete: you should think of time as simply not existing outside the hardware-provided synchronization, i.e., all loads from, and stores to, some location are happening simultaneously, rather than in some sequential order, except for these barriers.)
In any case, Go's sync package gives you a simple high level access method to these kinds of barriers. Compiled code that executes before a mutex Lock call really does complete before the lock function returns, and the code that executes after the call really does not start until after the lock function returns.
Go's channels provide the same kinds of before/after time guarantees.
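For example, here is a minimal sketch of that guarantee (the variable names are my own):

var name string
done := make(chan struct{})
go func() {
    name = "ready" // this write happens before the close of done...
    close(done)
}()
<-done            // ...and this receive happens after it,
fmt.Println(name) // so this read is guaranteed to observe "ready".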
Go's sync/atomic package provides much lower level guarantees. In general you should avoid this in favor of the higher level channel or sync.Mutex style guarantees. (Edit to add note: You could use sync/atomic's Pointer operations here, but not with the string type directly, as Go strings are actually implemented as a header containing two separate values: a pointer, and a length. You could solve this with another layer of indirection, by updating a pointer that points to the string object. But before you even consider doing that, you should benchmark the use of the language's preferred methods and verify that these are a problem, because code that works at the sync/atomic level is hard to write and hard to debug.)
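If you do go down that road, a minimal sketch of the extra indirection might look like this, using sync/atomic's typed Pointer (available since Go 1.19); this is an illustration of the idea, not a recommendation:

var name atomic.Pointer[string] // holds a *string atomically

// writer goroutine:
s := fmt.Sprintf("name = %d", 3)
name.Store(&s) // publish a new string by swapping the pointer

// reader goroutine:
if p := name.Load(); p != nil {
    fmt.Println("read:", *p) // reads a complete, consistent string
}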
Which Go data types can I safely read and safely write concurrently without a mutex and without producing segfaults and without reading garbage from memory?
None.
It really is that simple: you cannot safely read and write anything concurrently in Go, under any circumstances whatsoever.
(Btw: your "correct" program is not correct; it is racy, and even if you got rid of the race condition it would not deterministically produce the output.)
Why can't you use channels?
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup // wait group used to close the channel
    buffer := 1           // buffer size of the channel

    // channel to share the data
    cName := make(chan string, buffer)

    for i := 0; i < 10; i++ {
        wg.Add(1) // add to the wait group
        go func(i int) {
            cName <- fmt.Sprintf("name = %d", i)
            wg.Done() // decrease the wait group
        }(i)
    }

    go func() {
        wg.Wait()    // wait for the wait group to reach 0
        close(cName) // close the channel
    }()

    // process all the data
    for n := range cName {
        fmt.Println("read:", n)
    }
}
The above code produces output like the following (the order varies between runs):
read: name = 0
read: name = 5
read: name = 1
read: name = 2
read: name = 3
read: name = 4
read: name = 7
read: name = 6
read: name = 8
read: name = 9
https://play.golang.org/p/R4n9ssPMOeS
Article about channels
I want to compute the inverse element of a prime in modular arithmetic.
In order to speed things up I start a few goroutines which try to find the element in a certain range. When the first one finds the element, it sends it to the main goroutine and at this point I want to terminate the program. So I call close in the main goroutine, but I don't know if the goroutines will finish their execution (I guess not). So a few questions arise:
1) Is this a bad style, should I have something like a WaitGroup?
2) Is there a more idiomatic way to do this computation?
package main

import "fmt"

const (
    Procs = 8
    P     = 1000099
    Base  = 1<<31 - 1
)

func compute(start, end uint64, finished chan struct{}, output chan uint64) {
    for i := start; i < end; i++ {
        select {
        case <-finished:
            return
        default:
        }
        if i*P%Base == 1 {
            output <- i
        }
    }
}

func main() {
    finished := make(chan struct{})
    output := make(chan uint64)
    for i := uint64(0); i < Procs; i++ {
        start := i * (Base / Procs)
        end := (i + 1) * (Base / Procs)
        go compute(start, end, finished, output)
    }
    fmt.Println(<-output)
    close(finished)
}
Is there a more idiomatic way to do this computation?
You don't actually need a loop to compute this.
If you use the GCD function (part of the standard library's math/big package), you get back numbers x and y such that:

x*P + y*Base = 1

This means that x is the answer you want (because x*P = 1 modulo Base):
package main

import (
    "fmt"
    "math/big"
)

const (
    P    = 1000099
    Base = 1<<31 - 1
)

func main() {
    bigP := big.NewInt(P)
    bigBase := big.NewInt(Base)

    // Compute inverse of bigP modulo bigBase
    bigGcd := big.NewInt(0)
    bigX := big.NewInt(0)
    bigGcd.GCD(bigX, nil, bigP, bigBase)

    // x*bigP + y*bigBase = 1
    // => x*bigP = 1 modulo bigBase
    fmt.Println(bigX)
}
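As an aside, math/big also provides ModInverse, which computes this directly and normalizes the result into [0, Base); a minimal sketch:

inv := new(big.Int).ModInverse(big.NewInt(P), big.NewInt(Base))
fmt.Println(inv) // the same inverse, already reduced modulo Base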
Is this a bad style, should I have something like a WaitGroup?
A wait group solves a different problem.
In general, to be a responsible go citizen here and ensure your code runs and tidies up behind itself, you may need to do a combination of:
Signal to the spawned goroutines to stop their calculations when the result of the computation has been found elsewhere.
Ensure a synchronous process waits for the goroutines to stop before returning. This is not mandatory if they properly respond to the signal in #1, but if you don't wait, there will be no guarantee they have terminated before the parent goroutine continues.
In your example program, which performs this task and then quits, there is strictly no need to do either. As this comment indicates, your program's main method terminates upon a satisfactory answer being found, at which point the program will end, any goroutines will be summarily terminated, and the operating system will tidy up any consumed resources. Waiting for goroutines to stop is unnecessary.
However, if you wrapped this code up into a library or it became part of a long running "inverse prime calculation" service, it would be desirable to tidy up the goroutines you spawned to avoid wasting cycles unnecessarily. Additionally, in general, you may have other scenarios in which goroutines store state, hold handles to external resources, or hold handles to internal objects which you risk leaking if not properly tidied away – it is desirable to properly close these.
Communicating the requirement to stop working
There are several approaches to communicate this. I don't claim this is an exhaustive list! (Please do suggest other general-purpose methods in the comments or by proposing edits to the post.)
Using a special channel
Signal the child goroutines by closing a special "shutdown" channel reserved for the purpose. This exploits the channel axiom:
A receive from a closed channel returns the zero value immediately
On receiving from the shutdown channel, the goroutine should immediately arrange to tidy any local state and return from the function. Your earlier question had example code which implemented this; a version of the pattern is:
func myGoRoutine(shutdownChan <-chan struct{}) {
    select {
    case <-shutdownChan:
        // tidy up behaviour goes here
        return
        // You may choose to listen on other channels here to implement
        // the primary behaviour of the goroutine.
    }
}

func main() {
    shutdownChan := make(chan struct{})
    go myGoRoutine(shutdownChan)

    // some time later
    close(shutdownChan)
}
In this instance, the shutdown logic is wasted because the main() method will return immediately after the call to close. This will race with the shutdown of the goroutine, so we should assume it will not properly execute its tidy-up behaviour. Point 2 addresses ways to fix this.
Using a context
The context package provides the option to create a context which can be cancelled. On cancellation, a channel exposed by the context's Done() method will be closed, which signals time to return from the goroutine.
This approach is approximately the same as the previous method, with the exception of neater encapsulation and the availability of a context to pass to downstream calls in your goroutine to cancel nested calls where desired. Example:
func myGoRoutine(ctx context.Context) {
    select {
    case <-ctx.Done():
        // tidy up behaviour goes here
        return
        // Put the real behaviour for the goroutine here.
    }
}

func main() {
    // Get a context (or use an existing one if you are provided with one
    // outside a main method):
    ctx := context.Background()

    // Create a derived context with a cancellation method.
    ctx, cancel := context.WithCancel(ctx)

    go myGoRoutine(ctx)

    // Later, when ready to quit
    cancel()
}
This has the same bug as the other case in that the main method will not wait for the child goroutines to quit before returning.
Waiting (or "join"ing) for child goroutines to stop
The code which closes the shutdown channel or closes the context in the above examples will not wait for child goroutines to stop working before continuing. This may be acceptable in some instances, while in others you may require the guarantee that goroutines have stopped before continuing.
sync.WaitGroup can be used to implement this requirement. The documentation is comprehensive. A wait group is a counter which should be incremented using its Add method on starting a goroutine and decremented using its Done method when a goroutine completes. Code can wait for the counter to return to zero by calling its Wait method, which blocks until the condition is true. All calls to Add must occur before a call to Wait.
Example code:
func main() {
    var wg sync.WaitGroup

    // Increment the WaitGroup with the number of goroutines we're
    // spawning.
    wg.Add(1)

    // It is common to wrap a goroutine in a function which performs the
    // decrement on the WaitGroup once the called function returns, to
    // avoid passing references of this control logic to the downstream
    // consumer.
    go func() {
        // TODO: implement a method to communicate shutdown.
        callMyFunction()
        wg.Done()
    }()

    // Indicate shutdown, e.g. by closing a channel or cancelling a
    // context.

    // Wait for goroutines to stop
    wg.Wait()
}
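Putting the two halves together, here is a hedged sketch that signals shutdown with a context and waits with a WaitGroup; the work loop is a placeholder:

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    var wg sync.WaitGroup

    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            select {
            case <-ctx.Done():
                return // tidy up and stop working
            default:
                // perform one unit of work here
            }
        }
    }()

    cancel()  // signal the goroutine to stop
    wg.Wait() // block until it has actually stopped
}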
Is there a more idiomatic way to do this computation?
This algorithm is certainly parallelizable through the use of goroutines in the manner you have defined. As the work is CPU-bound, limiting the number of goroutines to the number of available CPUs makes sense (in the absence of other work on the machine) to get the most out of the available compute resource.
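A sketch of that sizing, assuming the compute function from the question (note it inherits the same chunking gap as the original code):

procs := uint64(runtime.NumCPU()) // number of available CPUs
chunk := uint64(Base) / procs
for i := uint64(0); i < procs; i++ {
    // integer division may leave the last few values uncovered;
    // see the bug-fix answer mentioned below
    go compute(i*chunk, (i+1)*chunk, finished, output)
}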
See peterSO's answer for a bug fix.
I'm writing a package to control a Canon DSLR using their EDSDK DLL from Go.
This is a personal project for a photo booth to use at our wedding, at my partner's request, which I'll be happy to post on GitHub when complete :).
Looking at examples of the SDK used elsewhere, it isn't thread-safe and uses thread-local resources, so I'll need to make sure I'm calling it from a single thread during usage. While not ideal, it looks like Go provides a runtime.LockOSThread function for doing just that, although this does get called by the core DLL interop code itself, so I'll have to wait and find out if that interferes or not.
I want the rest of the application to be able to call the SDK using a higher level interface without worrying about the threading, so I need a way to pass function call requests to the locked thread/Goroutine to execute there, then pass the results back to the calling function outside of that Goroutine.
So far, I've come up with this working example, which uses very broad function definitions with []interface{} slices and passes them back and forth via channels. This would take a lot of mangling of input/output data on every call to do type assertions back out of the interface{} slices, even if we know what to expect for each function ahead of time, but it looks like it'll work.
Before I invest a lot of time doing it this way for possibly the worst way to do it - does anyone have any better options?
package edsdk

import (
    "fmt"
    "runtime"
)

type CanonSDK struct {
    FChan chan functionCall
}

type functionCall struct {
    Function  func([]interface{}) []interface{}
    Arguments []interface{}
    Return    chan []interface{}
}

func NewCanonSDK() (*CanonSDK, error) {
    c := &CanonSDK{
        FChan: make(chan functionCall),
    }
    go c.BackgroundThread(c.FChan)
    return c, nil
}

func (c *CanonSDK) BackgroundThread(fcalls <-chan functionCall) {
    runtime.LockOSThread()
    for f := range fcalls {
        f.Return <- f.Function(f.Arguments)
    }
    runtime.UnlockOSThread()
}

func (c *CanonSDK) TestCall() {
    ret := make(chan []interface{})
    f := functionCall{
        Function:  c.DoTestCall,
        Arguments: []interface{}{},
        Return:    ret,
    }
    c.FChan <- f
    results := <-ret
    close(ret)
    fmt.Printf("%#v", results)
}

func (c *CanonSDK) DoTestCall([]interface{}) []interface{} {
    return []interface{}{"Test", nil}
}
For similar embedded projects I've played with, I tend to create a single worker goroutine that listens on a channel to perform all the work over the USB device, with any results sent back out on another channel.
Talk to the device only through channels, in a one-way exchange, and listen for responses on the other channel.
Since USB is serial and polling, I had to set up a dedicated channel with another goroutine that just picks items off the channel as they are pushed into it by the looping worker goroutine.
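One common variant of this pattern (an assumption on my part, not the SDK's API) sends closures instead of []interface{} argument lists, which avoids the type assertions entirely:

type worker struct {
    calls chan func()
}

// loop runs every submitted closure on one locked OS thread.
func (w *worker) loop() {
    runtime.LockOSThread()
    defer runtime.UnlockOSThread()
    for f := range w.calls {
        f()
    }
}

// do submits a closure and blocks until the worker has run it; results
// are captured by the closure itself, so no type assertions are needed.
func (w *worker) do(f func()) {
    done := make(chan struct{})
    w.calls <- func() {
        f()
        close(done)
    }
    <-done
}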
Below is an example of how to use a mutex lock in order to safely access data. How would I go about doing the same with CSP (communicating sequential processes) instead of mutex locks and unlocks?
// Element is the node type implied by the original code.
type Element struct {
    value interface{}
    next  *Element
}

type Stack struct {
    top  *Element
    size int
    sync.Mutex
}

func (ss *Stack) Len() int {
    ss.Lock()
    size := ss.size
    ss.Unlock()
    return size
}

func (ss *Stack) Push(value interface{}) {
    ss.Lock()
    ss.top = &Element{value, ss.top}
    ss.size++
    ss.Unlock()
}

func (ss *Stack) Pop() (value interface{}) {
    ss.Lock()
    size := ss.size
    ss.Unlock()
    if size > 0 {
        ss.Lock()
        value, ss.top = ss.top.value, ss.top.next
        ss.size--
        ss.Unlock()
        return
    }
    return nil
}
If you actually were to look at how Go implements channels, you'd essentially see a mutex around an array with some additional thread handling to block execution until the value is passed through. A channel's job is to move data from one spot in memory to another with ease. Therefore where you have locks and unlocks, you'd have things like this example:
func example() {
    resChan := make(chan int)
    go func() {
        resChan <- 1 // blocks until the value is received
    }()
    go func() {
        res := <-resChan
        fmt.Println(res) // use the received value
    }()
}
So in the example, the first goroutine is blocked after sending the value until the second goroutine reads from the channel.
To coordinate the same handoff with the sync package instead, one would use a sync.WaitGroup: add one to the group before setting the value, call Done after the write, and have the second goroutine Wait before reading.
The oddities in your example are that (1) there are no goroutines, so everything happens in a single main goroutine, and (2) the locks are being used more traditionally (as in C-style threads), so channels won't really accomplish anything here. The example you have would be considered an anti-pattern; as the Go proverb says, "Don't communicate by sharing memory, share memory by communicating." A sketch of the channel-based alternative follows.
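For completeness, here is a minimal sketch of that proverb applied to the stack (my own illustration, with hypothetical names): confine the stack to a single owner goroutine and perform Push/Pop by communicating over channels, so no locks are needed.

type stackOp struct {
    push  interface{}      // value to push (when isPop is false)
    isPop bool             // true for a pop request
    reply chan interface{} // receives the popped value, or nil if empty
}

// own is the only goroutine that ever touches top, so no locks are needed.
func own(ops <-chan stackOp) {
    var top *Element
    for op := range ops {
        if op.isPop {
            var v interface{}
            if top != nil {
                v, top = top.value, top.next
            }
            op.reply <- v
        } else {
            top = &Element{op.push, top}
        }
    }
}

// Usage: ops <- stackOp{push: 42} pushes a value;
// r := make(chan interface{}); ops <- stackOp{isPop: true, reply: r}; v := <-r pops one.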