Go algorithm for looping through servers in a predefined ratio

I am trying to make an algorithm that can loop through things (backend servers in my case) by a predefined ratio.
For example, I have two backend servers:
type server struct {
	addr    string
	ratio   float64
	counter int64
}

// s2 is a beast and may handle 3 times the requests of s1 *edit
s1 := &server{addr: ":3000", ratio: 0.25}
s2 := &server{addr: ":3001", ratio: 0.75}
func nextServer(i int64) {
	server := next() // simple goroutine-backed helper that alternates between s1 and s2
	N := float64(server.counter) / float64(i) // fraction of all requests this server has handled
	if N > server.ratio {
		// over its ratio: repeat with the next server
		nextServer(i)
		return
	}
	server.counter += 1
}

for i := int64(1); i <= 1000; i++ {
	nextServer(i)
}
s1 ends with a counter of 250 (requests handled).
s2 is huge, so it ends with a counter of 750 (requests handled).
This is a very simple version of what I have, but when i gets to around 10,000 it keeps looping inside nextServer() because N is always > server.ratio. As long as i stays around 5,000 it works perfectly, but I think there are better algorithms for looping in ratios.
How can I make this simple and solid?

Something like this?
package main

import (
	"fmt"
	"math/rand"
)

type server struct {
	addr  string
	ratio float64
}

var servers []server

func nextServer() *server {
	rndFloat := rand.Float64() // pick a random number in [0.0, 1.0)
	ratioSum := 0.0
	for _, srv := range servers {
		ratioSum += srv.ratio // sum ratios of all previous servers in the list
		if ratioSum >= rndFloat { // if the running sum rises above the random number
			return &srv // return that server
		}
	}
	return nil // should not get here if the ratios sum to 1.0
}

func main() {
	servers = []server{server{"0.25", 0.25}, server{"0.50", 0.50},
		server{"0.10", 0.10}, server{"0.15", 0.15}}
	counts := make(map[string]int, len(servers))
	for i := 0; i < 100; i++ {
		srv := nextServer()
		counts[srv.addr] += 1
	}
	fmt.Println(counts)
}
Yields for example:
map[0.50:56 0.15:15 0.25:24 0.10:5]
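If the ratios must hold exactly rather than only statistically, a deterministic alternative is smooth weighted round-robin (the interleaving scheme nginx uses). A minimal sketch, with integer weights standing in for the float ratios; field and function names here are my own:

package main

import "fmt"

type server struct {
	addr          string
	weight        int // e.g. 1 and 3 for a 25/75 split
	currentWeight int
}

// nextServer implements smooth weighted round-robin: on every pick each
// server's currentWeight grows by its weight, the server with the highest
// currentWeight wins and is penalized by the total weight, which
// interleaves picks in exact proportion to the weights.
func nextServer(servers []*server) *server {
	total := 0
	var best *server
	for _, s := range servers {
		s.currentWeight += s.weight
		total += s.weight
		if best == nil || s.currentWeight > best.currentWeight {
			best = s
		}
	}
	best.currentWeight -= total
	return best
}

func main() {
	servers := []*server{
		{addr: ":3000", weight: 1},
		{addr: ":3001", weight: 3},
	}
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[nextServer(servers).addr]++
	}
	fmt.Println(counts) // map[:3000:250 :3001:750]
}

Unlike the random-pick version, this never overshoots the ratio, and it needs no retry loop.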

Related

Parallel execution of prime finding algorithm slows runtime

So I implemented the following prime finding algorithm in Go:
1. primes = []
2. Assume all numbers are primes (vacuously true).
3. check = 2
4. If check is still assumed to be prime, append it to primes.
5. Multiply check by each prime less than or equal to its minimum factor and eliminate the results from the assumed primes.
6. Increment check by 1 and repeat steps 4 through 6 until check > limit.
Here is my serial implementation:
package main

import (
	"fmt"
	"time"
)

type numWithMinFactor struct {
	number    int
	minfactor int
}

func pow(base int, power int) int {
	result := 1
	for i := 0; i < power; i++ {
		result *= base
	}
	return result
}

func process(check numWithMinFactor, primes []int, top int, minFactors []numWithMinFactor) {
	var n int
	for i := 0; primes[i] <= check.minfactor; i++ {
		n = check.number * primes[i]
		if n > top {
			break
		}
		minFactors[n] = numWithMinFactor{n, primes[i]}
		if i+1 == len(primes) {
			break
		}
	}
}

func findPrimes(top int) []int {
	primes := []int{}
	minFactors := make([]numWithMinFactor, top+2)
	check := 2
	for power := 1; check <= top; power++ {
		if minFactors[check].number == 0 {
			primes = append(primes, check)
			minFactors[check] = numWithMinFactor{check, check}
		}
		process(minFactors[check], primes, top, minFactors)
		check++
	}
	return primes
}

func main() {
	fmt.Println("Welcome to prime finder!")
	start := time.Now()
	fmt.Println(findPrimes(1000000))
	elapsed := time.Since(start)
	fmt.Printf("Finding primes took %s\n", elapsed)
}
This runs great, producing all the primes < 1,000,000 in about 63ms (mostly printing) and the primes < 10,000,000 in 600ms on my PC. Now, I figure that no number check with 2^n < check <= 2^(n+1) has a minimum factor > 2^n, so once I have the primes up to 2^n I can do all the multiplications and eliminations for every check in that range in parallel. My parallel implementation is as follows:
package main
import(
"fmt"
"time"
"sync"
)
type numWithMinFactor struct {
number int
minfactor int
}
func pow(base int, power int) int{
result := 1
for i:=0;i<power;i++{
result*=base
}
return result
}
func process(check numWithMinFactor,primes []int,top int,minFactors []numWithMinFactor, wg *sync.WaitGroup){
defer wg.Done()
var n int
for i:=0;primes[i]<=check.minfactor;i++{
n = check.number*primes[i]
if n>top{
break;
}
minFactors[n] = numWithMinFactor{n,primes[i]}
if i+1 == len(primes){
break;
}
}
}
func findPrimes(top int) []int{
primes := []int{}
minFactors := make([]numWithMinFactor,top+2)
check := 2
var wg sync.WaitGroup
for power:=1;check <= top;power++{
for check <= pow(2,power){
if minFactors[check].number == 0{
primes = append(primes,check)
minFactors[check] = numWithMinFactor{check,check}
}
wg.Add(1)
go process(minFactors[check],primes,top,minFactors,&wg)
check++
if check>top{
break;
}
}
wg.Wait()
}
return primes
}
func main(){
fmt.Println("Welcome to prime finder!")
start := time.Now()
fmt.Println(findPrimes(1000000))
elapsed := time.Since(start)
fmt.Println("Finding primes took %s", elapsed)
}
Unfortunately, this implementation is slower, not faster: it takes 600ms up to 1,000,000 and 6 seconds up to 10 million. My intuition tells me that there is potential for parallelism to improve performance, but I clearly haven't been able to achieve that, and I would greatly appreciate any input on how to improve the runtime here, or more specifically any insight as to why the parallel solution is slower.
Additionally, the parallel solution consumes more memory relative to the serial solution, but that is to be expected; the serial solution can grind up to 1,000,000,000 in about 22 seconds, where the parallel solution runs out of memory on my system (32GB RAM) going for the same target. But I'm asking about runtime here, not memory use. I could, for example, use the zero-value state of the minFactors array rather than a separate isPrime []bool, but I think it is more readable as is.
I've tried passing a pointer for primes []int, but that didn't seem to make a difference. Using a channel instead of passing the minFactors array to the process function resulted in big memory use and much (10x-ish) slower performance. I've rewritten this algorithm a couple of times to see if I could iron anything out, but no luck. Any insights or suggestions would be much appreciated, because I think parallelism should make this faster, not 10x slower!
Per #Volker's suggestion I limited the number of in-flight goroutines to something less than my PC's available logical processors with the following revision; however, I am still getting runtimes that are 10x slower than the serial implementation.
package main
import(
"fmt"
"time"
"sync"
)
type numWithMinFactor struct {
number int
minfactor int
}
func pow(base int, power int) int{
result := 1
for i:=0;i<power;i++{
result*=base
}
return result
}
func process(check numWithMinFactor,primes []int,top int,minFactors []numWithMinFactor, wg *sync.WaitGroup){
defer wg.Done()
var n int
for i:=0;primes[i]<=check.minfactor;i++{
n = check.number*primes[i]
if n>top{
break;
}
minFactors[n] = numWithMinFactor{n,primes[i]}
if i+1 == len(primes){
break;
}
}
}
func findPrimes(top int) []int{
primes := []int{}
minFactors := make([]numWithMinFactor,top+2)
check := 2
nlogicalProcessors := 20
var wg sync.WaitGroup
var twoPow int
for power:=1;check <= top;power++{
twoPow = pow(2,power)
for check <= twoPow{
for nLogicalProcessorsInUse := 0 ; nLogicalProcessorsInUse < nlogicalProcessors; nLogicalProcessorsInUse++{
if minFactors[check].number == 0{
primes = append(primes,check)
minFactors[check] = numWithMinFactor{check,check}
}
wg.Add(1)
go process(minFactors[check],primes,top,minFactors,&wg)
check++
if check>top{
break;
}
if check>twoPow{
break;
}
}
wg.Wait()
if check>top{
break;
}
}
}
return primes
}
func main(){
fmt.Println("Welcome to prime finder!")
start := time.Now()
fmt.Println(findPrimes(10000000))
elapsed := time.Since(start)
fmt.Println("Finding primes took %s", elapsed)
}
tl;dr: Why is my parallel implementation slower than the serial implementation, and how do I make it faster?
Per #mh-cbon's suggestion I made larger jobs for parallel processing, resulting in the following code.
package main
import(
"fmt"
"time"
"sync"
)
func pow(base int, power int) int{
result := 1
for i:=0;i<power;i++{
result*=base
}
return result
}
func process(check int,primes []int,top int,minFactors []int){
var n int
for i:=0;primes[i]<=minFactors[check];i++{
n = check*primes[i]
if n>top{
break;
}
minFactors[n] = primes[i]
if i+1 == len(primes){
break;
}
}
}
func processRange(start int,end int,primes []int,top int,minFactors []int, wg *sync.WaitGroup){
defer wg.Done()
for start <= end{
process(start,primes,top,minFactors)
start++
}
}
func findPrimes(top int) []int{
primes := []int{}
minFactors := make([]int,top+2)
check := 2
nlogicalProcessors := 10
var wg sync.WaitGroup
var twoPow int
var start int
var end int
var stepSize int
var stepsTaken int
for power:=1;check <= top;power++{
twoPow = pow(2,power)
stepSize = (twoPow-start)/nlogicalProcessors
stepsTaken = 0
stepSize = (twoPow/2)/nlogicalProcessors
for check <= twoPow{
start = check
end = check+stepSize
if stepSize == 0{
end = twoPow
}
if stepsTaken == nlogicalProcessors-1{
end = twoPow
}
if end>top {
end = top
}
for check<=end {
if minFactors[check] == 0{
primes = append(primes,check)
minFactors[check] = check
}
check++
}
wg.Add(1)
go processRange(start,end,primes,top,minFactors,&wg)
if check>top{
break;
}
if check>twoPow{
break;
}
stepsTaken++
}
wg.Wait()
if check>top{
break;
}
}
return primes
}
func main(){
fmt.Println("Welcome to prime finder!")
start := time.Now()
fmt.Println(findPrimes(1000000))
elapsed := time.Since(start)
fmt.Println("Finding primes took %s", elapsed)
}
This runs at a similar speed to the serial implementation.
So I did eventually get a parallel version of the code to run slightly faster than the serial version, following the suggestions from #mh-cbon (see above). However, this implementation did not yield vast improvements relative to the serial implementation (50ms to 10 million, compared to 75ms serially). Considering that allocating and writing an []int of 0:10000000 takes 25ms, I'm not disappointed by these results. As #Volker stated, "such stuff often is not limited by CPU but by memory bandwidth," which I believe is the case here.
I would still love to see any additional improvements, but I am somewhat satisfied with what I've gained here:
Serial code running up to 2 billion: 19.4 seconds
Parallel code running up to 2 billion: 11.1 seconds
Initializing []int{0:2Billion}: 4.5 seconds
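For anyone comparing approaches, here is a sketch of a different way to split the work: a classic segmented Sieve of Eratosthenes rather than the min-factor elimination above. The base primes up to sqrt(top) are computed serially first, and each worker then owns a disjoint segment of the composite array, so no goroutine ever reads a slice that is still being appended to:

package main

import (
	"fmt"
	"math"
	"runtime"
	"sync"
	"time"
)

// parallelSieve finds the base primes up to sqrt(top) serially, then
// hands each worker a disjoint segment of (sqrt(top), top] to mark.
// Disjoint segments mean the writes to composite never overlap.
func parallelSieve(top int) []int {
	limit := int(math.Sqrt(float64(top))) + 1
	composite := make([]bool, top+1)
	var base []int
	for p := 2; p <= limit; p++ {
		if !composite[p] {
			base = append(base, p)
			for m := p * p; m <= limit; m += p {
				composite[m] = true
			}
		}
	}
	workers := runtime.NumCPU()
	segSize := (top-limit)/workers + 1
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := limit + 1 + w*segSize
		hi := lo + segSize - 1
		if hi > top {
			hi = top
		}
		if lo > hi {
			continue
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for _, p := range base {
				// First multiple of p inside [lo, hi]; since lo > p,
				// every number marked here is a proper composite.
				start := ((lo + p - 1) / p) * p
				for m := start; m <= hi; m += p {
					composite[m] = true
				}
			}
		}(lo, hi)
	}
	wg.Wait()
	primes := []int{}
	for p := 2; p <= top; p++ {
		if !composite[p] {
			primes = append(primes, p)
		}
	}
	return primes
}

func main() {
	start := time.Now()
	primes := parallelSieve(10000000)
	fmt.Printf("found %d primes in %s\n", len(primes), time.Since(start))
}

Because each worker writes only to its own index range and the base slice is read-only once built, there is no contention and no need for per-number goroutines.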

Flatbuffer serialization performance is slow compared to protobuf

With the following IDL file my intention is to measure the serialization speed of FlatBuffers. I am using Go for my analysis.
namespace MyFlat;

struct Vertices {
	x : double;
	y : double;
}

table Polygon {
	polygons : [Vertices];
}

table Layer {
	polygons : [Polygon];
}

root_type Layer;
Here is the code I have written for calculation
package main
import (
"MyFlat"
"fmt"
"io/ioutil"
"log"
"strconv"
"time"
flatbuffers "github.com/google/flatbuffers/go"
)
func calculation(size int, vertices int) {
b := flatbuffers.NewBuilder(0)
var polyoffset []flatbuffers.UOffsetT
rawSize := ((16 * vertices) * size) / 1024
var vec1 flatbuffers.UOffsetT
var StartedAtMarshal time.Time
var EndedAtMarshal time.Time
StartedAtMarshal = time.Now()
for k := 0; k < size; k++ {
MyFlat.PolygonStartPolygonsVector(b, vertices)
for i := 0; i < vertices; i++ {
MyFlat.CreateVertices(b, 2.0, 2.4)
}
vec1 = b.EndVector(vertices)
MyFlat.PolygonStart(b)
MyFlat.PolygonAddPolygons(b, vec1)
polyoffset = append(polyoffset, MyFlat.PolygonEnd(b))
}
MyFlat.LayerStartPolygonsVector(b, size)
for _, offset := range polyoffset {
b.PrependUOffsetT(offset)
}
vec := b.EndVector(size)
MyFlat.LayerStart(b)
MyFlat.LayerAddPolygons(b, vec)
finalOffset := MyFlat.LayerEnd(b)
b.Finish(finalOffset)
EndedAtMarshal = time.Now()
SeElaprseTime := EndedAtMarshal.Sub(StartedAtMarshal).String()
mybyte := b.FinishedBytes()
file := "/tmp/myflat_" + strconv.Itoa(size) + ".txt"
if err := ioutil.WriteFile(file, mybyte, 0644); err != nil {
log.Fatalln("Failed to write address book:", err)
}
StartedAt := time.Now()
layer := MyFlat.GetRootAsLayer(mybyte, 0)
size = layer.PolygonsLength()
obj := &MyFlat.Polygon{}
layer.Polygons(obj, 1)
for i := 0; i < obj.PolygonsLength(); i++ {
objVertices := &MyFlat.Vertices{}
obj.Polygons(objVertices, i)
fmt.Println(objVertices.X(), objVertices.Y())
}
EndedAt := time.Now()
DeElapseTime := EndedAt.Sub(StartedAt).String()
fmt.Println(size, ",", vertices, ", ", SeElaprseTime, ",", DeElapseTime, ",", (len(mybyte) / 1024), ",", rawSize)
}
func main() {
data := []int{500000, 1000000, 1500000, 3000000, 8000000}
for _, size := range data {
//calculation(size, 5)
//calculation(size, 10)
calculation(size, 20)
}
}
The problem is that serialization is quite slow compared to protobuf with a similar IDL.
For 3M polygons, serialization takes almost 4.1167037s, where protobuf takes about half that. Deserialization time for FlatBuffers is very low (microseconds); in protobuf it is quite high. But even adding both together, FlatBuffers performance is lower.
Do you see any optimized way to serialize this? FlatBuffers has a method createBinaryVector for byte vectors, but there is no direct way to serialize a vector of polygons from an existing user-defined-type vector.
I am adding protobuf code also
syntax = 'proto3';
package myproto;
message Polygon {
repeated double v_x = 1 ;
repeated double v_y = 2 ;
}
message CADData {
repeated Polygon polygon = 1;
string layer_name = 2;
}
Go Code with protobuf
package main
import (
"fmt"
"io/ioutil"
"log"
"math/rand"
"myproto"
"strconv"
"time"
"github.com/golang/protobuf/proto"
)
func calculation(size int, vertices int) {
var comp []*myproto.Polygon
var vx []float64
var vy []float64
for i := 0; i < vertices; i++ {
r := 0 + rand.Float64()*(10-0)
vx = append(vx, r)
vy = append(vy, r/2)
}
rawSize := ((16 * vertices) * size) / 1024
StartedAtMarshal := time.Now()
for i := 0; i < size; i++ {
comp = append(comp, &myproto.Polygon{
VX: vx,
VY: vy,
})
}
pfs := &myproto.CADData{
LayerName: "Layer",
Polygon: comp,
}
data, err := proto.Marshal(pfs)
if err != nil {
log.Fatal("marshaling error: ", err)
}
EndedAtMarshal := time.Now()
SeElaprseTime := EndedAtMarshal.Sub(StartedAtMarshal).String()
file := "/tmp/myproto_" + strconv.Itoa(size) + ".txt"
if err := ioutil.WriteFile(file, data, 0644); err != nil {
log.Fatalln("Failed to write address book:", err)
}
StartedAt := time.Now()
serialized := &myproto.CADData{}
proto.Unmarshal(data, serialized)
EndedAt := time.Now()
DeElapseTime := EndedAt.Sub(StartedAt).String()
fmt.Println(size, ",", vertices, ", ", SeElaprseTime, ",", DeElapseTime, ",", (len(data) / 1024), ",", rawSize)
}
func main() {
data := []int{500000, 1000000, 1500000, 3000000, 8000000}
for _, size := range data {
// calculation(size, 5)
//calculation(size, 10)
calculation(size, 20)
}
}
The time you give, is that for serialization, de-serialization, or both?
Your de-serialization code is likely entirely dominated by fmt.Println. Why don't you instead do sum += objVertices.X() + objVertices.Y() and print sum after timing is done? Can you pull objVertices := &MyFlat.Vertices{} outside of the loop?
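For instance, a sketch of that accumulate-instead-of-print idea, using only accessors already present in the question's code:

// sumVertices walks one Polygon, accumulating coordinates instead of
// printing them, so the timed region measures FlatBuffers access rather
// than fmt overhead. The single Vertices accessor is reused across
// iterations instead of being allocated inside the loop.
func sumVertices(obj *MyFlat.Polygon) float64 {
	sum := 0.0
	objVertices := &MyFlat.Vertices{}
	for i := 0; i < obj.PolygonsLength(); i++ {
		obj.Polygons(objVertices, i)
		sum += objVertices.X() + objVertices.Y()
	}
	return sum
}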
You didn't post your protobuf code. Are you including in the timing the time to create the tree of objects which is being serialized (which is required for use in Protobuf but not in FlatBuffers)? Similarly, are you doing the timed (de-)serialization at least a 1000x or so, so you can include the cost of GC (Protobuf allocates a LOT of objects, FlatBuffers allocates few/none) in your comparison?
If after you do the above, it is still slower, post on the FlatBuffers github issues, the authors of the Go port may be able to help further. Make sure you post full code for both systems, and full timings.
Note generally: the design of FlatBuffers is such that it will create the biggest performance gap with Protobuf in C/C++. That said, it should still be a lot faster in Go also. There are unfortunate things about Go however that prevent it from maximizing the performance potential.
b := flatbuffers.NewBuilder(0)
I'm not sure what the "grows automatically" behavior is in Go for flatbuffers, but I'm pretty sure requiring the buffer to grow automatically is not the preferred pattern. Could you try doing your same timing comparison after initializing the buffer with flatbuffers.NewBuilder(moreBytesThanTheMessageNeeds)?
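For instance (a sketch; the 16-bytes-per-vertex estimate is mine and only needs to be in the right ballpark):

// Pre-size the builder so the backing buffer never has to grow and
// re-copy mid-build; any reasonable overestimate of the finished
// message size works.
estimated := size * vertices * 16
b := flatbuffers.NewBuilder(estimated)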

Saving results from a parallelized goroutine

I am trying to parallelize an operation in Go and save the results in a manner that I can iterate over to sum up afterwards.
I have managed to set up the parameters so that no deadlock occurs, and I have confirmed that the operations are working and being saved correctly within the function. But when I iterate over the slice of my struct and try to sum up the results of the operation, they all remain 0. I have tried passing by reference, with pointers, and with channels (which causes a deadlock).
I have only found this example for help: https://golang.org/doc/effective_go.html#parallel. But this seems outdated now, as Vector has been deprecated. I also have not found any references to the way the function in the example was constructed (with func (u Vector) before the name, i.e. a method receiver). I tried replacing this with a slice but got compile-time errors.
Any help would be very appreciated. Here are the key parts of my code:
type Job struct {
	a      int
	b      int
	result *big.Int
}

func choose(jobs []Job, c chan int) {
	temp := new(big.Int)
	for _, job := range jobs {
		job.result = //perform operation on job.a and job.b
		//fmt.Println(job.result)
	}
	c <- 1
}
func main() {
num := 100 //can be very large (why we need big.Int)
n := num
k := 0
const numCPU = 6 //runtime.NumCPU
count := new(big.Int)
// create a 2d slice of jobs, one for each core
jobs := make([][]Job, numCPU)
for (float64(k) <= math.Ceil(float64(num / 2))) {
// add one job to each core, alternating so that
// job set is similar in difficulty
for i := 0; i < numCPU; i++ {
if !(float64(k) <= math.Ceil(float64(num / 2))) {
break
}
jobs[i] = append(jobs[i], Job{n, k, new(big.Int)})
n -= 1
k += 1
}
}
c := make(chan int, numCPU)
for i := 0; i < numCPU; i++ {
go choose(jobs[i], c)
}
// drain the channel
for i := 0; i < numCPU; i++ {
<-c
}
// computations are done
for i := range jobs {
for _,job := range jobs[i] {
//fmt.Println(job.result)
count.Add(count, job.result)
}
}
fmt.Println(count)
}
Here is the code running on the go playground https://play.golang.org/p/X5IYaG36U-
As long as the []Job slice is only modified by one goroutine at a time, there's no reason you can't modify the job in place.
for i, job := range jobs {
	// Allocate a fresh big.Int per job; reusing a single temp value
	// would leave every result pointing at the same number.
	jobs[i].result = new(big.Int).Binomial(int64(job.a), int64(job.b))
}
https://play.golang.org/p/CcEGsa1fLh
You should also use a WaitGroup, rather than rely on counting tokens in a channel yourself.
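A sketch of that combination, sync.WaitGroup plus writing the results through the slice index (the toy job split is my own; Binomial stands in for the question's elided operation, as in the answer above):

package main

import (
	"fmt"
	"math/big"
	"sync"
)

type Job struct {
	a, b   int
	result *big.Int
}

// choose computes one worker's share of jobs and signals completion via
// the WaitGroup instead of a counting channel.
func choose(jobs []Job, wg *sync.WaitGroup) {
	defer wg.Done()
	for i, job := range jobs {
		// Write through the index: ranging yields a copy of each Job,
		// and each result gets its own big.Int.
		jobs[i].result = new(big.Int).Binomial(int64(job.a), int64(job.b))
	}
}

func main() {
	const numCPU = 6
	jobs := make([][]Job, numCPU)
	for k := 0; k <= 10; k++ {
		jobs[k%numCPU] = append(jobs[k%numCPU], Job{a: 20, b: k})
	}
	var wg sync.WaitGroup
	for i := 0; i < numCPU; i++ {
		wg.Add(1)
		go choose(jobs[i], &wg)
	}
	wg.Wait() // all workers done; results are safe to read
	count := new(big.Int)
	for i := range jobs {
		for _, job := range jobs[i] {
			count.Add(count, job.result)
		}
	}
	fmt.Println(count) // sum of C(20,0..10)
}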

How to generate a stream of *unique* random numbers in Go using the standard library

How can I generate a stream of unique random numbers in Go?
I want to guarantee there are no duplicate values in array a using math/rand and/or standard Go library utilities.
func RandomNumberGenerator() *rand.Rand {
	s1 := rand.NewSource(time.Now().UnixNano())
	r1 := rand.New(s1)
	return r1
}

rng := RandomNumberGenerator()
N := 10000
a := make([]int, N) // destination for the generated values
for i := 0; i < N; i++ {
	a[i] = rng.Int()
}
There are questions and solutions on how to generate a series of random numbers in Go, for example, here.
But I would like to generate a series of random numbers that does not duplicate previous values. Is there a standard/recommended way to achieve this in Go?
My guess is to (1) use a permutation or (2) keep track of previously generated numbers and regenerate a value if it's been generated before.
But solution (1) sounds like overkill if I only want a few numbers, and (2) sounds very time-consuming if I end up generating a long series due to collisions; I guess it's also very memory-consuming.
Use case: to benchmark a Go program with 10K, 100K, 1M pseudo-random numbers that have no duplicates.
You should absolutely go with approach (2). Let's assume you're running on a 64-bit machine, and thus generating 63-bit integers (64 bits, but rand.Int never returns negative numbers). Even after you have generated 4 billion numbers, there's still only about a 1 in 2 billion chance that any given new number is a duplicate (2^32 / 2^63). Thus, you'll almost never have to regenerate, and almost never have to regenerate twice.
Try, for example:
type UniqueRand struct {
	generated map[int]bool // set of values already returned
}

func (u *UniqueRand) Int() int {
	if u.generated == nil {
		u.generated = make(map[int]bool) // lazy init so the zero value is usable
	}
	for {
		i := rand.Int()
		if !u.generated[i] {
			u.generated[i] = true
			return i
		}
	}
}
I had a similar task: pick elements from an initial slice by random unique index, i.e. from a slice with 10k elements get 1k random unique elements.
Here is a simple head-on solution:
import (
	"math/rand"
	"time"
)

func getRandomElements(array []string) []string {
	rand.Seed(time.Now().UnixNano()) // seed once, not on every draw
	result := make([]string, 0)
	existingIndexes := make(map[int]struct{})
	randomElementsCount := 1000
	for i := 0; i < randomElementsCount; i++ {
		randomIndex := randomIndex(len(array), existingIndexes)
		result = append(result, array[randomIndex])
	}
	return result
}

// randomIndex retries until it draws an index it has not seen before,
// recording each returned index in existingIndexes.
func randomIndex(size int, existingIndexes map[int]struct{}) int {
	for {
		randomIndex := rand.Intn(size)
		if _, exists := existingIndexes[randomIndex]; !exists {
			existingIndexes[randomIndex] = struct{}{}
			return randomIndex
		}
	}
}
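For this particular task (k unique indexes out of n), the standard library can also do the heavy lifting in one call; a sketch of an alternative getRandomElements (the extra parameter k is my addition):

// rand.Perm returns a random permutation of [0, n); its first k entries
// are therefore k distinct, uniformly random indexes.
func getRandomElements(array []string, k int) []string {
	result := make([]string, 0, k)
	for _, idx := range rand.Perm(len(array))[:k] {
		result = append(result, array[idx])
	}
	return result
}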
I see two reasons for wanting this. You want to test a random number generator, or you want unique random numbers.
You're Testing A Random Number Generator
My first question is why? There's plenty of solid random number generators available. Don't write your own, it's basically dabbling in cryptography and that's never a good idea. Maybe you're testing a system that uses a random number generator to generate random output?
There's a problem: there's no guarantee random numbers are unique. They're random. There's always a possibility of collision. Testing that random output is unique is incorrect.
Instead, you want to test the results are distributed evenly. To do this I'll reference another answer about how to test a random number generator.
You Want Unique Random Numbers
From a practical perspective you don't need guaranteed uniqueness, but to make collisions so unlikely that it's not a concern. This is what UUIDs are for. They're 128 bit Universally Unique IDentifiers. There's a number of ways to generate them for particular scenarios.
UUIDv4 is basically just a 122 bit random number which has some ungodly small chance of a collision. Let's approximate it.
n = how many random numbers you'll generate
M = size of the keyspace (2^122 for a 122-bit random number)
P = probability of collision

P ≈ n^2 / (2M)

Solving for n:

n = sqrt(2MP)
Setting P to something absurd like 1e-12 (one in a trillion), we find you can generate about 3.2 trillion UUIDv4s with a 1 in a trillion chance of collision. You're 1000 times more likely to win the lottery than have a collision in 3.2 trillion UUIDv4s. I think that's acceptable.
Here's a UUIDv4 library in Go to use and a demonstration of generating 1 million unique random 128 bit values.
package main

import "github.com/frankenbeanies/uuid4"

func main() {
	for i := 0; i <= 1000000; i++ {
		uuid := uuid4.New().Bytes()
		_ = uuid // use the uuid
	}
}
You can generate a 12-digit number using UnixNano in the Go time package:
uniqueNumber := time.Now().UnixNano() / (1 << 22)
println(uniqueNumber)
Note that this is time-based rather than random, and it is only unique across calls spaced more than about 4.2ms apart: 1<<22 nanoseconds is the size of each bucket, so two calls within the same window return the same value.
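To see that limitation (a minimal sketch):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Two back-to-back calls almost always fall into the same
	// 2^22 ns (~4.2ms) bucket and thus produce the same "unique" value.
	a := time.Now().UnixNano() / (1 << 22)
	b := time.Now().UnixNano() / (1 << 22)
	fmt.Println(a == b) // almost always true
}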
1- Fast positive and negative int32 unique pseudo random numbers in 296ms using std lib:
package main
import (
"fmt"
"math/rand"
"time"
)
func main() {
const n = 1000000
rand.Seed(time.Now().UTC().UnixNano())
duplicate := 0
mp := make(map[int32]struct{}, n)
var r int32
t := time.Now()
for i := 0; i < n; {
r = rand.Int31()
if i&1 == 0 {
r = -r
}
if _, ok := mp[r]; ok {
duplicate++
} else {
mp[r] = zero
i++
}
}
fmt.Println(time.Since(t))
fmt.Println("len: ", len(mp))
fmt.Println("duplicate: ", duplicate)
positive := 0
for k := range mp {
if k > 0 {
positive++
}
}
fmt.Println(`n=`, n, `positive=`, positive)
}
var zero = struct{}{}
output:
296.0169ms
len: 1000000
duplicate: 118
n= 1000000 positive= 500000
2- Just fill the map[int32]struct{}:
for i := int32(0); i < n; i++ {
m[i] = zero
}
Map iteration order is randomized in Go, so reading it back is not in order:
for k := range m {
fmt.Print(k, " ")
}
And this just takes 183ms for 1000000 unique numbers, no duplicate (The Go Playground):
package main
import (
"fmt"
"time"
)
func main() {
const n = 1000000
m := make(map[int32]struct{}, n)
t := time.Now()
for i := int32(0); i < n; i++ {
m[i] = zero
}
fmt.Println(time.Since(t))
fmt.Println("len: ", len(m))
// for k := range m {
// fmt.Print(k, " ")
// }
}
var zero = struct{}{}
3- Here is the simple but slow approach (it takes 22s for 200,000 unique numbers), so you may want to generate once and save to a file:
package main
import "time"
import "fmt"
import "math/rand"
func main() {
dup := 0
t := time.Now()
const n = 200000
rand.Seed(time.Now().UTC().UnixNano())
var a [n]int32
var exist bool
for i := 0; i < n; {
r := rand.Int31()
exist = false
for j := 0; j < i; j++ {
if a[j] == r {
dup++
fmt.Println(dup)
exist = true
break
}
}
if !exist {
a[i] = r
i++
}
}
fmt.Println(time.Since(t))
}
Temporary workaround based on #joshlf's answer
type UniqueRand struct {
	generated map[int]bool // keeps track of previously generated numbers
	rng       *rand.Rand   // underlying random number generator
	scope     int          // upper bound for generated numbers
}

// NewUniqueRand returns a generator of unique random numbers.
// If N is less than or equal to 0, the scope is unlimited.
// If N is greater than 0, it generates numbers in [0, N).
// Once no more unique numbers can be generated, Int returns -1 from then on.
func NewUniqueRand(N int) *UniqueRand {
	s1 := rand.NewSource(time.Now().UnixNano())
	r1 := rand.New(s1)
	return &UniqueRand{
		generated: map[int]bool{},
		rng:       r1,
		scope:     N,
	}
}

func (u *UniqueRand) Int() int {
	if u.scope > 0 && len(u.generated) >= u.scope {
		return -1
	}
	for {
		var i int
		if u.scope > 0 {
			i = u.rng.Int() % u.scope
		} else {
			i = u.rng.Int()
		}
		if !u.generated[i] {
			u.generated[i] = true
			return i
		}
	}
}
Client side code
func TestSetGet2(t *testing.T) {
const N = 10000
for _, mask := range []int{0, -1, 0x555555, 0xaaaaaa, 0x333333, 0xcccccc, 0x314159} {
rng := NewUniqueRand(2*N)
a := make([]int, N)
for i := 0; i < N; i++ {
a[i] = (rng.Int() ^ mask) << 1
}
//Benchmark Code
}
}

Golang share big chunk of data between goroutines

I have a need to read structure fields set from another goroutine. AFAIK, doing so directly, even when knowing for sure there will be no concurrent access (the write finished before the read occurred, signaled via a chan struct{}), may result in stale data.
Will sending a pointer to the structure (created in the 1st goroutine, modified in the 2nd, read by the 3rd) resolve the possible staleness issue, considering I can guarantee no concurrent access?
I would like to avoid copying, as the structure is big and contains a huge bytes.Buffer filled in the 2nd goroutine that I need to read from the 3rd.
There is an option to lock, but that seems like overkill considering I know there will be no concurrent access.
There are many answers to this, and it depends on your data structure and program logic.
See: How to lock/synchronize access to a variable in Go during concurrent goroutines?
And: How to use RWMutex in Golang?
1- using Stateful Goroutines and channels
2- using sync.Mutex
3- using sync/atomic
4- using WaitGroup
5- using program logic(Semaphore)
...
1: Stateful Goroutines and channels:
I simulated a very similar sample (imagine you want to read from one SSD and write to another SSD with a different speed):
In this sample code, one goroutine (named write) does some work, prepares data, and fills the big struct; another goroutine (named read) reads data from the big struct and then does its own work; and the manager goroutine guarantees no concurrent access to the same data.
Communication between the three goroutines is done with channels. In your case you can use pointers for the channel data, or a global struct as in this sample.
output will be like this:
mean= 36.6920166015625 stdev= 6.068973186592054
I hope this helps you to get the idea.
Working sample code:
package main
import (
"fmt"
"math"
"math/rand"
"runtime"
"sync"
"time"
)
type BigStruct struct {
big []uint16
rpos int
wpos int
full bool
empty bool
stopped bool
}
func main() {
wg.Add(1)
go write()
go read()
go manage()
runtime.Gosched()
stopCh <- <-time.After(5 * time.Second)
wg.Wait()
mean := Mean(hist)
stdev := stdDev(hist, mean)
fmt.Println("mean=", mean, "stdev=", stdev)
}
const N = 1024 * 1024 * 1024
var wg sync.WaitGroup
var stopCh chan time.Time = make(chan time.Time)
var hist []int = make([]int, 65536)
var s *BigStruct = &BigStruct{empty: true,
big: make([]uint16, N), //2GB
}
var rc chan uint16 = make(chan uint16)
var wc chan uint16 = make(chan uint16)
func next(pos int) int {
pos++
if pos >= N {
pos = 0
}
return pos
}
func manage() {
dataReady := false
var data uint16
for {
if !dataReady && !s.empty {
dataReady = true
data = s.big[s.rpos]
s.rpos++
if s.rpos >= N {
s.rpos = 0
}
s.empty = s.rpos == s.wpos
s.full = next(s.wpos) == s.rpos
}
if dataReady {
select {
case rc <- data:
dataReady = false
default:
runtime.Gosched()
}
}
if !s.full {
select {
case d := <-wc:
s.big[s.wpos] = d
s.wpos++
if s.wpos >= N {
s.wpos = 0
}
s.empty = s.rpos == s.wpos
s.full = next(s.wpos) == s.rpos
default:
runtime.Gosched()
}
}
if s.stopped {
if s.empty {
wg.Done()
return
}
}
}
}
func read() {
for {
d := <-rc
hist[d]++
}
}
func write() {
for {
wc <- uint16(rand.Intn(65536))
select {
case <-stopCh:
s.stopped = true
return
default:
runtime.Gosched()
}
}
}
func stdDev(data []int, mean float64) float64 {
sum := 0.0
for _, d := range data {
sum += math.Pow(float64(d)-mean, 2)
}
variance := sum / float64(len(data)-1)
return math.Sqrt(variance)
}
func Mean(data []int) float64 {
sum := 0.0
for _, d := range data {
sum += float64(d)
}
return sum / float64(len(data))
}
5: Another way (faster) for some use cases:
Here is another way to use a shared data structure for the read job / write job / processing job that was separated into three goroutines in the first sample; now the same three jobs are done without channels and without a mutex.
Working sample:
package main
import (
"fmt"
"math"
"math/rand"
"time"
)
type BigStruct struct {
big []uint16
rpos int
wpos int
full bool
empty bool
stopped bool
}
func manage() {
for {
if !s.empty {
hist[s.big[s.rpos]]++ //sample read job with any time len
nextPtr(&s.rpos)
}
if !s.full && !s.stopped {
s.big[s.wpos] = uint16(rand.Intn(65536)) //sample wrire job with any time len
nextPtr(&s.wpos)
}
if s.stopped {
if s.empty {
return
}
} else {
s.stopped = time.Since(t0) >= 5*time.Second
}
}
}
func main() {
t0 = time.Now()
manage()
mean := Mean(hist)
stdev := StdDev(hist, mean)
fmt.Println("mean=", mean, "stdev=", stdev)
d0 := time.Since(t0)
fmt.Println(d0) //5.8523347s
}
var t0 time.Time
const N = 100 * 1024 * 1024
var hist []int = make([]int, 65536)
var s *BigStruct = &BigStruct{empty: true,
	big: make([]uint16, N), //200MB
}
func next(pos int) int {
pos++
if pos >= N {
pos = 0
}
return pos
}
func nextPtr(pos *int) {
*pos++
if *pos >= N {
*pos = 0
}
s.empty = s.rpos == s.wpos
s.full = next(s.wpos) == s.rpos
}
func StdDev(data []int, mean float64) float64 {
sum := 0.0
for _, d := range data {
sum += math.Pow(float64(d)-mean, 2)
}
variance := sum / float64(len(data)-1)
return math.Sqrt(variance)
}
func Mean(data []int) float64 {
sum := 0.0
for _, d := range data {
sum += float64(d)
}
return sum / float64(len(data))
}
To prevent concurrent modifications to a struct while retaining the ability to read, you'd typically embed a sync.RWMutex. This is no exception. You can simply lock your struct for writes while it is in transit and unlock it at a point in time of your convenience.
package main
import (
"fmt"
"sync"
"time"
)
// Big simulates your big struct
type Big struct {
sync.RWMutex
value string
}
// pump uses a groutine to take the slice of pointers to Big,
// locks the underlying structs and sends the pointers to
// the locked instances of Big downstream
func pump(bigs []*Big) chan *Big {
// We make the channel buffered for this example
// for illustration purposes
c := make(chan *Big, 3)
go func() {
for _, big := range bigs {
// We lock the struct before sending it to the channel
// so it can not be changed via pointer while in transit
big.Lock()
c <- big
}
close(c)
}()
return c
}
// sink reads pointers to the locked instances of Big
// reads them and unlocks them
func sink(c chan *Big) {
for big := range c {
fmt.Println(big.value)
time.Sleep(1 * time.Second)
big.Unlock()
}
}
// modify tries to achieve locks to the instances and modify them
func modify(bigs []*Big) {
for _, big := range bigs {
big.Lock()
big.value = "modified"
big.Unlock()
}
}
func main() {
bigs := []*Big{&Big{value: "Foo"}, &Big{value: "Bar"}, &Big{value: "Baz"}}
c := pump(bigs)
// For the sake of this example, we wait until all entries are
// send into the channel and hence are locked
time.Sleep(1 * time.Second)
// Now we try to modify concurrently before we even start to read
// the struct of which the pointers were sent into the channel
go modify(bigs)
sink(c)
// We use sleep here to keep waiting for modify() to finish simple.
// Usually, you'd use a sync.WaitGroup
time.Sleep(1 * time.Second)
for _, big := range bigs {
fmt.Println(big.value)
}
}
Run on playground
