How to send off goroutines in a worker pool - Go

I'm writing an algorithm that breaks an image into segments and manipulates them; however, the way I'm currently using goroutines isn't optimal.
I'd like to split the work into a worker pool, firing off goroutines and having each worker take a new job until the image is completed.
I currently have the image split into eighths, as such:
var bounds = img.Bounds()
var halfHeight = bounds.Max.Y / 2
var eighthOne = halfHeight / 4
var eighthTwo = eighthOne + eighthOne
var eighthThree = eighthOne + eighthTwo
var eighthFive = halfHeight + eighthOne
var eighthSix = halfHeight + eighthTwo
var eighthSeven = halfHeight + eighthThree
elapsed := time.Now()
go Threshold(pic, c2, 0, eighthOne)
go Threshold(pic, c5, eighthOne, eighthTwo)
go Threshold(pic, c6, eighthTwo, eighthThree)
go Threshold(pic, c7, eighthThree, halfHeight)
go Threshold(pic, c8, halfHeight, eighthFive)
go Threshold(pic, c9, eighthFive, eighthSix)
go Threshold(pic, c10, eighthSix, eighthSeven)
go Threshold(pic, c11, eighthSeven, bounds.Max.Y)
From there I fire off the goroutines one after another. How can I optimise this into a worker system?
Thanks

Here is a generic pattern for implementing concurrent image processors. It gives the caller control over the image partitioning (splitting the work into n parts) and over the concurrency level of the execution (i.e. the number of worker goroutines used to execute the, possibly different, number of processing jobs).
See the pprocess func, which implements the whole pattern. It takes a Partitioner and a Processor: the former is a func whose job is to return the n image partitions to operate on, and the latter is a func used to process each partition.
I implemented the vertical splitting you expressed in your code example in the func splitVert, which returns a function that splits an image into n vertical sections.
To do some actual work I implemented the gray func, a Processor that transforms pixel colors to gray levels (luminance).
Here's the working code:
package main

import (
    "image"
    "image/color"
    "image/draw"
    "image/jpeg"
    "log"
    "os"
    "runtime"
    "sync"
)

// MutableImage is an image.Image that can also be written to.
type MutableImage interface {
    image.Image
    Set(x, y int, c color.Color)
}

// Processor does the pixel work on a single partition.
type Processor func(MutableImage, image.Rectangle)

// Partitioner splits an image into the rectangles to be processed.
type Partitioner func(image.Image) []image.Rectangle

func pprocess(i image.Image, concurrency int, part Partitioner, proc Processor) image.Image {
    m := image.NewRGBA(i.Bounds())
    draw.Draw(m, i.Bounds(), i, i.Bounds().Min, draw.Src)
    var wg sync.WaitGroup
    c := make(chan image.Rectangle, concurrency*2)
    for n := 0; n < concurrency; n++ {
        wg.Add(1)
        go func() {
            for r := range c {
                proc(m, r)
            }
            wg.Done()
        }()
    }
    for _, p := range part(i) {
        c <- p
    }
    close(c)
    wg.Wait()
    return m
}

func gray(i MutableImage, r image.Rectangle) {
    for x := r.Min.X; x <= r.Max.X; x++ {
        for y := r.Min.Y; y <= r.Max.Y; y++ {
            c := i.At(x, y)
            r, g, b, _ := c.RGBA() // note: r, g, b shadow the rectangle r inside this block
            l := 0.299*float64(r) + 0.587*float64(g) + 0.114*float64(b)
            i.Set(x, y, color.Gray{uint8(l / 256)})
        }
    }
}

func splitVert(c int) Partitioner {
    return func(i image.Image) []image.Rectangle {
        b := i.Bounds()
        s := float64(b.Dy()) / float64(c)
        rs := make([]image.Rectangle, c)
        for n := 0; n < c; n++ {
            m := float64(n)
            x0 := b.Min.X
            y0 := b.Min.Y + int(0.5+m*s)
            x1 := b.Max.X
            y1 := b.Min.Y + int(0.5+(m+1)*s)
            if n < c-1 {
                y1--
            }
            rs[n] = image.Rect(x0, y0, x1, y1)
        }
        return rs
    }
}

func main() {
    i, err := jpeg.Decode(os.Stdin)
    if err != nil {
        log.Fatalf("decoding image: %v", err)
    }
    o := pprocess(i, runtime.NumCPU(), splitVert(8), gray)
    err = jpeg.Encode(os.Stdout, o, nil)
    if err != nil {
        log.Fatalf("encoding image: %v", err)
    }
}
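Since the program reads a JPEG from stdin and writes the result to stdout, it can be tried (assuming the code is saved as main.go) with:

    go run main.go <in.jpg >out.jpg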

Related

Threading Decasteljau algorithm in golang

I'm trying to write a threaded Decasteljau algorithm for control polygons with any set of points in Go, but I can't get the goroutines to work right: they run erratically and I can't manage to get all of the goroutines to do their work.
Here's my code for the decasteljau.go file:
package main

import (
    "fmt"
)

type ControlPolygon struct {
    Vertices []Vertex
}

type Spline struct {
    Vertices map[int]Vertex
}

type splinePoint struct {
    index  int
    vertex Vertex
}

func (controlPolygon ControlPolygon) Decasteljau(levelOfDetail int) {
    //
    // LevelOfDetail is the number of points in the spline
    //
    spline := Spline{make(map[int]Vertex)}
    splinePointsChannel := make(chan splinePoint)
    for index := 1; index < levelOfDetail; index++ {
        splinePoint := splinePoint{}
        splinePoint.index = index
        pointPosition := float64(index) / float64(levelOfDetail)
        go func() {
            fmt.Println("goroutine number:", index)
            splinePoint.findSplinePoint(controlPolygon.Vertices, pointPosition)
            splinePointsChannel <- splinePoint
        }()
    }
    point := <-splinePointsChannel
    spline.Vertices[point.index] = point.vertex
    fmt.Println(spline)
}

func (point *splinePoint) findSplinePoint(vertices []Vertex, pointPosition float64) {
    var interpolationPoints []Vertex
    if len(vertices) == 1 {
        fmt.Println("vertices : ", vertices)
        point.vertex = vertices[0]
    }
    if len(vertices) > 1 {
        for i := 0; i < len(vertices)-1; i++ {
            interpolationPoint := vertices[i].GetInterpolationPoint(vertices[i+1], pointPosition)
            interpolationPoints = append(interpolationPoints, interpolationPoint)
        }
        point.findSplinePoint(interpolationPoints, pointPosition)
        fmt.Println()
    } else {
        fmt.Println("Done Detailing ..")
        return
    }
}

func main() {
    v1 := Vertex{0, 0, 0}
    v2 := Vertex{0, 0, 1}
    v3 := Vertex{0, 1, 1}
    v4 := Vertex{0, 1, 0}
    vertices := []Vertex{v1, v2, v3, v4}
    controlPolygon := ControlPolygon{vertices}
    controlPolygon.Decasteljau(10)
}
I'm also new to Go concurrency, and after a lot of research I'm still wondering whether I need buffered or unbuffered channels for my case.
I also found that goroutines are mostly used for managing networking rather than optimizing 3D, so I would love to know whether I'm using a good stack for writing concurrent 3D algorithms.
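One symptom worth checking in the code above: Decasteljau starts levelOfDetail-1 goroutines but performs only a single receive from splinePointsChannel, so only one point can ever land in the spline, and the remaining goroutines stay blocked on their sends. A minimal sketch of a fix, assuming the Vertex type and findSplinePoint stay exactly as in the question, is to receive one result per goroutine started (an unbuffered channel is fine for this):

func (controlPolygon ControlPolygon) Decasteljau(levelOfDetail int) {
    spline := Spline{make(map[int]Vertex)}
    splinePointsChannel := make(chan splinePoint)
    for index := 1; index < levelOfDetail; index++ {
        go func(index int) { // pass index as an argument so each goroutine gets its own copy
            point := splinePoint{index: index}
            point.findSplinePoint(controlPolygon.Vertices, float64(index)/float64(levelOfDetail))
            splinePointsChannel <- point
        }(index)
    }
    for i := 1; i < levelOfDetail; i++ { // one receive per goroutine started
        point := <-splinePointsChannel
        spline.Vertices[point.index] = point.vertex
    }
    fmt.Println(spline)
}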

Implementing a gradient descent

I'm trying to implement a gradient descent in Go. My goal is to predict the cost of a car from its mileage.
Here is my data set:
km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290
I've tried various approaches, like normalizing the data set, not normalizing it, leaving the thetas as they are, denormalizing the thetas... but I cannot get the correct result.
My maths must be off somewhere, but I cannot figure out where.
The result I'm trying to get should be approximately t0 = 8500, t1 = -0.02
My implementation is the following:
package main

import (
    "encoding/csv"
    "fmt"
    "log"
    "math"
    "os"
    "strconv"
)

const (
    dataFile     = "data.csv"
    iterations   = 20000
    learningRate = 0.1
)

type dataSet [][]float64

var minKm, maxKm, minPrice, maxPrice float64

func (d dataSet) getExtremes(column int) (float64, float64) {
    min := math.Inf(1)
    max := math.Inf(-1)
    for _, row := range d {
        item := row[column]
        if item > max {
            max = item
        }
        if item < min {
            min = item
        }
    }
    return min, max
}

func normalizeItem(item, min, max float64) float64 {
    return (item - min) / (max - min)
}

func (d *dataSet) normalize() {
    minKm, maxKm = d.getExtremes(0)
    minPrice, maxPrice = d.getExtremes(1)
    for _, row := range *d {
        row[0], row[1] = normalizeItem(row[0], minKm, maxKm), normalizeItem(row[1], minPrice, maxPrice)
    }
}

func processEntry(entry []string) []float64 {
    if len(entry) != 2 {
        log.Fatalln("expected two fields")
    }
    km, err := strconv.ParseFloat(entry[0], 64)
    if err != nil {
        log.Fatalln(err)
    }
    price, err := strconv.ParseFloat(entry[1], 64)
    if err != nil {
        log.Fatalln(err)
    }
    return []float64{km, price}
}

func getData() dataSet {
    file, err := os.Open(dataFile)
    if err != nil {
        log.Fatalln(err)
    }
    reader := csv.NewReader(file)
    entries, err := reader.ReadAll()
    if err != nil {
        log.Fatalln(err)
    }
    entries = entries[1:]
    data := make(dataSet, len(entries))
    for k, entry := range entries {
        data[k] = processEntry(entry)
    }
    return data
}

func outputResult(theta0, theta1 float64) {
    file, err := os.OpenFile("weights.csv", os.O_WRONLY, 0644)
    if err != nil {
        log.Fatalln(err)
    }
    defer file.Close()
    file.Truncate(0)
    file.Seek(0, 0)
    file.WriteString(fmt.Sprintf("theta0,%.6f\ntheta1,%.6f\n", theta0, theta1))
}

func estimatePrice(theta0, theta1, mileage float64) float64 {
    return theta0 + theta1*mileage
}

func (d dataSet) computeThetas(theta0, theta1 float64) (float64, float64) {
    dataSize := float64(len(d))
    t0sum, t1sum := 0.0, 0.0
    for _, it := range d {
        mileage := it[0]
        price := it[1]
        err := estimatePrice(theta0, theta1, mileage) - price
        t0sum += err
        t1sum += err * mileage
    }
    return theta0 - (t0sum / dataSize * learningRate), theta1 - (t1sum / dataSize * learningRate)
}

func denormalize(theta, min, max float64) float64 {
    return theta*(max-min) + min
}

func main() {
    data := getData()
    data.normalize()
    theta0, theta1 := 0.0, 0.0
    for k := 0; k < iterations; k++ {
        theta0, theta1 = data.computeThetas(theta0, theta1)
    }
    theta0 = denormalize(theta0, minKm, maxKm)
    theta1 = denormalize(theta1, minPrice, maxPrice)
    outputResult(theta0, theta1)
}
What should I fix in order to properly implement a gradient descent?
Linear Regression is really simple:
// yi = alpha + beta*xi + ei
func linearRegression(x, y []float64) (float64, float64) {
    EX := expected(x)
    EY := expected(y)
    EXY := expectedXY(x, y)
    EXX := expectedXY(x, x)
    covariance := EXY - EX*EY
    variance := EXX - EX*EX
    beta := covariance / variance
    alpha := EY - beta*EX
    return alpha, beta
}
Output:
8499.599649933218 -0.021448963591702314 396270.87871142407
Code:
package main

import (
    "encoding/csv"
    "fmt"
    "strconv"
    "strings"
)

func main() {
    x, y := readXY(`data.csv`)
    alpha, beta := linearRegression(x, y)
    fmt.Println(alpha, beta, -alpha/beta) // 8499.599649933218 -0.021448963591702314 396270.87871142407
}

// https://en.wikipedia.org/wiki/Ordinary_least_squares#Simple_linear_regression_model
// yi = alpha + beta*xi + ei
func linearRegression(x, y []float64) (float64, float64) {
    EX := expected(x)
    EY := expected(y)
    EXY := expectedXY(x, y)
    EXX := expectedXY(x, x)
    covariance := EXY - EX*EY
    variance := EXX - EX*EX
    beta := covariance / variance
    alpha := EY - beta*EX
    return alpha, beta
}

// E[X]
func expected(x []float64) float64 {
    sum := 0.0
    for _, v := range x {
        sum += v
    }
    return sum / float64(len(x))
}

// E[XY]
func expectedXY(x, y []float64) float64 {
    sum := 0.0
    for i, v := range x {
        sum += v * y[i]
    }
    return sum / float64(len(x))
}

func readXY(filename string) ([]float64, []float64) {
    // file, err := os.Open(filename)
    // if err != nil {
    //     panic(err)
    // }
    // defer file.Close()
    file := strings.NewReader(data)
    reader := csv.NewReader(file)
    records, err := reader.ReadAll()
    if err != nil {
        panic(err)
    }
    records = records[1:]
    size := len(records)
    x := make([]float64, size)
    y := make([]float64, size)
    for i, v := range records {
        val, err := strconv.ParseFloat(v[0], 64)
        if err != nil {
            panic(err)
        }
        x[i] = val
        val, err = strconv.ParseFloat(v[1], 64)
        if err != nil {
            panic(err)
        }
        y[i] = val
    }
    return x, y
}
var data = `km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290`
Gradient descent is based on the observation that if a multi-variable function F(x) is defined and differentiable in a neighborhood of a point a, then F(x) decreases fastest if one goes from a in the direction of the negative gradient of F at a, that is -∇F(a). For example:
// F(x)
f := func(x float64) float64 {
    return alpha + beta*x // write your target function here
}
Derivative function:
h := 0.000001
// Derivative function ∇F(x)
df := func(x float64) float64 {
    return (f(x+h) - f(x-h)) / (2 * h) // write your target function derivative here
}
Search:
minimumAt := 1.0       // We start the search here
gamma := 0.01          // Step size multiplier
precision := 0.0000001 // Desired precision of result
max := 100000          // Maximum number of iterations
currentX := 0.0
step := 0.0
for i := 0; i < max; i++ {
    currentX = minimumAt
    minimumAt = currentX - gamma*df(currentX)
    step = minimumAt - currentX
    if math.Abs(step) <= precision {
        break
    }
}
fmt.Printf("Minimum at %.8f value: %v\n", minimumAt, f(minimumAt))

goroutine blocking and non-blocking usage

I am trying to understand how goroutines work. Here is some code:
//parallelSum.go
package main

import "log"

func sum(a []int, c chan<- int, func_id string) {
    sum := 0
    for _, n := range a {
        sum += n
    }
    log.Printf("func_id %v is DONE!", func_id)
    c <- sum
}

func main() {
    ELEM_COUNT := 10000000
    test_arr := make([]int, ELEM_COUNT)
    for i := 0; i < ELEM_COUNT; i++ {
        test_arr[i] = i * 2
    }
    c1 := make(chan int)
    c2 := make(chan int)
    go sum(test_arr[:len(test_arr)/2], c1, "1")
    go sum(test_arr[len(test_arr)/2:], c2, "2")
    x := <-c1
    y := <-c2
    //x, y := <-c, <-c
    log.Printf("x= %v, y = %v, sum = %v", x, y, x+y)
}
The above program runs fine and returns the output. I have an iterative version of the same program:
//iterSum.go
package main

import "log"

func sumIter(a []int, c *int, func_id string) {
    sum := 0
    log.Printf("entered the func %s", func_id)
    for _, n := range a {
        sum += n
    }
    log.Printf("func_id %v is DONE!", func_id)
    *c = sum
}

func main() {
    ELEM_COUNT := 10000000
    test_arr := make([]int, ELEM_COUNT)
    for i := 0; i < ELEM_COUNT; i++ {
        test_arr[i] = i * 2
    }
    var (
        i1 int
        i2 int
    )
    sumIter(test_arr[:len(test_arr)/2], &i1, "1")
    sumIter(test_arr[len(test_arr)/2:], &i2, "2")
    x := i1
    y := i2
    log.Printf("x= %v, y = %v, sum = %v", x, y, x+y)
}
I ran each program 20 times and averaged the run times. The averages come out almost equal. Shouldn't parallelizing make things faster? What am I doing wrong?
Here is the Python program that runs each 20 times:
import json
import shlex
import subprocess
import time

iterCmd = 'go run iterSum.go'
parallelCmd = 'go run parallelSum.go'
runCount = 20

def analyzeCmd(cmd, runCount):
    runData = []
    print("running cmd (%s) for (%s) times" % (cmd, runCount))
    for i in range(runCount):
        start_time = time.time()
        cmd_out = subprocess.check_call(shlex.split(cmd))
        run_time = time.time() - start_time
        curr_data = {'iteration': i, 'run_time': run_time}
        runData.append(curr_data)
    return runData

iterOut = analyzeCmd(iterCmd, runCount)
parallelOut = analyzeCmd(parallelCmd, runCount)
print("iter cmd data -->")
print(iterOut)
with open('iterResults.json', 'w') as f:
    json.dump(iterOut, f)
print("parallel cmd data -->")
print(parallelOut)
with open('parallelResults.json', 'w') as f:
    json.dump(parallelOut, f)

avg = lambda results: sum(i['run_time'] for i in results) / len(results)
print("average time for iterSum = %3.2f" % (avg(iterOut)))
print("average time for parallelSum = %3.2f" % (avg(parallelOut)))
Here is output of 1 run:
average time for iterSum = 0.27
average time for parallelSum = 0.29
So, several problems here. First, your channels aren't buffered in the concurrent example, which means the two receives may still have to wait a bit on each other. Second, concurrent doesn't mean parallel: are you sure these goroutines are actually running in parallel and not simply being scheduled onto the same OS thread?
That said, your main problem here is that your Python code is using go run for each iteration, which means the vast majority of your recorded "run time" is actually the compilation of your code (go run compiles and then runs the specified file, and it specifically by design does not cache any of that). If you want to test run time, use Go's benchmark system, not your own cobbled-together version. You'll get far more accurate results. For example, beyond the compilation bottleneck, there's also no way to identify how much of a bottleneck the Python code itself is introducing.
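For instance, here is a sketch of such a benchmark. It assumes sum and sumIter are moved into one package (with the duplicate main functions dropped, and ideally the log calls removed, since those would dominate the timings); save it as a _test.go file and run go test -bench=.:

package main

import "testing"

const elemCount = 10000000

// testArr builds the same input the original programs use.
func testArr() []int {
    a := make([]int, elemCount)
    for i := range a {
        a[i] = i * 2
    }
    return a
}

func BenchmarkIterSum(b *testing.B) {
    a := testArr()
    b.ResetTimer()
    for n := 0; n < b.N; n++ {
        var i1, i2 int
        sumIter(a[:len(a)/2], &i1, "1")
        sumIter(a[len(a)/2:], &i2, "2")
        _ = i1 + i2
    }
}

func BenchmarkParallelSum(b *testing.B) {
    a := testArr()
    b.ResetTimer()
    for n := 0; n < b.N; n++ {
        c1 := make(chan int, 1)
        c2 := make(chan int, 1)
        go sum(a[:len(a)/2], c1, "1")
        go sum(a[len(a)/2:], c2, "2")
        x, y := <-c1, <-c2
        _ = x + y
    }
}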
Oh, and you should get out of the habit of using reference arguments to functions as a way to "return" values. Go supports multiple returns, so the C style of modifying arguments in-place is generally considered an anti-pattern unless there's a really compelling reason to do it.
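In other words, rather than sumIter(a, &result, id), something like this sketch (sumOf is a hypothetical name):

// Return the result instead of writing through a pointer argument.
func sumOf(a []int) int {
    total := 0
    for _, n := range a {
        total += n
    }
    return total
}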

RGBA to Grayscale in parallel Golang

I have written a program which converts an RGBA image to grayscale sequentially. I'm now trying to convert it so it runs in parallel.
I roughly understand how I need to do this, but I'm struggling to get started.
Here is what I have so far.
package main

import (
    "image"
    "image/color"
    "image/jpeg"
    "log"
    "os"
)

var lum float64

type ImageSet interface {
    Set(x, y int, c color.Color)
}

func rgbtogray(r uint32, g uint32, b uint32) float64 {
    lum = 0.299*float64(r) + 0.587*float64(g) + 0.114*float64(b)
    return lum
}

func main() {
    file, err := os.Open("flower.jpg")
    if err != nil {
        log.Fatal(err)
    }
    defer file.Close()
    img, err := jpeg.Decode(file)
    if err != nil {
        log.Fatal(os.Stderr, "%s: %v\n", "flower.jpg", err)
    }
    channel1 := make(chan float64)
    channel2 := make(chan float64)
    b := img.Bounds()
    imgSet := image.NewRGBA(b)
    halfImage := b.Max.X / 2
    fullImage := b.Max.X
    for y := 0; y < b.Max.Y; y++ {
        go func() {
            for x := 0; x < halfImage; x++ {
                oldPixel := img.At(x, y)
                r, g, b, _ := oldPixel.RGBA()
                channel1 <- rgbtogray(r, g, b)
                pixel := color.Gray{uint8(lum / 256)}
                imgSet.Set(x, y, pixel)
            }
        }()
        go func() {
            for x := halfImage; x < fullImage; x++ {
                oldPixel := img.At(x, y)
                r, g, b, _ := oldPixel.RGBA()
                channel2 <- rgbtogray(r, g, b)
                pixel := color.Gray{uint8(lum / 256)}
                imgSet.Set(x, y, pixel)
            }
        }()
    }
    outFile, err := os.Create("changed.jpg")
    if err != nil {
        log.Fatal(err)
    }
    defer outFile.Close()
    jpeg.Encode(outFile, imgSet, nil)
}
This runs but just returns a black image. I know the way I'm going about it is wrong, but I'm not 100% sure what route I need to take.
My idea is to split the image down the middle, have one channel work on the pixels on the left and one channel work on the pixels on the right, before moving down to the next y coordinate, and so on.
I've tried moving all of the code from my go functions into my rgbtogray function, but I was getting multiple errors to do with passing variables through, etc. Would it be best to create another function which deals with the splitting, so that calling my go functions just looks something like:
go func() {
    channel1 <- rgbtogray(r, g, b)
}()
go func() {
    channel2 <- rgbtogray(r, g, b)
}()
I'm unsure what steps I should take next, so any tips and help are greatly appreciated.
Here's an implementation of @JimB's suggestion in the comments. It exploits the fact that JPEG images are in YCbCr, processing the image in place by setting the Cb and Cr components to 128, using one goroutine for each.
package main

import (
    "image"
    "image/jpeg"
    "log"
    "os"
    "sync"
)

func set(wg *sync.WaitGroup, a []uint8, v uint8) {
    for i := 0; i < len(a); i++ {
        a[i] = v
    }
    wg.Done()
}

func gray(i *image.YCbCr) {
    var wg sync.WaitGroup
    wg.Add(2)
    go set(&wg, i.Cb, 128)
    go set(&wg, i.Cr, 128)
    wg.Wait()
}

func main() {
    i, err := jpeg.Decode(os.Stdin)
    if err != nil {
        log.Fatalf("decoding image: %v", err)
    }
    gray(i.(*image.YCbCr))
    err = jpeg.Encode(os.Stdout, i, nil)
    if err != nil {
        log.Fatalf("encoding image: %v", err)
    }
}
It turned out pretty simple.
Of course it could be extended to create more goroutines (possibly one per available core), assigning slices of the Cb and Cr arrays to each for processing; a sketch of that follows.
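A sketch of that extension, reusing the set helper above (grayN and the chunking scheme are just one way to slice the planes):

func grayN(i *image.YCbCr, workers int) {
    var wg sync.WaitGroup
    for _, plane := range [][]uint8{i.Cb, i.Cr} {
        chunk := (len(plane) + workers - 1) / workers // ceiling division
        for off := 0; off < len(plane); off += chunk {
            end := off + chunk
            if end > len(plane) {
                end = len(plane)
            }
            wg.Add(1)
            go set(&wg, plane[off:end], 128)
        }
    }
    wg.Wait()
}

It would be called as grayN(img, runtime.NumCPU()).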
How about creating a pipeline of pixels read from the image, converted to gray and finally set to a new image, where all steps run concurrently and each step could be internally parallelized?
Then Go's runtime will parallelize the tasks using all available cores.
This implementation works:
package main

import (
    "image"
    "image/color"
    "image/jpeg"
    "log"
    "os"
    "sync"
)

type Setter interface {
    Set(x, y int, c color.Color)
}

type Pixel struct {
    X, Y int
    C    color.Color
}

func sendPixels(in image.Image, out chan Pixel) {
    b := in.Bounds()
    for x := 0; x < b.Max.X; x++ {
        for y := 0; y < b.Max.Y; y++ {
            out <- Pixel{x, y, in.At(x, y)}
        }
    }
    close(out)
}

func convertToGray(in chan Pixel, out chan Pixel) {
    var wg sync.WaitGroup
    for p := range in {
        wg.Add(1)
        go func(p Pixel) {
            r, g, b, _ := p.C.RGBA()
            l := 0.299*float64(r) + 0.587*float64(g) + 0.114*float64(b)
            out <- Pixel{p.X, p.Y, color.Gray{uint8(l / 256)}}
            wg.Done()
        }(p)
    }
    wg.Wait()
    close(out)
}

func buildImage(in chan Pixel, out Setter, done chan int) {
    for p := range in {
        out.Set(p.X, p.Y, p.C)
    }
    close(done)
}

func main() {
    i, err := jpeg.Decode(os.Stdin)
    if err != nil {
        log.Fatalf("decoding image: %v", err)
    }
    a := make(chan Pixel, 1000)
    go sendPixels(i, a)
    b := make(chan Pixel, 1000)
    go convertToGray(a, b)
    c := image.NewRGBA(i.Bounds())
    d := make(chan int)
    go buildImage(b, c, d)
    <-d
    err = jpeg.Encode(os.Stdout, c, nil)
    if err != nil {
        log.Fatalf("encoding image: %v", err)
    }
}
Which can be tested running:
go run main.go <in.jpg >out.jpg

Julia set image rendering ruined by concurrency

I have the following code that I am to change into a concurrent program.
// Stefan Nilsson 2013-02-27
// This program creates pictures of Julia sets (en.wikipedia.org/wiki/Julia_set).
package main

import (
    "image"
    "image/color"
    "image/png"
    "log"
    "os"
    "strconv"
)

type ComplexFunc func(complex128) complex128

var Funcs []ComplexFunc = []ComplexFunc{
    func(z complex128) complex128 { return z*z - 0.61803398875 },
    func(z complex128) complex128 { return z*z + complex(0, 1) },
}

func main() {
    for n, fn := range Funcs {
        err := CreatePng("picture-"+strconv.Itoa(n)+".png", fn, 1024)
        if err != nil {
            log.Fatal(err)
        }
    }
}

// CreatePng creates a PNG picture file with a Julia image of size n x n.
func CreatePng(filename string, f ComplexFunc, n int) (err error) {
    file, err := os.Create(filename)
    if err != nil {
        return
    }
    defer file.Close()
    err = png.Encode(file, Julia(f, n))
    return
}

// Julia returns an image of size n x n of the Julia set for f.
func Julia(f ComplexFunc, n int) image.Image {
    bounds := image.Rect(-n/2, -n/2, n/2, n/2)
    img := image.NewRGBA(bounds)
    s := float64(n / 4)
    for i := bounds.Min.X; i < bounds.Max.X; i++ {
        for j := bounds.Min.Y; j < bounds.Max.Y; j++ {
            n := Iterate(f, complex(float64(i)/s, float64(j)/s), 256)
            r := uint8(0)
            g := uint8(0)
            b := uint8(n % 32 * 8)
            img.Set(i, j, color.RGBA{r, g, b, 255})
        }
    }
    return img
}

// Iterate sets z_0 = z, and repeatedly computes z_n = f(z_{n-1}), n ≥ 1,
// until |z_n| > 2 or n = max and returns this n.
func Iterate(f ComplexFunc, z complex128, max int) (n int) {
    for ; n < max; n++ {
        if real(z)*real(z)+imag(z)*imag(z) > 4 {
            break
        }
        z = f(z)
    }
    return
}
I have decided to try and make the Julia() function concurrent. So I changed it to:
func Julia(f ComplexFunc, n int) image.Image {
    bounds := image.Rect(-n/2, -n/2, n/2, n/2)
    img := image.NewRGBA(bounds)
    s := float64(n / 4)
    for i := bounds.Min.X; i < bounds.Max.X; i++ {
        for j := bounds.Min.Y; j < bounds.Max.Y; j++ {
            go func() {
                n := Iterate(f, complex(float64(i)/s, float64(j)/s), 256)
                r := uint8(0)
                g := uint8(0)
                b := uint8(n % 32 * 8)
                img.Set(i, j, color.RGBA{r, g, b, 255})
            }()
        }
    }
    return img
}
This change causes the images to look very different. The patterns are essentially the same, but there are a lot of white pixels that were not there before.
What is happening here?
There are 2 problems:
You don't actually wait for your goroutines to finish.
You don't pass i and j to the goroutine, so they will almost always be the last i and j.
Your function should look something like:
func Julia(f ComplexFunc, n int) image.Image {
    var wg sync.WaitGroup
    bounds := image.Rect(-n/2, -n/2, n/2, n/2)
    img := image.NewRGBA(bounds)
    s := float64(n / 4)
    for i := bounds.Min.X; i < bounds.Max.X; i++ {
        for j := bounds.Min.Y; j < bounds.Max.Y; j++ {
            wg.Add(1)
            go func(i, j int) {
                n := Iterate(f, complex(float64(i)/s, float64(j)/s), 256)
                r := uint8(0)
                g := uint8(0)
                b := uint8(n % 32 * 8)
                img.Set(i, j, color.RGBA{r, g, b, 255})
                wg.Done()
            }(i, j)
        }
    }
    wg.Wait()
    return img
}
A bonus tip: when diving into concurrency, it's usually a good idea to try your code with the race detector.
You might have to use a mutex to call img.Set, but I'm not very sure and I can't test at the moment.
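The race detector is built into the standard toolchain; for example:

    go run -race main.go
    go test -race ./...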
