Better way to convert slice of int to hex value - go

I have a slice of ints containing only zeros and ones (e.g. []int{1,1,1,1,0,0,0,0}), and I want to convert that bit pattern to a hex value. Currently I convert the slice of ints to a slice of strings, join them, and then use strconv.ParseUint to parse the result.
package main

import (
    "fmt"
    "log"
    "strconv"
    "strings"
)

func IntToString(values []int) string {
    valuesText := []string{}
    for i := range values {
        valuesText = append(valuesText, strconv.Itoa(values[i]))
    }
    return strings.Join(valuesText, "")
}

func IntSliceToHex(in []int) (string, error) {
    intString := IntToString(in)
    ui, err := strconv.ParseUint(intString, 2, 64)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("%X", ui), nil
}

func HexToBin(hex string) (string, error) {
    ui, err := strconv.ParseUint(hex, 16, 64)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("%b", ui), nil
}

func main() {
    profile := []int{1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1}
    hex, err := IntSliceToHex(profile)
    if err != nil {
        log.Fatalln(err)
    }
    bin, err := HexToBin(hex)
    if err != nil {
        log.Fatalln(err)
    }
    fmt.Println(hex, bin)
}
OUTPUT: F0F 111100001111
Is there a better way to do this?

You should use bit-shift operations to build up the actual number from the slice, rather than converting each bit to a string and parsing it.
You should also keep the built-up integer rather than converting back and forth to a string.
package main

import (
    "fmt"
)

func main() {
    profile := []int{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
    final := uint64(profile[0])
    for i := 1; i < len(profile); i++ {
        final <<= 1
        final += uint64(profile[i])
    }
    fmt.Printf("%X %b\n", final, final)
    // Output: FFFFFFFFFFFF0000 1111111111111111111111111111111111111111111111110000000000000000
}
Note: final is an unsigned 64-bit integer, so it can handle profile slices of length up to (and including) 64. For larger sizes, use big.Int.
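For profiles longer than 64 bits, a minimal sketch using math/big could look like this (bitsToBig is a hypothetical helper, not part of the answer above; it needs math/big imported):

func bitsToBig(profile []int) *big.Int {
    n := new(big.Int)
    for _, b := range profile {
        n.Lsh(n, 1)                   // n <<= 1
        n.Or(n, big.NewInt(int64(b))) // n |= b
    }
    return n
}

Printing the result with fmt.Printf("%X %b\n", n, n) yields the same hex and binary forms as above, since *big.Int supports the %X and %b verbs.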

Related

Why do Go generics fail when function returns a function?

I've just started trying out generics in Go, and I've run into a situation that fails in a way I don't fully understand.
I've refactored the following function, from this:
func PositivePercentageAbove(above int) func(list []uint8) bool {
    return func(list []uint8) bool {
        acum := 0
        for _, x := range list {
            acum += int(x)
        }
        return (float64(acum) / float64(len(list)) * 100) >= float64(above)
    }
}
into this:
func PositivePercentageAbove[T constraints.Integer](above int) func(list []T) bool {
    return func(list []T) bool {
        acum := 0
        for _, x := range list {
            acum += int(x)
        }
        return (float64(acum) / float64(len(list)) * 100) >= float64(above)
    }
}
The unit test for this function fails with the error tests/utils/NumberUtils_test.go:82:50: cannot infer T. The source is:
func Test_TruePercentageAbove(t *testing.T) {
    tables := []struct {
        percentage int
        list       []uint8
        output     bool
    }{
        {percentage: 100, list: []uint8{1, 1, 1, 1}, output: true},
        {percentage: 100, list: []uint8{1, 1, 0, 1}, output: false},
        {percentage: 80, list: []uint8{1, 1, 1, 1, 0}, output: true},
        {percentage: 90, list: []uint8{1, 1, 1, 1, 0}, output: false},
        {percentage: 100, list: []uint8{1, 1, 1, 1, 0}, output: false},
        {percentage: 40, list: []uint8{0, 1, 0, 1, 0, 1}, output: true},
        {percentage: 60, list: []uint8{0, 1, 0, 1, 0, 1}, output: false},
        {percentage: 70, list: []uint8{0, 1, 0, 1, 0, 1}, output: false},
    }
    for _, table := range tables {
        result := utils.PositivePercentageAbove(table.percentage)(table.list)
        if result != table.output {
            t.Errorf("Slice %v with percentage above %v expected to return %v but returned %v", table.list, table.percentage, table.output, result)
        }
    }
}
I've changed similar functions from int to generics, and I'm not sure why this one in particular is not working. I assume it might somehow be related to the function returning another function, but I can't figure out exactly why. Thanks.
As often, the answer lies in the Type Parameters proposal:
The only type arguments that can be inferred are those that are used for the types of the function's (non-type) input parameters. If there are some type parameters that are used only for the function's result parameter types, or only in the body of the function, then those type arguments cannot be inferred using function argument type inference.
In the case of
func PositivePercentageAbove[T constraints.Integer](above int) func(list []T) bool
because type parameter T does not appear in the parameter list, the corresponding type argument cannot be inferred.
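One fix, therefore, is to supply the type argument explicitly at the call site, so no inference is needed; for example, in the test above:

result := utils.PositivePercentageAbove[uint8](table.percentage)(table.list)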

Performant mean calculation across array of array elements

Apologies if this is a "please do my homework for me" style of question, but I was looking for advice from the community on the performance of array operations in Go.
My problem space: I am using Go to perform astrophotographic image calibration, which requires calculating the average of each pixel element across many data arrays.
For argument's sake, let's give each sub-exposure a data array of type []float32.
What I want is a function that can perform stacking and averaging for any number of these data arrays; e.g., a good master bias frame needs a good number of bias sub-exposures, possibly in the region of 10 to 100, depending on how statistically thorough the end user needs to be.
Therefore, I propose a function with the following signature:
func MeanFloat32Arrays(a [][]float32) ([]float32, error) {
    // calculate the mean across the array of arrays here
}
That is, I am happy for the user to call the function as follows:
m, err := MeanFloat32Arrays([][]float32{i.Dark.data, j.Bias.data, k.Bias.data, ... , z.Bias.data })
How the argument is constructed is, I feel, just another detail we can gloss over for the moment.
My problem: how do I optimise the "mean stacking" process in this function? That is to say, how should I go about making MeanFloat32Arrays as performant as possible?
My initial code is as follows (which passes the test suite outlined below):
func MeanFloat32Arrays(a [][]float32) ([]float32, error) {
    if len(a) == 0 {
        return nil, errors.New("no input arrays provided")
    }
    m := make([]float32, len(a[0]))
    for i := range m {
        for j := range a {
            // Ensure that each sub-array has the same length as the first one:
            if len(a[j]) != len(a[0]) {
                return nil, fmt.Errorf("issue at array input %v: to compute the mean of multiple arrays the length of each array must be the same", j)
            }
            if a[j][i] == 0 {
                continue
            }
            m[i] += a[j][i]
        }
        m[i] /= float32(len(a))
    }
    return m, nil
}
My current unit test suite is as follows:
func TestMeanABC(t *testing.T) {
    a := []float32{10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
    b := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    c := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    m, err := MeanFloat32Arrays([][]float32{a, b, c})
    if err != nil {
        t.Errorf("error should be nil, but got %v", err)
    }
    if len(m) != len(a) {
        t.Errorf("result should be of same length as a")
    }
    if m[0] != 4 {
        t.Errorf("result should be 4, but got %v", m[0])
    }
    if m[1] != 4.333333333333333 {
        t.Errorf("result should be 4.333333333333333, but got %v", m[1])
    }
    if m[2] != 4.666666666666667 {
        t.Errorf("result should be 4.666666666666667, but got %v", m[2])
    }
    if m[3] != 5 {
        t.Errorf("result should be 5, but got %v", m[3])
    }
    if m[4] != 5.333333333333333 {
        t.Errorf("result should be 5.333333333333333, but got %v", m[4])
    }
    if m[5] != 5.666666666666667 {
        t.Errorf("result should be 5.666666666666667, but got %v", m[5])
    }
    //... Assume here that the mean calculation is correct for all other elements
}

func TestMeanABCD(t *testing.T) {
    a := []float32{10, 9, 8, 7, 6, 5, 4, 3, 2, 1}
    b := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    c := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    d := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    m, err := MeanFloat32Arrays([][]float32{a, b, c, d})
    if err != nil {
        t.Errorf("error should be nil, but got %v", err)
    }
    if len(m) != len(a) {
        t.Errorf("result should be of same length as a")
    }
    if m[0] != 3.25 {
        t.Errorf("result should be 3.25, but got %v", m[0])
    }
    if m[1] != 3.75 {
        t.Errorf("result should be 3.75, but got %v", m[1])
    }
    if m[2] != 4.25 {
        t.Errorf("result should be 4.25, but got %v", m[2])
    }
    if m[3] != 4.75 {
        t.Errorf("result should be 4.75, but got %v", m[3])
    }
    if m[4] != 5.25 {
        t.Errorf("result should be 5.25, but got %v", m[4])
    }
    if m[5] != 5.75 {
        t.Errorf("result should be 5.75, but got %v", m[5])
    }
    //... Assume here that the mean calculation is correct for all other elements
}
func TestMeanABNotEqualLengthPanic(t *testing.T) {
    a := []float32{2, 3, 4, 5, 6, 7, 8, 9, 10, 11}
    b := []float32{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}
    _, err := MeanFloat32Arrays([][]float32{a, b})
    if err == nil {
        t.Errorf("error should not be nil for two arrays of unequal length")
    }
}
So my question: what tricks could I apply to make this function as fast as possible? Could I use multiple goroutines, and would that be thread safe? I'd welcome any comments, as I am still in the phase of writing Go as a hobbyist...
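One possible direction, sketched below, is to validate the lengths once up front and then split the element range across a fixed number of goroutines; each goroutine writes only its own disjoint chunk of the output slice, so the writes are race-free without locking. This is a rough, untuned sketch (meanFloat32ArraysParallel is a hypothetical name; it needs errors, fmt, runtime and sync imported):

func meanFloat32ArraysParallel(a [][]float32) ([]float32, error) {
    if len(a) == 0 {
        return nil, errors.New("no input arrays provided")
    }
    n := len(a[0])
    // Validate all lengths once, outside the hot loop.
    for j := range a {
        if len(a[j]) != n {
            return nil, fmt.Errorf("array %d has length %d, expected %d", j, len(a[j]), n)
        }
    }
    m := make([]float32, n)
    workers := runtime.NumCPU()
    chunk := (n + workers - 1) / workers
    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        lo, hi := w*chunk, w*chunk+chunk
        if hi > n {
            hi = n
        }
        if lo >= hi {
            break
        }
        wg.Add(1)
        go func(lo, hi int) {
            defer wg.Done()
            for j := range a {
                row := a[j] // walk each row contiguously for cache locality
                for i := lo; i < hi; i++ {
                    m[i] += row[i]
                }
            }
            for i := lo; i < hi; i++ {
                m[i] /= float32(len(a))
            }
        }(lo, hi)
    }
    wg.Wait()
    return m, nil
}

Whether this beats the serial version depends on the array sizes; for small inputs the goroutine overhead can dominate, so benchmark both.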

Golang remove dup ints from slice append function "evaluated but not used"

I can't get this Go test program to run. The compiler keeps reporting an "evaluated but not used" error on the append() call below, and I can't figure out why.
package main

import (
    "fmt"
)

func removeDuplicates(testArr *[]int) int {
    prevValue := (*testArr)[0]
    for curIndex := 1; curIndex < len(*testArr); curIndex++ {
        curValue := (*testArr)[curIndex]
        if curValue == prevValue {
            append((*testArr)[:curIndex], (*testArr)[curIndex+1:]...)
        }
        prevValue = curValue
    }
    return len(*testArr)
}

func main() {
    testArr := []int{0, 0, 1, 1, 1, 2, 2, 3, 3, 4}
    nonDupSize := removeDuplicates(&testArr)
    fmt.Printf("nonDupSize = %d", nonDupSize)
}
"evaluated but not used" error.
The code below is my idea; I think your original code is not very clear.
package main

import (
    "fmt"
)

func removeDuplicates(testArr *[]int) int {
    m := make(map[int]bool)
    arr := make([]int, 0)
    for curIndex := 0; curIndex < len(*testArr); curIndex++ {
        curValue := (*testArr)[curIndex]
        if has := m[curValue]; !has {
            m[curValue] = true
            arr = append(arr, curValue)
        }
    }
    *testArr = arr
    return len(*testArr)
}

func main() {
    testArr := []int{0, 0, 1, 1, 1, 2, 2, 3, 3, 4}
    nonDupSize := removeDuplicates(&testArr)
    fmt.Printf("nonDupSize = %d", nonDupSize)
}
Peter's answer nailed it: the compile error was due to not using the return value from append().
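For completeness, a minimal sketch of the fixed in-place version (assuming the input is sorted, as in the test data) simply assigns the result of append back through the pointer:

func removeDuplicates(testArr *[]int) int {
    arr := *testArr
    if len(arr) == 0 {
        return 0
    }
    out := arr[:1] // reuses the input's backing array
    for _, v := range arr[1:] {
        if v != out[len(out)-1] {
            out = append(out, v)
        }
    }
    *testArr = out // keep the slice header returned by append
    return len(out)
}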

Matrix multiplication with goroutine drops performance

I am optimizing matrix multiplication via goroutines in Go.
My benchmarks show that introducing concurrency per row or per element significantly drops performance:
goos: darwin
goarch: amd64
BenchmarkMatrixDotNaive/A.MultNaive-8 2000000 869 ns/op 0 B/op 0 allocs/op
BenchmarkMatrixDotNaive/A.ParalMultNaivePerRow-8 100000 14467 ns/op 80 B/op 9 allocs/op
BenchmarkMatrixDotNaive/A.ParalMultNaivePerElem-8 20000 77299 ns/op 528 B/op 65 allocs/op
I have some basic knowledge of cache locality, so it makes sense that per-element concurrency drops performance. However, why does per-row concurrency still drop performance, even in the naive version?
In fact, I also wrote a block/tiling optimization whose vanilla version (without goroutine concurrency) performs even worse than the naive version (not presented here; let's focus on the naive version first).
What did I do wrong here? Why? How can I optimize it?
Multiplication:
package naive

import (
    "errors"
    "sync"
)

// Errors
var (
    ErrNumElements = errors.New("Error number of elements")
    ErrMatrixSize  = errors.New("Error size of matrix")
)

// Matrix is a 2d array
type Matrix struct {
    N    int
    data [][]float64
}

// New a size by size matrix
func New(size int) func(...float64) (*Matrix, error) {
    wg := sync.WaitGroup{}
    d := make([][]float64, size)
    for i := range d {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            d[i] = make([]float64, size)
        }(i)
    }
    wg.Wait()
    m := &Matrix{N: size, data: d}
    return func(es ...float64) (*Matrix, error) {
        if len(es) != size*size {
            return nil, ErrNumElements
        }
        for i := range es {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                m.data[i/size][i%size] = es[i]
            }(i)
        }
        wg.Wait()
        return m, nil
    }
}

// At access element (i, j)
func (A *Matrix) At(i, j int) float64 {
    return A.data[i][j]
}

// Set set element (i, j) with val
func (A *Matrix) Set(i, j int, val float64) {
    A.data[i][j] = val
}

// MultNaive matrix multiplication O(n^3)
func (A *Matrix) MultNaive(B, C *Matrix) (err error) {
    var (
        i, j, k int
        sum     float64
        N       = A.N
    )
    if N != B.N || N != C.N {
        return ErrMatrixSize
    }
    for i = 0; i < N; i++ {
        for j = 0; j < N; j++ {
            sum = 0.0
            for k = 0; k < N; k++ {
                sum += A.At(i, k) * B.At(k, j)
            }
            C.Set(i, j, sum)
        }
    }
    return
}

// ParalMultNaivePerRow matrix multiplication O(n^3) in concurrency per row
func (A *Matrix) ParalMultNaivePerRow(B, C *Matrix) (err error) {
    var N = A.N
    if N != B.N || N != C.N {
        return ErrMatrixSize
    }
    wg := sync.WaitGroup{}
    for i := 0; i < N; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            for j := 0; j < N; j++ {
                sum := 0.0
                for k := 0; k < N; k++ {
                    sum += A.At(i, k) * B.At(k, j)
                }
                C.Set(i, j, sum)
            }
        }(i)
    }
    wg.Wait()
    return
}

// ParalMultNaivePerElem matrix multiplication O(n^3) in concurrency per element
func (A *Matrix) ParalMultNaivePerElem(B, C *Matrix) (err error) {
    var N = A.N
    if N != B.N || N != C.N {
        return ErrMatrixSize
    }
    wg := sync.WaitGroup{}
    for i := 0; i < N; i++ {
        for j := 0; j < N; j++ {
            wg.Add(1)
            go func(i, j int) {
                defer wg.Done()
                sum := 0.0
                for k := 0; k < N; k++ {
                    sum += A.At(i, k) * B.At(k, j)
                }
                C.Set(i, j, sum)
            }(i, j)
        }
    }
    wg.Wait()
    return
}
Benchmark:
package naive

import (
    "os"
    "runtime/trace"
    "testing"
)

type Dot func(B, C *Matrix) error

var (
    A = &Matrix{
        N: 8,
        data: [][]float64{
            []float64{1, 2, 3, 4, 5, 6, 7, 8},
            []float64{9, 1, 2, 3, 4, 5, 6, 7},
            []float64{8, 9, 1, 2, 3, 4, 5, 6},
            []float64{7, 8, 9, 1, 2, 3, 4, 5},
            []float64{6, 7, 8, 9, 1, 2, 3, 4},
            []float64{5, 6, 7, 8, 9, 1, 2, 3},
            []float64{4, 5, 6, 7, 8, 9, 1, 2},
            []float64{3, 4, 5, 6, 7, 8, 9, 0},
        },
    }
    B = &Matrix{
        N: 8,
        data: [][]float64{
            []float64{9, 8, 7, 6, 5, 4, 3, 2},
            []float64{1, 9, 8, 7, 6, 5, 4, 3},
            []float64{2, 1, 9, 8, 7, 6, 5, 4},
            []float64{3, 2, 1, 9, 8, 7, 6, 5},
            []float64{4, 3, 2, 1, 9, 8, 7, 6},
            []float64{5, 4, 3, 2, 1, 9, 8, 7},
            []float64{6, 5, 4, 3, 2, 1, 9, 8},
            []float64{7, 6, 5, 4, 3, 2, 1, 0},
        },
    }
    C = &Matrix{
        N: 8,
        data: [][]float64{
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
            []float64{0, 0, 0, 0, 0, 0, 0, 0},
        },
    }
)

func BenchmarkMatrixDotNaive(b *testing.B) {
    f, _ := os.Create("bench.trace")
    defer f.Close()
    trace.Start(f)
    defer trace.Stop()
    tests := []struct {
        name string
        f    Dot
    }{
        {
            name: "A.MultNaive",
            f:    A.MultNaive,
        },
        {
            name: "A.ParalMultNaivePerRow",
            f:    A.ParalMultNaivePerRow,
        },
        {
            name: "A.ParalMultNaivePerElem",
            f:    A.ParalMultNaivePerElem,
        },
    }
    for _, tt := range tests {
        b.Run(tt.name, func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                tt.f(B, C)
            }
        })
    }
}
Performing an 8x8 matrix multiplication is a relatively small amount of work.
Goroutines (although they may be lightweight) do have overhead. If the work they do is "small", the overhead of launching, synchronizing and throwing them away may outweigh the performance gain of utilizing multiple cores / threads, and overall you might not gain performance by executing such small tasks concurrently (hell, you may even do worse than without using goroutines). Measure.
If we increase the matrix size to 80x80 and run the benchmark, we already see some performance gain in the case of ParalMultNaivePerRow:
BenchmarkMatrixDotNaive/A.MultNaive-4 2000 1054775 ns/op
BenchmarkMatrixDotNaive/A.ParalMultNaivePerRow-4 2000 709367 ns/op
BenchmarkMatrixDotNaive/A.ParalMultNaivePerElem-4 100 10224927 ns/op
(As you can see in the results, I have 4 CPU cores; running it on your 8-core machine might show a larger performance gain.)
When rows are small, you are using goroutines to do minimal work. You may improve performance by not "throwing away" goroutines once they're done with their "tiny" work, but "reusing" them instead. See this related question: Is this an idiomatic worker thread pool in Go?
Also see related / possible duplicate: Vectorise a function taking advantage of concurrency
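For illustration, a minimal worker-pool sketch of the per-row variant might look like the following (an assumption-laden sketch, not a tuned implementation; it reuses Matrix, At, Set and ErrMatrixSize from the question and additionally imports runtime). A fixed number of workers pull row indices from a channel instead of spawning one goroutine per row:

// ParalMultNaivePool multiplies per row, reusing a fixed set of workers.
func (A *Matrix) ParalMultNaivePool(B, C *Matrix) error {
    N := A.N
    if N != B.N || N != C.N {
        return ErrMatrixSize
    }
    rows := make(chan int, N)
    var wg sync.WaitGroup
    for w := 0; w < runtime.NumCPU(); w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for i := range rows { // each worker handles many rows
                for j := 0; j < N; j++ {
                    sum := 0.0
                    for k := 0; k < N; k++ {
                        sum += A.At(i, k) * B.At(k, j)
                    }
                    C.Set(i, j, sum)
                }
            }
        }()
    }
    for i := 0; i < N; i++ {
        rows <- i
    }
    close(rows)
    wg.Wait()
    return nil
}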

Go array initialization

func identityMat4() [16]float {
    return {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1 }
}
I hope you get the idea of what I'm trying to do from the example. How do I do this in Go?
func identityMat4() [16]float64 {
    return [...]float64{
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1}
}
How to use an array initializer to initialize a test table block:
tables := []struct {
    input  []string
    result string
}{
    {[]string{"one ", " two", " three "}, "onetwothree"},
    {[]string{" three", "four ", " five "}, "threefourfive"},
}
for _, table := range tables {
    result := StrTrimConcat(table.input...)
    if result != table.result {
        t.Errorf("Result was incorrect. Expected: %v. Got: %v. Input: %v.", table.result, result, table.input)
    }
}
If you were writing your program using Go idioms, you would be using slices. For example,
package main

import "fmt"

func Identity(n int) []float64 {
    m := make([]float64, n*n)
    for i := 0; i < n; i++ {
        for j := 0; j < n; j++ {
            if i == j {
                m[i*n+j] = 1.0
            }
        }
    }
    return m
}

func main() {
    fmt.Println(Identity(4))
}
Output: [1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1]
