I have a sample recursive program on the Go Playground. The input string contains two "?" characters, and the goal is to generate all binary strings by replacing each ? with 0 or 1. It should display 4 results, but it only displays 3: 1100101 is missing.
package main

import "fmt"

func main() {
    str := "1?0?101"
    mstr := []byte(str)
    q := []byte("?")[0]
    a := []byte("0")[0]
    b := []byte("1")[0]
    fmt.Println(mstr)
    allstr(mstr, 0, len(mstr), q, a, b)
}
func allstr(mstr []byte, index int, size int, q, a, b byte) {
    if index >= size {
        fmt.Println(string(mstr))
        return
    }
    if mstr[index] == q {
        mstr[index] = a
        allstr(mstr, index+1, size, q, a, b)
        mstr[index] = b
        allstr(mstr, index+1, size, q, a, b)
    } else {
        allstr(mstr, index+1, size, q, a, b)
    }
}
Go playground: https://play.golang.org/p/4e5NIOS9fG4
Output:
[49 63 48 63 49 48 49]
1000101
1001101
1101101
You need to undo the writes to the shared byte slice as you backtrack out of the recursion. After the b branch returns, the ? position is left holding b, so when an earlier frame flips its own ? and recurses again, the deeper position no longer matches q and only one branch is explored:
if mstr[index] == q {
    mstr[index] = a
    allstr(mstr, index+1, size, q, a, b)
    mstr[index] = b
    allstr(mstr, index+1, size, q, a, b)
    mstr[index] = q // <--- add this: restore the ? before returning
}
https://play.golang.org/p/-JEsVGFcsQo
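For reference, here is a compact runnable version with the restore step applied; it prints all four strings. (This sketch uses the byte literals '?', '0', '1' in place of the original q, a, b parameters.)

package main

import "fmt"

func allstr(mstr []byte, index int) {
    if index >= len(mstr) {
        fmt.Println(string(mstr))
        return
    }
    if mstr[index] == '?' {
        mstr[index] = '0'
        allstr(mstr, index+1)
        mstr[index] = '1'
        allstr(mstr, index+1)
        mstr[index] = '?' // restore before returning to the caller
    } else {
        allstr(mstr, index+1)
    }
}

func main() {
    // prints 1000101, 1001101, 1100101, 1101101
    allstr([]byte("1?0?101"), 0)
}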
I'm working on some code to find all palindromic partitions of a string:
package main

import "fmt"

func palindrome(s string) bool {
    for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
        if s[i] != s[j] {
            return false
        }
    }
    return true
}

func dfs(s string, start int, sol *[][]string, curr *[]string) {
    if start == len(s) {
        *sol = append(*sol, *curr)
        fmt.Println("intermediate value:", *sol)
        return
    }
    for i := start + 1; i <= len(s); i++ {
        substr := s[start:i]
        if palindrome(substr) {
            *curr = append(*curr, substr)
            dfs(s, i, sol, curr)
            *curr = (*curr)[:len(*curr)-1]
        }
    }
}

func main() {
    sol := [][]string{}
    dfs("aab", 0, &sol, new([]string))
    fmt.Println("last value:", sol)
}
The program outputs:
intermediate value: [[a a b]]
intermediate value: [[aa b b] [aa b]]
last value: [[aa b b] [aa b]]
Looks like when function dfs() returns, sol gets corrupted and its first element changes from [a a b] to [aa b b].
I can't figure out what's wrong with how I declare and use parameters sol and curr.
From the comments posted by JimB and Ricardo Souza, the fix is an extra append when updating *sol:
*sol = append(*sol, append([]string{}, (*curr)...))
This change appends a copy of the contents of *curr. The original code stored the slice header of *curr itself; that header still points at the shared backing array, so the caller's later appends and truncations overwrite the elements the stored result refers to. Also, curr doesn't need to be a pointer type; a plain []string parameter works.
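For reference, a minimal sketch of the fixed dfs with curr passed by value (the names are kept from the question; only the marked lines differ):

func dfs(s string, start int, sol *[][]string, curr []string) {
    if start == len(s) {
        // append a copy of curr so later writes cannot alias this entry
        *sol = append(*sol, append([]string{}, curr...))
        return
    }
    for i := start + 1; i <= len(s); i++ {
        substr := s[start:i]
        if palindrome(substr) {
            curr = append(curr, substr)
            dfs(s, i, sol, curr)
            curr = curr[:len(curr)-1]
        }
    }
}

Called as dfs("aab", 0, &sol, nil), this yields last value: [[a a b] [aa b]].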
I am implementing a matrix-matrix multiplication algorithm in Go, and I cannot work out how to write to the output matrix in place. I have tried changing the parameter to a pointer, but it seems 2D slices cannot be passed as pointers?
package main

import (
    "fmt"
    "math/rand"
    "os"
    "strconv"
    "time"
)

func main() {
    L := len(os.Args)
    m, n, p, q, err := mapVars(L, os.Args)
    if err != 0 {
        fmt.Fprintf(os.Stderr, "error: Incorrect command line arguments.\n")
        os.Exit(1)
    }
    fmt.Println("The product array has dimensions.")
    fmt.Printf("\tC is %dx%d\n", m, q)
    fmt.Println("\nPopulating matrix A.")
    A, _ := createMat(m, n)
    fmt.Println("Matrix A.")
    printMat(m, A)
    fmt.Println("\nPopulating matrix B.")
    B, _ := createMat(p, q)
    fmt.Println("Matrix B.")
    printMat(p, B)
    fmt.Println("\nPerforming row-wise matrix-matrix multiplication AB.")
    startRow := time.Now()
    C := rowMultMat(m, n, q, A, B)
    dtRow := time.Since(startRow)
    fmt.Printf("Time elapsed: %v\n", dtRow)
    fmt.Println("Matrix C.")
    printMat(m, C) // C has m rows; printing q rows was a bug whenever m != q
}
func mapVars(l int, args []string) (m int, n int, p int, q int, err int) {
    if l == 2 {
        m, _ := strconv.Atoi(args[1])
        n, _ := strconv.Atoi(args[1])
        p, _ := strconv.Atoi(args[1])
        q, _ := strconv.Atoi(args[1])
        fmt.Printf("Creating two arrays, A, B, with square dimensions.\n")
        fmt.Printf("\tA is %dx%d\n\tB is %dx%d\n", m, n, p, q)
        return m, n, p, q, 0
    } else if l == 5 { // n and p are still 0 here, so the original "|| n != p" could never be true
        m, _ := strconv.Atoi(args[1])
        n, _ := strconv.Atoi(args[2])
        p, _ := strconv.Atoi(args[3])
        q, _ := strconv.Atoi(args[4])
        fmt.Println("Creating two arrays, A, B, with dimensions.")
        fmt.Printf("\tA is %dx%d\n\tB is %dx%d\n", m, n, p, q)
        return m, n, p, q, 0
    } else {
        fmt.Println("Incorrect command line arguments.")
        return 0, 0, 0, 0, 1
    }
}
func initMat(m int, n int) (M [][]float64, rows []float64) {
    M = make([][]float64, m)
    rows = make([]float64, n*m)
    for i := 0; i < m; i++ {
        M[i] = rows[i*n : (i+1)*n]
    }
    return M, rows
}
func createMat(m int, n int) (M [][]float64, rows []float64) {
    M = make([][]float64, m)
    rows = make([]float64, n*m)
    for i := 0; i < m; i++ {
        for j := 0; j < n; j++ {
            rows[i*n+j] = float64(rand.Int63() % 10)
        }
        M[i] = rows[i*n : (i+1)*n]
    }
    return M, rows
}
func printMat(row int, M [][]float64) {
    for i := 0; i < row; i++ {
        fmt.Printf("%v\n", M[i])
    }
}
func rowMultMat(m int, n int, q int, A [][]float64, B [][]float64) (C [][]float64) {
    C, _ = initMat(m, q)
    for i := 0; i < m; i++ {
        for j := 0; j < q; j++ {
            var total float64
            for k := 0; k < n; k++ {
                total += A[i][k] * B[k][j]
            }
            C[i][j] = total
        }
    }
    return C
}
Currently I am initializing the matrix inside rowMultMat because I am unable to pass C as a pointer to a 2D slice. For example, running go run main.go 2 3 3 2 multiplies a 2x3 matrix by a 3x2 matrix to yield a 2x2 result.
A slice is already a reference value. If you pass a slice into a function, the function can modify its contents (*) and the modifications will be visible to the caller once it returns.
Alternatively, returning a new slice is also efficient, because again, slices are just references and don't take up much memory.
(*) By contents here I mean the contents of the underlying array the slice points to. Some attributes like the slice's length cannot be changed in this way; if your function needs to make the slice longer, for example, you'll have to pass in a pointer to a slice.
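Applied to the code above, a minimal sketch of the in-place variant: the caller allocates C once (with initMat) and the multiply writes into it through the slice parameter. rowMultMatInto is a hypothetical name, not from the question:

func rowMultMatInto(m, n, q int, A, B, C [][]float64) {
    for i := 0; i < m; i++ {
        for j := 0; j < q; j++ {
            var total float64
            for k := 0; k < n; k++ {
                total += A[i][k] * B[k][j]
            }
            C[i][j] = total // writes are visible to the caller
        }
    }
}

The caller would do C, _ := initMat(m, q) and then rowMultMatInto(m, n, q, A, B, C); no pointer to the 2D slice is needed, because C[i][j] = total mutates the shared backing array.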
I've gotten stuck on Exercise 4.1 from the book, which says:
Write a function that counts the number of bits that are different in two SHA256 hashes.
The partial solution I came up with is pasted below, but it's wrong: it counts the number of different bytes, not bits.
Could you please point me in the right direction?
package main

import (
    "crypto/sha256"
    "fmt"
)

var s1 string = "unodostresquatro"
var s2 string = "UNODOSTRESQUATRO"
var h1 = sha256.Sum256([]byte(s1))
var h2 = sha256.Sum256([]byte(s2))

func main() {
    fmt.Printf("s1: %s h1: %X h1 type: %T\n", s1, h1, h1)
    fmt.Printf("s2: %s h2: %X h2 type: %T\n", s2, h2, h2)
    fmt.Printf("Number of different bits: %d\n", 8*DifferentBits(h1, h2))
}
func DifferentBits(c1 [32]uint8, c2 [32]uint8) int {
    var counter int
    for x := range c1 {
        if c1[x] != c2[x] {
            counter += 1
        }
    }
    return counter
}
The Go Programming Language
Alan A. A. Donovan · Brian W. Kernighan
Exercise 4.1: Write a function that counts the number of bits that
are different in two SHA256 hashes.
The C Programming Language
Brian W. Kernighan · Dennis M. Ritchie
Exercise 2-9. In a two's complement number system, x &= (x-1) deletes
the rightmost 1-bit in x. Use this observation to write a faster
version of bitcount.
Bit Twiddling Hacks
Sean Eron Anderson
Counting bits set, Brian Kernighan's way
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
    v &= v - 1; // clear the least significant bit set
}
For exercise 4.1, you are counting the number of bytes that are different. Count the number of bits that are different instead. For example,
package main

import (
    "crypto/sha256"
    "fmt"
)

func BitsDifference(h1, h2 *[sha256.Size]byte) int {
    n := 0
    for i := range h1 {
        // XOR isolates the differing bits; b &= b - 1 clears the
        // lowest set bit (Kernighan's method)
        for b := h1[i] ^ h2[i]; b != 0; b &= b - 1 {
            n++
        }
    }
    return n
}
func main() {
    s1 := "unodostresquatro"
    s2 := "UNODOSTRESQUATRO"
    h1 := sha256.Sum256([]byte(s1))
    h2 := sha256.Sum256([]byte(s2))
    fmt.Println(BitsDifference(&h1, &h2))
}
Output:
139
Here is how I would do it:
package main

import (
    "crypto/sha256"
    "fmt"
)

var (
    s1 string = "unodostresquatro"
    s2 string = "UNODOSTRESQUATRO"
    h1        = sha256.Sum256([]byte(s1))
    h2        = sha256.Sum256([]byte(s2))
)

func main() {
    fmt.Printf("s1: %s h1: %X h1 type: %T\n", s1, h1, h1)
    fmt.Printf("s2: %s h2: %X h2 type: %T\n", s2, h2, h2)
    fmt.Printf("Number of different bits: %d\n", DifferentBits(h1, h2))
}

// bitCount counts the number of bits set in x
func bitCount(x uint8) int {
    count := 0
    for x != 0 {
        x &= x - 1 // clear the least significant set bit
        count++
    }
    return count
}

func DifferentBits(c1 [32]uint8, c2 [32]uint8) int {
    var counter int
    for x := range c1 {
        counter += bitCount(c1[x] ^ c2[x])
    }
    return counter
}
Playground
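As a side note, since Go 1.9 the standard library's math/bits package provides popcount functions, so the hand-rolled bitCount can be dropped. A minimal sketch of a drop-in replacement for DifferentBits above (add "math/bits" to the imports):

import "math/bits"

func DifferentBits(c1, c2 [32]uint8) int {
    counter := 0
    for x := range c1 {
        counter += bits.OnesCount8(c1[x] ^ c2[x])
    }
    return counter
}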
I'm building a Lisp, and I want 32-bit integers to automatically switch to 64-bit integers if a computation would otherwise cause them to overflow, and likewise to switch from 64-bit to arbitrary-precision integers on 64-bit overflow.
The problem I have is that I don't know what the "correct" way to detect an integer overflow is.
a, b := 2147483647, 2147483647
c := a + b
How can I efficiently check if c overflowed?
I have considered always converting to 64-bit values to do the calculation and then down-sizing again afterwards when possible, but that seems expensive and memory-wasteful for something as primitive and core to the language as basic arithmetic.
For example, to detect 32-bit integer overflow for addition,
package main

import (
    "errors"
    "fmt"
    "math"
)

var ErrOverflow = errors.New("integer overflow")

func Add32(left, right int32) (int32, error) {
    if right > 0 {
        if left > math.MaxInt32-right {
            return 0, ErrOverflow
        }
    } else {
        if left < math.MinInt32-right {
            return 0, ErrOverflow
        }
    }
    return left + right, nil
}

func main() {
    var a, b int32 = 2147483327, 2147483327
    c, err := Add32(a, b)
    if err != nil {
        // handle overflow
        fmt.Println(err, a, b, c)
    }
}
Output:
integer overflow 2147483327 2147483327 0
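A sketch of how such a check could drive the promotion the question asks about: try the 32-bit add first and fall back to 64 bits on overflow. (addOrPromote is a hypothetical name, not part of the answer above; it builds on the Add32 function shown there.)

// addOrPromote returns an int32 when the sum fits, otherwise an int64.
func addOrPromote(a, b int32) interface{} {
    if sum, err := Add32(a, b); err == nil {
        return sum // still fits in 32 bits
    }
    // the sum of two int32 values always fits in an int64
    return int64(a) + int64(b)
}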
For 32-bit integers, the standard way is, as you said, to convert to 64 bits, do the arithmetic, then narrow again [1]:
package main

func add32(x, y int32) (int32, int32) {
    sum64 := int64(x) + int64(y)
    return x + y, int32(sum64 >> 31)
}

func main() {
    {
        s, c := add32(2147483646, 1)
        println(s == 2147483647, c == 0)
    }
    {
        s, c := add32(2147483647, 1)
        println(s == -2147483648, c == 1)
    }
}
However, if you don't like that, you can use some bit operations [2]. In the expression below, x & y marks the bit positions where a carry is generated and (x | y) &^ sum marks the positions where one is propagated:
func add32(x, y int32) (int32, int32) {
    sum := x + y
    return sum, x & y | (x | y) &^ sum >> 30
}
[1] https://github.com/golang/go/blob/go1.16.3/src/math/bits/bits.go#L368-L373
[2] https://github.com/golang/go/blob/go1.16.3/src/math/bits/bits.go#L380-L387
I'm writing a function that returns a sequence of numbers of variable length:
func fib(n int) ??? {
    retval := ???
    a, b := 0, 1
    for ; n > 0; n-- {
        ??? // append a onto retval here
        c := a + b
        a = b
        b = c
    }
}
It can be observed that the final length of the returned sequence will be n. How and what should fib return to be idiomatic Go? If the length were not known in advance, how would the return value and its usage differ? How do I insert values into retval?
Here, we know how many numbers; we want n Fibonacci numbers.
package main

import "fmt"

func fib(n int) (f []int) {
    if n < 0 {
        n = 0
    }
    f = make([]int, n)
    a, b := 0, 1
    for i := 0; i < len(f); i++ {
        f[i] = a
        a, b = b, a+b
    }
    return
}

func main() {
    f := fib(7)
    fmt.Println(len(f), f)
}
Output: 7 [0 1 1 2 3 5 8]
Here, we don't know how many numbers; we want all the Fibonacci numbers less than or equal to n.
package main

import "fmt"

func fibMax(n int) (f []int) {
    a, b := 0, 1
    for a <= n {
        f = append(f, a) // append grows f as needed
        a, b = b, a+b
    }
    return
}

func main() {
    f := fibMax(42)
    fmt.Println(len(f), f)
}
Output: 10 [0 1 1 2 3 5 8 13 21 34]
You could also use IntVector from the Go vector package (container/vector, since removed from the standard library). Note that type IntVector []int.
Don't use vectors; use slices. Here is a mapping of various vector operations to idiomatic slice operations, sketched below.
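A minimal runnable sketch of a few of those mappings, assuming a []int in place of an IntVector (the vector method names in the comments are from memory of the old API):

package main

import "fmt"

func main() {
    v := []int{}
    v = append(v, 1)    // v.Push(1)
    v = append(v, 2, 3) // push several at once
    last := v[len(v)-1] // v.Last()
    v = v[:len(v)-1]    // v.Pop()
    w := []int{4, 5}
    v = append(v, w...) // v.AppendVector(w)
    fmt.Println(v, last) // [1 2 4 5] 3
}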