How to unpack 2, 2 and 3 bits out of a byte - go

Assuming I have three values (two 2-bit values and one 3-bit value) packed into a single byte like this:
func pack(a, b, c byte) byte { // is there a more efficient way to pack them?
    return a<<6 | b<<4 | c
}

func main() {
    v := pack(1, 2, 6)
    a := v >> 6
    b := v >> 4 // wrong
    c := v & 7
    fmt.Println(v, a, b, c)
}
How do I unpack b?

You need to mask off the unused bits like you've already done for c. I also added masks to the pack function, to prevent accidental overlapping of values:
const (
    threeBits = 0x7
    twoBits   = 0x3
)

func pack(a, b, c byte) byte {
    return a<<6 | b&twoBits<<4 | c&threeBits
}

func main() {
    v := pack(1, 2, 6)
    a := v >> 6
    b := v >> 4 & twoBits
    c := v & threeBits
    fmt.Println(v, a, b, c)
}
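To check the arithmetic: pack(1, 2, 6) produces 1<<6 | 2<<4 | 6 = 64 | 32 | 6 = 102, and unpacking gives a = 102>>6 = 1, b = (102>>4) & 3 = 2 and c = 102 & 7 = 6, so the program prints 102 1 2 6.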

Related

Creating 8 bit binary data from 4,3, and 1 bit data in Golang

I need to form a header (8 bits) using a version (4 bits), count (3 bits), identifier (1 bit). How can I achieve this in Go? For example:
version: 1 (0001)
count: 3 (011)
identifier: 1(1)
Header: 00010111 (23)
I'm doing the following which works but there's a lot of cumbersome code. How can I do this efficiently?
const (
    VersionSize    binary.Bits = 4
    countSize      binary.Bits = 3
    IdentifierSize binary.Bits = 1
)

type header struct {
    version    uint8
    count      uint8
    identifier uint8
}

func main() {
    headerObj := &header{version: 1, count: 3, identifier: 1}
    headerBytes := headerObj.encode()
    // prints [23]
    fmt.Println(headerBytes)
}

func (h *header) encode() []byte {
    var header []byte
    vercountIdBinary := toBinary(h.version, VersionSize) + toBinary(h.count, countSize) + toBinary(h.identifier, IdentifierSize)
    vercountIdByte, _ := strconv.ParseInt(vercountIdBinary, 2, 8)
    header = append(header, byte(vercountIdByte))
    return header
}

func toBinary(value interface{}, bitSize binary.Bits) string {
    format := "%0" + strconv.Itoa(int(bitSize)) + "b"
    return fmt.Sprintf(format, value)
}
Packing and unpacking bits into a number can be achieved simply with bit masking and shifting.
For example, to pack values into a number: mask and assign the first, then shift the result left by the bit-size of the next field (to make enough room for it). Mask the 2nd value and "add" it using bitwise OR. Then shift again by the size of the 3rd field, and repeat.
To unpack: mask the packed number with the mask of the last field, and you have the last value. Shift the data right by the size of the field you just decoded, mask with the next field's mask (working in reverse order), and you have that value. Repeat until all fields are decoded.
For example, this packs identifier into the most significant bit, count into the middle bits and version into the least significant bits; you may do the opposite by packing the fields in reverse order:
const (
    BitsVersion = 4
    BitsCount   = 3
    BitsId      = 1
)

const (
    MaskVersion = 1<<BitsVersion - 1
    MaskCount   = 1<<BitsCount - 1
    MaskId      = 1<<BitsId - 1
)

type header struct {
    version    uint8
    count      uint8
    identifier uint8
}

func (h *header) ToByte() uint8 {
    var b uint8
    b = h.identifier & MaskId
    b <<= BitsCount
    b |= h.count & MaskCount
    b <<= BitsVersion
    b |= h.version & MaskVersion
    return b
}

func (h *header) ParseByte(b uint8) {
    h.version = b & MaskVersion
    b >>= BitsVersion
    h.count = b & MaskCount
    b >>= BitsCount
    h.identifier = b & MaskId
}
Testing it:
h := &header{
    version:    3,
    count:      2,
    identifier: 1,
}
fmt.Printf("%+v\n", h)

b := h.ToByte()
h2 := &header{}
h2.ParseByte(b)
fmt.Printf("%+v\n", h2)
Which will output (try it on the Go Playground):
&{version:3 count:2 identifier:1}
&{version:3 count:2 identifier:1}
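For this test header (version 3, count 2, identifier 1) the packed byte is 1 010 0011 in binary, i.e. 163: the identifier sits in the top bit, count in the next three bits and version in the low four.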
Note: the above example encodes the fields in id-count-version order. The order of fields doesn't matter as long as both packing and unpacking use the same order. If you need the reverse order (version-count-id), simply reverse the order in which the fields are packed / unpacked. Here's how to do that:
func (h *header) ToByte() uint8 {
    var b uint8
    b = h.version & MaskVersion
    b <<= BitsCount
    b |= h.count & MaskCount
    b <<= BitsId
    b |= h.identifier & MaskId
    return b
}

func (h *header) ParseByte(b uint8) {
    h.identifier = b & MaskId
    b >>= BitsId
    h.count = b & MaskCount
    b >>= BitsCount
    h.version = b & MaskVersion
}
This outputs the same. Try this one on the Go Playground.
Note that if you have to do this with multiple data, targeting an io.Writer stream, you may use the github.com/icza/bitio library (disclosure: I'm the author).
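For the streaming case, a rough sketch of what that might look like with bitio is below. This is my illustration, not code from the answer, and the API details (NewWriter, WriteBits, Close, NewReader, ReadBits) are recalled from the library's documentation, so verify them against the package before relying on this.

package main

import (
    "bytes"
    "fmt"

    "github.com/icza/bitio"
)

func main() {
    buf := &bytes.Buffer{}

    // Write identifier (1 bit), count (3 bits) and version (4 bits) to the stream.
    w := bitio.NewWriter(buf)
    w.WriteBits(1, 1) // identifier
    w.WriteBits(2, 3) // count
    w.WriteBits(3, 4) // version
    w.Close()         // flush the final, partially filled byte

    // Read them back in the same order.
    r := bitio.NewReader(buf)
    id, _ := r.ReadBits(1)
    count, _ := r.ReadBits(3)
    version, _ := r.ReadBits(4)
    fmt.Println(id, count, version)
}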

What should I change in the code to generate a Fibonacci sequence starting from 0 1 1

I've searched older questions, there are tons of them. However I couldn't find the answer to my case.
func fibonacci() func() int {
    y := 0
    z := 1
    return func() int {
        res := y + z
        y = z
        z = res
        return res
    }
}

func main() {
    f := fibonacci()
    for i := 0; i < 10; i++ {
        fmt.Println(f())
    }
}
This produces 1 2 3 5 8 ...
What should I change (as little as possible) to get 0 1 1 2 3 5 8 ?
Actually I managed to solve that if initial y and z were like this:
y := -1
z := 1
But that's a fortunate hack, and I want a logical solution.
Change the returned closure to this:
return func() int {
    res := y
    y = z
    z = res + z
    return res
}
This way you output the initial values first, and calculate the next values. Your current solution overwrites the initial values before they are returned.
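Tracing it with y = 0, z = 1: the first call returns res = 0 and then advances the pair to (1, 1), the second call returns 1 and advances to (1, 2), the third returns 1, then 2, 3, 5 and so on, giving the desired 0 1 1 2 3 5 8 ...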
If you added x := y at the top of the closure and changed the return statement to return x, you would be returning the initial y := 0 value instead of the computed res := y + z, i.e. returning values two positions earlier in the sequence, giving you 0, 1, 1, 2, 3, 5, ...
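A sketch of that variant (assuming x := y is placed as the first statement of the closure):

return func() int {
    x := y // capture the current value before it is overwritten
    res := y + z
    y = z
    z = res
    return x // yields 0, 1, 1, 2, 3, 5, ...
}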
(But I wouldn’t consider the -1, 1 initializer a hack.)
For example,
package main

import "fmt"

// fibonacci returns a function that returns
// successive Fibonacci numbers.
func fibonacci() func() int {
    a, b := 0, 1
    return func() (f int) {
        if a < 0 {
            panic("overflow")
        }
        f, a, b = a, b, a+b
        return f
    }
}

func main() {
    f := fibonacci()
    for i := 0; i < 10; i++ {
        fmt.Println(f())
    }
}
Playground: https://play.golang.org/p/uYHEK_ZgE6K
Output:
0
1
1
2
3
5
8
13
21
34

New to go; how to use math/big

I am new to Go but not to programming. I am trying to implement a few functions on prime numbers as a way to learn. Here's my code, which you can run at http://ideone.com/qxLQ0D:
// prime numbers
package main

import (
    "fmt"
)

// list of primes less than n:
// sieve of eratosthenes
func primes(n int) (ps []int) {
    sieve := make([]bool, n)
    for i := 2; i < n; i++ {
        if !sieve[i] {
            ps = append(ps, i)
            for j := i * i; j < n; j += i {
                sieve[j] = true
            }
        }
    }
    return ps
}

// true if n is prime, else false:
// trial division via 2,3,5-wheel
func isPrime(n int) bool {
    wheel := [11]int{1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6}
    w := 0
    f := 2
    for f*f <= n {
        if n%f == 0 { return false }
        f += wheel[w]
        w += 1
        if w == 11 { w = 3 }
    }
    return true
}

// greatest common divisor of x and y:
// via euclid's algorithm
func gcd(x int, y int) int {
    for y != 0 {
        x, y = y, x%y
    }
    return x
}

// absolute value of x
func abs(x int) int {
    if x < 0 {
        return -1 * x
    }
    return x
}

// list of prime factors of n:
// trial division via 2,3,5-wheel
// to 1000 followed by pollard rho
func factors(n int) (fs []int) {
    wheel := [11]int{1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6}
    w := 0 // wheel pointer
    f := 2 // current trial factor
    for f*f <= n && f < 1000 {
        for n%f == 0 {
            fs = append(fs, f)
            n /= f
        }
        f += wheel[w]; w += 1
        if w == 11 { w = 3 }
    }
    if n == 1 { return fs }
    h := 1 // hare
    t := 1 // turtle
    g := 1 // greatest common divisor
    c := 1 // random number parameter
    for !isPrime(n) {
        for g == 1 {
            h = (h*h + c) % n // the hare runs
            h = (h*h + c) % n // twice as fast
            t = (t*t + c) % n // as the tortoise
            g = gcd(abs(t-h), n)
        }
        if isPrime(g) {
            for n%g == 0 {
                fs = append(fs, g)
                n /= g
            }
        }
        h, t, g, c = 1, 1, 1, c+1
    }
    fs = append(fs, n)
    return fs
}

func main() {
    fmt.Println(primes(100))
    fmt.Println(isPrime(997))
    fmt.Println(isPrime(13290059))
    fmt.Println(factors(13290059))
}
That works fine. I would like to know how to initialize wheel as a constant at compile time so that it can be shared by isPrime and factors, and I would appreciate any comments on style or other aspects of my program.
I eventually want to implement some factoring algorithms on big integers, using the math/big package. But I'm having much trouble. Simplifying to just the trial division via a 2,3,5-wheel part of the algorithm, here's my code:
package main

import (
    "fmt"
    "math/big"
)

func factors(n big.Int) (fs []big.Int) {
    zero := big.NewInt(0)
    one := big.NewInt(1)
    two := big.NewInt(2)
    four := big.NewInt(4)
    six := big.NewInt(6)
    wheel := [11]big.Int{*one, *two, *two, *four, *two, *four, *two, *four, *six, *two, *six}
    w := 0
    f := two
    for big.Mul(*f, *f).Cmp(n) <= 0 {
        for big.Mod(n, *f).Cmp(*zero) {
            fs = append(fs, *f)
            n = big.Div(n, *f)
        }
        f = big.Add(f, wheel[w])
        w += 1
        if w > 11 { w = 3 }
    }
    fs = append(fs, n)
    return fs
}

func main() {
    fmt.Println(factors(*big.NewInt(13290059)))
}
That doesn't work; ideone complains that the Add, Div, Mod and Mul functions are not found. And it looks rather ugly to me, stylistically.
Please tell me how to fix my factors function.
EDIT 1: Thanks to @TClaverie, I now have a function that compiles. Now I am getting a runtime error, and ideone points to the Mul function. Once again, can anyone help? My revised code is shown below and at http://ideone.com/aVBgJg:
package main

import (
    "fmt"
    "math/big"
)

func factors(n *big.Int) (fs []big.Int) {
    var z *big.Int
    zero := big.NewInt(0)
    one := big.NewInt(1)
    two := big.NewInt(2)
    four := big.NewInt(4)
    six := big.NewInt(6)
    wheel := [11]*big.Int{one, two, two, four, two, four, two, four, six, two, six}
    w := 0
    f := two
    z.Mul(f, f)
    for z.Cmp(n) <= 0 {
        z.Mod(n, f)
        for z.Cmp(zero) == 0 {
            fs = append(fs, *f)
            n.Div(n, f)
            z.Mod(n, f)
        }
        f.Add(f, wheel[w])
        w += 1
        if w > 11 { w = 3 }
    }
    fs = append(fs, *n)
    return fs
}

func main() {
    fmt.Println(factors(big.NewInt(13290059)))
}
EDIT 2: Thanks to @TClaverie, I've learned a lot about Go, and I'm close to a solution. But I still have one problem; the program
package main

import (
    "fmt"
    "math/big"
)

func main() {
    one := big.NewInt(1)
    two := big.NewInt(2)
    four := big.NewInt(4)
    six := big.NewInt(6)
    wheel := [11]*big.Int{one, two, two, four, two, four, two, four, six, two, six}
    f := two
    w := 0
    for f.Cmp(big.NewInt(40)) < 0 {
        fmt.Println(f, w, wheel)
        f.Add(f, wheel[w])
        w += 1
        if w == 11 { w = 3 }
    }
}
prints the following output, which shows that wheel is being modified in the call to Add:
2 0 [1 2 2 4 2 4 2 4 6 2 6]
3 1 [1 3 3 4 3 4 3 4 6 3 6]
6 2 [1 6 6 4 6 4 6 4 6 6 6]
12 3 [1 12 12 4 12 4 12 4 6 12 6]
16 4 [1 16 16 4 16 4 16 4 6 16 6]
32 5 [1 32 32 4 32 4 32 4 6 32 6]
36 6 [1 36 36 4 36 4 36 4 6 36 6]
What's the right way to prevent that from happening?
So, if you look at the documentation, you'll see that Add, Div and Mul are defined for the type *big.Int, so you have to call them using a *big.Int with the dot notation. Also, they expect arguments of type *big.Int, but you're giving them big.Int.
If you look at the documentation, you'll also see that those methods have the form z.Op(x, y): they compute x Op y and store the result in the receiver z (which they also return). So you need a scratch *big.Int, which I'll call z.
Finally, it's better to work with pointers in this case, as all methods work with pointers.
func factors(n big.Int) (fs []big.Int) --> func factors(n *big.Int) (fs []big.Int)
wheel := [11]big.Int{*one,*two,*two,*four,*two,*four,*two,*four,*six,*two,*six} -->
wheel := [11]*big.Int{one,two,two,four,two,four,two,four,six,two,six}
big.Mul(*f, *f) --> z.Mul(f, f)
big.Mod(n, *f) --> z.Mod(n, f)
n = big.Div(n, *f) --> n.Div(n, f)
f = big.Add(f, wheel[w]) --> f.Add(f, wheel[w])
One last thing: the condition of your inner for loop is broken, because Cmp returns an int and a for condition must be a boolean.
So, I do not guarantee the code works after those modifications, but you will be able to make it compile and debug it.
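Putting those transformations together, here is a minimal sketch of a version that compiles and also sidesteps the aliasing problem from EDIT 2 (my consolidation, not code from the answer): the scratch value z is allocated with new(big.Int) instead of being left a nil pointer, f is a copy of two made with Set so that f.Add no longer modifies the shared wheel entries, the wheel index resets at w == 11 (the w > 11 comparison in the question would index past the end of wheel), and it returns []*big.Int to avoid copying big.Int values.

package main

import (
    "fmt"
    "math/big"
)

func factors(n *big.Int) (fs []*big.Int) {
    one := big.NewInt(1)
    two := big.NewInt(2)
    four := big.NewInt(4)
    six := big.NewInt(6)
    wheel := [11]*big.Int{one, two, two, four, two, four, two, four, six, two, six}

    w := 0
    f := new(big.Int).Set(two) // a copy, so f.Add does not touch the wheel entries
    z := new(big.Int)          // allocated scratch value for intermediate results

    for z.Mul(f, f).Cmp(n) <= 0 {
        for z.Mod(n, f).Sign() == 0 {
            fs = append(fs, new(big.Int).Set(f)) // record a snapshot of the factor
            n.Div(n, f)
        }
        f.Add(f, wheel[w])
        w += 1
        if w == 11 {
            w = 3
        }
    }
    fs = append(fs, new(big.Int).Set(n)) // whatever remains is prime
    return fs
}

func main() {
    fmt.Println(factors(big.NewInt(13290059))) // should print [3119 4261]
}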

Why do floats and ints = NaN? in Go

package main

import (
    "fmt"
    "math"
)

func main() {
    // x = (-b ± sqrt(b² - 4ac)) / 2a
    cal()
}

func cal() {
    b := 3
    a := 4
    c := 2
    b2 := float64(b * b)
    ac := float64(4) * float64(a) * float64(c)
    q := math.Sqrt(b2 - ac)
    fmt.Print(q)
}
This will output NaN, but why? I am trying to make a quadratic calculator. All I want is for this to output the number.
Because you're trying to take the square root of a negative number, which isn't a valid operation over the real numbers (not just in Go), so it returns NaN, which is an acronym for Not a Number.
b := 3
a := 4
c := 2
b2 := float64(b * b)                       // sets b2 == 9
ac := float64(4) * float64(a) * float64(c) // ac == 32
q := math.Sqrt(b2 - ac)                    // Sqrt(9-32) == Sqrt(-23) == NaN
fmt.Print(q)
q = math.Sqrt(math.Abs(b2 - ac)) // suggested in comments: does Sqrt(23) == ~4.79, perhaps the outcome you're looking for
EDIT: please don't argue semantics on the math bit. If you want to discuss square roots of negative numbers, this isn't the place. Generally speaking, it is not possible to take the square root of a negative number within the real numbers.
Since you're taking the square root of a negative number, you've got an imaginary result (sqrt(-9) == 3i). This is assuredly NOT what you're trying to do. Instead, do:
func main() {
    b := float64(3)
    a := float64(4)
    c := float64(2)
    result := [2]float64{
        (-b + math.Sqrt(math.Abs(b*b - 4*a*c))) / (2 * a),
        (-b - math.Sqrt(math.Abs(b*b - 4*a*c))) / (2 * a),
    }
    fmt.Println(result)
}
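If you actually want the complex roots rather than taking the absolute value of the discriminant, here is a minimal sketch using the standard math/cmplx package (my addition, not part of the original answer):

package main

import (
    "fmt"
    "math/cmplx"
)

func main() {
    a, b, c := complex(4, 0), complex(3, 0), complex(2, 0)

    // The discriminant b*b - 4*a*c is -23 here, so its square root is
    // imaginary and the two roots are complex conjugates.
    d := cmplx.Sqrt(b*b - 4*a*c)
    x1 := (-b + d) / (2 * a)
    x2 := (-b - d) / (2 * a)
    fmt.Println(x1, x2)
}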
You are taking the Sqrt of a negative number, and for this reason it always returns NaN (Not a Number).
I ran your code and printed the intermediate results:
b := 3
a := 4
c := 2
b2 := float64(b*b)
fmt.Printf("%.2f \n", b2)
ac := float64(4)*float64(a)*float64(c)
fmt.Printf("%.2f \n", ac)
fmt.Printf("%.2f \n", b2-ac)
q := math.Sqrt(b2-ac)
fmt.Print(q)
Console:
9.00
32.00
-23.00
NaN
Sqrt in Golang: https://golang.org/pkg/math/#Sqrt

Returning the length of a vector idiomatically

I'm writing a function that returns a sequence of numbers of variable length:
func fib(n int) ??? {
    retval := ???
    a, b := 0, 1
    for ; n > 0; n-- {
        ??? // append a onto retval here
        c := a + b
        a = b
        b = c
    }
}
It can be observed that the final length of the returned sequence will be n. How and what should fib return to achieve idiomatic Go? If the length were not known in advance, how would the return value and its usage differ? How do I insert values into retval?
Here, we know how many numbers; we want n Fibonacci numbers.
package main

import "fmt"

func fib(n int) (f []int) {
    if n < 0 {
        n = 0
    }
    f = make([]int, n)
    a, b := 0, 1
    for i := 0; i < len(f); i++ {
        f[i] = a
        a, b = b, a+b
    }
    return
}

func main() {
    f := fib(7)
    fmt.Println(len(f), f)
}
Output: 7 [0 1 1 2 3 5 8]
Here, we don't know how many numbers; we want all the Fibonacci numbers less than or equal to n.
package main

import "fmt"

func fibMax(n int) (f []int) {
    a, b := 0, 1
    for a <= n {
        f = append(f, a)
        a, b = b, a+b
    }
    return
}

func main() {
    f := fibMax(42)
    fmt.Println(len(f), f)
}
Output: 10 [0 1 1 2 3 5 8 13 21 34]
You could also use IntVector from the Go vector package (the old container/vector package, since removed from the standard library). Note that type IntVector []int.
Don't use Vectors, use slices. Here are some mappings of various vector operations to idiomatic slice operations.
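For instance, a short sketch of what such vector-to-slice mappings look like in practice (my illustration; the original answer linked to a list of these idioms):

package main

import "fmt"

func main() {
    var v []int // replaces an IntVector

    // Push / AppendVector -> append
    v = append(v, 1)
    v = append(v, 2, 3)
    v = append(v, []int{5, 8}...)

    // Len -> len
    fmt.Println(len(v), v) // 5 [1 2 3 5 8]

    // At(i) / Set(i, x) -> indexing
    fmt.Println(v[2])
    v[2] = 4

    // Pop (remove the last element) -> reslice
    last := v[len(v)-1]
    v = v[:len(v)-1]
    fmt.Println(last, v)

    // Insert(i, x) -> append + copy
    i, x := 1, 9
    v = append(v, 0)
    copy(v[i+1:], v[i:])
    v[i] = x
    fmt.Println(v)

    // Delete(i) -> append the two halves around index i
    v = append(v[:i], v[i+1:]...)
    fmt.Println(v)
}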
