Color operation in Go

There are some simple color operations here, but the output is wrong. I'm just wondering what happened.
main.go:
package main
import (
"fmt"
"image/color"
)
func main() {
startColor := color.RGBA{0x34, 0xeb, 0x64, 0xff}
endColor := color.RGBA{0x34, 0xc9, 0xeb, 0xff}
fmt.Printf("%d-%d=%d\n", endColor.G, startColor.G, endColor.G-startColor.G)
}
output:
201-235=222

color.RGBA.G is a uint8. 235 is bigger than 201, but a uint8 can't store a negative number like -34, so the value wraps around instead.
There's nothing color-specific about the situation.
You get the same answer (222) with:
var g1, g2 uint8 = 0xc9, 0xeb
fmt.Println(g1 - g2)
So nothing unusual, just standard Go unsigned integer overflow wrapping. It isn't even undefined behavior.
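If what you actually need is a signed difference between channels (for example, when interpolating between two colors), a minimal sketch is to widen to int before subtracting:

package main

import (
    "fmt"
    "image/color"
)

func main() {
    startColor := color.RGBA{0x34, 0xeb, 0x64, 0xff}
    endColor := color.RGBA{0x34, 0xc9, 0xeb, 0xff}
    // Widen each uint8 channel to int before subtracting so the result
    // can go negative instead of wrapping around.
    diff := int(endColor.G) - int(startColor.G)
    fmt.Println(diff) // -34
}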

Related

Why am I getting a compile error 'cannot use ... as type uint8 in argument to ...' when the parameter is an int

I am new to Go and was working through a problem in The Go Programming Language. The code should create GIF animations out of random Lissajous figures, with the images produced in different colors from the palette:
// Copyright © 2016 Alan A. A. Donovan & Brian W. Kernighan.
// License: https://creativecommons.org/licenses/by-nc-sa/4.0/
// Run with "web" command-line argument for web server.
// See page 13.
//!+main
// Lissajous generates GIF animations of random Lissajous figures.
package main
import (
"image"
"image/color"
"image/gif"
"io"
"math"
"math/rand"
"os"
)
//!-main
// Packages not needed by version in book.
import (
"log"
"net/http"
"time"
)
//!+main
// #00ff55
var palette = []color.Color{color.RGBA{0x00, 0xff, 0x55, 0xFF}, color.Black, color.RGBA{0x00, 0x00, 0xff, 0xFF}, color.RGBA{0xff, 0x00, 0xff, 0xFF}}
const (
whiteIndex = 0 // first color in palette
)
func main() {
//!-main
// The sequence of images is deterministic unless we seed
// the pseudo-random number generator using the current time.
// Thanks to Randall McPherson for pointing out the omission.
rand.Seed(time.Now().UTC().UnixNano())
if len(os.Args) > 1 && os.Args[1] == "web" {
//!+http
handler := func(w http.ResponseWriter, r *http.Request) {
lissajous(w)
}
http.HandleFunc("/", handler)
//!-http
log.Fatal(http.ListenAndServe("localhost:8000", nil))
return
}
//!+main
lissajous(os.Stdout)
}
func lissajous(out io.Writer) {
const (
cycles = 5 // number of complete x oscillator revolutions
res = 0.001 // angular resolution
size = 100 // image canvas covers [-size..+size]
nframes = 64 // number of animation frames
delay = 8 // delay between frames in 10ms units
)
freq := rand.Float64() * 3.0 // relative frequency of y oscillator
anim := gif.GIF{LoopCount: nframes}
phase := 0.0 // phase difference
colorIndex := 2
for i := 0; i < nframes; i++ {
rect := image.Rect(0, 0, 2*size+1, 2*size+1)
img := image.NewPaletted(rect, palette)
for t := 0.0; t < cycles*2*math.Pi; t += res {
x := math.Sin(t)
y := math.Sin(t*freq + phase)
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), colorIndex)
colorIndex++
}
phase += 0.1
anim.Delay = append(anim.Delay, delay)
anim.Image = append(anim.Image, img)
}
gif.EncodeAll(out, &anim) // NOTE: ignoring encoding errors
}
//!-main
Here is the error I am getting
lissajous/main.go:76: cannot use colorIndex (type int) as type uint8 in argument to img.SetColorIndex
Is there a difference between int and uint8 types or something?
The type of colorIndex is int. The argument type is uint8. An int cannot be assigned to a uint8. Here are some options for fixing the program:
Declare colorIndex as an untyped constant (this only works if you also remove the colorIndex++ increment, since a constant cannot be incremented):
const colorIndex = 2
Declare colorIndex with type uint8:
colorIndex := uint8(2)
Convert the value at the call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
You can replace all uses of uint8 in this answer with byte because byte is an alias for uint8.
In a short variable declaration without an explicit type, the untyped constant's default type is used; in your case, colorIndex := 2 makes colorIndex an int, not a uint8.
From the docs (https://golang.org/ref/spec#Short_variable_declarations):
"If a type is present, each variable is given that type. Otherwise, each variable is given the type of the corresponding initialization value in the assignment. If that value is an untyped constant, it is first converted to its default type;..."
"var i = 42 // i is int"
and then
"An untyped constant has a default type which is the type to which the constant is implicitly converted in contexts where a typed value is required, for instance, in a short variable declaration such as i := 0 where there is no explicit type. The default type of an untyped constant is bool, rune, int, float64, complex128 or string respectively, depending on whether it is a boolean, rune, integer, floating-point, complex, or string constant."
So to get a uint8, you should either declare colorIndex explicitly as uint8 (var colorIndex uint8 = 2) or convert it to uint8 in the img.SetColorIndex call:
img.SetColorIndex(size+int(x*size+0.5), size+int(y*size+0.5), uint8(colorIndex))
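A minimal sketch of that default-type rule (the variable names here are just for illustration):

package main

import "fmt"

func main() {
    i := 2          // untyped constant 2 takes its default type, int
    var u uint8 = 2 // explicit type, so 2 is converted to uint8
    fmt.Printf("%T %T\n", i, u) // prints: int uint8
}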

Go - Convert 2 byte array into a uint16 value

If I have a slice of bytes in Go, similar to this:
numBytes := []byte { 0xFF, 0x10 }
How would I convert it to its uint16 value (0xFF10, 65296)?
You may use binary.BigEndian.Uint16(numBytes), like in this working sample code (output shown in the comment):
package main
import (
"encoding/binary"
"fmt"
)
func main() {
numBytes := []byte{0xFF, 0x10}
u := binary.BigEndian.Uint16(numBytes)
fmt.Printf("%#X %[1]v\n", u) // 0XFF10 65296
}
And this is what binary.BigEndian.Uint16(b []byte) looks like inside:
func (bigEndian) Uint16(b []byte) uint16 {
_ = b[1] // bounds check hint to compiler; see golang.org/issue/14808
return uint16(b[1]) | uint16(b[0])<<8
}
I hope this helps.
To combine two bytes into a uint16:
x := uint16(numBytes[i])<<8 | uint16(numBytes[i+1])
where i is the starting position of the uint16. So if your slice always has exactly two items, it would be x := uint16(numBytes[0])<<8 | uint16(numBytes[1]).
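As a runnable sketch of that expression, stepping through a longer slice two bytes at a time (the slice contents here are made up):

package main

import "fmt"

func main() {
    numBytes := []byte{0xFF, 0x10, 0x01, 0x02}
    for i := 0; i+1 < len(numBytes); i += 2 {
        // Big-endian: the byte at i is the high byte of the pair.
        x := uint16(numBytes[i])<<8 | uint16(numBytes[i+1])
        fmt.Printf("%04X %d\n", x, x) // FF10 65296, then 0102 258
    }
}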
Firstly, you have a slice, not an array; an array has a fixed size and would be declared like this: [2]byte.
If you just have a 2 bytes slice, I wouldn't do anything fancy, I'd just do
numBytes := []byte{0xFF, 0x10}
n := int(numBytes[0])<<8 + int(numBytes[1])
fmt.Printf("n =0x%04X = %d\n", n, n)
Playground
EDIT: Just noticed you wanted uint16 - replace int with that in the above!
You can use the following unsafe conversion (note that the result then depends on the host machine's byte order):
*(*uint16)(unsafe.Pointer(&numBytes[0]))
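As a runnable sketch of that unsafe version, with the byte-order caveat spelled out in a comment:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    numBytes := []byte{0xFF, 0x10}
    // Reinterprets the first two bytes in host byte order:
    // on a little-endian machine this prints 0X10FF, not 0XFF10.
    u := *(*uint16)(unsafe.Pointer(&numBytes[0]))
    fmt.Printf("%#X\n", u)
}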

Sum hexadecimal on Golang

Thanks for reading my question.
I am trying to compute an ASTM checksum in Go, but I couldn't figure out, by myself or with Google, how to convert a string or byte to a hexadecimal value that I can sum.
Please let me ask for help, thanks.
In Go, how can I convert a character to a hexadecimal value that I can then sum?
Example:
// Convert character "a" to hex 0x61 (I understand this will not work for my case, as it becomes a string.)
hex := fmt.Sprintf("%x","a")
// sum the 0x61 with 0x01 so it will become 0x62 = "b"
fmt.Printf("%v",hex + 0x01)
Thank you so much and please have a nice day.
Thanks to everyone answering my question! peterSO's and ANisus' answers both solved my problem. Please let me choose ANisus' reply as the answer, as it includes the ASTM special characters. I wish StackOverflow could choose multiple answers. Thanks to everybody answering me, and please have a nice day!
Intermernet's answer shows you how to convert a hexadecimal string into an int value.
But your question seems to suggest that you want to get the code point value of the letter 'a' and then do arithmetic on that value. To do this, you don't need hexadecimal. You can do the following:
package main
import "fmt"
func main() {
// Get the code point value of 'a' which is 0x61
val := 'a'
// sum the 0x61 with 0x01 so it will become 0x62 = 'b'
fmt.Printf("%v", string(val + 0x01))
}
Result:
b
Playground: http://play.golang.org/p/SbsUHIcrXK
Edit:
Doing the actual ASTM checksum from a string using the algorithm described here can be done with the following code:
package main
import (
"fmt"
)
const (
ETX = 0x03
ETB = 23
STX = 0x02
)
func ASTMCheckSum(frame string) string {
var sumOfChars uint8
//take each byte in the string and add the values
for i := 0; i < len(frame); i++ {
byteVal := frame[i]
sumOfChars += byteVal
if byteVal == STX {
sumOfChars = 0
}
if byteVal == ETX || byteVal == ETB {
break
}
}
// return as hex value in upper case
return fmt.Sprintf("%02X", sumOfChars)
}
func main() {
data := "\x025R|2|^^^1.0000+950+1.0|15|||^5^||V||34001637|20080516153540|20080516153602|34001637\r\x033D\r\n"
//fmt.Println(data)
fmt.Println(ASTMCheckSum(data))
}
Result:
3D
Playground: http://play.golang.org/p/7cbwryZk8r
You can use ParseInt from the strconv package.
ParseInt interprets a string s in the given base (2 to 36) and returns the corresponding value i. If base == 0, the base is implied by the string's prefix: base 16 for "0x", base 8 for "0", and base 10 otherwise.
package main
import (
"fmt"
"strconv"
)
func main() {
start := "a"
result, err := strconv.ParseInt(start, 16, 0)
if err != nil {
panic(err)
}
fmt.Printf("%x", result+1)
}
Playground
You do not want to "convert a character to hex", because hexadecimal (and decimal and binary and all other base-N representations of integers) exist for displaying numbers to humans and reading them back in. A computer is free to store the numbers it operates on in any form it wishes; while most (all?) real-world computers store them in binary form, using bits, they don't have to.
What I'm leading you to is that you actually want to convert your character, which represents a number using hexadecimal notation (its "display form"), to a number (what computers operate on). For this, you can either use the strconv package as already suggested or roll your own simple conversion code. Or you can just grab one from the encoding/hex standard package: see its fromHexChar function (a hand-rolled sketch of such a conversion appears after the example below).
For example,
package main
import "fmt"
func ASTMCheckSum(data []byte) []byte {
cs := byte(0)
for _, b := range data {
cs += b
}
return []byte(fmt.Sprintf("%02X", cs))
}
func main() {
data := []byte{0x01, 0x08, 0x1f, 0xff, 0x07}
fmt.Printf("%x\n", data)
cs := ASTMCheckSum(data)
fmt.Printf("%s\n", cs)
}
Output:
01081fff07
2E
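If you do want to roll your own hex-digit conversion along the lines of encoding/hex's unexported fromHexChar, a hand-rolled sketch (my own code, not the standard library's) might look like this:

package main

import "fmt"

// hexCharToVal is a hand-rolled stand-in for encoding/hex's unexported
// fromHexChar: it maps a single hex digit to its numeric value.
func hexCharToVal(c byte) (byte, bool) {
    switch {
    case '0' <= c && c <= '9':
        return c - '0', true
    case 'a' <= c && c <= 'f':
        return c - 'a' + 10, true
    case 'A' <= c && c <= 'F':
        return c - 'A' + 10, true
    }
    return 0, false
}

func main() {
    v, ok := hexCharToVal('a')
    fmt.Println(v, ok)      // 10 true
    fmt.Printf("%x\n", v+1) // b
}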

Decoding data from a byte slice to Uint32

package main
import (
"bytes"
"encoding/binary"
"fmt"
)
func main() {
aa := uint(0xFFFFFFFF)
fmt.Println(aa)
byteNewbuf := []byte{0xFF, 0xFF, 0xFF, 0xFF}
buf := bytes.NewBuffer(byteNewbuf)
tt, _ := binary.ReadUvarint(buf)
fmt.Println(tt)
}
I need to convert a 4-byte array to a uint32, but why aren't the two results the same?
Go version: beta 1.1
You can do this with one of the ByteOrder objects from the encoding/binary package. For instance:
package main
import (
"encoding/binary"
"fmt"
)
func main() {
aa := uint(0x7FFFFFFF)
fmt.Println(aa)
slice := []byte{0xFF, 0xFF, 0xFF, 0x7F}
tt := binary.LittleEndian.Uint32(slice)
fmt.Println(tt)
}
If your data is in big endian format, you can instead use the same methods on binary.BigEndian.
tt := uint32(buf[0])<<24 | uint32(buf[1])<<16 | uint32(buf[2])<<8 | uint32(buf[3])
for BE, or
tt := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
for LE.
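A runnable sketch of those two expressions, using an example buffer:

package main

import "fmt"

func main() {
    buf := []byte{0xFF, 0xFF, 0xFF, 0x7F}
    be := uint32(buf[0])<<24 | uint32(buf[1])<<16 | uint32(buf[2])<<8 | uint32(buf[3])
    le := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
    fmt.Printf("%X %X\n", be, le) // FFFFFF7F 7FFFFFFF
}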
[U]varint is a different kind of encoding (32-bit numbers can take as many as 5 bytes in the encoded form, 64-bit numbers up to 10).
No need to create a buffer for []byte. Use Varint or Uvarint directly on the byte slice instead.
You're throwing away the error returned by the function. The second result reports whether there was a problem, and there is one here: decoding 0xff, 0xff, 0xff, 0xff as a uvarint fails because every byte has its continuation (high) bit set, so the decoder runs out of input before the value ends.
Here is how to use the encoding/binary package to do what you want. Note that you don't want to use any of the var functions as those do variable length encoding.
Playground version
package main
import (
"bytes"
"encoding/binary"
"fmt"
"log"
)
func main() {
aa := uint(0xFFFFFF0F)
fmt.Println(aa)
tt := uint32(0)
byteNewbuf := []byte{0x0F, 0xFF, 0xFF, 0xFF}
buf := bytes.NewBuffer(byteNewbuf)
err := binary.Read(buf, binary.LittleEndian, &tt)
if err != nil {
log.Fatalf("Decode failed: %s", err)
}
fmt.Println(tt)
}
Result is
4294967055
4294967055
Numeric types
byte alias for uint8
Since byte is an alias for uint8, your question about converting a 4-byte array to a uint32 has already been answered:
How to convert [4]uint8 into uint32 in Go?
Package binary
[Uvarints and] Varints are a method of encoding integers using one
or more bytes; numbers with smaller absolute value take a smaller
number of bytes. For a specification, see
http://code.google.com/apis/protocolbuffers/docs/encoding.html.
Since Uvarints are a peculiar form of integer representation and storage, you should only use the ReadUvarint function on values that have been written with the Uvarint function.
For example,
package main
import (
"bytes"
"encoding/binary"
"fmt"
)
func main() {
buf := make([]byte, 10)
x := uint64(0xFFFFFFFF)
fmt.Printf("%2d %2d %v\n", x, len(buf), buf)
n := binary.PutUvarint(buf, x)
buf = buf[:n]
fmt.Printf("%2d %2d %v\n", x, len(buf), buf)
y, err := binary.ReadUvarint(bytes.NewBuffer(buf))
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("%2d %2d %v\n", y, len(buf), buf)
}
Output:
4294967295 10 [0 0 0 0 0 0 0 0 0 0]
4294967295 5 [255 255 255 255 15]
4294967295 5 [255 255 255 255 15]

Convert [8]byte to a uint64

Hi all. I'm encountering what seems to be a very strange problem. (It could be that it's far past when I should be asleep and I'm overlooking something obvious.)
I have a []byte with length 8 as a result of some hex decoding. I need to produce a uint64 in order to use it. I have tried using binary.Uvarint() from encoding/binary to do so, but it seems to use only the first byte in the slice. Consider the following example.
package main
import (
"encoding/binary"
"fmt"
)
func main() {
array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
num, _ := binary.Uvarint(array[0:8])
fmt.Printf("%v, %x\n", array, num)
}
Here it is on play.golang.org.
When that is run, it displays num as 0, even though, in hex, it should be 000108000801ab01. Furthermore, if one looks at the second value returned by binary.Uvarint(), it is the number of bytes read from the buffer, which, to my knowledge, should be 8, even though it is actually 1.
Am I interpreting this wrong? If so, what should I be using instead?
Thanks, you all. :)
You're decoding using a function whose purpose isn't the one you need:
Varints are a method of encoding integers using one or more bytes;
numbers with smaller absolute value take a smaller number of bytes.
For a specification, see
http://code.google.com/apis/protocolbuffers/docs/encoding.html.
It's not the standard fixed-width encoding but a very specific, variable-length encoding. That's why it stops at the first byte whose value is less than 0x80.
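A small sketch with the asker's bytes makes this visible: the first byte, 0x00, already terminates the value.

package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
    // The first byte 0x00 has its continuation (high) bit clear, so the
    // varint decoder treats it as a complete value: num is 0, n is 1.
    num, n := binary.Uvarint(array)
    fmt.Println(num, n) // 0 1
}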
As pointed out by Stephen, binary.BigEndian and binary.LittleEndian provide useful functions to decode directly:
type ByteOrder interface {
Uint16([]byte) uint16
Uint32([]byte) uint32
Uint64([]byte) uint64
PutUint16([]byte, uint16)
PutUint32([]byte, uint32)
PutUint64([]byte, uint64)
String() string
}
So you may use
package main
import (
"encoding/binary"
"fmt"
)
func main() {
array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
num := binary.LittleEndian.Uint64(array)
fmt.Printf("%v, %x", array, num)
}
or, if you want to check errors instead of panicking (thanks to jimt for pointing out this problem with the direct solution):
package main
import (
"encoding/binary"
"bytes"
"fmt"
)
func main() {
array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
var num uint64
// Check the error instead of discarding it; assigning it to err and never
// using it would not even compile.
if err := binary.Read(bytes.NewBuffer(array[:]), binary.LittleEndian, &num); err != nil {
fmt.Println(err)
return
}
fmt.Printf("%v, %x", array, num)
}
If you don't care about byte order (note that the result depends on the host machine's endianness), you can try this:
arr := [8]byte{1,2,3,4,5,6,7,8}
num := *(*uint64)(unsafe.Pointer(&arr[0]))
http://play.golang.org/p/aM2r40ANQC
If you look at the function for Uvarint you will see that it is not as straightforward a conversion as you expect.
To be honest, I haven't yet figured out what kind of byte format it expects (see edit).
But to write your own is close to trivial:
func Uvarint(buf []byte) (x uint64) {
for i, b := range buf {
x = x << 8 + uint64(b)
if i == 7 {
return
}
}
return
}
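Called with the bytes from the question (assuming the Uvarint helper above is in the same package and "fmt" is imported), it yields the value the asker expected:

array := []byte{0x00, 0x01, 0x08, 0x00, 0x08, 0x01, 0xab, 0x01}
fmt.Printf("%016x\n", Uvarint(array)) // 000108000801ab01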
Edit
The byte format is none I am familiar with.
It is a variable-width encoding where the highest bit of each byte is a flag.
If it is set to 0, that byte is the last in the sequence.
If it is set to 1, the encoding continues with the next byte.
Only the lower 7 bits of each byte are used to build the uint64 value: the first byte sets the lowest 7 bits of the uint64, the following byte bits 7-13, and so on.
