I have this string representing hex:
00000000ff00ff00000900000600020a
I'm trying to convert it to an IPv6 address with the net package.
The result I'm expecting (big endian) is:
20a:600::9:ff00:ff00::
I tried this:
ip := "00000000ff00ff00000900000600020a"
res := make(net.IP, net.IPv6len)
var err error
res, err = hex.DecodeString(ip)
if err != nil {
    fmt.Println("error")
}
for i := 0; i < 16/2; i++ {
    res[i], res[16-1-i] = res[16-1-i], res[i]
}
fmt.Println(res.String())
but I'm getting this:
a02:6:0:900:ff:ff::
Thanks!
Your question is not clear about what is being reversed when you compare the two results. Typically, when switching endianness, you reverse the bytes, but that does not seem to be what you want here.
In any case, here is code to reverse in many different ways, using the IPAddress Go library. Disclaimer: I am the project manager.
str := "00000000ff00ff00000900000600020a"
ipAddr := ipaddr.NewIPAddressString(str).GetAddress()
reversedAddr, _ := ipAddr.ReverseBytes()
reverseEachSegment := ipAddr.ReverseSegments()
reverseBitsEachByte, _ := ipAddr.ReverseBits(true)
reverseBits, _ := ipAddr.ReverseBits(false)
fmt.Println("original", ipAddr)
fmt.Println("bytes reversed", reversedAddr)
fmt.Println("bytes reversed in each segment", reverseEachSegment)
fmt.Println("bits reversed in each byte", reverseBitsEachByte)
fmt.Println("bits reversed", reverseBits)
Output:
original ::ff00:ff00:9:0:600:20a
bytes reversed a02:6:0:900:ff:ff::
bytes reversed in each segment 20a:600:0:9:ff00:ff00::
bits reversed in each byte ::ff00:ff00:90:0:6000:4050
bits reversed 5040:60:0:9000:ff:ff::
For some reason, it is reversing the bytes within each segment that gives you what you expect, although that is not switching endianness.
Try this:
for i := 0; i < 16/2; i += 2 {
    res[i], res[16-2-i] = res[16-2-i], res[i]
    res[i+1], res[16-1-i] = res[16-1-i], res[i+1]
}
Bytes go in pairs, so you need to flip both at a time.
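Here is a self-contained sketch of the whole thing, assuming the goal is to decode the hex and then reverse the order of the 2-byte groups; for your input it prints 20a:600:0:9:ff00:ff00:::
package main

import (
    "encoding/hex"
    "fmt"
    "net"
)

func main() {
    raw, err := hex.DecodeString("00000000ff00ff00000900000600020a")
    if err != nil {
        fmt.Println("error")
        return
    }
    res := net.IP(raw) // net.IP is just a []byte, 16 bytes long here
    // swap the 2-byte groups end to end, keeping the order inside each pair
    for i := 0; i < net.IPv6len/2; i += 2 {
        res[i], res[net.IPv6len-2-i] = res[net.IPv6len-2-i], res[i]
        res[i+1], res[net.IPv6len-1-i] = res[net.IPv6len-1-i], res[i+1]
    }
    fmt.Println(res.String()) // 20a:600:0:9:ff00:ff00::
}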
I'm trying to do direct I/O on Linux, so I need to create memory-aligned buffers. I copied some code to do it, but I don't understand how it works:
package main

import (
    "fmt"
    "unsafe"

    "golang.org/x/sys/unix"
)

const (
    AlignSize = 4096
    BlockSize = 4096
)

// Looks like dark magic
func Alignment(block []byte, AlignSize int) int {
    return int(uintptr(unsafe.Pointer(&block[0])) & uintptr(AlignSize-1))
}

func main() {
    path := "/path/to/file.txt"
    fd, err := unix.Open(path, unix.O_RDONLY|unix.O_DIRECT, 0666)
    if err != nil {
        panic(err)
    }
    defer unix.Close(fd)
    file := make([]byte, 4096*2)
    a := Alignment(file, AlignSize)
    offset := 0
    if a != 0 {
        offset = AlignSize - a
    }
    file = file[offset : offset+BlockSize]
    _, readErr := unix.Pread(fd, file, 0)
    if readErr != nil {
        panic(readErr)
    }
    fmt.Println(a, offset, offset+BlockSize, len(file))
    fmt.Println("Content is: ", string(file))
}
I understand that I'm generating a slice twice as big as what I need and then extracting a memory-aligned block from it, but the Alignment function doesn't make sense to me.
How does the Alignment function work?
If I try to fmt.Println the intermediate steps of that function I get different results. Why? I guess it's because observing it changes its memory alignment (like in quantum physics :D)
Edit:
Example with fmt.println, where I don't need any more alignment:
package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    path := "/path/to/file.txt"
    fd, err := unix.Open(path, unix.O_RDONLY|unix.O_DIRECT, 0666)
    if err != nil {
        panic(err)
    }
    defer unix.Close(fd)
    file := make([]byte, 4096)
    fmt.Println("Pointer: ", &file[0])
    n, readErr := unix.Pread(fd, file, 0)
    fmt.Println("Return is: ", n)
    if readErr != nil {
        panic(readErr)
    }
    fmt.Println("Content is: ", string(file))
}
Your AlignSize has a value that is a power of 2. In binary representation it contains a single 1 bit followed by zeros:
fmt.Printf("%b", AlignSize) // 1000000000000
A slice allocated by make() may have a memory address that is more or less random, consisting of ones and zeros following randomly in binary; or more precisely the starting address of its backing array.
Since you allocate twice the required size, that guarantees that the backing array will cover an address space that contains an address somewhere in the middle ending with as many zero bits as AlignSize's binary representation has, and that still leaves BlockSize room in the array starting at that address. We want to find this address.
This is what the Alignment() function does. It gets the starting address of the backing array with &block[0]. In Go there's no pointer arithmetic, so in order to do something like that, we have to convert the pointer to an integer (there is integer arithmetic of course). In order to do that, we have to convert the pointer to unsafe.Pointer: all pointers are convertible to this type, and unsafe.Pointer can be converted to uintptr (which is an unsigned integer large enough to store the uninterpreted bits of a pointer value), on which–being an integer–we can perform integer arithmetic.
We use bitwise AND with the value uintptr(AlignSize-1). Since AlignSize is a power of 2 (a single 1 bit followed by zeros), the number one less is a number whose binary representation is full of ones, as many as AlignSize has trailing zeros. See this example:
x := 0b1010101110101010101
fmt.Printf("AlignSize : %22b\n", AlignSize)
fmt.Printf("AlignSize-1 : %22b\n", AlignSize-1)
fmt.Printf("x : %22b\n", x)
fmt.Printf("result of & : %22b\n", x&(AlignSize-1))
Output:
AlignSize : 1000000000000
AlignSize-1 : 111111111111
x : 1010101110101010101
result of & : 110101010101
So the result of & is how far past the previous multiple of AlignSize the address is. Subtracting that from AlignSize gives the offset which, added to the starting address, yields an address with as many trailing zeros as AlignSize itself: that address is "aligned" to a multiple of AlignSize.
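Continuing the example above (x stands in for the starting address of the backing array):
offset := AlignSize - x&(AlignSize-1)
fmt.Printf("offset      : %22b\n", offset)   // 1010101011
fmt.Printf("x + offset  : %22b\n", x+offset) // 1010110000000000000 – the low 12 bits are all zero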
So we will use the part of the file slice starting at offset, and we only need BlockSize:
file = file[offset : offset+BlockSize]
Edit:
Looking at your modified code trying to print the steps: I get an output like:
Pointer: 0xc0000b6000
Unsafe pointer: 0xc0000b6000
Unsafe pointer, uintptr: 824634466304
Unpersand: 0
Cast to int: 0
Return is: 0
Content is:
Note that nothing has changed here. The fmt package simply prints pointer values using hexadecimal representation, prefixed by 0x. uintptr values are printed as integers, using decimal representation. Those values are equal:
fmt.Println(0xc0000b6000, 824634466304) // output: 824634466304 824634466304
Also note the remainder is 0 because in my case 0xc0000b6000 is already a multiple of 4096; in binary it is 1100000000000000000010110110000000000000.
Edit #2:
When you use fmt.Println() to debug parts of the calculation, that may change escape analysis and may change the allocation of the slice (from stack to heap). This depends on the used Go version too. Do not rely on your slice being allocated at an address that is (already) aligned to AlignSize.
See related questions for more details:
Mix print and fmt.Println and stack growing
why struct arrays comparing has different result
Addresses of slices of empty structs
Right now I have a huge string of around 250-300 characters, and I'm writing it to a file using
file, err := ioutil.TempFile("/Downloads", "*.txt")
if err != nil {
    log.Fatal(err)
}
file.WriteString(mystring)
This writes everything on one line, but is there a way to wrap the lines so that after 76 characters we automatically move onto a new line?
Found a solution which does exactly the above requirement.
Made it a generic solution that splits based on a length "n" and whatever delimiter is required.
You can try it in the playground if you wish (https://play.golang.org/p/5ZHCC_Z5uqc):
func insertNth(s string, n int) string {
    var buffer bytes.Buffer
    var n_1 = n - 1
    var l_1 = len(s) - 1
    for i, r := range s {
        buffer.WriteRune(r)
        // i is a byte index, so this assumes single-byte (ASCII) characters
        if i%n == n_1 && i != l_1 {
            buffer.WriteRune('\n')
        }
    }
    return buffer.String()
}
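A short usage sketch tying it back to the file-writing code from the question (mystring and file are the question's variables, and the 76-character width matches the requirement above):
wrapped := insertNth(mystring, 76)
if _, err := file.WriteString(wrapped); err != nil {
    log.Fatal(err)
}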
Did some digging and actually found it's not that difficult; posted my solution above.
The Problem:
Right now, I'm logging my SQL query and the args related to that query, but what will happen if my args weigh a lot? Say 100MB?
The Solution:
I want to iterate over the args, and once they exceed 0.5MB I want to take the args up to that point and log only them (of course I'll use the entire args set in the actual SQL query).
Where I'm stuck:
I find it hard to find the size on disk of an interface{}.
How can I print it? (Is there a nicer way to do it than %v?)
The concern is mainly focused on the first part: how can I find the size? I need to know the type, whether it's an array, stack, heap, etc.
If code helps, here is my code structure (everything sits in the dal package, in a util file):
package dal

import (
    "fmt"
)

const limitedLogArgsSizeB = 100000 // ~ 0.1MB

func parsedArgs(args ...interface{}) string {
    currentSize := 0
    var res string
    for i := 0; i < len(args); i++ {
        currentEleSize := getSizeOfElement(args[i])
        if !(currentSize+currentEleSize <= limitedLogArgsSizeB) {
            break
        }
        currentSize += currentEleSize
        res = fmt.Sprintf("%s, %v", res, args[i])
    }
    return "[" + res + "]"
}

func getSizeOfElement(ele interface{}) (sizeInBytes int) {
    // this is the part I'm missing
    return
}
So as you can see I expect to get back from parsedArgs() a string that looks like:
"[4378233, 33, true]"
for completeness, the query that goes with it:
INSERT INTO Person (id,age,is_healthy) VALUES ($0,$1,$2)
So, to demonstrate the point of all of this:
Let's say the first two args together are exactly at the size threshold I want to log; then parsedArgs() would return only the first two args as a string, like this:
"[4378233, 33]"
I can provide further details upon request, Thanks :)
Getting the memory size of arbitrary values (arbitrary data structures) is not impossible but "hard" in Go. For details, see How to get memory size of variable in Go?
The easiest solution could be to produce the data to be logged in memory, and you can simply truncate it before logging (e.g. if it's a string or a byte slice, simply slice it). This is however not the gentlest solution (slower and requires more memory).
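For example, a rough sketch of that simpler approach, reusing the limitedLogArgsSizeB constant from your question (log.Println stands in for your actual logger, and note that slicing a string by bytes may cut a multi-byte character in half):
s := fmt.Sprint(args) // e.g. "[4378233 33 true]"
if len(s) > limitedLogArgsSizeB {
    s = s[:limitedLogArgsSizeB] // truncate before logging
}
log.Println(s)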
Instead I would achieve what you want differently. I would try to assemble the data to be logged, but I would use a special io.Writer as the target (which may be targeted at your disk or at an in-memory buffer) which keeps track of the bytes written to it, and once a limit is reached, it could discard further data (or report an error, whatever suits you).
You can see a counting io.Writer implementation here: Size in bits of object encoded to JSON?
type CounterWr struct {
    io.Writer
    Count int
}

func (cw *CounterWr) Write(p []byte) (n int, err error) {
    n, err = cw.Writer.Write(p)
    cw.Count += n
    return
}
We can easily change it to become a functional limited-writer:
type LimitWriter struct {
    io.Writer
    Remaining int
}

func (lw *LimitWriter) Write(p []byte) (n int, err error) {
    if lw.Remaining == 0 {
        return 0, io.EOF
    }
    if lw.Remaining < len(p) {
        p = p[:lw.Remaining]
    }
    n, err = lw.Writer.Write(p)
    lw.Remaining -= n
    return
}
And you can use the fmt.FprintXXX() functions to write into a value of this LimitWriter.
An example writing to an in-memory buffer:
buf := &bytes.Buffer{}
lw := &LimitWriter{
    Writer:    buf,
    Remaining: 20,
}
args := []interface{}{1, 2, "Looooooooooooong"}
fmt.Fprint(lw, args)
fmt.Printf("%d %q", buf.Len(), buf)
This will output (try it on the Go Playground):
20 "[1 2 Looooooooooooon"
As you can see, our LimitWriter only allowed 20 bytes (LimitWriter.Remaining) to be written, and the rest was discarded.
Note that in this example I assembled the data in an in-memory buffer, but in your logging system you can write directly to your logging stream, just wrap it in LimitWriter (so you can completely omit the in-memory buffer).
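For instance, a sketch of that direct wrapping, where os.Stderr stands in for whatever stream your logging system writes to and the 512-byte limit is illustrative:
lw := &LimitWriter{Writer: os.Stderr, Remaining: 512}
fmt.Fprintln(lw, args...) // anything beyond the limit is dropped by the writer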
Optimization tip: if you have the arguments as a slice, you may optimize the truncated rendering by using a loop, and stop printing arguments once the limit is reached.
An example doing this:
buf := &bytes.Buffer{}
lw := &LimitWriter{
    Writer:    buf,
    Remaining: 20,
}
args := []interface{}{1, 2, "Loooooooooooooooong", 3, 4, 5}
io.WriteString(lw, "[")
for i, v := range args {
    if _, err := fmt.Fprint(lw, v, " "); err != nil {
        fmt.Printf("Breaking at argument %d, err: %v\n", i, err)
        break
    }
}
io.WriteString(lw, "]")
fmt.Printf("%d %q", buf.Len(), buf)
Output (try it on the Go Playground):
Breaking at argument 3, err: EOF
20 "[1 2 Loooooooooooooo"
The good thing about this is that once we reach the limit, we don't have to produce the string representation of the remaining arguments that would be discarded anyway, saving some CPU (and memory) resources.
I have the following function which encodes a string using Blowfish. If I put just a plain string into the byte array it works. The problem is with the line
cipher.Encrypt(enc[0:], src)
func BlowFish(str string) {
    key := []byte("super secret key")
    cipher, err := blowfish.NewCipher(key)
    if err != nil {
        log.Fatal(err)
    }
    // very weird that I get index out of range if I insert a var
    src := []byte(str + "\n\n\n")
    var enc [512]byte
    cipher.Encrypt(enc[0:], src)
    fmt.Println("Encoded", enc)

    var decrypt [8]byte
    cipher.Decrypt(decrypt[0:], enc[0:])
    result := bytes.NewBuffer(nil)
    result.Write(decrypt[0:8])
    fmt.Println(string(result.Bytes()))
}
I don't understand the problem
While this may result in an error with Go's blowfish package, the package is behaving correctly. Blowfish is a 64-bit (read: 8-byte) block cipher. As you've discovered, not only does your string have to be padded to 8 bytes, but any data you wish to encrypt must be padded so that every block is exactly eight bytes.
To do so, you should check the length of your data modulo the block size and pad the remainder, so that the length of the data is a multiple of 8, like so:
func blowfishChecksizeAndPad(pt []byte) []byte {
    // calculate modulus of plaintext to blowfish's cipher block size
    // if result is not 0, then we need to pad
    modulus := len(pt) % blowfish.BlockSize
    if modulus != 0 {
        // how many bytes do we need to pad to make pt a multiple of
        // blowfish's block size?
        padlen := blowfish.BlockSize - modulus
        // let's add the required padding
        for i := 0; i < padlen; i++ {
            // add the pad, one byte at a time
            pt = append(pt, 0)
        }
    }
    // return the whole-multiple-of-blowfish.BlockSize-sized plaintext
    // to the calling function
    return pt
}
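A rough sketch of how this helper might be wired into the code from your question, encrypting one 8-byte block at a time (the key and plaintext are just the values already used above):
key := []byte("super secret key")
cipher, err := blowfish.NewCipher(key)
if err != nil {
    log.Fatal(err)
}
pt := blowfishChecksizeAndPad([]byte("My str to encode")) // len(pt) is now a multiple of 8
ct := make([]byte, len(pt))
// Encrypt only ever processes the first block of its arguments,
// so walk the buffers one block at a time
for i := 0; i < len(pt); i += blowfish.BlockSize {
    cipher.Encrypt(ct[i:i+blowfish.BlockSize], pt[i:i+blowfish.BlockSize])
}
fmt.Println("Encoded", ct)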
Looks like I found what's wrong. cipher.Encrypt expects a source of at least 8 bytes (one block), but the length of the byte array []byte(str+"\n\n\n") is only 4; that's why I get an index out of range. If I use []byte("My str to encode"+"\n\n\n"), its length is the combined length of the two strings. The solution for now is to add more \n characters so that the length of str+"\n....\n" is >= 8.
To teach myself Go I'm building a simple server that takes some input, does some processing, and sends output back to the client (that includes the original input).
The input can vary in length from around 5 - 13 characters + endlines and whatever other guff the client sends.
The input is read into a byte array and then converted to a string for some processing. Another string is appended to this string and the whole thing is converted back into a byte array to get sent back to the client.
The problem is that the input is padded with a bunch of NUL characters, and I'm not sure how to get rid of them.
So I could loop through the array and when I come to a nul character, note the length (n), create a new byte array of that length, and copy the first n characters over to the new byte array and use that. Is that the best way, or is there something to make this easier for me?
Some stripped down code:
data := make([]byte, 16)
c.Read(data)
s := strings.Replace(string(data[:]), "an", "", -1)
s = strings.Replace(s, "\r", "", -1)
s += "some other string"
response := []byte(s)
c.Write(response)
c.Close()
Also if I'm doing anything else obviously stupid here it would be nice to know.
In package "bytes", func Trim(s []byte, cutset string) []byte is your friend:
Trim returns a subslice of s by slicing off all leading and trailing UTF-8-encoded Unicode code points contained in cutset.
// Remove any NULL characters from 'b'
b = bytes.Trim(b, "\x00")
Your approach sounds basically right. Some remarks:
When you have found the index of the first nul byte in data, you don't need to copy, just truncate the slice: data[:idx].
bytes.Index should be able to find that index for you.
There is also bytes.Replace so you don't need to convert to string.
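Putting those remarks together, a minimal sketch using the data buffer from your code:
if idx := bytes.Index(data, []byte{0}); idx != -1 {
    data = data[:idx] // truncate at the first NUL; no copying needed
}
data = bytes.Replace(data, []byte("an"), nil, -1) // no conversion to string required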
The io.Reader documentation says:
Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered.
If the call to Read in the application does not read 16 bytes, then data will have trailing zero bytes. Use the number of bytes read to trim the zero bytes from the buffer.
data := make([]byte, 16)
n, err := c.Read(data)
if err != nil {
    // handle error
}
data = data[:n]
There's another issue. There's no guarantee that Read slurps up all of the "message" sent by the peer. The application may need to call Read more than once to get the complete message.
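One possible sketch: accumulate reads until the peer closes the connection (how you detect the end of a message ultimately depends on your protocol):
var msg []byte
buf := make([]byte, 16)
for {
    n, err := c.Read(buf)
    msg = append(msg, buf[:n]...) // per io.Reader, consume the n bytes before looking at err
    if err != nil {
        if err != io.EOF {
            // handle error
        }
        break
    }
}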
You mention endlines in the question. If the message from the client is terminated by a newline, then use bufio.Scanner to read lines from the connection:
s := bufio.NewScanner(c)
if s.Scan() {
    data = s.Bytes() // data is the next line, not including end lines, etc.
}
if s.Err() != nil {
    // handle error
}
You could utilize the return value of Read:
package main

import "strings"

func main() {
    r, b := strings.NewReader("north east south west"), make([]byte, 16)
    n, e := r.Read(b)
    if e != nil {
        panic(e)
    }
    b = b[:n]
    println(string(b) == "north east south")
}
https://golang.org/pkg/io#Reader