I wish to ignore a particular field whilst processing a string with sscanf.
The man page for sscanf says:
An optional '*' assignment-suppression character: scanf() reads input as directed by the conversion specification, but discards the input. No corresponding pointer argument is required, and this specification is not included in the count of successful assignments returned by scanf().
Attempting to use this in Golang, to ignore the 3rd field:
if c, err := fmt.Sscanf(str, " %s %d %*d %d ", &iface.Name, &iface.BTx, &iface.BytesRx); err != nil || c != 3 {
compiles OK, but at runtime err is set to:
bad verb %* for integer
The Go documentation doesn't specifically mention the %* conversion specification, but it does say,
Package fmt implements formatted I/O with functions analogous to C's printf and scanf.
It doesn't indicate that %* is not implemented, so... Am I doing it wrong? Or has it just been quietly omitted? ...but then, why does it compile?
To the best of my knowledge there is no such verb (as the format specifiers are called in the fmt package) for this task. What you can do, however, is specify some verb and ignore its value. This is not particularly memory-friendly, though. Ideally this would work:
fmt.Scan(&a, _, &b)
Sadly, it doesn't. So your next best option would be to declare the variables and ignore the one
you don't want:
var a, b, c int
fmt.Scanf("%d %v %d", &a, &b, &c)
fmt.Println(a, c)
%v reads a space-separated token. Depending on what you're scanning from, you may be able to fast-forward the stream to the position you need to scan at; see this answer for details on seeking in buffers. If you're reading from stdin or you don't know what length your input may have, you seem to be out of luck here.
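Applied to the Sscanf call from the question, the throwaway-variable workaround would look something like this (a sketch; str and the iface fields are taken from the question):
var skip int
if c, err := fmt.Sscanf(str, " %s %d %d %d ", &iface.Name, &iface.BTx, &skip, &iface.BytesRx); err != nil || c != 4 {
    // handle the error; note the throwaway value now counts towards c,
    // so the expected count is 4 rather than 3
}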
It doesn't indicate that %* is not implemented, so... Am I doing it
wrong? Or has it just been quietly omitted? ...but then, why does it
compile?
It compiles because for the compiler a format string is just a string like any other. The content of that string is evaluated at run time by functions of the fmt package. Some C compilers may check format strings
for correctness, but this is an extra feature, not the norm. With Go, the go vet command will try to warn you about format string errors such as mismatched arguments.
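For instance, the following call compiles fine, but go vet flags the mismatch between the %d verb and the string argument (a minimal made-up example):
fmt.Printf("%d\n", "not a number") // go vet warns: %d verb with a string argument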
Edit:
For the special case of needing to parse a row of integers and just caring for some of them, you
can use fmt.Scan in combination with a slice of integers. The following example reads 3 integers
from stdin and stores them in the slice named vals:
ints := make([]interface{}, 3)
vals := make([]int, len(ints))
for i := range ints {
    ints[i] = interface{}(&vals[i])
}
fmt.Scan(ints...)
fmt.Println(vals)
This is probably shorter than the conventional split/trim/strconv chain. It makes a slice of pointers, each of which points to a value in vals. fmt.Scan then fills the values behind these pointers. With this you can even ignore most of the values by assigning the same pointer over and over for the positions you don't want:
ignored := 0
for i := range ints {
    if i == 0 || i == 2 {
        ints[i] = interface{}(&vals[i])
    } else {
        ints[i] = interface{}(&ignored)
    }
}
The example above assigns the address of ignored to every position except the first and the third, thus effectively ignoring those values by overwriting them over and over.
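Put together as a runnable sketch (the input values and the choice of which fields to keep are assumed for illustration):
package main

import (
    "fmt"
    "strings"
)

func main() {
    // Pretend this is the input; only the first and third numbers matter.
    input := strings.NewReader("10 20 30")

    vals := make([]int, 3)
    ignored := 0
    ints := make([]interface{}, 3)
    for i := range ints {
        if i == 0 || i == 2 {
            ints[i] = &vals[i]
        } else {
            ints[i] = &ignored
        }
    }
    fmt.Fscan(input, ints...)
    fmt.Println(vals[0], vals[2]) // 10 30
}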
Related
I have the following code snippet which "go vet" complains about with the warning "possible misuse of reflect.SliceHeader". I cannot find very much information about this warning other than this. After reading that, it is not very clear to me what is needed to do this in a way that makes go vet happy, and without possible GC issues.
The goal of the snippet is to have a go function copy data to memory which is managed by an opaque C library. The Go function expects a []byte as a parameter.
func Callback(ptr unsafe.Pointer, buffer unsafe.Pointer, size C.longlong) C.longlong {
    ...
    sh := &reflect.SliceHeader{
        Data: uintptr(buffer),
        Len:  int(size),
        Cap:  int(size),
    }
    buf := *(*[]byte)(unsafe.Pointer(sh))
    err := CopyToSlice(buf)
    if err != nil {
        log.Fatal("failed to copy to slice")
    }
    ...
}
https://pkg.go.dev/unsafe#go1.19.4#Pointer
Pointer represents a pointer to an arbitrary type. There are four
special operations available for type Pointer that are not available
for other types:
A pointer value of any type can be converted to a Pointer.
A Pointer can be converted to a pointer value of any type.
A uintptr can be converted to a Pointer.
A Pointer can be converted to a uintptr.
Pointer therefore allows a program to defeat the type system and read
and write arbitrary memory. It should be used with extreme care.
The following patterns involving Pointer are valid. Code not using
these patterns is likely to be invalid today or to become invalid in
the future. Even the valid patterns below come with important caveats.
Running "go vet" can help find uses of Pointer that do not conform to
these patterns, but silence from "go vet" is not a guarantee that the
code is valid.
(6) Conversion of a reflect.SliceHeader or reflect.StringHeader Data
field to or from Pointer.
As in the previous case, the reflect data structures SliceHeader and
StringHeader declare the field Data as a uintptr to keep callers from
changing the result to an arbitrary type without first importing
"unsafe". However, this means that SliceHeader and StringHeader are
only valid when interpreting the content of an actual slice or string
value.
var s string
hdr := (*reflect.StringHeader)(unsafe.Pointer(&s)) // case 1
hdr.Data = uintptr(unsafe.Pointer(p)) // case 6 (this case)
hdr.Len = n
In this usage hdr.Data is really an alternate way to refer to the
underlying pointer in the string header, not a uintptr variable
itself.
In general, reflect.SliceHeader and reflect.StringHeader should be used only as *reflect.SliceHeader and *reflect.StringHeader pointing at actual slices or strings, never as plain structs. A program should not declare or allocate variables of these struct types.
// INVALID: a directly-declared header will not hold Data as a reference.
var hdr reflect.StringHeader
hdr.Data = uintptr(unsafe.Pointer(p))
hdr.Len = n
s := *(*string)(unsafe.Pointer(&hdr)) // p possibly already lost
It looks like JimB (from the comments) hinted at the most correct answer, though he didn't post it as an answer and didn't include an example. The following passes go vet, staticcheck, and golangci-lint, and doesn't segfault, so I think it is the correct answer.
func Callback(ptr unsafe.Pointer, buffer unsafe.Pointer, size C.longlong) C.longlong {
    ...
    buf := unsafe.Slice((*byte)(buffer), size)
    err := CopyToSlice(buf)
    if err != nil {
        log.Fatal("failed to copy to slice")
    }
    ...
}
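For reference, here is a minimal cgo-free sketch of what unsafe.Slice (added in Go 1.17) does here; the local array stands in for memory owned by the C library:
package main

import (
    "fmt"
    "unsafe"
)

func main() {
    // Stand-in for a buffer handed to us by a C library.
    raw := [4]byte{'a', 'b', 'c', 'd'}
    p := unsafe.Pointer(&raw[0])
    size := 4

    // unsafe.Slice builds a []byte view over the existing memory, no copy.
    buf := unsafe.Slice((*byte)(p), size)
    fmt.Println(string(buf)) // abcd
}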
I have a terratest where I get an output from terraform like so: s := "[a b]". The terraform output's value is toset([resource.name]), i.e. a set of strings.
Apparently fmt.Printf("%T", s) returns string. I need to iterate over it to perform further validation.
I tried the approach below, but it errors!
var v interface{}
if err := json.Unmarshal([]byte(s), &v); err != nil {
    fmt.Println(err)
}
My current implementation to convert to a slice is:
s := "[a b]"
s1 := strings.Fields(strings.Trim(s, "[]"))
for _, v := range s1 {
    fmt.Println("v -> " + v)
}
Looking for suggestions on the current approach, or alternative ways to convert to an array/slice that I should be considering. I appreciate any input. Thanks.
Actually your current implementation seems just fine.
You can't use JSON unmarshaling because JSON strings must be enclosed in double quotes (").
strings.Fields, instead, does just that: it splits a string on one or more characters that match unicode.IsSpace, which are '\t', '\n', '\v', '\f', '\r', and ' ' (space).
Moreover, this also works if terraform sends an empty set as [], as stated in the documentation:
returning [...] an empty slice if s contains only white space.
...which includes the case of s being empty "" altogether.
In case you need additional control over this, you can use strings.FieldsFunc, which accepts a function of type func(rune) bool so you can decide yourself what constitutes a "space" (see the sketch below). But since your input string comes from terraform, I guess it's going to be well-behaved enough.
There may be third-party packages that already implement this functionality, but unless your program already imports them, I think the native solution based on the standard library is always preferable.
Note that unicode.IsSpace actually also includes the higher runes 0x85 and 0xA0; in that case, strings.Fields falls back to calling FieldsFunc(s, unicode.IsSpace).
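A sketch of the strings.FieldsFunc variant mentioned above, with a made-up separator rule that also accepts commas, in case the formatting of the output ever changes:
package main

import (
    "fmt"
    "strings"
)

func main() {
    s := "[a, b]" // hypothetical variant of the terraform output
    fields := strings.FieldsFunc(strings.Trim(s, "[]"), func(r rune) bool {
        return r == ' ' || r == ','
    })
    fmt.Println(fields) // [a b]
}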
package main

import (
    "fmt"
    "strings"
)

func main() {
    src := "[a b]"
    dst := strings.Split(src[1:len(src)-1], " ")
    fmt.Println(dst)
}
https://play.golang.org/p/KVY4r_8RWv6
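One caveat with strings.Split compared to strings.Fields: for an empty set "[]", Split returns a one-element slice containing the empty string, while Fields returns an empty slice. A quick sketch:
package main

import (
    "fmt"
    "strings"
)

func main() {
    empty := "[]" // what terraform emits for an empty set
    fmt.Println(len(strings.Fields(strings.Trim(empty, "[]")))) // 0
    fmt.Println(len(strings.Split(empty[1:len(empty)-1], " "))) // 1 (one empty string)
}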
I know that converting from a []byte to a string, or vice versa, results in a copy of the underlying array being made. This makes sense to me, from the point of view of strings being immutable.
Then I read here that two optimisations get made by the compiler in specific cases:
"The first optimization avoids extra allocations when []byte keys are used to lookup entries in map[string] collections: m[string(key)]."
This makes sense because the conversion is only scoped to the square brackets, so no risk of mutating the string there.
"The second optimization avoids extra allocations in for range clauses where strings are converted to []byte: for i,v := range []byte(str) {...}."
This makes sense because once again - no way of mutating the string here.
Also mentioned are further optimisations on the todo list (not sure which todo list is being referred to), so my question is:
Do any other such (further) optimisations exist in Go 1.6, and if so, what are they?
[]byte to string
For []byte to string conversion, the compiler generates a call to the internal runtime.slicebytetostringtmp function (link to source) when it can prove
that the string form will be discarded before the calling goroutine
could possibly modify the original slice or synchronize with another
goroutine.
runtime.slicebytetostringtmp returns a string referring to the actual []byte bytes, so it does not allocate. The comment in the function says
// First such case is a m[string(k)] lookup where
// m is a string-keyed map and k is a []byte.
// Second such case is "<"+string(b)+">" concatenation where b is []byte.
// Third such case is string(b)=="foo" comparison where b is []byte.
In short, for a b []byte:
map lookup m[string(b)] does not allocate
"<"+string(b)+">" concatenation does not allocate
string(b)=="foo" comparison does not allocate
The second optimization was implemented on 22 Jan 2015, and it is in go1.6
The third optimization was implemented on 27 Jan 2015, and it is in go1.6
So, for example, in the following:
var bs []byte = []byte{104, 97, 108, 108, 111}

func main() {
    x := string(bs) == "hello"
    println(x)
}
the comparison does not cause allocations in go1.6.
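If you want to verify such claims yourself, testing.AllocsPerRun gives a quick empirical check. A sketch, reusing the bs variable from the example above in a standalone program:
package main

import (
    "fmt"
    "testing"
)

var bs = []byte{104, 97, 108, 108, 111}

func main() {
    allocs := testing.AllocsPerRun(1000, func() {
        _ = string(bs) == "hello" // the comparison itself should not allocate
    })
    fmt.Println("allocations per run:", allocs) // expect 0 in go1.6+
}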
String to []byte
Similarly, the runtime.stringtoslicebytetmp function (link to source) says:
// Return a slice referring to the actual string bytes.
// This is only for use by internal compiler optimizations
// that know that the slice won't be mutated.
// The only such case today is:
// for i, c := range []byte(str)
so for i, c := range []byte(str) does not allocate, but you already knew that.
In Go, iterating over a string using
for i := 0; i < len(myString); i++ {
    doSomething(myString[i])
}
only accesses individual bytes in the string, whereas iterating over a string via
for i, c := range myString {
    doSomething(c)
}
iterates over individual Unicode codepoints (called runes in Go), which may span multiple bytes.
My question is: how does one go about jumping ahead while iterating over a string with range myString? continue can jump ahead by one Unicode codepoint, but it's not possible to just do i += 3, for instance, if you want to jump ahead three codepoints. So what would be the most idiomatic way to advance forward by n codepoints?
I asked this question on the golang nuts mailing list, and it was answered, courtesy of some of the helpful folks on the list. Someone messaged me however suggesting I create a self-answered question on Stack Overflow for this, to save the next person with the same issue some trouble. That's what this is.
I'd consider avoiding the conversion to []rune, and code this directly.
skip := 0
for _, c := range myString {
    if skip > 0 {
        skip--
        continue
    }
    skip = doSomething(c)
}
It looks inefficient to skip runes one by one like this, but it's the same amount of work as the conversion to []rune would be. The advantage of this code is that it avoids allocating the rune slice, which can be up to 4 times larger than the original string (depending on how many multi-byte codepoints it contains). Of course, converting to []rune is a bit simpler, so you may prefer that.
It turns out this can be done quite easily, simply by converting the string into a slice of runes.
runes := []rune(myString)
for i := 0; i < len(runes); i++ {
    jumpHowFarAhead := doSomething(runes[i])
    i += jumpHowFarAhead
}
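For completeness, a runnable sketch of the []rune approach; doSomething here is a hypothetical callback that prints the rune and asks to skip two extra codepoints after every 'é':
package main

import "fmt"

// doSomething is hypothetical: it prints the rune and returns how many
// extra codepoints to skip (2 after an 'é', 0 otherwise).
func doSomething(r rune) int {
    fmt.Printf("%c", r)
    if r == 'é' {
        return 2
    }
    return 0
}

func main() {
    myString := "héllo wörld"
    runes := []rune(myString)
    for i := 0; i < len(runes); i++ {
        i += doSomething(runes[i])
    }
    fmt.Println() // prints "héo wörld": the two runes after 'é' were skipped
}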
I am trying to implement a PPM decoder in Go. PPM is an image format that consists of a plaintext header and then some binary image data. The header looks like this (from the spec):
Each PPM image consists of the following:
A "magic number" for identifying the file type. A ppm image's magic number is the two characters "P6".
Whitespace (blanks, TABs, CRs, LFs).
A width, formatted as ASCII characters in decimal.
Whitespace.
A height, again in ASCII decimal.
Whitespace.
The maximum color value (Maxval), again in ASCII decimal. Must be less than 65536 and more than zero.
A single whitespace character (usually a newline).
I try to decode this header with the fmt.Fscanf function. The following call to
fmt.Fscanf parses the header (not addressing the caveat explained below):
var magic string
var width, height, maxVal uint
fmt.Fscanf(input, "%2s %d %d %d", &magic, &width, &height, &maxVal)
The documentation of fmt states:
Note: Fscan etc. can read one character (rune) past the input they
return, which means that a loop calling a scan routine may skip some
of the input. This is usually a problem only when there is no space
between input values. If the reader provided to Fscan implements
ReadRune, that method will be used to read characters. If the reader
also implements UnreadRune, that method will be used to save the
character and successive calls will not lose data. To attach ReadRune
and UnreadRune methods to a reader without that capability, use
bufio.NewReader.
As the very next character after the final whitespace is already the beginning of the image data, I have to be certain about how many whitespace characters fmt.Fscanf consumed after reading maxVal. My code must work on whatever reader was provided by the caller, and parts of it must not read past the end of the header; therefore wrapping the input in a buffered reader is not an option, as the buffered reader might read more from the input than I actually want to read.
Some testing suggests that parsing a dummy character at the end solves the issue:
var magic string
var width, height, maxVal uint
var dummy byte
fmt.Fscanf(input, "%2s %d %d %d%c", &magic, &width, &height, &maxVal, &dummy)
Is that guaranteed to work according to the specification?
No, I would not consider that safe. While it works now, the documentation states that the function reserves the right to read past the value by one character unless you have an UnreadRune() method.
By wrapping your reader in a bufio.Reader, you can ensure the reader has an UnreadRune() method. You will then need to read the final whitespace yourself.
buf := bufio.NewReader(input)
fmt.Fscanf(buf, "%2s %d %d %d", &magic, &width, &height, &maxVal)
buf.ReadRune() // remove the next rune (the whitespace) from the buffer
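Note that once the input is wrapped, the remaining image data must also be read from buf rather than from input, because the bufio.Reader may already have buffered bytes beyond the header. A sketch of reading the pixel data afterwards (using io.ReadFull from the standard library, and assuming maxVal is below 256 so each sample is a single byte):
img := make([]byte, int(width)*int(height)*3) // 3 samples (RGB) per pixel
if _, err := io.ReadFull(buf, img); err != nil {
    log.Fatal(err)
}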
Edit:
As we discussed in the chat, you can assume the dummy char method works and then write a test so you know when it stops working. The test can be something like:
func TestFmtBehavior(t *testing.T) {
    // use MultiReader to prevent r from implementing io.RuneScanner.
    // note the two trailing spaces: one is consumed by the dummy %c,
    // the other should be left over in r
    r := io.MultiReader(bytes.NewReader([]byte("data  ")))

    n, err := fmt.Fscanf(r, "%s%c", new(string), new(byte))
    if n != 2 || err != nil {
        t.Error("failed scan", n, err)
    }

    // the dummy char read 1 extra char past "data",
    // so one byte should still remain
    if n, err := r.Read(make([]byte, 5)); n != 1 {
        t.Error("assertion failed", n, err)
    }
}
}