Can't index an array without using another variable - Go

I think this is something I'm missing theoretically about the pass-by-reference topic, but I can't find a way to read the ID without going through the support variable networkInterfaceReference.
package main

import (
    "context"
    "fmt"

    "github.com/Azure/azure-sdk-for-go/profiles/preview/resources/mgmt/resources"
    "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2021-03-01/compute"
    "github.com/Azure/azure-sdk-for-go/services/subscription/mgmt/2020-09-01/subscription"
    "github.com/Azure/go-autorest/autorest/azure/auth"
    "github.com/ktr0731/go-fuzzyfinder"
)

var selectedSub subscription.Model
var selectedRG resources.Group
var selectedVM compute.VirtualMachine

func main() {
    selectedSub = GetSubscription()
    selectedRG = GetResourceGroup()
    selectedVM = GetVM()
    fmt.Printf("Sub: %s\nRG: %s\nVM: %s\n", *selectedSub.DisplayName, *selectedRG.Name, *selectedVM.Name)
    // THIS WORKS
    networkInterfaceReference := *selectedVM.NetworkProfile.NetworkInterfaces
    fmt.Printf("%s", *networkInterfaceReference[0].ID)
    // THIS DOESN'T WORK
    fmt.Printf("%s", *selectedVM.NetworkProfile.NetworkInterfaces[0].ID)
}
...
...
...
func GetVM() compute.VirtualMachine {
    vmClient := compute.NewVirtualMachinesClient(*selectedSub.SubscriptionID)
    authorizer, err := auth.NewAuthorizerFromCLI()
    if err == nil {
        vmClient.Authorizer = authorizer
    }
    vmList, err := vmClient.List(context.TODO(), *selectedRG.Name)
    if err != nil {
        panic(err)
    }
    idx, err := fuzzyfinder.Find(vmList.Values(), func(i int) string {
        return *vmList.Values()[i].Name
    })
    if err != nil {
        panic(err)
    }
    return vmList.Values()[idx]
}
Hovering over the error showed the following message:
field NetworkProfile *compute.NetworkProfile
(compute.VirtualMachineProperties).NetworkProfile on pkg.go.dev
NetworkProfile - Specifies the network interfaces of the virtual machine.
invalid operation: cannot index selectedVM.NetworkProfile.NetworkInterfaces (variable of type *[]compute.NetworkInterfaceReference)compiler (NonIndexableOperand)

If you want the 2nd way to work:
// WORKS
networkInterfaceReference := *selectedVM.NetworkProfile.NetworkInterfaces
fmt.Printf("%s", *networkInterfaceReference[0].ID)
// THIS DOESN'T WORK
fmt.Printf("%s", *selectedVM.NetworkProfile.NetworkInterfaces[0].ID)
Studying the compilation error you are getting (P.S. please don't post screenshots of code/errors): the compilation fails because you are trying to index a pointer to a slice, which is not allowed in Go. You can only index maps, arrays, slices, strings, and pointers to arrays.
The fix is simple: since you do two (2) pointer dereferences in the working version, you need the same two (2) in the single expression. You also need parentheses to enforce the evaluation order, so the indexing is done after the pointer dereference:
fmt.Printf("%s", *(*selectedVM.NetworkProfile.NetworkInterfaces)[0].ID)
Finally, there is no pass-by-reference in Go. Everything is passed by value. If you want to change a value, pass a pointer to it, but that pointer is still just a value that is copied.
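To see the rule in isolation, here is a minimal, self-contained sketch (the types are invented stand-ins for illustration, not the SDK's) showing why the parentheses are needed when indexing through a pointer to a slice:

package main

import "fmt"

// Hypothetical stand-ins for the SDK types.
type nic struct{ ID *string }
type profile struct{ NetworkInterfaces *[]nic }

func main() {
    id := "nic-0"
    p := profile{NetworkInterfaces: &[]nic{{ID: &id}}}

    // p.NetworkInterfaces[0]     // compile error: cannot index *[]nic
    // *p.NetworkInterfaces[0].ID // same error: indexing binds tighter than the leading *
    fmt.Println(*(*p.NetworkInterfaces)[0].ID) // dereference first, then index: prints nic-0
}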

Related

Recursive data structure unmarshalling gives error "cannot parse invalid wire-format data" in Go Lang Protobuf

OS and protobuf version
go1.18.1 linux/amd64, github.com/golang/protobuf v1.5.2
Introduction
I am trying to use recursive proto definitions.
.proto file
message AsyncConsensus {
    int32 sender = 1;
    int32 receiver = 2;
    string unique_id = 3; // to specify the fallback block id which the async vote is for
    int32 type = 4; // 1-propose, 2-vote, 3-timeout, 4-propose-async, 5-vote-async, 6-timeout-internal, 7-consensus-external-request, 8-consensus-external-response, 9-fallback-complete
    string note = 5;
    int32 v = 6; // view number
    int32 r = 7; // round number

    message Block {
        string id = 1;
        int32 v = 2; // view number
        int32 r = 3; // round number
        Block parent = 4;
        repeated int32 commands = 5;
        int32 level = 6; // for the fallback mode
    }

    Block blockHigh = 8;
    Block blockNew = 9;
    Block blockCommit = 10;
}
The following is how I marshal and unmarshal:
func (t *AsyncConsensus) Marshal(wire io.Writer) error {
    data, err := proto.Marshal(t)
    if err != nil {
        return err
    }
    lengthWritten := len(data)
    var b [8]byte
    bs := b[:8]
    binary.LittleEndian.PutUint64(bs, uint64(lengthWritten))
    _, err = wire.Write(bs)
    if err != nil {
        return err
    }
    _, err = wire.Write(data)
    if err != nil {
        return err
    }
    return nil
}

func (t *AsyncConsensus) Unmarshal(wire io.Reader) error {
    var b [8]byte
    bs := b[:8]
    _, err := io.ReadFull(wire, bs)
    if err != nil {
        return err
    }
    numBytes := binary.LittleEndian.Uint64(bs)
    data := make([]byte, numBytes)
    length, err := io.ReadFull(wire, data)
    if err != nil {
        return err
    }
    err = proto.Unmarshal(data[:length], t)
    if err != nil {
        return err
    }
    return nil
}

func (t *AsyncConsensus) New() Serializable {
    return new(AsyncConsensus)
}
My expected outcome
When marshaled and sent to the same process via TCP, it should correctly unmarshal and produce correct data structures.
Resulting error
error "cannot parse invalid wire-format data"
Additional information
I tried with non-recursive .proto definitions, and never had this issue before.
The most obvious error I can think of is that wire.Write(bs) doesn't write as many bytes as io.ReadFull(wire, bs) reads, so I'd first make sure that the return value is actually 8 in both cases.
Beyond that, I don't know golang/protobuf very well, but I'd expect it to be able to handle recursive messages; you generate the Go code from the .proto file and call into it as usual.
If you think it's actually a problem in the protobuf implementation, there are online protobuf decoders which can help. They sometimes interpret the stream incorrectly, which could happen here with a recursive pattern, so you have to be careful, but they have helped me debug the dedis/protobuf package more than once.
As a last resort you can make a minimal example with recursive data, check that it works, and then slowly add fields until it breaks.
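As a sketch of that first check, inside the Marshal method above (add "fmt" to the imports), you can assert that the length prefix really went out as 8 bytes; note that a conforming io.Writer must already return an error on a short write, so this is purely a debugging aid:

n, err := wire.Write(bs)
if err != nil {
    return err
}
if n != 8 {
    // a short prefix write would make the reader's io.ReadFull misinterpret the stream
    return fmt.Errorf("short length-prefix write: wrote %d of 8 bytes", n)
}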
This is not a bug in protobuf; it's a matter of how you marshal and unmarshal protobuf structs.
As a concrete guideline, never concurrently marshal and unmarshal protobuf structs, as it may lead to race conditions.
In the specific example you have provided, I see recursive data structs, so even if you use a separate struct for each invocation of marshal and unmarshal, it's likely that the pointers in the parent lead to shared pointers.
Use a deep-copy technique to remove any dependency so that you do not run into race conditions:
func CloneMyStruct(orig *proto.AsyncConsensus_Block) (*proto.AsyncConsensus_Block, error) {
    origJSON, err := json.Marshal(orig)
    if err != nil {
        return nil, err
    }
    clone := proto.AsyncConsensus_Block{}
    if err = json.Unmarshal(origJSON, &clone); err != nil {
        return nil, err
    }
    return &clone, nil
}
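Note that the protobuf runtime also ships a deep-copy helper, proto.Clone, which avoids the JSON round-trip; a minimal sketch, assuming the generated type satisfies proto.Message (the message type is written unqualified here to avoid the package-name clash with the protobuf runtime):

import "google.golang.org/protobuf/proto"

// Clone returns a deep copy that shares no pointers with the original.
clone := proto.Clone(orig).(*AsyncConsensus_Block)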

github.com/shopspring/decimal: different values produced by FromString and FromFloat

I'm trying to use gomock to mock an interface that takes a decimal. The use case:
v, err := decimal.NewFromString(order.Value)
if err != nil {
return err
}
if err := p.CDI.CreateBuyEvent(ctx, v); err != nil {
return err
}
And in the tests:
value := decimal.NewFromFloat(1000)
cfg.cdi.EXPECT().CreateBuyEvent(ctx, value).Return(nil)
Running this, I get:
expected call doesn't match the argument at index 1.
Got: 1000 (entities.Order)
Want: is equal to 1000 (entities.Order)
However, if I instead instantiate the decimal using NewFromString("1000") in the tests, it passes. My question is: why is the underlying value different for NewFromString and NewFromFloat?
Because the two values happen to have different in-memory representations while being logically equal: as the demonstration below shows, NewFromString("1000") stores the number as 1000 × 10^0 (exp:0), while NewFromFloat(1000) normalizes it to 1 × 10^3 (exp:3).
Use the Equal method to compare shopspring/decimal values.
Demonstration:
package main

import (
    "fmt"

    "github.com/shopspring/decimal"
)

func main() {
    fs, err := decimal.NewFromString("1000")
    if err != nil {
        panic(err)
    }
    ff := decimal.NewFromFloat(1000)
    if ff != fs {
        fmt.Printf("%#v != %#v\n", fs, ff)
    }
    fmt.Println(ff.Equal(fs))
}
Produces:
decimal.Decimal{value:(*big.Int)(0xc0001103a0), exp:0} != decimal.Decimal{value:(*big.Int)(0xc0001103c0), exp:3}
true
Playground.
I would add that there's no need to rush to SO to ask a question like this: you should first perform at least minimal debugging.
Really, if you have two values which must be the same but do not compare equal using the == operator (which naively compares struct values field by field), just look at what the values actually contain.
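If you need the mock expectation itself to use decimal equality rather than gomock's default deep-equality comparison, a custom gomock matcher is one option; a minimal sketch (the matcher name is illustrative, and it assumes the shopspring/decimal import):

type decimalMatcher struct{ want decimal.Decimal }

func (m decimalMatcher) Matches(x interface{}) bool {
    d, ok := x.(decimal.Decimal)
    return ok && m.want.Equal(d)
}

func (m decimalMatcher) String() string {
    return "is decimal equal to " + m.want.String()
}

// Usage in the test:
// cfg.cdi.EXPECT().CreateBuyEvent(ctx, decimalMatcher{want: value}).Return(nil)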

Passing []net.Conn to io.MultiWriter

I have many net.Conn values and I want to implement a multi-writer, so I can send data to every available net.Conn.
My current approach is to use io.MultiWriter:
func route(conn net.Conn, dst []net.Conn) {
    targets := io.MultiWriter(dst[0], dst[1])
    _, err := io.Copy(targets, conn)
    if err != nil {
        log.Println(err)
    }
}
The problem is that I must pass each net.Conn to io.MultiWriter by index, which doesn't work because the slice size is dynamic.
When I try another approach and pass the []net.Conn to io.MultiWriter directly, like in the code below:
func route(conn net.Conn, dst []net.Conn) {
    targets := io.MultiWriter(dst...)
    _, err := io.Copy(targets, conn)
    if err != nil {
        log.Println(err)
    }
}
I get the error "cannot use dst (variable of type []net.Conn) as []io.Writer value in argument to io.MultiWriter".
Is there a proper way to handle this case, so I can pass the net.Conn slice to io.MultiWriter?
Thank you.
io.MultiWriter() has a parameter of type ...io.Writer, so you may only pass a slice of type []io.Writer.
So first create a slice of the proper type, copy the net.Conn values into it, then pass it like this:
ws := make([]io.Writer, len(dst))
for i, c := range dst {
    ws[i] = c
}
targets := io.MultiWriter(ws...)
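Putting it together, the whole route function might look like this (a sketch, assuming conn is the single source connection as in the question, with the io, log, and net imports already present):

func route(conn net.Conn, dst []net.Conn) {
    // net.Conn satisfies io.Writer, but []net.Conn is not assignable to
    // []io.Writer, so the conversion must be done element by element.
    ws := make([]io.Writer, len(dst))
    for i, c := range dst {
        ws[i] = c
    }
    if _, err := io.Copy(io.MultiWriter(ws...), conn); err != nil {
        log.Println(err)
    }
}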

Is there a simple way to convert database rows to JSON in Golang

From what I've seen so far, converting database rows to JSON or to []map[string]interface{} is not simple. I have to create two slices and then loop through the columns, creating keys every time.
...Some code
tableData := make([]map[string]interface{}, 0)
values := make([]interface{}, count)
valuePtrs := make([]interface{}, count)
for rows.Next() {
    for i := 0; i < count; i++ {
        valuePtrs[i] = &values[i]
    }
    rows.Scan(valuePtrs...)
    entry := make(map[string]interface{})
    for i, col := range columns {
        var v interface{}
        val := values[i]
        b, ok := val.([]byte)
        if ok {
            v = string(b)
        } else {
            v = val
        }
        entry[col] = v
    }
    tableData = append(tableData, entry)
}
Is there any package for this? Or am I missing some basics here?
I'm dealing with the same issue; as far as my investigation goes, it looks like there is no other way.
All the packages that I have seen use basically the same method.
A few things you should know that will hopefully save you time:
the database/sql package converts all the data to the appropriate types;
if you are using the mysql driver (go-sql-driver/mysql), you need to add a parameter to your connection string for it to return time.Time instead of a string (use ?parseTime=true; the default is false), as in the example below.
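For example, a hypothetical DSN with that flag enabled might look like this (assuming database/sql and the mysql driver are imported):

db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/mydb?parseTime=true")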
You can use tools written by the community to offload the overhead:
sqlx, a minimalistic wrapper around database/sql, internally uses a similar method with reflection.
If you need more functionality, try using an ORM: gorp, gorm.
If you are interested in diving deeper, check out:
the use of reflection in the sqlx package (sqlx.go, line 560);
data type conversion in the database/sql package (convert.go, line 86).
One thing you could do is create a struct that models your data.
Note: I am using MS SQL Server.
So let's say you want to get a user:
type User struct {
    ID       int    `json:"id,omitempty"`
    UserName string `json:"user_name,omitempty"`
    ...
}
Then you can do this:

func GetUser(w http.ResponseWriter, req *http.Request) {
    var r User
    params := mux.Vars(req)
    db, err := sql.Open("mssql", "server=ServerName")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    err1 := db.QueryRow("select Id, UserName from [Your Database].dbo.Users where Id = ?", params["id"]).Scan(&r.ID, &r.UserName)
    if err1 != nil {
        log.Fatal(err1)
    }
    if err := json.NewEncoder(w).Encode(&r); err != nil {
        log.Fatal(err)
    }
}
Here are the imports I used
import (
    "database/sql"
    "encoding/json"
    "log"
    "net/http"

    _ "github.com/denisenkom/go-mssqldb"
    "github.com/gorilla/mux"
)
This allowed me to get data from the database and get it into JSON.
This takes a while to code, but it works really well.
Not in the Go distribution itself, but there is the wonderful jmoiron/sqlx:
import "github.com/jmoiron/sqlx"
tableData := make([]map[string]interface{}, 0)
for rows.Next() {
entry := make(map[string]interface{})
err := rows.MapScan(entry)
if err != nil {
log.Fatal("SQL error: " + err.Error())
}
tableData = append(tableData, entry)
}
If you know the data type that you are reading, then you can read into that data type without using a generic interface.
Otherwise, there is no solution regardless of the language used, due to the nature of JSON itself.
JSON has no description of composite data structures. In other words, JSON is a generic key-value structure. When the parser encounters what is supposed to be a specific structure, there is no identification of that structure in JSON itself. For example, if you have a structure User, the parser would not know how a set of key-value pairs maps to your structure User.
The problem of type recognition is usually addressed with a document schema (a.k.a. XSD in the XML world) or explicitly through a passed expected data type.
One quick way to get an arbitrary and generic []map[string]interface{} from these query libraries is to populate an array of interface pointers with the same size as the number of columns in the query, and then pass that as a parameter to the scan function:
// For example, for the go-mssqldb lib:
queryResponse, err := d.pool.Query(query)
if err != nil {
    return nil, err
}
defer queryResponse.Close()

// Holds all the end results
results := []map[string]interface{}{}

// Getting details about all the fields from the query
fieldNames, err := queryResponse.Columns()
if err != nil {
    return nil, err
}

// Creating interface-type pointers within an array of the same
// size as the number of columns, so that we can properly
// pass this to the "Scan" function and get all the query values back
var scanResults []interface{}
for range fieldNames {
    var v interface{}
    scanResults = append(scanResults, &v)
}

// Parsing the query results into the result map
for queryResponse.Next() {
    // This variable will hold the values for all the columns, keyed by column name
    rowValues := map[string]interface{}{}

    // Cleaning up old values just in case
    for _, column := range scanResults {
        *(column.(*interface{})) = nil
    }

    // Scan into the array of pointers
    err := queryResponse.Scan(scanResults...)
    if err != nil {
        return nil, err
    }

    // Map the pointers back to their values and the associated column names
    for index, column := range scanResults {
        rowValues[fieldNames[index]] = *(column.(*interface{}))
    }

    results = append(results, rowValues)
}

return results, nil
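From there, turning the generic result into JSON is a single call; for example (assuming results came back from the snippet above):

payload, err := json.Marshal(results)
if err != nil {
    return nil, err
}
// payload now holds something like [{"id":1,"user_name":"alice"}] (illustrative output)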

Go — handling multiple errors elegantly?

Is there a way to clean up this (IMO) horrific-looking code?
aJson, err1 := json.Marshal(a)
bJson, err2 := json.Marshal(b)
cJson, err3 := json.Marshal(c)
dJson, err4 := json.Marshal(d)
eJson, err5 := json.Marshal(e)
fJson, err6 := json.Marshal(f)
gJson, err7 := json.Marshal(g)
if err1 != nil {
    return err1
} else if err2 != nil {
    return err2
} else if err3 != nil {
    return err3
} else if err4 != nil {
    return err4
} else if err5 != nil {
    return err5
} else if err6 != nil {
    return err6
} else if err7 != nil {
    return err7
}
Specifically, I'm talking about the error handling. It would be nice to be able to handle all the errors in one go.
var err error
f := func(dest *[]byte, src interface{}) bool {
    *dest, err = json.Marshal(src)
    return err == nil
}
_ = f(&aJson, a) &&
    f(&bJson, b) &&
    f(&cJson, c) &&
    f(&dJson, d) &&
    f(&eJson, e) &&
    f(&fJson, f) &&
    f(&gJson, g)
return err
Put the results in a slice instead of separate variables, put the initial values in another slice to iterate over, and return during the iteration if there's an error.
var result [][]byte
for _, item := range []interface{}{a, b, c, d, e, f, g} {
    res, err := json.Marshal(item)
    if err != nil {
        return err
    }
    result = append(result, res)
}
You could even reuse an array instead of having two slices.
var values, err = [...]interface{}{a, b, c, d, e, f, g}, error(nil)
for i, item := range values {
    if values[i], err = json.Marshal(item); err != nil {
        return err
    }
}
Of course, this'll require a type assertion to use the results.
Define a function:
func marshalMany(vals ...interface{}) ([][]byte, error) {
    out := make([][]byte, 0, len(vals))
    for i := range vals {
        b, err := json.Marshal(vals[i])
        if err != nil {
            return nil, err
        }
        out = append(out, b)
    }
    return out, nil
}
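A hypothetical call site would then be:

encoded, err := marshalMany(a, b, c, d, e, f, g)
if err != nil {
    return err
}
aJson, bJson := encoded[0], encoded[1] // and so on for the rest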
You didn't say anything about how you'd like your error handling to work. Fail one, fail all? First to fail? Collect successes or toss them?
I believe the other answers here are correct for your specific problem, but more generally, panic can be used to shorten error handling while still being a well-behaved library (i.e., not panicking across package boundaries).
Consider:
func mustMarshal(v interface{}) []byte {
    bs, err := json.Marshal(v)
    if err != nil {
        panic(err)
    }
    return bs
}

func encodeAll() (err error) {
    defer func() {
        if r := recover(); r != nil {
            var ok bool
            if err, ok = r.(error); ok {
                return
            }
            panic(r)
        }
    }()
    // a, b, c are assumed to be in scope
    ea := mustMarshal(a)
    eb := mustMarshal(b)
    ec := mustMarshal(c)
    _, _, _ = ea, eb, ec // use the encoded values here
    return nil
}
This code uses mustMarshal to panic whenever there is a problem marshaling a value. But the encodeAll function will recover from the panic and return it as a normal error value. The client in this case is never exposed to the panic.
But this comes with a warning: using this approach everywhere is not idiomatic. It can also be worse since it doesn't lend itself well to handling each individual error specially, but more or less treating each error the same. But it has its uses when there are tons of errors to handle. As an example, I use this kind of approach in a web application, where a top-level handler can catch different kinds of errors and display them appropriately to the user (or a log file) depending on the kind of error.
It makes for terser code when there is a lot of error handling, but at the cost of idiomatic Go and of handling each error specially. Another downside is that it could prevent something that should panic from actually panicking. (But this can be trivially solved by using your own error type.)
You can use go-multierror by Hashicorp.
var merr error
if err := step1(); err != nil {
    merr = multierror.Append(merr, err)
}
if err := step2(); err != nil {
    merr = multierror.Append(merr, err)
}
return merr
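Since Go 1.20 the standard library offers a similar facility in errors.Join, so depending on your Go version a third-party package may not be needed; a minimal sketch (step1 and step2 are the same hypothetical steps as above):

import "errors"

var errs []error
if err := step1(); err != nil {
    errs = append(errs, err)
}
if err := step2(); err != nil {
    errs = append(errs, err)
}
return errors.Join(errs...) // nil when errs is empty or all nil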
You can create a reusable method to handle multiple errors. This implementation only returns the first non-nil error, but you could return every error message combined by modifying the following code:
func hasError(errs ...error) error {
    for _, err := range errs {
        if err != nil {
            return err
        }
    }
    return nil
}

aJson, err := json.Marshal(a)
bJson, err1 := json.Marshal(b)
cJson, err2 := json.Marshal(c)
if err := hasError(err, err1, err2); err != nil {
    return err
}
Another perspective on this is, instead of asking "how" to handle the abhorrent verbosity, whether we actually "should". This advice is heavily dependent on context, so be careful.
In order to decide whether handling the json.Marshal error is worth it, we can inspect its implementation to see when errors are returned. In order to return errors to the caller and preserve code terseness, json.Marshal uses panic and recover internally in a manner akin to exceptions. It defines an internal helper method which, when called, panics with the given error value. By looking at each call of this function, we learn that json.Marshal errors in the given scenarios:
calling MarshalJSON or MarshalText on a value/field of a type which implements json.Marshaler or encoding.TextMarshaler returns an error—in other words, a custom marshaling method fails;
the input is/contains a cyclic (self-referencing) structure;
the input is/contains a value of an unsupported type (complex, chan, func);
the input is/contains a floating-point number which is NaN or Infinity (these are not allowed by the spec, see section 2.4);
the input is/contains a json.Number string that is an incorrect number representation (for example, "foo" instead of "123").
Now, a usual scenario for marshaling data is creating an API response, for example. In that case, you will 100% have data types that satisfy all of the marshaler's constraints and valid values, given that the server itself generates them. In the situation user-provided input is used, the data should be validated anyway beforehand, so it should still not cause issues with the marshaler. Furthermore, we can see that, apart from the custom marshaler errors, all the other errors occur at runtime because Go's type system cannot enforce the required conditions by itself. With all these points given, here comes the question: given our control over the data types and values, do we need to handle json.Marshal's error at all?
Probably not. For a type like
type Person struct {
    Name string
    Age  int
}
it is now obvious that json.Marshal cannot fail. It is trickier when the type looks like
type Foo struct {
    Data any
}
(any is a new Go 1.18 alias for interface{}) because there is no compile-time guarantee that Foo.Data will hold a value of a valid type—but I'd still argue that if Foo is meant to be serialized as a response, Foo.Data will also be serializable. Infinity or NaN floats remain an issue, but, given the JSON standard limitation, if you want to serialize these two special values you cannot use JSON numbers anyway, so you'll have to look for another solution, which means that you'll end up avoiding the error anyway.
To conclude, my point is that you can probably do:
aJson, _ := json.Marshal(a)
bJson, _ := json.Marshal(b)
cJson, _ := json.Marshal(c)
dJson, _ := json.Marshal(d)
eJson, _ := json.Marshal(e)
fJson, _ := json.Marshal(f)
gJson, _ := json.Marshal(g)
and live fine with it. If you want to be pedantic, you can use a helper such as:
func must[T any](v T, err error) T {
    if err != nil {
        panic(err)
    }
    return v
}
(note the Go 1.18 generics usage) and do
aJson := must(json.Marshal(a))
bJson := must(json.Marshal(b))
cJson := must(json.Marshal(c))
dJson := must(json.Marshal(d))
eJson := must(json.Marshal(e))
fJson := must(json.Marshal(f))
gJson := must(json.Marshal(g))
This will work nicely when you have something like an HTTP server, where each request is wrapped in middleware that recovers from panics and responds to the client with status 500. It's also where you would care about these unexpected errors, when you don't want the program/service to crash at all. For one-time scripts you'll probably want the operation halted and a stack trace dumped.
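As an illustration of that server setup, a recovery middleware might look like the following sketch (not tied to any particular framework):

import (
    "log"
    "net/http"
)

func recoverMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if rec := recover(); rec != nil {
                // log the unexpected error and hide the details from the client
                log.Printf("panic recovered: %v", rec)
                http.Error(w, "internal server error", http.StatusInternalServerError)
            }
        }()
        next.ServeHTTP(w, r)
    })
}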
If you're unsure of how your types will be changed in the future, you don't trust your tests, data may not be in your full control, the codebase is too big to trace the data or whatever other reason which causes uncertainty over the correctness of your data, it is better to handle the error. Pay attention to the context you're in!
P.S.: Pragmatically ignoring errors should generally be sought after. For example, the Write* methods on bytes.Buffer and strings.Builder never return errors; fmt.Fprintf, with a valid format string and a writer that doesn't return errors, also returns no errors; bufio.Writer likewise doesn't, if the underlying writer doesn't. You will find some types implement interfaces with methods that return errors but don't actually return any. In these cases, if you know the concrete type, handling errors is unnecessarily verbose and redundant. What do you prefer,
var sb strings.Builder
if _, err := sb.WriteString("hello "); err != nil {
    return err
}
if _, err := sb.WriteString("world!"); err != nil {
    return err
}
or
var sb strings.Builder
sb.WriteString("hello ")
sb.WriteString("world!")
(of course, ignoring that it could be a single WriteString call)?
The given examples write to an in-memory buffer, which cannot ever fail unless the machine runs out of memory, an error you cannot meaningfully handle in Go anyway. Other such situations will surface in your code; blindly handling errors adds little to no value! Caution is key: if an implementation changes and does return errors, you may be in trouble. Standard library or well-established packages are good candidates for eliding error checking, where possible.
