How to have multiple option lists for one function? - go

I am using the options pattern to configure the behavior of a function. There are two functions, Foo and Bar, one of which depends on the other; the code is as follows:
type fooOptions struct {
fooName string
}
type FooOptions func(*fooOptions)
func WithFooName(fooName string) FooOptions {
return func(options *fooOptions) {
options.fooName = fooName
}
}
func Foo(options ...FooOptions) {
opts := &fooOptions{}
for _, opt := range options {
opt(opts)
}
// do something
}
type barOptions struct {
barName string
}
type BarOptions func(*barOptions)
func WithBarName(barName string) BarOptions {
return func(options *barOptions) {
options.barName = barName
}
}
// compile error: a function can have only one variadic parameter
func Bar(boptions ...BarOptions, fOptions ...FooOptions) {
bopts := &barOptions{}
for _, opt := range boptions {
opt(bopts)
}
Foo(fOptions...)
}
How can I pass multiple option lists to the Bar function, or is there another smart way to reuse the code? Otherwise I have to flatten fooOptions into barOptions and write all the WithXXX functions again.
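Since Go allows only one variadic parameter per function, a common workaround (a sketch, not the only approach; WithFooOptions is a helper name introduced here, not something from the original code) is to keep a single ...BarOptions list and pass the Foo options through a dedicated Bar option:

type barOptions struct {
    barName string
    fooOpts []FooOptions
}

// WithFooOptions collects options that Bar should forward to Foo.
func WithFooOptions(opts ...FooOptions) BarOptions {
    return func(options *barOptions) {
        options.fooOpts = append(options.fooOpts, opts...)
    }
}

func Bar(options ...BarOptions) {
    opts := &barOptions{}
    for _, opt := range options {
        opt(opts)
    }
    Foo(opts.fooOpts...)
    // do something with opts.barName
}

// usage:
// Bar(WithBarName("bar"), WithFooOptions(WithFooName("foo")))

This keeps Bar's signature stable and lets callers pass as many Foo options as they like.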

golang patch string values on an object, recursive with filtering

Community,
The mission
basic
Implement a func that patches all string fields on an object
details
[done] fields shall only be patched if they match a matcher func
[done] value shall be processed via process func
patching shall be done recursively
it shall also work for []string, []*string and recursively for structs and []struct, []*struct
// update - removed old code
Solution
structs
Updated the structs (this does not affect the actual program; I include them for completeness):
type Tag struct {
Name string `process:"yes,TagName"`
NamePtr *string `process:"no,TagNamePtr"`
}
type User struct {
ID int
Nick string
Name string `process:"yes,UserName"`
NamePtr *string `process:"yes,UserNamePtr"`
Slice []string `process:"yes,Slice"`
SlicePtr []*string `process:"yes,SlicePtr"`
SubStruct []Tag `process:"yes,SubStruct"`
SubStructPtr []*Tag `process:"yes,SubStructPtr"`
}
helper func
Further, we need two helper funcs: one to check whether a struct field has a given tag value and one to print to the console.
func Stringify(i interface{}) string {
s, _ := json.MarshalIndent(i, "", " ")
return string(s)
}
func HasTag(structFiled reflect.StructField, tagName string, tagValue string) bool {
tag := structFiled.Tag
if value, ok := tag.Lookup(tagName); ok {
parts := strings.Split(value, ",")
if len(parts) > 0 {
return parts[0] == tagValue
}
}
return false
}
patcher - the actual solution
type Patcher struct {
Matcher func(structFiled *reflect.StructField, v reflect.Value) bool
Process func(in string) string
}
func (p *Patcher) value(idx int, v reflect.Value, structFiled *reflect.StructField) {
if !v.IsValid() {
return
}
switch v.Kind() {
case reflect.Ptr:
p.value(idx, v.Elem(), structFiled)
case reflect.Struct:
for i := 0; i < v.NumField(); i++ {
var sf = v.Type().Field(i)
structFiled = &sf
p.value(i, v.Field(i), structFiled)
}
case reflect.Slice:
for i := 0; i < v.Len(); i++ {
p.value(i, v.Index(i), structFiled)
}
case reflect.String:
if p.Matcher(structFiled, v) {
v.SetString(p.Process(v.String()))
}
}
}
func (p *Patcher) Apply(in interface{}) {
p.value(-1, reflect.ValueOf(in).Elem(), nil)
}
how to use
func main() {
var NamePtr string = "golang"
var SubNamePtr string = "*secure"
testUser := User{
ID: 1,
Name: "lumo",
NamePtr: &NamePtr,
SubStruct: []Tag{{
Name: "go",
},
},
SubStructPtr: []*Tag{&Tag{
Name: "*go",
NamePtr: &SubNamePtr,
},
},
}
var p = Patcher{
// filter - return true if the struct field has the tag process=yes
Matcher: func(structFiled *reflect.StructField, v reflect.Value) bool {
return HasTag(*structFiled, "process", "yes")
},
// process
Process: func(in string) string {
if in != "" {
return fmt.Sprintf("!%s!", strings.ToUpper(in))
} else {
return "!empty!"
}
},
}
p.Apply(&testUser)
fmt.Println("Output:")
fmt.Println(Stringify(testUser))
}
goplay
https://goplay.tools/snippet/-0MHDfKr7ax
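The Matcher and Process hooks are pluggable, so other policies drop in without touching the patcher itself. As a sketch reusing the Patcher type and testUser from above, here is a matcher keyed on the second part of the process tag (only fields tagged with the name UserName are patched):

p2 := Patcher{
    Matcher: func(structFiled *reflect.StructField, v reflect.Value) bool {
        if structFiled == nil {
            return false
        }
        parts := strings.Split(structFiled.Tag.Get("process"), ",")
        return len(parts) == 2 && parts[1] == "UserName"
    },
    Process: strings.ToUpper, // any func(string) string fits here
}
p2.Apply(&testUser)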

Prettify ugly three nested for-loop

What would be the most Go-like way of prettifying this function?
This is what I have come up with; it kind of does the trick, but it's just too ugly, so any help on prettifying it would be greatly appreciated.
I would also love to be able to negate this function if possible.
Could I not make use of function literals, maps, etc.?
var UsageTypes = []string{
"PHYSICAL_SIZE",
"PHYSICAL_SIZE",
"PROVISIONED_SIZE",
"SNAPSHOT_SIZE",
"LOGICAL_SIZE_PERCENTAGE",
"TOTAL_VOLUME_SIZE",
"ALLOCATED_SIZE",
"ALLOCATED_USED",
"TOTAL_LOGICAL_SIZE",
"TOTAL_LOGICAL_SIZE_PERCENTAGE",
"TOTAL_SNAPSHOT_SIZE",
"LOGICAL_OR_ALLOCATED_GREATER_SIZE",
}
var MeasuredTypes = []string{
"LIF_RECEIVED_DATA",
"ECEIVED_ERRORS",
"LIF_RECEIVED_PACKET",
"LIF_SENT_DATA",
"LIF_SENT_ERRORS",
"LIF_SENT_PACKET",
"LINK_CURRENT_STATE",
"RX_BYTES",
"RX_DISCARDS",
"RX_CRC_ERRORS",
"RX_ERRORS",
"RX_FRAMES",
"LINK_UP_TO_DOWNS",
"TX_BYTES",
"TX_DISCARDS",
"TX_ERRORS",
"TX_HW_ERRORS",
"TX_FRAMES",
"LOGICAL_OR_ALLOCATED_GREATER_SIZE",
"LOGICAL_SIZE",
"PHYSICAL_SIZE",
"PROVISIONED_SIZE",
"SNAPSHOT_SIZE",
"VOLUME_ONLINE",
"TOTAL_THROUGHPUT",
"LOGICAL_SIZE_PERCENTAGE",
"READ_THROUGHPUT",
"WRITE_THROUGHPUT",
"OTHER_THROUGHPUT",
"TOTAL_IOPS",
"WRITE_IOPS",
"READ_IOPS",
"OTHER_IOPS",
"AVERAGE_TOTAL_LATENCY",
"AVERAGE_WRITE_LATENCY",
"AVERAGE_READ_LATENCY",
"AVERAGE_OTHER_LATENCY",
"FILESYSTEM_READ_OPS",
"FILESYSTEM_WRITE_OPS",
"FILESYSTEM_TOTAL_OPS",
"FILESYSTEM_OTHER_OPS",
"IO_BYTES_PER_READ_OPS",
"IO_BYTES_PER_WRITE_OPS",
"IO_BYTES_PER_OTHER_OPS",
"IO_BYTES_PER_TOTAL_OPS",
"READ_IO",
"WRITE_IO",
"TOTAL_IO",
"OTHER_IO",
"ACTIVE_CONNECTIONS",
"TOTAL_VOLUME_SIZE",
"ALLOCATED_SIZE",
"ALLOCATED_USED",
"TOTAL_LOGICAL_SIZE",
"TOTAL_LOGICAL_SIZE_PERCENTAGE",
"TOTAL_SNAPSHOT_SIZE",
"ONTAP_CAPACITY_DISK_CAPACITY",
"ONTAP_CAPACITY_TOTAL_STORAGE_EFFICIENCY_RATIO",
"ONTAP_CAPACITY_TOTAL_PHYSICAL_USED",
"ONTAP_CAPACITY_SIZE_USED",
"ONTAP_CAPACITY_MEMORY",
"ONTAP_CAPACITY_AVERAGE_PROCESSOR_BUSY",
"ONTAP_CAPACITY_PEAK_PROCESSOR_BUSY",
}
func isMeasuredTypeAUsageMetric(measuredTypeIn []string) []string {
result := []string{}
for i, _ := range measuredTypeIn {
var foundInBigList bool
for j, _ := range MeasuredTypes {
if measuredTypeIn[i] == MeasuredTypes[j] {
foundInBigList = true
fmt.Println("found in big list: ", measuredTypeIn[i])
for k, _ := range UsageTypes {
if measuredTypeIn[i] == UsageTypes[k] {
fmt.Println("found in inner list: ", measuredTypeIn[i])
result = append(result, measuredTypeIn[i])
}
}
}
}
if foundInBigList == false {
fmt.Println("not found, throw exception")
}
}
return result
}
func main() {
measuredTypeIn := []string{"LOGICAL_SIZE_PERCENTAGE", "LOGICAL_OR_ALLOCATED_GREATER_SIZE", "BUKK", "ONTAP_CAPACITY_PEAK_PROCESSOR_BUSY",}
fmt.Println(isMeasuredTypeAUsageMetric(measuredTypeIn))
}
The right level of abstraction is what you need:
func has(in []string, item string) bool {
for _,x:=range in {
if x==item {
return true
}
}
return false
}
func isMeasuredTypeAUsageMetric(measuredTypeIn []string) []string {
result:=[]string{}
for _,item:=range measuredTypeIn {
if has(MeasuredTypes,item) {
if has(UsageTypes,item) {
result=append(result,item)
}
} else {
///error
}
}
return result
}
This can be further simplified by using a map[string]bool instead of a []string for the literals.
var MeasuredTypes=map[string]bool{"itemInUsageTypes": true,
"itemNotInUsageTypes":false,
...
}
Then you can do:
usage,measured:=MeasuredTypes[item]
if measured {
// It is measured type
if usage {
// It is usage type
}
}
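Putting that together as a sketch (the two lists stay as slices and the lookup sets are built from them once, so nothing has to be maintained twice; the "not found" case is surfaced as an error here, which is my reading of the "throw exception" comment):

func toSet(items []string) map[string]bool {
    set := make(map[string]bool, len(items))
    for _, item := range items {
        set[item] = true
    }
    return set
}

var (
    measuredSet = toSet(MeasuredTypes)
    usageSet    = toSet(UsageTypes)
)

func isMeasuredTypeAUsageMetric(measuredTypeIn []string) ([]string, error) {
    result := []string{}
    for _, item := range measuredTypeIn {
        if !measuredSet[item] {
            return nil, fmt.Errorf("unknown measured type: %s", item)
        }
        if usageSet[item] {
            result = append(result, item)
        }
    }
    return result, nil
}

Negating the filter is then just a matter of flipping the usageSet check, which also covers the "negate this function" part of the question.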

How to check if all fields of a *struct are nil?

I'm not quite sure how to address this question, please feel free to edit.
With the first code block below, I am able to check if all fields of a struct are nil.
In reality, however, the values injected into the struct are received as args.Review (see the second code block below).
In the second code block, how can I check if all fields of args.Review are nil?
Try it on Golang Playground
package main
import (
"fmt"
"reflect"
)
type review struct {
Stars *int32 `json:"stars" bson:"stars,omitempty" `
Commentary *string `json:"commentary" bson:"commentary,omitempty"`
}
func main() {
newReview := &review{
Stars: nil,
// Stars: func(i int32) *int32 { return &i }(5),
Commentary: nil,
// Commentary: func(i string) *string { return &i }("Great"),
}
if reflect.DeepEqual(review{}, *newReview) {
fmt.Println("Nothing")
} else {
fmt.Println("Hello")
}
}
Try the second code on Golang Playground
This code below gets two errors:
prog.go:32:14: type args is not an expression
prog.go:44:27: args.Review is not a type
package main
import (
"fmt"
"reflect"
)
type review struct {
Stars *int32 `json:"stars" bson:"stars,omitempty" `
Commentary *string `json:"commentary" bson:"commentary,omitempty"`
}
type reviewInput struct {
Stars *int32
Commentary *string
}
type args struct {
PostSlug string
Review *reviewInput
}
func main() {
f := &args {
PostSlug: "second-post",
Review: &reviewInput{
Stars: func(i int32) *int32 { return &i }(5),
Commentary: func(i string) *string { return &i }("Great"),
},
}
createReview(args)
}
func createReview(args *struct {
PostSlug string
Review *reviewInput
}) {
g := &review{
Stars: args.Review.Stars,
Commentary: args.Review.Commentary,
}
if reflect.DeepEqual(args.Review{}, nil) {
fmt.Println("Nothing")
} else {
fmt.Println("Something")
}
}
If you're dealing with a small number of fields you should use simple if statements to determine whether they are nil or not.
if args.Stars == nil && args.Commentary == nil {
// ...
}
If you're dealing with more fields than you would like to manually spell out in if statements you could use a simple helper function that takes a variadic number of interface{} arguments. Just keep in mind that there is this: Check for nil and nil interface in Go
func AllNil(vv ...interface{}) bool {
for _, v := range vv {
if v == nil {
continue
}
if rv := reflect.ValueOf(v); !rv.IsNil() {
return false
}
}
return true
}
if AllNil(args.Stars, args.Commentary, args.Foo, args.Bar, args.Baz) {
// ...
}
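The reflect check in AllNil is needed because a typed nil pointer stored in an interface{} is not equal to nil; a minimal illustration:

// inside some function, with "fmt" and "reflect" imported
var p *int32                            // a nil pointer
var v interface{} = p                   // interface now holds a typed nil
fmt.Println(v == nil)                   // false: the interface carries a type, so it is not the nil interface
fmt.Println(reflect.ValueOf(v).IsNil()) // true: the pointer it wraps is nil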
Or you can use the reflect package to do your bidding.
func NilFields(x interface{}) bool {
    rv := reflect.ValueOf(x)
    rv = rv.Elem()
    for i := 0; i < rv.NumField(); i++ {
        // note: IsNil panics for fields that are not pointers, maps, slices, chans, funcs, or interfaces
        if f := rv.Field(i); f.IsValid() && !f.IsNil() {
            return false
        }
    }
    return true
}
if NilFields(args.Review) {
// ...
}
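If the struct also has fields that can never be nil (like PostSlug in the question), IsNil would panic, so a slightly more defensive variant (a sketch) only checks the kinds that can actually be nil:

func NilFields(x interface{}) bool {
    rv := reflect.ValueOf(x).Elem()
    for i := 0; i < rv.NumField(); i++ {
        f := rv.Field(i)
        switch f.Kind() {
        case reflect.Ptr, reflect.Map, reflect.Slice, reflect.Chan, reflect.Func, reflect.Interface:
            if !f.IsNil() {
                return false
            }
        }
    }
    return true
}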

Unmarshal map[string]DynamoDBAttributeValue into a struct

I'm trying to set up an AWS Lambda using aws-sdk-go that is triggered whenever a new user is added to a certain DynamoDB table.
Everything is working just fine, but I can't find a way to unmarshal a map[string]DynamoDBAttributeValue like:
{
"name": {
"S" : "John"
},
"residence_address": {
"M": {
"address": {
"S": "some place"
}
}
}
}
To a given struct, for instance a User struct. There is an example of unmarshaling a map[string]*dynamodb.AttributeValue into a given interface, but I can't find a way to do the same thing with map[string]DynamoDBAttributeValue, even though these types seem to serve the same purpose.
map[string]DynamoDBAttributeValue is returned by an events.DynamoDBEvent from package github.com/aws/aws-lambda-go/events. This is my code:
package handler
import (
"context"
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
"github.com/aws/aws-sdk-go/service/dynamodb"
)
func HandleDynamoDBRequest(ctx context.Context, e events.DynamoDBEvent) {
for _, record := range e.Records {
if record.EventName == "INSERT" {
// User Struct
var dynamoUser model.DynamoDBUser
// Of course this can't be done, because the types are incompatible
_ = dynamodbattribute.UnmarshalMap(record.Change.NewImage, &dynamoUser)
}
}
}
Of course, I could marshal record.Change.NewImage to JSON and unmarshal it back into a given struct, but then I would have to manually initialize the dynamoUser attributes from those intermediate values.
Or I could even write a function that parses map[string]DynamoDBAttributeValue to map[string]*dynamodb.AttributeValue like:
func getAttributeValueMapFromDynamoDBStreamRecord(e events.DynamoDBStreamRecord) map[string]*dynamodb.AttributeValue {
image := e.NewImage
m := make(map[string]*dynamodb.AttributeValue)
for k, v := range image {
if v.DataType() == events.DataTypeString {
s := v.String()
m[k] = &dynamodb.AttributeValue{
S : &s,
}
}
if v.DataType() == events.DataTypeBoolean {
b := v.Boolean()
m[k] = &dynamodb.AttributeValue{
BOOL : &b,
}
}
// . . .
if v.DataType() == events.DataTypeMap {
// ?
}
}
return m
}
And then simply use dynamodbattribute.UnmarshalMap, but on events.DataTypeMap it would be quite a tricky process.
Is there a way to unmarshal a DynamoDB record coming from an events.DynamoDBEvent into a struct, similar to the method shown for map[string]*dynamodb.AttributeValue?
I tried the function you provided, but I ran into some problems with events.DataTypeList, so I wrote the following function, which does the trick:
// UnmarshalStreamImage converts events.DynamoDBAttributeValue to struct
func UnmarshalStreamImage(attribute map[string]events.DynamoDBAttributeValue, out interface{}) error {
dbAttrMap := make(map[string]*dynamodb.AttributeValue)
for k, v := range attribute {
var dbAttr dynamodb.AttributeValue
bytes, marshalErr := v.MarshalJSON()
if marshalErr != nil {
return marshalErr
}
if err := json.Unmarshal(bytes, &dbAttr); err != nil {
return err
}
dbAttrMap[k] = &dbAttr
}
return dynamodbattribute.UnmarshalMap(dbAttrMap, out)
}
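Used from the Lambda handler, it looks roughly like this (a sketch; User is a placeholder struct, and "context", "encoding/json", the events package, and the dynamodb/dynamodbattribute packages are assumed to be imported):

func HandleDynamoDBRequest(ctx context.Context, e events.DynamoDBEvent) error {
    for _, record := range e.Records {
        if record.EventName != "INSERT" {
            continue
        }
        var user User // placeholder struct whose tags match the table's attributes
        if err := UnmarshalStreamImage(record.Change.NewImage, &user); err != nil {
            return err
        }
        // work with user ...
    }
    return nil
}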
I was frustrated that the type of NewImage from the record wasn't map[string]*dynamodb.AttributeValue so I could use the dynamodbattribute package.
The JSON representation of events.DynamoDBAttributeValue seems to be the same as the JSON representation of dynamodb.AttributeValue.
So I tried creating my own DynamoDBEvent type and changed the types of OldImage and NewImage, so they would be unmarshalled into map[string]*dynamodb.AttributeValue instead of map[string]events.DynamoDBAttributeValue.
It is a little bit ugly but it works for me.
package main
import (
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-lambda-go/lambda"
"github.com/aws/aws-sdk-go/service/dynamodb"
"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
"fmt"
)
func main() {
lambda.Start(lambdaHandler)
}
// changed type of event from: events.DynamoDBEvent to DynamoDBEvent (see below)
func lambdaHandler(event DynamoDBEvent) error {
for _, record := range event.Records {
change := record.Change
newImage := change.NewImage // now of type: map[string]*dynamodb.AttributeValue
var item IdOnly
err := dynamodbattribute.UnmarshalMap(newImage, &item)
if err != nil {
return err
}
fmt.Println(item.Id)
}
return nil
}
type IdOnly struct {
Id string `json:"id"`
}
type DynamoDBEvent struct {
Records []DynamoDBEventRecord `json:"Records"`
}
type DynamoDBEventRecord struct {
AWSRegion string `json:"awsRegion"`
Change DynamoDBStreamRecord `json:"dynamodb"`
EventID string `json:"eventID"`
EventName string `json:"eventName"`
EventSource string `json:"eventSource"`
EventVersion string `json:"eventVersion"`
EventSourceArn string `json:"eventSourceARN"`
UserIdentity *events.DynamoDBUserIdentity `json:"userIdentity,omitempty"`
}
type DynamoDBStreamRecord struct {
ApproximateCreationDateTime events.SecondsEpochTime `json:"ApproximateCreationDateTime,omitempty"`
// changed to map[string]*dynamodb.AttributeValue
Keys map[string]*dynamodb.AttributeValue `json:"Keys,omitempty"`
// changed to map[string]*dynamodb.AttributeValue
NewImage map[string]*dynamodb.AttributeValue `json:"NewImage,omitempty"`
// changed to map[string]*dynamodb.AttributeValue
OldImage map[string]*dynamodb.AttributeValue `json:"OldImage,omitempty"`
SequenceNumber string `json:"SequenceNumber"`
SizeBytes int64 `json:"SizeBytes"`
StreamViewType string `json:"StreamViewType"`
}
I have found the same problem, and the solution is to perform a simple conversion of types. This is possible because the type received in Lambda events (events.DynamoDBAttributeValue) and the type used by the AWS SDK V2 for DynamoDB (types.AttributeValue) carry the same data. The conversion code is shown next.
package aws_lambda
import (
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)
func UnmarshalDynamoEventsMap(
record map[string]events.DynamoDBAttributeValue, out interface{}) error {
asTypesMap := DynamoDbEventsMapToTypesMap(record)
err := attributevalue.UnmarshalMap(asTypesMap, out)
if err != nil {
return err
}
return nil
}
func DynamoDbEventsMapToTypesMap(
record map[string]events.DynamoDBAttributeValue) map[string]types.AttributeValue {
resultMap := make(map[string]types.AttributeValue)
for key, rec := range record {
resultMap[key] = DynamoDbEventsToTypes(rec)
}
return resultMap
}
// DynamoDbEventsToTypes relates the dynamo event received by AWS Lambda with the data type that is
// used in the Amazon SDK V2 to deal with DynamoDB data.
// This function is necessary because Amazon does not provide any kind of solution to make this
// relationship between types of data.
func DynamoDbEventsToTypes(record events.DynamoDBAttributeValue) types.AttributeValue {
var val types.AttributeValue
switch record.DataType() {
case events.DataTypeBinary:
val = &types.AttributeValueMemberB{
Value: record.Binary(),
}
case events.DataTypeBinarySet:
val = &types.AttributeValueMemberBS{
Value: record.BinarySet(),
}
case events.DataTypeBoolean:
val = &types.AttributeValueMemberBOOL{
Value: record.Boolean(),
}
case events.DataTypeList:
var items []types.AttributeValue
for _, value := range record.List() {
items = append(items, DynamoDbEventsToTypes(value))
}
val = &types.AttributeValueMemberL{
Value: items,
}
case events.DataTypeMap:
items := make(map[string]types.AttributeValue)
for k, v := range record.Map() {
items[k] = DynamoDbEventsToTypes(v)
}
val = &types.AttributeValueMemberM{
Value: items,
}
case events.DataTypeNull:
// represent DynamoDB NULL explicitly rather than leaving a nil interface in the map
val = &types.AttributeValueMemberNULL{Value: true}
case events.DataTypeNumber:
val = &types.AttributeValueMemberN{
Value: record.Number(),
}
case events.DataTypeNumberSet:
val = &types.AttributeValueMemberNS{
Value: record.NumberSet(),
}
case events.DataTypeString:
val = &types.AttributeValueMemberS{
Value: record.String(),
}
case events.DataTypeStringSet:
val = &types.AttributeValueMemberSS{
Value: record.StringSet(),
}
}
return val
}
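Calling it from the handler then has the same shape as with the V1 helper above (a sketch; User is again a placeholder struct, with "context" and the events package imported):

func handler(ctx context.Context, e events.DynamoDBEvent) error {
    for _, record := range e.Records {
        var user User // placeholder
        if err := UnmarshalDynamoEventsMap(record.Change.NewImage, &user); err != nil {
            return err
        }
        // use user ...
    }
    return nil
}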
There is a package that converts events.DynamoDBAttributeValue to the SDK's AttributeValue type:
https://pkg.go.dev/github.com/aereal/go-dynamodb-attribute-conversions/v2
From there one can unmarshal the attribute map into a struct:
func Unmarshal(attribute map[string]events.DynamoDBAttributeValue, out interface{}) error {
av := ddbconversions.AttributeValueMapFrom(attribute)
return attributevalue.UnmarshalMap(av, out)
}

Golang polymorphic parameters and returns

Say I have functions:
func ToModelList(cats *[]*Cat) *[]*CatModel {
list := *cats
newModelList := []*CatModel{}
for i := range list {
obj := list[i]
newModelList = append(newModelList, obj.ToModel())
}
return &newModelList
}
func ToModelList(dogs *[]*Dog) *[]*DogModel {
list := *dogs
newModelList := []*DogModel{}
for i := range list {
obj := list[i]
newModelList = append(newModelList, obj.ToModel())
}
return &newModelList
}
Is there a way to combine those two so I can do something like
func ToModelList(objs *[]*interface{}) *[]*interface{} {
list := *objs
// figure out what type struct type objs/list are
newModelList := []*interface{}{}
// type cast newModelList to the correct array struct type
for i := range list {
obj := list[i]
// type cast obj based on objs's type
newModelList = append(newModelList, obj.ToModel())
}
return &newModelList
}
First, a slice header already contains a pointer to its backing array; unless you need to modify the slice itself (its length or capacity), you do not need to pass it as a pointer.
Second, an interface{} can hold either a value or a pointer to a value. You do not need *interface{}.
I am not sure what you are trying to achieve but you could do something like this:
package main
// Interface for Cat, Dog
type Object interface {
ToModel() Model
}
// Interface for CatModel, DogModel
type Model interface {
Name() string
}
type Cat struct {
name string
}
func (c *Cat) ToModel() Model {
return &CatModel{
cat: c,
}
}
type CatModel struct {
cat *Cat
}
func (c *CatModel) Name() string {
return c.cat.name
}
type Dog struct {
name string
}
func (d *Dog) ToModel() Model {
return &DogModel{
dog: d,
}
}
type DogModel struct {
dog *Dog
}
func (d *DogModel) Name() string {
return d.dog.name
}
func ToModelList(objs []Object) []Model {
newModelList := []Model{}
for _, obj := range objs {
newModelList = append(newModelList, obj.ToModel())
}
return newModelList
}
func main() {
cats := []Object{
&Cat{name: "felix"},
&Cat{name: "leo"},
&Dog{name: "octave"},
}
modelList := ToModelList(cats)
for _, model := range modelList {
println(model.Name())
}
}
You define interfaces for your Cat, Dog, etc. and for your Model. Then you implement them as you want, and it is pretty straightforward to write ToModelList().
You can make *CatModel and *DogModel both implement a PetModel interface, and just return []PetModel in the function signatures (function names here are placeholders):
func CatsToModels(cats []*Cat) []PetModel {
    ...
    return []PetModel{...}
}
func DogsToModels(dogs []*Dog) []PetModel {
    ...
    return []PetModel{...}
}
BTW: returning a pointer to a slice in Go is useless.
If you strip away redundant assignments, and unnecessary pointers-to-slices, you'll find you have little code left, and duplicating it for each of your model types doesn't look so bad.
func CatsToCatModels(cats []*Cat) []*CatModel {
var result []*CatModel
for _, cat := range cats {
result = append(result, cat.ToModel())
}
return result
}
Unless this code is used in a lot of places I'd also consider just inlining it, since it's trivial code and only 4 lines when inlined.
Yes, you can replace all the types with interface{} and make the code generic, but I don't think it's a good tradeoff here.
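That said, since Go 1.18 type parameters offer another way to collapse the duplication; a sketch, assuming each source type has a ToModel method returning its model type (e.g. (*Cat).ToModel() *CatModel):

// Modeler describes anything that can convert itself into a model of type M.
type Modeler[M any] interface {
    ToModel() M
}

// ToModelList works for any slice whose element type implements Modeler[M].
func ToModelList[T Modeler[M], M any](objs []T) []M {
    models := make([]M, 0, len(objs))
    for _, obj := range objs {
        models = append(models, obj.ToModel())
    }
    return models
}

// usage (explicit type arguments, since M cannot be inferred from the method set):
// catModels := ToModelList[*Cat, *CatModel](cats)

Whether that reads better than the four-line loop above is a matter of taste.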
