I'm writing my first API endpoint in Go using gRPC/protocol buffers. I'm rather new to Go.
Below is the file I'm writing for my test case(s):
package my_package

import (
    "context"
    "testing"

    "github.com/stretchr/testify/require"
    "google.golang.org/protobuf/types/known/structpb"
    "google.golang.org/protobuf/types/known/timestamppb"

    "github.com/MyTeam/myproject/cmd/eventstream/setup"
    v1handler "github.com/MyTeam/myproject/internal/handlers/myproject/v1"
    v1interface "github.com/MyTeam/myproject/proto/.gen/go/myteam/myproject/v1"
)

func TestEndpoint(t *testing.T) {
    conf := &setup.Config{}

    // Initialize our API handlers
    myhandler := v1handler.New(&v1handler.Config{})

    t.Run("Success", func(t *testing.T) {
        res, err := myhandler.Endpoint(context.Background(), &v1interface.EndpointRequest{
            A: "S",
            B: &structpb.Struct{
                Fields: map[string]*structpb.Value{
                    "T": &structpb.Value{
                        Kind: &structpb.Value_StringValue{
                            StringValue: "U",
                        },
                    },
                    "V": &structpb.Value{
                        Kind: &structpb.Value_StringValue{
                            StringValue: "W",
                        },
                    },
                },
            },
            C: &timestamppb.Timestamp{Seconds: 1590179525, Nanos: 0},
        })
        require.Nil(t, err)

        // Assert we got what we want.
        require.Equal(t, "Ok", res.Text)
    })
}
This is how the EndpointRequest object is defined in the proto file behind the v1interface package imported above:
// A v1 interface Endpoint Request object.
message EndpointRequest {
  // a is something.
  string a = 1 [(validate.rules).string.min_len = 1];

  // b can be a complex object.
  google.protobuf.Struct b = 2;

  // c is a timestamp.
  google.protobuf.Timestamp c = 3;
}
The test case above seems to work fine.
I put a validation rule in place that effectively makes argument a mandatory, because it requires a to be a string of at least one character. So if you omit a, the endpoint returns a 400.
But now I want to ensure that the endpoint also returns a 400 if b or c is omitted. How can I do that? In proto3, the required keyword was removed. So how can I check whether a non-string argument was passed in and react accordingly?
Required fields were removed in proto3. Here is the GitHub issue where you can read a detailed explanation of why that was done. Here is an excerpt:
We dropped required fields in proto3 because required fields are generally considered harmful and violating protobuf's compatibility semantics. The whole idea of using protobuf is that it allows you to add/remove fields from your protocol definition while still being fully forward/backward compatible with newer/older binaries. Required fields break this though. You can never safely add a required field to a .proto definition, nor can you safely remove an existing required field because both of these actions break wire compatibility
IMO, that was a questionable decision, and I'm obviously not alone in thinking that. The final decision should have been left to the developer.
The short version: you can't.
required was removed mostly because it made changes backwards incompatible. Attempting to re-implement it using validation options is not quite as drastic (changes are easier), but will run into shortcomings as you can see.
Instead, keep the validation out of the proto definition and move it into the application itself. Anytime you receive a message, you should be checking its contents anyway (this was also true when required was a thing). It is a rare case that the simple validation provided by options or required is sufficient.
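Concretely, in the generated Go code, message-typed fields such as b and c come through as pointers, so an omitted field simply arrives as nil and can be checked in the handler. Below is a minimal sketch of that check; the Handler and EndpointResponse names are assumptions (the question only shows the request type and res.Text), and the mapping from codes.InvalidArgument to an HTTP 400 is assumed to happen in whatever gateway sits in front of the gRPC service.
package v1handler

import (
    "context"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"

    v1interface "github.com/MyTeam/myproject/proto/.gen/go/myteam/myproject/v1"
)

// Endpoint rejects requests whose message-typed fields were omitted.
// Handler and EndpointResponse are placeholder names for this sketch.
func (h *Handler) Endpoint(ctx context.Context, req *v1interface.EndpointRequest) (*v1interface.EndpointResponse, error) {
    // Omitted message fields arrive as nil pointers in the generated code.
    if req.GetB() == nil {
        return nil, status.Error(codes.InvalidArgument, "b is required")
    }
    if req.GetC() == nil {
        return nil, status.Error(codes.InvalidArgument, "c is required")
    }

    // ... the handler's normal logic ...
    return &v1interface.EndpointResponse{Text: "Ok"}, nil
}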
Related
var response Response

switch wrapper.Domain {
case "":
    response = new(TypeA)
case "TypeB":
    response = new(TypeB)
case "TypeC":
    response = new(TypeC)
case "TypeD":
    response = new(TypeD)
}

_ = decoder.Decode(response)
As shown in the code snippet, I get enough information from the Domain field of wrapper to determine the type of response, and for each type the following operations are performed:
create a new instance of that type using new
use the decoder to decode the byte slice into the instance created in step 1
I am wondering if there is a way to make the first step more generic and get rid of the switch statement.
A bit about your code
As per the discussion in the comments, I would like to share some experience.
I do not see anything bad in your solution, but there are a few options to improve it, depending on what you want to do.
Your code looks like a classic Factory. The Factory is a pattern that creates objects of a single family, based on some input parameters.
In Go this is commonly used in a simpler way, as a Factory Method, sometimes called a factory function.
Example:
type Vehicle interface{}

type Car struct{}

func NewCar() Vehicle {
    return &Car{}
}
But you can easily expand it to do something like yours:
package main

import (
    "fmt"
    "strings"
)

type Vehicle interface{}

type Car struct{}
type Bike struct{}
type Motorbike struct{}

// NewDrivingLicenseCar returns a car for a user, to perform
// the driving license exam.
func NewDrivingLicenseCar(drivingLicense string) (Vehicle, error) {
    switch strings.ToLower(drivingLicense) {
    case "car":
        return &Car{}, nil
    case "motorbike":
        return &Motorbike{}, nil
    case "bike":
        return &Bike{}, nil
    default:
        return nil, fmt.Errorf("Sorry, We are not allowed to make exam for your type of car: \"%s\"", drivingLicense)
    }
}

func main() {
    fmt.Println(NewDrivingLicenseCar("Car"))
    fmt.Println(NewDrivingLicenseCar("Tank"))
}
The above code produces this output:
&{} <nil>
<nil> Sorry, We are not allowed to make exam for your type of car: "Tank"
So you can probably improve your code by:
Wrapping it in a single function that takes a string and produces the Response object (see the sketch below)
Adding some validation and error handling
Giving it a reasonable name
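For illustration, here is a minimal sketch of that single function, assuming Response is an interface and TypeA through TypeD implement it; the stub type definitions exist only to make the sketch compile, and the Domain values come from the question.
package main

import "fmt"

// Stand-ins for the question's types, only here so the sketch compiles.
type Response interface{}

type (
    TypeA struct{}
    TypeB struct{}
    TypeC struct{}
    TypeD struct{}
)

// newResponse wraps the original switch in a single factory function
// and adds error handling for unknown domains.
func newResponse(domain string) (Response, error) {
    switch domain {
    case "":
        return new(TypeA), nil
    case "TypeB":
        return new(TypeB), nil
    case "TypeC":
        return new(TypeC), nil
    case "TypeD":
        return new(TypeD), nil
    default:
        return nil, fmt.Errorf("unsupported domain: %q", domain)
    }
}

func main() {
    fmt.Println(newResponse("TypeB"))
    fmt.Println(newResponse("TypeXYZ"))
}
With this in place, the original call site reduces to response, err := newResponse(wrapper.Domain), an error check, and then the decoder.Decode(response) call from the question.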
There are a few patterns related to the Factory that can replace it:
Chain of Responsibility
Dispatcher
Visitor
Dependency Injection
Reflection?
There is also a comment from @icza about reflection. I agree with him: it is commonly used, and we cannot always avoid reflection in our code, because sometimes things are just that dynamic.
But in your scenario it is a bad solution, because:
You lose compile-time type checking
You have to modify the code when you add a new type anyway, so why not add a new line to this factory function instead?
You make your code slower (see the references); it adds a 50%-100% performance penalty
You make your code much less readable and more complex
You have to add much more error handling to cover the non-trivial errors that reflection can produce
Of course, you can add a lot of tests to cover a huge number of scenarios. You can support TypeA, TypeB, and TypeC in your code and cover them with tests, but in production someone can someday pass TypeXYZ and you will get a runtime error if you do not catch it.
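For comparison, here is a rough sketch of what the reflection-based registry approach mentioned above tends to look like (the type names are illustrative): it trades the compile-time switch for a runtime lookup plus reflect.New.
package main

import (
    "fmt"
    "reflect"
)

type Response interface{}

type TypeB struct{ Value string }

// registry maps a domain string to the concrete type to instantiate.
var registry = map[string]reflect.Type{
    "TypeB": reflect.TypeOf(TypeB{}),
}

// newResponseReflect builds a pointer to a new value of the registered type at runtime.
func newResponseReflect(domain string) (Response, error) {
    t, ok := registry[domain]
    if !ok {
        return nil, fmt.Errorf("unknown domain %q", domain)
    }
    // reflect.New allocates a zero value of t and returns a pointer to it.
    return reflect.New(t).Interface(), nil
}

func main() {
    fmt.Println(newResponseReflect("TypeB"))
    fmt.Println(newResponseReflect("TypeXYZ"))
}
Note that the registry still has to be edited for every new type, which is exactly the point above: the reflection buys you little here.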
Conclusion
There is nothing wrong with your switch/case approach; it is probably the most readable and easiest way to do what you want.
Reference
Factory method: https://www.sohamkamani.com/golang/2018-06-20-golang-factory-patterns/
Classic book about patterns in programming: Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma et al. (the Gang of Four), ISBN 978-0201633610
Reflection benchmarks: https://gist.github.com/crast/61779d00db7bfaa894c70d7693cee505
See https://github.com/grpc/grpc-node/issues/1202.
Usually in CRUD operations, a value that is not provided means "do not change that field", and an empty array [] means "clear all items inside that field".
But if you try to implement CRUD operations and expose them as services via gRPC, the above behavior is hard to achieve.
service CRUD {
  rpc updateTable(updateRequest) returns (updateResponse) {}
}

message updateRequest {
  repeated string a = 1;
  string b = 2;
}

message updateResponse {
  bool success = 1;
}
If you load the package with the default options, then the client can't delete the items of a with
client.CRUD.updateTable({a: []})
because the argument {a: []} becomes {} by the time it arrives at the server side.
If you load the package with the option {arrays: true}, then the field a will be cleared unintentionally while the client side is only trying to update other fields:
client.CRUD.updateTable({b: 'updated value'})
because the argument {b: 'updated value'} becomes {a: [], b: 'updated value'} by the time it arrives at the server side.
Can anyone share some better ideas on how to handle these two scenarios with grpc-node and proto3?
The protobuf encoding doesn't distinguish between these two cases. Since protobuf is language-agnostic, it doesn't understand the conceptual nuance of "undefined" versus "[]" in JavaScript.
You would need to pass additional information inside the proto message in order to distinguish between the two cases.
I would highly suggest reading the design documentation here: https://developers.google.com/protocol-buffers
I want to compare two Kubernetes API objects (e.g. v1.PodSpecs): one of them was created manually (the expected state), the other was received from the Kubernetes API/client (the actual state).
The problem is that even if the two objects are semantically equal, the manually created struct has zero values for unspecified fields where the other struct has default values, so the two don't match. This means a simple reflect.DeepEqual() call is not sufficient for comparison.
E.g. after this:
expected := &v1.Container{
    Name:  "busybox",
    Image: "busybox",
}
actual := getContainerSpecFromApi(...)
expected.ImagePullPolicy will be "", while actual.ImagePullPolicy will be "IfNotPresent" (the default value), so the comparison fails.
Is there an idiomatic way to replace zero values with default values in Kubernetes API structs specifically? Or, alternatively, is a constructor function that initializes these structs with default values available somewhere?
EDIT:
Currently I am using handwritten equality tests for each Kubernetes API object type, but this doesn't seem maintainable to me. I am looking for a simple (set of) function(s) that "knows" the default values for all built-in Kubernetes API object fields (maybe somewhere under k8s.io/api*?). Something like this:
expected = api.ApplyContainerDefaults(expected)
if !reflect.DeepEqual(expected, actual) {
    reconcile(expected, actual)
}
There are helpers that fill in default values in place of empty/zero ones.
Look at SetObjectDefaults_Deployment for Deployment, for instance.
It looks like the proper way to call it is via (*runtime.Scheme).Default.
Below is the snippet to show the general idea:
import (
    "reflect"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/client-go/kubernetes/scheme"
)

func compare() {
    scheme := scheme.Scheme

    // fetch the existing &appsv1.Deployment via API
    actual := ...

    expected := &appsv1.Deployment{}
    // fill in the fields to generate your expected state
    // ...

    scheme.Default(expected)

    // now you should have your empty values filled in
    if !reflect.DeepEqual(expected.Spec, actual.Spec) {
        reconcile(expected, actual)
    }
}
If you need a less strict comparison, for instance if you need to tolerate some injected containers, then something more relaxed should be used instead.
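As one possible example of such a relaxed comparison (an assumption on my part, not necessarily what was originally linked), apimachinery ships semantic-equality helpers; Semantic.DeepDerivative ignores fields that are left unset in its first argument.
package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/api/equality"
)

// needsReconcile compares only the fields that are actually set in expected;
// fields left at their zero value in expected are ignored.
func needsReconcile(expected, actual *appsv1.Deployment) bool {
    return !equality.Semantic.DeepDerivative(expected.Spec, actual.Spec)
}

func main() {
    expected := &appsv1.Deployment{}
    actual := &appsv1.Deployment{}
    fmt.Println(needsReconcile(expected, actual))
}
Whether this tolerates things like injected containers depends on how much of the expected spec you fill in, so it is a sketch of the general direction rather than a drop-in answer.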
I know from reading around that maps are intentionally unordered in Go, but they offer a lot of benefits that I would like to use for this problem I'm working on. My question is: how might I order a map FIFO style? Is it even worth trying to make this happen? Specifically, I am looking to make it so that I can unmarshal into a set of structures, hopefully off of an interface.
I have:
type Package struct {
    Account   string
    Jobs      []*Jobs
    Libraries map[string]string
}
type Jobs struct {
    // Name of the job
    JobName string `mapstructure:"name" json:"name" yaml:"name" toml:"name"`
    // Type of the job. should be one of the strings outlined in the job struct (below)
    Job *Job `mapstructure:"job" json:"job" yaml:"job" toml:"job"`
    // Not marshalled
    JobResult string
    // For multiple values
    JobVars []*Variable
}
type Job struct {
    // Sets/Resets the primary account to use
    Account *Account `mapstructure:"account" json:"account" yaml:"account" toml:"account"`
    // Set an arbitrary value
    Set *Set `mapstructure:"set" json:"set" yaml:"set" toml:"set"`
    // Contract compile and send to the chain functions
    Deploy *Deploy `mapstructure:"deploy" json:"deploy" yaml:"deploy" toml:"deploy"`
    // Send tokens from one account to another
    Send *Send `mapstructure:"send" json:"send" yaml:"send" toml:"send"`
    // Utilize eris:db's native name registry to register a name
    RegisterName *RegisterName `mapstructure:"register" json:"register" yaml:"register" toml:"register"`
    // Sends a transaction which will update the permissions of an account. Must be sent from an account which
    // has root permissions on the blockchain (as set by either the genesis.json or in a subsequent transaction)
    Permission *Permission `mapstructure:"permission" json:"permission" yaml:"permission" toml:"permission"`
    // Sends a bond transaction
    Bond *Bond `mapstructure:"bond" json:"bond" yaml:"bond" toml:"bond"`
    // Sends an unbond transaction
    Unbond *Unbond `mapstructure:"unbond" json:"unbond" yaml:"unbond" toml:"unbond"`
    // Sends a rebond transaction
    Rebond *Rebond `mapstructure:"rebond" json:"rebond" yaml:"rebond" toml:"rebond"`
    // Sends a transaction to a contract. Will utilize eris-abi under the hood to perform all of the heavy lifting
    Call *Call `mapstructure:"call" json:"call" yaml:"call" toml:"call"`
    // Wrapper for mintdump dump. WIP
    DumpState *DumpState `mapstructure:"dump-state" json:"dump-state" yaml:"dump-state" toml:"dump-state"`
    // Wrapper for mintdump restore. WIP
    RestoreState *RestoreState `mapstructure:"restore-state" json:"restore-state" yaml:"restore-state" toml:"restore-state"`
    // Sends a "simulated call" to a contract. Predominantly used for accessor functions ("Getters" within contracts)
    QueryContract *QueryContract `mapstructure:"query-contract" json:"query-contract" yaml:"query-contract" toml:"query-contract"`
    // Queries information from an account.
    QueryAccount *QueryAccount `mapstructure:"query-account" json:"query-account" yaml:"query-account" toml:"query-account"`
    // Queries information about a name registered with eris:db's native name registry
    QueryName *QueryName `mapstructure:"query-name" json:"query-name" yaml:"query-name" toml:"query-name"`
    // Queries information about the validator set
    QueryVals *QueryVals `mapstructure:"query-vals" json:"query-vals" yaml:"query-vals" toml:"query-vals"`
    // Makes an assertion (useful for testing purposes)
    Assert *Assert `mapstructure:"assert" json:"assert" yaml:"assert" toml:"assert"`
}
What I would like to do is have Jobs contain a map of string to Job and eliminate the Job field, while maintaining the order in which they were placed in the config file (currently using Viper). Any and all suggestions for how to achieve this are welcome.
You would need to hold the keys in a separate slice and work with that.
type fifoJob struct {
    m      map[string]*Job
    order  []string
    result []string
    // Not sure where JobVars will go.
}

func (str *fifoJob) Enqueue(key string, val *Job) {
    str.m[key] = val
    str.order = append(str.order, key)
}

func (str *fifoJob) Dequeue() {
    if len(str.order) > 0 {
        delete(str.m, str.order[0])
        str.order = str.order[1:]
    }
}
Anyway, if you're using Viper, you can use something like the fifoJob struct defined above. Also note that I'm making a few assumptions here.
type Package struct {
    Account   string
    Jobs      *fifoJob
    Libraries map[string]string
}

var config Package
config.Jobs = &fifoJob{}
config.Jobs.m = map[string]*Job{}

// Your config file would need to store the order in an array.
// Would've been easy if viper had a getSlice method returning []interface{}
config.Jobs.order = viper.GetStringSlice("package.jobs.order")

for k, v := range viper.GetStringMap("package.jobs.jobmap") {
    if job, ok := v.(Job); ok {
        config.Jobs.m[k] = &job
    }
}
PS: You're giving too many irrelevant details in your question. I was asking for an MCVE.
Maps are by nature unordered, but you can fill up a slice with your keys instead. Then you can range over your slice and sort it however you like. You can pull out specific elements of your slice with [i].
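A minimal sketch of that idea, using an assumed map of job names purely for illustration; here the keys are sorted alphabetically, but appending keys in insertion order works the same way.
package main

import (
    "fmt"
    "sort"
)

func main() {
    jobs := map[string]string{"deploy": "...", "call": "...", "set": "..."}

    // Collect the keys into a slice, then impose whatever order you like.
    keys := make([]string, 0, len(jobs))
    for k := range jobs {
        keys = append(keys, k)
    }
    sort.Strings(keys)

    // Range over the slice, not the map, to get a deterministic order.
    for i, k := range keys {
        fmt.Println(i, k, jobs[k])
    }
}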
Check out pages 170, 203, and 204 of Programming in Go for some great examples of this.