What's the best way to validate an untagged Go struct - validation

Whenever I create a Go struct, I use struct tags so that I can easily validate it using go-playground/validator - for example:
type DbBackedUser struct {
	Name sql.NullString `validate:"required"`
	Age  sql.NullInt64  `validate:"required"`
}
I'm currently integrating with a third party. They provide a Go SDK, but the structs in the SDK don't contain validate tags, and they are unable/unwilling to add them.
Because of my team's data integrity requirements, all of the data pulled in through this third-party SDK has to be validated before we can use it.
The problem is that there doesn't seem to be a way to validate a struct using the Validator package without defining validation tags on the struct.
Our current proposed solution is to map all of the data from the SDK's structs into our own structs (with validation tags defined on them), validate those, and then, if validation passes, use the data directly from the SDK's structs (we need to pass the data around our system using the structs defined by the SDK, so the tagged structs would be used only for validation). The problems here are that we would have to maintain structs that exist solely for validation, and that the mapping between structs has both time and memory costs.
Is there a better way of handling struct validation in this scenario?

An alternative technique is to use validating constructors: NewFoobar(raw []byte) (Foobar, error)
But I see that go-validator supports your case with a "struct-level validation":
https://github.com/go-playground/validator/blob/master/_examples/struct-level/main.go

Related

How to unmarshal protobuf data into custom struct in Golang

I have a proxy service that translates protobuf into another struct. I could write manual code to do that, but it is inefficient boilerplate. I could also transform the protobuf data to JSON and deserialize the JSON into the destination struct, but that is slow and CPU heavy.
The Unmarshaler interface is now deprecated, and the Message interface has internal types which I cannot implement in my project.
Is there a way I can do this now?
Is there a way I can do this now?
Pseudocode: basically, if Go's reflection supports setting and getting struct/class fields by some sort of field identifier, then you can do this. Something like the following works in C#, as long as the field types in the two classes are the same (because in C#, I'm doing object = object, which ends up being OK if they're the same actual type).
var sourceStruct = new SourceStructType();
var destStruct = new DestStructType();
foreach (FieldInfo sourceField in sourceStruct.GetType().GetFields())
{
    FieldInfo destField = destStruct.GetType().GetField(sourceField.Name);
    destField.SetValue(destStruct, sourceField.GetValue(sourceStruct));
}
If the structs are more complex - i.e. they have structs within them, then you'll have to recurse down into them. It can get fiddly, but once written you'll never have to write it ever again!

Why are unexported struct fields purposely not marshalled in the json package

In the json package, if you want to use tags to marshal and unmarshal structs, the fields must be exported. I always thought this was a limitation of the reflect package, but it seems to be explicitly rejected here.
I would like to know what I am missing, or the design choices behind this, as it normally creates one of the following issues:
Exporting more fields than I want, allowing users to change values without going through potential getters and setters.
A lot of extra code if you choose to solve the above problem by making a second, identical but unexported struct just for tag usage.
Here is a playground of code being able to read unexported tags.
Edit: Better playground thanks to @mkopriva

Golang gRPC database serialization key format defined on struct

I want to use the Go structs generated by the gRPC compiler directly for database transactions, but the problem is that gRPC only sets the json serialization tags.
Is there a way to either set additional serialization keys (like shown below) or is there another golang specific way to tell the database driver (sqlx on top of database/sql) that the json key format should be used?
Some example - The gRPC compiler creates the following struct:
type HelloWorld struct {
	TraceId string `protobuf:"bytes,1,opt,name=trace_id,json=traceId,proto3" json:"trace_id,omitempty"`
	...
}
What I would like to have:
type HelloWorld struct {
	TraceId string `db:"trace_id" protobuf:"bytes,1,opt,name=trace_id,json=traceId,proto3" json:"trace_id,omitempty"`
	...
}
A temporary workaround would be to write sql queries that use aliases (traceid instead of trace_id in this example) but it doesn't feel consistent and adds a lot of complexity.
I think that currently there is no built-in way of doing this. However, you might be interested in following this thread: https://github.com/golang/protobuf/issues/52
Other than that I think you can just create yet another struct for database access and make the mapping explicit which might be more readable.

Input validation with directives

I'm currently working on a server using GraphQL and I'm stuck on implementing input validation with directives. What I'm trying to do is to add directives to input types which allow me to verify the input given before passing it to the actual data fetcher.
schema:
directive @email on INPUT_FIELD_DEFINITION

type Mutation {
	getAccounts(filter: InputAccount): [Account]
}

input InputAccount {
	email: String @email
}
I've done the wiring and schema-building part, and the mutation works, but I'm having problems implementing the logic that actually validates an email (e.g. the email has to contain "@gmail.com").
Input validation through directives does not work out of the box. I suggest two ways of solving this.
Without an external library - the resolver is in charge of validation
This is the most basic solution. Your data fetcher (resolver) must return a DataFetcherResult, which may contain one or more GraphQLErrors. In the data fetcher you can implement your validation, populate the DataFetcherResult with GraphQLErrors, and, if no error is found, perform your mutation.
This can be improved by mapping your GraphQL input object to a POJO, as graphql-java-tools does, and using javax validation annotations and validators to validate your input before processing it.
With the GraphQL Java Extended Validation library - https://github.com/graphql-java/graphql-java-extended-validation
This library does what you want and provides some basic directives and associated constraints. Be aware that the library is pretty new (there are a couple of bugs regarding nested input validation).

Similar to .net attributes in Go

What is similar to .NET attributes in Go, or how could this be achieved?
Perhaps the most similar mechanism is Struct Tags. Not the most elegant, but they can be evaluated at runtime and provide metadata on struct members.
From the reflect package documentation: type StructTag
They are used, for example, in JSON and XML encoding for custom element names.
For example, using the standard json package, say I have a struct with a field I don't want to appear in my JSON, another field I want to appear only if it is not empty, and a third one I want to refer to with a different name than the struct's internal name. Here's how you specify it with tags:
type Foo struct {
	Bar string `json:"-"`          // will not appear in the JSON serialization at all
	Baz string `json:",omitempty"` // will only appear in the JSON if not empty
	Gaz string `json:"fuzz"`       // will appear with the name fuzz, not Gaz
}
I'm using it to document and validate parameters in REST API calls, among other uses.
If you keep the optionally space-separated key:"value" syntax, you can use the Get method of StructTag to access the values of individual keys, as in the example.
