Properly handling gRPC types with a SQL ORM - Go

I'm implementing a gRPC server and using an ORM called SQLBoiler for Go. I've gotten to the point where I've built all my models and protos, and now I'm handling specific type issues.
Within my protobuf files I've declared most types as strings. However, my ORM (due to how it interfaces with Postgres) represents nullable strings as null.String in its models. Hence, when I attempt something like the following I get an error:
user := &models.User{
    FirstName:   req.FirstName,
    LastName:    req.LastName,
    Email:       req.Email,
    Gender:      req.Gender,
    PhoneNumber: req.PhoneNumber,
}
The error: services/users.go:46:3: cannot use req.FirstName (type string) as type null.String in field value
The question is what would be the most appropriate way of handling this situation. I could generate a new struct (which I'm already using for data validation) and convert its fields into null.String, or I suppose I could attempt to force the conversion somehow?
The question then becomes: if a user doesn't supply a value in the API and the validation process allows it, how would the resulting non-existent value be handled?
Edit: It seems the SQLBoiler developers have a separate package for handling nulls, i.e. null.StringFrom(), null.Int16From(), etc., with which I'd in essence wrap all my fields, like so:
user := &models.User{
    FirstName:        null.StringFrom(req.FirstName),
    LastName:         null.StringFrom(req.LastName),
    Email:            null.StringFrom(req.Email),
    Password:         string(password_hash),
    Role:             "BASIC_USER",
    Status:           "ACTIVE",
    Gender:           null.Int16From(req.Gender),
    PhoneNumber:      null.StringFrom(req.PhoneNumber),
    VerificationCode: null.StringFrom(code.String()),
}
Feels kind of dirty but I don't see any other options. Suggestions/opinions anyone?
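For the optional fields, a small helper could at least keep the wrapping consistent. A minimal sketch, assuming the volatiletech null package that SQLBoiler generates against (the import path may differ by version) and treating an empty string as "not provided", so such fields end up as SQL NULL rather than empty strings:

package helpers

import "github.com/volatiletech/null/v8"

// nullIfEmpty is a hypothetical helper: it returns an invalid (SQL NULL)
// value when the request field is empty, and a valid null.String otherwise.
func nullIfEmpty(s string) null.String {
    return null.NewString(s, s != "")
}

// Usage in the handler, assuming the req fields are plain strings:
//     FirstName:   nullIfEmpty(req.FirstName),
//     PhoneNumber: nullIfEmpty(req.PhoneNumber),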

Related

Interface vs Union in GraphQL schema design

Suppose I am building a GraphQL API that serves a timeline of natural disaster events.
There are two different kinds of event right now:
Hurricane
Earthquake
All events have an ID and a date they occurred. I plan to have a paginated query for fetching events using cursors.
I can think of 2 different approaches to modelling my domain.
1. Interface
interface Event {
  id: ID!
  occurred: String! # ISO timestamp
}
type Earthquake implements Event {
  epicenter: String!
  magnitude: Int!
}
type Hurricane implements Event {
  force: Int!
}
2. Union
type Earthquake {
  epicenter: String!
  magnitude: Int!
}
type Hurricane {
  force: Int!
}
union EventPayload = Earthquake | Hurricane
type Event {
  id: ID!
  occurred: String! # ISO timestamp
  payload: EventPayload!
}
What are the trade-offs between the two approaches?
I believe that:
unions are about providing: a field (or its resolver function) resolves with an object whose type belongs to a specific, known set of types.
interfaces are about requesting: without them, clients would have to repeat the fields they are interested in within every type fragment.
They serve different purposes, and they can be used together:
interface I {
  id: ID!
}
type A implements I {
  id: ID!
  a: Int!
}
type B implements I {
  id: ID!
  b: Int!
}
type C implements I {
  id: ID!
  c: Int!
}
union Foo = A | C
type Query {
  foo: Foo!
}
This schema declares that A, B, and C have some fields in common, so that it's easier for the client to request them, and that querying foo can only yield A or C.
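For illustration (a sketch, not part of the original schemas), a client query against foo can then use the interface for the shared field and inline fragments for the union members:
query {
  foo {
    ... on I {
      id # shared field, requested once through the interface
    }
    ... on A {
      a
    }
    ... on C {
      c
    }
  }
}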
Could you write foo: I! instead? While this would work seamlessly, I believe it leads to a bad development experience. If you're saying that foo provides an I object, your clients should be prepared to receive any of the implementing types, including B, and would spend time writing and maintaining code that will never be called. If you know that foo can only yield A and C, please tell them explicitly.
The same holds if foo were to yield A, B, or C. It happens that this is exactly the list of types that implement I, so in this case, could you write foo: I!? No! Don't be fooled by that. Why? Because this list can be expanded through federation / schema stitching! I believe it's a seldom-used feature of some GraphQL frameworks, but its adoption is growing. If you've never used it, please try it; it will open your mind to new ideas of inter-micro-service communication and other Medium buzzwords. In short, imagine you're making a public API, or even a somewhat-public one within an organization. Someone else could "augment" your API by providing extra stuff. This may include new types implementing your interface. And so we're back to the previous paragraph.
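As a sketch of that scenario, another service participating in stitching or federation could contribute a hypothetical new implementer of the interface, silently widening what a foo: I! field might return:
# Contributed by another service: a new type implementing the existing interface.
type D implements I {
  id: ID!
  d: Int!
}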
So far, it looks like I'm in favor of your first code.
However, and this may be specific to this scenario, it seems to me that your definition of an event mixes data about its occurrence with its physical metrics. Your second code splits them into two type hierarchies. I like that. It feels more architecture-friendly. Your schema is more open. Imagine your API is about event history, and someone enhances it with forecasts: your EventPayload can be reused!
Besides, note that your first example is incomplete. Types implementing an interface must implement, i.e. repeat, every single field of that interface, as I did in the code above. This becomes harder to maintain as the number of fields and the number of implementing types grow.
So, the second solution also has some advantages. But in doing so, the blah-blah I made earlier about being specific with returned types becomes hard to implement, because the payload, which is the part to be specific about, is embedded into another type, and there's no such thing as generics in GraphQL.
Here's a proposal to reconcile all of that:
interface HasForce {
  force: Int!
}
type Earthquake {
  epicenter: String!
  magnitude: Int!
}
type Hurricane implements HasForce {
  force: Int!
}
type Tsunami implements HasForce {
  force: Int!
}
interface Event {
  data: EventData!
}
type EventData {
  id: ID!
  occurred: String!
}
union HistoryMeteorologicalPhenomenon = Earthquake | Hurricane
type HistoryEvent implements Event {
  data: EventData!
  meteorologicalPhenomenon: HistoryMeteorologicalPhenomenon!
}
type Query {
  historyEvents: [HistoryEvent!]!
}
It looks a bit more complex than both of your proposals, but it fulfills my needs. Also, it's rare to look at a schema from this height: more often, we know the entry point and dig down from there. For instance, I open the documentation at historyEvents, see that it yields phenomena of two kinds, and that's fine; I don't need to be aware that other union and event types exist.
If you were to write a lot of these union + event pairs, you could generate them with code instead, whereby one function call would declare a pair. Less error-prone, more fun to implement, and with more potential for Medium articles.
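A rough sketch of what such a generator could look like (a hypothetical JavaScript helper, not tied to any particular framework):
// One call declares a union + event type pair as SDL.
const capitalize = (s) => s[0].toUpperCase() + s.slice(1);

const declareEventPair = (prefix, fieldName, members) => `
  union ${prefix}${capitalize(fieldName)} = ${members.join(' | ')}

  type ${prefix}Event implements Event {
    data: EventData!
    ${fieldName}: ${prefix}${capitalize(fieldName)}!
  }
`;

// declareEventPair('History', 'meteorologicalPhenomenon', ['Earthquake', 'Hurricane'])
// produces the HistoryMeteorologicalPhenomenon / HistoryEvent pair above.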
Note that the GraphQL structure is independent of your storage structure. It's possible to have multiple GraphQL objects providing data from the same insert-your-language-here object, e.g. one yielded by your DB driver. There may be a tiny overhead that I haven't benchmarked, but providing a cleaner API outweighs that for me. The basic idea is that resolver functions just have to resolve with the same source, so that the resolver functions of the related type are called with the same source object.
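For instance, resolvers for the proposed schema might look like this (a sketch with hypothetical resolver and row field names, assuming a JavaScript server in the graphql-js / Apollo style):
const resolvers = {
  Query: {
    // db is a placeholder for whatever data access layer you use.
    historyEvents: () => db.events.findAll(),
  },
  HistoryEvent: {
    // Both fields resolve from the same source row: the GraphQL shape
    // does not have to mirror the storage shape.
    data: (event) => ({ id: event.id, occurred: event.occurred }),
    meteorologicalPhenomenon: (event) => event,
  },
  HistoryMeteorologicalPhenomenon: {
    // The union needs a way to tell its members apart at runtime.
    __resolveType: (event) => (event.magnitude != null ? 'Earthquake' : 'Hurricane'),
  },
};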

Can one combine two types to make a third in GraphQL schema syntax?

I have a feeling this will be deemed Not How You Do It In GraphQL, but I'm pretty new to it, so please be patient and verbose with me.
Let's say I've got two GraphQL types that I'd like to be able to utilize separately:
type UserSpecs {
  name: String!
  age: Int!
  bio: String!
}
type UserCollections {
  interests: [Interest]
  buddies: [Relationship]
  chats: [Chat]
}
type Query {
  updateCollections(collections: UserCollections): User
  updateUserSpecs(specs: UserSpecs): User
}
In my .gql file, I'd like to also define the User type as the combination of UserSpecs and UserCollections, though. In TypeScript, for instance, one would do this:
type User = UserSpecs & UserCollections
Short of manually duplicating the contents of UserSpecs and UserCollections into a third type, which would not be DRY and would create two sources of truth to maintain, does the GraphQL schema syntax have any way of combining two types to make a third?
Similarly, if it's possible to create a User type and then disassemble it into the UserSpecs and UserCollections types I'm after, that would be equally helpful.
Thank you in advance!

GraphQL ID resolves as a string even if it's an integer

I'm new to GraphQL and hope someone can explain this ID type to me, since it always comes back as a string.
As said in the docs:
The ID scalar type represents a unique identifier, often used to re-fetch an object or as a key for a cache.
If you use, for example, some caching client like Apollo, each type should have at least one ID. This allows us to perform a normalization of queries, making it possible for us to update things in Apollo internal redux store automatically based on the unique id
OK, so I can use an int, but how then do I get my id as an integer on the client side?
The reason is simple: let's say I have a Book type with an id of type ID and an author_id relation of type Int. I also have an Author type with an id of type ID. After I fetch a book and an author, I will have book.author_id as an int and author.id as a string, but it's the same number!
What should I do? Use ID everywhere, even for many-to-many relations? Make a new scalar ID type that can be used as an ID for re-fetching but will be of type Int?
From the spec:
The ID type is serialized in the same way as a String; however, it is not intended to be human‐readable. While it is often numeric, it should always serialize as a String... GraphQL is agnostic to ID format, and serializes to string to ensure consistency across many formats ID could represent, from small auto‐increment numbers, to large 128‐bit random numbers, to base64 encoded values, or string values of a format like GUID.
It's unclear why the client would care about comparing IDs in this context -- columns like author_id should generally be hidden from the client anyway, with the schema only exposing the related entity, not fields that are only used to link entities. That said, an ID is just an ID and a client shouldn't care whether it's a string or an integer as long as it's consistent. If you have one field returning an integer (Book.author_id) and another returning a string (Author.id), then that's a problem on the part of your schema.
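In schema terms, that means exposing something like this (a minimal sketch with hypothetical fields), so the client works with related objects rather than raw foreign keys:
type Book {
  id: ID!
  title: String!
  author: Author! # expose the related entity instead of author_id: Int
}
type Author {
  id: ID!
  name: String!
}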
The ID scalar can be used for any number of fields, not just the one field (which may or may not be named id). Similarly, if you want to use Int or String as the type for your id field you can -- this will not impact Apollo's ability to cache your results.
In Apollo you can use typePolicies to determine which field is used as the unique identifier. That will ease the pain of the ID! type being converted to a string.
const typePolicies = {
  Book: {
    keyFields: ['id'],
  },
  BookTag: {
    keyFields: ['book_id', 'tag_id'],
  },
}

return new ApolloClient({
  cache: new InMemoryCache({ typePolicies }),
})

Is it bad practice to use an Input Type for a GraphQL query?

I have seen that using an Input Type is recommended in the context of mutations, but nothing is said about queries.
For instance, the Learn tutorial just says:
This is particularly valuable in the case of mutations, where you might want to pass in a whole object to be created
I have this query:
type Query {
  person(personID: ID!): Person
  brazilianPerson(rg: ID!): BrazilianPerson
  foreignerPerson(passport: ID!): ForeignerPerson
}
Instead of having a different type just because of the name of the field (rg, passport), or adding one more argument like type to the query, couldn't I just have Person with a documentNr field and use an input type like this?
input PersonInput {
  documentNr: ID!
  type: PersonType # this type is Foreigner or Brazilian, and with this I know which document it is
}
PersonType is an enum, and with it I know whether the document is an rg or a passport.
No, there is nothing incorrect about your approach. The GraphQL spec allows any field to have an argument and allows any argument to accept an Input Object Type, regardless of the operation. In fact, the differences between a query and a mutation are largely symbolic.
It's worth pointing out that any field can accept an argument -- not just ones at the root level. So if it suited your needs, you could easily set up a schema that would allow queries like:
query {
  person(id: 1) {
    powers(onlyMutant: true) {
      name
    }
  }
}
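Applied to the question's example, the input type could be wired up like this (a sketch, assuming a single person field, the PersonType enum described in the question, and a Person type with a name field):
enum PersonType {
  BRAZILIAN
  FOREIGNER
}

input PersonInput {
  documentNr: ID!
  type: PersonType!
}

type Query {
  person(input: PersonInput!): Person
}

# Example query:
# query {
#   person(input: { documentNr: "1234", type: BRAZILIAN }) {
#     name
#   }
# }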

How do I setup query cache results for built in doctrine2 repository functions?

I have a site for a video game I play, and I'm working on improving the site's performance by implementing some additional caching. I've already been able to implement query result caching on custom repository functions, but I haven't been able to find anywhere that explains how I can enable query result caching on the built-in functions (findOneById, etc.). I'm interested in doing this because many of my database queries are executed through these 'native' repository functions.
So as an example I have a character entity object with the following properties: id, name, race, class, etc.
Race and class in this object are references to other entity objects for race and class.
When I load a character for display I get the character by name (findOneByName) and then in my template I display the character's race/class by $characterObject->getRace()->getName(). These method calls in the template result in a query being run on my Race/Class entity tables fetching the entity by id (findOneById I assume).
I've attempted to create my own findOneById function in the repository, but it is not called under these circumstances.
How can I setup doctrine/symfony such that these query results are cache-able?
I am running Symfony 2.1.3 and Doctrine 2.3.x.
I've found out that it isn't possible to enable the query cache on Doctrine's built-in functions. I will post a link that explains why later, after I find it again.
Your entities probably look something like this:
MyBundle\Entity\Character:
    type: entity
    table: Character
    fields:
        id:
            id: true
            type: bigint
        name:
            type: string
            length: 255
    manyToOne:
        race:
            targetEntity: Race
            joinColumns:
                raceId:
                    referencedColumnName: id

MyBundle\Entity\Race:
    type: entity
    table: Race
    fields:
        id:
            id: true
            type: bigint
        name:
            type: string
            length: 255
    oneToMany:
        characters:
            targetEntity: Character
            mappedBy: race
If that's the case, then modify your Character entity mapping so that it eagerly loads the Race entity as well:
MyBundle\Entity\Character:
    ...
    manyToOne:
        race:
            targetEntity: Race
            joinColumns:
                raceId:
                    referencedColumnName: id
            fetch: EAGER
Doctrine documentation on the fetch option: #ManyToOne
