Validate a GraphQL schema against another reference schema

I'm not quite sure what wording I should be searching for on this.
I have a GraphQL schema which wraps a group of services using graphql-link-schema to perform the data resolution on the client side. The schema is intended to be built against a separate reference schema. How can I programmatically validate that my implementation matches the reference?
For bonus points: is it possible to determine whether a schema is a superset of another?
Thanks in advance (:

It's an interesting use case, but it's a bit unclear how validation like that would work. What causes validation to fail? Any differences between the two schemas? Extra types? Extra fields on existing types? Differences in return types? Differences in arguments or argument types?
Depending on your answers to the above questions, though, you may be able to cobble together your own validation function using the utility functions in graphql-js's findBreakingChanges module. Besides the main findBreakingChanges function itself, some of the utilities available in that module (a sketch using findBreakingChanges follows the list):
findRemovedTypes
findTypesThatChangedKind
findFieldsThatChangedTypeOnObjectOrInterfaceTypes
findFieldsThatChangedTypeOnInputObjectTypes
findTypesRemovedFromUnions
findValuesRemovedFromEnums
findArgChanges
findInterfacesRemovedFromObjectTypes
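For example, here is a minimal sketch of such a check using findBreakingChanges (the SDL file names are made up):

    import { buildSchema, findBreakingChanges } from 'graphql';
    import { readFileSync } from 'fs';

    // Hypothetical SDL files: the reference schema and your wrapping schema.
    const reference = buildSchema(readFileSync('reference.graphql', 'utf8'));
    const implementation = buildSchema(readFileSync('implementation.graphql', 'utf8'));

    // findBreakingChanges(oldSchema, newSchema) reports everything a client of
    // oldSchema would miss in newSchema: removed types, changed field types,
    // changed arguments, and so on. An empty result means the implementation
    // covers everything the reference offers -- extra types and fields are
    // fine, which is effectively the superset check from the bonus question.
    const problems = findBreakingChanges(reference, implementation);

    for (const { type, description } of problems) {
      console.error(`${type}: ${description}`);
    }
    if (problems.length > 0) process.exit(1);

The same module also exports findDangerousChanges, which flags changes that are compatible but potentially surprising to clients.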
If you have a reference or base schema available, though, rather than validating against it, you might also consider extending it when building the second schema. In doing so, you effectively guarantee that the second schema matches the first except in whatever ways you intentionally deviate from it (by extending existing types, etc.). You could use extendSchema for relatively simple changes, or something like graphql-tools' mergeSchemas for more complicated ones.
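A rough sketch of the extendSchema route, with invented types:

    import { buildSchema, extendSchema, parse } from 'graphql';

    // The reference schema, exactly as provided...
    const reference = buildSchema(`
      type Query {
        user(id: ID!): User
      }
      type User {
        id: ID!
        name: String
      }
    `);

    // ...plus only your intentional deviations. Every type and field not
    // mentioned here is carried over from the reference unchanged.
    const wrapped = extendSchema(reference, parse(`
      extend type User {
        avatarUrl: String
      }
    `));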

Related

Is it possible to change a GraphQL schema at runtime and write resolvers that handle this dynamic schema? (in Java)

I am working with GraphQL (in Java) and I would like to find a way to do the following:
I need to be able to constantly adapt the GraphQL schema at runtime, without a restart. In particular, I need to be able to add new fields to GraphQL types. Moreover, I need to be able to write resolvers which can handle this dynamic schema.
I do not have example code yet, so just think of the simplest example (one GraphQL type with several fields that can all be of different types).
My problem is that I am quite new to GraphQL and do not have a lot of experience with it. Of course I looked for a solution on the internet, but I have not found one yet (or just did not notice that I found it, due to my lack of experience with GraphQL).
The only interesting discovery I made is this: exposing dynamic schemas with graphql. But I do not understand how this solution works, because 1) I do not know how to reload the schema at runtime and 2) I do not know how to write the resolvers so that they can handle that dynamic schema.
So can anybody help me with my problem and/or answer my questions regarding the link I found?
I am very thankful for any help, no matter how extensive. As I said before, I am quite new to GraphQL, so I would also be very thankful for links to examples (if possible), so that I can understand better.
Thank you very much in advance.
#userongithub0 you may take a look at GraphQL Schema Directives, and specifically at the rest directive.
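On the two sub-questions (reloading the schema and writing resolvers that can handle it), here is a minimal sketch of the general mechanics in graphql-js terms; the currentSdl store and the catch-all resolver are invented for illustration, and the same rebuild-on-change idea applies when constructing a GraphQLSchema with graphql-java:

    import { graphql, buildSchema } from 'graphql';

    // Hypothetical mutable store for the current SDL; imagine it being
    // updated at runtime, e.g. to add new fields to existing types.
    let currentSdl = `
      type Query {
        greeting: String
      }
    `;

    // A catch-all root value: any field the schema declares is resolved by
    // name, so the resolver code does not need to know the schema's shape
    // in advance.
    const dynamicRoot = new Proxy({}, {
      get: (_target, fieldName) => () => `value for ${String(fieldName)}`,
    });

    // Rebuild the schema from the latest SDL for each execution, so schema
    // edits take effect without restarting the server.
    async function execute(query: string) {
      return graphql({ schema: buildSchema(currentSdl), source: query, rootValue: dynamicRoot });
    }

In practice you would rebuild only when the SDL actually changes, since parsing and building a schema on every request is not free.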
First of all, don't ever try to do that, or do it only in some very strict situations. Here's why:
A schema is like a contract between the front end and the backend; changing it can lead to instability between the two very quickly.
If you change the GraphQL schema on the fly, it might no longer connect properly with your resolvers, and consequently with your database as well.
Whenever there is a change in the schema, the GraphQL server (the server handler, in general) needs to be restarted (recompiled), which takes time and hence results in high response times.
No matter what language you are using, you should always see it as a red flag. In my opinion, it is really bad practice.

Does Spring Data JDBC support inheritance?

I am working on a new project using Spring Data JDBC because it is very easy to work with and indeed splendid.
In my scenario I have three (maybe more in the future) types of projects, so my domain model could easily be modelled with plain old Java objects using type inheritance.
First question:
As I am using Spring Data JDBC, is inheritance even supported the way it is in JPA?
Second question, as an addition to the first one:
I could not find anything regarding this in the official docs, so I am assuming there are good reasons why it is not supported. Speaking of that, might I be on the wrong track modelling entities with inheritance in general?
Currently Spring Data JDBC does not support inheritance.
The reason for this is that inheritance makes things rather complicated, and it is not at all clear what the correct approach is.
I have a couple of vague ideas of how one might create something usable. Different repositories per type is one option; using a single type for persistence, but with some post-processing to obtain the correct type upon reading, is another.

Protocol buffers: read-only fields?

Is it possible to mark fields as read only in a .proto file such that when the code is generated, these fields do not have setters?
Ultimately, I think the answer here will be "no". There's a good basic guidance rule that applies to DTOs:
DTOs should generally be as simple as possible to convey the data for serialization in a manner well-suited to the specific serializer.
if that basic model is sufficient for you to work with above that layer, then fine
but if not: do not fight the serializer; instead, create a separate domain model above the DTO layer, and simply map between the two models before serialization or after deserialization
Or put another way: the fact that the generator doesn't want to expose read-only members is irrelevant, because if you need something exotic, you shouldn't be using the generated type outside of the code that directly touches serialization. So, in your domain type that mirrors the DTO, make it read-only there.
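To make that split concrete, here is a small sketch; the PersonDto shape is a stand-in for whatever your generator actually emits:

    // Hypothetical DTO, mirroring a generated type: everything is settable,
    // because the serializer needs to be able to populate it.
    interface PersonDto {
      id: number;
      name: string;
    }

    // Domain model above the DTO layer: read-only is enforced here, not in
    // the generated code.
    class Person {
      constructor(readonly id: number, readonly name: string) {}
    }

    // Map between the two models at the serialization boundary.
    const fromDto = (dto: PersonDto): Person => new Person(dto.id, dto.name);
    const toDto = (p: Person): PersonDto => ({ id: p.id, name: p.name });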
As for why read-only fields aren't usually a thing in serialization tools: you presumably want to be able to give it a value. Serialization tools usually want to be able to write everything they can read, and read everything they can write.
Minor note for completeness since you mention C#: if you are using a code-first approach with protobuf-net, it'll work fine with {get;}-only auto-props, and with {get;}-only manual props if all public members trivially map to an obvious constructor.

How can I enforce correct construction whilst respecting the golang CodeReviewComments rule on interfaces?

The Interfaces rule in the official Go Code Review Comments document says that packages should return concrete types rather than interfaces. The motivation for this is so that:
...new methods can be added to implementations without requiring extensive refactoring.
which I accept could be a good thing.
But what if a type I'm writing has a dependency without which it cannot serve its purpose? If I export the concrete type, developers will be able to instantiate instances without that dependency. To code defensively for the missing dependency, I then have to check for it in every method implementation and return errors if it is absent. If the developer missed any hints not to do this in my documentation, she or he won't learn about the problem until run time.
On the other hand, if I declare and return an interface with the methods the client needs, I can unexport the concrete type and enforce the use of a factory method which accepts the dependency as an argument and returns the interface plus an error. This seems like a better way to ensure correct use of the package.
Am I somehow not properly getting into the go spirit by thinking like this? Is the ethic of the language that it's okay to have a less-than-perfect encapsulation to give more flexibility to developers?
You may expect developers to read the docs you provide, and you may rely on them following the rules you set. Yes, lazy developers will bump their heads from time to time, but the process of developing is not without learning. Everything cannot be made explicit or enforced, and that's all right.
If you have an exported struct type Example and you provide a constructor function NewExample(), that's a clear indication that NewExample() should be used to construct values of Example. Anyone attempting to construct Example manually is expected to know what fields must be set for it to be "operational". The aim is always to make the zero value fully functional, but if that can't be achieved, the constructor function is the idiomatic way to go.
This isn't uncommon; there are countless examples in the standard library, e.g. http.Request, json.Encoder, json.Decoder, io.SectionReader, template.Template.
What you must ensure is that if your package returns values of your structs, they must (should) be properly initialized. Also, if others are expected to pass in values of your structs that they created themselves, you must provide an easy way for them to create valid values (a constructor function). Whether the custom struct values other developers create are "valid" shouldn't be your concern.

Best practice to design a graphql schema with numerous object types?

Every example of a GraphQL schema I have found consists of just a couple of ObjectTypes which all make sense in the global context of the schema, like "Person", "Dog", "Animal".
In real-life scenarios there are way more types. More importantly, there might be types that make sense only within a group of other types but are named similarly, e.g. StatisticsType. A StatisticsType held by a field inside a GroupType would be different from one held inside a PresentationType. We could call them GroupStatisticsType and PresentationStatisticsType; however, following this approach I end up with types with very, very long names.
As far as I know, all types must be named uniquely within the schema, and they all end up thrown into, so to say, one scope. Is there some design pattern or best practice, or am I missing something, that helps to design a decent schema?
Any example of a schema with numerous types (20+ or so) might also be helpful.
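For what it's worth, the prefixing approach described above tends to stay manageable in practice, because the long type names appear only in the type definitions, while the fields that reference them keep short names. A made-up sketch:

    import { buildSchema } from 'graphql';

    // One flat namespace, disambiguated by prefixes; consumers still query
    // `group { statistics { ... } }`, so the long names rarely show up in
    // client code.
    const schema = buildSchema(`
      type GroupStatistics { memberCount: Int }
      type PresentationStatistics { viewCount: Int }

      type Group { statistics: GroupStatistics }
      type Presentation { statistics: PresentationStatistics }

      type Query {
        group(id: ID!): Group
        presentation(id: ID!): Presentation
      }
    `);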
