GraphQL custom scalar type for HTML structure

During her brilliant presentation about scaling GraphQL, Leanne Shapton showed some best practices.
One of the most attractive to me was the custom scalar type for HTML structure (in the video it's at [10:16]).
She proposed using a custom HTML scalar instead of a plain String.
I'd like to see an implementation of this scalar, or to hear how you handle these cases, as I'm currently using String for any HTML structure, which doesn't seem like the best approach.
I'm not asking how to create scalar types or for general information about what a scalar type is. I'm wondering whether someone already handles HTML this way and has a working solution.

At a pure GraphQL level, the only thing you can (and must) do is include a definition for the scalar type:
scalar HTML
Once you've done that, you can use it as a type, as shown in the slide you cite. In queries and results it will appear as some sort of scalar (string or numeric) value.
Different server and client libraries have different ways of dealing with this; whether there is a clean way to map a specific GraphQL scalar type to a native-language object depends on the library. In graphql-js, for example, a GraphQLScalarType object takes parseValue and serialize functions to convert between the two representations. If you're just using a custom scalar type as a tagged string, these can be very simple functions.
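For instance, a minimal graphql-js sketch might look like the following; treating HTML as a tagged string with no extra validation is my assumption here, not something from the talk:

import { GraphQLScalarType, Kind } from "graphql";

// Minimal HTML scalar that simply passes strings through.
const HTMLScalar = new GraphQLScalarType({
  name: "HTML",
  description: "A string containing an HTML fragment.",
  // Outgoing values (server -> client).
  serialize(value) {
    return String(value);
  },
  // Incoming variable values (client -> server).
  parseValue(value) {
    if (typeof value !== "string") {
      throw new TypeError("HTML must be provided as a string");
    }
    return value;
  },
  // Values written inline in the query document.
  parseLiteral(ast) {
    if (ast.kind !== Kind.STRING) {
      throw new TypeError("HTML must be provided as a string literal");
    }
    return ast.value;
  },
});

You would then register this object as the resolver for the HTML scalar declared in the schema; any real sanitization or structural validation of the markup would go inside parseValue and parseLiteral.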

Related

How to achieve dynamic custom fields of different data type using gRPC proto

I'm looking for a solution in gRPC/protobuf to implement dynamic fields of different data types for a multi-tenant application.
Also, there can be any number of dynamic fields, depending on the tenant.
Using map in proto, I can define a separate map for each data type. Is there a more optimized way to achieve this?
Any help on this is appreciated.
There are a few different ways of transferring dynamic content in protobuf. Which is ideal varies depending on your use case. The options are ordered by their dynamism. Less dynamic options normally have better performance.
Use google.protobuf.Any in proto3. This is useful when you want to store arbitrary protobuf messages and is commonly used to provide extension points. It replaces extensions from proto2. Any has a child message and its type, so your application can check at runtime if it understands the type. If your application does not know the type, then it can copy the Any but can't decode its contents. Any cannot directly hold scalar types (like int32), but each scalar has a wrapper message that can be used instead. Because each Any includes the type of the message as a string, it is poorly suited if you need lots of them with small contents.
Use the JSON mapping message google.protobuf.Value. This is useful when you want to store arbitrary schemaless JSON data. Because it does not need to store the full type of its contents, a Value holding a ListValue of number_values (doubles) will be more compact on-the-wire than repeated Any. But if a schema is available, an Any containing a message with repeated double will be more compact on-the-wire than Value.
Use a oneof that contains each permitted type. Commonly a new message type is needed to hold the oneof. This is useful when you can restrict the schema but values have a relationship, like if the position of each value in a list is important and the types in the list are mixed. This is similar to Value but lets you choose your own types. While technically more powerful than Value it is typically used to produce a more constrained data structure. It is equal to or more compact on-the-wire than Value. This requires knowing the needed types ahead-of-time. Example: map<string, MyValue>, where MyValue is:
message MyValue {
  oneof kind {
    int32 int_value = 1;
    string string_value = 2;
  }
}
Use a separate field/collection for each type. For each type you can have a separate field in a protobuf message. This is the approach you were considering. This is the most compact on-the-wire and most efficient in memory. You must know the types you are interested in storing ahead of time. Example: map<string, int32> int_values = 1; map<string, string> string_values = 2.
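As a rough illustration of that last option, here is a sketch using protobufjs; the TenantFields message, its field names, and the sample values are made up for this example, and the original answer doesn't prescribe any particular library:

import * as protobuf from "protobufjs";

// Option 4 as a concrete schema: one map per value type.
const { root } = protobuf.parse(`
  syntax = "proto3";
  message TenantFields {
    map<string, int32>  int_values    = 1;
    map<string, string> string_values = 2;
  }
`);
const TenantFields = root.lookupType("TenantFields");

// protobufjs exposes snake_case proto fields as camelCase properties.
const message = TenantFields.create({
  intValues: { seats: 25 },
  stringValues: { plan: "enterprise" },
});

const bytes = TenantFields.encode(message).finish();
const decoded = TenantFields.toObject(TenantFields.decode(bytes));
console.log(decoded);

Each tenant populates only the maps it needs; entries that aren't set add nothing on the wire.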

Validate a GraphQL schema against another reference schema

I'm not quite sure of the wording I should be searching for on this.
I have a GraphQL schema which wraps a group of services using graphql-link-schema to perform the data resolution on the client side. The schema is intended to be built against a separate reference schema. How can I programmatically validate that my implementation matches the reference?
For bonus points- is it possible to determine whether a schema is a superset of another?
Thanks in advance (:
It's an interesting use case, but it's a bit unclear how validation like that would work. What causes validation to fail? Any differences between the two schemas? Extra types? Extra fields on existing types? Differences in return types? Differences in arguments or argument types?
Depending on your answers to the above questions, though, you may be able to cobble together your own validation function using the utility functions in graphql-js's findBreakingChanges module. Besides the main findBreakingChanges function, that module exposes utilities such as:
findRemovedTypes
findTypesThatChangedKind
findFieldsThatChangedTypeOnObjectOrInterfaceTypes
findFieldsThatChangedTypeOnInputObjectTypes
findTypesRemovedFromUnions
findValuesRemovedFromEnums
findArgChanges
findInterfacesRemovedFromObjectTypes
If you have a reference or base schema available, though, rather than validating against it, you might also consider extending it when building the second schema. In doing so, you would effectively guarantee that the second schema matches the first except in whatever ways you intentionally deviate from it (by extending existing types, etc.). You could use extendSchema for relatively simple changes, or something like graphql-tools' mergeSchemas for more complicated changes.
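If you do go the validation route, a rough sketch using findBreakingChanges might look like this; the SDL strings are placeholders, and treating "no breaking changes" as "the implementation is a compatible superset of the reference" is an assumption about what you want to check:

import { buildSchema, findBreakingChanges } from "graphql";

// Reference schema the implementation is supposed to satisfy.
const reference = buildSchema(`
  type Query {
    post(id: ID!): Post
  }
  type Post {
    id: ID!
    body: String
  }
`);

// Implemented schema: extra fields are fine, removals or type changes are not.
const implementation = buildSchema(`
  type Query {
    post(id: ID!): Post
    posts: [Post]
  }
  type Post {
    id: ID!
    body: String
  }
`);

const problems = findBreakingChanges(reference, implementation);
if (problems.length > 0) {
  throw new Error(problems.map((p) => p.description).join("\n"));
}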

Conditional args in GraphQL mutations?

Let's say I have a mutation that takes a type arg. Depending on the value of type, the mutation should either accept another arg (an input type) or be called without it.
How do I implement this in GraphQL? I know that for queries there are @skip and @include directives (for fields, not for args). Is there something similar for mutations? Or should I just make the additional arg optional and then do the validation on the server?
There'll be a range of opinions on this. The main problem is that because you can't define unions for input types, you can't model inputs exhaustively at the schema level. By this I mean that if you need deeper validation than just required/not-required, GraphQL's type system won't help you.
At the moment I lean towards handling all complex validation in the mutation function itself. Essentially mark all input arguments as not required and let it fall through to a validation method of your choosing.
For simple mutations, like do_foo_with_bar(bar_id: Int!), I'd still let the schema handle validation. But for more complex things (like an elaborate form), you're going to have an easier time if you do things in code.
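As a rough sketch of what doing it in code might look like, here the createWidget mutation, the CUSTOM type value, and the details input are all invented for illustration:

// All arguments are optional at the schema level; the resolver enforces
// which combinations are actually allowed.
export const resolvers = {
  Mutation: {
    createWidget(_parent: unknown, args: { type: string; details?: { label: string } }) {
      if (args.type === "CUSTOM" && !args.details) {
        throw new Error("details is required when type is CUSTOM");
      }
      if (args.type !== "CUSTOM" && args.details) {
        throw new Error("details is only allowed when type is CUSTOM");
      }
      // Input combination is valid; perform the actual mutation here.
      return { id: "1", type: args.type, label: args.details?.label ?? null };
    },
  },
};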

validate request input phoenix elixir

I'm struggling to find something in the documentation that seems like it should be there...
In Phoenix I see validation at the point of trying to create an Ecto changeset, but I'm not seeing much prior to that for validating the actual user input.
I'm not really a fan of exposing my data models across API boundaries, and I would rather just have structs representing the requests and responses, as they are likely very different shapes to my actual data models.
I'd like a way of converting user input to a struct and using some kind of validation framework to determine if the input is valid before I even think about hitting a database.
I've found https://github.com/CargoSense/vex and have gone down the route of converting the input to a struct, and using their validation, but there are a few things that worry me about this approach, namely:
I hear that there are issues with atoms in Elixir, and since structs are basically atom-keyed maps, am I going to run into the atom-exhaustion issue by converting user input to these?
I also have some structs that would contain nested structs. I'm currently checking the default value provided and, if it's a struct, doing some magic based on the answers in "In Elixir how do you initialize a struct with a map variable" to automatically convert a nested map to my nested struct. But again, I'm not sure if this is sensible.
The validations I'm defining in one DSL will be very similar to those in my Ecto models, and I would rather use one approach for both.
Basically, how would you go about validating user input correctly in a Phoenix app? Am I on the right lines, or way off?

N-tier question: Where do you do the variable casting?

Our UI exposes user input as strings. All of it, including dates and numbers, comes in as strings. The question is: is it better to convert these to the appropriate type (datetime, int, etc.) in the UI (and then pass the converted values to the BLL methods), or in the BLL itself?
Input validation and conversion should be done on the UI layer.
Not only is this so your business layer is dealing with typed data, but also so that you can easily throw UI error messages if they enter the wrong type or if the value is outside your range*.
*Some frameworks have their own validation logic for this sort of thing... ASP.NET being the first I can think of.
UI type conversion should be done in the UI layer, not the BL layer. This decouples the UI from the BL.
I prefer to do type casting in the UI and have the BLL expect the proper datatype.
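To make the split concrete, here is a small sketch; the order form and createOrder are hypothetical. The UI layer does the parsing and error messaging, and the business layer only ever sees typed values:

// Typed request the business layer expects.
interface CreateOrderRequest {
  quantity: number;
  shipDate: Date;
}

// UI-layer conversion: raw form strings become typed values, or a
// user-facing validation error is raised right here.
function parseOrderForm(raw: { quantity: string; shipDate: string }): CreateOrderRequest {
  const quantity = Number.parseInt(raw.quantity, 10);
  if (Number.isNaN(quantity)) {
    throw new Error("Quantity must be a whole number");
  }
  const shipDate = new Date(raw.shipDate);
  if (Number.isNaN(shipDate.getTime())) {
    throw new Error("Ship date is not a valid date");
  }
  return { quantity, shipDate };
}

// The BLL method takes the typed request, so string handling
// never leaks past the UI layer.
declare function createOrder(request: CreateOrderRequest): void;

createOrder(parseOrderForm({ quantity: "3", shipDate: "2024-05-01" }));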
