Formats to use on a response with fast-json-stringify

What I want to do is add validation against the response schema of a Fastify route.
Following the Fastify documentation, we can see that it uses:
Ajv for the validation of a request
fast-json-stringify for the serialization of a response's body
To improve and add validation for a response, I want the schema to be checked when I send a response.
fast-json-stringify supports different options, including format, and its documentation says it supports JSON Schema. JSON Schema has email as a built-in format, but when I try to use it in Fastify like this:
{
  response: {
    200: {
      type: 'object',
      required: ['email'],
      properties: {
        email: {
          type: 'string',
          format: 'email',
        }
      }
    }
  }
}
And then try to return this as a response:
reply.code(200).send({ email: 'test' })
The only validation that happens is when I set the type to integer and try to return a string.
Does anyone know if it's possible to use ajv-formats with fast-json-stringify to add validation to the response schema, use the formats from Ajv, and add new ones?
Many thanks in advance!

fast-json-stringify does the serialization, not the validation.
The JSON schema provided to it is used only to serialize the declared properties, with some basic type checking (such as integers or arrays):
the enum keyword is not supported
the format keyword is supported only for dates, as documented
To reach your goal, you should use the fastify-response-validation plugin, which adds a validation step for your response body before the serialization process.
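For example, here is a minimal sketch of that setup, assuming the plugin package is named fastify-response-validation and that it validates the route's response schema with Ajv before serialization (whether ajv-formats can be wired in depends on the plugin's Ajv options, so check its README):

const fastify = require('fastify')()

// Adds an Ajv-based validation step for response bodies, driven by the
// same response schema that fast-json-stringify uses for serialization.
fastify.register(require('fastify-response-validation'))

fastify.get('/user', {
  schema: {
    response: {
      200: {
        type: 'object',
        required: ['email'],
        properties: {
          email: { type: 'string', format: 'email' }
        }
      }
    }
  }
}, async () => {
  // With response validation in place, a payload like { email: 'test' }
  // should be rejected before serialization instead of passing through.
  return { email: 'test@example.com' }
})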

Related

how to get the Graphql request body in apollo-server [duplicate]

I have written a GraphQL query like the one below:
{
  posts {
    author {
      comments
    }
    comments
  }
}
I want to know how I can get the details about the requested child fields inside the posts resolver.
I want to do this to avoid nested resolver calls. I am using Apollo Server's DataSource API.
I can change the API server to get all the data at once.
I am using Apollo Server 2.0, and any other ways of avoiding nested calls are also welcome.
You'll need to parse the info object that's passed to the resolver as its fourth parameter. This is the type for the object:
type GraphQLResolveInfo = {
  fieldName: string,
  fieldNodes: Array<Field>,
  returnType: GraphQLOutputType,
  parentType: GraphQLCompositeType,
  schema: GraphQLSchema,
  fragments: { [fragmentName: string]: FragmentDefinition },
  rootValue: any,
  operation: OperationDefinition,
  variableValues: { [variableName: string]: any },
}
You could traverse the AST of the field yourself, but you're probably better off using an existing library. I'd recommend graphql-parse-resolve-info. There are a number of other libraries out there, but graphql-parse-resolve-info is a pretty complete solution and is actually used under the hood by postgraphile. Example usage:
posts: (parent, args, context, info) => {
  const parsedResolveInfo = parseResolveInfo(info)
  console.log(parsedResolveInfo)
}
This will log an object along these lines:
{
  alias: 'posts',
  name: 'posts',
  args: {},
  fieldsByTypeName: {
    Post: {
      author: {
        alias: 'author',
        name: 'author',
        args: {},
        fieldsByTypeName: ...
      },
      comments: {
        alias: 'comments',
        name: 'comments',
        args: {},
        fieldsByTypeName: ...
      }
    }
  }
}
You can walk through the resulting object and construct your SQL query (or set of API requests, or whatever) accordingly.
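For instance, here is a minimal sketch (with a hypothetical postsAPI.fetchPosts data source) that uses the parsed object above to decide which relations to fetch:

posts: async (parent, args, context, info) => {
  const { fieldsByTypeName } = parseResolveInfo(info)
  const requested = fieldsByTypeName.Post || {}

  // Only fetch the relations the client actually asked for;
  // postsAPI.fetchPosts is a hypothetical data-source method.
  return context.dataSources.postsAPI.fetchPosts({
    includeAuthor: 'author' in requested,
    includeComments: 'comments' in requested,
  })
}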
Here are a couple of main points you can use to optimize your queries for performance.

In your example, https://github.com/facebook/dataloader would be a great help. If you load comments in your resolvers through a DataLoader, you ensure these calls are batched and made just once. This will significantly reduce the number of database calls, since your query demonstrates the N+1 problem. A sketch of this is shown after these points.

I am not sure exactly what information you need to obtain in posts ahead of time, but if you already know the post ids you can consider doing a "look ahead" by passing the known ids into comments. That way you do not need to wait for posts, you avoid extra GraphQL tree calls, and you can resolve comments without waiting for posts. This is a great article on optimizing GraphQL request waterfalls and may give you a good idea of how to optimize your queries with DataLoader and look-ahead: https://blog.apollographql.com/optimizing-your-graphql-request-waterfalls-7c3f3360b051
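As a minimal DataLoader sketch for the comments lookup (fetchCommentsByPostIds is a hypothetical batch query returning rows with a postId column):

const DataLoader = require('dataloader')

// Create one loader per request so results are only cached for that request.
const commentsLoader = new DataLoader(async (postIds) => {
  // One batched query for every post id requested during this tick.
  const rows = await fetchCommentsByPostIds(postIds)
  // DataLoader expects results in the same order as the input keys.
  return postIds.map((id) => rows.filter((row) => row.postId === id))
})

const resolvers = {
  Post: {
    comments: (post) => commentsLoader.load(post.id),
  },
}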

Resolve to the same object from two incoherent sources in graphql

I have a problem I don't know how to solve properly.
I'm working on a project where we use a GraphQL server to communicate with different APIs. These APIs are old and very difficult to update, so we decided to use GraphQL to simplify our communications.
For now, two APIs allow me to get user data. I know it's not coherent, but sadly I can't change anything about that, and I need to use both of them for different actions. So for the sake of simplicity, I would like to abstract this from my front app, so it only asks for user data, always in the same format, no matter which API the data comes from.
With only one API, the resolver system of GraphQL helped a lot. But when I access user data from a second API, I find it very difficult to always send back the same object to my front app. The two APIs, even though they have mostly the same data, have different response formats. So in my resolvers, depending on where the data comes from, I have to do one thing or another.
Example:
API A
type User {
  id: string,
  communication: Communication
}
type Communication {
  mail: string,
}
API B
type User {
  id: string,
  mail: string,
}
I've heard a bit about apollo-federation, but I can't put a GraphQL server in front of every API of our system, so I'm kind of lost on how I can achieve transparency for my front app when data is coming from two different sources.
If anyone has already encountered the same problem or has advice on something I can do, I'm all ears :)
You need to decide what "shape" of the User type makes sense for your client app, regardless of what's being returned by the REST APIs. For this example, let's say we go with:
type User {
  id: String
  mail: String
}
Additionally, for the sake of this example, let's assume we have a getUser field that returns a single user. Any arguments are irrelevant to the scenario, so I'm omitting them here.
type Query {
  getUser: User
}
Assuming I don't know which API to query for the user, our resolver for getUser might look something like this:
async () => {
  const [userFromA, userFromB] = await Promise.all([
    fetchUserFromA(),
    fetchUserFromB(),
  ])

  // transform response
  if (userFromA) {
    const { id, communication: { mail } } = userFromA
    return {
      id,
      mail,
    }
  }

  // response from B is already in the correct "shape", so just return it
  if (userFromB) {
    return userFromB
  }
}
Alternatively, we can utilize individual field resolvers to achieve the same effect. For example:
const resolvers = {
  Query: {
    getUser: async () => {
      const [userFromA, userFromB] = await Promise.all([
        fetchUserFromA(),
        fetchUserFromB(),
      ])
      return userFromA || userFromB
    },
  },
  User: {
    mail: (user) => {
      if (user.communication) {
        return user.communication.mail
      }
      return user.mail
    }
  },
}
Note that you don't have to match your schema to either response from your existing REST endpoints. For example, maybe you'd like to return a User like this:
type User {
  id: String
  details: UserDetails
}
type UserDetails {
  email: String
}
In this case, you'd just transform the response from either API to fit your schema.
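A minimal sketch of that transformation, reusing the fetchUserFromA/fetchUserFromB helpers from the examples above:

const resolvers = {
  Query: {
    getUser: async () => {
      const [userFromA, userFromB] = await Promise.all([
        fetchUserFromA(),
        fetchUserFromB(),
      ])
      return userFromA || userFromB
    },
  },
  User: {
    // Wrap whichever mail field the upstream API exposes into UserDetails.
    details: (user) => ({
      email: user.communication ? user.communication.mail : user.mail,
    }),
  },
}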

Document all potential errors on GraphQL server?

For a mutation addVoucher there is a limited list of potential errors that can occur:
Voucher code invalid
Voucher has expired
Voucher has already been redeemed
At the moment I'm throwing a custom error when one of these occurs.
// On the server:
const addVoucherResolver = () => {
  if (checkIfInvalid) {
    throw new Error('Voucher code invalid')
  }
  return {
    // data
  }
}
Then on the client I match against the message text so I can alert the user. However, this feels brittle, and the GraphQL API doesn't automatically document the potential errors. Is there a way to define the potential errors in the GraphQL schema?
Currently my schema looks like this:
type Mutation {
  addVoucherResolver(id: ID!): Order
}
type Order {
  cost: Int!
}
It would be nice to be able to do something like this:
type Mutation {
  addVoucherResolver(id: ID!): Order || VoucherError
}
type Order {
  cost: Int!
}
enum ErrorType {
  INVALID
  EXPIRED
  REDEEMED
}
type VoucherError {
  status: ErrorType!
}
Then anyone consuming the API would know all the potential errors. This feels like a standard requirement to me, but from reading up there doesn't seem to be a standardised GraphQL approach.
It's possible to use a Union or Interface to do what you're trying to accomplish:
type Mutation {
  addVoucher(id: ID!): AddVoucherPayload
}

union AddVoucherPayload = Order | VoucherError
You're right that there isn't a standardized way to handle user-visible errors. With certain implementations, like apollo-server, it is possible to expose additional properties on the errors returned in the response, as described here. This does make parsing the errors easier, but is still not ideal.
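For instance, a sketch with apollo-server 2's ApolloError, which attaches a machine-readable code under extensions in the error response (checkIfExpired mirrors the question's placeholder condition):

const { ApolloError } = require('apollo-server')

const addVoucherResolver = () => {
  if (checkIfExpired) {
    // The client can switch on error.extensions.code instead of
    // parsing the human-readable message.
    throw new ApolloError('Voucher has expired', 'EXPIRED_VOUCHER')
  }
  return {
    // data
  }
}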
A "Payload" pattern has emerged fairly recently for handling these errors as part of the schema. You see can see it in public API's like Shopify's. Instead of a Union like in the example above, we just utilize an Object Type:
type Mutation {
  addVoucher(id: ID!): AddVoucherPayload
  otherMutation: OtherMutationPayload
}

type AddVoucherPayload {
  order: Order
  errors: [Error!]!
}

type OtherMutationPayload {
  something: Something
  errors: [Error!]!
}

type Error {
  message: String!
  code: ErrorCode! # or a String if you like
}

enum ErrorCode {
  INVALID_VOUCHER
  EXPIRED_VOUCHER
  REDEEMED_VOUCHER
  # etc
}
Some implementations add a status or success field as well, although I find that making the actual data field (order in our example) nullable and then returning null when the mutation fails is also sufficient. We can even take this one step further and add an interface to help ensure consistency across our payload types:
interface Payload {
  errors: [Error!]!
}
Of course, if you want to be more granular and distinguish between different types of errors to better document which mutation can return what set of errors, you won't be able to use an interface.
I've had success with this sort of approach, as it not only documents possible errors but also makes it easier for clients to deal with them. It also means that any other errors returned with a response should serve as an immediate red flag that something has gone wrong with either the client or the server. YMMV.
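A minimal resolver sketch for the payload pattern above (checkIfInvalid and createOrder are hypothetical placeholders):

const resolvers = {
  Mutation: {
    addVoucher: async (parent, { id }) => {
      if (checkIfInvalid(id)) {
        // Failed mutation: no order, one structured error.
        return {
          order: null,
          errors: [{ message: 'Voucher code invalid', code: 'INVALID_VOUCHER' }],
        }
      }
      // Successful mutation: order present, empty error list.
      return { order: await createOrder(id), errors: [] }
    },
  },
}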
You can use a custom scalar type in GraphQL.
Just declare scalar JSON and return any JSON value where you want to return it.
scalar JSON

type Response {
  status: Boolean
  message: String
  data: [JSON]
}
Here is a Mutation which returns a Response:
type Mutation {
  addVoucherResolver(id: ID!): Response
}
You can then return from the resolver:
return {
  status: false,
  message: 'Voucher code invalid (or any error based on condition)',
  data: null
}

or

return {
  status: true,
  message: 'Order fetched successfully.',
  data: [{
    // object of order
  }]
}
On the front end you can use the status key to identify whether the response succeeded or an error occurred.

GraphQL: how to have it return a flexible, dynamic array, depending on what the marketeer filled in? [duplicate]

We are in a situation where the response of our GraphQL query has to return some dynamic properties of an object. In our case we are not able to predefine all possible properties, so it has to be dynamic.
As we see it, there are two options to solve this.
const MyType = new GraphQLObjectType({
  name: 'SomeType',
  fields: {
    name: {
      type: GraphQLString,
    },
    elements: {
      /*
        THIS is our special field which needs to return a dynamic object
      */
    },
    // ...
  },
});
As you can see in the example code, elements is the property which has to return a dynamic object. A response when resolving this could be:
{
  name: 'some name',
  elements: {
    an_unknown_key: {
      some_nested_field: {
        some_other: true,
      },
    },
    another_unknown_prop: 'foo',
  },
}
1) Return an "any" object
We could just return any object, so GraphQL would not need to know which fields the object has. But when we tell GraphQL that the field is a GraphQLObjectType, it requires fields to be defined. Because of this, it does not seem possible to tell GraphQL that something is just a plain object.
For this we changed it like this:
elements: {
  type: new GraphQLObjectType({ name: 'elements' }),
},
2) Define the field properties dynamically with a fields function
When we define fields as a function, we could define our object dynamically. But the fields function would need some information (in our case, the information passed to elements as arguments), and we would need to access it to build the field object.
Example:
const MyType = new GraphQLObjectType({
  name: 'SomeType',
  fields: {
    name: {
      type: GraphQLString,
    },
    elements: {
      type: new GraphQLObjectType({
        name: 'elements',
        fields: (argsFromElements) => {
          // here we can now access keys from "args"
          const fields = {};
          argsFromElements.keys.forEach((key) => {
            // some logic here ..
            fields[someGeneratedProperty] = someGeneratedGraphQLType;
          });
          return fields;
        },
      }),
      args: {
        keys: {
          type: new GraphQLList(GraphQLString),
        },
      },
    },
    // ...
  },
});
This could work but the question would be if there is a way to pass the args and/or resolve object to the fields.
Question
So our question is now: which way would be recommended in our case in GraphQL, and is solution 1 or 2 possible? Maybe there is another solution?
Edit
Solution 1 would work when using the ScalarType. Example:
type: new GraphQLScalarType({
  name: 'elements',
  serialize(value) {
    return value;
  },
}),
I am not sure if this is a recommended way to solve our situation.
Neither option is really viable:
GraphQL is strongly typed. GraphQL.js doesn't support any kind of "any" field, and all object types defined in your schema must have fields defined. If you look in the docs, fields is required; if you try to leave it out, you'll hit an error.
Args are used to resolve queries on a per-request basis. There's no way to pass them back to your schema. Your schema is supposed to be static.
As you suggest, it's possible to accomplish what you're trying to do by rolling your own custom scalar. I think a simpler solution would be to just use JSON; you can import a custom scalar for it like this one. Then just have your elements field resolve to a JSON object or array containing the dynamic fields. You could also manipulate the JSON object inside the resolver based on arguments if necessary (for example, if you wanted to limit the fields returned to a subset as defined in the args).
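A minimal sketch assuming the graphql-type-json package provides that scalar (depending on the package version, it may be the default export or a named GraphQLJSON export):

const { GraphQLObjectType, GraphQLString } = require('graphql');
const GraphQLJSON = require('graphql-type-json');

const MyType = new GraphQLObjectType({
  name: 'SomeType',
  fields: {
    name: { type: GraphQLString },
    // elements resolves to an arbitrary JSON value; GraphQL will not
    // type-check it or let clients select individual keys inside it.
    elements: { type: GraphQLJSON },
  },
});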
Word of warning: The issue with utilizing JSON, or any custom scalar that includes nested data, is that you're limiting the client's flexibility in requesting what it actually needs. It also results in less helpful errors on the client side -- I'd much rather be told that the field I requested doesn't exist or returned null when I make the request than to find out later down the line the JSON blob I got didn't include a field I expected it to.
One more possible solution is to declare any such dynamic object as a string, pass a stringified version of the object as the value from your resolver functions, and eventually parse that string back to JSON on the client side to make it an object again.
I'm not sure if it's a recommended way or not, but I tried this approach and it worked smoothly, so I'm sharing it here.
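A rough sketch of that string-based approach (server field typed as a string, client parsing it back):

// Server side: type the dynamic field as a string and stringify the value.
elements: {
  type: GraphQLString,
  resolve: (parent) => JSON.stringify(parent.elements),
},

// Client side: parse the string back into an object.
const elements = JSON.parse(result.data.someType.elements)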

Changes to Input::json() function between Laravel 4 beta3 and beta4

When I was developing in Laravel 4 Beta 3, I used to get JSON POST data from a service using the Input::json() function, but when I updated to Laravel 4 Beta 4, I started getting the following error:
Notice: Undefined property: Symfony\Component\HttpFoundation\ParameterBag::$productName in /Applications/MAMP/htdocs/commonDBAPI/app/controllers/UserController.php line 47
Does anyone have any idea what the reason could be?
Thanks,
You can access just the JSON using Input::json()->all().
JSON input is also merged into Input::all() (and Input::get('key', 'default')) so you can use the same interface to get Query string data, Form data and a JSON payload.
The documentation does not yet reflect all changes because Laravel 4 is still in beta and the focus is on getting the code right, the documentation will be updated ready for the public release.
How is JSON merged with Input::all()?
Consider the following JSON:
{
  "name": "Phill Sparks",
  "location": "England",
  "skills": [
    "PHP",
    "MySQL",
    "Laravel"
  ],
  "jobs": [
    {
      "org": "Laravel",
      "role": "Quality Team",
      "since": 2012
    }
  ]
}
When merged into Laravel's input the JSON is decoded, and the top-level keys become top-level keys in the input. For example:
Input::get('name'); // string
Input::get('skills'); // array
Input::get('jobs.0'); // object
Input::all(); // Full structure of JSON, plus other input
Yup, they changed it to return a ParameterBag object. Switch your code to Input::json()->all().
For:
{
  "name": "Olivier",
  "title": "Just a try"
}
Try this:
$input = Input::json()->all();
return $input['name'];
