Can I assign a variable in a GraphQL playground to the result of a mutation? - graphql

If I have a mutation with 2 fields:
type Mutation {
  createSimulation(
    name: String
    simulators: [AvailableSimulators!]!
    timeToLiveInMS: Int
  ): Simulation!
  create(
    simulationID: ID!
    simulator: AvailableSimulators!
    type: String!
    attributes: KeyValuePair
  ): CreateResult!
}
When I run the mutation in the Apollo Server GraphQL playground, I need a value from the result of createSimulation in a call to create.
Can I somehow assign a variable that I can use in create?

This is not part of the GraphQL standard. You should handle it in a resolver on the backend. It will depend on the technology used, but in most technologies you can use results from previous resolvers or call resolvers manually.
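For example, with a JavaScript GraphQL server you could expose a single wrapper mutation whose resolver performs both steps. This is only a rough sketch; the wrapper field name and the simulationService helpers are made up, not part of the schema above:
// Hypothetical resolvers; simulationService stands in for whatever data layer you use.
const resolvers = {
  Mutation: {
    createSimulation: (_parent, args, ctx) =>
      ctx.simulationService.createSimulation(args),
    create: (_parent, args, ctx) =>
      ctx.simulationService.create(args),
    // Made-up wrapper field: runs createSimulation, then feeds its id into create.
    createSimulationWithEntity: async (_parent, { simulation, entity }, ctx) => {
      const created = await ctx.simulationService.createSimulation(simulation);
      return ctx.simulationService.create({ simulationID: created.id, ...entity });
    },
  },
};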

No, you should send two requests to handle it. The client side should call createSimulation first to get a response, then send another request to the create mutation with that ID.
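Outside the playground that chaining is straightforward from client code: run the first mutation, read the id out of the response, and pass it as a variable to the second. A minimal sketch using plain fetch; the endpoint URL, the id field on Simulation, and the enum/type values are assumptions:
const ENDPOINT = "http://localhost:4000/graphql"; // assumed endpoint

async function gqlRequest(query, variables) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(errors[0].message);
  return data;
}

async function createSimulationThenCreate() {
  // Request 1: create the simulation and select its id (assuming Simulation exposes one).
  const first = await gqlRequest(
    `mutation ($simulators: [AvailableSimulators!]!) {
       createSimulation(simulators: $simulators) { id }
     }`,
    { simulators: ["SOME_SIMULATOR"] } // placeholder enum value
  );
  // Request 2: feed the returned id into create.
  return gqlRequest(
    `mutation ($simulationID: ID!, $simulator: AvailableSimulators!, $type: String!) {
       create(simulationID: $simulationID, simulator: $simulator, type: $type) { __typename }
     }`,
    { simulationID: first.createSimulation.id, simulator: "SOME_SIMULATOR", type: "generic" }
  );
}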

Related

How to fetch schema from graphQL API endpoint

I need to fetch a schema from a GraphQL API endpoint.
So the result must be a string like:
type User {
  id: ID!
  name: String!
}
type Home {
  user: User!
  address: String
  country: String!
}
type Query {
  MyUsers: User!
}
Is it possible to do this using codegen?
It really depends on the GraphQL server you are using. Some GraphQL servers provide a GraphQL explorer, a page where you can manually input GraphQL queries, such as GraphiQL.
Another way is to try graphql-cli; you can do it with the command below:
get-graphql-schema ENDPOINT_URL -j > schema.json
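If you want the SDL string itself (as in the question) rather than JSON, a small script using the graphql package's introspection helpers works too. A rough sketch, assuming Node 18+ for the global fetch and a placeholder endpoint:
const { getIntrospectionQuery, buildClientSchema, printSchema } = require("graphql");

async function fetchSdl(endpoint) {
  // Run the standard introspection query against the endpoint.
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: getIntrospectionQuery() }),
  });
  const { data } = await res.json();
  // Turn the introspection result into a schema object and print it as SDL text.
  return printSchema(buildClientSchema(data));
}

fetchSdl("https://example.com/graphql").then(console.log);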

Mock GraphQLUpload datatype in Postman

I am using Postman to send GraphQL queries to my GraphQL server. This is what a particular mutation schema looks like:
extend type Mutation {
  createMutation(
    my_id: ID!
    my_data: input_data
  ): some_output
}

input input_data {
  some_key: Int
  file: Upload!
}
I am able to perform other mutations and queries through GraphQL by defining appropriate GraphQL variables.
I am not sure how to create a JSON value in the GraphQL variables for "file", which is of type Upload.

GraphQL | How to implement conditional nesting?

Please consider the following GraphQL schema:
type User {
  id: ID!
  events: [Event]
}
type Event {
  id: ID!
  user: User!
  asset: Asset!
}
type Asset {
  id: ID
  price: Number!
  name: String!
}
GraphQL is a fantastic framework for fetching nested objects, but I'm struggling to understand how conditional nesting is implemented.
Example:
I want to retrieve all events for a specific user where asset.price is greater than x.
Or
I want to retrieve all events for an asset that belongs to a list of users [].
Question: Is conditional nesting a concept in GraphQL and how is it implemented?
Side note: I use AWS AppSync and resolvers are fetching data from AWS DynamoDB.
You can define a filter/condition on any GraphQL query such as:
query {
  users(permission: "ADMIN") {
    ...
  }
}
The permission param is passed to your resolver (say a DynamoDB VTL template, Lambda, etc.) to be handled however you want - to GraphQL this is just another parameter.
You can carry this concept into nested fields by creating an events resolver, which you'd then call like this:
query {
  user(id: "123") {
    name
    events(minPrice: 200) {
      nodes {
        id
        eventName
        eventDate
      }
    }
    dob
    ...
  }
}
In the above case I am using a simple minPrice param, but you could do more complex things such as price ranges, or even pass operators (eq, gt, ...). It's all irrelevant to GraphQL - it all gets passed to the resolver.
How you implement that on the backend depends on your setup. I use AppSync without Amplify, write my own VTL templates, and build the DynamoDB request using the provided GraphQL fields.
Here is an SO post that shows how to create a date filter.
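To make the resolver side concrete, here is a rough sketch of what a User.events resolver could do with the minPrice argument if it were implemented as a Lambda instead of the VTL templates mentioned above; the table name, index name, and attribute names are made up:
const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, QueryCommand } = require("@aws-sdk/lib-dynamodb");

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// AppSync Lambda resolver attached to User.events(minPrice: Int)
exports.handler = async (event) => {
  const { minPrice } = event.arguments; // GraphQL passes the argument through untouched
  const userId = event.source.id;       // the parent User resolved earlier

  const result = await ddb.send(new QueryCommand({
    TableName: "Events",                     // hypothetical table
    IndexName: "byUser",                     // hypothetical GSI keyed on userId
    KeyConditionExpression: "userId = :u",
    FilterExpression: "assetPrice >= :min",  // apply the condition from the argument
    ExpressionAttributeValues: { ":u": userId, ":min": minPrice },
  }));

  return { nodes: result.Items };
};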

Apollo GraphQL - can you query local state using variables without having to use a resolver?

I am using apollo-cache-inmemory, apollo-client, react-apollo.
My local state contains a users array like so:
users: [{
  __typename: "User",
  userId: "hashid1",
  ...
},
{
  __typename: "User",
  userId: "hashid2",
  ...
}]
Now I can obviously run a simple query to retrieve all the users from the local state:
import gql from "graphql-tag"
export default gql`{users @client {userId}}`
However, what I would like to do is to be able to query the users array directly, passing variables like so:
const userDetails = await client.query({ query: USER_DETAILS, variables: {id: "hashId1"}})
Is it possible to run this query without using a resolver? I have attempted the following, but { data } returns null:
export default gql`query user($id: String!) {users(userId: $id) @client {userId}}`
I already use resolvers and can easily write one to take care of this issue, but I am wondering if it is possible to perform this task without one?
It looks like you're looking for some magic ;)
You must write customization code (override the default resolver, which returns all records) to get customized behavior (return data filtered by your criteria). That should be obvious.
There is no default/ready/built-in searching/filtering syntax in GraphQL - therefore, there are no default behaviours for it in apollo-client (no matter whether the data is local or on a remote server). It is up to you to implement what you need.
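For reference, a minimal sketch of such a local resolver (apollo-client 2.x local state; it assumes the users array already lives at the root of the cache and simply filters the result of the all-users query):
import gql from "graphql-tag";

const ALL_USERS = gql`{ users @client { userId } }`;

// Pass this to the ApolloClient constructor as the `resolvers` option,
// or register it later with client.addResolvers(localResolvers).
const localResolvers = {
  Query: {
    users: (_root, { userId }, { cache }) => {
      // Read the full list straight from the cache, then filter by the variable.
      const { users } = cache.readQuery({ query: ALL_USERS });
      return userId ? users.filter(user => user.userId === userId) : users;
    },
  },
};

export default localResolvers;
With that registered, the query from the question (users(userId: $id) @client) should return the matching entries instead of null.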

AWS-Amplify API module: how to make GraphQL fields unique?

AWS Amplify provides a couple of directives to build a GraphQL API, but I haven't found out how to ensure uniqueness for fields.
I want to do something like in GraphCool:
type Tag @model @searchable {
  id: ID!
  label: String! @isUnique
}
This is an AWS-Amplify specific question. It's not about how to do this with generic GraphQL. It's very specifically about how to do this with AWS-Amplify's API module. (https://aws-amplify.github.io/docs/js/api)
Hey thanks for the question. This is not yet possible by default using the amplify-cli but you could do this yourself using pipeline resolvers and an extra index on your DynamoDB table. The steps to do this are as follows:
1. Create a GSI on the table where the label is the HASH KEY.
2. Create a pipeline resolver on the Mutation.createTag field in your schema. You can turn off the auto-generated Mutation.createTag mutation by changing your @model definition to @model(mutations: { update: "updateTag", delete: "deleteTag" }).
3. Create a function named LookupLabel that issues a Query against the new GSI where label = $ctx.args.input.label. If this returns a value, throw an error with $util.error("Label is not unique"). If it returns no values, continue (a rough sketch of this function follows after these steps).
4. Create a function named CreateTag that issues a PutItem against the Tag table.
5. Add those two functions, in order, to your pipeline resolver.
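As a rough illustration of the LookupLabel function from step 3, here is a sketch written for the newer AppSync JavaScript resolver runtime rather than the VTL mapping templates the steps describe; the GSI name byLabel is an assumption:
import { util } from "@aws-appsync/utils";

export function request(ctx) {
  // Query the assumed "byLabel" GSI for an existing item with this label.
  return {
    operation: "Query",
    index: "byLabel",
    query: {
      expression: "#label = :label",
      expressionNames: { "#label": "label" },
      expressionValues: util.dynamodb.toMapValues({ ":label": ctx.args.input.label }),
    },
  };
}

export function response(ctx) {
  // Any hit means the label is already taken.
  if (ctx.result.items && ctx.result.items.length > 0) {
    util.error("Label is not unique");
  }
  return ctx.args.input; // hand the input on to the CreateTag function
}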
You can read more about pipeline resolvers here https://docs.aws.amazon.com/appsync/latest/devguide/pipeline-resolvers.html.
As of writing, Amplify does not yet support custom & pipeline resolvers, but you can read more about the feature here (https://github.com/aws-amplify/amplify-cli/issues/574) as it will be supported in the future. For now you can add the resolver manually in the AWS AppSync console or via your own CloudFormation template that targets the id of the API created by Amplify. It would also be helpful if you create an issue here (https://github.com/aws-amplify/amplify-cli/issues) and tag it as a feature request, because it would be possible to automate this with an @unique directive, but this would need to be planned.
Thanks
Update: now you can use the @primaryKey and @index directives:
https://docs.amplify.aws/cli/migration/transformer-migration/#what-is-changing
basic:
type Profile @model {
  name: String
  email: String! @primaryKey # has to be unique
  other: String
}
so if you needed something like:
type Profile @model {
  name: String
  email: Email! @hasOne
  other: String
}
type Email @model {
  email: String! @primaryKey
}
If you are on an older version, see below.
I will eventually be testing this out to see if it works, but you might be able to do something like renaming the id to a string!
so...
type Tag @model @key(fields: ["id"]) {
  id: String!
}
or:
type Customer @model @key(fields: ["email"]) {
  email: String!
  username: String
}
This second one is taken directly from the docs: https://docs.amplify.aws/cli/graphql-transformer/key#designing-data-models-using-key
The docs were updated recently so hopefully they are easier for everyone to understand.
If you need a more advanced workflow with a lot of keys and stuff like that, then you just have to separate things out and make more types, for example:
type Customer @model {
  id: String!
  email: Email! @hasOne
  username: String
}
type Email @model @key(fields: ["email"]) {
  email: String!
}
