Typed document node with Vue 3 Apollo and complex where - Laravel

I use the package https://github.com/dotansimha/graphql-typed-document-node and usually call it like this: useQuery(peopleDocument, variables).
But Laravel Lighthouse has a complex-where plugin which automatically generates the types for the where conditions of various queries, for example:
{ people(where: { column: AGE, operator: EQ, value: 42 }) { name } }
I would like to allow users to build their own filters with their own operators, but how can I define such a query when the filters and their operators are dynamic?

@xadm's vague explanation somehow helped, so I'll answer my own question.
I'm unsure if this is how it should be used, but it does work.
Lighthouse generates the operator and column enums automatically for the @whereConditions directive, so you have to run yarn graphql-codegen again to fetch them.
Then simply import and use them in the component.
Lighthouse schema definition example:
type Query {
  contact(where: _ @whereConditions(columns: ["id", "email", "mobile"])): Contact @first
}
Query definition:
query contact($where: QueryContactWhereWhereConditions) {
  contact(where: $where) {
    id
    email
    mobile
  }
}
Vue component:
import { ContactDocument, QueryContactWhereColumn, SqlOperator } from 'src/typed-document-nodes.ts';

useQuery(ContactDocument, { where: { column: QueryContactWhereColumn.Email, operator: SqlOperator.Like, value: '%@example.com' } })
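To keep the filters dynamic on the client side, a small helper can assemble the where input from whatever the user picked. A sketch, assuming the generated enums look roughly like what graphql-codegen emits for this schema (the stand-in members below are assumptions, not the actual generated file):

```typescript
// Stand-ins for the codegen output; the real members depend on your schema.
enum QueryContactWhereColumn { Id = "ID", Email = "EMAIL", Mobile = "MOBILE" }
enum SqlOperator { Eq = "EQ", Neq = "NEQ", Like = "LIKE" }

interface WhereConditions {
  column?: QueryContactWhereColumn;
  operator?: SqlOperator;
  value?: unknown;
  AND?: WhereConditions[];
}

// Combine user-selected filters into a single where input:
// one filter passes through unchanged, several are AND-ed together.
function buildWhere(filters: WhereConditions[]): WhereConditions | undefined {
  if (filters.length === 0) return undefined;
  if (filters.length === 1) return filters[0];
  return { AND: filters };
}
```

The component then stays fully typed while the filter list is dynamic: useQuery(ContactDocument, { where: buildWhere(selectedFilters) }).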

Related

GraphQL | How to implement conditional nesting?

Please consider the following GraphQL schema:
type User {
  id: ID!
  events: [Event]
}

type Event {
  id: ID!
  user: User!
  asset: Asset!
}

type Asset {
  id: ID
  price: Float!
  name: String!
}
GraphQL is a fantastic framework for fetching nested objects, but I'm struggling to understand how conditional nesting is implemented.
Example:
I want to retrieve all events for a specific user where asset.price is greater than x.
Or
I want to retrieve all events for an asset that belongs to a list of users [].
Question: Is conditional nesting a concept in GraphQL and how is it implemented?
Side note: I use AWS AppSync and resolvers are fetching data from AWS DynamoDB.
You can define a filter/condition on any GraphQL query, such as:
query {
  users(permission: "ADMIN") {
    ...
  }
}
The permission param is passed to your resolver (say a DynamoDB VTL template, a Lambda, etc.) to be handled however you want - to GraphQL this is just another parameter.
You can carry this concept into nested fields by creating an events resolver, which you'd then call like this:
query {
  user(id: "123") {
    name
    events(minPrice: 200) {
      nodes {
        id
        eventName
        eventDate
      }
    }
    dob
    ...
  }
}
In the above case I am using a simple minPrice param, but you could do more complex things such as price ranges, or even pass operators (eq, gt, ...). It's all irrelevant to GraphQL - it all gets passed to the resolver.
How you implement that on the backend depends on your setup. I use AppSync without Amplify, write my own VTL templates, and build the DynamoDB request using the provided GQL fields.
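In a Lambda (or any JavaScript) resolver, the filtering itself is ordinary code. A minimal sketch, with minPrice as in the query above and an in-memory array standing in for the DynamoDB call:

```typescript
interface EventRecord {
  id: string;
  assetPrice: number; // denormalized from the related Asset
}

// Resolver body for the events field: return only events whose
// asset price meets the minPrice argument (or everything if absent).
function resolveEvents(events: EventRecord[], args: { minPrice?: number }): EventRecord[] {
  if (args.minPrice == null) return events;
  return events.filter((e) => e.assetPrice >= args.minPrice!);
}
```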
Here is an SO post that shows how to create a date filter.

How to make a foreach request in Apollo GraphQL?

I have a blog in JavaScript and I'm using Apollo GraphQL to save my data. I intend to make a list with six posts from each category, like this:
{
  technology: [
    post1,
    post2,
    ...
  ],
  cook: [
    post1,
    post2,
    ...
  ],
  ...
}
But I couldn't. I thought of taking all the category IDs and making one big request, like this:
{
  firstCategory: allPosts(where: {category: "firstId"}, first: 6) {
    ...fields
  }
  secondCategory: allPosts(where: {category: "secondId"}, first: 6) {
    ...fields
  }
}
But if I add a new category, I must change my code.
You can pass categories [queried earlier] as variables, like:
query posts($firstCat: String, $secondCat: String, $limit: Int) {
  firstCategory: allPosts(where: {category: $firstCat}, first: $limit) {
    ...fields
  }
  secondCategory: allPosts(where: {category: $secondCat}, first: $limit) {
    ...fields
  }
}
variables:
{
  "firstCat": "technology",
  "secondCat": "cook",
  "limit": 6
}
Of course, it's still limited to n prepared aliases.
Building dynamic aliases is possible but not advised (source) ... AST tools should be used, as gluing strings (directly or using template literals) is considered abusing GraphQL.
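For illustration only, a dynamically aliased document could be assembled like this; the advice above recommends proper AST tools over this kind of string gluing, so treat it as a sketch of the mechanism rather than a recommendation (field and argument names follow the question's schema):

```typescript
// Build one aliased allPosts selection per category id.
// Sanitize ids before using them in real code; alias names here are
// synthetic (cat0, cat1, ...) so any id is safe as an argument value.
function buildCategoriesQuery(categoryIds: string[], limit: number): string {
  const selections = categoryIds
    .map(
      (id, i) =>
        `  cat${i}: allPosts(where: {category: "${id}"}, first: ${limit}) { ...fields }`
    )
    .join("\n");
  return `{\n${selections}\n}`;
}
```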
You can also split this into components: multiple <Category name="technology" /> components, each with its own internal query (use props.name as the category filter variable). This will make separate requests.
For speed and SEO purposes you can use Gatsby (the number of separate requests doesn't matter ... up to some scale) - but (like all static generators) it requires a rebuild/redeploy on changes.
A good API should let you query for tags and the related posts inside each of them (sorted by any criteria; ... and the author inside [and their posts] ... etc. - it's a standard GraphQL feature).

How could I structure my graphql schema to allow for the retrieval of possible dropdown values?

I'm trying to get the possible values for multiple dropdown menus from my graphQL api.
for example, say I have a schema like so:
type Employee {
  id: ID!
  name: String!
  jobRole: Lookup!
  address: Address!
}

type Address {
  street: String!
  line2: String
  city: String!
  state: Lookup!
  country: Lookup!
  zip: String!
}

type Lookup {
  id: ID!
  value: String!
}
jobRole, city and state are all fields that have a predetermined list of values that are needed in various dropdowns in forms around the app.
What would be the best practice in the schema design for this case? I'm considering the following option:
query {
  lookups {
    jobRoles {
      id
      value
    }
  }
}
This has the advantage of being data-driven, so I can update my job roles without having to update my schema, but I can see this becoming cumbersome. I've only added a few of our business objects and already have about 25 different types of lookups in my schema. As I add more data into the API, I'll need to somehow maintain the right lookups being used for the right fields, deal with general lookups that are used in multiple places vs. ultra-specific lookups that will only ever apply to one field, etc.
Has anyone else come across a similar issue and is there a good design pattern to handle this?
And for the record, I don't want to use enums with introspection, for two reasons:
With the number of lookups we have in our existing data, there would be a need for very frequent schema updates.
With an enum you only get one value; I need a code that will be used as the primary key in the DB and a descriptive value that will be displayed in the UI.
// bad
enum jobRole {
  MANAGER
  ENGINEER
  SALES
}

// needed
[
  { id: 1, value: "Manager" },
  { id: 2, value: "Engineer" },
  { id: 3, value: "Sales" }
]
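One data-driven way to back a lookups query like the one above is a single registry keyed by lookup name; a sketch with the rows inlined (in practice each list would be loaded from its backing table rather than hard-coded):

```typescript
interface Lookup {
  id: string | number;
  value: string;
}

// Registry of lookup lists, keyed by the field name exposed in the
// lookups query. The contents mirror the jobRole example above.
const lookupRegistry: Record<string, Lookup[]> = {
  jobRoles: [
    { id: 1, value: "Manager" },
    { id: 2, value: "Engineer" },
    { id: 3, value: "Sales" },
  ],
};

// Resolver for a single named lookup list; unknown names yield an
// empty list rather than an error.
function resolveLookup(name: string): Lookup[] {
  return lookupRegistry[name] ?? [];
}
```

This keeps the schema small (one lookups type) while new lookup lists only require a new registry entry.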
EDIT
I wanted to give another example of why enums probably aren't going to work. We have a lot of descriptions that should show up in a dropdown and that contain special characters.
// Client Type
[
  { id: 'ENDOW', value: 'Foundation/Endowment' },
  { id: 'PUBLIC', value: 'Public (Government)' },
  { id: 'MULTI', value: 'Union/Multi-Employer' }
]
There are others that are worse; they contain <, >, %, etc., and some of them are complete sentences, so the restrictive naming of enums really isn't going to work for this case. I'm leaning towards just making a bunch of lookup queries and treating each lookup as a distinct business object.
I found a way to make enums work the way I needed: I can get the display value by putting it in the description.
Here's my GraphQL schema definition:
enum ClientType {
  """
  Public (Government)
  """
  PUBLIC
  """
  Union/Multi-Employer
  """
  MULTI
  """
  Foundation/Endowment
  """
  ENDOW
}
When I retrieve it with an introspection query like so
{
  __type(name: "ClientType") {
    enumValues {
      name
      description
    }
  }
}
I get my data in the exact structure I was looking for!
{
  "data": {
    "__type": {
      "enumValues": [
        { "name": "PUBLIC", "description": "Public (Government)" },
        { "name": "MULTI", "description": "Union/Multi-Employer" },
        { "name": "ENDOW", "description": "Foundation/Endowment" }
      ]
    }
  }
}
Which has exactly what I need. I can use all the special characters, numbers, etc. found in our descriptions. If anyone is wondering how I keep my schema in sync with our database, I have a simple code generating script that queries the tables that store this info and generates an enums.ts file that exports all these enums. Whenever the data is updated (which doesn't happen that often) I just re-run the code generator and publish the schema changes to production.
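The generator script itself can be tiny. A sketch of the idea, emitting SDL for one enum from rows shaped like the client-type list above (the helper name and row shape are assumptions, not the author's actual script):

```typescript
interface LookupRow {
  id: string;    // enum value name, e.g. "PUBLIC"
  value: string; // human-readable description for the UI
}

// Render one SDL enum, putting each row's display value into the
// enum value's triple-quoted description block.
function renderEnum(name: string, rows: LookupRow[]): string {
  const body = rows
    .map((r) => `  """\n  ${r.value}\n  """\n  ${r.id}`)
    .join("\n");
  return `enum ${name} {\n${body}\n}`;
}
```

Run over each lookup table, the rendered enums can be written to an enums.ts (or .graphql) file and published with the rest of the schema.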
You can still use enums for this if you want.
Introspection queries can be used client-side just like any other query. Depending on what implementation/framework you're using server-side, you may have to explicitly enable introspection in production. Your client can query the possible enum values when your app loads -- regardless of how many times the schema changes, the client will always have the correct enum values to display.
Enum values are not limited to all caps, although they cannot contain spaces. So you can have Engineer but not Human Resources. That said, if you substitute underscores for spaces, you can just transform the value client-side.
I can't speak to non-JavaScript implementations, but GraphQL.js supports assigning a value property for each enum value. This property is only used internally. For example, if you receive the enum as an argument, you'll get 2 instead of Engineer. Likewise, you would return 2 instead of Engineer inside a resolver. You can see how this is done with Apollo Server here.
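That internal-value mapping can be pictured as a two-way lookup; a plain TypeScript sketch of the behavior described above (illustrative, not GraphQL.js's actual implementation):

```typescript
// External enum names as seen in queries vs. internal values used in
// resolvers and stored in the database.
const jobRoleValues: Record<string, number> = {
  MANAGER: 1,
  ENGINEER: 2,
  SALES: 3,
};

// Argument coercion: external name -> internal value.
function parseJobRole(name: string): number {
  const v = jobRoleValues[name];
  if (v === undefined) throw new Error(`Unknown jobRole: ${name}`);
  return v;
}

// Result serialization: internal value -> external name.
function serializeJobRole(value: number): string {
  const entry = Object.entries(jobRoleValues).find(([, v]) => v === value);
  if (!entry) throw new Error(`Unknown jobRole value: ${value}`);
  return entry[0];
}
```

So a resolver can receive and return the database key (2) while the client only ever sees the enum name (ENGINEER).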

graphql-tools difference between mergeSchemas and makeExecutableSchema

So the reason I'm asking this question is that I can get both of these to return a working result just by replacing one with the other. So which is the right one to use, and why?
What are their purposes in regards to schemas?
import { mergeSchemas } from 'graphql-tools'
import bookSchema from './book/schema/book.gql'
import bookResolver from './book/resolvers/book'

export const schema = mergeSchemas({
  schemas: [bookSchema],
  resolvers: [bookResolver]
})

import { makeExecutableSchema } from 'graphql-tools'
import bookSchema from './book/schema/book.gql'
import bookResolver from './book/resolvers/book'

export const schema = makeExecutableSchema({
  typeDefs: [bookSchema],
  resolvers: [bookResolver]
})
Both of these examples work and return the desired outcome. I believe the correct one to use here is makeExecutableSchema, but I'm not sure why the first one also works.
EDIT
Just in case, here are the types/resolvers:
typeDefs
type Query {
  book(id: String!): Book
  bookList: [Book]
}

type Book {
  id: String
  name: String
  genre: String
}
Resolvers
export default {
  Query: {
    book: () => {
      return { id: `1`, name: `name`, genre: `scary` }
    },
    bookList: () => {
      return [
        { id: `1`, name: `name`, genre: `scary` },
        { id: `2`, name: `name`, genre: `scary` }
      ]
    }
  }
}
Query Ran
query {
  bookList {
    id
    name
    genre
  }
}
Result
{
  "data": {
    "bookList": [
      { "id": "1", "name": "name", "genre": "scary" },
      { "id": "2", "name": "name", "genre": "scary" }
    ]
  }
}
mergeSchemas is primarily intended to be used for schema stitching, not combining code for a single schema you've chosen to split up for organizational purposes.
Schema stitching is most commonly done when you have multiple microservices that each expose a GraphQL endpoint. You can extract schemas from each endpoint and then use mergeSchemas to create a single GraphQL service that delegates queries to each microservice as appropriate. Technically, schema stitching could also be used to extend some existing API or to create multiple services from a base schema, although I imagine those use cases are less common.
If you are architecting a single, contained GraphQL service, you should stick with makeExecutableSchema. makeExecutableSchema is what actually lets you use Schema Definition Language to generate your schema. mergeSchemas is a relatively new API and has a number of open issues, especially with regard to how directives are handled. If you don't need the functionality provided by mergeSchemas -- namely, if you're not actually merging separate schemas -- don't use it.
Yes, makeExecutableSchema creates a GraphQL.js GraphQLSchema instance from GraphQL schema language, as per the graphql-tools docs. So if you are creating a standalone, contained GraphQL service, it is the way to go.
But if you are looking to consolidate multiple GraphQL services, there are several strategies you may consider, such as schema stitching, schema merging from graphql-tools, or federation from Apollo (there are probably more).
Since I landed here while searching for the difference between stitching and merging, I wanted to point out that they are not one and the same thing. Here is the answer I got for this question on the graphql-tools GitHub:
Schema Stitching creates a proxy schema on top of different independent subschemas, so the parts of that schema are executed using GraphQLJS internally. This is useful to create an architecture like microservices.
Schema Merging creates a new schema by merging the extracted type definitions and resolvers from them, so there will be a single execution layer.
The first one keeps the individual schemas, but the second one won't. A use case for the first would be for combining multiple remote GraphQL APIs (microservices), while the second one would be good for combining local schemas.
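The "single execution layer" point can be pictured with plain objects: merging just combines type definitions and resolver maps into one schema input. A naive sketch (not graphql-tools' actual merge logic, which also handles conflicts, directives, and type extensions):

```typescript
type ResolverMap = Record<string, Record<string, unknown>>;

// Naive local-schema merge: concatenate SDL strings and fold the
// resolver maps together, later resolvers winning on collisions.
function mergeLocalSchemas(typeDefs: string[], resolvers: ResolverMap[]) {
  const mergedResolvers: ResolverMap = {};
  for (const map of resolvers) {
    for (const [typeName, fields] of Object.entries(map)) {
      mergedResolvers[typeName] = { ...mergedResolvers[typeName], ...fields };
    }
  }
  return { typeDefs: typeDefs.join("\n"), resolvers: mergedResolvers };
}
```

Stitching, by contrast, would keep each sub-schema intact and proxy field resolution to it, which is why it suits remote services.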

How to match queries with Apollo's refetchQueries

My fundamental question is: do the variables for queries need to match exactly for refetchQueries to work? Or can you give it a subset of variables and it will match similar queries?
Consider the following ....
<Query<NotesQuery, NotesQueryVariables>
  query={notesQuery}
  variables={{
    input: {
      notebookId: notebookContext.id,
      first: 20
    }
  }}
>
</Query>
and the following mutation:
client
  .mutate<NoteCreateOrUpdateMutation, NoteCreateOrUpdateMutationVariables>({
    mutation: noteCreateOrUpdateMutation,
    variables: {
      input: {
        noteId: note ? note.id : undefined,
        subjectIds: noteSubjects,
        notebookId: notebookContext.id,
        authorId: userContext.id,
        content: noteContent,
        context: noteCaption,
      }
    },
    refetchQueries: [
      {
        query: notesQuery,
        variables: { input: { notebookId: notebookContext.id } } as NotesQueryVariables
      }
    ]
  })
When I run that mutation, it is NOT refetching the notes query with the pagination.
If I add the first: 20 parameter, it works.
I would like it to refetch all notesQuery instances that match the given parameters. Is that possible?
I believe you'll want to add @connection directives to your gql definitions of notesQuery and measurementsQuery. You didn't post those, so unfortunately I can't show you exactly what that would look like for your use case.
Anyway, the @connection directive will allow Apollo to match on notebookId, for example, while ignoring the value of first.
Unfortunately, you've bundled all your input into the input object, and I don't know how you would select just notebookId with the filter. Assuming that your gql definition looks something like this for notesQuery:
const notesQuery = gql`
  query notes($input: InputType!) {
    notes(input: $input) @connection(key: "notes", filter: ["input['notebookId']"]) {
      id
      ...
    }
  }
`;
^^^ Unfortunately, that won't work because of the way the apollo-utilities/lib/storeUtils.js -> getStoreKeyName() function works. It just ignores the above attempt to get finer resolution than an argument name, i.e. you can't go beyond input. Any string in the filter array that doesn't match an argument name is silently ignored.
Looks like you'll have to modify your schema.
More info at: https://www.apollographql.com/docs/react/features/pagination.html#connection-directive
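The exact-match behavior the question ran into can be pictured with a structural comparison of the variables objects; an illustrative sketch (not Apollo's internal code) of why the refetch misses without first: 20:

```typescript
// Apollo identifies an active query by its document plus its full
// variables object; a refetch entry only matches when the variables
// compare equal structurally. (Sketch: JSON comparison assumes stable
// key order, which holds for these literals.)
function sameVariables(a: unknown, b: unknown): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}

const activeQueryVars = { input: { notebookId: "nb-1", first: 20 } };
const refetchWithoutFirst = { input: { notebookId: "nb-1" } };
const refetchWithFirst = { input: { notebookId: "nb-1", first: 20 } };
```

So the refetch entry must carry first: 20 (and any other variable the active query was started with) to hit the same query instance.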
