How can I conditionally change the object type name in a GraphQL query in Gatsby at build time?

I have static queries in my components that get data from a database. Here is an example:
{
  allDbName {
    nodes {
      name
    }
  }
}
I need a way to conditionally change the object type name (allDbName in the above example) at build time depending on the database we are using. I have tried string interpolation, graphql fragments and variables, and importing graphql queries from another file, but none of these are possible or suitable for this purpose. All the fields will be identical, the only thing that needs to change is the object type name.
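Since Gatsby queries must be statically analyzable, one direction that may work is to stop varying the query and instead vary the schema: expose a stable root field in gatsby-node.js whose backing node type is picked once per build. This is only a sketch, not a confirmed solution; the env var DB_BACKEND and the node type names MysqlName and PostgresName are hypothetical stand-ins for the two databases.
// gatsby-node.js -- sketch: alias whichever DB-specific node type is active
// behind a stable root field, chosen at build time via an env var.
exports.createResolvers = ({ createResolvers }) => {
  // DB_BACKEND and both type names are hypothetical placeholders.
  const nodeType = process.env.DB_BACKEND === 'mysql' ? 'MysqlName' : 'PostgresName';

  createResolvers({
    Query: {
      // Stable field name for components to query, whatever the backend.
      allDb: {
        type: `[${nodeType}]`,
        resolve: (source, args, context) =>
          context.nodeModel.getAllNodes({ type: nodeType }),
      },
    },
  });
};
Components can then keep a single static query against the stable field, e.g. { allDb { name } }, regardless of which database was used for the build.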

Related

Filtering CassandraRepository by multiple fields dynamically

I need to filter on multiple fields of an entity dynamically when searching the CassandraRepository.
Specifically, there are multiple String fields of the entity. The user can indicate which (if any) of these fields they want to match a specified Regular expression (e.g., ".*").
However, it looks like CassandraRepository doesn't provide support for JpaSpecificationExecutor, which is what resources online typically suggest using for this purpose, resulting in the following error:
Could not create query for public abstract Page JpaSpecificationExecutor.findAll(Specification, Pageable)! Reason: Page queries are not supported. Use a Slice query.
What is the appropriate way to approach this issue?
Based on the research I have done, the closest you can get is creating your own CQL query string based on the inputs provided and executing it on a CassandraOperations object that you can autowire into the necessary class.

Custom fields with a GraphQL query

Possibly exposing my ignorance of apollo-server, but hoping someone can help: at the moment I have some schemas stitched together with graphql-tools; all very simple, cool. I can make queries without problems.
There's a desire to add custom fields to given queries, so that we add extra data from other sources into the requested existing query template.
To explain by example: say the schema looks like this:
type User {
  id: ID!
  projectId: ID!
}
I'm trying to develop something so that the query getUserById($id...) can provide a template like so:
query userById($id: ID!) {
  userById(id: $id) {
    id
    project {
      id
      name
      # whatever other fields I want from Project type
    }
  }
}
Apollo/GraphQL would then make a separate, asynchronous request to fetch the project for that given User.
As I understand graphql-tools, resolvers allow the ability to make async requests for extra data ... but my problem is that by defining project { within the query template, an error is thrown because - of course - project is not defined in the actual schema itself.
Is there a way to filter and remove fields from a given query, somewhere in the chain of events? A custom apollo-server plugin perhaps? As I said, I'm exposing my ignorance here, but I've gotten a little lost in how Apollo behaves in tandem with GraphQL.
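One direction that may help (a sketch, not a definitive fix; the Project type shape and the fetchProjectById helper are hypothetical): instead of filtering project out of incoming queries, declare it in the gateway's schema as an added field with its own resolver. Validation then passes, and the resolver performs the async fetch from the other source.
const { makeExecutableSchema } = require('@graphql-tools/schema');

// Hypothetical helper standing in for a REST call or second data source.
const fetchProjectById = async (id) => ({ id, name: 'Example project' });

const typeDefs = /* GraphQL */ `
  type Project {
    id: ID!
    name: String
  }
  type User {
    id: ID!
    projectId: ID!
    # Declared here so queries selecting "project" validate.
    project: Project
  }
  type Query {
    userById(id: ID!): User
  }
`;

const resolvers = {
  User: {
    // Async field resolver: fetches the project from another source
    // using the parent User's projectId.
    project: (user) => fetchProjectById(user.projectId),
  },
};

const schema = makeExecutableSchema({ typeDefs, resolvers });
With stitched schemas the same idea applies by passing an extend type User { project: Project } definition alongside the stitched schemas.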

How to create a GraphQL query that returns data from multiple tables/models within one field using Laravel Lighthouse

I'm trying to learn GraphQL with Laravel & Lighthouse and have a question I'm hoping someone can help me with. I have the following five database tables, which are also defined in my Laravel models:
users
books
user_books
book_series
book_copies
I'd like to create a GraphQL endpoint that allows me to get back an array of users and the books they own, where I can pull data from multiple tables into one subfield called "books" like so:
query {
  users {
    name
    books {
      title
      issue_number
      condition
      user_notes
    }
  }
}
Accomplishing this in SQL is easy using joins like this:
$users = User::all();
foreach ($users as $user) {
    // Raw joined query; note DB::select() already returns an array of rows.
    $user['books'] = DB::select('SELECT
        book_series.title,
        books.issue_number,
        book_copies.condition,
        user_books.notes AS user_notes
        FROM user_books
        JOIN book_copies ON user_books.book_copy_id = book_copies.id
        JOIN books ON book_copies.book_id = books.id
        JOIN book_series ON books.series_id = book_series.id
        WHERE user_books.user_id = ?', [$user['id']]);
}
How would I model this in my GraphQL schema file when the object type for "books" is a mashup of properties from four other object types (Book, UserBook, BookCopy, and BookSeries)?
Edit: I was able to get all the data I need by doing a query that looks like this:
query {
  users {
    name
    userBooks {
      user_notes
      bookCopy {
        condition
        book {
          issue_number
          series {
            title
          }
        }
      }
    }
  }
}
However, as you can see, the data is separated into multiple child objects and is not as ideal as getting it all in one flat "books" object. If anyone knows how I might accomplish getting all the data back in one flat object, I'd love to know.
I also noticed that the field names for the relationships need to match up exactly with my controller method names within each model, which are camelCase as per Laravel naming conventions, whereas my other fields match the database column names, which are lower_underscore. This is a slight nitpick.
OK, after you edited your question, I will write the answer here to address your new questions.
However, as you can see, the data is separated into multiple child objects and is not as ideal as getting it all in one flat "books" object. If anyone knows how I might accomplish getting all the data back in one flat object, I'd love to know.
The thing is, this kind of data fetching is a central idea of GraphQL. You have some types, and these types may have relations to each other, so you are able to fetch any relations of an object, at any depth, even circular ones.
Lighthouse gives you out-of-the-box support for Eloquent relations with batch loading, avoiding the N+1 performance problem.
You also have to keep in mind that every field (literally, EVERY field) in your GraphQL definition is resolved on the server. There is a resolve function for each of the fields, so you are free to write your own resolver for particular fields.
You can actually define a type in your GraphQL schema that fits your initial expectation. Then you can define a root Query field, e.g. fetchUsers, and create your own custom field resolver. You can read in the docs how this works and how to implement it: https://lighthouse-php.com/5.2/the-basics/fields.html#hello-world
In this field resolver you are able to do your own data fetching, even without using any Laravel/Eloquent API. One thing you have to take care of: return data with the same structure as the GraphQL return type of this field.
So to sum up: you have the option to do this, but in my opinion you would have to write more code of your own and cover it with your own tests, which turns into more work for you. I think it is simpler to use the built-in directives like @find, @paginate, and @all in combination with the relation directives, which are all covered by tests, and not worry about the implementation.
I also noticed that the field names for the relationships need to match up exactly with my controller method names within each model, which are camelCase as per Laravel naming conventions.
You probably mean methods within the model class, not the controller.
Lighthouse provides a @rename directive, which you can use to define a different name in GraphQL for your attributes. For the relation directives you can pass a relation parameter, which will be used to fetch the data. So for your example you can use something like this:
type User {
  # ...
  user_books: [Book!]! @hasMany(relation: "userBooks")
}
But in our project we decided to use snake_case for relations as well, to keep the GraphQL schema clean with a consistent naming convention and less effort.

Update Apollo cache after object creation

What are all the different ways of updating the Apollo InMemoryCache after a mutation? From the docs, I can see:
- ID-based updates, which Apollo performs automatically
  - Happens for single updates to existing objects only.
  - Requires an id field which uniquely identifies each object, or the cache must be configured with a dataIdFromObject function which provides a unique identifier.
- "Manual" cache updates via update functions
  - Required for object creation, deletion, or updates of multiple objects.
  - Involves calling cache.writeQuery with details including which query should be affected and how the cache should be changed (see the sketch after this list).
- Passing the refetchQueries option to the useMutation hook
  - The calling code says which queries should be re-fetched from the API, Apollo does the fetching, and the results replace whatever is in the cache for the given queries.
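For reference, a minimal sketch of the second method with @apollo/client (the GET_POSTS query and addPost mutation here are hypothetical):
import { gql, useMutation } from '@apollo/client';

// Hypothetical documents: the cached query we patch, and the mutation.
const GET_POSTS = gql`
  query GetPosts {
    posts { id body }
  }
`;

const ADD_POST = gql`
  mutation AddPost($body: String!) {
    addPost(body: $body) { id body }
  }
`;

function useAddPost() {
  return useMutation(ADD_POST, {
    // Manual cache update: read the cached list, append the new post,
    // and write it back so every watcher of GET_POSTS re-renders.
    update(cache, { data }) {
      const existing = cache.readQuery({ query: GET_POSTS });
      cache.writeQuery({
        query: GET_POSTS,
        data: { posts: [...(existing?.posts ?? []), data.addPost] },
      });
    },
  });
}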
Are there other ways that I've missed, or have I misunderstood anything about the above methods?
I am confused because I've been reading the code of a project which uses Apollo for all kinds of mutations, including creations and deletions, but I don't see any calls to cache.writeQuery, nor any usage of refetchQueries. How does the cache get updated after creations and deletions without either of those?
In my own limited experience with Apollo, the cache is not automatically updated after an object creation or deletion, not even if I define dataIdFromObject. I have to update the cache myself by writing update functions.
So I'm wondering if there is some secret config I've missed to make Apollo handle it for me.
The only way to create or delete a node and have Apollo automatically update the cache to reflect the change is to return the parent field of whatever field contains the updated list field. For example, let's say we have a schema like this:
type Query {
  me: User
}
type User {
  id: ID!
  posts: [Post!]!
}
type Post {
  id: ID!
  body: String!
}
By convention, if we had a mutation to add a new post, the mutation field would return the created post.
type Mutation {
  writePost(body: String!): Post!
}
However, we could have it return the logged-in User instead (the same thing the me field returns):
type Mutation {
  writePost(body: String!): User!
}
By doing so, we enable the client to make a query like:
mutation WritePost($body: String!) {
  writePost(body: $body) {
    id
    posts {
      id
      body
    }
  }
}
Here Apollo will not only create or update cache entries for all the returned posts, but will also update the cached User object, including its list of posts.
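For concreteness, a minimal client-side sketch of that mutation with @apollo/client and React (the component itself is hypothetical); note there is no update function and no refetchQueries:
import { gql, useMutation } from '@apollo/client';

// Mirrors the mutation above: returns the parent User, including its posts.
const WRITE_POST = gql`
  mutation WritePost($body: String!) {
    writePost(body: $body) {
      id
      posts { id body }
    }
  }
`;

function PostForm() {
  // No update function and no refetchQueries: Apollo normalizes the
  // returned User by __typename and id, so its cached posts list (and any
  // cached query that includes it) is refreshed automatically.
  const [writePost] = useMutation(WRITE_POST);
  return <button onClick={() => writePost({ variables: { body: 'Hello' } })}>Post</button>;
}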
So why is this not commonly done? Why does Apollo's documentation suggest using writeQuery when adding or deleting nodes?
The above will work fine when your schema is simple and you're working with a relatively small amount of data. However, returning the entire parent node, including all its relations, can be noticeably slower and more resource-intensive once you're dealing with more data. Additionally, in many apps a single mutation could impact multiple queries inside the cache. The same node could be returned by any number of fields in the schema, and even the same field could be part of a number of different queries that utilize different filters, sort parameters, etc.
These factors make it unlikely that you'll want to implement this pattern in production but there certainly are use cases where it may be a valid option.

Apollo graphql remote schema extending is dependent on remote fields

We are able to extend our remote schema, but with one major caveat: the new field cannot be queried for on its own; it must be included with at least one field from the remote. Is it possible to query for just the extended field?
In the example below, I have extended "name" to include the field "catsName." If I query for "first," the query works. If I query for "catsName" and "first," the query works. If I query for just "catsName," it returns an internal server error with status code 400.
Note:
- When we extend non-remote fields, we do not have this issue.
- Our remote GraphQL engine uses Absinthe (Erlang/Elixir). We use Apollo locally. Our goal is to support the legacy Absinthe GraphQL implementation.
Working query:
query {
  user {
    profile {
      personal {
        name {        # extended type "name"
          catsName    # new field
          first       # original field
        }
      }
    }
  }
}
Non-working query:
query {
  user {
    profile {
      personal {
        name {        # extended type "name"
          catsName    # new field
        }
      }
    }
  }
}
Error:
"message": "Field \"name\" of type \"UserPersonalName\" must have a selection of subfields. Did you mean \"name { ... }\"?"
After further research we determined that the reason for this is that user, profile, personal, and name exist on the remote server. The extension of "catsName" exists locally. If we query for just catsName, our engine has no way of knowing that the root fields exist, and therefore returns an error. If we include at least one remote field in our query, the local engine knows to fetch the remote and is able to return all of the data.
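A possible workaround with (older) graphql-tools schema stitching is a fragment hint on the extended field: the gateway then always requests at least one remote field, even when the client selects only catsName. A sketch, assuming a graphql-tools v4-style mergeSchemas and an already-built remoteSchema; the resolver body is illustrative:
const { mergeSchemas } = require('graphql-tools');

const schema = mergeSchemas({
  // remoteSchema is the executable schema built from the Absinthe endpoint.
  schemas: [
    remoteSchema,
    `extend type UserPersonalName {
      catsName: String
    }`,
  ],
  resolvers: {
    UserPersonalName: {
      catsName: {
        // The fragment forces the gateway to also fetch "first" from the
        // remote, so the delegated query is never an empty selection.
        fragment: '... on UserPersonalName { first }',
        resolve: (name) => `${name.first}'s cat`,
      },
    },
  },
});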
