PostGraphile vs custom GraphQL resolver/schema in an Express server - graphql

I was experimenting with PostGraphile, and it is a great tool for auto-generating a GraphQL API, but I am still a little confused about why I should use it. (P.S. I am not very experienced with GraphQL or PostgreSQL.)
Question 1:
Can I think of it this way? PostGraphile generates the code (queries, mutations, resolvers, schema, types) for a server, and this is the same code we would have to write anyway if we were not using PostGraphile?
Question 2:
An example: the server receives the string James from the front end, and I want to concatenate Bond to it before storing it in the database's full-name column. How do I achieve this mutation? Will I need makeExtendSchemaPlugin to merge my schema with a resolver in PostGraphile?

Question 1: PostGraphile generates the code (queries, mutations, resolvers, schema, types) for a server, and this is the code we would have to write anyway if we were not using PostGraphile
Correct. PostGraphile creates optimized CRUD operations, along with many other features you would otherwise need to implement yourself.
Question 2: An example, a server receives a string James from the front end and I want to concat Bond to it before storing it in the db's full name column. How do I achieve this mutation?
You can create a PostgreSQL function and implement your business logic there. See https://www.graphile.org/postgraphile/custom-mutations/
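For the James → Bond example in the question, a minimal sketch of such a function might look like this. The person table, its full_name column, and the function name are assumptions for illustration; per the PostGraphile docs, a VOLATILE function returning a table type is exposed as a mutation (here registerPerson):

```sql
-- Hypothetical table: person(id serial primary key, full_name text)
-- PostGraphile turns this VOLATILE function into a `registerPerson`
-- mutation in the generated schema.
CREATE FUNCTION register_person(first_name text)
RETURNS person AS $$
  -- Business logic lives in the database: append ' Bond'
  -- before storing the full name.
  INSERT INTO person (full_name)
  VALUES (first_name || ' Bond')
  RETURNING *;
$$ LANGUAGE sql VOLATILE;
```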

Related

Is there a way to know which query sent to Hasura generated a given SQL output?

Assume you've identified queries hitting a relational database that are likely running into the pitfall of sending too many, too-small queries, and you want to figure out where they come from so you can give the team sending them a heads-up. Is there any way to tell which GraphQL query generated a given compiled SQL output?
Doing things the other way around, inspecting the compiled output of a known GraphQL query, is easy. But there doesn't seem to be any easy way of acting on feedback from the actual DB.
The Hasura Query log is probably a good place to start. Do you have these logs enabled for your Hasura installation?
If you look for logs of type query-log, you'll get a structured JSON object whose properties include the operation name, the GraphQL query that was submitted to Hasura, and the generated_sql that was produced.
You'd be able to match on the generated_sql and then find the actual GraphQL query that caused it using that approach.
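As a sketch of that matching step, assuming one JSON object per log line and the field names shown below (operation_name, query, generated_sql; the exact nesting can differ between Hasura versions, so check your own log output first):

```javascript
// Sketch: given Hasura query-log lines (one JSON object per line),
// find which GraphQL operation produced a given SQL fragment.
// Field names under `detail` are assumptions -- verify against
// your installation's actual query-log entries.

function findOperationsForSql(logLines, sqlFragment) {
  return logLines
    .map((line) => JSON.parse(line))
    .filter((entry) => entry.type === "query-log")
    .filter((entry) =>
      (entry.detail.generated_sql || "").includes(sqlFragment)
    )
    .map((entry) => ({
      operationName: entry.detail.operation_name,
      query: entry.detail.query,
    }));
}

// Example with a fabricated log entry:
const lines = [
  JSON.stringify({
    type: "query-log",
    detail: {
      operation_name: "GetUsers",
      query: "query GetUsers { users { id name } }",
      generated_sql: 'SELECT "id", "name" FROM "public"."users"',
    },
  }),
];

const matches = findOperationsForSql(lines, 'FROM "public"."users"');
console.log(matches[0].operationName); // GetUsers
```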

hasura - to call an http service api and insert the response into postgresql

I've already made an action of type query that calls an HTTP endpoint and returns a list of results.
Then I need to insert this result into PostgreSQL (I suppose through a mutation).
So, how can I chain this insert mutation to the previous query's result, and possibly apply some custom logic (e.g. not inserting records that are already present)?
I was looking into this myself a couple of days ago, and my takeaway so far is that this is currently not possible. You would still have to write a small service (e.g. an AWS Lambda) that calls your action and feeds the result into the mutation. That is also where you can apply your business logic.
It would be a great feature to have, in order to connect two APIs directly together, or even just to transfer data from one place to another.
The new REST transforms released in 2.1 at least make it easier and faster to integrate with existing APIs, so all you need to write now is the plumbing.
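A sketch of what that small intermediary service could do in Node.js: build an upsert-style insert mutation from the action's result, using on_conflict with an empty update_columns list so already-present records are skipped. The items table, its items_pkey constraint, and the endpoint are hypothetical names for illustration:

```javascript
// Deduplicate rows by id and build the body of a Hasura insert
// mutation; on_conflict with update_columns: [] means "ignore
// rows that already exist" (table/constraint names are examples).
function buildInsertPayload(rows) {
  const seen = new Set();
  const objects = rows.filter((r) => {
    if (seen.has(r.id)) return false;
    seen.add(r.id);
    return true;
  });
  return {
    query: `
      mutation InsertItems($objects: [items_insert_input!]!) {
        insert_items(
          objects: $objects,
          on_conflict: { constraint: items_pkey, update_columns: [] }
        ) { affected_rows }
      }`,
    variables: { objects },
  };
}

// Usage inside the service (e.g. a Lambda handler), with Node's
// built-in fetch -- HASURA_URL is a placeholder:
// const res = await fetch(HASURA_URL, {
//   method: "POST",
//   headers: { "content-type": "application/json" },
//   body: JSON.stringify(buildInsertPayload(actionResult)),
// });
```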

GraphQL stitching VS merging schemas

What is the practical difference between merging and stitching GraphQL schemas? The graphql-tools documentation (merging: https://www.graphql-tools.com/docs/schema-merging, stitching: https://www.graphql-tools.com/docs/schema-stitching/stitch-combining-schemas) is a bit ambiguous about each implementation's exact use cases. If I understood correctly, stitching is just a matter of organizational preference and each subschema becomes a 'proxy' to your schema, while the merge functionality seems pretty similar to me. Could you please explain the difference? Thank you!
Schema stitching is used when you want to retrieve data from multiple GraphQL APIs in the same query.
For example, you may have to extract data from two GraphQL APIs: one offers information about a location, the other about the weather. For a single query to have access to both endpoints at once, you have to STITCH the schemas of the two endpoints, which allows you to perform a query like this (which also shows the link between the two endpoints):
{
  event(id: "5983706debf3140039d1e8b4") {
    title
    description
    url
    location {
      city
      country
      weather {
        summary
        temperature
      }
    }
  }
}
On the other hand, schema merging refers to combining all of your schemas that have been split up by domain, mainly for organizational purposes. Schema merging does not keep the individual subschemas.

Apollo GraphQL DataLoader DynamoDb

I'm new to GraphQL and am reading about the N+1 issue and the DataLoader pattern for increasing performance. I'm looking at starting a new GraphQL project with DynamoDB as the database. I've done some initial research and found a couple of small NPM packages for DataLoader and DynamoDB, but they do not seem to be actively supported. So it seems to me, from my initial research, that DynamoDB may not be the best choice for supporting an Apollo GraphQL app.
Is it possible to implement dataloader pattern against DynamoDb database?
Dataloader doesn't care what kind of database you have. All that really matters is that there's some way to batch up your operations.
For example, for fetching a single entity by its ID, with SQL you'd have some query that's a bit like this:
select * from product where id = SOME_ID_1
The batch equivalent of this might be an IN query as follows:
select * from product where id in [SOME_ID_1, SOME_ID_2, SOME_ID_3]
The actual mechanism for single vs. batch querying will vary depending on what database you're using; it may not always be possible, but it usually is. A quick search shows that DynamoDB has BatchGetItem, which might be what you need.
Batching up queries that take additional parameters (such as pagination, or complex filtering) can be more challenging and may or may not be worth investing the effort. But batching anything that looks like "get X by ID" is always worth it.
In terms of finding libraries that support Dataloader and DynamoDB in particular, I wouldn't worry about it. You don't need this level of tooling. As long as there's some way of constructing the database query, and you can put it inside a function that takes an array of IDs and returns a result in the right shape, you can do it -- and this usually isn't complicated enough to justify adding another library.
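To illustrate that the pattern itself is small, here is a dependency-free sketch of DataLoader-style batching (not the dataloader library's actual implementation): requests made during the same tick of the event loop are collected and served by one batched fetch.

```javascript
// Minimal DataLoader-pattern sketch. `batchFetch` stands in for
// your real data source: an SQL IN query, DynamoDB BatchGetItem,
// etc. It must take an array of keys and return matching records.
function makeLoader(batchFetch) {
  let queue = []; // pending { key, resolve } entries

  return function load(key) {
    return new Promise((resolve) => {
      queue.push({ key, resolve });
      if (queue.length === 1) {
        // First request this tick: schedule one flush for the batch.
        process.nextTick(async () => {
          const batch = queue;
          queue = [];
          const keys = batch.map((item) => item.key);
          const results = await batchFetch(keys); // ONE round trip
          const byKey = new Map(results.map((r) => [r.id, r]));
          batch.forEach((item) => item.resolve(byKey.get(item.key)));
        });
      }
    });
  };
}

// Demo with a fake "database" that records how often it is hit:
const calls = [];
const fakeBatchGet = async (ids) => {
  calls.push(ids);
  return ids.map((id) => ({ id, name: `product ${id}` }));
};

const loadProduct = makeLoader(fakeBatchGet);
Promise.all([loadProduct(1), loadProduct(2), loadProduct(3)]).then(
  (products) => {
    console.log(calls.length); // three loads, one query
    console.log(products.map((p) => p.name).join(", "));
  }
);
```

The real dataloader library adds per-request caching, error handling, and a stricter batch-function contract on top of this idea, but the core is exactly this queue-and-flush mechanism.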

GraphQL and nested resources would make unnecessary calls?

I read the GraphQL specs and could not find a way to avoid 1 + N * number_of_nested calls; am I missing something?
I.e. a query has a type client which has nested orders and addresses; if there are 10 clients, it will make 1 call for the 10 clients + 10 calls for each client's orders + 10 calls for each client's addresses.
Is there a way to avoid this? Note that this is not the same as caching the UUID of something; those are all different values, and if your GraphQL server points to a database that can do joins, this would be pretty hard on it, because you could instead do 3 queries for any number of clients.
I ask because I wanted to integrate GraphQL with an API that can fetch nested resources efficiently, and if there were a way to inspect the whole graph before resolving it, it would be nice to try to fetch some of the nested data in just one call.
Or did I get it wrong, and is GraphQL meant to be used only with microservices?
This is one of the difficulties of GraphQL's resolver architecture: you must avoid incurring a ton of network latency by doing a lot of I/O in each resolver. Apps using a SQL DBMS will often grapple with the N+1 problem at first. You need to use some batching and/or caching techniques to get around this.
If you are using Node.js on the server, I have two tools to recommend:
DataLoader - a database-agnostic tool for batching the fetches made by each field's resolvers and caching individual records.
Join Monster - a SQL-tailored tool that reads each query and your schema and compiles a SQL query for you. It leverages JOINs and DataLoader-style batching to fetch the data from your tables in a few (or a single) SQL queries.
I assume you're talking about using GraphQL with a SQL database backend. The standard itself is database-agnostic, and it doesn't care how you work around possible N+1 SELECT issues in your code. That said, specific server-side GraphQL implementations introduce many different ways of mitigating the problem:
AFAIK, the Ruby implementation is able to make use of Active Record and gems such as bullet to apply horizontal batching of executed database calls.
The JavaScript implementation may make use of the DataLoader library, which has a similar technique of batching series of executed promises together.
The Elixir and Python implementations have a concept of runtime information about executed subqueries, which can be used to determine which data will be needed to execute the GraphQL query, and to potentially prefetch it.
The F# implementation works similarly to the Elixir one, but the plugin itself can perform live analysis of the execution tree to better determine which fields may be used, allowing for an easier split between the GraphQL domain model and the database model.
Many implementations (e.g. PostGraphile) tie the underlying database model directly into the GraphQL schema. In this case the GraphQL query is often translated directly into the database query language.
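As an illustration of that last approach, the clients/orders query from the question could in principle be served in a single round trip once the server compiles it to SQL. This is illustrative SQL under assumed table names (client, "order"), not any particular implementation's actual output:

```sql
-- One possible compiled form for a query like:
--   { clients { name, orders { total } } }
-- A single LEFT JOIN replaces 1 + N per-client queries; the
-- server then regroups the flat rows into the nested shape.
SELECT c.id, c.name,
       o.id AS order_id, o.total
FROM client c
LEFT JOIN "order" o ON o.client_id = c.id
ORDER BY c.id;
```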
