Getting 'delete impacts too many records' when deleting records using supabase - graphql

I'm currently using Supabase together with GraphQL and I'm trying to delete some data using a mutation. Unfortunately, this mutation sometimes fails with an error telling me that the delete impacts too many records. Does anyone have any idea what might be causing this?
The mutation I'm using is:
mutation UnfollowUser($followerUserId: UUID, $followingUserId: UUID) {
  deleteFromfollowsCollection(filter: {follower: {eq: $followerUserId}, following: {eq: $followingUserId}}) {
    affectedCount
  }
}

Although it does not seem to be documented anywhere, it turns out that Supabase's pg_graphql has an atMost argument. This can be used to limit the number of affected records, and it appears to default to 1.
Using this argument we can adjust the previously described mutation and allow it to delete up to 10 records at a time.
mutation UnfollowUser($followerUserId: UUID, $followingUserId: UUID) {
  deleteFromfollowsCollection(filter: {follower: {eq: $followerUserId}, following: {eq: $followingUserId}}, atMost: 10) {
    affectedCount
  }
}

Related

Filtering a list of values by a field value in GraphQL

So I'm doing some tests with GraphQL, and I'm failing at something that I believe is fairly simple.
When going to the GraphQL demo site (https://graphql.org/swapi-graphql) I'm presented with a default query which goes like this:
{
  allFilms {
    films {
      title,
      director,
      releaseDate
    }
  }
}
This works as expected and returns a list of films.
Now - I would like to modify this query to return only the films where the director is George Lucas, and for the life of me - I can't figure out how to do that.
I've tried using the where and filter expressions, and also changing the second line to films: (director: "George Lucas"), but I keep getting error messages.
What's the correct syntax for doing that?
Thanks!
If you check the docs of the provided GraphQL schema, you'll see that this is not possible. Following is the definition of the allFilms field:
allFilms(
  after: String
  first: Int
  before: String
  last: Int
): FilmsConnection
As per the docs, it has 4 input arguments: after, first, before, and last. There is no way to filter by the director's name.
GraphQL is not SQL. You cannot use expressions like WHERE or FILTER in GraphQL. The schema is already defined, and the available filters are pre-defined too. If the schema does not allow you to filter values by a certain field, you just can't do it.
You can see the GraphQL schema here: https://github.com/graphql/swapi-graphql/blob/master/schema.graphql
The allFilms query does not accept a filter on the director field, and I can't find any other query with such a filter.
Most likely you need to filter the result of the query yourself.
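For example, a minimal client-side sketch in JavaScript (the endpoint URL is a placeholder for whatever the demo site uses under the hood; the filtering happens entirely on the client after the full list is fetched):

// Fetch all films, then filter by director on the client,
// since the schema offers no server-side filter argument.
const query = `
  {
    allFilms {
      films {
        title
        director
        releaseDate
      }
    }
  }`;

fetch('https://swapi-graphql.netlify.app/graphql', { // placeholder endpoint
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
})
  .then(res => res.json())
  .then(({ data }) => {
    const lucasFilms = data.allFilms.films.filter(
      film => film.director === 'George Lucas'
    );
    console.log(lucasFilms);
  });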

filter by Time in graphql (using faunaDB service)

My GraphQL schema looks like this:
type Todo {
  name: String!
  created_at: Time
}
type Query {
  allTodos: [Todo!]!
  todosByCreatedAtFlag(created_at: Time!): [Todo!]!
}
This query works.
query {
  todosByCreatedAtFlag(created_at: "2017-02-08T16:10:33Z") {
    data {
      _id
      name
      created_at
    }
  }
}
Could anyone point out how I can write a greater-than (or less-than) Time query in GraphQL (using FaunaDB)?
GraphQL range queries are not supported (yet.. they're coming!)
FaunaDB does not provide range queries for their GraphQL out-of-the-box, we are working on these features.
.. but there is a workaround.
That doesn't mean FaunaDB can't do range queries, though; they are supported in FQL, and you can always 'escape' from GraphQL to FQL to implement more advanced queries by writing a User Defined Function (UDF).
.. using resolvers
By using the #resolver keyword in your schema, you can implement GraphQL queries yourself by writing a User Defined Function in FaunaDB in FQL. There are some basic examples in the documentation, but I imagine you might need some help, so I'll write you a simple example.
I added your schema and created two documents.
First thing is that our schema will be extended with the resolver:
type Todo {
  name: String!
  created_at: Time
}
type Query {
  allTodos: [Todo!]!
  todosByCreatedAtFlag(created_at: Time!): [Todo!]!
  todosByCreatedRange(before: Time, after: Time): [Todo!]! #resolver
}
All this does is add a function for us to implement. If we call it via GraphQL before implementing it, we get an Abort message since the function body doesn't exist yet, but it shows that the GraphQL query actually calls the function.
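The call that produces that Abort message looks something like this (the timestamps are placeholders; the argument names come from the schema above):
query {
  todosByCreatedRange(
    before: "2017-01-01T00:00:00Z"
    after: "2017-03-01T00:00:00Z"
  ) {
    _id
    name
    created_at
  }
}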
.. UDF implementation
The first thing we will do is add the parameters, which is just a matter of writing their names as the first argument of the lambda. A lambda takes an array of names in case you need to pass multiple parameters, which I do in the resolver that I defined in the schema.
Next, we'll add an index to support our query. Index values are used for ranges (and for return values and sorting). We'll add created_at to range over, and also add ref, since we'll need it to get the actual document behind each index entry.
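In FQL, creating that index could look roughly like this (a sketch; the name matches the index used below, and Fauna's GraphQL import creates a collection named after the Todo type):
CreateIndex({
  name: "todosByCreatedAtRange",
  source: Collection("Todo"),
  // values determine what the index returns and what you can Range over
  values: [
    { field: ["data", "created_at"] },
    { field: ["ref"] }
  ]
})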
We could then start off by just writing a simple function (that won't work properly yet):
Query(
  Lambda(
    ["before", "after"],
    Paginate(
      Range(Match(Index("todosByCreatedAtRange")), Var("before"), Var("after"))
    )
  )
)
and could test this by calling the function manually via the shell. This indeed returns the two documents (the range is inclusive).
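A manual shell call might look like this (placeholder timestamps that bracket the two example documents):
Call(Function("todosByCreatedRange"), [
  Time("2017-01-01T00:00:00Z"),
  Time("2017-03-01T00:00:00Z")
])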
Of course, there is one problem with this: it does not return the data in the structure that GraphQL expects, so we'll get some strange errors.
We can do two things now: either define a type in our schema that fits this shape, or adapt the data that is returned. We'll do the latter and adapt our result to the expected [Todo!]! shape to show you how.
Step one, map over the result. The only thing we introduce here is Map and a Lambda. We don't do anything special yet; as an example, we just return the reference instead of both the created_at value and the reference.
Query(
  Lambda(
    ["before", "after"],
    Map(
      Paginate(
        Range(
          Match(Index("todosByCreatedAtRange")),
          Var("before"),
          Var("after")
        )
      ),
      Lambda(["created_at", "ref"], Var("ref"))
    )
  )
)
Calling it indeed shows that the function now only returns references.
Let's get the actual documents. I know that FQL is verbose (with good reason, although it should become less verbose in the future), so I've added comments to clarify things:
Query(
  Lambda(
    ["before", "after"],
    Map(
      // This is just the query to get your range
      Paginate(
        Range(
          Match(Index("todosByCreatedAtRange")),
          Var("before"),
          Var("after")
        )
      ),
      // This is a function that will be executed on each result (with the help of Map)
      Lambda(["created_at", "ref"],
        // We'll use Let to structure our queries (allowing us to use variables)
        Let({
          todo: Get(Var("ref"))
        },
        // And then we return something
        Var("todo"))
      )
    )
  )
)
Our function now returns data.. woohoo!
We still need to make sure this data conforms to what GraphQL expects. From the schema we can see that it expects a [Todo!]! (see the docs tab), and a Todo looks like this (see the schema tab):
type Todo {
  _id: ID!
  _ts: Long!
  name: String!
  created_at: Time
}
As you can also see from the docs tab, 'non-resolver' queries are automatically changed to return TodoPages, and the function we wrote so far actually returns pages.
Option 1, change the schema and turn it into a paginated resolver.
We can fix this by adding the paginated: true option to the resolver. You will have to take into account the extra parameters that will be added to the resolver, as explained here. I haven't tried that myself, so I'm not 100% certain how that would work. The advantage of a paginated resolver is that you immediately get sane pagination at the GraphQL endpoint.
Option 2, turn it into a non-paginated result.
A paginated result is a result that looks as follows:
{
  data: [ document1, document2, ... ],
  before: ...,
  after: ...
}
Our [Todo!]! result type doesn't accept pages but an array, so I'll change the function to retrieve the data field:
And we have our result.
The complete query looks as follows:
Query(
  Lambda(
    ["before", "after"],
    Select(
      ["data"],
      Map(
        Paginate(
          Range(
            Match(Index("todosByCreatedAtRange")),
            Var("before"),
            Var("after")
          )
        ),
        Lambda(
          ["created_at", "ref"],
          Let({ todo: Get(Var("ref")) }, Var("todo"))
        )
      )
    )
  )
)
Disclaimers
Once you go custom, pagination also becomes your responsibility (e.g. passing an extra parameter). And you can't fetch relations out of the box anymore, as you normally would by just requesting the relations in the GraphQL body.
Some words on the benefits of UDFs and the hybrid of GraphQL/FQL
Before you shy away from FQL (and yes, we do have to add range queries and are working on that), here is some explanation on the UDF approach in general and why it makes sense to think about it anyway.
You will at a certain moment encounter things in GraphQL that are just impossible (complex conditional transactions, e.g. update a document, then update another document only if some condition that results from the previous update is true). Users of other GraphQL implementations typically solve this by writing a serverless function when they have to implement advanced logic or transactions.
FaunaDB's answer to this is to use its User Defined Functions (UDFs). A UDF is not a serverless function; it's a FaunaDB function implemented in FQL, which might seem cumbersome at first, but it's important to realize that it gives you the same benefits (multi-region, strong consistency, scalability, free tier, pay-as-you-go) that FaunaDB provides.

Apollo query does not return cached data available using readFragment

I have 2 queries: getGroups(): [Group] and getGroup($id: ID!): Group. One component first loads all groups using getGroups(), and later on a different component needs to access a specific Group's data by ID.
I'd expect that Apollo's normalization would already have the Group data in the cache and would use it when the getGroup($id: ID!) query is executed, but that's not the case.
When I set the cache-only fetchPolicy, nothing is returned. I can access the data using readFragment, but that's not as flexible as just using a query.
Is there an easy way to make Apollo return the cached data from a different query as I would expect?
It's pretty common to have a query field that returns a list of nodes and another that takes an id argument and returns a single node. However, deciding what specific node or nodes are returned by a field is ultimately part of your server's domain logic.
As a silly example, imagine you had a field like getFavoriteGroup(id: ID!) -- you may have the group with that id in your cache, but that doesn't necessarily mean it should be returned by the field (it may not be favorited). Any number of factors (other arguments, execution context, etc.) might affect what node(s) are returned by a field. As a client, it's not Apollo's place to make assumptions about your domain logic.
However, you can effectively duplicate that logic by implementing query redirects.
import { InMemoryCache } from 'apollo-cache-inmemory';
import { toIdValue } from 'apollo-utilities';

const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      // the key must match the query field name, here getGroup($id: ID!)
      getGroup: (_, args) =>
        toIdValue(cache.config.dataIdFromObject({ __typename: 'Group', id: args.id })),
    },
  },
});
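With that redirect in place, a query like the following should be answerable from the cache once getGroups has run (the field names are the ones from the question; the name selection is illustrative):

import gql from 'graphql-tag';

const GET_GROUP = gql`
  query GetGroup($id: ID!) {
    getGroup(id: $id) {
      id
      name
    }
  }
`;

// Served from the normalized cache, no network round trip needed.
client
  .query({ query: GET_GROUP, variables: { id: '42' }, fetchPolicy: 'cache-first' })
  .then(({ data }) => console.log(data.getGroup));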

GraphQL Authorization / Permission

So basically how do you handle permissions?
Let's say we have a list of Posts of some kind, with an argument first to limit the number of posts, and only the owner and approved users can read the posts; everyone else can't. What is the best way to implement this?
query {
  viewer {
    posts(first: 10) {
      id
      text
    }
  }
}
What I'm currently thinking is to have a single source of truth for whether a user can read a post or not, and hook it up with the dataloader module.
But how do I query for exactly 10 posts? If I query my DB for exactly 10 rows and then filter them with some business logic, I can end up with, for example, only 8 posts returned.
A solution is to not put a limit on the query, but that's not very efficient. So what is a good way to go about this?
Inspiration from here
(1) https://dev-blog.apollodata.com/auth-in-graphql-part-2-c6441bcc4302
(2) https://dev-blog.apollodata.com/graphql-at-facebook-by-dan-schafer-38d65ef075af
(1) solved it by
export const DB = {
  Lists: {
    all: (user_id) => {
      return sql.raw("SELECT id FROM lists WHERE owner_id IS NULL OR owner_id = %s", user_id);
    }
  }
}
as the query, and then to filter out which rows can be read:
resolve: (root, _, ctx) => {
  // factor out data fetching
  return DB.Lists.all(ctx.user_id)
    .then(lists => {
      // enforce auth on each node
      return lists.map(auth.List.enforce_read_perm(ctx.user_id));
    });
}
So we can clearly see that it queries for all the rows, even if, say, the first argument was 1, which is what I'm trying to avoid.
Maybe I'm approaching the problem wrong in some way. The business logic lives on another layer than the DB one, so there seems to be no way around querying all the rows. Any help appreciated.
For future reference and other people searching for solutions:
I used DataLoader to solve the authorization problem.
I literally implemented what they did in https://dev-blog.apollodata.com/graphql-at-facebook-by-dan-schafer-38d65ef075af and used this boilerplate repo as guidance. Not much more to say than that.
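For anyone who wants a concrete starting point, here is a minimal sketch of that pattern (the table and column names are hypothetical): batch the per-post permission checks behind a DataLoader so each post's visibility is resolved once per request, in a single query.

const DataLoader = require('dataloader');

// Create one loader per request, keyed by the viewer.
// canSeeLoader.load(postId) resolves to true/false for this viewer.
const makeCanSeeLoader = (viewerId, db) =>
  new DataLoader(async (postIds) => {
    // One batched query instead of one query per post.
    const rows = await db.query(
      `SELECT id FROM posts
       WHERE id = ANY($1)
         AND (owner_id = $2
              OR id IN (SELECT post_id FROM approved_readers WHERE user_id = $2))`,
      [postIds, viewerId]
    );
    const visible = new Set(rows.map((r) => r.id));
    // DataLoader requires results in the same order as the input keys.
    return postIds.map((id) => visible.has(id));
  });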

Proper Upsert (Atomic Update Counter Field or Insert Document) with RethinkDB

After looking at some SO questions and issues on the RethinkDB GitHub, I failed to come to a clear conclusion on whether an atomic upsert is possible.
Essentially I would like to perform the same operation as ZINCRBY using Redis.
If member does not exist in the sorted set, it is added with increment as its score (as if its previous score was 0.0). If key does not exist, a new sorted set with the specified member as its sole member is created.
The current implementation appears to differ from almost all databases that I have used, in that the data is replaced or inserted, not updated. This is a simple use case: update the last visit, update the number of clicks, update a product quantity. So I must be missing something very obvious, because I cannot see a simple way to do this.
Yes, it is possible. After a get on the key, perform an atomic replace. Something like this might work:
function set_or_increment_score(player, points) {
  return r.table('scores').get(player).replace(row => ({
    id: player,
    score: r.branch(
      row.eq(null),
      points,
      row('score').add(points)
    )
  }));
}
It has the following behaviour:
> set_or_increment_score("alice", 1).run(conn)
{ inserted: 1 }
> set_or_increment_score("alice", 2).run(conn)
{ replaced: 1 }
It works because get returns null when the document doesn't exist, and a replace on a non-existent document turns into an insert. See the documentation for replace.
So I ended up using the following code to work around the lack of an update on conflict:
r.db("test").table("t").insert(
{id:"A", type:"player", species:"warrior", score:0, xp:0, armor:0},
{conflict: function(id, oldDoc, newDoc) {
return newDoc.merge(oldDoc).merge(
{armor: oldDoc("armor").add(1)});
}
}
)
Do you think this is more readable/elegant, or do you see any issues with the code compared to your sample?
