Apollo Client Query Deduplication on Cached Object

I recently read an article about Apollo Client caching on the official blog, with a screenshot here.
According to the article, if the current query contains a cached object along with other objects, the query will be deduplicated so that only the remaining objects are queried.
However, I did some testing by logging queries on the server, which suggested that the query had not been partially deduplicated; instead, the entire query was sent to the server.
Can anyone provide any insight on this? Thanks very much.
Test:
First query:
{
post(_id: 1) {
_id
title
}
}
Second query:
{
post(_id: 1) {
_id
title
}
author(_id: 1) {
_id
firstName
}
}
Intended outcome:
The second query received by the server only contains
author(_id: 1) {
_id
firstName
}
since post(_id: 1) has been cached after the first query is sent, according to the blog.
Actual outcome:
Server log: (the second query has NOT been deduplicated)
{
"operationName": null,
"variables": {},
"query": "{\n post(_id: 1) {\n _id\n title\n __typename\n
}\n}\n"
} /graphql 200 81 - 0.520 ms
{
"operationName": null,
"variables": {},
"query": "{\n post(_id: 1) {\n _id\n title\n __typename\n
}\n author(_id: 1) {\n _id\n firstName\n __typename\n }\n}\n"
} /graphql 200 140 - 0.726 ms

There is a feature called query deduplication in Apollo Client
that comes from Apollo Link (the transport layer of Apollo Client),
but what it does is deduplicate identical queries that are currently being fetched:
Query deduplication can be useful if many components display the same data, but you don't want to fetch that data from the server many times. It works by comparing a query to all queries currently in flight.
You can read more about it here and here.
But it doesn't relate to the cache so much. I think what you're looking for is a better understanding of how the cache is managed in Apollo Client.
So how is caching handled in Apollo Client? You can find a more thorough article about it in their official docs: apollo-client-caching
More specifically to your case: I believe that if you use watchQuery / query and make sure the fetchPolicy is cache-first, then it might not resend the query.
I hope this gives you better insight into caching in Apollo Client.
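To make the distinction concrete, here is a minimal sketch of the in-flight deduplication idea (hypothetical helper names, not Apollo's actual implementation): only identical queries that are still in flight share one network request, and the cache plays no part in it.

```javascript
// Minimal sketch of in-flight query deduplication (hypothetical names,
// not Apollo's implementation): identical queries that are still in
// flight share one request; the cache is not involved at all.
const inFlight = new Map();

function dedupedFetch(query, fetcher) {
  // Reuse the pending promise if the exact same query is already in flight.
  if (inFlight.has(query)) return inFlight.get(query);
  const promise = fetcher(query).finally(() => inFlight.delete(query));
  inFlight.set(query, promise);
  return promise;
}

// Two concurrent identical queries -> one network call.
let calls = 0;
const fakeFetcher = (q) =>
  new Promise((res) => { calls++; setTimeout(() => res("data"), 10); });

Promise.all([
  dedupedFetch("{ post(_id: 1) { _id title } }", fakeFetcher),
  dedupedFetch("{ post(_id: 1) { _id title } }", fakeFetcher),
]).then(() => console.log(calls)); // prints 1
```

Note that your second query has a different shape from the first, so even this mechanism would not split it; only the cache (via fetchPolicy) decides whether a network request happens at all.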


How to do a terms query with @Query in a Spring Boot Elasticsearch Repository

From here, I found an example of how to query for something IN a list, but the example is idealized: it queries by id.
I want a query like: name in ["abc","ghi"]
From here, I found an answer, but it doesn't work.
I tried several different approaches:
@Query("{ \"terms\": {\"name\": [\"?0\"] } }") // the 1st query
Flux<Response> findByNames(String names);
@Query("{\"names\": {\"values\": ?0 }}") // mimics the ids query in the example
Flux<Response> findByNames(List<String> names);
@Query("{ \"terms\": {\"name\": ?0 } }") // the 3rd query
Flux<Response> findByNames(List<String> names);
For the 1st query, I pass in nameList.stream().collect(Collectors.joining(",")); the query JSON it generates is nearly correct, but has a slight error:
{ "terms": {"name": ["abc,ghi"] } }
If it could generate the query JSON like this (note the quote marks):
{ "terms": {"name": ["abc","ghi"] } }
then it would work.
So, my question is: how do I do a terms query with @Query in a Spring Boot Elasticsearch Repository?
There was a bug in the handling of collection parameters in @Query methods that was fixed 2 weeks ago on the main branch (https://github.com/spring-projects/spring-data-elasticsearch/pull/2182); I'll need to port that back to the currently maintained branches as well.
Addition 16.07.2022: it's backported now and will be in the next maintenance releases of 4.4 and 4.3.
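The root cause of the first attempt is easy to see if you compare the string the joining collector produces with a real JSON array. A quick illustration (in JavaScript for brevity; the same reasoning applies on the Java side):

```javascript
// Why the first attempt produces a malformed terms query: joining the
// names yields one comma-separated string, not a JSON array of strings.
const names = ["abc", "ghi"];

// What nameList.stream().collect(Collectors.joining(",")) produces once
// substituted into {"terms": {"name": ["?0"]}}:
const joined = names.join(",");
const broken = `{ "terms": {"name": ["${joined}"] } }`;
console.log(broken); // { "terms": {"name": ["abc,ghi"] } }  -- one single term

// What Elasticsearch actually needs: each name as its own quoted element.
const correct = JSON.stringify({ terms: { name: names } });
console.log(correct); // {"terms":{"name":["abc","ghi"]}}
```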

How to get the current earnings of the farms via Elrond REST API

How can I obtain the current earnings of a farm from Maiar Exchange via the Elrond REST API?
For example, for the LKMEX farm I want to determine the current earnings (My Earned MEX) in MEX and/or USDT since the latest harvest or 'reinvest'. Thanks!
Based on inspecting maiar.exchange, you can determine that there is a GraphQL request for this.
You can send a GraphQL request to https://graph.maiar.exchange/graphql. I did not check whether there is an OpenAPI spec or any security bound to this route. However, to help you out, here is the GraphQL request (with redacted content) that is used to get the current amount in the farm together with the current harvestable amount:
{
"variables": {
"getRewardsForPositionArgs": {
"farmsPositions": [
{
"attributes": "XXXX",
"identifier": "MEXFARM-XXXXX-XXXXX",
"farmAddress": "erd1XXXX",
"liquidity": "19700000000000000000000000"
},
{
"attributes": "XXX",
"identifier": "LKFARM-XXXXXXX-XXXXXX",
"farmAddress": "erd1XXXXX",
"liquidity": "19700000000000000000000000"
}
]
}
},
"query": "query ($getRewardsForPositionArgs: BatchFarmRewardsComputeArgs!) {\n getRewardsForPosition(farmsPositions: $getRewardsForPositionArgs) {\n rewards\n remainingFarmingEpochs\n decodedAttributes {\n aprMultiplier\n compoundedReward\n initialFarmingAmount\n currentFarmAmount\n lockedRewards\n identifier\n __typename\n }\n __typename\n }\n}\n"
}
liquidity identifies the position you want the data for. You may need to make another GraphQL request before this one to find your liquidity positions.
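As a sketch, assuming only that the endpoint accepts a standard GraphQL POST body (the query and variable names are taken from the captured payload above; the position fields are placeholders you must fill in), the request could be built and sent like this:

```javascript
// Sketch: build the getRewardsForPosition request body for the Maiar
// Exchange GraphQL endpoint. Query/variable names come from the captured
// payload above; attributes/identifier/farmAddress/liquidity values are
// placeholders you need to supply.
const QUERY = `query ($getRewardsForPositionArgs: BatchFarmRewardsComputeArgs!) {
  getRewardsForPosition(farmsPositions: $getRewardsForPositionArgs) {
    rewards
    remainingFarmingEpochs
  }
}`;

function buildRewardsRequest(farmsPositions) {
  // Returns the JSON body expected by a standard GraphQL HTTP endpoint.
  return {
    variables: { getRewardsForPositionArgs: { farmsPositions } },
    query: QUERY,
  };
}

// Usage (Node 18+ has a global fetch):
// fetch("https://graph.maiar.exchange/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildRewardsRequest([/* your positions */])),
// }).then((r) => r.json()).then(console.log);
```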

Best practices for writing a PUT endpoint for a REST API

I am building a basic CRUD service with some business logic under the hood, and I'm about to start working on the PUT (update) endpoint. I have already fully written+tested GET (read) and POST (create) for my data object. The data store for my documents is an ElasticSearch instance on AWS.
I have some decisions to make about how I want to architect the PUT, namely, how I want to determine a valid request. My goal is to make it so the POST is only for the creation of new assets, and PUT will only update existing documents. (At the moment, I am POSTing to elastic with /_doc/, however the intent is to move to /_create/ as part of this work)
What I'm a little hung-up on is the "right" way to check that a document exists before making the API call to Elastic to update.
When a user submits a document to PUT, should I first GET from Elastic with the document ID to make sure the document already exists? Or should I simply try to "update" the resource and, if it doesn't exist, let one be created?
Obviously there are trade-offs to each strategy. With the latter, PUTing a document that doesn't exist almost completely negates the need for a POST at all, so I'd be more inclined to go with the former - despite the additional REST call - to maintain the integrity of the basic REST definition.
Thoughts?
The consideration whether to update a doc (with versioning) or create a new one with some shared ID relating all its previous versions depends on your use case -- either of them is 'correct', but there's too little information to advise on that right now.
With regards to the document-exists strategies -- there are essentially 2 types of IDs in ES -- what I call:
internal ids (_id)
external ids (doc_values-provided ids)
Create an index & a doc:
PUT myindex
PUT myindex/_doc/internal_id_1
{
"external_id": "1"
}
Internal ID check
GET myindex/_doc/internal_id_1
or
GET myindex/_count
{
"query": {
"ids": {
"values": [
"internal_id_1"
]
}
}
}
or
GET myindex/_count
{
"query": {
"term": {
"_id": {
"value": "internal_id_1"
}
}
}
}
External ID check
GET myindex/_count
{
"query": {
"term": {
"external_id": {
"value": "1"
}
}
}
}
and many others (terms, match (for partial matches etc), ...)
Note that I've used the _count endpoint instead of _search -- it's slightly faster.
If you intend to check the _version of a given doc before you proceed to update it, replace _count with _search?version=true and the _version attribute will become available:
{
"_index":"myindex",
"_type":"_doc",
"_id":"internal_id_1",
"_version":2, <---
"_score":1.0,
"_source":{
"external_id":"1"
}
}
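Putting the "check first, then update" strategy together, here is a minimal sketch of the PUT flow (the URL, index name, and helper names are placeholders; Node 18+ global fetch is assumed, and the fetch function is injectable so it can be stubbed):

```javascript
// Sketch: only accept a PUT for a document that already exists.
// ES_URL and the index name are placeholders; fetchFn defaults to the
// global fetch (Node 18+) and is injectable for testing.
const ES_URL = "http://localhost:9200/myindex";

async function docExists(id, fetchFn = fetch) {
  // _count is slightly faster than _search for a pure existence check.
  const res = await fetchFn(`${ES_URL}/_count`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: { ids: { values: [id] } } }),
  });
  const { count } = await res.json();
  return count > 0;
}

async function handlePut(id, doc, fetchFn = fetch) {
  if (!(await docExists(id, fetchFn))) {
    // 404 keeps PUT strictly for updates; creation stays with POST.
    return { status: 404, body: "document does not exist; POST to create it" };
  }
  const res = await fetchFn(`${ES_URL}/_doc/${id}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(doc),
  });
  return { status: res.status, body: await res.json() };
}
```

This costs one extra round trip per PUT, which is the trade-off discussed above for keeping the POST/PUT semantics clean.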

What is query_hash in instagram?

I was working for the first time on graphql, and I saw that Instagram hash their queries.
I searched around, but I don't know if what I found is correct. Is the hash like a persisted query stored in a cache, or am I wrong?
Example: this is my request payload
{
"operationName":"user",
"variables":{},
"query":"query user {\n users {\n username\n createdAt\n _id\n }\n}\n"
}
this is instagram:
query_hash: 60b755363b5c230111347a7a4e242001
variables: %7B%22only_stories%22%3Atrue%7D
(it is in urlencode mode).
Now, how could I hash my query? I'm using Node.js as the backend and React as the frontend.
I would like to understand how it works! Thank you!
The persisted query is used to improve GraphQL network performance by reducing the request size.
Instead of sending a full query which could be very long, you send a hash to the GraphQL server which will retrieve the full query from the key-value store using the hash as the key.
The key-value store can be memcached, redis, etc.
Apollo Server comes with automatic persisted queries out of the box. I recommend giving it a try; they have published a blog post about it: https://blog.apollographql.com/automatic-persisted-queries-and-cdn-caching-with-apollo-server-2-0-bf42b3a313de
If you want to build your own solution, you can use this package to do the hashing yourself: https://www.npmjs.com/package/hash.js
query_hash (or query_id) does not hash the variables or the parameters; it hashes the payload.
Let's say your actual path is /graphql and your payload is
{
"user": {
"profile": [
"username",
"user_id",
"profile_picture"
],
"feed": {
"posts": {
"data": [
"image_url"
],
"page_size": "{{variables.max_count}}"
}
}
}
}
Then this GraphQL payload is hashed and becomes d4d88dc1500312af6f937f7b804c68c3. Now, instead of POSTing to /graphql, you request /graphql/query/?query_hash=d4d88dc1500312af6f937f7b804c68c3. This way you have hashed the payload, i.e. the "keys" that are requested from the GraphQL API. When you then pass variables as a URL parameter, the payload itself does not change, because the variables are resolved on the backend rather than in the payload.

GraphQL fragment JSON format

I'm attempting to read some data out of GitHub with their v4 (GraphQL) API. I've written a Java client that is working fine up until I start replacing some of the query with GraphQL fragments.
I was using GraphiQL to initially test my queries, and adding fragments was pretty simple in there. However, when translating to JSON, I haven't figured out the correct format. I've tried:
{ "query": "{ ... body_of_query ... } fragment fragname on Blob { byteSize text }" }
{ "query": "{ ... body_of_query ... }, fragment fragname on Blob { byteSize text }" }
{ "query": "{ ... body_of_query ... }", "fragment": "{fragname on Blob { byteSize text } }" }
EDIT: Adding for @Scriptonomy:
{
query {
search(first:3, type: REPOSITORY, query: \"language:HCL\") {
edges {
node {
... on Repository {
name
descriptionHTML
object(expression: \"master:\") {
... on Tree {
...recurseTree
}
}
}
}
cursor
}
pageInfo {
endCursor
hasNextPage
}
}
}
fragment recurseTree on Tree {
entries {
name
type
}
}
I'm sure it would be fun to keep throwing random variations at this, and my morning has been huge fun searching various GraphQL docs and blogs on fragments; I may even have guessed the correct answer at some point but had mismatched parens (I'm just using hardcoded JSON until I know the format -- perhaps not the wisest choice, looking back on it).
I'm hoping that someone may know the correct format and can set me on the right course before I keel over from GraphQL-doc over-exposure.
Fragments are sent in the same property of the JSON body as the query itself. You can see an example for using fragments here.
A valid GraphQL request is usually either a GET request that encodes the query as URL query parameter, or a POST request with a JSON body. The JSON body has one required key, query and one optional field, variables. In your case, the JSON needs to look like this:
{
"query": "{\n query {\n search(first:3, type: REPOSITORY, query: \"language:HCL\") {\n edges {\n node {\n ... on Repository {\n name\n descriptionHTML\n object(expression: \"master:\") {\n ... on Tree {\n ...recurseTree\n }\n }\n }\n }\n cursor\n }\n pageInfo {\n endCursor\n hasNextPage\n }\n }\n}\n\nfragment recurseTree on Tree {\n entries {\n name\n type\n }\n}"
}
That is the JSON.stringify version of the verbatim query string in your question.
I recommend running queries from a GraphiQL instance connected to the GitHub GraphQL API and looking at the network request. You can copy the GraphQL request as cURL to see how the JSON body needs to look.
If you still get a 400, please share some code, because that means your request was malformed and probably never hit the GraphQL parser in the first place.
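Rather than hand-writing the escaped JSON, it is easier to build the body with JSON.stringify. A minimal Node sketch (the query is shortened for illustration, and the owner/name values are placeholders):

```javascript
// Build the GraphQL request body programmatically: the fragment lives in
// the same "query" string as the operation, and JSON.stringify handles
// all of the escaping.
const query = `
{
  repository(owner: "octocat", name: "Hello-World") {
    object(expression: "master:") {
      ... on Tree { ...recurseTree }
    }
  }
}

fragment recurseTree on Tree {
  entries { name type }
}`;

const body = JSON.stringify({ query });
console.log(body); // ready to POST to https://api.github.com/graphql
```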
There is no need to translate GraphQL Query to JSON. This would be your query:
"{ query { ... body_of_query ... } fragment fragname on Blob { byteSize text } }"
For future users, and people like me who stumbled upon this hurdle:
the query needs to be sent in the given order;
{ "query": "fragment fragname on Blob { byteSize text } methodName(ifMethodParam: paramVal) { ...fragname }" }
Hope this helps others.
