Firestore - Using cache until online content updates

I am starting with Firestore. I've read the docs and tutorials about offline data persistence, but it's not clear to me whether Firestore downloads data again even if the content hasn't been modified.
For example, if I have a query whose results are updated once a week, and I don't need the app to download the content again until those changes are made, what is the most efficient way to write the code?
Thanks!

You want to use the "snapshot listener" API to listen to your query:
https://firebase.google.com/docs/firestore/query-data/listen#listen_to_multiple_documents_in_a_collection
Here's some JavaScript as an example:
db.collection("cities").where("state", "==", "CA")
.onSnapshot(function(querySnapshot) {
var cities = [];
querySnapshot.forEach(function(doc) {
cities.push(doc.data().name);
});
console.log("Current cities in CA: ", cities.join(", "));
});
The first time you attach this listener, Firestore will access the network to download all of the results of your query and provide you with a query snapshot, as you'd expect.
If you attach the same listener a second time and you're using offline persistence, the listener will fire immediately with the results from the cache. Here's how you can detect whether your result came from the cache or from the server:
db.collection("cities").where("state", "==", "CA")
.onSnapshot({ includeQueryMetadataChanges: true }, function(snapshot) {
snapshot.docChanges.forEach(function(change) {
if (change.type === "added") {
console.log("New city: ", change.doc.data());
}
var source = snapshot.metadata.fromCache ? "local cache" : "server";
console.log("Data came from " + source);
});
});
After you get the cached result, Firestore will check with the server to see if there are any changes to your query result. If there are, you will get another snapshot with the changes.
If you want to be notified of changes that involve only metadata (for example, if no documents change but snapshot.metadata.fromCache changes), you can use QueryListenOptions when issuing your query:
https://firebase.google.com/docs/reference/android/com/google/firebase/firestore/QueryListenOptions
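Note that the cache-first behavior above requires offline persistence to be enabled before any listeners are attached. A minimal sketch, assuming the v8-style namespaced web SDK (newer modular SDKs expose this as enableIndexedDbPersistence):
// Enable offline persistence once, before attaching any listeners.
firebase.firestore().enablePersistence()
    .then(function() {
        // Persistence is on; subsequent listeners can be served from cache first.
    })
    .catch(function(err) {
        if (err.code === 'failed-precondition') {
            // Multiple tabs open; persistence can only be enabled in one tab at a time.
        } else if (err.code === 'unimplemented') {
            // The current browser does not support all of the required features.
        }
    });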

Related

GraphQL vs Normalized Data Structure Advantages

From the Redux docs:
This [normalized] state structure is much flatter overall. Compared to
the original nested format, this is an improvement in several ways...
From https://github.com/paularmstrong/normalizr:
Many APIs, public or not, return JSON data that has deeply nested objects. Using data in this kind of structure is often very difficult for JavaScript applications, especially those using Flux or Redux.
It seems like normalized, database-ish data structures are better to work with on the front end. Then why is GraphQL so popular, if its whole query style revolves around quickly fetching arbitrarily nested data? Why do people use it?
This kind of discussion is off-topic on SO ...
It's not only about [normalized] structures ...
A GraphQL client (like Apollo) also takes care of all the data-fetching nuances (error handling, caching, refetching, data conversion, and many more), which is hard to achieve with Redux alone.
They serve different use cases, and you can use both:
keep (complex) app state in Redux,
handle data fetching in Apollo (you can use it for local state, too); a sketch of this split follows.
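For illustration, a component could combine the two roughly like this (a minimal sketch; the query shape and the state.ui.todoFilter selector are assumptions):
// Sketch: Apollo handles server data, Redux handles client-only app state.
import React from 'react';
import { useQuery, gql } from '@apollo/client';
import { useSelector } from 'react-redux';

function TodoPage() {
    // Server data via Apollo: fetching, caching, and refetching are handled for us.
    const { data, loading } = useQuery(gql`{ todos { id title } }`);
    // Complex client-only state via Redux.
    const filter = useSelector(state => state.ui.todoFilter);

    if (loading) return null;
    return data.todos
        .filter(t => !filter || t.title.includes(filter))
        .map(t => <div key={t.id}>{t.title}</div>);
}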
Let's look at why we want to normalize the cache and what kind of work we have to do to get a normalized cache.
For the main page we fetch a list of TODOs and a list of high-priority TODOs. Our two endpoints return the following data:
{
    all: [{ id: 1, title: "TODO 1" }, { id: 2, title: "TODO 2" }, { id: 3, title: "TODO 3" }],
    highPrio: [{ id: 1, title: "TODO 1" }]
}
If we stored the data like this in our cache, we would have a hard time updating a single todo, because we would have to update it in every array in our store, now or in the future.
We can normalize the data and store only references in the arrays. This way we can update a single todo in a single place:
{
    queries: {
        all: [{ ref: "Todo:1" }, { ref: "Todo:2" }, { ref: "Todo:3" }],
        highPrio: [{ ref: "Todo:1" }]
    },
    refs: {
        "Todo:1": { id: 1, title: "TODO 1" },
        "Todo:2": { id: 2, title: "TODO 2" },
        "Todo:3": { id: 3, title: "TODO 3" }
    }
}
The downside is that this shape of data is now much harder to use in our list component. We have to transform the cache back, roughly like so:
function denormalise(cache) {
    return {
        all: cache.queries.all.map(({ ref }) => cache.refs[ref]),
        highPrio: cache.queries.highPrio.map(({ ref }) => cache.refs[ref]),
    };
}
Notice how updating Todo:1 inside the cache now automatically updates all queries that reference that todo, as long as we run this function inside the React component (in Redux this is often called a selector).
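For illustration, here is a sketch of what a single-place update against this normalized shape could look like (updateTodo is a hypothetical helper, not part of the original answer):
// Patch one todo in refs; every query that references "Todo:1"
// picks up the change on the next denormalise(cache) call.
function updateTodo(cache, id, patch) {
    const key = "Todo:" + id;
    return {
        ...cache,
        refs: {
            ...cache.refs,
            [key]: { ...cache.refs[key], ...patch },
        },
    };
}

const nextCache = updateTodo(cache, 1, { title: "TODO 1 (edited)" });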
The magical thing about GraphQL is that it is a strict specification with a type system. This allows GraphQL clients like Apollo to globally identify objects and normalize the cache. At the same time it can automatically denormalize the cache for you and update objects in the cache after a mutation. This means that most of the time you don't have to write any caching logic at all. And this should explain why it is so popular: the best code is no code!
const { data, loading, error } = useQuery(gql`
    { all { id title } highPrio { id title } }
`);
This code automatically fetches the query on load, normalizes the response, and writes it into the cache. It then denormalizes the cache back into the shape of the query. Updates to elements in the cache automatically update all subscribed components.
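To illustrate the automatic update after a mutation mentioned above, a sketch (the mutation and field names are hypothetical): because the result selects id and title, Apollo can normalize it and update every component showing that todo, with no manual cache code.
import { useMutation, gql } from '@apollo/client';

// Inside a React component: the mutation result includes id, so Apollo
// merges it into the normalized cache and all subscribed components re-render.
const [renameTodo] = useMutation(gql`
    mutation Rename($id: ID!, $title: String!) {
        renameTodo(id: $id, title: $title) {
            id
            title
        }
    }
`);

renameTodo({ variables: { id: 1, title: "TODO 1 (renamed)" } });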

How to test and automate APIs implemented in GraphQL

In our company, we are creating an application with a GraphQL API.
I want to test and automate these APIs for CI/CD.
I have tried REST-assured, but since GraphQL queries are different from plain JSON, REST-assured doesn't have proper support for them, as discussed here:
How can we send a GraphQL query using REST-assured?
Please suggest the best approach to testing and automating GraphQL APIs, and tools which can be used for testing and automation.
I had the same issue and was able to make it work in a very simple way.
I had been struggling for a while trying to make this GraphQL request with REST-assured in order to validate the response (it's amazing how scarce the info about this is), and since yesterday I've been able to make it work, so I thought sharing here might help someone else.
What was wrong? Purely copying and pasting my GraphQL request (which is not JSON) into the request did not work. I kept getting the error "Unexpected token t in JSON at position". So I thought it was because GraphQL is not JSON, or some validation in REST-assured. I tried to convert the request to JSON, imported libraries, and lots of other things, but none of them worked.
My GraphQL query request:
String reqString = "{ trade { orders { ticker } }}\n";
How did I fix it? By using Postman to format my request. Yes, I just pasted it into the QUERY window of Postman and then clicked the Code button on the right side (fig. 1). That let me see my request in a different format, a format that works in REST-assured (fig. 2). PS: Just remember to configure Postman, as I've pointed out with the red arrows.
My GraphQL query request, FORMATTED:
String reqString = "{\"query\":\"{ trade { orders { ticker } }}\\r\\n\",\"variables\":{}}";
(Fig. 1 and Fig. 2: Postman screenshots of the Query window and the generated code, not reproduced here.)
Hope it helps you out, take care!
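The takeaway from the fix above is that a GraphQL request over HTTP is just a POST with a JSON envelope containing query (and optionally variables). A minimal JavaScript sketch of the same request, assuming a placeholder endpoint:
// The GraphQL query string travels inside a JSON body; this is
// exactly the shape REST-assured needs in its request body too.
fetch("https://example.com/graphql", { // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
        query: "{ trade { orders { ticker } }}",
        variables: {},
    }),
})
    .then(function(res) { return res.json(); })
    .then(function(json) { console.log(json.data); });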
You can test it with apitest
{
    vars: { #describe("share variables") #client("echo")
        req: {
            v1: 10,
        }
    },
    test1: { #describe("test graphql")
        req: {
            url: "https://api.spacex.land/graphql/",
            body: {
                query: `\`query {
                    launchesPast(limit: ${vars.req.v1}) {
                        mission_name
                        launch_date_local
                        launch_site {
                            site_name_long
                        }
                    }
                }\`` #eval
            }
        },
        res: {
            body: {
                data: {
                    launchesPast: [ #partial
                        {
                            "mission_name": "", #type
                            "launch_date_local": "", #type
                            "launch_site": {
                                "site_name_long": "", #type
                            }
                        }
                    ]
                }
            }
        }
    }
}
Apitest is a declarative API testing tool with a JSON-like DSL.
See https://github.com/sigoden/apitest

After a mutation, how do I update the affected data across views? [duplicate]

This question already has an answer here:
Auto-update of apollo client cache after mutation not affecting existing queries
(1 answer)
Closed 3 years ago.
I have both the getMovies query and addMovie mutation working. When addMovie happens though, I'm wondering how to best update the list of movies in "Edit Movies" and "My Profile" to reflect the changes. I just need a general/high-level overview, or even just the name of a concept if it's simple, on how to make this happen.
My initial thought was just to hold all of the movies in my Redux store. When the mutation finishes, it should return the newly added movie, which I can concatenate to the movies in my store.
After "Add Movie", it would pop back to the "Edit Movies" screen where you should be able to see the newly added movie, then if you go back to "My Profile", it'd be there too.
Is there a better way to do this than holding it all in my own Redux store? Is there any Apollo magic I don't know about that could possibly handle this update for me?
EDIT: I discovered the idea of updateQueries (http://dev.apollodata.com/react/cache-updates.html#updateQueries). I think this is what I want (please let me know if this is not the right approach). This seems better than the traditional way of using my own Redux store.
import update from 'immutability-helper'; // assuming update() with $unshift comes from immutability-helper

// this represents the 3rd screen in my picture
const AddMovieWithData = compose(
    graphql(searchMovies, {
        props: ({ mutate }) => ({
            search: (query) => mutate({ variables: { query } }),
        }),
    }),
    graphql(addMovie, {
        props: ({ mutate }) => ({
            addMovie: (user_id, movieId) => mutate({
                variables: { user_id, movieId },
                updateQueries: {
                    getMovies: (prev, { mutationResult }) => {
                        // my mutation returns just the newly added movie
                        const newMovie = mutationResult.data.addMovie;
                        return update(prev, {
                            getMovies: {
                                $unshift: [newMovie],
                            },
                        });
                    },
                },
            }),
        }),
    })
)(AddMovie);
After the addMovie mutation, this properly updates the view in "My Profile" because it uses the getMovies query (woah)! I'm then passing these movies as props into "Edit Movies", so how do I update it there as well? Should I just have them both use the getMovies query? Is there a way to pull the new result of getMovies out of the store, so I can reuse it on "Edit Movies" without running the query again?
EDIT 2: Wrapping both MyProfile and EditMovies with the getMovies query container seems to work fine. After addMovie, the list is updated in both places thanks to updateQueries on getMovies. It's fast, too. I think it's being cached?
It all works, so I guess this just becomes a question of: was this the best approach?
The answer to the question in the title is:
Use updateQueries to "inform" the queries that drive the other views that the data has changed (as you discovered).
This topic gets ongoing discussion in the react-apollo Slack channel, and this answer is the consensus that I'm aware of: there's no obvious alternative.
Note that you can update more than one query (that's why the name is plural: the argument is an object whose keys match the names of all the queries that need updating).
As you may guess, this "pattern" means you need to be careful in designing and using queries so that your mutations stay easy to write and maintain. Sharing common queries across views means less chance that you miss one in a mutation's updateQueries action.
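For illustration, one mutation can update two active queries at once, roughly like this (the getMovies and getProfile query names and their result shapes are assumptions for the sketch, not from the question):
// Sketch: each key under updateQueries must match the name of an
// active query; each reducer returns the query's new result shape.
const mutateOptions = {
    variables: { movieId: 42 },
    updateQueries: {
        getMovies: (prev, { mutationResult }) => ({
            ...prev,
            getMovies: [mutationResult.data.addMovie, ...prev.getMovies],
        }),
        getProfile: (prev, { mutationResult }) => ({
            ...prev,
            getProfile: {
                ...prev.getProfile,
                movieCount: prev.getProfile.movieCount + 1,
            },
        }),
    },
};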
Apollo Client only updates the store automatically for update mutations. When you use create or delete mutations, you need to tell Apollo Client how to update the cache. I had expected the store to update automatically, but it doesn't...
I found a workaround using resetStore just after the mutation:
you reset the store right after the mutation, so the next time a query is needed the store is empty and Apollo refetches fresh data.
Here is the code:
import { withApollo } from 'react-apollo'

...

deleteCar = async id => {
    await this.props.deleteCar({
        variables: {
            where: {
                id: id
            }
        },
    })
    this.props.client.resetStore().then(data => {
        this.props.history.push('/cars')
    })
}

...

export default compose(
    graphql(POST_QUERY, {
        name: 'carQuery',
        options: props => ({
            fetchPolicy: 'network-only',
            variables: {
                where: {
                    id: props.match.params.id,
                }
            },
        }),
    }),
    graphql(DELETE_MUTATION, {
        name: 'deleteCar',
    }),
    withRouter,
    withApollo
)(DetailPage)
The full code is here: https://github.com/alan345/naperg
(The error seen before the resetStore hack is shown in a screenshot, not reproduced here.)

Bloodhound does not cache data from remote fetches in local storage

I am trying to load autocompletion information for people's names for typeahead, and then not have to query the server again if I already have a result.
For example, if I search a person's name and the data for that person (among others) gets retrieved by a remote query, then when I delete the name and search for the surname instead, I want the previously cached names with that surname to show up. What actually happens is that the results are retrieved from the server again and then suggested.
Caching only works while typing a single word ("Mic" -> "Mich" -> "Micha" -> "Michael").
TL;DR: I want to cache results from Bloodhound in local storage, not only from prefetch (which cannot be applied to my situation) but from remote as well, and use that cache before querying remote again.
What I currently have is:
function dispkey(suggestion_object) {
    console.log(suggestion_object);
    return suggestion_object["lastname"] + ", " + suggestion_object["firstname"];
}

var engine = new Bloodhound({
    name: 'authors',
    local: [],
    remote: 'http://xxxxxx.xxx/xxxx/xxxxxxxxxx?query=%%QUERY',
    datumTokenizer: function(d) {
        return Bloodhound.tokenizers.whitespace(d.val);
    },
    queryTokenizer: function(s) {
        return s.split(/[ ,]+/);
    },
});

engine.initialize();

$('.typeahead').typeahead({
    highlight: true,
    hint: true,
    minLength: 3,
},
{
    displayKey: dispkey,
    templates: {
        suggestion: Handlebars.compile([
            '<p id="author_autocomplete_email_field">{{email}}</p>',
            '<p id="author_autocomplete_name_field">{{lastname}} {{firstname}}</p>',
        ].join(''))
    },
    source: engine.ttAdapter(),
});
I haven't found anything similar, and I am afraid there is no trivial solution to this.
P.S.: I also noticed that datumTokenizer never gets called:
datumTokenizer: function(d) {
    console.log("Lalalalala");
    return Bloodhound.tokenizers.whitespace(d.val);
},
When I used this, "Lalalalala" was never printed in the Chrome debug console.
As jharding mentioned, it's not possible to have remote suggestions pulled from localStorage at this point.
However, I recently worked on a small project where I needed to store previous form inputs for future use in typeahead.js. To do this, I saved an array of form input values to localStorage:
var inputs = ['val1', 'val2', 'val3', 'val4'];
localStorage.setItem('values', JSON.stringify(inputs));
I then retrieved the array for use in the typeahead field:
var data = JSON.parse(localStorage.getItem('values'));

$('input').typeahead({
    minLength: 3,
    highlight: true,
},
{
    name: 'data',
    displayKey: 'value',
    source: this.substringMatcher(data)
});
You can view my full source here.
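substringMatcher above presumably refers to the substring-matching source function from the typeahead.js docs; a minimal version looks like this:
// Returns a typeahead-compatible source that matches the query
// case-insensitively against each string in strs.
function substringMatcher(strs) {
    return function findMatches(q, cb) {
        var matches = [];
        var substrRegex = new RegExp(q, 'i');
        $.each(strs, function(i, str) {
            if (substrRegex.test(str)) {
                matches.push(str);
            }
        });
        cb(matches);
    };
}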

Twitter typeahead.js remote and search on client

As far as I understand, typeahead.js has three ways of fetching data.
Local: hardcoded data
Prefetch: load data from a local JSON file or by URL
Remote: send a query to the backend, which responds with matching results
I want to fetch all data from the backend once and then process it on the client.
The data my server responds with has the following structure:
[{"id": 2, "courseCode": "IDA530", "courseName": "Software Testing", "university": "Lund University"},
{"id": 1, "courseCode": "IDA321", "courseName": "Computer Security", "university": "Uppsala University"}, ...]
I want it to search on all fields in each entry (id, courseCode, courseName, university).
I want to do more on the client while still fetching only once per user (instead of on every keystroke). I have probably misunderstood something here, so please correct me.
You should re-read the docs. Basically there are two things you need:
Use the prefetch object to bring all the data from the backend to the client only once (which is what you are looking for, if I understand correctly).
Use a filter function to transform those results into datums. The returned datums can have a tokens field, which is what typeahead searches by, and it can be built from all your fields.
Something along the lines of:
$('input.twitter-search').typeahead([{
    name: 'courses',
    prefetch: {
        url: '/url-path-to-server-ajax-that-returns-data',
        filter: function(data) {
            var retval = [];
            for (var i = 0; i < data.length; i++) {
                retval.push({
                    value: data[i].courseCode,
                    tokens: [data[i].courseCode, data[i].courseName, data[i].university],
                    courseCode: data[i].courseCode,
                    courseName: data[i].courseName,
                    template: '<p>{{courseCode}} - {{courseName}}</p>',
                });
            }
            return retval;
        }
    }
}]);
