The Apollo cache spends 1.2s parsing the result and putting it into the cache.
Things I've tried:
keyFields: false to avoid normalisation
fetchPolicy: "no-cache"
assumeImmutableResults: true
canonizeResults: false
With all of this in place, Apollo should be able to put the object into the cache without any further processing. Is there any way to achieve that? It looks like Apollo is trying to deep-clone this huge object.
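For reference, here is a sketch of how those options can be combined in one client setup (the HugeResult type name and HUGE_QUERY document are placeholders, not names from the original app):

import { ApolloClient, InMemoryCache, gql } from '@apollo/client';

const client = new ApolloClient({
  uri: '/graphql',
  assumeImmutableResults: true, // promise not to mutate results, so Apollo can skip defensive copies
  cache: new InMemoryCache({
    canonizeResults: false,
    typePolicies: {
      // Store the big object embedded in its parent instead of normalising it.
      HugeResult: { keyFields: false },
    },
  }),
});

const HUGE_QUERY = gql`query { hugeResult }`;
client.query({ query: HUGE_QUERY, fetchPolicy: 'no-cache' });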
I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done, so I started looking into more advanced things such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that essentially runs different SQL queries against the database, hence the number of calls). The data in the DB is not updated very often, so we have a local cache for the responses, but it is obviously lost whenever the user refreshes the page. This is where I wanted to use a service worker: to keep the responses either in the cache store or in IndexedDB (I went with the second option). And, of course, a cache-first approach doesn't fit well here, since there is still a chance the data becomes stale. So I tried to implement the stale-while-revalidate strategy: if the response for a given request is already in the cache, return it, but still make a real request in the background and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems like the app is still waiting for the real requests to resolve even when there is a cache entry to return (I see those responses in the Network tab).
Basically the sequence seems to be: request > cache entry found! > update the cache > only then show the data. Updating immediately is unnecessary in my case, so I was wondering if there is any way to delay it? Or, alternatively, not to wait for the "real" response to resolve?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions that I use to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  const request = event.request;
  event.respondWith(
    serializeRequest(request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(request).then((networkResponse) => {
          // Put a clone of the fresh network response into the cache.
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        // Return the cached response if we have one, otherwise wait for the network.
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Can this be due to the fact that I put stuff into IndexedDB instead of the cache? I am sort of forced to use IndexedDB because those "magic endpoints" are POST instead of GET (they require a body), and POST requests cannot be inserted into the cache...
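For comparison, here is a variant of the same strategy that makes the background update explicit via event.waitUntil, so the cached response is returned without waiting on the refresh (a sketch only, assuming the same helper functions):

self.addEventListener('fetch', (event) => {
  const request = event.request;
  event.respondWith(
    serializeRequest(request).then((serializedRequest) =>
      cacheMatch(serializedRequest, db.post_cache).then((cached) => {
        const update = fetch(request.clone()).then((networkResponse) => {
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        // Keep the worker alive for the refresh, but don't block the response on it.
        event.waitUntil(update.catch(() => {}));
        return cached || update;
      })
    )
  );
});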
I am receiving image URLs from an API response and I want to cache each of those images. I tried just using cache.addAll, but it does not work because the images come back as opaque responses. I am considering fetching each image using fetch and having the routes cache them, but I am not sure whether this is the best way. Are there any better alternatives?
There's some guidance in this Stack Overflow answer that explains how to cache opaque responses; the gist of it is:
const request = new Request('https://third-party-no-cors.com/', {
mode: 'no-cors',
});
// Assume `cache` is an open instance of the Cache class.
fetch(request).then(response => cache.put(request, response));
The caveat is that your code has no way of knowing whether you're caching a valid response, or an HTTP 404 or some other error.
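Applied to a list of image URLs from an API response, that might look something like the following sketch (the cache name and URLs are illustrative):

const imageUrls = ['https://third-party.example/a.jpg', 'https://third-party.example/b.jpg'];

caches.open('api-images').then((cache) =>
  Promise.all(
    imageUrls.map((url) => {
      const request = new Request(url, { mode: 'no-cors' });
      // cache.addAll() rejects on opaque responses, but fetch + cache.put() works.
      return fetch(request).then((response) => cache.put(request, response));
    })
  )
);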
I'm building an app with Express and using Passport's Facebook login.
The example application is:
https://github.com/passport/express-4.x-facebook-example/blob/master/server.js
And from it, it has come to my attention that I can skip the const/var = require(...) format and require modules directly if I never have to reference them again. E.g. this:
const createError = require('http-errors'),
      session = require('cookie-session');
...
app.use(session({ secret: process.env.cookie_secret, resave: true, saveUninitialized: true }));
app.use(function(req, res, next) {
  next(createError(404));
});
becomes
app.use(require('cookie-session')({ secret: process.env.cookie_secret, resave: true, saveUninitialized: true }));
app.use(function(req, res, next) {
  next(require('http-errors')(404));
});
This works; great, my file is half its length now. But I'm worried about the performance implications of this.
require() is a synchronous operation and blocks the event loop. As such, you generally do not want to ever be doing the first require() of a module in the middle of an actual request handler in a server because that will momentarily block the event loop.
Now, since modules are cached, only the first require() of a given module actually takes significant time. Nevertheless, it is considered good coding practice to load your dependencies at startup, when synchronous I/O is no big deal, rather than at run-time.
If there were any problems with loading dependencies, you probably also want those to be discovered at server startup time, not once the server is already serving customers.
So, I think the answer to your question is yes and no. Yes, it's fine to require() directly without assigning to variables in your startup code. No, it's not recommended to do so inside a request handler or middleware; it's better to load your dependencies at startup. No great harm comes to your code if you happen to require() inside a request handler, because only the first call actually loads the module from disk, but as a general practice it's not the recommended way of coding just to save a variable name somewhere.
Personally, I'd also like to know that once my server has started up, all dependencies have been successfully loaded too, so there is no danger of an imperfect install being discovered later, after it starts serving requests (where it may be less obvious what went wrong and where users would see the consequences).
Here's one other thing to consider. JavaScript is moving from require() to import over time, and a static import can only appear at the top level of a module; you can't use it inside a statement or function body.
Summary:
You want to load dependencies at startup so you don't block the event loop during actual processing of requests.
You want to load dependencies at startup so you discover any missing dependencies at server startup and not during server run-time.
Code is generally considered more reader-friendly if dependencies are obvious and easy to see for anyone who works on this module.
In the future when we are all using import instead of require(), a static import is only allowed at the top level of a module (see the sketch below).
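To illustrate that last point: with ES modules the equivalent top-level form is the only option, because a static import inside a handler is a syntax error (module names taken from the question; a sketch only):

import express from 'express';
import createError from 'http-errors';
import session from 'cookie-session';

const app = express();
app.use(session({ secret: process.env.cookie_secret }));
app.use((req, res, next) => {
  // import createError from 'http-errors'; // SyntaxError: import is only valid at the top level
  next(createError(404));
});
app.listen(3000);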
I was learning GraphQL and about to finish the tutorial when I ran into something that had never happened before.
The problem is that the GraphQL server keeps receiving requests after opening GraphQL Playground in the browser even though no query or mutation is made.
I see this sort of response being returned by the server:
{
"name":"deprecated",
"description":"Marks an element of a GraphQL schema as no longer supported.",
"locations":[
"FIELD_DEFINITION",
"ENUM_VALUE"
],
"args":[
{
"name":"reason",
"description":"Explains why this element was deprecated, usually also including a suggestion for how to access supported similar data. Formatted using the Markdown syntax (as specified by [CommonMark](https://commonmark.org/).",
"type":{
"kind":"SCALAR",
"name":"String",
"ofType":null
},
"defaultValue":"\"No longer supported\""
}
]
}
This is expected behavior.
GraphQL Playground issues an introspection query to your server. It uses the result of that query to provide validation and autocompletion for your queries. Playground will send that query to your server repeatedly (every 2 seconds by default) so that if your schema changes, these changes can be immediately reflected in the UI (although there's an issue with this feature at the moment).
You can adjust the relevant settings (click on the settings icon in the top right corner of the Playground UI) to either change the polling frequency or turn it off entirely:
'schema.polling.enable': true, // enables automatic schema polling
'schema.polling.endpointFilter': '*localhost*', // endpoint filter for schema polling
'schema.polling.interval': 2000, // schema polling interval in ms
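For example, to stop the background introspection requests entirely, you would just flip the first flag:
'schema.polling.enable': false,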
However, the behavior you're seeing is related only to Playground, so it's harmless and won't impact any other clients connecting to your server.
I have the following situation, where the "headers already sent" error occurs when sending multiple responses from the server to the client for a single AJAX request.
It is something I expected, since I opted to go with AJAX instead of sockets. Is there another way to exchange data between the server and the client, like using browserify to translate an emitter script for the client? I suppose I can't escape sockets, so I would also take advice on a simpler library, as socket.io seems too complex for such a small operation.
Update: Here is the Node.js code, as requested.
var maxRunning = 1;
var test_de_rf = ['rennen', 'ausgehen'];

// `res` is the response object of the enclosing route handler (not shown here).
function callHandler(word, cb) {
    console.log("word is - " + word);
    gender.gender_function_rf(word, function (result_rf) {
        console.log(result_rf);
        res.send(result_rf); // Here I send data back to the ajax call
        setTimeout(function () {
            cb(null);
        }, 3000);
    });
}

async.eachLimit(test_de_rf, maxRunning, function (item, done) {
    callHandler(item, function (err) {
        if (err) throw new Error(err);
        done();
    });
}, function (err) {
    if (err) throw new Error(err);
    console.log('done');
});
res.send() sends and finishes an HTTP response. You can only call it once per request, because after it is called the response is finished and done. It is a fairly high-level way of sending a response (it does it all at once in one call).
If you wanted several different functions to contribute to a response, you could use the lower-level methods on the response object, such as res.setHeader(), res.writeHead(), res.write() (which you can call multiple times) and res.end() (which indicates the end of the response).
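As a rough sketch (an illustrative route, not your actual code), that could look like:

const express = require('express');
const app = express();

app.get('/stream', function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.write('first result\n');  // res.write() may be called as many times as needed
  res.write('second result\n');
  res.end('done\n');            // finishes the response
});

app.listen(3000);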
You can use the standard WebSocket API in the browser and a webSocket module such as ws for server-side support, or you can use socket.io, which offers both client and server support plus a number of higher-level features (such as automatic reconnection, automatic fallback to HTTP polling if webSockets are not supported, etc.).
All that said, if what you really want is the ability to send data from server to client whenever you want, then a webSocket really is the better way to go. It is a persistent connection, it is supported by all modern browsers, and it allows the server to send data to the client unsolicited at any time. I'd hardly say socket.io is complex. The documentation isn't particularly great at explaining things (not uncommon in the open-source world, as the node.js documentation isn't particularly great either), but I've always been able to figure out the advanced things by looking at a few runtime data structures in the debugger and/or at the source code.
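As a sketch of that server-push model using the ws package (names and message contents are illustrative):

// server.js
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // The server can now push as many messages as it wants, whenever it wants.
  socket.send(JSON.stringify({ word: 'rennen', result: 'first result' }));
  socket.send(JSON.stringify({ word: 'ausgehen', result: 'second result' }));
});

// client (browser)
const socket = new WebSocket('ws://localhost:8080');
socket.onmessage = (event) => console.log(JSON.parse(event.data));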