I'm considering using Falcor in an app project I'm currently working on. I've started reading the docs, but there's still one issue that isn't entirely clear to me.
Consider this example.
At time zero, client A performs a request to a Falcor model, which in turn retrieves the needed data from a server DataSource and stores it in the client's cache.
At time one the same server data is changed by operations performed by client B.
At time two client A performs the same request to the Falcor model, which finds a cached value and serves the now outdated data.
Is there any way to notify client A after time one that its Falcor cache for that data is outdated, so that it performs a new request to the server DataSource instead?
You can use web sockets to send messages to the client. On the client you can call invalidate to manually invalidate the cache. You can also set an expires time on values to cause them to expire after a certain amount of time.
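For example, here is a minimal sketch of WebSocket-driven invalidation on client A, assuming the standard falcor Model invalidate method; the endpoint, message shape, and paths are illustrative:

var model = new falcor.Model({
  source: new falcor.HttpDataSource('/model.json')
});

// Client A listens for invalidation messages pushed by the server
// whenever another client (e.g. client B) changes the data.
var socket = new WebSocket('wss://example.com/invalidations');
socket.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  // Drop the stale path from the local cache; the next get() for that
  // path will go back to the server DataSource.
  model.invalidate(msg.path); // e.g. 'usersById[32].name'
};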
I am trying to implement different cache strategies using a Service Worker. For the following strategies, the way to implement them is completely clear:
Cache first
Cache only
Network first
Network only
For example, to implement the cache-first strategy, in the fetch handler of the service worker I first ask the CacheStorage (or any other cache) for the requested URL; if it exists I respondWith it, and if not I respondWith the result of a network request.
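Roughly like this (a minimal sketch; error handling omitted):

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      // Serve from the cache if we have it, otherwise fall back to the network.
      return cached || fetch(event.request);
    })
  );
});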
But for the stale-while-revalidate strategy, according to the Workbox definition, I have the following questions:
First, about the mechanism itself: does stale-while-revalidate mean "use the cache until the network responds, and then use the network data", or "only use the network response to renew the cache data for the next time"?
If the network response is only cached for the next time, what scenarios are a real use case for that?
And if the network response should replace the data in the app immediately, how could that be done in a service worker? The fetch handler will already have resolved with the cached data, so the network data can no longer be returned via respondWith.
Yes, it means exactly that. The idea is simple: respond immediately from the cache, then refresh the cache in the background for the next time.
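Hand-rolled, the idea looks roughly like this (Workbox implements this for you; the cache name here is just an example):

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('swr-cache').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        // Always start the network request and use it to refresh the cache.
        var network = fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone());
          return response;
        });
        // Answer with the cached copy if there is one, otherwise wait for the network.
        return cached || network;
      });
    })
  );
});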
All scenarios where it is not important to always get the very latest version of the page/app =) I'm using the stale-while-revalidate strategy on two different web applications, one for public transportation services and one for displaying restaurant menu information. Many sites/apps are just fine with this, but of course not all.
One very important thing to note here regarding #2:
You could, e.g., use stale-while-revalidate only for static assets. This way your HTML, JS, CSS, images etc. would be cached and quickly served to the user, but the data fetched dynamically from an API could still be fresh. For some apps this works, for others not so well. It depends completely on the app. Of course you have to remember not to change the semantics of your API if the user is running a previous version of the app, etc.
Not possible in any automatic way. What you could do, however, is implement a message channel between the Service Worker and the "regular JS code on the page" using the postMessage API. You could listen for certain messages on the page and then, from the Service Worker, send a message when an important change has happened and the cache has been updated. Then you could either show the user a prompt telling them that the page really needs to be reloaded right now, or even force a reload from JS. You would need to put the logic for determining when an important update has happened into the Service Worker, of course.
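A rough sketch of that messaging (the message shape is made up for illustration):

// In the Service Worker, after an important cache update:
self.clients.matchAll().then(function (clients) {
  clients.forEach(function (client) {
    client.postMessage({ type: 'cache-updated' });
  });
});

// On the page:
navigator.serviceWorker.addEventListener('message', function (event) {
  if (event.data && event.data.type === 'cache-updated') {
    // Show a "please reload" prompt, or force it:
    // window.location.reload();
  }
});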
Oh, the joyous question of HTTP vs WebSockets is at it again. However, even after quite a bit of reading through the hundreds of versus blog posts, SO questions, etc., I'm still at a complete loss as to what I should be working towards for our application. In this post I will supply information on the application's functionality and the types of requests/responses used in our application currently.
Currently our application is a sloppy piece of work, thrown together using AngularJS and AJAX requests to an Apache server running PHP, namely XAMPP. With the launch of our application I've noticed that we're having problems with response times when the server is under any kind of load. This probably has something to do with the sloppy architecture of our server, the hardware, and the fact that our MySQL database isn't exactly optimized.
However, with such a loyal fanbase, and investors seeing potential in our application and giving us a chance to roll out a 2.0, I've been studying hard into how to turn this application into a powerhouse of low-latency scalability. Honestly the best option would be to hire someone with experience, but unfortunately I'm a hobbyist, and a one-man army without much experience.
After some extensive research, I've decided on writing the backend using Node.js this time. However, I'm having a hard time deciding between HTTP and WebSockets. Here are the types of transactions that are done between the server and the client.
Client sends a request to the server in JSON format. The request has a few different things.
A request id (For processing logic based on the request)
The data associated with the request ID.
The server receives the request, queries the database (if necessary) and then responds to the client in JSON format. Sometimes the server serves files to the client, namely images in Base64 format.
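To make that concrete, a request/response pair looks roughly like this (simplified, illustrative field names):

// Client -> server
{ "requestId": "GET_PROFILE", "data": { "userId": 42 } }

// Server -> client
{ "requestId": "GET_PROFILE", "status": "ok", "data": { "name": "Alice", "avatar": "<base64 image data>" } }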
Currently the application (when being used) sends a request to the server every time an interface is changed, which on average for our application is once every few seconds. Every action on our interfaces sends another request to the server. The application also sends requests to check for notifications/messages every 8 seconds (or every two seconds if they're on the messaging interface).
Currently, here are the benefits I see of a stateful connection over a stateless connection for our application.
If the connection is stateful, I can eliminate the requests for notifications and messages, as the server can just tell the client whenever one becomes available. This can eliminate x(n)/4 requests per second to the server alone.
Handling something like a disconnection from the server is as simple as attempting to reconnect, as opposed to handling timeouts/errors per request; it would only need to be handled on the socket.
Additional security can be obtained by removing security keys for database interaction; this should prevent the possibility of hijacking a session_key and using it to manipulate or access another user's data. The session_key is only needed because there is no state in the AJAX setup.
However, I'm someone who started learning programming through TCP game server emulation. So I understand some benefits of a STATEFUL connection, while I don't understand the benefits of a STATELESS connection very much at all. I know they both have their benefits and quirks, but I'm curious what would be the best approach for us.
We're mainly looking for Scalability, as we had a local application launch and managed to bottleneck at nearly 10,000 users in under 48 hours. Luckily I announced this as a BETA and the users are cutting me a lot of slack after learning that I did it all on my own as a learning project. I've disabled registrations while looking into improving the application's front and backend.
IMPORTANT:
If we use WebSockets, would we be able to asynchronously download pictures from the server like we can with AJAX? For example, with AJAX I can make 5 requests to the server for 5 different images and they will all start downloading immediately. With a stateful connection, would I have to wait for each photo to be streamed before moving on to the next request? Would this only bottleneck a single user, or every user that is waiting on a request to be completed?
It all boils down to how your application works and how it needs to scale. I would use bare WebSockets rather than any wrapper, since it is already an easy-to-use API and your hands won't be tied when you need to scale out.
Here are some links that will give you insight, although not concrete answers to your questions, because as I said, it depends on your expectations.
Hard downsides of long polling?
WebSocket/REST: Client connections?
Websockets, and identifying unique peers[PHP]
How HTML5 Web Sockets Interact With Proxy Servers
If your question is "Should I use HTTP over WebSockets?", the response is: you should not.
Even if it is faster because you don't lose time opening a connection, you also lose everything the HTTP specification gives you: verbs (GET, POST, PATCH, PUT, ...), paths, bodies, and also responses and status codes. This seems simple, but you would have to re-implement all or part of these protocol features yourself.
So you should use Ajax, as long as it is a one-off request.
When you need to make an Ajax request every 2 seconds, what you actually need is for the server to send you data, not for YOU to ask the server whether something has changed. That is a sign you should implement a WebSocket server.
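A minimal sketch of that push model using the ws package (the package choice, event name, and message shape are just examples):

var WebSocket = require('ws');
var EventEmitter = require('events').EventEmitter;

var notifications = new EventEmitter(); // fired whenever server-side data changes
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (socket) {
  // Instead of the client polling every 2 seconds, the server pushes
  // data to the client the moment something changes.
  var onChange = function (payload) {
    socket.send(JSON.stringify(payload));
  };
  notifications.on('change', onChange);
  socket.on('close', function () {
    notifications.removeListener('change', onChange);
  });
});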
I'm about to set up suggestion search for ElasticSearch with the NEST client. Ideally I'd start matching as of the 2nd character entered. However, it takes 600ms the first time I call the client. Every subsequent call is more like 20ms. Is there a way to cache or prepare the NEST client?
I've read this post: Elasticsearch and .NET
I've also read that I can either create a new client or use the same instance of the client with no repercussions.
I just want to get the client ready for use before I call it so the user isn't waiting for the client to validate itself.
For the moment I'm making a connection to the client as soon as the user hits the website, then saving the client reference in the session. However, the first search is still slow even though I've already established the connection. Is there a way to preload/cache the connection so the delay occurs during page load?
The cache that is built up on the first hit is per AppDomain, so you do not need to cache the client itself. Every client you instantiate after the first hit is going to be warm.
I've opened a ticket so you'll be able to initiate the warmup process at application startup, so you're no longer penalizing the first user of your system with the warmup cost.
https://github.com/elasticsearch/elasticsearch-net/issues/742
thanks everyone!
Recently I wanted to build a small CMS on Meteor, but I have some questions.
1. Caching (page cache, data cache, etc.)
For example, when people search for an article, on the server side:
Meteor.publish('articles', function (keyword) {
  return Articles.find({ keyword: keyword });
});
And on the client:
Meteor.subscribe('articles', keyword);
That's OK, but...
The question is, every time people do this it invokes a Mongo query, which reduces performance.
In other frameworks that use plain HTTP or HTTPS, people can rely on something like Squid or Varnish to cache the page or data, so every time you route to a URL you read from the cache server. But Meteor is built on socket.js/WebSockets, and I don't know how to cache through the socket. I tried Varnish, but saw no effect.
So maybe it ignores the WebSocket? Is there some way to cache the data, in MongoDB or on the server? Can I add a cache server?
2. Chat
I saw the chatroom example at https://github.com/zquestz/simplechat
But unlike an implementation using socket.js, this example saves the chat messages in MongoDB, so the data flow is message -> Mongo -> query -> people; this invokes a Mongo query too!
With socket.js you just keep the socket in the context (or a server-side cache), so the data doesn't go through the DB.
My question is: is there a socket interface in Meteor, so I can do message -> socket -> people? And if not, how is the performance in a production environment for something like the chatroom example? (I see it runs slowly...)
With Meteor, you don't have to worry about caching MongoDB queries. Meteor does that for you. Per the docs on data and security:
Every Meteor client includes an in-memory database cache. To manage the client cache, the server publishes sets of JSON documents, and the client subscribes to those sets. As documents in a set change, the server patches each client's cache.
[...]
Once subscribed, the client uses its cache as a fast local database, dramatically simplifying client code. Reads never require a costly round trip to the server. And they're limited to the contents of the cache: a query for every document in a collection on a client will only return documents the server is publishing to that client.
Because Meteor does poll the server every so often to see if the client's cache needs patching, you're probably seeing those polls happening every now and then. But they probably aren't very large requests. Additionally, due to a feature of Meteor called latency compensation, when you update a data source the client immediately reflects the change without first waiting on the server. This reduces the perceived performance impact for the user.
If you have many documents in Mongo, you may also be seeing them all get fetched if you still have the autopublish package enabled. You can fix that by removing it with meteor remove autopublish and writing code to publish only the relevant data instead of the entire database.
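For example, a publication that sends only the documents and fields a list view actually needs might look like this (names and limits are illustrative):

Meteor.publish('articleList', function (keyword) {
  // Publish only matching articles, only the fields the list needs, capped at 50.
  return Articles.find(
    { keyword: keyword },
    { fields: { title: 1, summary: 1 }, limit: 50 }
  );
});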
If you really need to manage caching manually, the docs also go into that:
Sophisticated clients can turn subscriptions on and off to control how much data is kept in the cache and manage network traffic. When a subscription is turned off, all its documents are removed from the cache unless the same document is also provided by another active subscription.
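In practice that just means keeping the subscription handle around and stopping it when the data is no longer needed on screen:

var handle = Meteor.subscribe('articles', keyword);
// ...later, when the user navigates away from the article list:
handle.stop(); // documents from this subscription are removed from the client cache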
Additional performance improvements to Meteor are currently being worked on, including a DDP-level proxy to support "very large number of clients". You can see more detail on this at the Meteor roadmap.
If you stumbled upon this question not because of a lack of understanding of Meteor's minimongo, but because you're interested in how to cache subscriptions after they are no longer needed for the moment (they may be needed again in the future, and you don't want to keep their extra DDP overhead between client and server), there are two package options:
https://github.com/ccorcos/meteor-subs-cache
https://github.com/kadirahq/subs-manager
I was creating a mobile app and the database cache was not working, so I used the GroundDB package for Meteor (https://github.com/raix/Meteor-GroundDB); now the database is always available locally whenever I restart the app.
You should also look into Meteor's appcache package to cache the entire app locally.
Our site was recently divided into several smaller sites, which are distributed across different IDCs.
One of these sites provides user authentication and other user-related services; the other sites access it through web services.
On every site that fetches data remotely, we make a local cache so that we don't have to go remote every time user information is needed.
What cache updating strategy would you recommend to ensure data integrity?
Since you need the update policy to be close to real time, you definitely need a cache-invalidation notification mechanism.
There are 2 possible implementation models for it:
1. Push
The main server pushes notification messages to the child servers, like "resourceID=34392 is no longer valid in your cache".
This message should be sent on each data update on the main server.
2. Poll
Each child server asks the main server about the cache item's validity right before serving it to the user.
Of course, in this case, the main server should keep a list of the objects updated during the last cache-lifetime period, and respond to "was this object updated?" requests very quickly.
As you see in both cases, your main server should trigger an event on each data change.
In the first case this event will be transferred via a 'notification bus' to the child servers, and in the second case it will be stored in a recently-updated-objects list.
So both options need some code changes on main server.
As for me, the second option is usually much easier to implement, but it very much depends on the software stack you're using.
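As a rough sketch of how that poll option might look on a child site (all URLs, endpoints, and names are made up):

var cache = new Map();

async function getUser(userId) {
  var entry = cache.get(userId);
  if (entry) {
    // Ask the main server whether this object changed since we cached it.
    var check = await fetch('https://auth.example.com/users/' + userId +
                            '/changed-since/' + entry.fetchedAt);
    var result = await check.json();
    if (!result.changed) {
      return entry.value; // cache entry is still valid
    }
  }
  // Cache miss or invalidated: fetch a fresh copy from the main server and store it.
  var response = await fetch('https://auth.example.com/users/' + userId);
  var fresh = await response.json();
  cache.set(userId, { value: fresh, fetchedAt: Date.now() });
  return fresh;
}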