How can I cache queries with Parse Server?

I have an iOS app that is using Parse Server, and I noticed that a lot of my queries are made on tables that are not changing often.
I would like to know if it's possible to cache some of these requests with Parse Server (refreshing, for instance, once a day) in order to limit the resources used and improve capacity.
Thanks for your help.
Cyril

For now we don't provide caching mechanisms, but you could implement one through a reverse proxy or another strategy in front of your parse-server.
For example, you can configure nginx to cache requests and serve them before they hit your parse-server installation:
https://www.nginx.com/resources/wiki/start/topics/examples/reverseproxycachingexample/
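
For illustration, a minimal nginx cache in front of parse-server might look like this (assuming parse-server listens on 127.0.0.1:1337 and is mounted at /parse; the cache path, zone name, hostname, and the one-day TTL are placeholder values):

    # nginx.conf (excerpt) -- values below are assumptions, tune to taste
    proxy_cache_path /var/cache/nginx/parse levels=1:2
                     keys_zone=parse_cache:10m max_size=1g inactive=24h;

    server {
        listen 80;
        server_name api.example.com;

        location /parse/ {
            proxy_pass http://127.0.0.1:1337;
            proxy_cache parse_cache;
            proxy_cache_valid 200 24h;     # serve cached hits for a day
            proxy_cache_methods GET HEAD;  # never cache writes
            add_header X-Cache-Status $upstream_cache_status;
        }
    }

One caveat: some Parse client SDKs tunnel queries over POST, which nginx will not cache with this configuration; issuing those queries as plain REST GETs keeps them cacheable.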

Related

How to cache data in next.js server on vercel?

I'm trying to build a small site that gets its data from a database (currently I use Firebase's Cloud Firestore).
I've built it using next.js and thought to host it on vercel. It looks very nice and was working well.
However, the site needs to handle ~1000 small documents: serve, search, and rarely update them. To reduce the calls to the database on every request, which are costly both in time and in database pricing, I thought it would be better if the server fetched the full list of items when it starts (or on the first request), held them in memory, and served data requests from memory.
This worked well on the local dev server, but when I deployed it to vercel it didn't work. Vercel seems to force serverless mode, where each request may be handled by a separate instance, so I can't use a common in-memory cache to hold the data.
Am I missing something and there is a way to achieve something like that with next.js on vercel?
If not, can you recommend other free cloud services that can provide what I'm looking for?
One option can be using FaunaDB and Netlify, as described in this post, but I ended up opening a free Wix site and using Wix data to store the data. I built an http-functions module to provide access to the data via REST, which also caches highly used data in memory. Currently it seems to work like a charm!
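
For reference, the module-scope cache the question describes looks roughly like this (a sketch only; all names are made up, and on a serverless platform each warm instance keeps its own private copy, so it is a best-effort cache rather than a shared one):

    // lib/docCache.ts -- hypothetical in-memory cache for a Next.js API route
    type Doc = { id: string; [key: string]: unknown };

    let cache: { docs: Doc[]; loadedAt: number } | null = null;
    const TTL_MS = 60 * 60 * 1000; // refresh at most once an hour

    export async function getDocs(fetchAll: () => Promise<Doc[]>): Promise<Doc[]> {
      // Serve from the module-scope copy while this instance is warm and fresh
      if (cache && Date.now() - cache.loadedAt < TTL_MS) {
        return cache.docs;
      }
      const docs = await fetchAll(); // e.g. one full Firestore collection read
      cache = { docs, loadedAt: Date.now() };
      return docs;
    }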

What technology to use to avoid too many VMs

I have a small web and mobile application partly running on a webserver written in PHP (Symfony). I have a few clients using the application, and slowly expanding to more clients.
My back-end architecture looks like this at the moment:
Database is Cloud SQL running on GCP (every client has its own database instance)
Files are stored on Cloud Storage (GCP) or S3 (AWS), depending on the client (every client has its own bucket)
The PHP application runs in a Compute Engine VM (GCP) (every client has its own VM)
Now the thing is, in the PHP code the only client-specific thing is a settings file with the database credentials and the Storage/S3 keys in it. All the other code is exactly the same for every client. And mostly the different VMs sit idle all day, waiting on a few hours of usage per client.
I'm trying to find a way to avoid having to create and maintain a VM for every customer. How could I rearchitect my back-end so I can keep separate databases and storage buckets per client, but only scale up my VMs when capacity is needed?
I'm hearing a lot about Docker, was thinking about keeping db credentials and keys in a Redis DB or Cloud Datastore, was looking at Heroku, AppEngine, Elastic Beanstalk, ...
This is my ideal scenario as I see it now:
An incoming request comes in and hits a load balancer
From the request, determine which client the request is for
Find the correct settings file, or credentials from a DB
Inject the settings file in an unused "container"
Handle the request
Make the container idle again
And somewhere in there, determine based on the amount of incoming requests or traffic whether I need to spin up or spin down containers to handle the extra or reduced (temporary) load.
All this information overload has me stuck. I have no idea what direction to choose, and I fail to see how implementing any of the above technologies will actually fix my problem.
There are several ways to do it with minimal effort:
Rewrite the config-file loading so it depends on the customer
Host several back-end web sites on one VM (the best choice, I think; see the sketch below)
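
As a sketch of how both suggestions combine, a single nginx front end could map each client's hostname to a tenant identifier and pass it to one shared PHP back end, which then loads that client's database credentials and storage keys (hostnames, the header name, and the upstream port are all hypothetical):

    # nginx.conf (excerpt) -- names below are made up
    map $host $tenant {
        client-a.example.com  client_a;
        client-b.example.com  client_b;
        default               "";
    }

    server {
        listen 80;
        server_name *.example.com;

        location / {
            # The shared PHP app reads X-Tenant and selects the matching
            # settings file (database credentials, Storage/S3 keys).
            proxy_set_header X-Tenant $tenant;
            proxy_pass http://127.0.0.1:8080;
        }
    }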

Can a person add CORS headers using the ELB Application Load Balancer (sitting in front of Solr)?

We have a number of EC2 instances running Solr in EC2, which we've used in the past through another application. We would like to move towards allowing users (via web browser) to directly access Solr.
Without something "in front" of Solr this results in a security risk, so we have opted to try ELB (specifically the Application Load Balancer) as a simple and maintenance-free way of preventing certain requests from hitting Solr (i.e. preventing the public from DELETING or otherwise modifying the documents in Solr).
This worked great, but we realized that we need to deal with the CORS issue. In other words, we need to add the appropriate headers to requests that come in from a browser. I have not yet seen a way of doing this with the Application Load Balancer, but am wondering if it is possible somehow. If it is not possible, I would love, as an additional recommendation, the easiest and least complicated way of adding these headers. We really really really hate to add something like nginx in front of Solr, because then we've got additional redundancy to deal with, more servers, etc.
Thank you!
There is not much I can find on CORS for ALB either, and I remember that when I used Beanstalk with ELB I had to add CORS support directly in my Java application.
Having said that, I can find a lot of articles on how to set up CORS for Solr.
Can it be an option for you?
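
For what it's worth, the approach those articles usually describe is enabling the CrossOriginFilter in the Jetty that ships with Solr, roughly like this (the exact web.xml location varies by Solr version, so treat the path and values as assumptions):

    <!-- server/solr-webapp/webapp/WEB-INF/web.xml (excerpt; path may differ) -->
    <filter>
      <filter-name>cross-origin</filter-name>
      <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
      <init-param>
        <param-name>allowedOrigins</param-name>
        <param-value>https://your-app.example.com</param-value>
      </init-param>
      <init-param>
        <param-name>allowedMethods</param-name>
        <param-value>GET,HEAD,OPTIONS</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>cross-origin</filter-name>
      <url-pattern>/*</url-pattern>
    </filter-mapping>

This keeps the header logic on the Solr side, so the ALB stays a pass-through.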

Server-side caching of dynamic content with Nginx and ETags

I have a CouchDB DB with an Nginx reverse proxy in front of it. Some responses from CouchDB take a long time to generate (yes, it was a bad choice, but I need to stick with it for now), and I would like to cache them with Nginx. (Currently Nginx only does SSL.)
CouchDB supports ETags, so ideally I would like Nginx to cache the ETags as well, on behalf of dumb clients. The clients do not use ETags; they would just query Nginx, which checks CouchDB with its cached ETag and then sends the client either the cached response or the new one.
My understanding based on the docs is that Nginx cannot do this currently. Have I missed something? Is there an alternative that supports this setup? Or is the only solution to invalidate the Nginx cache by hand?
I am assuming that you already looked at varnish and did not find it suitable for your case. There are two ways you can achieve what you want.
With nginx
Nginx has a default caching mechanism that you can configure for your use.
If that does not help, you should give Nginx compiled with the 3rd-party Ngx_Lua module a try. This is also conveniently packaged, along with other useful modules and the required Lua environment, as OpenResty.
With Ngx_Lua, you can use the shared dictionary to cache your couchdb responses. As the name suggests, a shared dictionary uses a shared memory zone in Ngx_Lua's execution environment. This is similar to the way proxy_cache works in Nginx (which also defines a shared memory zone in Nginx's execution environment), but comes with the added advantage that you can program it.
The steps required to build a couchdb cache are pretty simple (with this approach you don't need to send ETags to the client); a sketch follows the list below:
You make a request to couchdb.
You save the {Url-Etag: response} pair.
The next time a request comes in for the same URL, query couchdb for the ETag using a HEAD request.
If the response ETag matches the saved {Url-Etag: response} pair, send the cached response; otherwise query couchdb again (GET/POST) and update the {Url-Etag: response} pair before sending the response to the client.
Of course, if you program a cache by hand you will have to define a maximum cache size and a mechanism to remove old items from the cache. The lua_shared_dict directive lets you define the memory size within which responses will be cached. When saving values in the shared dictionary you can specify how long a value will remain in the memory zone, after which it automatically expires. Combining the shared dictionary's max cache size and cache time parameters, you should be able to program a fairly complex caching mechanism for your users.
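
A minimal sketch of those steps with Ngx_Lua (assuming CouchDB on 127.0.0.1:5984, a lua_shared_dict named couch_cache, and an internal /couch/ location; zone size and TTL are placeholder values, and error handling is omitted):

    # nginx.conf (excerpt)
    lua_shared_dict couch_cache 50m;

    server {
        listen 80;

        location /db/ {
            content_by_lua_block {
                local dict = ngx.shared.couch_cache
                local key  = ngx.var.uri

                -- Step 3: HEAD request to CouchDB for the current ETag
                local head = ngx.location.capture("/couch" .. key,
                                                  { method = ngx.HTTP_HEAD })
                local etag = head.header["ETag"] or ""

                -- Step 4: serve the cached body while the ETag still matches
                local cached = dict:get(key .. etag)
                if cached then
                    ngx.say(cached)
                    return
                end

                -- Otherwise fetch the full response and cache it for a day
                local res = ngx.location.capture("/couch" .. key)
                dict:set(key .. etag, res.body, 86400)
                ngx.say(res.body)
            }
        }

        location /couch/ {
            internal;
            proxy_pass http://127.0.0.1:5984/;
        }
    }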
With erlang
Since couchdb is written in erlang, you already have an erlang environment on your machine. So if you can program in it, you can create a very robust distributed cache with mnesia. The steps are the same. Erlang timers can be combined with gen_* behaviours to give you automatic expiry of items, and mnesia has functions to monitor its memory usage and notify you about it. The two approaches are almost equivalent, the only difference being that mnesia can be distributed.
Update
As @abyz suggested, Redis is also a good choice when it comes to caching.

Which schemaless datastores provide good performance?

I've recently written a web app that uses couchdb. I like couchdb and it suited the app, which has a lot of dynamic behaviour and simply pulls JSON directly from couchdb. Being able to upload images via a browser is nice, and it's a snap to tweak document data. Replication has also made deployment a breeze: as the app is a couchapp, all that's required to deploy is to replicate to the production server.
However, for a new app I'm thinking of (a blog-type thingy), I want good performance, and that's one area where I think couchdb is not strong. The app will be predominantly read-oriented (I'm estimating 90% reads to 10% writes).
Which datastores provide the best performance in a single server scenario? I'd be very interested to hear people's experiences in this...
I think MongoDB is beginning to look like the front runner performance wise for schemaless data stores.
We're currently in the process of evaluating it for storing binary objects that can range from 10KB to 50MB, and I've been very impressed with its performance even on modest hardware.
If it is primarily read performance you are worried about, why not just put a Varnish proxy in front of couchdb? I use a couple of custom configurations in Varnish to tell it not to actually query couchdb for cached objects, despite couchdb specifying must-revalidate, and then have a script doing an active HTTP GET on _changes that uses the data from _changes to explicitly purge changed entries from Varnish.
As a plus, Varnish lets you do URL rewriting, which I need. Most of the other solutions for that involve running something like apache or nginx just to rewrite URLs for couchdb.
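
A sketch of that kind of Varnish configuration (Varnish 4+ VCL; the one-hour TTL and the purge ACL are assumptions, and the _changes-watching script lives outside this file):

    # default.vcl (excerpt) -- illustrative only
    vcl 4.0;

    backend couchdb {
        .host = "127.0.0.1";
        .port = "5984";
    }

    acl purgers { "127.0.0.1"; }  # only the _changes watcher may purge

    sub vcl_recv {
        # The script following CouchDB's _changes feed sends a PURGE
        # request for every document that has changed.
        if (req.method == "PURGE") {
            if (!client.ip ~ purgers) {
                return (synth(405, "Not allowed"));
            }
            return (purge);
        }
    }

    sub vcl_backend_response {
        # Override CouchDB's revalidation hint and keep responses cached.
        set beresp.ttl = 1h;
    }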
