Difference between web content cache and application cache - Firefox

What's the difference between the web content cache and the application cache?
On my system, Firefox is using 400 MB of space for the web content cache.

Application cache refers to the mechanism by which a web application can store data on the server side. The actual store varies; it can be a database, in-memory, etc. This is usually done for performance reasons. For example, a call to get data from a database may take a considerable amount of time, and the data may not change often. Once the data is fetched initially, the developer may choose to put it in the app cache so it can be retrieved quickly from memory next time, instead of calling the DB again.
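As a minimal sketch of that pattern in PHP, assuming the APCu extension is available (the table name and cache key here are made up):

```php
<?php
// Cache a slow DB query in the application cache (APCu) for 10 minutes.
// 'products' and 'products.all' are hypothetical names.
function getProducts(PDO $db): array
{
    $cached = apcu_fetch('products.all', $hit);
    if ($hit) {
        return $cached; // served from memory, no DB call
    }

    $rows = $db->query('SELECT * FROM products')->fetchAll(PDO::FETCH_ASSOC);
    apcu_store('products.all', $rows, 600); // TTL in seconds

    return $rows;
}
```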
Browser cache refers to the data stored on the user's computer (client). Browsers, for example, may cache images, style sheets, etc. This depends on how the server responds to the browser's requests. For example, a server may send certain headers in the response indicating that a JavaScript file should be cached until it changes on the server. This way, browsers improve the user experience by not re-downloading the same data unnecessarily.
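On the server side, that can be as simple as a script emitting caching headers before the body. A sketch in PHP (the asset path is made up):

```php
<?php
// Let the browser reuse this response for a week, then revalidate by ETag.
// The asset path is hypothetical.
$path = __DIR__ . '/assets/app.js';
$etag = '"' . md5_file($path) . '"';

header('Content-Type: application/javascript');
header('Cache-Control: public, max-age=604800'); // one week
header('ETag: ' . $etag);

// If the browser already holds this exact version, skip the body entirely.
if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
    http_response_code(304); // Not Modified
    exit;
}

readfile($path);
```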

Related

What are the size limits for Laravel's file-based caching?

I am a new developer and am trying to implement Laravel's (5.1) caching facility to improve the speed of my app. I started out caching a large DB table that my app constantly references - but it got too large so I have backed away from that and am now 'forever' caching smaller chunks of data - for example, for each page only the portions of that large DB table that are relevant.
I have watched 'Caching Essentials' on Laracasts, done some Googling and had a search in this forum (and Laracasts') but I still have a couple of questions:
I am not totally clear on how the cache size limits work when you are using Laravel's file-based system - is there an overall in-app size limit for the cache or is one limited size-wise only per key and by your server size?
What are the signs you should switch from file-based caching to something like Memcached or Redis - and what are the benefits of using one of those services? Is it the fact that your caching is handled on a different server (thereby lightening the load on your own)? Do you switch over to one of these services when your local, file-based cache gets too big for your server?
My app utilizes several tables that have 3,000-4,000 rows - the data in these tables is constantly referenced and will remain static unless I decide to add new options. I am basically looking for the best way to speed up queries to the data in these tables.
Thanks!
I don't think Laravel imposes any limitations on its file I/O at all - the limitations will be with how much PHP can read / write to a file at once, or hold in its memory / process at any one time.
It does serialise the data that you cache, and unserialise it when you reload it, so your PHP environment would have to be able to process the entire cache file (which is equivalent to the top-level cache key) at once. So, if you are getting `cacheduser.firstname`, it would have to load the whole `cacheduser` key from the file, unserialise it, then get the `firstname` key from that.
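For illustration, with the file driver that looks something like this (the `cacheduser` key is from the example above; the data is made up):

```php
<?php
use Illuminate\Support\Facades\Cache;

// With the file driver, the whole value is serialised under one key.
Cache::put('cacheduser', [
    'firstname' => 'Jane',
    'lastname'  => 'Doe',
    // ...potentially much more data...
], 60); // lifetime in minutes (Laravel 5.1)

// Reading any part of it means loading and unserialising the whole entry.
$user      = Cache::get('cacheduser');
$firstname = $user['firstname'];
```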
I would take the PHP memory limit (classic, I know!) as the first point to investigate if you want to keep going down this road.
Caching services like Redis or memcached are bespoke, optimised caching solutions. They take some of the logic and responsibility out of your PHP environment.
They can, for example, retrieve sub-keys from items without having to process the whole thing, so they can retrieve part of some cached data in a memory-efficient way. So, when you request `cacheduser.firstname` from Redis, it just returns you the `firstname` attribute.
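To be precise, Laravel's Cache facade still stores serialised blobs even on Redis; the field-level access described above comes from using Redis data structures, such as hashes, directly. A sketch with the phpredis extension (key and field names are made up):

```php
<?php
// Store user fields in a Redis hash via the phpredis extension.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->hSet('cacheduser', 'firstname', 'Jane');
$redis->hSet('cacheduser', 'lastname', 'Doe');

// A single field can be fetched without loading the rest of the hash.
$firstname = $redis->hGet('cacheduser', 'firstname');
```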
These services also have advantages regarding tagging / clearing out subsets of caches (see [the cache tags Laravel docs](https://laravel.com/docs/5.4/cache#cache-tags)).
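From the linked docs, tagged caching looks roughly like this (tag and key names are made up; note that the file driver does not support tags, which is another argument for Redis or Memcached):

```php
<?php
use Illuminate\Support\Facades\Cache;

$colours = ['red', 'green'];
$sizes   = ['S', 'M', 'L'];

// Tag related entries together ('options' is a hypothetical tag).
Cache::tags(['options'])->put('options.colours', $colours, 60);
Cache::tags(['options'])->put('options.sizes', $sizes, 60);

// Later, clear just that subset without touching the rest of the cache.
Cache::tags(['options'])->flush();
```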
Another thing to think about is scaling. If your site is large enough, and is load-balanced across multiple servers, the filesystem caching may differ across those servers, as each server can only check its local filesystem for the cache files. A caching service can live on a different server (many hosts offer separate Redis / Memcached services), so it isn't subject to this issue.
Also - as I understand it (and this might be the most important thing), the file cache driver in Laravel is mainly for local development and testing. Although it can work fine for simple applications with basic caching needs, it's not intended for large scalable production environments.
Personally, I develop locally and test with file caching, as I'm only dealing with small amounts of data then, and use Redis to cache on production environments.
It doesn't necessarily need to be on a separate server to get the benefits. If you are never going to scale to multiple application servers, then using a caching service on the same server will already be a large improvement for caching large documents.

If I am using caching on my website, where are cached files saved?

I have this small technical question about caching...
I am planning to use caching for my website, and I was wondering whether the cached files are saved on visitors' personal computers?
I asked somebody, who told me that they are saved as HTML files, and that these are not on visitors' personal PCs.
Regards
That depends on what you mean by cache. Most sites use caching to save server resources by reducing hits to the database, not having to re-generate dynamic content on every request. On the other hand, browsers will cache JavaScript and CSS files from websites on the local computer as part of their normal process. Cookies 'cache' important information specific to that computer / user and are also stored by the browser locally.
I am assuming that you're talking about pages on a server, reused for multiple requests. Those can be stored as tmp files or as entries in a database on the server (CakePHP and CSP come to mind here). It really depends on your configuration and what you decide you want to do.
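A bare-bones version of that idea in PHP, writing rendered pages to temporary files (the paths, lifetime, and `renderPage()` stand-in are all illustrative):

```php
<?php
// Bare-bones full-page cache to a tmp file (illustrative only).
function renderPage(): string
{
    return '<html>...</html>'; // stand-in for the normal dynamic generation
}

$cacheFile = sys_get_temp_dir() . '/page_' . md5($_SERVER['REQUEST_URI']) . '.html';
$maxAge    = 3600; // one hour

// Serve the stored copy if it is fresh enough; no re-generation needed.
if (is_file($cacheFile) && time() - filemtime($cacheFile) < $maxAge) {
    readfile($cacheFile);
    exit;
}

// Otherwise render the page and save it server-side for the next visitor.
$html = renderPage();
file_put_contents($cacheFile, $html);
echo $html;
```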

Storing assets client-side permanently (or for an extended period of time)

Consider an HTML5 game, rather heavy on assets: is it possible to somehow provide the user with an option to store the assets locally, in order to avoid loading all those assets again each time they load the game?
Yes, there are several options:
Web Storage (localStorage/sessionStorage) can be used to store strings (or stringified objects). It has limited storage capacity but is very easy to use.
Indexed DB is a light database which allows you to store any kind of object, including BLOBs. It has a default limit (typically 5 MB) but has an interface that allows you to request more storage space.
Web SQL is also a database; although deprecated, it still has good support in, for example, Safari (which does not support Indexed DB), and works by executing short SQL queries.
File system API is in the works but not widely supported (only Chrome for now). As with Indexed DB, you can request larger storage space - in fact, very large in this case. It's a pseudo file system which allows you to store any kind of data.
And finally there is the option of the application cache using manifest files and off-line storage. You can download the assets and define them using manifest files, which makes them available to the app without having to consult the server.
There are legacy mechanisms such as userData in IE, and of course cookies, which probably have very limited use here and have downsides such as being sent back and forth to the server with every page request.
In general, I would recommend Web Storage if the amount of data is low, or Indexed DB (Web SQL in browsers which do not support Indexed DB) for larger data. File system is cool but has little support as of yet.
Note: There is no guarantee the data will be stored on the client permanently (the user can choose, directly or indirectly, to clear stored data), so this must be taken into consideration.

Caching dynamic data that isn't really dynamic in an IIS7 environment

Okay, so I have an old ASP Classic website. I've determined I can reduce a huge number of DB calls by caching the data daily. Our site data is read only, and changes very slowly. I think based on our site usage, I would be able to cache pages by query string for every visit each day, without a hit to our server.
My first thought was to use Output Caching, but the problem I discovered right away was that it wasn't until the third page request was generated that I gained any performance. I verified this using SQL profiler, but I'm not sure why.
My second thought was to add this ObjPageCache include file: https://web.archive.org/web/20211020131054/https://www.4guysfromrolla.com/webtech/032002-1.shtml
After some research, I discovered that this could cause more issues than it may solve: http://support.microsoft.com/kb/316451
I'm hoping someone on here will tell me that, since 2002, the issue with Sending ServerXMLHTTP or WinHTTP Requests to the Same Server has been resolved by Microsoft.
Depending on how your data is maintained you could choose from a number of ways to cache it.
If your data is changed and saved in one single place, you could choose to generate an HTML file which you save to the server disk and refer to in your linking. This will require write access for the process running your site, though (e.g. NETWORK SERVICE). This will produce fast pages, as the server serves these pages without any scripting engine getting involved.
Another option is reading the data into a DomDocument which you store in the Application object and refer to on the page that needs it (hence saving the round trip to the database). You could keep two timestamps together with the cached data (one for the caching time and one for the time of the last change of the data in the database). Timestamps allow for a fast staleness check of the cached data: cached timestamp <> database timestamp => refresh the data; otherwise use the cached data. One thing to note about this approach is that the Application object does not accept objects other than multithreaded ones, so you will have to use MSXML2.FreeThreadedDomDocument.6.0.
Personally I prefer the last one, as it allows for more dynamic usage and I don't have to worry about write access permissions for the process running my site (which would probably pose security risks anyway).
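The timestamp check itself is language-agnostic; here it is sketched in PHP purely for illustration (the table and key names are made up, and APCu stands in for the ASP Application object):

```php
<?php
// Staleness check: cached timestamp <> database timestamp => refresh.
// All names are hypothetical; APCu plays the role of the Application object.
function getSiteData(PDO $db): array
{
    $dbStamp = $db->query('SELECT MAX(updated_at) FROM site_data')->fetchColumn();

    $entry = apcu_fetch('site_data', $hit);
    if ($hit && $entry['stamp'] === $dbStamp) {
        return $entry['data']; // timestamps match: cached data is fresh
    }

    // Stale or missing: reload from the database and re-cache with its stamp.
    $data = $db->query('SELECT * FROM site_data')->fetchAll(PDO::FETCH_ASSOC);
    apcu_store('site_data', ['stamp' => $dbStamp, 'data' => $data]);

    return $data;
}
```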

SQLite as a cache store for static copies of dynamic pages - is it a good idea?

We're running our blog from a shared hosting account. Our host limits the allowed inodes/number of files on the hosting account to 150,000. We've implemented our own caching mechanism that caches all pages in full as soon as they are accessed, so that subsequent requests are served from the cache. Unfortunately, the inode limit will soon stop us from storing more pages.
Fortunately, we have SQLite on our server. We have MySQL too, but our shared hosting account only allows a maximum of 25 concurrent connections from the Apache webserver to the MySQL server. That's a major constraint! SQLite is said to be "serverless", so I believe SQLite won't have that kind of limitation.
With that, should I - and can I - use an SQLite table to store the full cached pages of all dynamic pages of our blog? The average cached page size is around 125 KB, and I have around 125,000 cached pages and growing.
Will there be any kind of bottleneck slowing down cached page delivery out of the SQLite database file?
Will I be able to write more pages to the cache_table in the SQLite database while simultaneously delivering sought pages from the cache_table to site visitors?
It's not a good idea, because SQLite usage may impact your website's performance (at least its response time).
I recommend using Memcached, or a NoSQL DB as a last resort (you need to test for a rise in response time).
But if you have no choice, SQLite will be better than MySQL, because its select operations are faster.
Haven't calculated that because there has never been a need to calculate max page generation time. I manage all pages statically in full, and with that, it has always been a stable process without any trouble.
Our server load varies from 400 to 8000 page requests in an hour.
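For what it's worth, the schema for such a cache can be very simple. A sketch with PDO (the names are made up), using SQLite's WAL journal mode precisely so that readers are not blocked while a new page is being written:

```php
<?php
// Sketch of a SQLite page cache via PDO (table and column names are made up).
$db = new PDO('sqlite:' . __DIR__ . '/cache.sqlite');
$db->exec('PRAGMA journal_mode = WAL'); // readers are not blocked by a writer
$db->exec('CREATE TABLE IF NOT EXISTS cache_table (
    url        TEXT PRIMARY KEY,
    html       TEXT NOT NULL,
    created_at INTEGER NOT NULL
)');

$url  = $_SERVER['REQUEST_URI'];
$stmt = $db->prepare('SELECT html FROM cache_table WHERE url = ?');
$stmt->execute([$url]);
$html = $stmt->fetchColumn();

if ($html === false) {
    $html = '<html>...generated page...</html>'; // stand-in for dynamic rendering
    $db->prepare('INSERT OR REPLACE INTO cache_table (url, html, created_at)
                  VALUES (?, ?, ?)')
       ->execute([$url, $html, time()]);
}

echo $html;
```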
