Is it possible to clear all site cache? I would like to do this when the user logs out or the session expires instead of instructing the browser not to cache on each request.
As far as I know, there is no way to instruct the browser to clear all the pages it has cached for your site. The only control that you, as a website author, have over caching of a page occurs when the browser tries to access that page. You can specify that cached versions of your pages should expire at a certain time using the Expires header, but even then the browser won't actually clear the page from its cache at that time.
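For illustration, a minimal sketch of what that looks like server-side, assuming a Node/Express-style handler (the route and the one-hour expiry are made up for the example):

// Sketch: telling the browser how long it may reuse a cached copy of a page.
// Assumes an Express app; the route and expiry time are illustrative only.
const express = require('express');
const app = express();

app.get('/account', (req, res) => {
  // The browser may serve this page from cache until the given date...
  res.set('Expires', new Date(Date.now() + 60 * 60 * 1000).toUTCString());
  // ...but nothing here can force it to purge pages it has already cached.
  res.send('<html><body>Account page</body></html>');
});

app.listen(3000);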
I certainly hope not - that would give the website destructive powers over the client machine!
If security is your main concern here, why not use HTTPS? Browsers don't cache content received via HTTPS (or cache it only in memory).
One tricky way to mimic this would be to include the session ID as a parameter when referencing any static piece of content on the site. When the user establishes the session, the browser will treat all of that content as new because of the parameter. For the duration of the session the browser will use the static content from its cache. After the user logs out and logs back in again, the session-ID parameter on the static content will be different, so the browser will treat it as completely new content and download everything again.
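A rough sketch of one way to wire this up (in practice you would more likely append the parameter server-side in your templates; the sid parameter name and the data-src convention here are just placeholders):

// Sketch: append the current session ID to static asset URLs so that a new
// session makes the browser treat every asset as brand-new content.
function assetUrl(path, sessionId) {
  return path + (path.includes('?') ? '&' : '?') + 'sid=' + encodeURIComponent(sessionId);
}

// Example: rewrite every image on the page to carry the current session ID.
const sessionId = 'abc123'; // would really come from the server
document.querySelectorAll('img[data-src]').forEach(function (img) {
  img.src = assetUrl(img.dataset.src, sessionId);
});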
That being said... this is a hack and I wouldn't recommend pursuing it. Why do you want the user's cache to be cleared after their session expires? There's probably a better solution for your situation than what you are currently asking for.
If you are talking about ASP.NET cache objects, you can use this:
' Enumerate the cache and remove each entry by its key
For Each elem As DictionaryEntry In Cache
    Cache.Remove(CStr(elem.Key))
Next
to remove items from the cache, but that may not be the full extent of what you are trying to accomplish.
I am unable to clear cookies using watir-webdriver and browser.cookies.clear
Are there any other alternatives?
This is, as far as I know, a browser-based limitation, due to concerns about security and privacy.
Webdriver interacts with the browser via JavaScript, and JavaScript is not allowed to clear all cookies (think how nasty that could be on a malicious site). Outside of a testing environment, most JS that is executing came from the server of the site you are accessing, so the most it is allowed to do is clear the cookies for the 'current domain', i.e. the cookies belonging to the current URL. If a web page wants to clear its own cookies, that is OK, but it is not allowed to clear cookies belonging to other domains.
So if you want all your tests to start with fresh cookies, you will need something in the 'Before' section of env.rb that goes to the root of the site in question and then clears the cookies.
BTW, the same limitation applies to setting cookies: if you want to create cookies for a specific site, you need to navigate to that site before trying to create them, or nothing gets created.
This is not an answer, but FYI only.
Suppose Chrome is the chosen browser. When Cucumber is running, the output of ps -ef | grep chrome shows the customized user data directory, which looks something like this:
--user-data-dir=/tmp/.org.chromium.Chromium.G2DgPo
And inside that directory, we'll be able to find the Cookies file stored under the Default folder.
I'm not sure whether directly deleting that database file would do what you need, because in a normal browser session such runtime data is stored at
~/.config/google-chrome/Default/Cookies
I have added caching to my application with $cacheFactory, but when the user closes the browser and then reopens it, the cached data about the browsed products is gone.
Is there any way to make it last longer? Set an expiration date or something like that?
Thanks.
Edit: I may end up having to use Breeze.js, but before that I wanted to know whether any of you know if it is possible to do this with AngularJS (I read the API and there is no info about it :S).
Edit 2: to help, the way I use $cacheFactory is shown in this JSFiddle code:
angular.module('app', []).
  factory('SomeCache', function($cacheFactory) {
    return $cacheFactory('someCache', {
      capacity: 3 // optional - turns the cache into an LRU cache
    });
  });
Edit 3: Lawnchair is not an option; it stores data in a JavaScript array and doesn't persist it after closing the browser.
$cacheFactory uses in-memory browser storage, which means it doesn't persist across even a page refresh, let alone closing and re-opening the browser.
If you want to cache data across page loads and browser sessions, you're looking for localStorage. See here for a very simple example:
http://jsbin.com/acokis/2/edit
I think you'll want to look at amplify.js or lawnchair.js, as they provide nice wrappers for managing localStorage and also provide fallback adapters for browsers that don't support localStorage (the fallback options won't persist across page refreshes, though).
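To illustrate the localStorage approach, here is a minimal sketch (the key name and the data shape are made up for the example):

// Sketch: persist cached product data across browser sessions using
// localStorage. The key name and data shape are illustrative only.
var CACHE_KEY = 'browsedProducts';

function saveProducts(products) {
  // localStorage only stores strings, so serialise to JSON
  localStorage.setItem(CACHE_KEY, JSON.stringify(products));
}

function loadProducts() {
  var raw = localStorage.getItem(CACHE_KEY);
  return raw ? JSON.parse(raw) : [];
}

saveProducts([{ id: 1, name: 'Widget' }]);
console.log(loadProducts()); // still there after closing and reopening the browser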
We have a simple CCTV system in our office that shows a live image from each of our security cameras. The CCTV system doesn't have an API or any method of extracting the live images. You can however view the image from another browser by creating a basic HTML page with the image link:
http://192.168.1.6/media/getimage_sid.php?sid=a09c4ecb72bade3802e7bf563b0d0bd6&card=1&camera=1&width=384&height=288
This works perfectly until the session expires and/or times out. I don't know very much about cookies and sessions, but when I inspected the page in Google Chrome I noticed the following cookie:
Name:    PHPSESSID
Value:   a09c4ecb72bade3802e7bf563b0d0bd6
Domain:  192.168.1.6
Path:    /
Expires: Session
Size:    41
There are also HTTP and Secure columns, but both are empty.
What I am trying to figure out is how to keep that cookie alive, or trigger it to be recreated with the same value. I'm assuming that a rake task that logs in to the system wouldn't work, because that would reset the session ID every time.
The intranet is a Rails application, so one option would be a script that logs in, stores the current session ID in the database, and then puts the last recorded session ID into the IMG links. It's a bit of a long way around, though; I'm hoping for a better solution.
I have read a few articles showing how to do this with AJAX but that would appear to rely on the intranet being viewed all the time. I need this to work if no-one has viewed the intranet for the weekend.
This project is so we can put a couple of live (when the page refreshes!) images on our intranet so we don't have to continuously go to the CCTV system, log in and find the right camera just to see who is at the garage door etc.
Any help would be appreciated.
It's a bit of a hack, but I've made a small script that pulls in the latest session ID and then puts it into the image links.
A completely different approach: does the following URL get the right image, without having to worry about the session ID?
http://192.168.1.6/media/getimage_sid.php?card=1&camera=1&width=384&height=288
The session ID used in the cookie seems to be the PHP generated one.
I don't think the session ID should become stale if you 'notify' the server that you're still online.[1] You should try to specify the Cookie: header in your HTTP requests. Specifying the SID via the URL is probably not enough to indicate to the server that you're actually using it.[2]
If your web pages are fetching the images directly (i.e. you have an <img src="http://192.168.1.6/..."> in the HTML page), it might work like this:
make an AJAX request (XMLHttpRequest) to a URL which returns a session ID.
any subsequent request to the server from that page should then automatically include the session cookie in the headers.[3]
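For instance, something along these lines would keep the session warm (the exact URL to ping is a guess; any page on the CCTV host that touches the session should do):

// Sketch: periodically hit the CCTV server so the PHPSESSID cookie keeps
// being sent and the session never goes stale. The URL is a guess.
function keepSessionAlive() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://192.168.1.6/media/getimage_sid.php?card=1&camera=1&width=1&height=1');
  xhr.withCredentials = true; // include cookies on a cross-origin request
  xhr.send();
}

// Ping every 5 minutes; use something comfortably shorter than the session timeout.
setInterval(keepSessionAlive, 5 * 60 * 1000);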
Otherwise, if you can't specify a Cookie: header, you could extend the time before a session becomes stale. If you have access to the computer hosting the PHP interface (192.168.1.6), you can configure PHP to do so via the php.ini configuration file, I believe. Information about session configuration is available here, and specifically the gc_maxlifetime option seems useful:
session.gc_maxlifetime specifies the number of seconds after which data will be seen as 'garbage' and potentially cleaned up. Garbage collection may occur during session start (depending on session.gc_probability and session.gc_divisor).
Alternatively, if none of the above appeals to you, your solution of fetching (GET) a page to obtain a valid, fresh session ID seems logical and good. You could optimize this by measuring how long it takes before session IDs become stale and fetching new session IDs only at that interval.
[1] I looked for a valid reference for this but couldn't find one.
[2] Specifically, PHP uses a PHPSESSID= token in the URL, whereas in your example it looks like sid=. Putting the session ID in the URL is also generally considered bad practice security-wise, I believe (this article explains how it might be used for XSS), since you're exposing user information in the URL, though I think that has little to no effect in this case.
[3] According to the XMLHttpRequest spec for the send() method:
If the user agent supports HTTP State Management it should persist, discard and send cookies
I have a website which is displayed to visitors via a kiosk. People can interact with it. However, since the website is not locally hosted and is loaded over an internet connection, the page loads are slow.
I would like to implement some kind of lazy caching mechanism so that, as people browse the pages, the pages and the resources they reference get cached and subsequent loads of the same page are instant.
I considered using HTML5 offline caching, but it requires me to specify all the resources in the manifest file, which is not feasible for me, as the website is pretty large.
Is there any other way to implement this? Perhaps using HTTP caching headers? I would also need some way to invalidate the cache at some point to "push" the new changes to the browser...
The usual approach to handling problems like this is with HTTP caching headers, combined with smart construction of URLs for resources referenced by your pages.
The general idea is this: every resource loaded by your page (images, scripts, CSS files, etc.) should have a unique, versioned URL. For example, instead of loading /images/button.png, you'd load /images/button_v123.png and when you change that file its URL changes to /images/button_v124.png. Typically this is handled by URL rewriting over static file URLs, so that, for example, the web server knows that /images/button_v124.png should really load the /images/button.png file from the web server's file system. Creating the version numbers can be done by appending a build number, using a CRC of the file contents, or in many other ways.
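For example, one way to generate the version is to hash the file contents at build time (a Node.js sketch; the paths and the naming scheme are purely illustrative):

// Sketch (Node.js): derive a versioned file name from a hash of the file's
// contents, so the URL changes whenever the content changes.
const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

function versionedName(filePath) {
  const hash = crypto.createHash('md5')
    .update(fs.readFileSync(filePath))
    .digest('hex')
    .slice(0, 8);
  const ext = path.extname(filePath);        // ".png"
  const base = path.basename(filePath, ext); // "button"
  return base + '_' + hash + ext;            // e.g. "button_3f2a1c9d.png"
}

console.log(versionedName('images/button.png'));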
Then you need to make sure that, wherever URLs are constructed in the parent page, they refer to the versioned URL. This obviously requires dynamic code used to construct all URLs, which can be accomplished either by adjusting the code used to generate your pages or by server-wide plugins which affect all text/html requests.
Then you set the Expires header for all resource requests (images, scripts, CSS files, etc.) to a date far in the future (e.g. 10 years from now). This effectively caches them forever. It means that all resources loaded by each of your pages will always be fetched from cache; cache invalidation never happens, which is OK because when the underlying resource changes, the parent page will use a new URL to find it.
Finally, you need to figure out how you want to cache your "parent" pages. How you do this is a judgement call. You can use ETag/If-None-Match HTTP headers to check for a new version of the page every time, which will very quickly load the page from cache if the server reports that it hasn't changed. Or you can use Expires (and/or Max-Age) to reload the parent page from cache for a given period of time before checking the server.
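Put together, the caching headers might be configured like this (an Express sketch; the /assets mount point and paths are assumptions, and your server will differ):

// Sketch (Express, illustrative): far-future caching for versioned assets,
// cheap revalidation for the parent HTML pages.
const express = require('express');
const app = express();

// Versioned assets: cache "forever" - the URL changes when the content does.
app.use('/assets', express.static('public', {
  immutable: true,
  maxAge: '365d'
}));

// Parent pages: Express sends an ETag by default, so the browser can
// revalidate and get a cheap 304 Not Modified when nothing has changed.
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache'); // always revalidate, but reuse cache on 304
  res.send('<html><body>kiosk page</body></html>');
});

app.listen(3000);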
If you want to do something even more sophisticated, you can always put a custom proxy server on the kiosk; in that case you'd have total, centralized control over how caching is done.
Background
I'm building an app that links recent web pages you've visited together. To do this, I need to get the HTML for recent URLs using Cocoa.
Right now, I'm using an invisible WebView to do this.
As I understand it, if the URL isn't in the cache for my app, this is hitting web servers.
What I want
The chances are high that the URL I'm grabbing has already been cached by Safari as the page has already been visited.
I want my app to check Safari's cache for the URL first. If it's there, it should just use this data. If not, it should hit the web server and store the page in my app's cache.
I don't really want to have to parse the cache.db file from Safari using sqlite3 - I've no idea if this format will stay the same. I'm after something simpler and more high level.
What I've tried
I know that you can set up your own NSURLCache using the method initWithMemoryCapacity:diskCapacity:diskPath: but I don't want to try pointing this to the Safari cache in case it screws up Safari by writing to it.
Is there an easy, high level way of sharing the Safari cache?
UPDATE
Aha. I've just realised there may be a way to do this that I've been missing.
I could make a new instance of NSURLCache with initWithMemoryCapacity:diskCapacity:diskPath:, point it at the Safari cache, then specify a cache policy of NSURLRequestReturnCacheDataDontLoad for the URL Request when loading the page.
If this fails, I could just try to load the page as normal. I'll try this out and update the question when I know more.
To be honest, you just can't do this.
Firstly, I'm pretty certain -[NSURLCache initWithMemoryCapacity:diskCapacity:diskPath:] won't work as you expect. It will instead blow away the old cache file to create its own, potentially seriously upsetting Safari.
Secondly, NSURLCache is a composite cache. That is, it caches data first in memory and then moves it out to disk at some point. So even if you could properly access Safari's cache file (which you can't), you'd only be able to access the older cached data, not the most recent.