Android WebView - can't disable cache

I'm using a WebView in my app to load a remote web page, which in turn uses socket.io (node.js) via xhr-polling.
The problem is that I can't disable the caching of data received through socket.io.
For example, every 10 seconds my node server does io.emit, and my WebView receives it and saves it in:
/data/data/...../webviewCache
I don't want my WebView to save anything, because as time passes the number of those files just keeps growing, and they aren't helping my app run faster...
I've tried:
browser.getSettings().setCacheMode(2); //(2 is LOAD_NO_CACHE)
browser.getSettings().setAppCacheEnabled(false);
but neither of those works. My WebView is still saving files to the cache folder.
For now, I've set up a timer that empties the cache folder every 60 seconds, but that's not a solution I'd like to ship in production...
Am I missing something here, or is there a bug with disabling the cache on Android?
UPDATE 1: After a whole day of debugging, I've found something interesting.
Logcat shows two interesting things: saveCacheFile and getCacheFile
Then I decided to try once again to turn off the cache...
browser.getSettings().setCacheMode(android.webkit.WebSettings.LOAD_NO_CACHE);
That actually stopped the WebView from loading files from the cache, but it was still saving them. Logcat says something like this:
saveCacheFile for url .../socket.io/1/xhr-polling/BLNN28E7S4PZJsy2pWaF?t=13537
So I believe the actual question is: how do I prevent the WebView from SAVING cache files on every request?

How about adding a random string to the query part of your URL? This trick works in some cases.
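For example (a sketch with a made-up URL; note that socket.io's xhr-polling transport already appends its own t= parameter, as the log line in the question shows, so this may or may not help there):
// Append a cache-busting value so every request URL is unique.
var url = '/tracking/ping?nocache=' + new Date().getTime();
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
xhr.send();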

The only solution I found was to send "Cache-Control: no-store" in the HTTP response header.
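For the asker's setup that means setting the header on the node side. A minimal sketch with a plain Node.js http server (illustrative only; how to hook this into socket.io's own responses depends on the socket.io version):
var http = require('http');

http.createServer(function (req, res) {
    // Tell the client (and the WebView's cache) not to persist this response.
    res.setHeader('Cache-Control', 'no-store');
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('payload');
}).listen(8080);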

"New version available" with service worker and sw-precache

I'm trying to use sw-precache, but I must be doing something wrong!
I'm mostly using the demo code available from the GitHub repo and can't seem to get updates to the app to come through. Once it's cached the first time, it never checks for new versions.
I was expecting that when I publish a new service worker, the browser would request the new service worker and update the cache accordingly in the background. Then using the registration code in the example, I would be able to prompt the user to refresh and get the latest version from their newly refreshed cache.
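For reference, my registration code follows the usual sw-precache demo pattern, roughly like this (paraphrased, not the exact code from the repo):
if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js').then(function (reg) {
        reg.onupdatefound = function () {
            var installingWorker = reg.installing;
            installingWorker.onstatechange = function () {
                if (installingWorker.state === 'installed') {
                    if (navigator.serviceWorker.controller) {
                        // An old service worker still controls the page,
                        // so this is an update.
                        console.log('New or updated content is available.');
                    } else {
                        // First install.
                        console.log('Content is now available offline!');
                    }
                }
            };
        };
    });
}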
Would really appreciate if someone could please point me in the right direction.
Example
To demonstrate the problem, I've created an isolated example here:
https://github.com/stevenocchipinti/sw-precache-demo
The example uses a basic skeleton from create-react-app, which has a built-in build task that takes care of fingerprinting the filenames, etc.
I suspect the problem is with me caching everything by using the following sw-precache config:
{
  "staticFileGlobs": [ "build/**/*.*" ],
  "stripPrefix": "build/"
}
There are more accurate steps in the repo's readme, but the basic steps I'm taking to reproduce the problem are as follows (with my probably incorrect expectations).
Steps and Assumptions
Browse to the app for the first time
I should see Content is now available offline! in the console
Reload the page
The message in the console should not appear again because the service worker is installed, but the page should still work.
Go offline and reload the page
The page should still work
Make a visible change to the source code
Rebuild (run the build task and sw-precache)
This is where my understanding must be wrong
Reload the page
The service worker should update the cache in the background
When it's done, you should see New or updated content is available. in the console
The actual visible changes should not be visible until the next reload
Reload the page again
The browser will use the new cache this time around
The changes should be visible now!
There shouldn't be any messages in the console
The problem
Once the app has been cached initially, it will never update unless you unregister the service worker or force a reload.
I'm not sure how to make this work - any help would be greatly appreciated!
After replicating your development hosting environment, I can see that you're serving your service-worker.js file with a browser HTTP cache lifetime of one hour.
There's more information as to why this is leading to the behavior you're seeing, along with best practices, in this previous answer. As mentioned at the top of that answer, browsers plan on changing their behavior to stop honoring the HTTP cache for the service worker file by default, mainly due to the type of confusion that you're experiencing here. For the time being, though, the production versions of both Chrome and Firefox continue to honor those headers.
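The fix is to serve service-worker.js so that it isn't held by the HTTP cache. As a sketch, if you were serving the build with Express (your actual hosting environment will need its own equivalent directive):
var express = require('express');
var app = express();

app.use(express.static('build', {
    setHeaders: function (res, filePath) {
        // Never let the browser HTTP cache hold the service worker file,
        // so a new version is fetched on the next navigation.
        if (filePath.endsWith('service-worker.js')) {
            res.setHeader('Cache-Control', 'no-cache');
        }
    }
}));

app.listen(3000);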

How can I stop my app from logging people out of their session in Safari?

My command line testing tool, which uses NSURLConnection, is interfering with Safari's cookies. How do I stop this from happening?
Here's what I'm seeing:
I log into the web site in Safari.
I run my command line based sync tool.
The sync tool logs in, and gets several pages of data. For each request, the cookie rolls over. (The sync tool does not log out.)
I return to Safari and click a link. The link returns me to the login screen.
If I skip steps 2-3, the link in Safari works correctly. My tool is clearly the cause of this.
I'm creating my connections like this:
_connection = [[NSURLConnection alloc] initWithRequest:request
                                              delegate:self
                                      startImmediately:NO];
I'm not doing anything explicitly to the cookies, but just letting the default code handle them.
I'm not sure what's really happening here. If Safari and my app truly shared the cookies, wouldn't Safari's copy of the cookie also be rolled over? That would be weird behaviour, but everything would work and I wouldn't even know it was happening. This is something else.
Anyway, how can I stop my command line tool from logging people out of their session in Safari?
Seems like the right approach here is turning off default cookie handling entirely, so it doesn't touch the shared store. You can use -[NSMutableURLRequest setHTTPShouldHandleCookies:NO] to disable the default behavior, then read the cookie headers out of the responses, store them yourself, and insert them back into subsequent URL requests as appropriate.

Any way to get around the browser http timeout during debugging?

I am currently working on a Django development. There is a problem, which isn't a true problem but very annoying. Often, when I try to debug my Django app by putting down some break points, I get this error at the server end:
error: [Errno 32] Broken pipe
After reading this other post, Django + WebKit = Broken pipe, I have learned that this has nothing to do with the server but with the client browser being used. Basically, what happens is that the browser has an HTTP request timeout: if it doesn't receive a response within that timeout, it closes the connection to the server.
I find this timeout isn't really needed during debugging; indeed, it causes headaches. Is there any way I can lift or increase this timeout in my browser (Chrome)? Or perhaps a substitute browser that doesn't have this constraint?
Note: although I am using Django and have mentioned it, this isn't a Django-related question. It's more a question about how to make my debugging process more effective.
I prefer using the Linux/Unix curl command for debugging web applications. It's a good approach, especially if you want to focus on one specific request: for example, a POST that doesn't work for some set of parameters, or cookies that aren't set as expected.
Of course it may take some time at the beginning to learn how to use it, but then you have total control over every single piece of the request: timeouts, cookies, headers and so on. It's very helpful, because you can be sure that what you wanted to send is actually what was sent (no additional data added by the web browser).

Zend_Session and Zend_Log_Db are both writing to the database twice for every page load

There are plenty of examples of similar problems littered around the web but none of their solutions seem to fix this particular variation. Any suggestions would be appreciated.
Usually this problem occurs because a rogue link causes a request for a resource like a favicon or CSS file to hit the dispatcher more than once, causing multiple dispatch processes and therefore multiple rows in your database.
I have checked that all the links on this very simple example page do actually resolve to the resource to which they point.
The session handler is setup as follows:
Zend_Db_Table_Abstract::setDefaultAdapter($db);
Zend_Session::setSaveHandler(
    new Zend_Session_SaveHandler_DbTable($config->session->toArray())
);
The db logging is setup as follows:
$writer = new Zend_Log_Writer_Db(
    $db,
    $config->log->tableName,
    $config->log->columnMap->toArray()
);
$logger = new Zend_Log($writer);
Both objects are correctly setup and can read and write to and from the database. Only everything happens twice. If I put a test log message anywhere in the application it is written into the database twice. If I increment three variables with every call to the index action - one stored in the session, one passed around via a Zend_Registry object and another local to the indexAction - only the session variable is incremented by 2. The Apache access log shows the correct amount of requests being fired from the page load and all have good response codes of either 200 or 304 (unchanged).
I have tried disabling all head links.
I have tried disabling the layout entirely.
I have localised everything to the dispatcher and exited before dispatch is run.
In all cases the extra write/increment takes place.
Any thoughts?
Thanks in advance for any help.
I seem to have found and fixed the issue. Chrome (and possibly all WebKit browsers) issues an additional HEAD request on top of the GET, which means the application is hit twice and anything session-based is triggered by both requests. My temporary solution is to put the following code near the start of my index.php file.
if ("HEAD" == $_SERVER['REQUEST_METHOD']) {
exit;
}
I hope that helps anyone with the same issue.
Google Chrome also always requests favicon.ico, making extra requests to the server. Watch out for this in Chrome.
For more information:
http://framework.zend.com/issues/browse/ZF-11502?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#issue-tabs
Thanks to Sebastian Galenski for the contribution.

NS_BINDING_ABORTED Shown in Firefox with HttpFox

I am seeing some of the server calls (used for tracking purposes) on my site getting aborted in Firefox when watching them through HttpFox. This happens when clicking a link that loads another page in the same window. It works fine with a popup. The error type shown is NS_BINDING_ABORTED. I need to know whether the tracking call is hitting the server or not.
It works perfectly in Internet Explorer. Is this a problem with the tool? If so, can you suggest one that can be used with Firefox?
Because your server is not sending HTTP Expires headers, the browser is checking to see if what is in its cache is current.
The way it does this is to send the server a request stating the date of the copy it has in its cache; the server then sends a 304 status telling the client that what it has is current. In other words, the server is not sending the entire content again, just a short header saying the existing cache content is still valid.
What you probably need to do is add Expires headers to what you are serving. Then you will see the NS_BINDING_ABORTED message change to (cache), meaning the browser is simply getting content out of its cache, knowing it has not yet expired.
I should add that when you do a forced refresh, Firefox assumes you want to double-check what is in the cache, so it temporarily ignores Expires.
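How you add the header depends on your server; as an illustrative sketch, in a plain Node.js handler it could look like this (one day of freshness is an arbitrary choice):
var http = require('http');

http.createServer(function (req, res) {
    // Mark the response as fresh for one day, so the browser serves it
    // from its cache instead of revalidating with conditional requests.
    var oneDay = 24 * 60 * 60; // seconds
    res.setHeader('Cache-Control', 'public, max-age=' + oneDay);
    res.setHeader('Expires', new Date(Date.now() + oneDay * 1000).toUTCString());
    res.end('static content');
}).listen(8080);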
You shouldn't be worried just because you see something that looks like a failure code (NS_BINDING_ABORTED).
In one post a Firefox developer confirms that NS_BINDING_ABORTED is simply an indication that a page load has been stopped.
It seems perfectly normal that opening a page while another page is being loaded cancels the loads on the first page. It doesn't necessarily mean the loads were aborted before the request got sent to the server, which seems to be what you care about.
[edit: reworded & removed the bit about me not being familiar with HttpFox, as people who see this in 2022 are probably not using it anyway.]
What other javascript do you have on the page? Some javascript might be firing causing the request to be aborted.
I noticed the same thing in my application. I was redirecting the page in JavaScript (window.location = '/some/page.html'), but further down the block of code I was calling 'window.reload()'. The previous redirection was aborted because window.reload was called.
I don't know what tracking you are using but it's possible that the request is being sent to your server but the request is aborted because another request was issued afterwards.
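The anti-pattern looks roughly like this (a hypothetical sketch of the sequence described above):
function onSaved() {
    // Starts a navigation...
    window.location = '/some/page.html';
    // ... other code runs ...
    // ...and this cancels the pending navigation, which shows up
    // as NS_BINDING_ABORTED for the first request.
    window.location.reload();
}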
I have experienced a similar problem, but have identified the cause.
I have a link in the first cell of a table row, and some Javascript that replicates that link across the other TD's of the row. When I click on the 'real' link (in the first cell) I get this unwanted side-effect; when I click on other cells in the row, all is fine. I feel it's because the script is adding a second link to that first cell, when it already has one.
Hence, two instantaneous requests for the same page, with the first being aborted by the second.
This technique is fairly common, so something to look out for.
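A sketch of the kind of script that causes this (hypothetical markup; the point is that the first cell ends up with two click targets for the same URL):
var rows = document.querySelectorAll('table tr');
rows.forEach(function (row) {
    var link = row.querySelector('td a'); // the 'real' link in the first cell
    if (!link) return;
    row.querySelectorAll('td').forEach(function (cell) {
        // Unless this loop skips the cell that already contains the link,
        // that cell gets a second click target for the same URL: clicking
        // it fires two requests, and the first one is aborted.
        cell.addEventListener('click', function () {
            window.location = link.href;
        });
    });
});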
NS_BINDING_ABORTED error - best approach: using the JavaScript setInterval method with a delay of 0 to 100 milliseconds (depending on page load), we can execute our tracking request after the default page submit request has been processed.
Example:
var el = document.getElementById("t");
el.addEventListener("click", avoidNSError, false); // Firefox
function avoidNSError() {
    // Defer the tracking request so it runs after the default submit
    // request has been processed, instead of being aborted by it.
    var elementInterval = setInterval(function () {
        /* Tracking or other request code goes here */
        clearInterval(elementInterval);
    }, 0);
}
In my case it was the same NS_BINDING_ABORTED error, but it was because a "button" element, which I clicked to trigger an event, was missing the "type" attribute with the value "submit" (see: How to prevent buttons from submitting forms).
The error NS_BINDING_ABORTED can have a variety of reasons.
In my case it was garbage in the response headers received from the server, basically a HTTP protocol violation.
Using a web debugging proxy such as Fiddler may sometimes reveal such issues better than the browser's own debugging console (which today does what, I assume, HttpFox did, just better), or at least show more detailed information or clearer error messages.
I know this is a very old question, but this happened to me recently with Firefox 95. The images of an ancient application made by a colleague of mine were not loading (or loaded randomly) because of this code:
window.addEventListener('focus', function() {
    // omit other code...
    location.reload();
});
Once this code was nested inside a 'load' listener, the issue completely disappeared.
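In other words, something like this (a sketch; the surrounding code is assumed):
// Attach the focus-triggered reload only after the page has fully loaded,
// so the reload can no longer cancel in-flight image requests.
window.addEventListener('load', function() {
    window.addEventListener('focus', function() {
        // omit other code...
        location.reload();
    });
});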
In my experience, NS_BINDING_ABORTED occurred because of a missing closing tag inside <form>...</form>.
Example:
<form name="myform" action="submit.php" method="post">
<div class="myclassinput">
<input type="text" name="firstname">
<input type="text" name="lastname">
<input type="submit" value="Submit">
</form>
Here I forgot to write the closing </div> tag before </form>.
I note my experience here, just in case... For me, it was a website on a local dev server (address 192.... etc.) that was put online on an already-used URL like www.something.com
The consequence was that an MP4 video (played through the H5P library) didn't play, though it could still be scrubbed via the progress bar. And when I copied and pasted the URL of the video, this NS_BINDING_ABORTED error appeared on my laptop, while my colleague on the same internet connection had no problem viewing it.
I ran ipconfig /release and /renew, then restarted my computer, and it was fixed... maybe it was some stale data conflicting with the previous content on this already-used domain? I don't know.
For me, the reason was that in Firefox the preventDefault function wasn't working in the form submit event. This answer helped solve it: https://stackoverflow.com/a/56695472/2097494
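The pattern boils down to registering a submit handler and calling preventDefault on the event object, roughly like this (a generic sketch with a hypothetical form id, not necessarily the linked answer's code):
var form = document.getElementById('myForm'); // hypothetical form id
form.addEventListener('submit', function (event) {
    // Stop the browser's default submit navigation, which would
    // otherwise abort any request started by this handler.
    event.preventDefault();
    // ... send the data via XMLHttpRequest instead ...
});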
