Web Programming with AJAX, Problem with caching (I think)

Web programmer here - using AJAX (HTML, CSS, JavaScript, AJAX, PHP, MySQL), but for some reason Internet Explorer is acting up (surprise surprise).
AJAX is updating query results on the HTML page, via a PHP script that queries a MySQL Database.
Everything is working fine, except when I use Internet Explorer 8.0.
There are several PHP scripts which allow the data to be ordered according to certain criteria, and for testing purposes I have attached the mktime field (the current time, in the format HH:MM:SS) to the beginning of the results for each query.
When I use IE, these times appear to remain constant, whereas in ALL other browsers the times are correct and display the current time.
I think the issue has something to do with caching, or something along those lines.
Any thoughts or suggestions welcome...

Here is an article on the caching issue.
If your request is a GET, change it to a POST; this will prevent the results from being cached.

GET requests are cached in IE; switch it to a POST request and it won't be cached anymore.
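For example, here is a minimal sketch of the switch, assuming a plain XMLHttpRequest; the results.php endpoint, the results div and the posted parameter are placeholders, not anything from the original page:
var xhr = new XMLHttpRequest();
xhr.open("POST", "results.php", true);
// Form-encoded body, like a normal form submit.
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    document.getElementById("results").innerHTML = xhr.responseText;
  }
};
// The ordering criteria travel in the request body instead of the URL,
// and IE will not serve a cached copy of a POST response.
xhr.send("orderBy=time");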

Instead of switching to POST, which can be ugly if you're not really using it to update or create content, you should append a random number to the query string, as in http://domain.com/ajax/some-request?r=123456. If this number is unique for every request, you won't have caching problems.
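As a minimal sketch (the request URL is a placeholder):
var xmlhttp = new XMLHttpRequest();
// Every call produces a different r value, so IE never finds a cached match.
xmlhttp.open("GET", "/ajax/some-request?r=" + Math.floor(Math.random() * 1000000), true);
xmlhttp.send();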

What I did was keep the GET and add a new dummy query parameter to the query string, as follows:
./BaseServlet?sname=3d_motor&calcdir=20110514&dummyParam=datetime
I set dummyParam to the value of a Date object in the JavaScript, so that every time the URL is generated the browser treats it as a new URL and fetches fresh results.
var d = new Date();
url = url + '&dummyParam=' + d.valueOf();
So instead of generating random numbers, this is an easy way to do it!

Related

Reload TSV File Without Refreshing Page

I've been searching for a day or two for an answer to this question, but I haven't found one yet. I've got an external application which periodically modifies (adds data to) a TSV file. I'm using the Basic Line Chart example to display the data, and it looks really nice.
Now I want the data to update when the TSV file is updated. I want to be able to set an auto-refresh on the data where it pulls from the tsv file and repopulates the graph without refreshing the entire page.
I tried just wrapping up the current code in a function and calling setInterval on that function, but the data remains the same each time (maybe because it's cached?).
Ideally the solution to this would be a function which can be called to Update whenever I'd like (based on a user event, timer, whatever).
Any ideas, links, or suggestions for alternate ways to accomplish the same goal would be much appreciated!
As a bonus question: I understand D3 may not be the right choice for this sort of pseudo-real-time data display. Are there other packages which lend themselves to this sort of thing more? The app generating the data is a C# application (in case that ends up mattering).
Edit: As a supplementary explanation, imagine this example but with the data being read from a file: http://mbostock.github.com/d3/tutorial/bar-2.html
If you are executing an Ajax call to fetch the data from the server and you think caching is a problem, you can try busting the cache by setting the cache parameter in jQuery's ajaxSetup to false anywhere in your code:
$.ajaxSetup({cache: false});
From the docs:
If set to false, it will force requested pages not to be cached by the
browser. Note: Setting cache to false will only work correctly with HEAD and
GET requests. It works by appending "_={timestamp}" to the GET parameters. The
parameter is not needed for other types of requests, except in IE8 when a
POST is made to a URL that has already been requested by a GET.
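As a rough sketch of the polling approach, assuming D3 v3's d3.tsv(url, callback), a data file named data.tsv, and a redrawChart(rows) function you write yourself to re-bind the new rows to the chart (all of these names are assumptions):
function refreshChart() {
  // The timestamp parameter busts the browser cache (same idea as cache: false).
  d3.tsv("data.tsv?t=" + new Date().getTime(), function (error, rows) {
    if (error) { return console.warn(error); }
    redrawChart(rows); // your own code: update scales, rebind data, redraw lines
  });
}

refreshChart();                    // initial draw
setInterval(refreshChart, 5000);   // then poll every 5 seconds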

Ajax results filtering and URL parameters

I am building a results filtering page using AJAX requests. I would like to reflect the filters in the URL. For example: for price_from I want to add ?price_from=VAL to the URL.
I have a backend that is capable of rendering the page with URL parameters.
After some googling I found a Backbone.Router solution, which has a hash fallback for IE versions that do not support the HTML5 History API.
I have a problem with setting a good philosophy of routes. I have a set of filtering parameters (price_from, price_to, color, ...) and I would like to attach each parameter to one route.
Is it possible to chain the routes to match, for example: ?price_from=0&price_to=1&color=red? (the item order can change)
That is: call all the routes at the same time and keep IE backwards compatibility?
Your best bet would be to have a query portion of the URL rather than using GET parameters to denote the search criteria. For example:
Push state: /search/query/price_from=0&price_to=1&color=red
Hash based: #search/query/price_from=0&price_to=1&color=red
Your backend would of course need to change a bit to be able to parse the new URL structure.
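A rough sketch of that structure, assuming Backbone.js (plus Underscore, which it already depends on); the route name, the filter names from the question, and the parseQuery helper are illustrative, not an existing API:
var SearchRouter = Backbone.Router.extend({
  routes: {
    // One route captures the whole query portion, whatever the parameter order:
    // #search/query/price_from=0&price_to=1&color=red
    "search/query/:query": "search"
  },
  search: function (query) {
    var filters = parseQuery(query);
    // ...re-run the AJAX request with `filters` and render the results...
  }
});

// Turn "price_from=0&price_to=1&color=red" into { price_from: "0", ... }.
function parseQuery(query) {
  var filters = {};
  _.each((query || "").split("&"), function (pair) {
    var parts = pair.split("=");
    filters[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
  });
  return filters;
}

new SearchRouter();
// pushState where the History API exists; older IE falls back to #hash URLs.
Backbone.history.start({ pushState: true });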

Auto-refresh page with AJAX

I've managed to make my page auto-refresh every 20 seconds, but now I want to do it in another section, and I have a slightly different situation.
The difference is that now the page is refreshed using ajax, so the query string in the URL doesn't change. In the other situation, the query string parameters matched the controller method parameters when the automatic post was made, so it worked fine.
I want to know if there's a way to change the URL 'artificially' when the AJAX request is made, or if somebody can give me a well-explained solution for this issue. I'm relatively new to MVC.
You can get the current URL in JavaScript using:
var url = window.location.href;
You can then use this in your refresh function.
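If you want the visible URL to follow the AJAX refresh, here is a minimal sketch using the HTML5 History API (older IE without pushState simply keeps the old URL), assuming jQuery for the AJAX call; the /Section/Refresh action, the page parameter and the #section element are placeholders:
var currentPage = 1;

function refreshSection(page) {
  $.get("/Section/Refresh", { page: page }, function (html) {
    $("#section").html(html);
    // Rewrite the query string in the address bar without reloading the page,
    // so the URL matches the data that was just loaded.
    if (window.history && history.pushState) {
      history.pushState({ page: page }, "", window.location.pathname + "?page=" + page);
    }
  });
}

// Auto-refresh every 20 seconds, as in the original section.
setInterval(function () { refreshSection(currentPage); }, 20000);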

Adding a UNIQUE ID to the URL parameter if the XMLHttp Open Method - What does this mean and how is it done?

I need help understanding AJAX. I am going through the tutorial on W3Schools (creating a button that opens a text file on the server and displays the result in a div).
One part of the tutorial seems abstract to me, without sufficient explanation. I am sure it's a prerequisite that I have missed or am not aware of; details below.
To avoid getting a cached result in response to an XMLHttpRequest made to the server, the tutorial says one needs to ADD A UNIQUE ID to the URL parameter in the XMLHttpRequest open method. This has been done (in the tutorial) by adding a '?', another character (t) and an '=' after the file extension, followed by appending a random number to the URL (using Math.random()). See the code below.
A simple GET request would be like:
xmlhttp.open("GET","demo_get.asp,true); \\I can understand this
Unique ID added to URL
xmlhttp.open("GET","demo_get.asp?t=" + Math.random(),true); \\ I can't undersatnd this
'?' , 't' & a random number generator added to demo_get.asp - Why T, why not P Q R Z ?? Why "?" after .asp
Should the compiler not go bonkers and report an error if arbitary characters are added to the file location. How is the part of the URL after the file extention handled as in this case ?t= + Math.random()
This has been a case of much agony and frustration for the last 3 days cause I don't get which part of JS i have missed here, what do you call this concept and where can I read it ??
This apart, specifying message headers while sending data - What are HTTP headers and what do they mean. How do I decide what the parameters of the setRequestHeader() method shall be ?
Please help. Rest of Ajax is clear to me.
(I haven't read on the second part - the message headers. I have asked that query here to avoiding posting another question later, just in case it turns out to be as eluding and enigmatic as the UNIQUE ID concept - Apologies in case its a direct simple question I ought to read up myself)
The cache compares the requested URL with those it has stored; if a unique ID is added to the URL, it does not match, and the browser treats it as a fresh GET request, which is then forwarded to the server. This is a standard way to bypass / disable browser caching.
Please refer to this document for a better understanding of browser caching.
See page 4, which explains the same thing as stated above.
http://www.f5.com/pdf/white-papers/browser-behavior-wp.pdf
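Putting it together, here is a minimal sketch based on the same W3Schools-style example (demo_get.asp and the myDiv element are placeholders from that kind of tutorial):
var xmlhttp = new XMLHttpRequest();

// "?" starts the query string: everything after it is parameters, not part of
// the file path, so the server still runs demo_get.asp and simply ignores a
// parameter it never reads. The name t is arbitrary; p, q, r or z would work too.
// Math.random() makes the URL unique on every call, so the cache never matches.
xmlhttp.open("GET", "demo_get.asp?t=" + Math.random(), true);

xmlhttp.onreadystatechange = function () {
  if (xmlhttp.readyState === 4 && xmlhttp.status === 200) {
    document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
  }
};

xmlhttp.send();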

How can I prevent IE Caching from causing duplicate Ajax requests?

We are using the Dynamic Script Tag with JsonP mechanism to achieve cross-domain Ajax calls. The front end widget is very simple. It just calls a search web service, passing search criteria supplied by the user and receiving and dynamically rendering the results.
Note - For those that aren’t familiar with the Dynamic Script Tag with JsonP method of performing Ajax-like requests to a service that return Json formatted data, I can explain how to utilise it if you think it could be relevant to the problem.
The service is WCF hosted on IIS. It is Restful so the first thing we do when the user clicks search is to generate a Url containing the criteria. It looks like this...
https://.../service.svc?criteria=john+smith
We then use a dynamically created Html Script Tag with the source attribute set to the above Url to make the request to our service. The result is returned and we process it to show the results.
This all works fine, but we noticed that when using IE the service receives the request from the client Twice. I used Fiddler to monitor the traffic leaving the browser and sure enough I see two requests with the following urls...
Request 1: https://.../service.svc?criteria=john+smith
Request 2: https://.../service.svc?criteria=john+smith&_=123456789
The second request has been appended with some kind of Id. This Id is different for every request.
My immediate thought is it was something to do with caching. Adding a random number to the end of the url is one of the classic approaches to disabling browser caching. To prove this I adjusted the cache settings in IE.
I set “Check for newer versions of stored pages” to “Never” – This resulted in only one request being made every time. The one with the random number on the end.
I set this setting value back to the default of “Automatic” and the requests immediately began to be sent twice again.
Interestingly I don’t receive both requests on the client. I found this reference where someone is suggesting this could be a bug with IE. The fact that this doesn’t happen for me on Firefox supports this theory.
Can anyone confirm if this is a bug with IE? It could be by design.
Does anyone know of a way I can stop it happening?
Some of the more vague searches that my users will run take up enough processing resource to make doubling up anything a very bad idea. I really want to avoid this if at all possible :-)
I just wrote an article on how to avoid caching of Ajax requests :-)
It basically involves adding no-cache headers to any Ajax request that comes in:
public abstract class MyWebApplication : HttpApplication
{
    protected MyWebApplication()
    {
        this.BeginRequest += new EventHandler(MyWebApplication_BeginRequest);
    }

    void MyWebApplication_BeginRequest(object sender, EventArgs e)
    {
        string requestedWith = this.Request.Headers["x-requested-with"];
        if (!string.IsNullOrEmpty(requestedWith) &&
            requestedWith.Equals("XMLHttpRequest", StringComparison.InvariantCultureIgnoreCase))
        {
            this.Response.Expires = 0;
            this.Response.ExpiresAbsolute = DateTime.Now.AddDays(-1);
            this.Response.AddHeader("pragma", "no-cache");
            this.Response.AddHeader("cache-control", "private");
            this.Response.CacheControl = "no-cache";
        }
    }
}
I eventually established the reason for the duplicate requests. As I said, the mechanism I chose for making Ajax calls was Dynamic Script Tags. I built the request Url, created a new Script element and assigned the Url to the src property...
var script = document.createElement("script");
script.src = "https://...";
Then I executed the script by appending it to the Document Head. Crucially, I was using the jQuery append function...
$("head").append(script);
Inside the append function jQuery was anticipating that I was trying to make an Ajax call. If the type of element being appended is a Script, then it executes a special routine that makes an Ajax request using the XMLHttpRequest object. But the script was still being appended to the document head, and being executed there by the browser too. Hence the double request.
The first came direct from the script – the one I intended to happen.
The second came from inside the jQuery append function. This was the request suffixed with the randomly generated query string argument in the form "&_=123456789".
I simplified things by preventing the jQuery library side effect. I used the native append function...
document.getElementsByTagName("head")[0].appendChild(script);
One request now happens in the way I intended. I had no idea that the jQuery append function could have such a significant side effect built in.
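For reference, a stripped-down sketch of the corrected flow; the service URL, the callback query parameter and handleResults follow the usual JSONP convention and are placeholders, not the real service contract:
function search(criteria) {
  var url = "https://example.com/service.svc?criteria=" +
            encodeURIComponent(criteria) + "&callback=handleResults";

  var script = document.createElement("script");
  script.src = url;

  // Native DOM append: the browser fetches and executes the script exactly once,
  // with no extra XMLHttpRequest fired behind the scenes.
  document.getElementsByTagName("head")[0].appendChild(script);
}

// The function the JSONP response invokes with the result data.
function handleResults(results) {
  // ...render the results...
}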
See www.enhanceie.com/redir/?id=httpperf for further discussion.
