When pressing the Back button, what determines whether the browser hits the server again or re-renders what it had in memory?

I'm developing a Rails 3 app, and I noticed that when pressing Back, the browser re-shows the page it already has in memory, instead of hitting the server.
That said, I'm 99% confident that in previous apps I've developed, this wasn't the case, and the browser would hit the server again.
So, provided I remember correctly, this must be some HTTP header or something that Rails is setting to make things snappier.
I'd like to know what it is, since this is kind of problematic for us on pages where we modify the DOM heavily through JS: the user ends up pressing Back and getting the "original" version served, instead of the heavily modified one, which is really confusing in some cases.
EDIT: I thought doing this had fixed the problem, but it didn't:
def force_no_cache_on_back_button
  # expires_in -1, :public => false
  headers['Pragma'] = 'no-cache'
  # Cache-Control directives are comma-separated; the original used semicolons
  headers['Cache-Control'] = 'no-cache, no-store, private, must-revalidate, max-age=0'
  headers['Expires'] = 1.day.ago.httpdate # Expires expects an HTTP-date, not Time#to_s
end
Surprisingly, that does work over HTTPS with a broken SSL certificate (we use a self-signed one for our staging server), but it doesn't work in production with a "good" SSL cert, which is just weird.
Any other ideas?
Thanks!
Daniel
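A minimal sketch of how such a filter is typically wired up, assuming it should cover every action (Rails 3 syntax); in most browsers, no-store is the Cache-Control directive that actually keeps a page out of the back/forward cache:

class ApplicationController < ActionController::Base
  # Run on every action so no page is eligible for the back/forward cache.
  before_filter :force_no_cache_on_back_button
end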

Connection refused error when I try to close a Watir browser?

This thing just seems to give me problem after problem.
I posted another question earlier, trying to solve the problem of retaining my session state between closing and reopening the browser through Watir. Firefox achieves this on its own, so I figured that if I just set the preferences correctly, it'd save my state. I ended up having to go into the selenium-webdriver source and make some changes to actually achieve this.
So, I was just testing my application. Part of its behavior is to loop through a bunch of pages and extract text from them. The loop is just a while true, and I figured "hey, I can just stop the program with Ctrl+C". Well, this worked fine up to now, until it came to saving the state.
Ctrl+C causes it not to save its state. My guess as to why is that you need to actually close the browser (I recreated the bug in IRB, so I'm pretty sure this is the case). Simple, right? Why not just use an ensure block with #browser.close in it? That was my first thought.
So, when I try it this way, it does hit the ensure block, and the ensure block calls a method called kill. Kill calls #browser.close if #browser.exists?. The problem is that when it tries to execute this line, I get a nice long list of errors leading up to selenium-webdriver. It seems as if it's trying to make an HTTP request as part of its close functionality, and is failing because, perhaps, Ctrl+C exited the application.
The stack trace is located at https://gist.github.com/Inkybro/5557085
The very last thing I thought was that maybe I needed to let any calls to the #browser object complete, so I placed a bunch of trap('INT', 'IGNORE') and trap('INT', 'DEFAULT') lines around these pieces of code. This also doesn't seem to do the trick.
I'm not really sure as to why, which is why I'm posting here. What I think needs to happen is that whatever processing is going on at the time of Ctrl+C needs to finish processing before #browser.close can be called. If anybody has experience with Watir and/or Selenium, or even if you don't, perhaps you could help me out?
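For illustration, a sketch of that idea: the trap handler only sets a flag, the loop checks it between pages, and the close happens after the in-flight Selenium call has returned (extract_text is a hypothetical stand-in for the real per-page work):

require 'watir-webdriver'

browser = Watir::Browser.new
interrupted = false

# Record the interrupt instead of letting it abort mid-request.
Signal.trap('INT') { interrupted = true }

until interrupted
  extract_text(browser) # placeholder for the real work on each page
end

# By the time we get here, no Selenium HTTP call is in flight.
browser.close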
Are you supposed to be testing the browser itself, or your product? Because really, if you are doing things like the above (logging in, closing and re-opening the browser to see if you are still logged in), it sounds to me like you are basically testing the browser's ability to store and read cookies and properly provide them when making requests of a page.
What if you presume for a moment that the browser in fact does what it is supposed to do with regard to cookie management and usage? If you do that, then what becomes important? I think it would be that your app tells the browser to create the proper cookie, with the proper contents.
So rather than actually trying to test that Firefox will in fact use its cookies correctly (or any other browser, for that matter), why not simply test that when the user has logged in, the proper cookies have been created? You may also have to test that the cookie is updated periodically, so that a user's session does not expire while they are actively using the site. Again, that's pretty easy to test by just looking at the cookies.
You might also want to test that your server is sensitive to changes in the cookie, or to a missing cookie. That the server is looking at the cookie and not depending on session variables is also pretty easy to test: log in, alter or clear the cookie, try to access a page, and see if it fails.
So with that you get three things:
1) Proper cookies created upon login.
2) Cookies kept updated as various pages are accessed.
3) Missing or altered cookie == no soup for your user.
(#4, "site works properly if cookies are present", is implied by the three above and by the rest of your tests that exercise your site/app.)
Stop trying to do the work of the Mozilla test team, and refocus on testing your application by checking the cookies directly, as in the sketch below.
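For instance, checks 1) and 3) might look roughly like this with watir-webdriver's cookie API (log_in, the cookie name, and the URLs are hypothetical placeholders for your own app):

require 'watir-webdriver'

browser = Watir::Browser.new

# 1) Proper cookie created upon login.
log_in(browser)
session = browser.cookies.to_a.find { |c| c[:name] == 'session_id' }
raise 'no session cookie after login' if session.nil?

# 3) Missing cookie == no soup: clear the jar and hit a protected page.
browser.cookies.clear
browser.goto 'http://example.com/account'
raise 'expected a redirect to login' unless browser.url.include?('/login')

browser.close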
I'm still not sure what you're trying to achieve, but this hacky piece of code ensures the browser is closed on Ctrl+C and no exception is raised:
require 'watir-webdriver'

begin
  browser = Watir::Browser.new
  loop do
    # some code
  end
rescue SystemExit, Interrupt
  puts 'Exiting!'
ensure
  begin
    browser.close if browser # guard in case Browser.new itself failed
  rescue Errno::ECONNREFUSED
    # the driver is already gone; nothing left to close
  end
end

Chrome XMLHttpRequest Hanging

When I make an XMLHttpRequest (via jQuery's $.ajax) to a particular URL, Chrome consistently hangs, with the request's status stuck at 'Pending'.
After that, Chrome must be closed forcibly (i.e. from Task Manager), and it exhibits general signs of mayhem, such as the Cookies and Scripts tabs being empty when they were full of normal-looking data immediately prior.
This is odd because (a) my coworkers, running a seemingly identical everything, have no such problems; (b) I have been using Chrome to run this code (our company's JavaScript app) for many months and this just started happening for no apparent reason.
I checked the Apache logs: the server appears to process the request normally and to completion, but Chrome apparently never sees the reply.
A couple of other clarifications: prior to the failure, the same Chrome and Apache return a truckload of JS and image files normally; i.e., things seem to be fine right up until they aren't. The request is not particularly large (a few hundred bytes in and out) or complex in any obvious way.
If anybody can give me some hints of where to look, I'd be grateful!
I'm experiencing similar behavior with slightly different symptoms. My AJAX requests work fine for every second request, up to six requests; then they all start failing (same URL as when working, same payload, etc.), but in my case they're not even hitting the server, just stuck in "Pending" in the Inspector.
I don't have an answer for you, but to help debug, have you tried Chrome's net-internals?
Point your browser at:
chrome://net-internals/#sockets
and/or
chrome://net-internals/#events
I see my requests in #sockets go into "active", but never come back, and in #events I can see that the request stalls after the HOST_RESOLVER_IMPL_REQUEST stage.
I'm thinking it could be a resource issue caused by not properly ending the request, but that's just pure speculation.

Qt4.8 : QWebView and HTTPS

After having had a hell of a time making SSL work with Qt on Windows, most HTTPS websites seem to be working. Untrusted certificates are now added when required and such; everything is fine... BUT I can't log in on reuters.com for some weird reason.
Take a QWebView, add a bit of magic to handle the SSL errors that show up, go to reuters.com, then click on Sign In.
Then something weird occurs. First of all, it requested acceptance of untrusted certificates, which ain't that weird. But then, once that is done, nothing happens. QWebView waits and never sends the loadFinished(bool) signal. Moreover, the displayed web page doesn't change.
When I try to do this in Firefox or IE, it tells me that there is mixed content on the page. Could that be the problem?

How can I stop my app from logging people out of their session in Safari?

My command line testing tool, which uses NSURLConnection, is interfering with Safari's cookies. How do I stop this from happening?
Here's what I'm seeing:
1) I log into the web site in Safari.
2) I run my command-line sync tool.
3) The sync tool logs in, and gets several pages of data. For each request, the cookie rolls over. (The sync tool does not log out.)
4) I return to Safari and click a link. The link returns me to the login screen.
If I skip steps 2 and 3, the link in Safari works correctly. My tool is clearly the cause of this.
I'm creating my connections like this:
_connection = [[NSURLConnection alloc] initWithRequest: request
                                              delegate: self
                                      startImmediately: NO];
I'm not doing anything explicitly to the cookies, but just letting the default code handle them.
I'm not sure what's really happening here. If Safari and my app really shared the cookies, wouldn't Safari's copy of the cookie also be rolled over? That would be weird behaviour, but everything would still work and I wouldn't even know it was happening. This is something else.
Anyway, how can I stop my command line tool from logging people out of their session in Safari?
Seems like the right approach here is turning off default cookie handling entirely, so it doesn't touch the shared store. You can use -[NSMutableURLRequest setHTTPShouldHandleCookies:NO] to disable the default behavior, then read the cookie headers out of the responses, store them yourself, and insert them back into subsequent URL requests as appropriate.
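To make the store-and-replay flow concrete, here is the same pattern sketched in Ruby's net/http, which does no automatic cookie handling at all, so the manual jar is explicit (the host, paths, credentials, and cookie contents are made up):

require 'net/http'
require 'uri'

uri  = URI('https://example.com/login')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

# Log in, then keep the cookies ourselves rather than relying on a shared store.
response = http.post(uri.path, 'user=me&pass=secret')
jar = response.get_fields('Set-Cookie').to_a.map { |c| c.split(';').first }

# Replay the stored cookies on the next request.
data = http.request(Net::HTTP::Get.new('/account', 'Cookie' => jar.join('; ')))

The equivalent in the Cocoa world is exactly what is described above: set HTTPShouldHandleCookies to NO, pull Set-Cookie out of each response, and add a Cookie header to each outgoing request.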

Any way to get around the browser http timeout during debugging?

I am currently working on a Django project. There is a problem which isn't a true problem, but is very annoying. Often, when I try to debug my Django app by putting down some breakpoints, I get this error at the server end:
error: [Errno 32] Broken pipe
After reading this other post, Django + WebKit = Broken pipe, I have learned that this has nothing to do with the server, but with the client browser: the browser has an HTTP request timeout, and if it doesn't receive a response within that timeout, it closes the connection to the server.
I find this timeout isn't really needed during debugging; indeed, it causes headaches. Is there any way I can lift or increase this timeout for my browser (Chrome)? Or maybe there's a substitute browser that doesn't have this constraint?
Note: although I am using Django and have mentioned it, this isn't a Django-related question. It's more a question about how to make my debugging process more effective.
I prefer using the Linux/Unix curl command for debugging web applications. It's a good approach, especially if you want to focus on one specific request: for example, a POST that doesn't work for some set of parameters, or cookies that are not set as expected.
Of course it may take some time at the beginning to find out how to use it, but then you will have total control over every single piece of the request: timeouts, cookies, headers, and so on. It's very helpful, because you can be sure that what you wanted to send is actually sent (no additional data is added by the web browser).
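For example, a single invocation can pin down the headers, cookies, body, and timeout explicitly (the URL, cookie, and form values here are placeholders):

curl -v --max-time 300 \
     -H 'Content-Type: application/x-www-form-urlencoded' \
     -b 'sessionid=abc123' \
     -d 'name=value' \
     http://localhost:8000/some/endpoint/

Here -d implies a POST, and --max-time caps how long curl waits for the response, which is exactly the knob the browser doesn't expose.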
