detect in Rails if the user refreshes the page - ruby

I am using Rails 3.0 and Ruby 1.8.7.
How can I detect if the user refreshed the browser page?
I am coding a web wizard form, so I go to the next step if everything is valid. However, if the user refreshes the page, I don't want to go to the next step.
Update
I put a hidden field (I know about the session solution) which is updated when the user submits the form. My problem is that the form has many steps.
Any ideas?
Thanks in advance

Actually, the server shouldn't know anything about the client's state; that's what REST was designed for.

If you really need it, you can use the flash object to detect when the user refreshes the page.
For this to work, the entry point of the page has to be a redirect_to, which makes things a little more complicated, but at least it will solve your problem.
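One way to make the hidden-field idea from the update survive a refresh across many steps is a one-time token: store a random token in the session when rendering each step, embed it in the hidden field, and consume it on a successful submit. A refresh resubmits an already-consumed token, so the wizard stays put. Here is a plain-Ruby sketch of the idea (the class and method names are made up for illustration; in Rails the token would live in `session` and the form's hidden field):

```ruby
require "securerandom"

class WizardStep
  def initialize
    @session = {}
  end

  # Render the form: store a fresh token in the session and return it
  # (in Rails this value would go into the hidden field).
  def render_form
    @session[:form_token] = SecureRandom.hex(8)
  end

  # Handle the submit: advance only if the submitted token matches the
  # still-unconsumed one in the session.
  def submit(submitted_token)
    if submitted_token && submitted_token == @session[:form_token]
      @session.delete(:form_token) # consume the token
      :next_step
    else
      :stay # a refresh replays an already-consumed token
    end
  end
end

step = WizardStep.new
token = step.render_form
step.submit(token) # the first submit advances the wizard
step.submit(token) # a resubmitted (refreshed) token does not
```

Combined with a redirect after each successful step (the Post/Redirect/Get pattern the flash answer alludes to), a plain refresh then only repeats a GET and never re-runs the step transition.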

Related

Is there a better way to work with the success page (success.phtml) in Magento?

I'm attempting to do a number of things with the success page (order confirmation page) in Magento, but I am faced with the nuisance of having to create a test order every time I wish to see a change, because hitting refresh redirects you. The style changes are easy, as I can refresh only the CSS if necessary, but some of the conversion analytics (e.g. Google) and other items I'm trying to include on the final page aren't as straightforward. Is there a better way? Thanks in advance.
Magento clears session information for a customer after they place their order as it assumes most users will leave the site after that. It’s more of a user-experience feature than it is a security feature. That being the way it is, when you hit refresh on the order confirmation page, your information will disappear and Magento will generally tell you “you have no items in your cart.”
You can disable this for development purposes by going to app/code/core/Mage/Checkout/controllers/OnepageController.php and commenting out the line that says:
$session->clear();
It should be around line 240. Change it to //$session->clear(); and Magento will instead allow the session to expire naturally, according to how long the session lifetime is set for that specific instance. Now you can style success.phtml or see what conversion information is being sent to various service providers (PepperJam, Google, Proclivity, etc.) without having to create more than one test order.
You could use Selenium IDE, a Firefox add-on, to record a macro for placing an order. This should avoid the repetitive process of placing an order by hand.
https://addons.mozilla.org/en-us/firefox/addon/selenium-expert-selenium-ide/
You can also use this bookmarklet to auto-populate the fields on the checkout page.
http://www.nicksays.co.uk/auto-populate-magento-checkout-bookmarklet/

So many caches when coding in asp.net mvc 3 and don't know what to use

I'm writing an application in MVC3. It has features like login, a simple forum, news, and pages that get their main content from the DB.
I'm looking into caching right now.
First I tried the simple [OutputCache] attribute, but noticed that it caches the same content for every user. Normally that wouldn't be much of a problem, but - for example - the login box is cached too, and therefore it shows the same content for every user (everybody will just see that they are logged in as admin). Even if I set Location = OutputCacheLocation.Client, after a logout the cached page still shows that I'm logged in.
No matter, I thought I can always try Response.WriteSubstitution, but for some reason it seems to be broken in MVC3.
I'm now reading about the "ASP.NET MVC Result Cache", and it seems interesting, but is it a proper way to handle caching?
Also, am I able to cache child actions or partial views in an otherwise very dynamic page?
There are so many options, and I don't know what I should use and when.
Sorry that my question is so vague, but I don't even know what to ask in this case.
I think this post may solve your problem:
MVC3 custom outputcache
Good luck

Spider/Crawler for testing an AJAX web app that requires a session cookie?

We have a web app that is heavy on AJAX, and it is very customizable, so we need something that will click on every link in it to make sure that none of the forms/pages break. I know there are lots of spiders/crawlers out there, but we haven't been able to find one that's easy to implement, works with AJAX, and lets you supply a session cookie.
Well, considering you asked this question over two years ago, I doubt you'll have much need for the answer. But in case someone else comes across this question from a search engine, here's my suggestion:
Use Selenium (http://seleniumhq.org/) or IEUnit (https://code.google.com/p/ieunit/) to automate a browser itself. They both operate on top of a JavaScript engine, so you can write a few lines of code to click on every anchor tag in your site.
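If a full browser turns out to be overkill for parts of the site, the basic shape of a cookie-carrying crawler can be sketched in a few lines of Ruby. Everything here is illustrative: the cookie value is a placeholder, and the regex-based link extraction won't see links that only exist after JavaScript runs, which is exactly why the browser-automation route is usually the right answer for an AJAX-heavy app:

```ruby
require "net/http"
require "uri"

# fetch sends the session cookie with every request; extract_links pulls
# each href out of the returned HTML (a regex is enough for a sketch;
# a real crawler would use a proper parser such as Nokogiri).
def fetch(url, cookie)
  uri = URI(url)
  req = Net::HTTP::Get.new(uri)
  req["Cookie"] = cookie # reuse the logged-in session
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(req).body
  end
end

def extract_links(html)
  html.scan(/href="([^"]+)"/).flatten
end

extract_links('<a href="/step1">One</a> <a href="/step2">Two</a>')
# => ["/step1", "/step2"]
```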

specific limitations of AJAX?

I'm still pretty new to AJAX and javascript, but I'm getting there slowly.
I have a web-based application that relies heavily on MySQL; there are individual user accounts that are accessed, and the UI is populated with user-specific data.
I'm working on getting rid of a tabbed navigation bar that currently loads new pages because all that changes from page to page is information within one box.
The thing is that box needs to reload info from the database, etc.
I have had great help from users here showing that I need to call the database within the php page that ajax is calling.
OK - so pardon the lengthy intro. What I'm wondering is: are there any specific limitations to what AJAX can call that I need to know about? E.g., someone mentioned that it's best not to call script files, and that I should remove scripts from the PHP page that is being called and keep those in the 'parent' page. Any other things like this I need to keep in mind?
To clarify: I'm not looking to discuss the merits/drawbacks of the technology. I'm wondering about specific coding implementation details I need to be aware of (for example, I didn't realize until yesterday that even if I had established a MySQL connection on the page, I would need to re-establish that connection in my called page as well... makes perfect sense now).
XMLHttpRequest, which powers AJAX, has a number of limitations. I recommend brushing up on the same-origin policy. It is a pivotal rule because it limits where AJAX calls can be made.
First, you can't have Javascript embedded in the HTTP response to an AJAX call. That's a security issue.
No mention of the dynamics of the database, but if the data to be displayed in tabs doesn't have to be real-time, why not cache it server-side?
I find that, like any other technique, AJAX works best in tightly controlled conditions. It wouldn't make much sense for updating nearly the whole page, unless you find that the user experience is improved with an on-page 'loader'. Without going into workarounds, the disadvantages include losing the browser back button / history, issues such as the one your friend mentioned, embedded resources and other rich content suffering as well, and just having an extra layer of complexity to deal with in your app. Don't treat it as magic sauce for your app - make sure every use delivers specific results that benefit your client / audience.
IMHO, it's best to put your client-side JavaScript in a separate file and then import it - a neater container. One thing I've faced before is XML coming back that contains code to run, such as more JavaScript - it's worth checking early on whether this is likely and avoiding it, rather than having to resort to evals.

a script to log in to a webpage

I want to write a script to log in to and interact with a web page, and I'm a bit at a loss as to where to start. I can probably figure out the HTML parsing, but how do I handle the login part? I was planning on using bash, since that is what I know best, but am open to any other suggestions. I'm just looking for some reference materials or links to help me get started. I'm not really sure if the password is stored in a cookie or whatnot, so how do I assess the situation as well?
Thanks,
Dan
Take a look at cURL, which is generally available in a Linux/Unix environment. It lets you script a call to a web page, including POST parameters (say, a username and password), and lets you manage the cookie store, so that a subsequent call (to get a different page within the site) can use the same cookie, and your login will persist across calls.
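The same pattern (POST the credentials, capture the session cookie from the response, replay it on later requests) can also be sketched in Ruby with Net::HTTP. The login URL, field names, and cookie values below are placeholders, and a real site may need extra form fields or redirect handling:

```ruby
require "net/http"
require "uri"

# extract_cookies turns one or more Set-Cookie response headers into a
# single Cookie request header; login posts the credentials and returns
# that header so it can be sent with every subsequent request.
def extract_cookies(set_cookie_values)
  set_cookie_values.map { |c| c.split(";").first }.join("; ")
end

def login(login_url, username, password)
  res = Net::HTTP.post_form(URI(login_url),
                            "username" => username,
                            "password" => password)
  extract_cookies(res.get_fields("Set-Cookie") || [])
end

extract_cookies(["sid=abc123; Path=/; HttpOnly", "lang=en; Path=/"])
# => "sid=abc123; lang=en"
```

With cURL itself, the equivalent is the -c (write cookie jar) and -b (send cookie jar) options on successive calls.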
I did something like that at work some time ago; I had to log in to a page and post the same data over and over...
Take a look here. I used wget because I could not get it working with curl.
Search this site for screen scraping. It can get hairy, since you will need to deal with cookies, JavaScript, and hidden fields (viewstate!). Usually you will need to scrape the login page to get the hidden fields and then POST to the login page. Have fun :D
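The hidden-field scraping step can be sketched in Ruby like this (the regex approach and the sample field names, such as ASP.NET's __VIEWSTATE, are illustrative; a real scraper would use an HTML parser):

```ruby
# hidden_fields scans a page for <input type="hidden"> elements and
# returns a name => value hash, ready to be merged into the login POST.
def hidden_fields(html)
  html.scan(/<input[^>]*type="hidden"[^>]*name="([^"]+)"[^>]*value="([^"]*)"/).to_h
end

page = '<input type="hidden" name="__VIEWSTATE" value="dDwtMTA4" />' \
       '<input type="hidden" name="__EVENTVALIDATION" value="abc" />'
hidden_fields(page)
# => {"__VIEWSTATE" => "dDwtMTA4", "__EVENTVALIDATION" => "abc"}
```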
