When a form POST submission (say, a login form) fails, Firefox displays a "Try Again" message.
Is there any way to click this "Try Again" button automatically, or some setting in Firefox's about:config that makes it click itself?
"Clicking" the Try Again button is relatively easy. There is an extension that does just that, and lets you set the number of seconds between retries.
The real rub here is that you want to "blindly" retry form POSTs. As we all know, just because you didn't get a response, that doesn't necessarily imply that nothing was changed on the server.
Re-submitting a login form sounds harmless enough, and usually is. But if you imagine forms that result in orders being placed or money being moved, it's easy to understand why browsers have implemented this kind of warning.
That warning is what you'll see if you enable an extension like TryAgain and a form POST fails. It's the same behavior you'd get by pressing F5 yourself. The extension will dutifully try to POST again, but the browser will intervene with an alert and refuse to send the POST until "Resend" is clicked.
This kind of safety feature does a fair amount to protect end-users and developers from poor implementations and network hiccups. However, it's really going to work against what you're trying to accomplish.
That said, if you could figure out a way to modify the extension to detect the alert and somehow click "Resend", you'd be in business. I can't say for sure that this is impossible, but it kind of looks that way, at least for now: this issue was marked as "won't fix", and this issue is still open.
Here is an extension for Firefox:
auto reload
But I would warn you: you could end up auto-submitting sensitive data. Browsers usually ask before re-sending because they don't want sensitive data to be submitted without the user's consent.
Related
Is it possible to set some flag in my browser so that I always get the reCAPTCHA image challenges? Sometimes when you click on the "I am not a robot" button, it gives you a pop-up challenge with something like "Click all the images which contain a car", but sometimes it just checks off the box and takes your word for it that you're not a robot.
I would like to test the UI of my tool both on a desktop and on mobile, and make sure that the challenge pop up shows up and interacts well with other elements of the page.
In other words, as a developer, I want Google to think that I'm a robot so that it always gives me the visual challenge.
Is there any way to force this behavior?
Note: I've done some research and was unable to find any relevant questions or blog posts that might yield an answer.
Force Google recaptcha to use simple checkbox click challenge asks for a way to force Google to NOT use the visual challenge, only the checkbox
How to force recheck user with reCAPTCHA? talks about forcing a recheck of some kind, but has no answers
https://groups.google.com/forum/#!topic/recaptcha/2ed-s3KK3Do asks the same question as mine, but users did not seem keen on providing answers, with one user just suggesting not to use reCAPTCHA at all!
https://developers.google.com/recaptcha/docs/faq#id-like-to-run-automated-tests-with-recaptcha-v2-what-should-i-do is straight from Google, but it does exactly the opposite of what I want - it sets your site up such that the captcha appears on the page but is actually a test captcha that always lets you pass, and NEVER gives you the challenge. I want the exact inverse of this.
The methods described here should generally work, but there is no guarantee. There is, however, a very easy way to guarantee that the Google reCAPTCHA challenge always shows up: add a custom bot device in the developer tools and then use that device to test.
In Chrome Dev Tools, open Settings. Open Devices after that.
Add a custom device with any name and set User Agent String to Googlebot/2.1
Finally, in Device Mode, at the left of the top bar, choose the custom device that you created (the default is Responsive).
Thanks to the SO users who had put it up in the answer and follow-up comment here.
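If you'd rather script this than click through Dev Tools by hand, the same trick can be automated. Here is just a rough sketch using the Ruby selenium-webdriver gem (an assumption on my part, the thread itself only covers the Dev Tools route; the URL is a placeholder):
require 'selenium-webdriver'

# spoof the same Googlebot UA that the custom Dev Tools device uses
options = Selenium::WebDriver::Chrome::Options.new
options.add_argument('--user-agent=Googlebot/2.1')

driver = Selenium::WebDriver.for(:chrome, options: options)
driver.navigate.to 'https://your-site.example/page-with-recaptcha'  # placeholder URL
# click the "I am not a robot" checkbox as usual; with the bot UA the image challenge should show up
driver.quit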
I too have been looking for similar functionality. While I have not found a code-based solution to force the challenge, I have found a fairly reliable hack.
Grab a VPN tool (I happen to use IP Vanish), then connect to a remote server (I've had success connecting to China). Then, open up a private/incognito window and fill out your form.
From my testing, the combination of the remote IP and the blank user session triggers the challenge.
Here are a few things you can try. In my experience all of them will increase your chances of getting a challenge.
Log in at https://www.google.com/recaptcha/admin and edit your reCAPTCHA settings. Under Security Preference, choose Most Secure.
Use a VPN + incognito mode (as suggested here).
If you're using the invisible reCAPTCHA, I found that using explicit rendering + immediately calling grecaptcha.execute() after grecaptcha.render() will usually trigger the challenge. I suspect this is because Google's AI expects a user interaction of some kind to trigger grecaptcha.execute() and not the onloadCallback itself.
I use reCAPTCHA's SDK on Android, and I also ran into the need to force the challenge while testing. After many attempts, what worked was toggling airplane mode off and on; on the retest the challenge appeared. My guess is that Google had put my IP on a whitelist in the background, so I was passing verification without any challenge.
That should be possible, because when LinkedIn forcefully logged out a user for excessive usage, it showed a captcha on the next login, and the challenge was always there.
Unfortunately, LinkedIn switched from reCAPTCHA to another provider just a few days ago, so I can no longer just look it up in their JavaScript code.
That is what makes me believe that reCAPTCHA does have an undocumented option to force the challenge.
2022 and later
It seems to be getting harder and harder to trigger the challenge of the invisible reCAPTCHA. Using a bot User-Agent and going into incognito mode is not enough anymore. A VPN might work, but I do not trust free VPN services.
I am, however, still able to trigger the challenge when I only use the keyboard while filling in the form fields and press the submit button with the Enter key. It seems that Google reCAPTCHA now also follows your mouse movements to determine whether you are a real user. Make sure to never hover your mouse cursor over the webpage and only use the keyboard.
I was looking for something like this, and after some research plus trial & error, what worked for me was to use the invisible reCAPTCHA and invoke the challenge with JS.
After you have loaded the reCAPTCHA script on your page, run
grecaptcha.execute()
and the challenge might be invoked.
I'm faced with an interesting task:
Our transport guys have to monitor a 3rd-party webpage the entire day, clicking a button every 5 seconds to refresh the page and get the available transport slots. The slots section is only updated when the button is clicked. When slots become available, the available-slot label changes from "0" to "1" or "2", depending on the number of open slots...
Is there any way to write a script that would automatically click the button and raise an alert when that specific value on the page changes? Maybe some sort of UI testing framework that could automate this?
Any suggestions?
Pressing a button on such a webpage always boils down to an HTTP request, which you can do with pretty much plain Ruby and net/http. However, I guess there is some authentication going on there, so cookies may have to be preserved. For such uses, Mechanize is a very nice library. It relies on Nokogiri, and the pages you get back are really easy to scan for changes such as the number of open slots you need.
Without more detailed information about the pages you need to scrape, this is pretty much all the advice you can get.
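To make that a little more concrete, here is a rough Mechanize sketch. The URL, the selector for the slot label, and the login handling are all placeholders, since I don't know the real page:
require 'mechanize'

agent = Mechanize.new

# if the site needs a login, do it once here; Mechanize keeps the session cookies
# login_page = agent.get('https://transport.example.com/login')   # placeholder URL
# form = login_page.forms.first
# form['user'] = 'someone'; form['pass'] = 'secret'
# agent.submit(form)

loop do
  page  = agent.get('https://transport.example.com/slots')  # placeholder: "clicking" is just re-requesting
  slots = page.at('#open-slots')&.text.to_i                 # placeholder selector for the slot label
  puts "#{Time.now}: #{slots} slot(s) available!" if slots > 0
  sleep 5
end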
We have a problem with our SaaS site. We sometimes have users kicked out because our authentication cookie is not there (or is possibly corrupted). This happens rarely enough that it is hard to track down, but often enough that I want to know why.
I want to install a monitor/sniffer for one of our support engineers. They hit the problem every once in a while and can stop and call when it happens.
I am looking for something that will log page visits (with timestamp) and cookie changes (create/mod/delete).
Does anyone have a tool that will do this type of logging for Firefox? Maybe an SQLite tool that will work with Firefox (which I think takes an exclusive lock on the SQLite db file).
I think Tamper Data can help you. Open the Tamper Data window, perform the requests, then right-click -> Export as XML. You can view the cookies by double-clicking the Cookie header.
I am using Ruby to screen-scrape a web page (created in ASP.NET) which uses a GridView to display data. I can successfully read the data displayed on page 1 of the grid, but I can't figure out how to move to the next page of the grid to read all the data.
The problem is that the page-number hyperlinks are not normal hyperlinks (with a URL) but JavaScript hyperlinks that cause a postback to the same page.
An example of the hyperlink:-
6
I recommend using Watir, a Ruby library designed for browser testing, if you're already using Ruby for processing. For one thing, it gives you a much nicer interface to the DOM elements on the page, and it makes clicking links like this easier:
ie.link(:text, '6').click
Then, of course you have easier methods for navigating the table as well. It's easy enough to automate this process:
(1..total_number_of_pages).each do |next_page|
  ie.link(:text, next_page.to_s).click
  # table processing goes here
end
I don't know your use case, but this approach has its advantages and disadvantages. For one thing, it actually runs a browser instance, so if this is something you need to run frequently and quietly in the background in a completely automated way, this may not be the best approach. On the other hand, if it's OK to launch a browser instance, then you don't have to worry about all that postback nonsense, and you can just click the link as if you were a user.
Watir: http://wtr.rubyforge.org/
You'll need to figure out the actual URL.
Option 1a: Open the page in a browser with good developer support (e.g. Firefox with the web developer tools) and look through the source to find where __doPostBack is defined. Figure out what URL it's constructing. Note that it might not be in the main page source, but in something that the page loads.
Option 1b: Ditto, but have Ruby do it. If you're fetching the page with Net::HTTP, you've already got the tools to find the definition of __doPostBack (the body as a string, Ruby's grep, and the ability to request additional files, such as those in script tags); see the short sketch after this list.
Option 2: Monitor the traffic between a browser and the page (e.g. with a logging proxy) to find out what the URL is.
Option 3: Ask the owner of the web page.
Option 4: Guess. This may not be as bad as it sounds (e.g. if the original URL ends with "...?page=1" or something) but in general this is the least likely to work.
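The short sketch promised in option 1b, assuming Net::HTTP (the URL is a placeholder): fetch the page and grep the body for the postback machinery and for any external scripts that might define it:
require 'net/http'
require 'uri'

body = Net::HTTP.get(URI('http://example.com/GridPage.aspx'))   # placeholder URL
puts body.scan(/__doPostBack\([^)]*\)/).uniq                    # how the pager links call it
puts body.scan(/<script[^>]*src="([^"]*)"/i).flatten            # external scripts that may define it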
Edit (in response to your comment on the other question):
Assuming you're using the Net::HTTP library, you can do a postback by replacing your get with a post and supplying the form data, e.g. my_http.post(my_url, form_data) instead of my_http.get(my_url).
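To make that concrete, here is a hedged sketch of what the postback could look like with Net::HTTP. The hidden field names are the standard WebForms ones, but the URL, the control id and the pager argument are placeholders you would have to read off the real page:
require 'net/http'
require 'uri'

uri  = URI('http://example.com/GridPage.aspx')   # placeholder URL
body = Net::HTTP.get(uri)                        # fetch page 1 to harvest the hidden fields

# WebForms round-trips its state in hidden inputs; pull them out of the HTML
viewstate  = body[/id="__VIEWSTATE" value="([^"]*)"/, 1]
validation = body[/id="__EVENTVALIDATION" value="([^"]*)"/, 1]

# the pager link calls __doPostBack('<control id>', 'Page$6'); replay that as a POST
res = Net::HTTP.post_form(uri,
  '__EVENTTARGET'     => 'GridView1',            # placeholder: first __doPostBack argument
  '__EVENTARGUMENT'   => 'Page$6',               # placeholder: second argument (the pager command)
  '__VIEWSTATE'       => viewstate,
  '__EVENTVALIDATION' => validation)

puts res.body                                    # should now contain page 6 of the grid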
Edit (in response to danieltalsky's answer):
Watir may be a really good solution for you (I'm kicking myself for not having thought of it), but be aware that you may have to manually fire the event or jump through other hoops to get what you want. As a specific gotcha, with any asynchronous fetch like this you need to make sure that the full response has come back before you scrape it; that isn't a problem when you're doing the request inline yourself.
You will have to perform the postback. The data is passed with a form POST back to the server. Like Markus said, use something like Firebug, the Developer Tools in IE 8, or Fiddler to watch the traffic. But honestly, this is a WebForms page using the bloated GridView, so you're in for a fun adventure. ;)
You'll need to do some investigation to figure out what HTTP request the JavaScript execution is performing. I've used the Mozilla browser with the Firebug plugin and also the "Live HTTP Headers" plugin to help determine what is going on. It will likely become clear which requests you need to make in order to traverse to the next page. Make sure you pay attention to any cookies getting set.
I've had really good success using Mechanize for scraping. It wraps all of the HTTP communication, HTML parsing and searching (using Nokogiri), redirection, and holding onto cookies. But it doesn't know how to execute JavaScript, which is why you will need to figure out what HTTP request to perform on your own.
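If you go the Mechanize route, here is a hedged sketch of the same idea: let Mechanize carry the cookies and hidden fields, and just fill in the two __doPostBack parameters yourself (the URL and the control id below are placeholders):
require 'mechanize'

agent = Mechanize.new
page  = agent.get('http://example.com/GridPage.aspx')   # placeholder URL

form = page.forms.first                                 # WebForms pages normally have a single <form>
form['__EVENTTARGET']   = 'GridView1'                   # placeholder: first argument to __doPostBack
form['__EVENTARGUMENT'] = 'Page$2'                      # placeholder: second argument (which page to show)

next_page = form.submit                                 # Mechanize posts the viewstate and cookies for you
next_page.search('table tr').each { |row| puts row.text.strip }   # Nokogiri search over the result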
What is the requirement for the browser to show the ubiquitous "this page has expired" message when the user hits the back button?
What are some user-friendly ways to prevent the user from using the back button in a webapp?
Well, by default, whenever you're dealing with a form POST and the user hits back and then refresh, they'll see the message indicating that the browser is resubmitting data. But if the page is set to expire immediately, they won't even have to hit refresh; they'll see the "page has expired" message as soon as they hit back.
To avoid both messages, there are a couple of things to try:
1) Use a form GET instead. It depends on what you're doing, but this isn't always a good solution, as there are still size restrictions on a GET request, and the information is passed along in the query string, which isn't the most secure of options.
-- or --
2) Perform a server-side redirect to a different page after the form POST.
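For example, here is a bare-bones Post/Redirect/Get sketch. It's shown in Ruby with Sinatra purely for illustration (the question doesn't say what stack is in use, and the routes are made up); the same idea applies to any server-side framework:
require 'sinatra'

post '/checkout' do
  # ...process the order here...
  # for a non-GET request over HTTP/1.1, Sinatra's redirect answers with
  # 303 See Other, so the browser fetches /thanks with a fresh GET and
  # Back/Refresh never tries to re-send the POST
  redirect '/thanks'
end

get '/thanks' do
  'Order received.'
end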
Looks like a similar question was answered here:
Redirect with a 303 after POST to avoid "Webpage has expired": Will it work if there are more bytes than a GET request can handle?
As a third option, one could prevent a user from going back in their browser at all. The only time I've felt a need to do this was to prevent them from doing something stupid, such as paying twice, although there are better server-side methods to handle that. If your site uses sessions, you can prevent double payment by first disabling caching on the checkout page and setting it to expire immediately, and then using a flag of some sort stored in the session that changes the behavior of the page if the user goes back to it.
You need to set the Pragma / Cache-Control options in the HTTP headers:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9
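For example, the relevant response headers look something like this (shown here as a Ruby hash purely for illustration; any stack can send the same values):
# typical "don't cache this page" headers (see section 14.9 of the RFC above, plus the legacy Pragma header)
no_cache_headers = {
  'Cache-Control' => 'no-cache, no-store, must-revalidate',
  'Pragma'        => 'no-cache',   # honoured by older HTTP/1.0 caches
  'Expires'       => '0'
}
# merge these into the response for any page you don't want replayed from the history cache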
However, from a usability point of view, this is a discouraged approach to the matter. I strongly encourage you to look for other options.
P.S.: as proposed by Steve, redirecting via GET is the proper way (or check page movement with JS).
Try using the following code in the Page_Load
Response.Cache.SetCacheability(HttpCacheability.Private)
use one of the following before session_start:
session_cache_expire(60); // in minutes
ini_set('session.cache_limiter', 'private');
Note: the language here is PHP.
I'm not sure if this is standard practice, but I typically solve this issue by not sending a Vary header for IE only. In Apache, you can put the following in httpd.conf:
BrowserMatch MSIE force-no-vary
According to the RFC:
The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation.
The practical effect is that when you go "back" to a POST, IE simply gets the page from the history cache. No request at all goes to the server side. I can see this clearly in HTTPWatch.
I would be interested to hear potential bad side-effects of this approach.