Is it possible to set some flag in my browser so that I always get the reCAPTCHA image challenges? Sometimes when you click the "I am not a robot" button, it gives you a pop-up challenge like "Click all the images which contain a car", but sometimes it just checks off the box and takes your word for it that you're not a robot.
I would like to test the UI of my tool both on desktop and on mobile, and make sure that the challenge pop-up shows up and interacts well with other elements of the page.
In other words, as a developer, I want Google to think that I'm a robot so that it always gives me the visual challenge.
Is there any way to force this behavior?
Note: I've done some research and was unable to find any relevant questions or blog posts that might yield an answer.
Force Google recaptcha to use simple checkbox click challenge asks for a way to force Google to NOT use the visual challenge, only the checkbox
How to force recheck user with reCAPTCHA? talks about forcing a recheck of some kind, but has no answers
https://groups.google.com/forum/#!topic/recaptcha/2ed-s3KK3Do asks the same question as mine, but users did not seem keen on providing answers, with one user simply suggesting not to use reCAPTCHA at all!
https://developers.google.com/recaptcha/docs/faq#id-like-to-run-automated-tests-with-recaptcha-v2-what-should-i-do is straight from Google, but it does exactly the opposite of what I want - it sets your site up such that the captcha appears on the page but is actually a test captcha that always lets you pass, and NEVER gives you the challenge. I want the exact inverse of this.
The methods described here should generally work, but there is no guarantee. There is, however, a very easy way to guarantee that the Google reCAPTCHA challenge always shows up: add a custom BOT device in developer tools and then use it for testing.
In Chrome DevTools, open Settings, then open Devices.
Add a custom device with any name and set User Agent String to Googlebot/2.1
Finally, in Device Mode, at the left of the top bar, choose the custom device that you created (the default is Responsive).
Thanks to the SO users who posted this in the answer and follow-up comment here.
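If you drive the browser from a script rather than by hand, the same user-agent trick can be applied programmatically. Here is a minimal sketch using Puppeteer (my own choice of tool, not part of the original tip); the target URL is a placeholder:
const puppeteer = require('puppeteer');
(async () => {
  // Pretend to be Googlebot so reCAPTCHA is more likely to serve the image challenge.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setUserAgent('Googlebot/2.1 (+http://www.google.com/bot.html)');
  await page.goto('https://example.com/page-with-recaptcha'); // placeholder URL
  // ...fill in the form here and watch whether the challenge shows up...
  await browser.close();
})();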
I too have been looking for similar functionality. While I have not found a code-based solution to force the challenge, I have found a fairly reliable hack.
Grab a VPN tool (I happen to use IP Vanish), then connect to a remote server (I've had success connecting to China). Then, open up a private/incognito window and fill out your form.
From my testing, the combination of the remote IP and the blank user session triggers the challenge.
Here are a few things you can try. In my experience all of them will increase your chances of getting a challenge.
Log in at https://www.google.com/recaptcha/admin and edit your reCAPTCHA settings. Under Security Preference, choose Most Secure.
Use a VPN + incognito mode (as suggested here)
If you're using the invisible reCAPTCHA, I found that using explicit rendering and immediately calling grecaptcha.execute() after grecaptcha.render() will usually trigger the challenge (see the sketch below this list). I suspect this is because Google's AI expects a user interaction of some kind to trigger grecaptcha.execute(), not the onloadCallback itself.
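For the third point, this is roughly the pattern I mean; a minimal sketch where the site key is a placeholder and the page needs a div with the id used below:
<script src="https://www.google.com/recaptcha/api.js?onload=onloadCallback&render=explicit" async defer></script>
// Render the invisible reCAPTCHA explicitly and execute it right away,
// rather than waiting for a user interaction such as a button click.
function onloadCallback() {
  var widgetId = grecaptcha.render('recaptcha-container', {
    sitekey: 'YOUR_SITE_KEY', // placeholder
    size: 'invisible',
    callback: function (token) {
      console.log('reCAPTCHA token:', token);
    }
  });
  grecaptcha.execute(widgetId); // immediate execute: usually triggers the challenge
}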
I use the reCAPTCHA SDK on Android, and I also ran into the need to force the challenge when testing. I tried many times; in the end, toggling flight mode off and on worked, and the challenge then appeared on the retest. My guess is that Google had put my IP on a whitelist in the background, which is why I was passing verification without any challenge.
That should be possible, because when LinkedIn forcibly logged out a user for excessive usage, it showed a captcha on the next login, and the challenge always appeared.
Unfortunately, LinkedIn switched from reCAPTCHA to another provider just a few days ago, so I cannot simply look into their JavaScript code.
This is what makes me believe that reCAPTCHA does have an undocumented option to force the challenge.
2022 and later
It seems to be increasingly harder to trigger the challenge of the invisible reCAPTCHA. Using the user agent of a bot and going into incognito mode is not enough anymore. A VPN might work, but I do not trust free VPN services.
I am, however, still able to trigger the reCAPTCHA challenge when I only use the keyboard while filling in the form fields and press the submit button with the Enter key. It seems that Google reCAPTCHA now also follows your mouse movements to determine whether you are a real user. Make sure never to hover your mouse cursor over the webpage and only use the keyboard.
I was looking for something like this, and after some research plus trial and error, what worked for me was to use the invisible reCAPTCHA and invoke the challenge with JS.
After you have loaded the reCAPTCHA script on your page, call
grecaptcha.execute()
and the challenge may be invoked.
Related
My client needs data scraped from a website. I am planning to use php_curl. The problem is, the site uses Google reCAPTCHA. A few valuable data items are visible only when you click a "show this information" link; the reCAPTCHA then appears in a lightbox and vanishes, and the information is displayed.
I have checked the source HTML; the protected item is actually loaded only when someone clicks, and there is no way for me to automate this click. I have even tried to open the site in an iframe and then use JS to click it, but that fails because the two domains are different. I have also tried to use the Selenium standalone version, but its downloads are corrupt.
Unless there is a design flaw with the website, the reCAPTCHA will prevent you from scraping the material without human intervention.
Technically, your best bet is to employ humans to solve CAPTCHAs all day and write some software to automatically scrape the material each one protects once they solve it. A number of viable businesses have been created this way, where the data is valuable and there is a genuine public interest in opening up the data set. (For example, I've heard that airlines use CAPTCHAs to prevent price-comparison sites from driving down the cost to the consumer, and I'd argue that in such a case there is an overwhelming public interest in defeating such defences.)
Morally, however, you would need to tell us what you are doing in order for us to advise you. It is possible your client is merely planning to steal other people's material and then attempt to monetise it for him/herself, even though they had no hand in creating it. That may breach some copyright laws, but moreover, they (and you) need to decide if the scraping is fair.
I was facing the same problem but resolved it by clearing the cookies on my HTTP request/user agent, then waiting for some time (Thread.Sleep), and then starting to scrape again. I am doing this in C#, not PHP, but applying the same logic may help you.
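The same idea (start with a clean session, wait, then try again) sketched in JavaScript rather than C#; the URL and the waiting time are placeholders:
const sleep = function (ms) { return new Promise(function (resolve) { setTimeout(resolve, ms); }); };
async function scrapeInPasses(url, passes) {
  for (let i = 0; i < passes; i++) {
    // Each fetch() here carries no stored cookies, so every pass starts a fresh session.
    const response = await fetch(url);
    const html = await response.text();
    console.log('pass ' + (i + 1) + ': got ' + html.length + ' bytes');
    await sleep(60 * 1000); // wait a while before the next pass
  }
}
scrapeInPasses('https://example.com/protected-page', 3); // placeholder URL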
Suppose a form POST submission (say, a login) fails and Firefox displays the "Try Again" message.
Is there any way to click this "Try Again" button automatically, or any setting in Firefox's about:config that makes it click automatically?
Related
"Clicking" the Try Again button is relatively easy. There is an extension that does just that, and lets you set the number of seconds between retries.
The real rub here is that you want to "blindly" retry form POSTs. As we all know, just because you didn't get a response, that doesn't necessarily imply that nothing was changed on the server.
Re-submitting a login form sounds harmless enough, and usually is. But if you imagine forms that result in orders being placed or money being moved, it's easy to understand why browsers have implemented this kind of warning:
This warning is what you'll see if you enable an extension like TryAgain and a form POST fails. It's the same behavior you'd get by pressing F5 yourself. The extension will dutifully try to POST again, but the browser is going to intervene with an alert, and refuse to send the POST until "Resend" is clicked.
This kind of safety feature does a fair amount to protect end-users and developers from poor implementations and network hiccups. However, it's really going to work against what you're trying to accomplish.
That said, if you could figure out a way to modify the extension to detect the alert and somehow click "Resend", you'd be in business. I can't say for sure that this is impossible, but it kind of looks that way, at least for now: this issue was marked as "won't fix", and this issue is still open.
Here is an extension for Firefox:
auto reload
But I would warn you: you could end up automatically sending sensitive data. Web browsers usually ask before re-sending because they don't want sensitive data to be submitted without the user's consent.
We should be able to access some of it so that we can edit the placement of each GUI object inside of CoreGui. So, other than security reasons, why are we not allowed to edit placement of GUI objects?
Also, why can't trusted users use CoreScripts? What if they need to access HttpGet so they can provide a nice display showing where their best friend is at the current time and place? SocialService won't always do the trick.
Can a developer (or any other experienced Roblox player, particularly one that knows the UI in and out) please answer these questions to the best of his/her ability?
I asked this in the OBC cast, specifically about editing the UI inside CoreGui. I'm not sure what security reasons could be preventing this, however. They did reply - the answer was, "Well, we definitely don't want you moving the little help icon, or the exit button."
I got the feeling the general reason is that users would become confused if everything were misplaced. For example, if you went to a website where you could play several games all made by that company (like ROBLOX), would you expect the exit or help buttons to be placed differently in every game?
They did say we will be able to change the colours.
Hope this clears things up.
There are some GUI objects, like the report-abuse button, that we don't want users to be able to remove. Another sensitive area is the chat window. If it were completely scriptable, you could write a script that makes it look like another user said something they didn't. This is not really desirable.
HttpGet is currently a privileged function for two main reasons:
It would allow users to get dynamic content into levels, which would make moderation a more difficult task.
Poorly or maliciously written scripts could HttpGet roblox.com in an infinite loop, sapping our server resources.
There was no obvious benefit, but some obvious downsides. We prefer to solve only the problems that need to be solved in order to ship features, so we err on the side of caution for things like this. If we later decide to open up new functionality, like making the ROBLOX social graph available through an API, we can do that with a dedicated interface that limits the number of requests you can make to the website in a given period, and only return the info that we are sure we want you to be able to get.
It's interesting to note that for a very long time Adobe Flash player didn't support TCP sockets for the same reason.
I'm making a website using Facebook Connect and decided to use Facebook's XFBML tags like "fb:profile-pic" since they are so easy to use.
I haven't been able to make them work no matter how hard I've looked online, but then I noticed that they worked in every browser except Firefox.
I also realized that even on Facebook's own "The Run Around" sample app they don't work! You can check it out here: http://www.somethingtoputhere.com/therunaround/index.php
If you log in with Firefox, your picture is not shown, but if you use another browser it is shown. This happens with the fb:profile-pic tag or any other tag like fb:name.
I haven't found any information online, so I'm asking other people who have worked with this: Are these tags simply not compatible with Firefox? Do they have outages or something like that? Has this happened to anyone before? Any ideas on how to resolve this?
I guess they do have "outages". I spent the whole weekend trying to resolve this, and now they've posted that they had a problem and have resolved it.
From the Platform Live Status website:
http://developers.facebook.com/live_status.php#msg_497
We are experiencing a possible config problem with api.connect.facebook.com. If you are including Connect JS library through http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php, all API requests through JavaScript would fail. This affects rendering of XFBML tags (such as fb:name and fb:profile-pic) as well. While we are fixing this issue, you can work around the problem by changing http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php to http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php. It's also safe to keep url change permanently because connect.facebook.com is just an alias to facebook.com.
I wish they had updated that sooner. Now I'm looking for a place to find out about this kind of thing before I spend days working on something, only to realize it's not a problem with my code!
Open up Firefox > Preferences > Privacy and make sure "Accept third party cookies" is checked. This is needed for Facebook Connect to work. Also, when using Connect, make sure all your tags are fully closed, i.e. <fb:profile-pic></fb:profile-pic> and not <fb:profile-pic/>. From the docs:
The user's browser must be set to accept 3rd Party Cookies in order for it to stay connected between clicks.
Source: http://wiki.developers.facebook.com/index.php/Logging_In_And_Connecting
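Putting both pieces of advice together (the non-aliased FeatureLoader URL from the status message above and fully closed tags), a page would include something like the following; the API key, uid, and xd_receiver path are placeholders, and as far as I recall the old Connect library is initialised with FB.init(apiKey, xdReceiverPath):
<script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
<!-- fully closed tag, not <fb:profile-pic/> -->
<fb:profile-pic uid="12345" size="normal"></fb:profile-pic>
<script type="text/javascript">
  FB.init("YOUR_API_KEY", "xd_receiver.htm"); // placeholder API key and cross-domain receiver
</script>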
FWIW, I wouldn't use "the run around" as a sample app. That thing has been the same since they introduced Connect and is pretty hacky.
Also, check the Connect section under the Canvas option; there should be a link to your physical file.
I'm trying to log in to a website and save an HTML page automatically (I want to be able to do this on a regular time interval). From the surface, this is a typical modern website where, if the user navigates directly to a "locked" URL, a log-in form pops up, and after logging in, the user is redirected to the intended page.
I gave mechanize a shot (http://wwwsearch.sourceforge.net/mechanize/) but it wasn't finding some form elements that were needed for login (hidden elements whose values are filled in by a JavaScript function that runs when the user clicks the "log in" button).
I played a bit with the "web browser" control in .NET but quickly lost interest because I couldn't even get it to submit a query on the Google page.
I don't care what the language is; I'll learn it to solve this problem. At a minimum it has to work in Windows.
A simple example, say, typing in a query into the Google search box would be a great bonus.
In my experience, the most reliable way is to use JavaScript. It works well in .NET. To test, browse to the following addresses one after another in Firefox or Internet Explorer:
http://www.google.com
javascript:function f(){document.forms[0]['q'].value='stackoverflow';}f();
javascript:document.forms[0].submit()
That performs a search for "stackoverflow" on Google. To do it in VB .Net using the webbrowser control, do this:
WebBrowser1.Navigate("http://www.google.com")
Do While WebBrowser1.IsBusy OrElse WebBrowser1.ReadyState <> WebBrowserReadyState.Complete
    Threading.Thread.Sleep(1000)
    Application.DoEvents()
Loop
WebBrowser1.Navigate("javascript:function%20f(){document.forms[0]['q'].value='stackoverflow';}f();")
Threading.Thread.Sleep(2000) 'wait for javascript to run
WebBrowser1.Navigate("javascript:document.forms[0].submit()")
Threading.Thread.Sleep(2000) 'wait for javascript to run
Notice how the space in the URL is converted to %20. I'm not certain whether this is necessary, but it can't hurt. It is important that the first piece of JavaScript be in a function. The calls to Sleep() are there to wait for Google to load and for the JavaScript to run. The Do While loop might run forever if the page fails to load, so for automation purposes add a counter that times out after, say, 60 seconds.
Of course, for Google you can just navigate directly to www.google.com/search?q=stackoverflow, but if your site has hidden input fields, etc., then this is the way to go. It only works for HTML sites; Flash is a whole other matter.
If I understand you correctly, you want to log in to only one web page, and that form always stays the same. You could either reverse engineer the JavaScript, or step through it with a JavaScript debugger in the browser (e.g. Firebug for Firefox). Or you can fill in the form in your browser and look at the HTTP request with a network packet sniffer. Once you have all the form data required to submit, you can do the same from your program (that's what I did the last time I had a similar task). Don't forget to store all the cookie data the web server sends back and to send it with the next request, so you 'stay logged in'.
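A rough sketch of that replay approach in JavaScript (all URLs, field names, and values are placeholders you would take from the sniffed request):
async function loginAndFetch() {
  // Replay the login POST exactly as the browser sent it, including the JS-generated hidden field.
  const loginResponse = await fetch('https://example.com/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      username: 'me',
      password: 'secret',
      hidden_token: 'value-seen-in-the-sniffer'
    }),
    redirect: 'manual' // keep the response that carries the Set-Cookie header
  });
  // Keep just the name=value part of the session cookie and send it back on the next request.
  const cookie = (loginResponse.headers.get('set-cookie') || '').split(';')[0];
  const pageResponse = await fetch('https://example.com/locked-page', {
    headers: { Cookie: cookie }
  });
  return pageResponse.text(); // the HTML you want to save
}
loginAndFetch().then(function (html) { console.log(html.length); });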
This has already been discussed here.
Basically, the gist is that you can use Selenium, an open-source web automation tool, which has API libraries available in various languages such as Java, Ruby, etc.
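For example, with Selenium's JavaScript bindings (selenium-webdriver), the Google-search example from the earlier answer looks roughly like this; treat it as a sketch, not a tested script:
const { Builder, By, Key, until } = require('selenium-webdriver');
(async () => {
  // Drive a real browser, fill the search box, submit, and grab the rendered HTML.
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('https://www.google.com');
    await driver.findElement(By.name('q')).sendKeys('stackoverflow', Key.RETURN);
    await driver.wait(until.titleContains('stackoverflow'), 10000);
    const html = await driver.getPageSource(); // the page you could save to disk
    console.log(html.length);
  } finally {
    await driver.quit();
  }
})();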
Neoload can handle the form filling with authentication, assuming you don't want to collect data, just perform actions. It's a web stress tool, so it's not really meant to be used as a time-based service, but you COULD just leave it running.
I've used Ruby and Watir (a web app testing suite) for something similar, but it was a very small task (basically visiting URLs from a text file and downloading an image).
There's also an extension called iMacros that can do some automation, but I'm not personally familiar with it (just aware of it).
"I'm trying to log in to a website and save an HTML page automatically"
SAVEAS TYPE=HTM FOLDER=C: FILE=page.html
https://addons.mozilla.org/en-US/firefox/addon/imacros-for-firefox/?src=search
These commands, played in the iMacros add-on, will save the page to the C: drive and name it page.html.
Also,
URL GOTO=www.website.com
This command goes to the particular website you want to save. You can also use scripting in iMacros and set different websites in the macro.