WatiN can't find text after searching Google

I'm trying to run a simple WatiN example on IE9: search Google, then verify the search result.
var browser = new IE("http://www.google.com/ncr");
browser.TextField(Find.ByName("q")).TypeText("WatiN");
browser.Button(Find.ByName("btnG")).Click();
Assert.True(browser.ContainsText("WatiN"));
This test fails! I don't know why, but adding a call to WaitUntilContainsText("Everything") makes it pass:
var browser = new IE("http://www.google.com/ncr");
browser.TextField(Find.ByName("q")).TypeText("WatiN");
browser.WaitUntilContainsText("Everything");// because of google instant??
browser.Button(Find.ByName("btnG")).Click();
Assert.True(browser.ContainsText("WatiN"));
I guess this may be because of the behavior of Google Instant, but I can't be sure.
Can someone explain what's wrong with this test?

Yes, it has to do with Google Instant. When you call Click() on the button, the page is not reloaded, so the call to ContainsText runs almost immediately, before the results are rendered. You need to use one of the Wait... methods on the IE instance or on its elements whenever the page content is generated by JavaScript on the fly (mostly AJAX).
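For example, a minimal rework of the test that waits for the result text after the click instead of asserting immediately (this just reuses the WaitUntilContainsText call from the question; the timeout is WatiN's default):
var browser = new IE("http://www.google.com/ncr");
browser.TextField(Find.ByName("q")).TypeText("WatiN");
browser.Button(Find.ByName("btnG")).Click();
// Wait for the JavaScript-rendered results to contain the text
// before asserting, instead of checking right after Click().
browser.WaitUntilContainsText("WatiN");
Assert.True(browser.ContainsText("WatiN"));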

Related

HtmlUnit: How can I get site content updated by AJAX and websockets?

I need to fetch comments from this site, https://russian.rt.com/, for example for this news article: https://russian.rt.com/sport/article/486467-rossiya-hokkei-zoloto-olimpiady
So I try this:
String url = "https://russian.rt.com/sport/article/486467-rossiya-hokkei-zoloto-olimpiady";
try (WebClient client = new WebClient(BrowserVersion.FIREFOX_52)) {
    client.getOptions().setJavaScriptEnabled(true);
    client.getOptions().setThrowExceptionOnScriptError(false);
    client.getOptions().setThrowExceptionOnFailingStatusCode(false);
    client.setAjaxController(new NicelyResynchronizingAjaxController());
    HtmlPage rtPage = client.getPage(url);
    HtmlElement comBlock = rtPage.getFirstByXPath("//ul[@class='sppre_messages-list']");
} ...
But HtmlElement comBlock is always null.
I've tried:
- waiting for JavaScript to complete with
client.waitForBackgroundJavaScript(10 * 1000);
- scrolling the page:
client.getCurrentWindow().setInnerHeight(60000);
or
rtPage.executeJavaScript("window.scrollBy(0,600)");
- getting elements at the bottom of the page and clicking them.
But none of that helped; HtmlElement comBlock is still null after all these operations.
Maybe the comments module uses some kind of websockets and this is not even possible?
Can anyone help me, please?
I have done some short tests with this site. At first I saw an NPE when calling the site; this is fixed now in HtmlUnit. Usually I announce on Twitter (www.twitter.com/HtmlUnit) when a new snapshot build is available. After that fix I faced many more JavaScript problems. It looks like the page does a lot of JavaScript, including some ugly things. If you would like to get this fixed, it would be a great help if you could isolate simple cases that show the problems, to give us a chance to fix HtmlUnit (there is more information about this on the HtmlUnit home page).
Sorry for not having a direct solution, but as with many open source projects we need help from the community to do all the work.
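That said, if you want to keep experimenting while the fixes land, a rough polling sketch along these lines lets background JavaScript run between attempts (same URL and XPath as in the question; given the script problems described above, there is no guarantee the element ever appears):
String url = "https://russian.rt.com/sport/article/486467-rossiya-hokkei-zoloto-olimpiady";
try (WebClient client = new WebClient(BrowserVersion.FIREFOX_52)) {
    client.getOptions().setJavaScriptEnabled(true);
    client.getOptions().setThrowExceptionOnScriptError(false);
    client.getOptions().setThrowExceptionOnFailingStatusCode(false);
    client.setAjaxController(new NicelyResynchronizingAjaxController());
    HtmlPage rtPage = client.getPage(url);
    HtmlElement comBlock = null;
    // Poll for up to ~20 seconds, letting pending AJAX/JS finish between attempts.
    for (int i = 0; i < 10 && comBlock == null; i++) {
        client.waitForBackgroundJavaScript(2000);
        comBlock = rtPage.getFirstByXPath("//ul[@class='sppre_messages-list']");
    }
}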

Selenium profile is getting detected by Google?

Basically I'm trying out Selenium WebDriver (using Firefox), and right now I am trying to sign up for a Google account.
However, the strange thing is that whenever I run Selenium and let it use the (empty, I assume?) Selenium Firefox profile, Google seems to detect it and block me (asking for phone verification).
This is even the case when I load up the Selenium profile and sign up manually.
When I sign up manually (and don't use the Selenium profile) I can sign up just fine.
Is the Selenium Firefox profile somehow special in a way that enables the servers to detect it?
EDIT: I'm trying to start up Selenium with my default Firefox profile (however, it keeps starting up with an empty profile); here's the code:
OpenQA.Selenium.Proxy proxySetting = new OpenQA.Selenium.Proxy();
proxySetting.HttpProxy = proxy;
proxySetting.FtpProxy = proxy;
proxySetting.SslProxy = proxy;
FirefoxProfile profile = new FirefoxProfile("default");
profile.SetProxyPreferences(proxySetting);
profile.SetPreference("browser.privatebrowsing.autostart", true);
_driver = new FirefoxDriver(profile);
EDIT:
I managed to open the default Firefox profile, but now it doesn't use my proxy settings. How can I use the normal profile and still customize the profile's proxy settings?
This post talks about an HtmlDriver tag being added to the HTML by the FirefoxDriver, which would be a dead giveaway.
Google is a strong supporter of open source, and even of Selenium itself; however, I don't think Google would condone a Selenium script creating a bunch of spam accounts that would probably never be used and just take up space.
That being said, I believe it could potentially be possible.
The only way Google would be able to know you are using Selenium is through the request headers. It's possible the User-Agent, or one of the other headers, has something that gives Selenium away.
My solution would be to use something like Fiddler to listen to the requests that Firefox is sending, and then edit your Selenium scripts to account for and change those requests so Google does not know that you are using Selenium.
This most likely goes against their terms of use, so exercise caution, and use this answer for educational purposes only.
Is there any chance you could try using the complete path to your Firefox profile directory? (e.g. C:\Users\???\AppData\Roaming\Mozilla\Firefox\Profiles\your_profile.default)
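For instance, combining that suggestion with the proxy setup from the question might look like this (the profile path below is only a placeholder for your real profile directory, and proxy is the same variable used above):
var proxySetting = new OpenQA.Selenium.Proxy();
proxySetting.HttpProxy = proxy;
proxySetting.FtpProxy = proxy;
proxySetting.SslProxy = proxy;
// Load the existing profile by its full directory path (placeholder path shown).
var profile = new FirefoxProfile(@"C:\Users\me\AppData\Roaming\Mozilla\Firefox\Profiles\xxxxxxxx.default");
profile.SetProxyPreferences(proxySetting);
_driver = new FirefoxDriver(profile);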

Using CasperJS + SlimerJS for testing third-party tracking. Calls don't go out

I have been using CasperJS + PhantomJS to walk through a site, http://example.com. As it does so, tracking calls to http://just_another_tracking_service.com/?with&some&params are fired (I have a separate script that captures and deals with those calls). These always return a tiny GIF with the correct MIME type. The protocol for the GIFs is http.
I have been trying to use the same CasperJS script to do the same with SlimerJS. The walking-through-the-site part works very well (much better than with PhantomJS, in fact), but no calls to http://just_another_tracking_service.com/ are sent. I have tried enabling web security this way, but no joy.
Edit: I removed pageSettings from the code sample; it didn't make any difference, as this page explains: http://docs.slimerjs.org/current/configuration.html
var casper = require("casper")
.create({ waitTimeout: 10000 });
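One way to check whether the tracking requests are issued at all under SlimerJS is to log them from the script itself, for example via CasperJS's resource.requested event (a debugging sketch; the host filter below is just the example tracking domain from above):
casper.on("resource.requested", function (requestData) {
    // Print every request to the tracking host, so missing calls become visible.
    if (requestData.url.indexOf("just_another_tracking_service.com") !== -1) {
        this.echo("tracking request: " + requestData.url);
    }
});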
Any ideas what I am doing wrong / what should I be doing? Thanks in advance.

Up-to-date, working Google Apps Script example integrated with a spreadsheet?

I'm trying to build some kind of GUI on top of/embedded into a Google spreadsheet.
I've been crawling through the docs, and sadly, hitting a wall.
I DID find the sample video, at
http://www.youtube.com/watch?v=5VmEPo6Rkq4
Unfortunately, it seems out of date and broken :( Some of the calls are no longer valid.
And, while I think I figured that part out, I can't get the callback handler to be recognized.
It gives me a runtime error of
"Error encountered: Script function not found:
function respondToSubmit(e) {
/* full body of function here*/
}"
The odd thing is, for supposedly not finding it, it does a good job of printing out the whole function body.
It doesn't seem to be an error inside the function itself, because when I make it an EMPTY function, it still gives the same error :(
Could someone please point me to a simple, working example of how to add a UI alongside a Google spreadsheet, or equivalent?
Please note that I don't need a general-purpose, standalone application (I think).
I'm just trying to embed some GUI-type functions in one very specific Google spreadsheet that I have.
There are examples of simple spreadsheet UIs using three different approaches in the Dialogs and Sidebars in Google Apps documentation. They all work today. The third approach, Custom Dialogs, can be implemented using UiService or HtmlService, but that page only shows an example using HtmlService.
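As a rough illustration of the HtmlService approach in a container-bound script (the menu label, sidebar contents, and function names here are made up for the example):
function onOpen() {
  // Add a custom menu to the spreadsheet so the sidebar can be opened.
  SpreadsheetApp.getUi()
      .createMenu('Custom UI')
      .addItem('Show sidebar', 'showSidebar')
      .addToUi();
}

function showSidebar() {
  // Build a minimal HTML panel and show it alongside the spreadsheet.
  var html = HtmlService.createHtmlOutput('<p>Hello from the sidebar.</p>')
      .setTitle('My Sidebar');
  SpreadsheetApp.getUi().showSidebar(html);
}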

Toggle JavaScript support programmatically without restarting Firefox

The problem: toggle JavaScript support without restarting Firefox (or resorting to a different driver) during a Cucumber test run.
If Firefox's prefutils were exposed to JavaScript in a web page, that would make it possible, but that is not the case.
So, is there a plugin that does it? Or is there another way to solve the problem? Or is there a good tutorial (that highlights the exposing bit) on how to make such a plugin?
Edit
On second thought, how would JavaScript be of any help once it is disabled? Probably the whole idea is a bit flawed.
I assume that your tests run with normal web content privileges. In that case, they aren't going to be able to affect browser settings such as whether JavaScript is enabled (I assume that's what you mean by "toggle JavaScript support").
I'd implement a simple XPCOM component with a method to turn JS support on and off (by setting the appropriate pref). You can expose it as a JavaScript global property so that your tests can access it. See Expose an XPCOM component to javascript in a web page for more details. Package your component in an extension and make sure it is installed in the Firefox instance where your tests are running.
If you want to access the preferences API directly from your content script, you can add the following prefs to Firefox, either in about:config or by adding the following lines to prefs.js in your profile directory:
user_pref("capability.principal.codebase.p1.granted", "UniversalXPConnect UniversalBrowserRead UniversalBrowserWrite UniversalPreferencesRead UniversalPreferencesWrite UniversalFileRead");
user_pref("capability.principal.codebase.p1.id", "http://www.example.com");
user_pref("capability.principal.codebase.p1.subjectName", "");`
user_pref("signed.applets.codebase_principal_support", true);
Replace www.example.com with the domain that you want to grant the privileges to. Also add this line to your JS code before you call the preferences API:
netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
A local file (something loaded from file:///) is allowed to request additional privileges. Normally you would get a prompt asking whether you want to allow access - you can "auto-accept" the prompt by adding the following lines to prefs.js in the Firefox profile:
user_pref("capability.principal.codebase.p0.granted", "UniversalXPConnect");
user_pref("capability.principal.codebase.p0.id", "file://");
user_pref("capability.principal.codebase.p0.subjectName", "");
Your page can then do:
netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
var branch = Components.classes["@mozilla.org/preferences-service;1"]
.getService(Components.interfaces.nsIPrefBranch);
branch.setBoolPref("javascript.enabled", false);
This will definitely work if your page is a local file. Judging by the error message however, you are currently running code from about:blank. It might be that changing capability.principal.codebase.p0.id into about:blank or into moz-safe-about:blank will allow that page to get extended privileges as well but I am not sure.
However, none of this will really help if JavaScript is already disabled and you need to enable it. This can only be solved by writing an extension and adding it to the test profile. JavaScript in Firefox extensions works regardless of this setting.
That means you use JavaScript itself to toggle JavaScript on or off:
function setJavascriptPref(bool) {
  // Flip the javascript.enabled preference via the preferences service.
  var prefs = Components.classes["@mozilla.org/preferences-service;1"]
      .getService(Components.interfaces.nsIPrefBranch);
  prefs.setBoolPref("javascript.enabled", bool);
}
