How to handle modals in Cucumber + Capybara + Selenium - Ruby

So I am trying to click a forgot-password link (which triggers a confirmation modal) and accept the pop-up so I can run a test on the email that gets sent out.
My code looks like this:
page.find(:css, '#launch-modal-link').click # code fails on this line, after clicking the link
page.driver.browser.switch_to.alert.accept # does not get to this line of code.
What am I doing wrong when trying to click the "Ok" button in the modal pop-up?
Do I need to add a try/catch block (begin/rescue in Ruby) around the click?

Solved it - I found the answer somewhere else. It's a hack though, and not something done via Cucumber directly.
page.evaluate_script('window.confirm = function() { return true; }')
This works because it overwrites window.confirm() to always return true, and confirm() seems to be the standard JavaScript function for reporting which button was clicked in a dialog box. I could be wrong about that. (Read the JavaScript function being performed onclick; this might not always work.)
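For context, here is a minimal sketch of how the override fits into a Cucumber step definition (the step name is made up for illustration; the selector is the one from the question, and only the evaluate_script line comes from the answer above):
When /^I request a password reset$/ do
  # Stub out window.confirm before the click so the dialog is auto-accepted
  page.evaluate_script('window.confirm = function() { return true; }')
  # Link that launches the confirmation modal (from the question above)
  page.find(:css, '#launch-modal-link').click
end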

Related

Cucumber, Capybara, and Selenium check if browser alert is showing

I am looking to check for a browser alert while running a test case. In my scenario, if an error occurs, a pop-up dialog is shown. I have been looking for a way to handle this. So far I have written a function like this:
def alert_present?
  begin
    page.driver.browser.switch_to.alert
    true
  rescue Selenium::WebDriver::Error::NoAlertOpenError
    false
  end
end
Is there any way other than this?
OK, so if you just want to accept the alert and you are not sure whether there will even be one,
then I think overriding the confirm method in JavaScript will do it. Just make sure to add this line before the line that triggers the dialog:
page.evaluate_script('window.confirm = function() { return true; }')
your line of code that triggers the alert
If there is an alert it will be accepted as soon as it pops up; if not, there won't be any problems.
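As an aside, newer Capybara versions also ship a modal API (accept_confirm and friends) that waits for the dialog and accepts it without overriding anything in JavaScript. A sketch, assuming a Capybara version and driver that support it, and reusing the selector from the question above:
# Accept whatever confirm dialog the click triggers
page.accept_confirm do
  page.find(:css, '#launch-modal-link').click
end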

CasperJS click event having AJAX call

I am trying to fetch data from a site by simulating events using CasperJS with phantomJS 1.7.0.
I am able to simulate normal click events and select events, but my code fails in the following scenario:
When I click a button / anchor etc. on the remote page, the click initiates an AJAX call or a plain JS call (depending on how the page was implemented by its programmer).
In the case of a JS call my code works and I get the changed data, but for clicks where an AJAX call is initiated I do not get the updated data.
For debugging, I tried to get the page source of the element's container (before and after), but I see no change in the code.
I tried wait times ranging from 10 seconds down to 1 ms, but that too does not change the behaviour.
Below is my piece of code for clicking. I am using an array of CSS Paths, which represents which element(s) to click.
/* Click on an array of clickable elements using CSS paths. */
fn_click = function() {
    casper.each(G_TAGS, function(casper, cssPath, count1) {
        casper.then(function() {
            casper.click(cssPath);
            this.echo('DEBUG AFTER CLICKING - START HTML');
            //this.echo(this.getHTML("CONTAINER WHERE DETAILS CHANGE"));
            this.echo('DEBUG AFTER CLICKING - START HTML');
            casper.wait(5000, function() {
                casper.then(fn_getData);
            });
        });
    });
};
UPDATE:
I tried to use the remote-debug option of phantomJS to debug the above script.
It is not working. I am on Windows. I will try to run remote debugging on Ubuntu as well.
I would appreciate any help with this.
UPDATE:
Please have a look at the following code as a sample:
https://gist.github.com/4441570
The content before the click and after the click is the same.
I am clicking on the sorting options provided under the tag (votes / activity etc.).
I had the same problem today. I found this post, which put me in the direction of jQuery.
After some testing I found out that a jQuery was already loaded on that webpage (a pretty old version, though).
Loading another jQuery on top of that broke any JS calls that were made, including the link that performs the Ajax call.
To solve this I found http://api.jquery.com/jQuery.noConflict/
and I added the following to my code:
this.evaluate(function () { jq = $.noConflict(true) } );
Anything that was formerly assigned to $ will be restored that way. And the jQuery that you injected is now available under 'jq'.
Hope this helps you guys.

My Asp.Net Form data is not being submitted properly by Selenium 2 (Webdriver)

So I have written a test which populates a form, saves (in the admin tool), and then publishes.
However, my form data is being lost between the save click and the publish click. I would show what the form looks like in HTML, but it's pretty huge (around 20-30 fields).
In pseudocode, filling out the form looks like this:
1) Fill in form using dropdowns
2) Hit the save button - saves all form data
3) Hit the publish button
When I pause the script to see what is happening within selenium, I see the form properly being populated. I then see the Save button properly being clicked. When I pause the screen before hitting publish, I see that the content I have saved after clicking the save button was lost or is in the wrong fields.
When I do this manually, it works correctly. I know selenium submits forms differently than the standard user, however, is there anything I can do on my end to make sure that form is being submitted properly?
What does the Save button actually do? Is it JavaScript, or a simple submit button?
Are you using the C# interface to Selenium webdriver? You probably have code that looks something like this:
FillInForm();
selenium.FindElement(By.CssSelector("input[value='Save']")).Click();
selenium.FindElement(By.CssSelector("input[value='Publish']")).Click();
Have you tried inserting, between save and publish, lines like the following:
// further up: By saveButton = ...;
// By formFieldLocator = ...;
selenium.FindElement(saveButton).Click();
var formField = selenium.FindElement(formFieldLocator);
Assert.That(formField.GetAttribute("value")
    .Contains("The text you typed into the form")
);
The point here being to check that save really is doing what it says on the tin. Generally, when you ask the WebDriver to "click" on a button, it does do exactly (more or less) what the user does. Alternatively, you can inject some javascript to force the form to submit - but then you're explicitly not testing what the user actually does (but you might find it's closer to what you experience).

Selenium Webdriver - How do I skip the wait for page load after a click and continue

I have an RSpec test using WebDriver that clicks on a button. After clicking the button, the page never fully loads, which is the expected and correct behaviour. After clicking the button, I want to wait 2 seconds and then navigate to a different URL, despite the fact that the page has not loaded. I don't want an error to be thrown just because the page hasn't loaded; I want to ignore that and continue on as if everything is fine.
How can I avoid waiting until the timeout, and secondly, how can I stop that from throwing an error which causes the test to fail?
Thank you!
WebDriver has a blocking API and it will always wait for the page to load. What you can do instead is press the button via JavaScript, i.e. trigger its onclick event. I am not familiar with Ruby, but in Java it would be:
WebDriver driver = ....; // Init WebDriver
WebElement button = ....; // Find your element for clicking
String script = "if (document.createEventObject){"+
"return arguments[0].fireEvent('onclick');"+
"}else{"+
"var evt = arguments[0].ownerDocument.createEvent('MouseEvents');"+
"evt.initMouseEvent('click',true,true,"+
"element.ownerDocument.defaultView,1,0,0,0,0,false,"+
"false,false,false,1,null);"+
"return !element.dispatchEvent(evt);}" ;
((JavascriptExecutor)driver).executeScript(script, button);
After this you can wait for 2 seconds and continue
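For reference, a rough Ruby sketch of the same idea using the selenium-webdriver gem. Note it simply invokes the element's own click() from page JavaScript instead of synthesizing a MouseEvent as in the script above, and the selector and URL are placeholders:
# Fire the click from page JavaScript so WebDriver does not block on the page load
button = driver.find_element(:css, 'button[type=submit]')
driver.execute_script('arguments[0].click();', button)
sleep(2)
# Continue to wherever the test needs to go next (placeholder URL)
driver.navigate.to('http://example.com/next-page')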
I had a similar problem. I tried the solution which #Sergii mentioned but was getting the error below:
org.openqa.selenium.JavascriptException: javascript error: element is not defined (Session info: chrome=84.0.4147.89)
As a workaround I had to initialize element within the JavaScript code.
String script = "var element = document.querySelector('button[type=submit]');" +
"if (document.createEventObject){"+
"return element.fireEvent('onclick');"+
"}else{"+
"var evt = element.ownerDocument.createEvent('MouseEvents');"+
"evt.initMouseEvent('click',true,true,"+
"element.ownerDocument.defaultView,1,0,0,0,0,false,"+
"false,false,false,1,null);"+
"return !element.dispatchEvent(evt);}" ;
Thank you #Sergii for your answer, I was not able to add this as a comment due to limited characters, so added it as a separate answer.
Why don't you try the simple trick of calling "waiting()" after waitForPageToLoad()? That makes Selenium ignore the previous command and never fail that step.
What if you used RSpec's expect method:
expect {
  main = Thread.current
  Thread.new do
    sleep(2)
    # raise on the main thread so the blocked click gets interrupted
    main.raise(RuntimeError)
  end
  theButton.click
}.to raise_error(RuntimeError)
Send the 'Enter' key to the link or button in question instead of a click; WebDriver won't wait for the page to load and will return instantly (this is in C#, just translate to Ruby):
element.SendKeys(Keys.Return);
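In the Ruby selenium-webdriver bindings the equivalent would be something like the following (the locator is just an example):
# Sending Return instead of clicking; WebDriver returns immediately without waiting for the page load
element = driver.find_element(:css, '#my-link')
element.send_keys(:return)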

Google Chrome Extension - How can I include a content script more than once?

I've been working on Chrome Extension for a website for the past couple of days. It's coming along really nicely but I've encountered a problem that you might be able to help with.
Here's an outline of what the extension does (this functionality is complete):
A user can enter their username and password into the extensions popup - and verify their user account for the particular website
When a user browses http://twitter.com a content script is dynamically included that manipulates the DOM to include an extra button next to each tweet displayed.
When a user clicks this button they are presented with a dialog box
I've made a lot of progress but here is my problem:
When a user visits Twitter the content script is activated and all tweets on the page get my new button - but if the user then clicks 'More...' and dynamically loads the next 20 tweets... these new additions to the page DOM do not get affected by the content script (because it is already loaded).
I could add an event listener to the 'More...' button so that it triggers the original content script again (and adds the new buttons), but I would have to guess how long Twitter's Ajax request takes to complete.
I can't tap into their Ajax request that pulls in more tweets and call my addCurateButton() function once the request is complete.
What do you think is the best solution? (if there is one)
What you want to do is to re-execute your content-script every time the DOM is changed. Luckily there is an event for that. Have a look at the mutation event called DOMNodeInserted.
Rewrite your content script so that it attaches an event listener to the body of the DOM for the DOMNodeInserted event. See the example below:
var isActive = false;

/* Your function that injects your buttons */
var inject = function() {
    if (isActive) {
        console.log('INFO: Injection already active');
        return;
    }
    try {
        isActive = true;
        // inject your buttons here
        // for the sake of the example I just put an alert here.
        alert("Hello. The DOM just changed.");
    } catch (e) {
        console.error("ERROR: " + e.toString());
    } finally {
        isActive = false;
    }
};

document.body.addEventListener("DOMNodeInserted", inject, false);
The last line adds the event listener. When a page loads the event fires quite often, so you should define a boolean (e.g. var isActive) initialized to false. Whenever the inject function runs, check whether isActive == true and, if so, abort the injection so it isn't executed too often at the same time.
Interacting with Ajax is probably the hardest thing to coax a content script to do, but I think you’re on the right track. There are a couple different approaches I’ve taken to solving this problem. In your case, though, I think a combination of the two approaches (which I’ll explain last) would be best.
Attach event listeners to the DOM to detect relevant changes. This solution is what you’ve suggested and introduces the race condition.
Continuously inspect the DOM for changes from inside a loop (preferably one executed with setInterval). This solution would be effective, but relatively inefficient.
The best-of-both-worlds approach would be to initiate the inspection loop only after the more button is pressed. This solution would both avoid the timing issue and be efficient.
You can attach an event handler to the button or link that is used for fetching more results. Then attach a function to it so that whenever the button is clicked, your extension either removes all of its buttons from the DOM and starts inserting them over again, or checks whether your button already exists in that particular DOM element and attaches one if it doesn't.
