Selenium RC - Unable to click on a link using XPath / CSS path / //a[contains(text(),'abc')]

Selenium IDE is able to recognise the ID, XPath, and CSS path for a link, but Selenium RC is unable to click on the link using the XPath, CSS path, or ID. I have also tried contains(text(), ...), but with no luck. Below is the code I am currently executing in the Eclipse IDE.
selenium.open("https://abc.com");
selenium.type("UserName", "123456");
selenium.click("xpath=//form[#id='loginForm']/table/tbody/tr[7]/td/input");
selenium.click("xpath=//a[#id='_ebg9dd']");
// selenium.click("xpath=//a[contains(text(), 'Request Form')]");
Can someone please suggest an alternative, or correct the code if there is any discrepancy?

Are you converting it using the IDE? It seems unusual to have an id on your a-tag, but if you want to try contains(), here is an example of an XPath selector for Java JUnit 4 RC that worked for me when I just tried it:
selenium.click("//div[#class='span5 footer-links']/ul/li/a[contains(text(), 'Submit your page')]");
I also wonder about the fact that you appear to be typing text into a 'username' field and then following it with two clicks. Do you not need to either enter something into another field or wait for something after the first click? It just seems like an odd series of events (it may not be, though; I obviously don't know the specifics of what you're doing). One way to add a wait is sketched below.
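For illustration only, a minimal sketch that inserts a page-load wait between the two clicks, reusing the URL, field names, and the commented-out 'Request Form' link text from the question (they may differ in the real application):
// Hypothetical sketch: log in, wait for the next page, then click the link by its text.
selenium.open("https://abc.com");
selenium.type("UserName", "123456");
selenium.click("xpath=//form[@id='loginForm']/table/tbody/tr[7]/td/input");
// Give the post-login page time to load before looking for the link.
selenium.waitForPageToLoad("30000");
selenium.click("xpath=//a[contains(text(), 'Request Form')]");
selenium.waitForPageToLoad("30000");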

Try this :
selenium.click("link=name_of_link_present_on_page");

Related

Getting an XPath from an XML document

I am trying to get some values from an online XML document, but I cannot find the right xpath to navigate to those values. I want to import these values into a Google Spreadsheet document, which requires me to get the exact xpath.
The website is this one, and I am trying to get the "WillPay" information from MeetingInfo Venue=S1, Races RaceNo=1, Pools PoolInfo Pool=WIN, in OddsInfo.
For now, the value of "Number=1" should be 3350 (or something close to this, it changes quite often), and I would like to load all of these values onto the google spreadsheet document.
What I've tried is locating the XPath for all of it, and my best attempt was
"/AOSBS_XML/Meetings/MeetingInfo/Races/Pools/PoolInfo/OddsSet/OddsInfo/@WillPay"
but it doesn't work.
I've been stuck on this problem for months now and I've been avoiding it, but realised I can't anymore because it's hindering my work. Please help.
Thanks!
-Brandon
Try using this xpath expression:
//MeetingInfo[@Venue="S1"]/Races//RaceInfo[@RaceNo="1"]//Pools//PoolInfo[@Pool="WIN"]//OddsSet//OddsInfo[@Number="1"]/@WillPay
An alternative:
//OddsInfo[@WillPay][ancestor::PoolInfo[@Pool='WIN'] and ancestor::RaceInfo[@RaceNo='1'] and ancestor::MeetingInfo[@Venue='S1']]
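If you want to sanity-check either expression before wiring it into the spreadsheet, here is a minimal Java sketch using the standard javax.xml.xpath API. The feed URL is a placeholder (the original link is not reproduced in the question), and the class name is made up for the example:
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class WillPayCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: substitute the real feed address from the question.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("https://example.com/AOSBS_XML");
        String expr = "//MeetingInfo[@Venue=\"S1\"]/Races//RaceInfo[@RaceNo=\"1\"]"
                + "//Pools//PoolInfo[@Pool=\"WIN\"]//OddsSet//OddsInfo[@Number=\"1\"]/@WillPay";
        // Evaluate the XPath and print each matching WillPay attribute value.
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.NODESET);
        for (int i = 0; i < nodes.getLength(); i++) {
            System.out.println(nodes.item(i).getNodeValue());
        }
    }
}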

Is it possible to interact with Select2 components using HPE Unified Functional Testing?

I hope this question makes sense.
When I record in HPE UFT against my web application, select an option from a dropdown box made with Select2, and then run the test, it fails.
It returns the following error:
Cannot identify the object "WebElement" (of class WebElement).
Verify that this object's properties match an object currently displayed in your application.
This is with version 14.03 of the tool running on a Windows machine.
I have tried different recording modes without any luck.
The code that is generated when recording the test is:
Browser("LHO DEV").Page("SITE DEV_4").WebList("WebElement").Click
Browser("LHO DEV").Page("SITE DEV_4").WebTree("select2-single-results").Select "Option Value 9"
Browser("LHO DEV").Page("SITE DEV_4").WebEdit("WebEdit").Set "Value 9"
I understand that this code won't work with Select2, as it behaves differently from a regular dropdown/select box.
So I would really appreciate it if anyone could point me in the right direction.
Well, I found a solution; maybe not the desired one, but it at least works.
Basically, the RunScript method allows executing JavaScript, so this way I was able to access the Select2 element and select the desired values.
Here is the code I used:
Browser("LHO DEV").Page("SITE DEV_4").RunScript("$('#single').select2('open')")
Browser("LHO DEV").Page("SITE DEV_4").RunScript("$('#single_element').val([66]).trigger('change')")
Browser("LHO DEV").Page("SITE DEV_4").RunScript("$('#single_element').select2('close')")
I hope anyone finds this useful in the future.

Capybara 'first' selector not working

In my test case I have a few similar buttons, from which I'm trying to select and click the first one. When I use find('a.add-link').click it gives me an ambiguous match error, which is expected, but when I try find('a.add-link').first.click, it still claims it's an ambiguous match.
Also, if I try using something like first('a.add-link').click, it doesn't find the selector.
Another method I found somewhere, find('a.add-link', match: :first).click, also doesn't work; it says it's a wrong key.
I'm using cucumber version 1.2.5.
Ok, I've managed to solve it by using
eventually do
first('a.add-link').click
end

Google spreadsheet ImportXML Error:"the XPath query did not return any data"

I continue to get this error when I try to run this XPath query
//div[@iti='0']
on this link (flight search from google)
https://www.google.com/flights/#search;f=LGW;t=JFK;d=2014-05-22;r=2014-05-26
I get something like this:
=ImportXML("https://www.google.fr/flights/#search;f=jfk;t=lgw;d=2014-02-22;r=2014-02-26";"//div[#iti='0']")
I verified that the XPath is correct (I get the answer I want using XPath Helper; the data I want is the data for the first flight selected).
I guess it is a syntax problem, but I have tried more or less all combinations of lower/upper case and punctuation (replacing ; , ' "), and I tried referencing the URL and the XPath query stored in cells, but nothing works.
Any help will be appreciated.
As a matter of fact, maybe it is a bug in the new Google Sheets, or they have changed how the function works. I've activated mine, and when I try to use ImportXML it simply won't work. The old sheets I have here (on the old mechanism) still work normally, but if I copy and paste the formula from the old one to the new one, it simply doesn't get any data.
Here is an example:
=ImportXML("http://www.nytimes.com/pages/todayspaper/index.html";"//div[@class='columnGroup first']//h3")
If I run this on the old mechanism it works fine, but if I run the same on the new mechanism, it first exchanges my ";" for a ",", and then it returns "#N/A" with the warning "Error: Imported XML content cannot be parsed".
Edit (05/05/2015):
I am happy to say that I tested this function again today in the new spreadsheets and they've fixed it. I had been checking every two months, and now they have finally solved this issue. The example I added above is now returning data.
I'm sorry, but you won't be able to easily parse Google result pages. The reason your function throws an error is that the content of the page you see in your browser is generated by JavaScript, and Google Spreadsheets doesn't execute JS.
Your ImportXML has the right syntax; it doesn't return anything because the node you're looking for isn't there (importXML Parse Error).
You will have to find another source if you want these results in your spreadsheet. For info, some libraries already parse the usual result page (http://www.seerinteractive.com/blog/google-scraper-in-google-docs-update for example, if it still works), but I doubt finding one for your special case will be easy.
This gives the answer (importXML Parse Error), but it's not entirely obvious.
ImportXML doesn't load JavaScript. When you're building ImportXML queries on Google results, make sure you're testing against a version of the page that has JavaScript turned off. You can do this using Chrome DevTools.
(But I agree that ImportXML is fickle, idiosyncratic, and generally rage-inducing).

Using wildcards in Selenium IDE

I'm somewhat new to automation and am learning everything autodidactically, so forgive me if my terminology is a bit off. I've searched high and low for an answer to this question and can't seem to find anything. I presume it's my limited vocabulary when it comes to this stuff... anyway...
I'm attempting to write a test that performs all the actions necessary to complete a tutorial by using the recorder. However, for one particular step, the element ID changes. For example, the ID I'm trying to click is this:
//li[@id='message_661119']/div[2]/div[2]/a/img
However, for each new user that is performing the tutorial "quest", the number of the id changes.
Is there any way to get Selenium to recognize, or use, wildcards? Example:
//li[@id='message_******']/div[2]/div[2]/a/img
Of course, the example above does not work.
Any advice would be immensely helpful. Thank you!!
You can use starts-with() for this:
//li[starts-with(@id, 'message_')]/div[2]/div[2]/a/img
It's one of the examples mentioned in Locating Techniques in Selenium's docs for starts-with().
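If the recorded test is later exported to Java (Selenium RC), the same locator carries over unchanged; a minimal sketch, assuming the element structure from the question:
// Clicks the image inside the first list item whose id begins with "message_", whatever the numeric suffix is.
selenium.click("//li[starts-with(@id, 'message_')]/div[2]/div[2]/a/img");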
In the Target field of the command in Selenium IDE, where you can see message_123123, click on the drop-down list and choose the option related to xpath:idRelative. If that one doesn't work, try other options that do not include that annoying message_123123. This way you'll identify the web page element by its location rather than its id. I solved my issue this way.
