Capybara 'first' selector not working - ruby

In my test case I have a few similar buttons, and I'm trying to select and click the first one. When I use find('a.add-link').click it raises an ambiguous match error, which is expected, but when I try find('a.add-link').first.click it still raises the same ambiguous match error.
Also, if I try something like first('a.add-link').click, it doesn't find the selector at all.
Another method I found somewhere, find('a.add-link', match: :first).click, doesn't work either; it complains about an invalid key.
I'm using Cucumber version 1.2.5.

OK, I've managed to solve it by using:
eventually do
  # retried until the first matching link is present and clickable
  first('a.add-link').click
end

Related

Cypress Assertions contradictions?

So I think this is one area where I run into issues, and despite Cypress's amazing documentation, assertions sort of feel lacking (but maybe that's because they use Chai assertions and I should be looking there? But even then I've run into confusion).
Anyway, it seems like when I assert on some items within Cypress I get conflicting results, and I can't seem to pinpoint any reason why.
A few examples specifically:
Why does should('contain', '{Some Text}') search child elements, but should('have.text', '{Some Text}') (or 'have.value') does not? Or at least that's how it appears? I can't find any documentation that states this.
In the example above, I noticed this when using cy.get on an Angular dropdown toggle with a <span> inside it (which contains the text).
Another oddity is have.text vs have.value: I've noticed one sometimes works when the other doesn't. For example, with input fields only have.value works, not contain or have.text. Is there any reason for this?
I guess I'm trying to figure out if there is some cheat sheet/guide on when to use each one, because it's mostly been trial and error for me (but I'd like to know why one works and the other doesn't).
Thanks!

Confused about XPath Syntax

Problem Summary:
Hi, I'm trying to learn the Scrapy framework for Python (available at https://scrapy.org). I'm following along with a tutorial I found here: https://www.scrapehero.com/scrape-alibaba-using-scrapy/, but I'm using a different site for practice rather than just copying the Alibaba example. My goal is to get game data from https://www.mlb.com/scores.
So I need to use XPath to tell the spider which parts of the HTML to scrape (I'm about halfway down that tutorial page, at the "Construct Xpath selectors for the product list" section). The problem is that I'm having a hell of a time figuring out what the syntax should actually be to get the pieces I want. I've been going over XPath examples all morning, but I haven't been able to get the right syntax.
Background info:
So what I want is an xpath() command which, from https://www.mlb.com/scores, will return an array with all the games displayed.
Following along with the tutorial, my understanding of how to do this is that I should inspect the elements on the webpage, determine their class/id, and specify that in the xpath command.
I've tried a lot of variations to get the data but all are returning empty arrays.
I don't really have any training in XPath so I'm not sure if my syntax is just off somewhere or what, but I'd really appreciate any help on getting this command to return the objects I'm looking for. Thanks for taking the time to read this.
Code:
Here are some of the attempts that didn't work:
response.xpath("//div[#class='g5-component--mlb-scores__game-wrapper']")
response.xpath("//div[#class='g5-component]")
response.xpath("//li[#class='mlb-scores__list-item mlb-scores__list-item--game']")
response.xpath("//li[#class='mlb-scores__list-item']")
response.xpath("//div[#!data-game-pk-id > 0]")'
response.xpath("//div[contains(#class, 'g5-component')]")
Expected and Actual Results:
I want an XPath command that returns an array containing a selector object for each game on the mlb.com/scores page.
So far I've only been able to get generic results that aren't actually what I want (I can get a selector that returns the whole page by just leaving out the predicates, but whenever I try to be more specific I end up with an empty array).
So for all my attempts I either get the wrong objects or an empty array.
You should always check the HTML source code (Ctrl+U in a browser) for the data you need. For the MLB page you'll find that the content you want to parse is loaded dynamically using JavaScript.
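You can see this by running one of those XPaths against the raw HTML the spider actually receives. A quick sketch using requests and parsel (the selector library Scrapy uses internally):

import requests
from parsel import Selector

# Fetch the static HTML, i.e. what Scrapy sees before any JavaScript runs.
# A User-Agent header may be needed if the site rejects the default one.
html = requests.get(
    "https://www.mlb.com/scores",
    headers={"User-Agent": "Mozilla/5.0"},
).text
sel = Selector(text=html)

# Same predicate as one of the attempts above; a count of 0 confirms the
# game markup is injected later by JavaScript.
print(len(sel.xpath("//div[contains(@class, 'g5-component')]")))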
You can try Scrapy-Splash to render the target content from your start_urls, or you can find the direct HTTP request used to fetch the information you want (using the Network tab of Chrome Developer Tools) and parse the JSON it returns:
https://statsapi.mlb.com/api/v1/schedule?sportId=1,51&date=2019-06-26&gameTypes=E,S,R,A,F,D,L,W&hydrate=team(leaders(showOnPreview(leaderCategories=[homeRuns,runsBattedIn,battingAverage],statGroup=[pitching,hitting]))),linescore(matchup,runners),flags,liveLookin,review,broadcasts(all),decisions,person,probablePitcher,stats,homeRuns,previousPlay,game(content(media(featured,epg),summary),tickets),seriesStatus(useOverride=true)&useLatestGames=false&language=en&leagueId=103,104,420
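For example, here is a minimal sketch of the JSON approach using requests; the field names (dates, games, teams, gamePk) are assumptions based on what that endpoint currently returns and may change:

import requests

# Trimmed-down version of the schedule URL above; the full query string
# from the answer works as well.
url = "https://statsapi.mlb.com/api/v1/schedule"
params = {"sportId": 1, "date": "2019-06-26"}

data = requests.get(url, params=params).json()

for day in data.get("dates", []):
    for game in day.get("games", []):
        away = game["teams"]["away"]["team"]["name"]
        home = game["teams"]["home"]["team"]["name"]
        print(away, "at", home, "- gamePk:", game["gamePk"])

Inside a Scrapy spider the equivalent is to yield a scrapy.Request for that URL and call json.loads(response.text) in the callback.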

JMeter HTTP Simple Table Server (plugin) ADD unique line does not work

http://localhost:9191/sts/ADD?FILENAME=/tmp/newEventsFlag.csv&LINE=7ddb876ac39c485a&ADD_MODE=FIRST&UNIQUE=TRUE
If I run this multiple times and then do a GET on http://localhost:9191/sts/LENGTH?FILENAME=/tmp/newEventsFlag.csv,
I can see that the reported length is equal to the number of POSTs I made.
So the UNIQUE parameter does not work, I guess? Or am I missing something here?
If it does not work, is it possible to get the source code so I can fix this?
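For reference, a minimal reproduction sketch in Python (assuming STS is running locally as in the URLs above, and noting that the plugin expects ADD to be sent as an HTTP POST):

import requests

BASE = "http://localhost:9191/sts"

params = {
    "FILENAME": "/tmp/newEventsFlag.csv",
    "LINE": "7ddb876ac39c485a",
    "ADD_MODE": "FIRST",
    "UNIQUE": "TRUE",
}

# Send the same line several times; with UNIQUE=TRUE only the first ADD
# should actually append it.
for _ in range(5):
    resp = requests.post(BASE + "/ADD", data=params)
    print(resp.status_code, resp.text.strip())

# If UNIQUE worked, LENGTH should report 1, not 5.
resp = requests.get(BASE + "/LENGTH",
                    params={"FILENAME": "/tmp/newEventsFlag.csv"})
print(resp.text.strip())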
This is a bug in the Simple Table Server, version 2.2.
It should be fixed in 2.3.

Selenium RC - Unable to click on a link using XPATH / CSS path / //a[contains(text(),'abc')]

Selenium IDE is able to recognise the ID, XPath, and CSS path for a link, but Selenium RC is unable to click on the link using XPath, CSS path, or ID. I have also used contains(text(), ...), but to no avail. Please find below the code that I am currently executing in the Eclipse IDE.
selenium.open("https://abc.com");
selenium.type("UserName", "123456");
selenium.click("xpath=//form[#id='loginForm']/table/tbody/tr[7]/td/input");
selenium.click("xpath=//a[#id='_ebg9dd']");
// selenium.click("xpath=//a[contains(text(), 'Request Form')]");
Can someone please suggest an alternative, or correct the code if there is any discrepancy?
Are you converting it using the IDE? It seems unusual to have an id on your a-tag, but if you want to try 'contains', here is an example of an XPath selector (for Java JUnit 4 RC) that works for me when I just tried it:
selenium.click("//div[#class='span5 footer-links']/ul/li/a[contains(text(), 'Submit your page')]");
I also wonder about the fact that you appear to be typing text into a 'UserName' field and then following it with two clicks. Do you not need to input something into another field, or wait for something after the first click? It just seems like an odd series of events (it may not be; I obviously don't know the specifics of what you're doing).
Try this :
selenium.click("link=name_of_link_present_on_page");

How to export scrubyt extractor?

I've written a scrubyt extractor based on the 'learning' technique - that is, specifying the current text on the page and getting it to work out the XPath expressions itself. However, I now want to export the extractor so that it can be used even when the page has changed.
The documentation for scrubyt seems to be all over the place now, but from what I can find I should be able to add the line extractor.export(__FILE__) and it should work. It doesn't - I just get an error saying that there is the wrong number of arguments for export (it should have 0). I've tried it without any arguments and it still fails.
I would ask on the scrubyt forum, but it seems like no-one's been there for ages!
Any ideas what to do here?
I just had the same problem and tried "puts google_data.export()" (trying to get some stuff from Google).
This gave me the following:
=== Extractor tree ===
export() is not working at the moment, due to the removal of
ParseTree, ruby2ruby and RubyInline.
For now, in case you are using examples, you can replace them by hand
based on the output below.
So if your pattern in the learning extractor looks like
book "Ruby Cookbook"
and you see the following below:
[book] /table[1]/tr/td[2]
then replace "Ruby Cookbook" with "/table[1]/tr/td[2]" (and all the
other XPaths) and you are ready!
[link] /body/div/div/div/div/div/ol/li/h3/a
which gave me the XPath I was looking for.
The scrubyt version is 0.4.06.
