What might be the reason for Selenium finding an XPath sometimes, but failing to identify it at other times?

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()  # assuming a Chrome session; any WebDriver instance works

def linkedin_login(company_name, username, password):
    driver.get('https://linkedin.com/')
    driver.find_element(By.XPATH, '//*[@id="session_key"]').send_keys(username)
    driver.find_element(By.XPATH, '//*[@id="session_password"]').send_keys(password)
    driver.find_element(By.XPATH, "//button[@class='sign-in-form__submit-button']").click()
    # def company_info(company_name):
    element = driver.find_element(By.CSS_SELECTOR, "#global-nav-typeahead > input")
    element.send_keys(company_name)
    element.send_keys(Keys.ENTER)
    driver.implicitly_wait(10)  # seconds
    driver.get(driver.find_element(By.CSS_SELECTOR, ".search-nec__hero-kcard-v2 > a:nth-child(1)").get_attribute("href"))
    driver.implicitly_wait(10)
    people()
With the above code I log into LinkedIn and fetch the LinkedIn page of a given company. After getting the page, I try to get the employee data using the people function shown below:
def people():
    driver.implicitly_wait(10)
    driver.get(driver.find_element(By.XPATH, "/html/body/div[5]/div[3]/div/div[2]/div/div[2]/main/div[1]/section/div/div[2]/div[1]/div[2]/div/a").get_attribute("href"))
    driver.implicitly_wait(10)
    people = driver.find_element(By.XPATH, "/html/body/div[4]/div[3]/div[2]/div/div[1]/main/div/div/div[2]/div/ul")
    people_data = people.find_elements(By.TAG_NAME, "li")
    for i in people_data:
        print(i.text)
In this function I am trying to access the link to the employee data, and that is where the problem lies. On line 2 of the people function I try to get the link; for some reason I sometimes get it (not too frequently!), but most of the time I get an error saying the XPath was not found.
I didn't know how to attach an HTML page, so I am attaching the link instead: https://www.linkedin.com/company/google/
1. I tried an implicit wait, assuming that the program was trying to access the XPath while the page was still loading.
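A minimal sketch of an explicit wait, as an alternative worth noting: implicitly_wait only sets a polling timeout for find_element calls, while an absolute XPath such as /html/body/div[5]/… breaks whenever LinkedIn adds or removes a wrapper div, so waiting on a stable relative locator is the usual remedy. The CSS selector below is a placeholder guess, not LinkedIn's actual markup:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 15 seconds for the employees link to be present, then follow it.
# The href fragment is an assumption; inspect the real page and substitute
# a relative locator that matches your target element.
wait = WebDriverWait(driver, 15)
link = wait.until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, "a[href*='/search/results/people/']")
    )
)
driver.get(link.get_attribute("href"))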

Related

Cypress test passes single form input field but ignores the following one(s)

I’m trying to test a Stripe form with 3 input fields in Cypress. I found an example that works for testing a single input that takes all the payment info (https://medium.com/@chipomapondera/hi-michael-98e432948028).
My version passes on inputting the CC but fails on the next input(s). My code is below:
it('checks user can support the Creator', () => {
  cy.get('button[class="buttons__FollowButton-sc-10ti9z2-0 huoUmA"]').click()
  cy.wait(4000)
  cy.get('body')
    .should('contain', 'Join this community')
  cy.get('button[class="styledComponents__SubscribeButton-g42pit-3 kUgWbq"]').click()
  cy.getWithinIframe('[name="cardnumber"]').type('4242424242424242')
    .getWithinIframe('[name="exp-date"]').type('1232')
    .getWithinIframe('[name="cvc"]').type('987')
})
It doesn’t seem to like the following after it has typed the card number:
cy.getWithinIframe('[name="exp-date"]').type('1232')
cy.getWithinIframe('[name="cvc"]').type('987')
The error I receive is:
[screenshot: Cypress error]
I can see a typo in the type call: there is no closing single quote in this line of the test: .getWithinIframe('[name="exp-date"]').type('1232. Could you please try .getWithinIframe('[name="exp-date"]').type('1232'), or maybe try it without quotes: .getWithinIframe('[name="exp-date"]').type(1232).
I followed the Medium article you shared and ran into the same issue as you. The cause of this problem is that Stripe creates multiple iframes, and the method from the article only ever gets the first one.
So a very simple solution is to pass the id of the div containing the iframe to our getWithinIframe function. The function will now look like this:
Cypress.Commands.add('getWithinIframe', (iframeSelector, targetElement) =>
  cy.get(`#${iframeSelector} iframe`).iframeLoaded().its('document').getInDocument(targetElement));
And call it like so:
cy.getWithinIframe('cardNumberElement','[name="cardnumber"]').type(1212123);
Hope this helps anybody who is facing the same issues.

Displaying JSON output from an API call in Ruby using VScode

For context, I'm someone with zero experience in Ruby - I just asked my Senior Dev to copy-paste me some of his Ruby code so I could try to work with some APIs that he ended up putting off because he was too busy.
So I'm using zoho_hub, an API wrapper for the Zoho APIs (https://github.com/rikas/zoho_hub/blob/master/README.md).
My IDE is VSCode.
I execute the entire script, and I'm faced with just this:
[Done] exited with code=0 in 1.26 seconds
The API is supposed to return a paginated list of records, but I don't see anything outputted in VSCode, despite the fact that no error is being reflected. The last 2 lines of my code are:
ZohoHub.connection.get 'Leads'
p "testing"
I use the dummy string "testing" to make sure that it's being executed up till the very end, and it does get printed.
This has been baffling me for hours now - is my response actually being outputted somewhere, and I just can't see it??
Ruby does not print anything unless you tell it to. For debugging there is a pretty printing method available called pp, which is decent for trying to print structured data.
In this case, if you want to output the records that your get method returns, you would do:
pp ZohoHub.connection.get 'Leads'
To get the next page you can look at the source code, and you will see the get request has an additional Hash parameter.
def get(path, params = {})
Then you have to read the Zoho API documentation for get, and you will see that the page is requested using the page param.
Therefore we can finally piece it together:
pp ZohoHub.connection.get('Leads', page: NNN)
Where NNN is the number of the page you want to request.

Capybara find_field & has_selector not working

I am using Capybara and getting errors from the finders 'find_field' & 'has_selector'.
I have tried using them like this:
page = visit "http://www.my-url-here.com"
next if page.has_selector?('#divOutStock')
page.find_field('#txtQty').set('9999')
has_selector returns the error: "NoMethodError: undefined method `has_selector?' for {"status"=>"success"}:Hash"
find_field cannot find the field. (It is present on the page and is not a hidden field.)
I have also tried using fill_in to set the field value, but that doesn't work either.
How can I get this to work with Capybara?
You have a couple of issues in your code:
1) page is just an alias for Capybara.current_session. If you assign to it, you're creating a local variable and it's no longer a session.
2) find_field takes a locator (http://www.rubydoc.info/gems/capybara/Capybara/Node/Finders#find_field-instance_method), which is matched against the id, name, or label text. It does not take a CSS selector.
Your code should be
page.visit "http://www.my-url-here.com"
next if page.has_selector?('#divOutStock')
page.find_field('txtQty').set('9999')
and you could rewrite the last line as
page.fill_in('txtQty', with: '9999')
Also you should note that (if using a JS capable driver) has_selector? will wait up to Capybara.default_max_wait_time for the #divOutStock to appear. If it's not usually going to be there and you want to speed things up a bit you could do something like
page.visit "http://www.my-url-here.com"
page.assert_text('Something always on page once loaded') #ensure/wait until page is loaded
next if page.has_selector?('#divOutStock', wait: 0) # check for existence and don't retry
page.fill_in('txtQty', with: '9999')

Logging Into Google To Scrape A Private Google Group (over HTTPS)

I'm trying to log into Google, so that I can scrape & migrate a private google group.
It doesn't seem to log in over SSL. Any ideas appreciated. I'm using Mechanize and the code is below:
group_signin_url = "https://login page to google, with referrer url to a private group here"
user = ENV['GOOGLE_USER']
password = ENV['GOOGLE_PASSWORD']
scraper = Mechanize.new
scraper.user_agent = Mechanize::AGENT_ALIASES["Linux Firefox"]
scraper.agent.http.verify_mode = OpenSSL::SSL::VERIFY_NONE
page = scraper.get group_signin_url
google_form = page.form
google_form.Email = user
google_form.Passwd = password
group_page = scraper.submit(google_form, google_form.buttons.first)
pp group_page
I worked with Ian (the OP) on this problem and just felt we should close this thread with some answers based on what we found when we spent some more time on the problem.
1) You can't scrape a Google Group with Mechanize. We managed to get logged in, but the content of the Google Group pages is all rendered in-browser, meaning that HTTP requests such as those issued by Mechanize come back with a few links and no actual content.
We found that we could get page content by the use of Selenium (we used Selenium in Firefox, using the Ruby bindings).
2) The HTML element IDs/classes in Google Groups are obfuscated, but we found that these Selenium commands will pull out the bits you need (until Google changes them):
message snippets (click on them to expand messages):
find_elements(:class, 'GFP-UI5CCLB')
elements with the name of the author:
find_elements(:class, 'GFP-UI5CA1B')
elements with the content of the post:
find_elements(:class, 'GFP-UI5CCKB')
elements containing the date:
find_elements(:class, 'GFP-UI5CDKB') (then use attribute[:title] for a full-length date string)
3) I have some Ruby code here which scrapes the content programmatically and uploads it into a Discourse forum (which is what we were trying to migrate to).
It's hacky, but it kind of works. I recently migrated two commercially important Google Groups using this script. I'm up for taking on 'We Scrape Your Google Group' type work; please PM me.

The "expected_title" procedure throws wrong error on expected title

I have some issues with "expected_title" procedure from watir-page-helper.
It throws an error claiming the current web page has a different title than the expected one, even though the title is correct:
RuntimeError: Expected title 'Some title' instead of 'Some Title'.
This happens randomly, and my tests fail frequently on different pages. The website I am working on loads in a reasonable amount of time, so I don't think it is a page-loading issue.
To initialize the pages I am using the following method:
@new_mail_editor = Module::Page.new(@browser, false)
This is for pages that are opened when accessing links.
Does someone have a clue why this is happening?
Is there a way to dodge this issue?
Thank you.
Watir-page-helper has been end-of-lifed; you should try the page-object gem instead.
In the meantime I found out what I was doing wrong: when initializing the browser and checking the title, I was using has_expected_title? instead of expected_title. It seems I wasn't using the first method correctly.
Now everything works great.
