Not able to identify element on Google Home page - xpath

I am trying to enter text into the search box on the Google home page. I can identify it with the XPath `//*[@id="q"]`, but I want to reach that element through a parent-child relationship, so I am using the XPath below:
`(//form[@id='a'])/div[2]/input[@id='q']`
But when I run the script, it throws a "no such element" error. Can someone please tell me what I am missing in this XPath?

Please post your HTML so we can diagnose the issue better. Meanwhile, try this: `.//form[@id='tsf']//input[@name='q']`.
Hope it helps.
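A minimal Selenium sketch (Python bindings) using that relative XPath; the `tsf`/`q` attribute values are taken from this thread and Google's markup changes often, so treat them as assumptions:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.google.com")

# Anchor on the search form, then descend to the input -
# the id 'tsf' and name 'q' are the values suggested above and may change.
search_box = driver.find_element_by_xpath(".//form[@id='tsf']//input[@name='q']")
search_box.send_keys("hello world")
```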

IMPORTXML of a span element fails with #N/A

I finally decided to sign up to Stack Overflow because of this, so I'd be super grateful for a solution!
I'm trying to get a number out of a `<span>` element. Here is an image of the data box I'm trying to scrape. It's on this page: https://de.marketscreener.com/kurs/aktie/SNOWFLAKE-INC-112440376/analystenerwartungen/
The relevant XPath is `//*[@id="highcharts-0oywbsk-200"]/div[2]/div/span/span`
I'm trying: `=IMPORTXML("https://de.marketscreener.com/kurs/aktie/SNOWFLAKE-INC-112440376/analystenerwartungen/", "//div[2]/div/span/span")`
I'm ignoring the id element; this works pretty well with many elements on the same page, but in this case not at all. I ignore the id because I can't use it, as it changes on every page. Is this OK?
Google Sheets always gives me a #N/A error. Any idea how to scrape that number?
Disabling JavaScript in your browser reveals what you can actually scrape: the figure you want is rendered by JavaScript after the page loads, so it is not present in the static HTML that IMPORTXML fetches, which is why you get #N/A.
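You can confirm this outside of Sheets by fetching the raw HTML and evaluating the same XPath against it; a minimal sketch with `requests` and `lxml` (the library choice is mine, any HTTP client and XPath engine would do):

```python
import requests
from lxml import html

url = "https://de.marketscreener.com/kurs/aktie/SNOWFLAKE-INC-112440376/analystenerwartungen/"
tree = html.fromstring(requests.get(url).content)

# The same XPath Sheets would evaluate against the static document.
print(tree.xpath('//div[2]/div/span/span'))  # likely [] - the value is injected by JS
```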

Setting the correct xpath

I'm trying to set the right XPath for use with RSelenium, but I'm not very experienced in this area, so any help would be much appreciated.
Since I'm not allowed to post pictures yet, I have added a link to a screenshot of the HTML:
The html
I need R to scrape the dates (28-10-2020 - 13-11-2020), but so far I have not been able to set the correct XPath when using html_nodes.
I'm trying to scrape from sites like this one: https://www.boligsiden.dk/adresse/topperne-9-3-33-2620-albertslund-01650532___9__3__33
I usually do this in Python rather than R.
As you can see in this image, when you right-click on the element concerned you get a drop-down menu with an option to copy an XPath to the element.
That said, the site layout can change, so a copied full XPath is only a good option in the short run. I would rather prefer something like `driver.find_element_by_xpath('//button[contains(text(), "Login")]').click()`
In your case that would be `find_element_by_xpath("//*[contains(@class, 'u-pb-4 u-block')]")`
I hope this helps; it is mostly the same across different languages.
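Putting that together, a minimal Python Selenium sketch; the `u-pb-4 u-block` class string comes from the question's screenshot and may not match the live page:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.boligsiden.dk/adresse/topperne-9-3-33-2620-albertslund-01650532___9__3__33")

# Match on a class substring instead of a brittle copied full XPath.
# The class names are assumptions taken from the screenshot in the question.
element = driver.find_element_by_xpath("//*[contains(@class, 'u-pb-4 u-block')]")
print(element.text)  # expected to contain the date range, e.g. 28-10-2020 - 13-11-2020
```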

Struggling to write xpath / find a unique ID for an element

For web automation purposes, I need a unique ID for a text/label element, which has some white space around it.
Please check the attached screenshots:
one screenshot shows the best XPath I could write, but it doesn't solve the problem;
another screenshot shows my idea, which was not successful.
As per the screenshots, I need an XPath for the text "Suresh Kumar YOGESH".
Here is the xpath that you can use.
//div[@class='text_head_detail_view_name_holder']/h1/text()[2]
Screenshot: with sample code
To complete supputuri's answer, another option (which might be more robust):
`//span[contains(.,"Offenders")]/following::text()[1]`
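Note that WebDriver's findElement cannot return a bare text node, so XPaths ending in text() are easiest to evaluate with an HTML parser on the page source. A minimal sketch with lxml; the markup here is a hypothetical stand-in reconstructed from the screenshots:

```python
from lxml import html

# In a real run this would come from driver.page_source; the structure below
# is an assumed example with whitespace-padded text nodes, as in the question.
page_source = """
<div class="text_head_detail_view_name_holder">
  <h1>
    <span>Offenders</span>
    Suresh Kumar YOGESH
  </h1>
</div>
"""
tree = html.fromstring(page_source)

# text()[1] is the whitespace before the <span>; text()[2] holds the name.
name = tree.xpath("//div[@class='text_head_detail_view_name_holder']/h1/text()[2]")
print(name[0].strip())  # Suresh Kumar YOGESH
```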

Confused about scrapy and Xpath

I am trying to scrape some data from the following website: https://xrpcharts.ripple.com/
The data I am interested in is Total XRP, which you can see immediately below or to the side (depending on your browser) of the circle diagram. So what I first did was inspect the element I am interested in. I see that it is inside `<div class="stat">`, in `<span ng-bind="totalXRP | number:2" class="ng-binding">99,993,056,930.18</span>`.
The number 99,993,056,930.18 is what I am interested in.
So I started in a scrapy shell and wrote:
fetch("https://xrpcharts.ripple.com")
I then used chrome to copy the Xpath by right clicking on that place of HTML code, the result chrome gave me was:
/html/body/div[5]/div[3]/div/div/div[2]/div[3]/ul/li[1]/div/span
Then I used the Xpath command to extract the text:
response.xpath('/html/body/div[5]/div[3]/div/div/div[2]/div[3]/ul/li[1]/div/span/text()').extract()
but this gave me an empty list []. I really do not understand what I am doing wrong here. I think I am making an obvious mistake but I don't see it. Thanks in advance!
The bottom line is: you cannot expect the page you see in the browser to be the same page Scrapy would download and have available to work with. Scrapy is not a browser.
This page is quite dynamic and complex and is constructed with the help of multiple asynchronous requests bringing in both the logic and the data. There is also JavaScript executed in the browser that plays an important role in forming and supporting the HTML document object tree.
Scrapy does not have all these things, the thing you get when you do fetch() is just the very first initial "bare bones" HTML page without all the "dynamic content".
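A quick way to see exactly what Scrapy downloaded is the shell's view(response) helper, which opens the fetched HTML in your browser; a sketch of such a session, reusing the XPath from the question:

```python
# Inside `scrapy shell`:
fetch("https://xrpcharts.ripple.com")

# Opens the HTML Scrapy actually received in your browser -
# the Total XRP figure will be missing from this static page.
view(response)

# Returns [] because the <span> is filled in by JavaScript at runtime;
# the number arrives via separate asynchronous requests, not in this document.
response.xpath('/html/body/div[5]/div[3]/div/div/div[2]/div[3]/ul/li[1]/div/span/text()').extract()
```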

Xpath for Log-in

I am trying to click on the "Log in" link on the Stack Overflow home page using the XPath shown below, but with no success:
driver.findElement(By.xpath("//a[contains(text(),'log in')]")).click();
Please help me see what I am missing here.
Thanks
Use the following xpath:
//a[#class='login-link'][text()='log in']
I tried it personally and it worked for me. Hope it helps.
The XPath you used in your question returns two web elements, both reading "log in"; that is why Selenium is not able to click it. So use:
//a[@class='login-link'][text()='log in']
Try this:
driver.findElement(By.xpath("//a[@class='login-link' and .='log in']")).click();
The above XPath matches an <a> tag whose class attribute equals "login-link" and whose inner text equals "log in".
General tip: always prefer . over text(). There are very rare situations in which you have to filter an element specifically by using text().
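The difference: `.` compares the element's full string value (all descendant text concatenated), while `text()` only matches direct child text nodes, so it fails when the label sits inside a nested tag. A sketch of the same click using Selenium's Python bindings:

```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://stackoverflow.com")

# .='log in' matches even if the label were wrapped in a child <span>,
# whereas text()='log in' would require a direct child text node.
driver.find_element_by_xpath("//a[@class='login-link' and .='log in']").click()
```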
