Trying to get table rows with Scrapy XPath

I have some HTML that looks like the screenshot. I want to get the table rows. I have:
for table_row in response.selector.xpath("//*[@id = 'ctl00_ContentPlaceHolder1_CaseDetailParties1_gvParties']"):
    print table_row
In the command line I tried:
>>> table_row
Out[5]: <Selector xpath="//*[@id = 'ctl00_ContentPlaceHolder1_CaseDetailParties1_gvParties']" data=u'<table class="ParamText" cellspacing="0"'>
>>> table_row.xpath('/tbody')
Out[6]: []
>>> table_row.xpath('//tbody')
Out[7]: []
Why am I unable to select the tbody?

tbody is generated by the browser; you don't get it with the Scrapy downloader. Just go straight to the tr elements:
table_row.xpath('.//tr')
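For completeness, a minimal sketch of iterating the rows and pulling cell text (the table id is taken from the question; the td structure is an assumption, adjust it to the actual markup):

table = response.xpath("//*[@id = 'ctl00_ContentPlaceHolder1_CaseDetailParties1_gvParties']")
for table_row in table.xpath('.//tr'):
    # './td//text()' collects the text of every cell in this row
    cells = table_row.xpath('./td//text()').extract()
    print(cells)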

Related

xpath could not recognize predicate for a tag

I'm trying to use Scrapy XPath to scrape a page, but it seems it cannot capture tags with predicates when I use a for loop:
# This package will contain the spiders of your Scrapy project
from cunyfirst.items import CunyfirstSectionItem
import scrapy
import json

class CunyfristsectionSpider(scrapy.Spider):
    name = "cunyfirst-section-spider"
    start_urls = ["file:///Users/haowang/Desktop/section.htm"]

    def parse(self, response):
        url = response.url
        yield scrapy.Request(url, self.parse_page)

    def parse_page(self, response):
        n = -1
        for section in response.xpath("//a[contains(@name,'MTG_CLASS_NBR')]"):
            print(response.xpath("//a[@name ='MTG_CLASSNAME$10']/text()"))
            n += 1
            class_num = section.xpath('text()').extract_first()
            # print(class_num)
            classname = "MTG_CLASSNAME$" + str(n)
            date = "MTG_DAYTIME$" + str(n)
            instr = "MTG_INSTR$" + str(n)
            print(classname)
            class_name = response.xpath("//a[@name = classname]/text()")
I am looking for a tags with name equal to "MTG_CLASSNAME$" + str(n), with n being 0, 1, 2, ..., but I am getting empty output from my XPath query. Not sure why...
PS.
I am basically trying to scrape course and their info from https://hrsa.cunyfirst.cuny.edu/psc/cnyhcprd/GUEST/HRMS/c/COMMUNITY_ACCESS.CLASS_SEARCH.GBL?FolderPath=PORTAL_ROOT_OBJECT.HC_CLASS_SEARCH_GBL&IsFolder=false&IgnoreParamTempl=FolderPath%252cIsFolder&PortalActualURL=https%3a%2f%2fhrsa.cunyfirst.cuny.edu%2fpsc%2fcnyhcprd%2fGUEST%2fHRMS%2fc%2fCOMMUNITY_ACCESS.CLASS_SEARCH.GBL&PortalContentURL=https%3a%2f%2fhrsa.cunyfirst.cuny.edu%2fpsc%2fcnyhcprd%2fGUEST%2fHRMS%2fc%2fCOMMUNITY_ACCESS.CLASS_SEARCH.GBL&PortalContentProvider=HRMS&PortalCRefLabel=Class%20Search&PortalRegistryName=GUEST&PortalServletURI=https%3a%2f%2fhome.cunyfirst.cuny.edu%2fpsp%2fcnyepprd%2f&PortalURI=https%3a%2f%2fhome.cunyfirst.cuny.edu%2fpsc%2fcnyepprd%2f&PortalHostNode=ENTP&NoCrumbs=yes
with filter applied: Kingsborough CC, fall 18, BIO
Thanks!
Well... I've visited the website you put in the question description, used element inspection, searched for "MTG_CLASSNAME", and got 0 matches...
So I will give you some tools:
In your settings.py, set:
LOG_FILE = "log.txt"
LOG_STDOUT = True
then print the response body (response.body) where appropriate (at the top of the parse_page function in this case) and search for your data in log.txt.
Check there for the data you are looking for.
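For example, a quick sketch of where that print would go:

def parse_page(self, response):
    # with LOG_STDOUT = True this output ends up in log.txt
    print(response.body)
    # ... rest of the parsing logic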
If there is, use https://www.freeformatter.com/xpath-tester.html (or similar) to check your XPath statement.
In addition, change for section in response.xpath("//a[contains(@name,'MTG_CLASS_NBR')]"):
to for section in response.xpath("//a[contains(@name,'MTG_CLASS_NBR')]").extract():; this will raise an error when you do get the data you are looking for.
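One more possible cause worth flagging (my reading of the code, not part of the answer above): in response.xpath("//a[@name = classname]/text()"), classname is parsed as an XPath node name, not as the Python variable, so the predicate never matches. A sketch of two ways to pass the value in:

# Option 1: XPath variables (supported by Scrapy/parsel selectors)
class_name = response.xpath("//a[@name = $cn]/text()", cn=classname)
# Option 2: plain string interpolation
class_name = response.xpath("//a[@name = '%s']/text()" % classname)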

Scrapy: How to get a correct selector

I would like to select the following text:
Bold normal Italist
I need to select and get: Bold normal Italist.
The HTML is:
<a><strong>Bold</strong> normal <i>Italist</i></a>
However, a/text() yields
normal
only. Does anyone know a fix? I'm testing Bing crawling, and the bold text is in a different position depending on the query.
You can use a//text() instead of a/text() to get all text items.
# -*- coding: utf-8 -*-
from scrapy.selector import Selector
doc = """
<a><strong>Bold</strong> normal <i>Italist</i></a>
"""
sel = Selector(text=doc, type="html")
result = sel.xpath('//a/text()').extract()
print result
# >>> [u' normal ']
result = u''.join(sel.xpath('//a//text()').extract())
print result
# >>> Bold normal Italist
You can try to use
a/string()
or
normalize-space(a)
which returns Bold normal Italist
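For instance, reusing the Selector from the snippet above (a quick sketch):

result = sel.xpath('normalize-space(//a)').extract_first()
print(result)
# >>> Bold normal Italist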

Scrapy Pagination Fails

Hello, this is my first ever post.
I am trying to make a web spider that will follow the links on invia.cz and copy all the hotel titles.
import scrapy

y = 0

class invia(scrapy.Spider):
    name = 'Kreta'
    start_urls = ['https://dovolena.invia.cz/?d_start_from=13.01.2017&sort=nl_sell&page=1']

    def parse(self, response):
        for x in range(1, 9):
            yield {
                'titles': response.css("#main > div > div > div > div.col.col-content > div.product-list > div > ul > li:nth-child(%d)>div.head>h2>a>span.name::text" % (x)).extract(),
            }

        if response.css('#main > div > div > div > div.col.col-content > div.product-list > div > p > a.next').extract_first():
            y = y + 1
            go = ["https://dovolena.invia.cz/d_start_from=13.01.2017&sort=nl_sell&page=%d" % y]
            print go
            yield scrapy.Request(
                response.urljoin(go),
                callback=self.parse
            )
On this website, pages are loaded with AJAX, so I change the value of the URL manually, incrementing it by one only if the next button appears on the page.
In the Scrapy shell, when I test whether the button appears, the condition holds, but when I start the spider it only crawls the first page.
It's my first spider ever, so thanks in advance.
Your usage of the "global" y variable is not only peculiar, it won't work either.
You're using y to count how many times parse was called. Ideally you don't want to access anything outside of the function's scope, so you can achieve the same thing by using the request.meta attribute:
from scrapy import Request

def parse(self, response):
    y = response.meta.get('index', 1)  # default is page 1
    y += 1
    # ...
    # next page
    url = 'http://example.com/?p={}'.format(y)
    yield Request(url, self.parse, meta={'index': y})
Regarding your pagination issue: your next-page URL css selector is incorrect, since the <a> node you're selecting doesn't have an absolute href attached to it. This fix also makes your y issue obsolete. To solve it, try:
import re

def parse(self, response):
    next_page = response.css("a.next::attr(data-page)").extract_first()
    # replace the "page=1" part of the url with the next number
    url = re.sub(r'page=\d+', 'page=' + next_page, response.url)
    yield Request(url, self.parse, meta={'index': y})
EDIT: Here's the whole working spider:
import scrapy
import re

class InviaSpider(scrapy.Spider):
    name = 'invia'
    start_urls = ['https://dovolena.invia.cz/?d_start_from=13.01.2017&sort=nl_sell&page=1']

    def parse(self, response):
        names = response.css('span.name::text').extract()
        for name in names:
            yield {'name': name}

        # next page
        next_page = response.css("a.next::attr(data-page)").extract_first()
        url = re.sub(r'page=\d+', 'page=' + next_page, response.url)
        yield scrapy.Request(url, self.parse)
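A caveat worth adding (my note, not part of the original answer): on the last page extract_first() returns None, and 'page=' + next_page would then raise a TypeError, so a small guard stops the spider cleanly:

next_page = response.css("a.next::attr(data-page)").extract_first()
if next_page:  # the button is absent on the last page
    url = re.sub(r'page=\d+', 'page=' + next_page, response.url)
    yield scrapy.Request(url, self.parse)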

XPath - extracting text between two nodes

I'm encountering a problem with my XPath query. I have to parse a div which is divided into an unknown number of "sections". Each of these is separated by an h5 with a section name. The list of possible section titles is known, and each of them can occur only once. Additionally, each section can contain some br tags. So, let's say I want to extract the text under "SecondHeader".
HTML
<div class="some-class">
<h5>FirstHeader</h5>
text1
<h5>SecondHeader</h5>
text2a<br>
text2b
<h5>ThirdHeader</h5>
text3a<br>
text3b<br>
text3c<br>
<h5>FourthHeader</h5>
text4
</div>
Expected result (for SecondHeader)
['text2a', 'text2b']
Query #1
//text()[following-sibling::h5/text()='ThirdHeader']
Result #1
['text1', 'text2a', 'text2b']
It's obviously a bit too much, so I've decided to restrict the result to the content between the selected header and the header before it.
Query #2
//text()[following-sibling::h5/text()='ThirdHeader' and preceding-sibling::h5/text()='SecondHeader']
Result #2
['text2a', 'text2b']
The yielded results meet the expectations. However, this can't be used: I don't know whether SecondHeader/ThirdHeader will exist in the parsed page or not. The query must use only one section title.
Query #3
//text()[following-sibling::h5/text()='ThirdHeader' and not[preceding-sibling::h5/text()='ThirdHeader']]
Result #3
[]
Could you please tell me what I am doing wrong? I've tested it in Google Chrome.
If all h5 elements and text nodes are siblings, and you need to group by section, a possible option is simply to select text nodes by the count of h5 elements that precede them.
Example using lxml (in Python)
>>> import lxml.html
>>> s = '''
... <div class="some-class">
... <h5>FirstHeader</h5>
... text1
... <h5>SecondHeader</h5>
... text2a<br>
... text2b
... <h5>ThirdHeader</h5>
... text3a<br>
... text3b<br>
... text3c<br>
... <h5>FourthHeader</h5>
... text4
... </div>'''
>>> doc = lxml.html.fromstring(s)
>>> doc.xpath("//text()[count(preceding-sibling::h5)=$count]", count=1)
['\n text1\n ']
>>> doc.xpath("//text()[count(preceding-sibling::h5)=$count]", count=2)
['\n text2a', '\n text2b\n ']
>>> doc.xpath("//text()[count(preceding-sibling::h5)=$count]", count=3)
['\n text3a', '\n text3b', '\n text3c', '\n ']
>>> doc.xpath("//text()[count(preceding-sibling::h5)=$count]", count=4)
['\n text4\n']
>>>
You should be able to just test the first preceding sibling h5...
//text()[preceding-sibling::h5[1][normalize-space()='SecondHeader']]
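Checked against the lxml document from the previous answer, this yields the question's expected result (a quick sketch, with whitespace stripped for readability):

>>> [t.strip() for t in doc.xpath("//text()[preceding-sibling::h5[1][normalize-space()='SecondHeader']]")]
['text2a', 'text2b']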

Capybara, rspec- How to find text anywhere on page

There are multiple ways to find it, but I want to do this in a specific manner. Here it is:
To get an element with some text in it, my framework creates an XPath in this manner:
@xpath = "//h1[contains(text(), '[the-text-i-am-searching-for]')]"
Then it executes:
find(:xpath, @xpath).visible?
Now, in a similar format, I want to create an XPath which looks for text anywhere on the page and can then be used in find(:xpath, @xpath).visible? to return true or false.
To give a little more context:
My HTML paragraph looks something like this:
<blink><p>some text here <b><u>some bold and underlined text here</u></b> again some text <a>Learn more</a> [the-text-i-am-searching-for]</p></blink>
but if I try to find it using find(:xpath, @xpath) where my xpath is
@xpath = "//p[contains(text(), '[the-text-i-am-searching-for]')]"
it fails.
Try replacing "//p[contains(text(), '[the-text-i-am-searching-for]')]" with "//p[contains(., '[the-text-i-am-searching-for]')]"
I don't know your environment, but in Python with lxml it works:
>>> import lxml.etree
>>> doc = lxml.etree.HTML("""<blink><p>some text here <b><u>some bold and underlined text here</u></b> again some text <a>Learn more</a> [the-text-i-am-searching-for]</p></blink>""")
>>> doc.xpath('//p[contains(text(), "[the-text-i-am-searching-for]")]')
[]
>>> doc.xpath('//p[contains(., "[the-text-i-am-searching-for]")]')
[<Element p at 0x1c1b9b0>]
>>>
The context node . will be converted to a string to match the signature boolean contains(string, string) (http://www.w3.org/TR/xpath/#section-String-Functions)
>>> doc.xpath('string(//p)')
'some text here some bold and underlined text here again some text Learn more [the-text-i-am-searching-for]'
>>>
Consider these variations:
>>> doc.xpath('//p')
[<Element p at 0x1c1b9b0>]
>>> doc.xpath('//p/*')
[<Element b at 0x1e34b90>, <Element a at 0x1e34af0>]
>>> doc.xpath('string(//p)')
'some text here some bold and underlined text here again some text Learn more [the-text-i-am-searching-for]'
>>> doc.xpath('//p/text()')
['some text here ', ' again some text ', ' [the-text-i-am-searching-for]']
>>> doc.xpath('string(//p/text())')
'some text here '
>>> doc.xpath('//p/text()[3]')
[' [the-text-i-am-searching-for]']
>>> doc.xpath('//p/text()[contains(., "[the-text-i-am-searching-for]")]')
[' [the-text-i-am-searching-for]']
>>> doc.xpath('//p[contains(text(), "[the-text-i-am-searching-for]")]')
[]
