Lxml or Xpath content print - xpath

I have the following function
def parseTitle(self, post):
    """
    Returns title string with dots replaced by spaces
    """
    return post.xpath('h2')[0].text.replace('.', ' ')
I would like to see the content of post. I have tried everything I can think of.
How can I properly debug the content? This is a movie website from which I am ripping links and titles, and this function should parse the title.
I am fairly sure the h2 does not exist; how can I print/debug this?

post is an lxml element tree object, isn't it?
So first, you could try:
# import lxml.html # if not yet imported
# (or you can use lxml.etree instead of lxml.html)
print lxml.html.tostring(post)
If it isn't, you should create an element tree object from it first:
post = lxml.html.fromstring(post)
Or maybe the problem is just that you should replace h2 with //h2?
Your question is not very clear.
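If you want a fuller debugging session, here is a minimal sketch (assuming post is already an lxml element and lxml.html is importable):
import lxml.html

# dump the raw markup of post to see what you are actually matching against
print lxml.html.tostring(post, pretty_print=True)

# check whether the XPath finds anything at all
titles = post.xpath('h2')
print len(titles)
if titles:
    print titles[0].text
else:
    # try a relative descendant search in case h2 is not a direct child
    print post.xpath('.//h2')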

Related

Problems with '._ElementUnicodeResult'

While trying to help another user out with some question, I ran into the following problem myself:
The object is to find the country of origin of a list of wines on the page. So we start with:
import requests
from lxml import etree
url = "https://www.winepeople.com.au/wines/Dry-Red/_/N-1z13zte"
res = requests.get(url)
content = res.content
tree = etree.fromstring(content, parser=etree.HTMLParser())
tree_struct = etree.ElementTree(tree)
Next, for reasons I'll get into in a separate question, I'm trying to compare the xpath of two elements with certain attributes. So:
wine = tree.xpath("//div[contains(@class, 'row wine-attributes')]")
country = tree.xpath("//div/text()[contains(., 'Australia')]")
So far, so good. What are we dealing with here?
type(wine),type(country)
>> (list, list)
They are both lists. Let's check the type of the first element in each list:
type(wine[0]),type(country[0])
>> (lxml.etree._Element, lxml.etree._ElementUnicodeResult)
And this is where the problem starts. Because, as mentioned, I need to find the xpath of the first elements of the wine and country lists. And when I run:
tree_struct.getpath(wine[0])
The output is, as expected:
'/html/body/div[13]/div/div/div[2]/div[6]/div[1]/div/div/div[2]/div[2]'
But with the other:
tree_struct.getpath(country[0])
The output is:
TypeError: Argument 'element' has incorrect type (expected
lxml.etree._Element, got lxml.etree._ElementUnicodeResult)
I couldn't find much information about _ElementUnicodeResult, so what is it? And, more importantly, how do I fix the code so that I get an xpath for that node?
You're selecting a text() node instead of an element node. This is why you end up with an lxml.etree._ElementUnicodeResult instead of an lxml.etree._Element.
Try changing your xpath to the following in order to select the div element instead of the text() child node of div...
country = tree.xpath("//div[contains(., 'Australia')]")
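As a quick check, here is a minimal sketch (reusing the tree and tree_struct objects from the question):
# selecting the element node gives something getpath() accepts
country = tree.xpath("//div[contains(., 'Australia')]")
print(type(country[0]))                    # lxml.etree._Element
print(tree_struct.getpath(country[0]))

# alternatively, keep the text() expression and climb back to the parent element;
# an _ElementUnicodeResult remembers its origin via getparent()
text_nodes = tree.xpath("//div/text()[contains(., 'Australia')]")
if text_nodes:
    print(tree_struct.getpath(text_nodes[0].getparent()))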

Scrapy Xpath with text() contains

I'm using scrapy, and I'm trying to look for a span that contains a specific text. I have:
response.selector.xpath('//*[@class="ParamText"]/span/node()')
which returns:
[<Selector xpath='//*[@class="ParamText"]/span/text()' data=u' MILES STODOLINK'>,
 <Selector xpath='//*[@class="ParamText"]/span/text()' data=u'C'>,
 <Selector xpath='//*[@class="ParamText"]/span/text()' data=u' MILES STODOLINK'>]
However when I run:
>>> response.selector.xpath('//*[@class="ParamText"]/span[contains(text(),"STODOLINK")]')
Out[11]: []
Why does the contains function not work?
contains() cannot evaluate multiple nodes at once:
/span[contains(text(),"STODOLINK")]
So, in case there are multiple text nodes within the span, and "STODOLINK" isn't located in the first text node child of the span, then contains() in the above expression won't work. You should apply the contains() check to individual text nodes as follows:
//*[@class="ParamText"]/span[text()[contains(.,"STODOLINK")]]
Or, if "STODOLINK" isn't necessarily located directly within the span (it can be nested within another element in the span), then you can simply use . instead of text():
//*[@class="ParamText"]/span[contains(.,"STODOLINK")]
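To illustrate the multiple-text-node case, here is a small hypothetical sketch using scrapy.Selector, where "STODOLINK" sits in the second text node after a <br>:
import scrapy

example = '<div class="ParamText"><span>C<br>MILES STODOLINK</span></div>'
sel = scrapy.Selector(text=example)

# contains(text(), ...) only looks at the first text node ("C"), so nothing matches
print sel.xpath('//*[@class="ParamText"]/span[contains(text(),"STODOLINK")]').extract()

# checking each text node individually, or the whole string value with ".", does match
print sel.xpath('//*[@class="ParamText"]/span[text()[contains(.,"STODOLINK")]]').extract()
print sel.xpath('//*[@class="ParamText"]/span[contains(.,"STODOLINK")]').extract()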
In my terminal (assuming my example matches your page, though), your code works:
Input
import scrapy
example='<div class="ParamText"><span>STODOLINK</span></div>'
scrapy.Selector(text=example).xpath('//*[@class="ParamText"]/span[contains(text(),"STODOLINK")]').extract()
Output:
['<span>STODOLINK</span>']
Can you clarify what might be different?
I use Scrapy with BeautifulSoup 4. IMO, Soup is easy to read and understand. This is an option if you don't have to use HtmlXPathSelector. Below is an example that finds all links; you can adapt it for 'span'. Hope this helps!
import scrapy
from bs4 import BeautifulSoup
import Item  # placeholder: import your project's Scrapy Item class here

def parse(self, response):
    soup = BeautifulSoup(response.body, 'html.parser')
    print 'Current url: %s' % response.url
    item = Item()
    for link in soup.find_all('a'):
        if link.get('href') is not None:
            url = response.urljoin(link.get('href'))
            item['url'] = url
            yield scrapy.Request(url, callback=self.parse)
            yield item

Substitution in a file name with reStructuredText (Sphinx)?

I want to create several files from a single template, which differ only by a variable name. For example :
(file1.rst):
.. |variable| replace:: 1
.. include:: template.rst
(template.rst):
Variable |variable|
=====================
Image
-------
.. image:: ./images/|variable|-image.png
where of course I have an image called "./images/1-image.png". The substitution of "|variable|" by "1" works well in the title, but not in the image file name, and at compilation I get:
WARNING: image file not readable: ./images/|variable|-image.png
How can I get reST to make the substitution in the image file name too? (If this changes anything, I am using Sphinx.)
There are two problems here: a substitution problem, and a parsing order problem.
For the first problem, the substitution reference |variable| cannot have adjacent characters (besides whitespace or maybe _ for hyperlinking) or else it won't parse as a substitution reference, so you need to escape it:
./images/\ |variable|\ -image.png
However, the second problem is waiting around the corner. While I'm not certain of the details, it seems reST is unable to parse substitutions inside other directives. I think it first parses the image directive, which puts it in the document tree and thus out of reach of the substitution mechanism. Similarly, I don't think it's possible to use a substitution to insert content intended to be parsed (e.g. .. |img1| replace::`.. image:: images/1-image.png`). This is all speculative based on some tests and my incomplete comprehension of the official documentation, so someone more knowledgeable can correct what I've said here.
I think you're aware of the actual image substitution directive (as opposed to text substitution), but I don't think it attains the generality you're aiming for (you'll still need a separate directive for the image, apart from the |variable| text substitution), but in any case it looks like this:
.. |img1| image:: images/1-image.png
Since you're using Sphinx, you can try creating your own directive extension (see this answer for information), but it won't solve the substitutions-inside-markup problem.
You have to create a custom directive in this case, as Sphinx doesn't allow you to substitute image paths. You can adapt Sphinx's figure directive as follows and use it instead of the image directive.
from typing import Any, Dict, List
from typing import cast

from docutils import nodes
from docutils.nodes import Node
from docutils.parsers.rst import directives
from docutils.parsers.rst.directives import images

if False:
    # For type annotation
    from sphinx.application import Sphinx


class CustomFigure(images.Figure):
    """The figure directive which applies the `:name:` option to the figure node
    instead of the image node.
    """

    def run(self) -> List[Node]:
        name = self.options.pop('name', None)
        path = self.arguments[0]  # path = ./images/variable-image.png
        # replace 'variable' with the given value
        self.arguments[0] = path.replace("variable", "string substitution")
        result = super().run()
        if len(result) == 2 or isinstance(result[0], nodes.system_message):
            return result
        assert len(result) == 1
        figure_node = cast(nodes.figure, result[0])
        if name:
            # set ``name`` on figure_node if given
            self.options['name'] = name
            self.add_name(figure_node)
        # copy lineno from the image node
        if figure_node.line is None and len(figure_node) == 2:
            caption = cast(nodes.caption, figure_node[1])
            figure_node.line = caption.line
        return [figure_node]


def setup(app: "Sphinx") -> Dict[str, Any]:
    directives.register_directive('figure', CustomFigure)
    return {
        'version': 'builtin',
        'parallel_read_safe': True,
        'parallel_write_safe': True,
    }
You can save this as CustomFigure.py, register it in the project's conf.py, and use it across the Sphinx project instead of the image directive. Refer to http://www.sphinx-doc.org/en/master/usage/extensions/index.html for how to add a custom directive to your Sphinx project.
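For example, a minimal conf.py sketch (assuming the code above is saved as CustomFigure.py next to conf.py; the module name is only an example):
# conf.py
import os
import sys

# make the directory containing CustomFigure.py importable
sys.path.insert(0, os.path.abspath('.'))

extensions = [
    'CustomFigure',   # its setup() re-registers the 'figure' directive
]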

XPath: Select Certain Child Nodes

I'm using XPath with Scrapy to scrape data off of a movie website BoxOfficeMojo.com.
As a general question: I'm wondering how to select certain child nodes of one parent node all in one Xpath string.
Depending on the movie web page from which I'm scraping data, sometimes the data I need is located at different child nodes, for example depending on whether or not there is a link. I will be going through about 14000 movies, so this process needs to be automated.
Using this as an example, I will need actor(s), director(s) and producer(s).
This is the Xpath to the director. Note: the %s corresponds to a determined index where that information is found; in the Action Jackson example the director is found at [1] and the actors at [2].
//div[@class="mp_box_content"]/table/tr[%s]/td[2]/font/text()
However, if a link to a page on the director exists, this would be the Xpath:
//div[@class="mp_box_content"]/table/tr[%s]/td[2]/font/a/text()
Actors are a bit trickier, as there are <br> tags included for subsequent actors listed, which may be children of an /a or children of the parent /font, so:
//div[@class="mp_box_content"]/table/tr[%s]/td[2]/font//a/text()
gets almost all of the actors (except those under font/br).
Now, the main problem here, I believe, is that there are multiple //div[@class="mp_box_content"] - everything I have works EXCEPT that I also end up getting some digits from other mp_box_content. Also I have added numerous try:, except: statements in order to get everything (actors, directors, producers who both have and do not have links associated with them). For example, the following is my Scrapy code for actors:
actors = hxs.select('//div[@class="mp_box_content"]/table/tr[%s]/td[2]/font//a/text()' % (locActor,)).extract()
try:
    second = hxs.select('//div[@class="mp_box_content"]/table/tr[%s]/td[2]/font/text()' % (locActor,)).extract()
    for n in second:
        actors.append(n)
except:
    actors = hxs.select('//div[@class="mp_box_content"]/table/tr[%s]/td[2]/font/text()' % (locActor,)).extract()
This is an attempt to cover the cases where the first actor may not have a link associated with him/her while subsequent actors do, or where the first actor has a link but the rest do not.
I appreciate the time taken to read this and any attempts to help me find/address this problem! Please let me know if any more information is needed.
I am assuming you are only interested in textual content, not the links to actors' pages etc.
Here is a proposition using lxml.html (and a bit of lxml.etree) directly.
First, I recommend you select td[2] cells by the text content of td[1], with expressions like .//tr[starts-with(td[1], "Director")]/td[2] to account for "Director" or "Directors".
Second, testing various expressions with or without <font>, with or without <a> etc. makes the code difficult to read and maintain, and since you're interested only in the text content, you might as well use string(.//tr[starts-with(td[1], "Actor")]/td[2]) to get the text, or use lxml.html.tostring(e, method="text", encoding=unicode) on selected elements.
And for the <br> issue with multiple names, what I generally do is modify the lxml tree containing the targeted content to add a special formatting character to the <br> elements' .text or .tail, for example a \n, using one of lxml's iter() functions. This can be useful for other HTML block elements, <hr> for example.
You may see what I mean better with some spider code:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

import lxml.etree
import lxml.html

MARKER = "|"

def br2nl(tree):
    for element in tree:
        for elem in element.iter("br"):
            elem.text = MARKER

def extract_category_lines(tree):
    if tree is not None and len(tree):
        # modify the tree by adding a MARKER after <br> elements
        br2nl(tree)
        # use lxml's .tostring() to get a unicode string
        # and split lines on the marker we added above
        # so we get lists of actors, producers, directors...
        return lxml.html.tostring(
            tree[0], method="text", encoding=unicode).split(MARKER)

class BoxOfficeMojoSpider(BaseSpider):
    name = "boxofficemojo"
    start_urls = [
        "http://www.boxofficemojo.com/movies/?id=actionjackson.htm",
        "http://www.boxofficemojo.com/movies/?id=cloudatlas.htm",
    ]

    # locate 2nd cell by text content of first cell
    XPATH_CATEGORY_CELL = lxml.etree.XPath('.//tr[starts-with(td[1], $category)]/td[2]')

    def parse(self, response):
        root = lxml.html.fromstring(response.body)

        # locate the "The Players" table
        players = root.xpath('//div[@class="mp_box"][div[@class="mp_box_tab"]="The Players"]/div[@class="mp_box_content"]/table')

        # we have only one table in "players" so the for loop is not really necessary
        for players_table in players:
            directors_cells = self.XPATH_CATEGORY_CELL(players_table,
                                                       category="Director")
            actors_cells = self.XPATH_CATEGORY_CELL(players_table,
                                                    category="Actor")
            producers_cells = self.XPATH_CATEGORY_CELL(players_table,
                                                       category="Producer")
            writers_cells = self.XPATH_CATEGORY_CELL(players_table,
                                                     category="Writer")
            composers_cells = self.XPATH_CATEGORY_CELL(players_table,
                                                       category="Composer")

            directors = extract_category_lines(directors_cells)
            actors = extract_category_lines(actors_cells)
            producers = extract_category_lines(producers_cells)
            writers = extract_category_lines(writers_cells)
            composers = extract_category_lines(composers_cells)

            print "Directors:", directors
            print "Actors:", actors
            print "Producers:", producers
            print "Writers:", writers
            print "Composers:", composers

            # here you should of course populate scrapy items
The code can be simplified for sure, but I hope you get the idea.
You can do similar things with HtmlXPathSelector of course (with the string() XPath function for example), but without modifying the tree for <br> (how to do that with hxs?) it works only for non-multiple names in your case:
>>> hxs.select('string(//div[@class="mp_box"][div[@class="mp_box_tab"]="The Players"]/div[@class="mp_box_content"]/table//tr[contains(td, "Director")]/td[2])').extract()
[u'Craig R. Baxley']
>>> hxs.select('string(//div[@class="mp_box"][div[@class="mp_box_tab"]="The Players"]/div[@class="mp_box_content"]/table//tr[contains(td, "Actor")]/td[2])').extract()
[u'Carl WeathersCraig T. NelsonSharon Stone']

Ruby script for posting comments

I have been trying to write a script that may help me to comment from the command line. (The sole reason I want to do this is that it's vacation time here and I want to kill time.)
I often visit and post on this site, so I am starting with this site.
For example to comment on this post I used the following script
require "uri"
require 'net/http'
def comment()
  response = Net::HTTP.post_form(URI.parse("http://www.geeksforgeeks.org/wp-comments-post.php"),{'author'=>"pikachu",'email'=>"saurabh8c@gmail.com",'url'=>"geekinessthecoolway.blogspot.com",'submit'=>"Have Your Say",'comment_post_ID'=>"18215",'comment_parent'=>"0",'akismet_comment_nonce'=>"70e83407c8",'bb2_screener_'=>"1330701851 117.199.148.101",'comment'=>"How can we generalize this for a n-ary tree?"})
  return response.body
end
puts comment()
Obviously the values were not hardcoded, but for the sake of clarity and maintaining the objective of the post I am hardcoding them.
Besides the regular fields that appear on the form, the values for the hidden fields I found out from Wireshark when I posted a comment the normal way. I can't figure out what I am missing. Maybe some JS event?
Edit:
As a few people suggested using mechanize, I switched to Python. Now my updated code looks like:
import sys
import mechanize
uri = "http://www.geeksforgeeks.org/"
request = mechanize.Request(mechanize.urljoin(uri, "archives/18215"))
response = mechanize.urlopen(request)
forms = mechanize.ParseResponse(response, backwards_compat=False)
response.close()
form=forms[0]
print form
control = form.find_control("comment")
#control=form.find_control("bb2_screener")
print control.disabled
# ...or readonly
print control.readonly
# readonly and disabled attributes can be assigned to
#control.disabled = False
form.set_all_readonly(False)
form["author"]="Bulbasaur"
form["email"]="ashKetchup#gmail.com"
form["url"]="9gag.com"
form["comment"]="Y u no put a captcha?"
form["submit"]="Have Your Say"
form["comment_post_ID"]="18215"
form["comment_parent"]="0"
form["akismet_comment_nonce"]="d48e588090"
#form["bb2_screener_"]="1330787192 117.199.144.174"
request2 = form.click()
print request2
try:
    response2 = mechanize.urlopen(request2)
except mechanize.HTTPError, response2:
    pass
# headers
for name, value in response2.info().items():
    if name != "date":
        print "%s: %s" % (name.title(), value)
print response2.read()  # body
response2.close()
Now the server returns me this. Going through the HTML code of the original page, I found out there is one more field, bb2_screener_, that I need to fill if I want to pretend to be a browser to the server. But the problem is that the field is not written inside the form tag, so mechanize won't treat it as a field.
Assuming you have all the params correct, you're still missing the session information that the site stores in a cookie. Consider using something like mechanize, which will deal with the cookies for you. It's also more natural in that you tell it which fields to fill in with which data. If that still doesn't work, you can always use a jackhammer like Selenium, but then technically you're using a browser.
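For instance, a minimal mechanize sketch along those lines (assuming the comment form is the first form on the page; the field names are taken from the question):
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)              # skip robots.txt handling for this experiment
br.open("http://www.geeksforgeeks.org/archives/18215")
br.select_form(nr=0)                     # assumption: the comment form is form number 0
br["author"] = "pikachu"
br["email"] = "someone@example.com"
br["comment"] = "How can we generalize this for a n-ary tree?"
response = br.submit()                   # the cookie jar and the form's hidden inputs are handled for you
print response.read()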

Resources