Is there such a thing as a valid HTML5 fragment?

I obviously can't determine whether a fragment of HTML is valid without knowing what the rest of the document looks like (at a minimum, I would need a doctype in order to know which rules I'm validating against). But given the following HTML5 fragment:
<article><header></article>My header</header><p>My text</p></article>
I can certainly determine that it is invalid without seeing the rest of the document. So, is there such a thing as "provisionally valid" HTML, or "valid providing it fits into a certain place in a valid document"?
Is there more to it than the following pseudocode?
def is_valid_fragment(fragment):
    tmp = "<!doctype html><html><head><title></title></head><body>" + fragment + "</body></html>"
    return my_HTML5_validator.is_valid_html5_document(tmp)
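For reference, here is a runnable sketch of that pseudocode. It assumes the html5lib package (my choice; any parser that reports HTML5 parse errors would do), so it checks the fragment against the HTML5 parsing algorithm's parse errors rather than full document conformance:
# A minimal sketch, assuming html5lib is installed (pip install html5lib).
# strict=True makes the parser raise ParseError on the first parse error.
import html5lib
from html5lib.html5parser import ParseError

def is_valid_fragment(fragment):
    tmp = ("<!doctype html><html><head><title></title></head><body>"
           + fragment + "</body></html>")
    try:
        html5lib.HTMLParser(strict=True).parse(tmp)
        return True
    except ParseError:
        return False

print(is_valid_fragment("<p>My text</p>"))               # True
print(is_valid_fragment("<article><header></article>"))  # False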

You can certainly talk about an XML document being well-formed, and you can construct a document from any single element and its children. You could thus talk about singly-rooted XHTML5 fragments being well-formed. You could deal with a multiply-rooted fragment (like <img/><img/>) by treating it as a sequence of documents, or by wrapping it in some synthetic container element - since we're only talking about well-formedness, that would be okay.
However, HTML5 still allows the SGML-style self-closing tags, like <hr> and so on, whose self-closing behaviour can only be determined by appeal to the doctype. For instance, <div><hr></div> is okay, but <div><tr></div> is not. If you were dealing with DOM nodes rather than text as input, this would be a non-issue, but if you have text, you need a parser which knows enough about HTML to deal with those elements. Beyond that, though, some very simple rules, lifted directly from XML, would be enough to handle well-formedness.
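For instance, an HTML5-aware parser such as html5lib (an assumption on my part; any conforming parser behaves the same) already knows which elements are void, so it can apply those rules to bare text fragments:
# Sketch: parseFragment applies the HTML5 parsing rules directly to a fragment,
# no doctype needed; it knows <hr> is void and that <tr> cannot live in a <div>.
import html5lib

ok = html5lib.parseFragment("<div><hr></div>")   # nests as expected
bad = html5lib.parseFragment("<div><tr></div>")  # the <tr> is silently dropped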
If you wanted to go beyond well-formedness and look at some aspects of validity, I think you can still do that at the singly-rooted fragment level with XML. As the spec says:
An XML document is valid if it has an associated document type declaration and if the document complies with the constraints expressed in it.
A DTD can name any element as the root, and the mechanics then take care of checking the relationship between that element and its children, and their children and so on, and the various other constraints that make up validity.
Again, you can transfer that idea directly to HTML. I don't know how you deal with multiply-rooted fragments, though. And bear in mind that certain whole-document constraints (like IDs being unique) might hold inside the fragment, but not in an otherwise valid document once the fragment has been inserted into it.
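To make that last point concrete, here is a rough sketch (the regex extraction is illustrative only) of a uniqueness check that passes for the fragment alone but fails against a host document's existing IDs:
# Sketch: IDs that are unique inside the fragment can still collide with the host.
import re

fragment = '<p id="x">fine on its own</p>'
host_ids = {"x"}  # IDs already present in the document receiving the fragment

frag_ids = re.findall(r'id="([^"]*)"', fragment)
print(len(frag_ids) == len(set(frag_ids)))   # True: unique within the fragment
print(host_ids.isdisjoint(frag_ids))         # False: collides once inserted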

Depending on what you intend to do with this verification, I think you should keep in mind that browsers are extremely forgiving regarding malformed HTML!
The invalid HTML string that you give in your example would work perfectly fine in (most if not all) browsers:
const serializedHTML = "<article><header></article>My header</header><p>My text</p></article>"
const range = document.createRange()
const fragment = range.createContextualFragment(serializedHTML)
console.log(fragment)
The content of the fragment defined in the snippet above would result in the following DOM tree:
<article>
  <header></header>
</article>
"My header"
<p>My text</p>

A crude method would be to check whether passing the fragment through the innerHTML of another element changes the markup, by doing something like the code below.
<html>
<head>
<script>
function validateHTML(htmlFragment) {
    var testDiv = document.getElementById('testDiv');
    testDiv.innerHTML = htmlFragment;
    var res = (htmlFragment == testDiv.innerHTML);
    testDiv.innerHTML = "";
    return res;
}
</script>
</head>
<body>
<div id="testDiv" style="display:none"></div>
<textarea id="txtElem" onkeyup="this.style.backgroundColor = validateHTML(this.value) ? '' : '#f00'"></textarea>
</body>
</html>
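The same round-trip idea works outside the browser too. Here is a hedged sketch using html5lib's serializer (my library choice, not part of the answer above); note that both versions can flag valid but non-canonical markup, such as unquoted attribute values, because the parser normalizes what it reads:
# A minimal sketch, assuming html5lib is installed: parse the fragment and
# re-serialize it; any difference means the parser had to repair or drop something.
import html5lib
from html5lib import serializer, treewalkers

def roundtrip(fragment):
    tree = html5lib.parseFragment(fragment, treebuilder="dom")
    walker = treewalkers.getTreeWalker("dom")
    return "".join(serializer.HTMLSerializer().serialize(walker(tree)))

print(roundtrip("<div><hr></div>"))  # unchanged, so it parsed cleanly
print(roundtrip("<div><tr></div>"))  # "<div></div>": the stray <tr> was dropped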

You could check if it is well-formed.

Related

How to grab a piece of data which has a different xpath on different webpages?

So I am trying to grab a piece of data that is displayed under a different XPath on different pages.
If you look at the XPath of the IPA pronunciation on Wiktionary (https://en.wiktionary.org/wiki/foo), you will see that the XPath is
//*[@id="mw-content-text"]/ul[1]/li[1]/span[4]
but if I go to another word, like https://en.wiktionary.org/wiki/bar, then the XPath would be
//*[@id="mw-content-text"]/ul[1]/li[2]/span[5]
I cannot think of any way to reconcile these; is there something that I am missing?
The answer is simple. Never let a tool write any XPath for you. All tools get it wrong.
Look at the document's HTML source and write the appropriate XPath yourself.
var result = document.evaluate("//*[@class = 'IPA']", document),
    elem;
while (elem = result.iterateNext()) {
    console.log(elem);
}
The above shows the simplest variant. It selects two occurrences of <span class="IPA"> on https://en.wiktionary.org/wiki/foo and quite a few more on https://en.wiktionary.org/wiki/bar.
Use a more specific expression to narrow down the results.
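If you want to run the same selection outside the browser, here is a sketch in Python with lxml and requests (library choice is mine; the URL is from the question):
# Sketch: select every element whose class attribute is exactly "IPA";
# switch to contains(@class, 'IPA') if the page adds extra classes.
import requests
from lxml import html

tree = html.fromstring(requests.get("https://en.wiktionary.org/wiki/foo").text)
for el in tree.xpath("//*[@class = 'IPA']"):
    print(el.text_content())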

How does empty start tag work in HTML4?

The HTML4 specification mentions various SGML shorthand markup constructs. While I understand, with the help of an HTML validator, what the others do, I cannot understand why anyone would want an empty start tag. It cannot even have attributes, so it's not a shorter <span>.
The SGML definition of HTML4 enables the empty start tag feature. In it, there is an interesting section listing features.
FEATURES
 MINIMIZE
  DATATAG  NO
  OMITTAG  YES
  RANK     NO
  SHORTTAG YES
 LINK
  SIMPLE   NO
  IMPLICIT NO
  EXPLICIT NO
 OTHER
  CONCUR   NO
  SUBDOC   NO
  FORMAL   YES
APPINFO NONE
The important part here is the MINIMIZE section. It enables OMITTAG, a standard feature of HTML which allows start or end tags to be omitted. This in particular allows you to write code like <p> a <p> b, without closing paragraphs.
The more important part is the SHORTTAG feature, which is actually a category. However, because it's not expanded, SGML automatically assumes YES for all entries in it. It has the following categories in it; feel free to skip this list if you aren't interested in the other shorthand features in SGML.
ATTRIB, which deals with attributes, and has the following options.
DEFAULT - defines whether attributes can contain default values. This allows writing <p> without defining every single attribute. Nobody would want to write <p id="" class="" style="" title="" lang="en" dir="ltr" onclick="" ondblclick="" ...></p> after all. Hey, I even gave up trying to write all that. This is a commonly supported feature.
OMITNAME - if the attribute and value have the same name, the value is optional. This allows writing <input type="checkbox" checked> for instance. This is a commonly supported feature (although HTML5 defines the default value to be the empty string, not the attribute name).
VALUE - allows writing values without quotes. This allows writing code like <p class=warning></p> for instance. This is a commonly supported feature.
ENDTAG, which is a category for end tags containing the following options.
UNCLOSED - allows starting a new tag before ending the previous tag, allowing code like <p><b></b</p>.
EMPTY - allows unnamed end tags, such as <b>something</>. They close the most recent element which is still open.
STARTTAG, which is a category for start tags containing the following options.
NETENABL - allows using Null End Tag notation. It's worth noting this notation is incompatible with XHTML. Anyway, the feature allows writing code like <b/<i/hello//, which means the same thing as <b><i>hello</i></b>.
UNCLOSED - allows starting a new tag before ending the previous tag, allowing code like <p<b></b></p>.
EMPTY - this is the asked feature.
Now, it's important to understand what EMPTY does. While <> may appear useless at first (hey, how could you determine what it does, when nothing aside from the validator supports it?), it's actually not. It opens another element of the same type as the previous sibling, allowing code like the following.
<ul>
  <li class=plus> hello world
  <> another list element
  <> yet another
  <li class=minus> nope
  <> what am I doing?
</ul>
In this example, the list has two classes, plus and minus, for positive and negative arguments. However, the webmaster was lazy (and doesn't care that browsers don't actually support this) and decided to use the empty start tag to avoid specifying the class for the following elements. Because <li> has an optional end tag, each <> automatically closes the previous <li>.

htmlagilitypack identify inconsistency

I use HtmlAgilityPack & XPath.
How can I identify inconsistency in HTML? For example:
<table><tr><td>
<b>Car1</b><span>Color123</span>
<bCar2</b><span>Color333</span>
<b>Car3</b><span>Color221</span>
<b>Car4 <span>Color224</span>
<b>Car5</b><span>Color621</span>
</table></tr></td>
The bold markup for Car2 & Car4 is broken.
The problem is that I use root.SelectNodes("//b[1]")[Index], and it misses index position 2 (Car2), putting Car3 in its place, and I wouldn't even know this had happened if I didn't inspect the results manually. At the least, I need to have an "empty" position 2 (Car2) and a correct position 3 (Car3).
The Html Agility Pack can't identify and fix it automatically. doc.ParseErrors can't identify it.
Can you offer some combination of XPath functions like substring, boolean, concat, number, etc.? I'm not good enough at XPath, but I feel that these functions could help in identifying the inconsistency.
p.s. Html Tidy library can't fix it. It sometimes decides that:
<b>Car4 <span>Color224</span></b>
Which is not the correct fix.
HtmlDocument.ParseErrors does contain 3 errors for your example:
- Start tag <b> was not found (because there is a closing b without an opening one)
- Start tag <tr> was not found (because the tr is inside an opening b without a closing one)
- Start tag <td> was not found (same as tr)
In the general case, it's 1) impossible to identify errors the way you want, and 2) much more difficult to fix them. You would have to define exactly what the expected format is.
You can, however, use the Html Agility Pack to identify errors against specific requirements. For example, here is a piece of code that validates your doc, based on the functional requirement that "every child element of a TD must be a B or a SPAN and must not itself contain more than one child node":
HtmlDocument doc = new HtmlDocument();
doc.Load("MyFile.htm");
foreach (HtmlNode childOfTd in doc.DocumentNode.SelectNodes("//td/*"))
{
    if ((childOfTd.Name != "b") && (childOfTd.Name != "span") || (childOfTd.ChildNodes.Count > 1))
    {
        Console.WriteLine("child error, outerHtml=" + childOfTd.OuterHtml);
    }
}
Fixing this requires raw text access (maybe a Regex; and by the way, a Regex can also identify simple errors), because the Html Agility Pack builds a DOM that, by design, does not let you access syntactically incorrect nodes.
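Along the lines of the regex remark, here is a quick raw-text sketch (Python for brevity; the logic carries over to C#) that flags the broken bold tags in your sample:
# Sketch: list <b>/</b> tags in document order; a well-formed run must alternate
# strictly. Car2's mangled "<bCar2" and Car4's missing </b> both break the
# alternation. Regex cannot truly parse HTML, so treat this as a smoke test.
import re

raw = """<b>Car1</b><span>Color123</span>
<bCar2</b><span>Color333</span>
<b>Car3</b><span>Color221</span>
<b>Car4 <span>Color224</span>
<b>Car5</b><span>Color621</span>"""

for i, tag in enumerate(re.findall(r"</?b>", raw)):
    expected = "<b>" if i % 2 == 0 else "</b>"
    if tag != expected:
        print(f"inconsistency at tag #{i}: expected {expected}, found {tag}")
        break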

xPath expression for attributes that don't have ancestors with same attribute

I'm trying to extract elements that have a certain attribute, without separately extracting their descendants that have the same attribute.
Using the following html:
<html><body>
  <div box>
    some text
    <div box>
      some more text
    </div>
  </div>
  <div box>
    this needs to be included as well
  </div>
</body></html>
I want to be able to extract the two outer <div box> elements and their descendants, including the inner <div box>, but I don't want the inner <div box> extracted separately.
I have tried using all sorts of different expressions but think I am missing something quite fundamental. The main expression I have been trying is //[@box and not(ancestor::@box)], but it still returns the inner element as a separate match.
I am trying to do this using the 'Hpricot' (0.8.3) Gem in Ruby 1.9.2 as follows:
# Assuming html is set to the html above
doc = Hpricot(html)
elements = doc.search('//[@box and not(ancestor::@box)]')
# The following is returning 3 instead of 2
elements.size
Any help on this would be great.
Your XPath is invalid. You have to address something in order to use the predicate filter (e.g. []); otherwise, there isn't anything to filter.
This XPath works:
//div[@box and not(ancestor::div/@box)]
If the elements aren't all guaranteed to be <div>, you can use a more generic match for elements:
//*[@box and not(ancestor::*/@box)]
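For what it's worth, the same expression can be verified quickly from Python with lxml (my choice for a neutral test bed; the HTML is from the question):
# Sketch: only the two outermost [box] elements match; the nested one is
# excluded because an ancestor carries the box attribute.
from lxml import html

doc = html.fromstring("""
<html><body>
  <div box>some text <div box>some more text</div></div>
  <div box>this needs to be included as well</div>
</body></html>""")

print(len(doc.xpath("//*[@box and not(ancestor::*/@box)]")))  # 2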
Using elements = doc.search('//[@box and not(ancestor::@box)]') isn't correct.
Use elements = doc.at('//div[#box]') which will find the first occurrence.
I'd recommend using Nokogiri over Hpricot. Nokogiri is well supported, very flexible and more robust.
EDIT: Added because original question changed:
Thanks, that worked perfectly, except I forgot to mention that I want to return multiple outer elements. Sorry about that; I have updated the question. I will look into Nokogiri further; I didn't choose it originally because Hpricot seemed more approachable.
Remember that XPath acts like accessing a file in a directory at its simplest form, so you can drill down and search in "subdirectories". If you only want the outer <div> tags, then look inside the <body> level and no further:
doc.search('/html/body/div')
or, if you might have unadorned div tags along with the targets:
doc.search('/html/body/div[@box]')
Regarding Hpricot seeming more approachable:
Nokogiri implements a superset of Hpricot's accessors, allowing you to drop it into place for most uses. It supports XPath and CSS accessors, allowing more intuitive ways of getting at data if you live in CSS and HTML and don't grok XPath. In addition, there are many methods to find your desired target:
doc.search('body > div[box]')
(doc / 'body > div[box]')
doc.css('body > div[box]')
Nokogiri also supports the at and % synonyms found in Hpricot, along with at_css, if you only want the first occurrence of something.
I started using Nokogiri after running into situations where Hpricot exploded because it couldn't handle malformed news feeds in the wild.

HtmlUnit getByXpath returns null

I am coding with Groovy; however, I don't believe it's a language-specific set of questions.
I actually have two questions
First Question
I've run into an issue while using HtmlUnit. It is telling me that what I am trying to grab is null.
The page I'm testing it on is:
http://browse.deviantart.com/resources/applications/psbrushes/?order=9&offset=0#/dbwam4
My code:
client = new WebClient(BrowserVersion.FIREFOX_3)
client.javaScriptEnabled = false
page = client.getPage(url)
//coming up as null
title = page.getByXPath("//html/body/div[4]/div/div[3]/div/div/div/div/div/div/div/div/div/div/h1/a")
println title
This simply prints out: []
Is this because the page uses onclick()? If so, how would I get around that? Enabling JavaScript creates a mess in my cmd prompt.
Second Question
I am wanting to also get the image but am having trouble because when I attempt to get the XPath (via firebug) it shows up as: //*[#id="gmi-ResViewSizer_img"]
How do I handle that?
First Answer:
/html/body/div[3]/div/div[3]/div/div/div/div/div/div/div/div/div/div/h1/a
Your XPath was off by one in the predicate filter: the 4th div of the body should be the 3rd div. It appears the HTML for the site can and does change from when you originally grabbed the XPath using Firebug. You may need to adjust your XPath to accommodate potential change and be less sensitive to differences in document structure.
Maybe something like this:
/html/body//div/h1/a
Second Answer: The XPath that you listed will work. It may look odd and short (and may not be the most efficient), but // starts at the root node and looks through every node in the tree, * matches any element (including the img), and the [] predicate filter restricts the match to elements whose id attribute equals "gmi-ResViewSizer_img".
There are many other XPaths that could work as well. It also depends on how often the HTML structure changes. Here is one that also works on the referenced page to select that img:
/html/body/div/div/div/div/img[1]
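As a cross-check outside HtmlUnit, the id-based XPath from the second question can be exercised in Python with lxml (the one-line sample document below stands in for the real page):
# Sketch: //*[@id="..."] matches any element carrying that id, regardless of depth.
from lxml import html

page_source = '<html><body><img id="gmi-ResViewSizer_img" src="brush.jpg"></body></html>'
tree = html.fromstring(page_source)
matches = tree.xpath('//*[@id="gmi-ResViewSizer_img"]')
print(matches[0].get("src") if matches else "not found")  # brush.jpg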
I had the same problem. I solved it when I realized there were iframe tags on the page. Try calling
((HtmlPage)current_page.getFrames()[n].getEnclosedPage()).getElementByXPath(...
where n is the position of the frame in the iframe collection. It works for me!
Thanks a lot.
