Can't display ordered list in Markdown

I'm using Hexo to publish my blog, and I write my posts in Markdown. I ran into a problem when I tried to use an ordered list: it isn't displayed correctly.
Here is my code:
1. first
2. second
+ inner first
+ inner second
However, only an unordered list was shown.
I would like it to be shown as follows:
http://7xjj3m.com1.z0.glb.clouddn.com/20150622_0.jpg
but it actually looked like this:
http://7xjj3m.com1.z0.glb.clouddn.com/20150622_1.jpg
So, what's the problem?

Your syntax is correct, but you need to indent the nested list by four spaces. I would refrain from using tabs, because your computer/program could be set to treat them as 2 spaces instead of 4.
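For example, with the nested items indented by four spaces, the list should render as expected:
1. first
2. second
    + inner first
    + inner second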
I use a Markdown editor called Mou; I entered your syntax with that indentation and got the proper result.


SPSS Output modify - deleting footnotes

When generating tables in SPSS that calculate the mode, a footnote is shown when multiple modes are present ("a. Multiple modes exist. The smallest value is shown").
There is a way to hide this footnote, but the hidden footnote will still take up an empty row in the output. For processing reasons I would like to delete the footnote entirely.
I have been trying to do this with the OUTPUT MODIFY command in syntax, but can't get it to work. There are keywords for selecting the footnotes:
/TABLECELLS
SELECT = ["expression", COUNT, MEAN, MEDIAN, RESIDUAL,
PERCENT, SIGNIFICANCE, POSITION(integer),
BODY, HEADERS, FOOTNOTES, TITLE, CAPTION, …]
And for deleting objects:
/DELETEOBJECT DELETE={NO**}
{YES }
But trying to combine these does not yield the desired result. Is what I am trying to do possible? Or does anyone have a suggestion for another solution?
Thanks in advance!
Try adding NOTESCAPTIONS = NO at the end of your OUTPUT MODIFY syntax.
That should remove all notes and captions from the output.
There is no need to use the DELETEOBJECT subcommand.
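A minimal sketch of how the full command might look (attaching NOTESCAPTIONS to the /TABLE subcommand is my assumption; check the Command Syntax Reference for your SPSS version for its exact placement):
* Select all tables, then suppress their notes and captions.
OUTPUT MODIFY
  /SELECT TABLES
  /TABLE NOTESCAPTIONS = NO.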

Building a flow in Power Automate to download a CSV report linked in a Gmail message

I'm trying to create a flow in Power Automate (which I'm quite new to) that gets the link/URL from an email I receive daily, downloads the .csv file that clicking the link would normally fetch, and saves the file to a given local folder.
An example of the email I get:
Screenshot of the email I get daily
I searched the Power Automate Community and found this insightful LINK post whose answer almost solved it. However, after following the steps and building the flow, it kept failing at the Compose step.
Screenshot of the Flow & Error Message
The flow
Error message
Expression used:
substring(body('Html_to_text'),add(indexOf(body('Html_to_text'),'here'),5),sub(indexOf(body('Html_to_text'),'Name'),5))
It seems the expression couldn't really get the URL/link? I'm not sure; I searched but couldn't find any more posts that could help.
Please share any insights on approaches or workarounds that you think may help me solve the problem. Many thanks!
PPPPPPPPisces
We need to break down the bits of the function here, which takes three pieces of information:
substring(1: text to search, 2: starting position of the text you want, 3: length of text)
For example, suppose you were trying to return an unknown number from the text dog 4567 bird.
Our function would have three parts.
body('Html_to_text') - this bit gets the text we are searching in.
add(indexOf(body('Html_to_text'),'dog'),4) - this bit finds the position in the text 4 characters after the start of the word dog (3 letters for dog + the space).
sub(sub(indexOf(body('Html_to_text'),'bird'),1),add(indexOf(body('Html_to_text'),'dog'),4)) - I've changed the structure of your code here, because this part needs to return the length of the text you want, not its ending position. So here we take the position of the end of the number (the position of the word bird minus one for the space before it) and subtract from it the position of the start of the number (the position of the word dog + 4) to get the length; the assembled expression is sketched below.
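Put together for the dog 4567 bird example, the whole expression would be (a sketch, assuming the conversion action in your flow is named Html_to_text as above):
substring(body('Html_to_text'),add(indexOf(body('Html_to_text'),'dog'),4),sub(sub(indexOf(body('Html_to_text'),'bird'),1),add(indexOf(body('Html_to_text'),'dog'),4)))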
In your Html to text output, you need to check what the text looks like, pick a word that appears just before the URL starts and a word that appears just after it ends, and count the exact number of characters between those words and the URL. You can then put those words and counts into your code.
More generally, when you have a complicated problem to troubleshoot, you can break it down into steps. Rather than putting that big mess of code into a single block, put each chunk of the code in its own Compose action, with one final Compose to bring them all together. That way, when you run the flow, you can see what information each bit gives out, or where it fails, and experiment from there to discover what is wrong.
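For instance, the expression above could be split across three Compose actions that reference one another with outputs() (a sketch; the action names Compose_Start, Compose_Length and Compose_URL are hypothetical):
Compose_Start: add(indexOf(body('Html_to_text'),'dog'),4)
Compose_Length: sub(sub(indexOf(body('Html_to_text'),'bird'),1),outputs('Compose_Start'))
Compose_URL: substring(body('Html_to_text'),outputs('Compose_Start'),outputs('Compose_Length'))
The run history then shows the start position and the length separately, so a wrong anchor word or offset is easy to spot.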

How to properly scrape filtered content into Google Sheets using an XPath query?

So, this is about some content from a website which I want to get and put into my Google Sheets, but I'm having difficulty understanding the class of the content.
target link: https://www.cnbc.com/quotes/?symbol=XAU=
This number is what I want to get. Picture 1: the part which I want to scrape
And this is what the code looks like in the inspector. Picture 2: the code shown in the inspector
The target is inside a span element, but its class looks very difficult to me, so I tried to simplify it using this line of code: =IMPORTXML("https://www.cnbc.com/quotes/?symbol=XAU=","//table[@class='quote-horizontal regular']//tr/td/span")
Picture 3: List is shown when putting the code
After some tries, I was able to get the right target, but it confuses me. I'm using this code: =IMPORTXML("https://www.cnbc.com/quotes/?symbol=XAU=","//table[@class='quote-horizontal regular']//tr/td/span[@class='last original'][1]")
Picture 4: the right target is shown when the XPath query is more specific
As you can see in the 2nd picture, 'last original' is not really the full name of the class; when I put 'last original ng-binding' instead, it gave me an error saying the imported content is empty.
So, correct me if my code is wrong, or did it accidentally work out somehow because there's another correct way?
How about this answer?
Modified formula 1:
When the name of the class is last original or last original ng-binding, how about the following xpath and formula?
=IMPORTXML(A1,"//span[contains(@class,'last original')][1]")
In this case, the URL of https://www.cnbc.com/quotes/?symbol=XAU= is put in the cell "A1".
In this case, //span[contains(@class,'last original')][1] is used as the xpath. The value of the span whose class name includes last original is retrieved. So both last original and last original ng-binding can be used.
Modified formula2:
As other xpath, how about the following xpath and formula?
=IMPORTXML(A1,"//meta[@itemprop='price']/@content")
It seems that the value is included in the metadata. So this sample retrieves the value from the metadata.
Reference:
IMPORTXML
To complete @Tanaike's answer, two alternatives:
=IMPORTXML(B2;"//span[@class='year high']")
"Year high" seems always equal to the current stock index value.
Or, with value retrieved from the script element :
=IMPORTXML(B2;"substring-before(substring-after(//script[contains(.,'modApi')],'""last\"":\""'),'\')")
Note: since I'm based in Europe, the formulas above use ; as the argument separator; depending on your locale, you may need to replace ; with ,.

How to copy particular elements from web page

My goal is to grab a particular text area from a web page. Imagine being able to draw a rectangle anywhere on a page, and everything in this rectangle would be copied to your clipboard. I am using Firebug (feel free to suggest other solutions; I have searched for plugins and bookmarklets but did not find anything useful) with its console window and XPath for this purpose. The values which I want to obtain are in the following format (observed in Firebug's HTML inspector):
<span class="number3_0" title="Numbers">3.00</span>
so I end up with the following code, which I issue from the Firebug console:
$x("//span[@title='Numbers']/text()")
After this I get something like this:
[<TextNode textContent="2.00">, <TextNode textContent="2.00">, <TextNode textContent="2.00">, <TextNode textContent="2.00">, <TextNode textContent="3.00">]
After this I right-click on [ and select Inspect in DOM panel, then press Ctrl+A and copy/paste the data in the following format:
0 <TextNode textContent="2.00">
1 <TextNode textContent="2.00">
2 <TextNode textContent="2.00">
3 <TextNode textContent="2.00">
4 <TextNode textContent="3.00">
As you can guess, the value of textContent is the information that I am interested in. I have tried to modify the original XPath query to return only these numbers, but no luck. I was:
wrapping the whole query into string(), as suggested here: Xpath - get only node content without other elements
trying to figure out how this one works: Extracting text in between nodes through XPath - and a lot more.
To obtain the desired values I used some bash scripting + XML formatting; after this tedious/error-prone task I get the following format:
<?xml version="1.0"?>
<head>
<TextNode textContent="2.00"/>
<TextNode textContent="2.00"/>
<TextNode textContent="2.00"/>
<TextNode textContent="2.00"/>
<TextNode textContent="3.00"/>
<TextNode textContent="3.00"/>
</head>
Now I use xmlstarlet to obtain those values (yes, I know that I could use a regexp in the previous step and have all the data that I need, but I am interested in DOM/XPath parsing and trying to figure out how it works) in the following way:
cat input | xmlstarlet sel -t -m "//TextNode" -v 'concat(@textContent,"
")'
This finally gives me the desired output:
2.00
2.00
2.00
2.00
3.00
My questions are a bit generic:
How can this terribly long process be automated?
How can I modify the original XPath string used in Firebug, $x("//span[@title='Numbers']/text()"), to immediately get only the numbers and save myself the rest of the steps?
I am still not very familiar with xmlstarlet; especially the selection (sel) mode drives me crazy. I have seen various combinations of the following options:
-c or --copy-of - print copy of XPATH expression
-v or --value-of - print value of XPATH expression
-o or --output - output string literal
-m or --match - match XPATH expression
Can somebody please explain when to use which one? I would be glad to see particular examples if possible. In case of interest, here are various combinations of the mentioned options that I do not understand well:
http://www.grahl.ch/blog/minutiae-return-content-element-xmlstarlet
Extracting and dumping elements using xmlstarlet
Testing for an XML attribute
The last question regarding xmlstarlet is a bit of cosmetic syntactic sugar: how do I obtain nice newline-separated output? As you can see, I 'cheat' by adding a literal newline as the separator, but when I tried it with an escape character like this:
cat input | xmlstarlet sel -t -m "//TextNode" -v 'concat(@textContent,"\n")'
it did not work. Also, the original reference from which I learned a lot uses it in this 'ugly' way: http://www.ibm.com/developerworks/library/x-starlet/index.html
PS: maybe all these steps could be simplified with curl + xmlstarlet, but it would be handy to also have the Firebug option for pages which require a login and such.
Thanks for all ideas.
From what I gather, you want to collect the numbers from spans that have the title 'Numbers', and you want the result as a single string.
Try the following:
var numberNodes = document.querySelectorAll('span[title="Numbers"]')
function giveText(me) { return me.textContent; }
Array.prototype.map.call(numberNodes, giveText).join("\n");
The first line selects all matching nodes in the document using a CSS selector (meaning you do not need XPath).
The second line creates a function that returns the text content of a node.
The third line maps the elements from the numberNodes list using the giveText function, produces an array of numbers, and then finally joins them with a newline.
After this you might not need xmlstarlet at all.
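That said, if you do stay with xmlstarlet, its sel templates also accept -n (--nl), which prints a newline after each value and avoids embedding a literal line break inside concat(); a minimal sketch against your TextNode file:
cat input | xmlstarlet sel -t -m "//TextNode" -v "@textContent" -n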
$$("<CSS3 selector>") and $x("<XPATH>") in Firebug actually return a real Array (not like the results of document.querySelectorAll() or document.evaluate). So they are more convenient.
With Firefox + Firebug:
var numbersNode = $x("//span[@title='Numbers']/text()");
var numbersText = numbersNode.map(function(numberNode) {
return numberNode.textContent;
}).join("\n");
// Special command of Firebug to copy text into clipboard:
copy(numbersText);
You can even do with a compact way using arrow functions of the EcmaScript 6:
copy($x("//span[@title='Numbers']/text()").map(x => x.textContent).join("\n"));
The same works if you choose $$('span[title="Numbers"]'), as suggested by William Narmontas.
Florent

Using XPath in selenium.getText and selenium.click

I have an "Адреса магазинов" ("shop addresses") link on the page and want to store its text, then click the link and verify that the destination page contains this text in its headers. So I tried to find the element by XPath; selenium.getText gets the right result, but selenium.click goes to another link. Where have I made a mistake? Thanks in advance!
String m_1 = selenium.getText("xpath=html/body/div[3]/div[2]/div[1]/h4[1]");
selenium.click("xpath=html/body/div[3]/div[2]/div[1]/h4[1]");
selenium.waitForPageToLoad("30000");
assertTrue(selenium.getText("css=h3").contains(m_1));
Page: http://www.svyaznoy.ru/map/
Summary:
Using xpath=//descendant::a[@href='/address_shops/'][2] or css=div.deff_one_column a[href='/address_shops/'] gets the right results.
Using xpath=//a[@href='/address_shops/'] - Element is not currently visible.
Using xpath=//a[@href='/address_shops/'][2] - Element not found.
There is a missing slash at the beginning of the expression. I am kind of surprised this got through at all - the first slash means "begin at root node".
Also, it is better to select the <a> element instead of the <h4>. Sometimes it works, sometimes it misclicks, and sometimes the click doesn't do anything at all. Try to be as concrete as you can.
Try this one.
String m1 = selenium.getText("xpath=/html/body/div[3]/div[2]/div/h4/a");
selenium.click("xpath=/html/body/div[3]/div[2]/div/h4/a");
selenium.waitForPageToLoad("30000");
assertTrue(selenium.getText("css=h3").contains(m1));
By the way, there are even better XPath expressions you could use. See the documentation; it really is helpful. Just as an example, this would work too, and it is much easier to write and read:
String m1 = selenium.getText("xpath=//a[@href='/address_shops/']");
selenium.click("xpath=//a[@href='/address_shops/']");
Sorry, I didn't notice the page link. The CSS selector for the second link can be something like this: css=div.deff_one_column a[href='/address_shops/']
