I'm writing an API spec using RAML and publishing it using Mule standalone. I want to indent and bullet some paragraphs in the description of a Verb within a Resource. Is that possible? I tried using <p> style tags but they don't work, although <b> tags for bold DO work.
Any ideas?
It is possible in RAML to put bullet points in the description using \n* as follows:
/books:
  displayName: Book
  description: !
    "The following **services** that are listed here are: \n\n* First point.
    \n* Second point.
    \n* Third point.
    \n* Fourth point.
    \n* Fifth point.
    \n* Sixth point.
    \n* Seventh point \n * Eighth point with black bullet"
  get:
    description: !
      "The following **services** descriptions are: \n\n* First point.
      \n* Second point."
You need to have the ! and put the description in double quotes.
The description then renders as a bulleted list.
I had the same issue using the RAML Spec 1.0, and after some time I found out that line breaks work with <br />. So I added them and simulated bullet points.
Example:
description: "Some fruits: <br />* Apple
  <br />* Banana
  <br />* Kiwi"
This then renders well when using raml2html to publish it.
I can see that this thread is quite old, but I still wanted to add this in case someone (like me) is looking for a solution.
I want to limit my line length in RStudio, and I have a long header for one of my topics. I want the header divided across two lines but still shown as a header in the HTML after knitting.
My title is: Animals in the region are getting declawed and slain.
In RStudio it's currently like this:
Animals in the region are getting \
declawed and slain.
My problem is that in the HTML it shows "Animals in the region are getting" as the header and "declawed and slain." as normal text.
How do I solve this? Please and thank you.
You can use <br> tags to denote new lines for HTML output. Just a note: it is good practice to provide some reproducible code when asking a question, as this makes it easier for people to answer.
---
title: "Animals in the region are getting <br> declawed and slain."
output:
  html_document:
    df_print: paged
---
### Animals in the region are getting <br> declawed and slain.
I'm trying to collect a dataset that could be used for automatically generating baseball articles.
I have play-by-play records of MLB games from retrosheet.org that I would like to write out as plain text, the kind of sentences that could appear as part of a recap news article.
Here are some examples of the play-by-play records:
play,2,0,semim001,32,.CBFFFBBX,9/F
play,2,0,phegj001,01,FX,S7/G
play,2,0,martn003,01,CX,3/G
play,2,1,youne003,00,,NP
The following is what I would like to achieve:
For the first example
play,2,0,semim001,32,.CBFFFBBX,9/F,
I want it to be written out as something like:
"semim001 (Marcus Semien) was on three balls and two strikes in the second inning as the away player. He hit the ball into play after one called strike, one ball, three fouls, and another two balls. The fly ball was caught by the right outfielder."
The plays are formatted in the following way:
The first field is the inning, an integer starting at 1.
The second field is either 0 (for visiting team) or 1 (for home team).
The third field is the Retrosheet player id of the player at the plate.
The fourth field is the count on the batter when this particular event (play) occurred. Most Retrosheet games do not have this information, and in such cases, "??" appears in this field.
The fifth field is of variable length, contains all pitches to this batter in this plate appearance, and is described below. If pitches are unknown, this field is left empty; nothing is between the commas.
The sixth field describes the play or event that occurred.
Explanations for all the symbols in the fifth and sixth field can be found on this Retrosheet page.
With Python 3, I've been able to format all the information of invariable length, which is everything but the last two fields, into a formatted sentence. I'm having difficulty thinking of an efficient way to unparse (correct me if this is the wrong term to use here) the fifth and sixth fields, the pitches and the events that occurred, due to their variable length and the wide variety of things that can occur.
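For illustration, a minimal sketch of splitting the fixed-position fields into named values might look like the following; the function name, dictionary keys, and "home"/"away" wording are placeholders, not part of the Retrosheet format:

# Minimal sketch: split a Retrosheet "play" record into named fields.
# The field meanings follow the description above.
def parse_play(record):
    _, inning, home_flag, batter_id, count, pitches, event = record.split(",", 6)
    team = "home" if home_flag == "1" else "away"
    if count == "??":                 # count unknown in older games
        balls, strikes = None, None
    else:
        balls, strikes = int(count[0]), int(count[1])
    return {
        "inning": int(inning),
        "team": team,
        "batter": batter_id,
        "balls": balls,
        "strikes": strikes,
        "pitches": pitches,           # variable-length pitch sequence, e.g. ".CBFFFBBX"
        "event": event,               # variable-length event, e.g. "9/F"
    }

print(parse_play("play,2,0,semim001,32,.CBFFFBBX,9/F"))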
I think I could write out all the rules based on the info on the Retrosheet website, but I'm looking for suggestions for a smarter way to do this. I wrote natural language processing as tags, hoping this could be a trivial problem in that field. Any pointers will be greatly appreciated!
I have looked through several posts about this, but have failed to apply the principles used to get the result I desire, so I'm going to just post my specific problem.
I am building a Google Sheet that enables the user to pull up Bible verses.
I have it all working; however, I am running into an issue with a hidden element being pulled into my text().
FUNCTION:
=IMPORTXML("http://www.biblestudytools.com/ESV/Numbers/5-3.html",
"//*[#class='scripture']//span[2]//text()")
RESULT: You shall put out both male and female, putting them outside the camp, that they may not defile their camp, 1in the midst of which I dwell."
You can see the "1" that is showing up before the word "in"
I have found the xPath that pulls only that "1"
//*[@class='scripture']//span[2]//sup//text()
I am trying to remove that "1" from the text.
HELP PLEASE!!! :)
You can add a predicate to the end to exclude text nodes that are inside sup elements:
=IMPORTXML("http://www.biblestudytools.com/ESV/Numbers/5-3.html",
"//*[#class='scripture']//span[2]//text()[not(ancestor::sup)]")
This will retrieve only the text nodes that are not inside a sup element, but it will still result in having the verse spread out across two cells, because there are two text nodes. You can rectify this by wrapping this expression in a JOIN():
=JOIN("", IMPORTXML("http://www.biblestudytools.com/ESV/Numbers/5-3.html",
"//*[#class='scripture']//span[2]//text()[not(ancestor::sup)]"))
I want to make an Arabic morphological analyzer using Prolog.
I have implemented the following code.
check(ي,1,male).
check(ت,1,female).
check(ا,1,me).
dict(لعب,3).
ending('',0,single).
ending(ون,2,plur).
parse([]).
parse(Word,Gender,Verb,Plurality):-
    sub_atom(Word,0,LenHead,_,FirstCut),
    check(FirstCut,LenHead,Gender),
    sub_atom(Word,LenHead,_,LenAfter,Verb),
    dict(Verb,LenOfVerb),
    Location is LenHead+LenOfVerb,
    sub_atom(Word,Location,LenAfter,_,EndOfWord),
    ending(EndOfWord,_,Plurality).
This is called using:
parse(يلعب,A,S,D).
Expectation:
A = male
S = لعب
D = single
Explanation of code:
It should parse the word يلعب. Note that in Arabic the ي (the rightmost letter, since Arabic is read right to left) indicates that it is a masculine word, and لعب is a verb.
Error:
When running the code, I get the following error:
ERROR: parse/4: Undefined procedure: dict/2
Note that when mimicking the Arabic word using English letters, the code behaves as expected and doesn't produce this error.
How can I resolve this error, or make Prolog understand right-to-left words?
Edit:
In the attached image, note that in the red box it succeeded in matching ي to male. In the blue box, where it failed, it should have backtracked and started concatenating to try to match a new word, but instead it produces the error shown.
You have to be careful when you are using SWI-Prolog on the Mac. There is a slight problem with copy and paste: if you use [user] and then paste multiple lines, it doesn't read all of the lines.
This happens all the time and isn't related to the Arabic script, Unicode, or anything like that. I have filed a bug report with SWI-Prolog here. When you use [user] and enter the lines one by one, you get the right result.
In the above screenshot you can see that I pasted the lines one by one, since there are multiple '|:' prompts. Other Prolog systems don't necessarily have this problem; in Jekejeke Prolog, for example, the same multi-line paste works.
The best workaround for SWI-Prolog is probably to store the facts in a file and consult them from there. In Jekejeke Prolog I still have to investigate why the space after the comma shows up on the wrong side.
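For example, assuming the program above is saved in a file called morph.pl (the filename is just a placeholder), you can consult the file and then run the query:

?- consult('morph.pl').
true.

?- parse(يلعب,A,S,D).
A = male,
S = لعب,
D = single.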
I'm trying to extract the text from a document to index it for search. The code below mostly works, except that various words and punctuation run together. When it removes tags, I need to replace them with spaces so I don't get this issue. I have been trying to figure out the most efficient way to do this, but I'm coming up empty so far.
doc = Nokogiri::HTML(html)
doc.xpath("//script").remove
doc.xpath("//style").remove
doc.xpath("//a").remove
text = doc.text.gsub(/\s+/,' ')
Here is some sample text I extracted from http://www.washingtontimes.com/blog/redskins-watch/2012/oct/18/redskins-linemen-respond-jason-pierre-paul-rg3-com/
Before the season it was New York Giants defensive end Osi Umenyiora
who made waves by saying he wouldn't call Robert Griffin III by “RG3”
until he did something. Until then, it was “Bob Griffin.”After
Griffin's 76-yard touchdown run in the Washington Redskins' victory
over the Minnesota Vikings, fellow Giants defensive end Jason
Pierre-Paul was the one who had some comments about Griffin.“Don’t
bring it to my side," Pierre-Paul told New York media. “Go the other
way. …“Yes, it'll be a very good matchup. Not on my side, though. Not
on my side. Or the other side.”Griffin, asked jokingly Wednesday about
running for office, said: “I’ve got a lot other guys to be running
away from right now, Pierre-Paul, Osi, all those guys.”But according
to a couple of Redskins linemen, Griffin shouldn't have much to worry
about Sunday if he gets into the open field.“If Robert gets into that
situation, I don't think there's many people that can run him down,”
right guard Chris Chester said. “I'm still going to go out there and
try to block and make sure no one touches Robert at all. But he's a
plenty good athlete to be able to outrun a lot of people in this
league.”Prompted with Pierre-Paul's comments, left tackle Trent
Williams responded: “What do you want me to say about that?”“Robert's
my guy. I don't know Pierre-Paul. I don't know why he would say
something like that,” he said. “Maybe he knows something I don't.”
You could try inserting a space before each p tag:
doc.search('p').each{|el| el.before ' '}
but a better approach probably is something like:
text = doc.search('div.story p').map{|p| p.text}.join(" ")
Other answers are discussing inserting whitespace into the document, but if (as the question asks) your requirement is to replace those nodes with whitespace, Nokogiri has a replace method. So to replace script tags with spaces do:
doc.xpath('//script').each do |node|
node.replace(' ')
end
The question also asks about 'correct' spacing. Most browsers will not insert a space when rendering content around a <script> tag, so while this is useful for text extraction, it is not necessarily the 'correct' thing to do.
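Putting that together with the code from the question, a minimal sketch might look like this (here html is assumed to hold the page markup, as in the question):

require 'nokogiri'

doc = Nokogiri::HTML(html)

# Replace the unwanted nodes with a space rather than removing them,
# so that adjacent words no longer run together.
doc.xpath('//script | //style | //a').each { |node| node.replace(' ') }

text = doc.text.gsub(/\s+/, ' ').strip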