I was reading the template documentation of rsyslog to find better properties and I stumbled upon this one:
spifno1stsp - expert options for RFC3164 template processing
However, as you can see, the documentation is quite vague. Moreover, I have not been able to find a longer explanation anywhere. The only mentions found with Google are always about the same snippet or the same very short description.
Indeed, there is no explanation of this property:
on the entire rsyslog.com website,
or in RFC 3164,
or anywhere else actually.
It is like everybody copies & pastes the same snippet here and there, but it is very difficult to understand what it is actually doing.
Any idea?
Think of it as somewhat like an if statement. If a space is present, don't do anything. Otherwise, if a space is not present, add a space.
It is useful for ensuring that just one space is added to the output, often between two strings.
For any cases like this where you find the docs can be improved, please feel free to open an issue requesting clarification in the official rsyslog documentation project on GitHub. The documentation team is understaffed, but team members will assist where they can.
If you're looking for general help, the rsyslog-users mailing list is also a good resource. I've learned a lot over the years by going over the archives and reading prior threads.
Back to your question about the spifno1stsp option:
While you will get a few hits on that option, you'll probably find more results by searching for the older string template option, sp-if-no-1st-sp. Here is an example of its use from the documentation page you linked to:
template(name="forwardFormat" type="string"
string="<%PRI%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
)
Here is the specific portion that is relevant here:
`%msg:::sp-if-no-1st-sp%%msg%`
From the Property Replacer documentation:
sp-if-no-1st-sp
This option looks scary and should probably not be used by a user. For any field given, it returns either a single space character or no character at all. Field content is never returned. A space is returned if (and only if) the first character of the field’s content is NOT a space. This option is kind of a hack to solve a problem rooted in RFC 3164: 3164 specifies no delimiter between the syslog tag sequence and the actual message text. Almost all implementation in fact delimit the two by a space. As of RFC 3164, this space is part of the message text itself. This leads to a problem when building the message (e.g. when writing to disk or forwarding). Should a delimiting space be included if the message does not start with one? If not, the tag is immediately followed by another non-space character, which can lead some log parsers to misinterpret what is the tag and what the message. The problem finally surfaced when the klog module was restructured and the tag correctly written. It exists with other message sources, too. The solution was the introduction of this special property replacer option. Now, the default template can contain a conditional space, which exists only if the message does not start with one. While this does not solve all issues, it should work good enough in the far majority of all cases. If you read this text and have no idea of what it is talking about - relax: this is a good indication you will never need this option. Simply forget about it ;)
In short, sp-if-no-1st-sp (string template option) is analogous to spifno1stsp (standard template option).
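For completeness, here is roughly what the same forwarding template looks like when written as a list template, where the option appears as the spifno1stsp property parameter. This is a sketch assembled from the rsyslog template documentation, so double-check the parameter names against the docs for your rsyslog version:
template(name="forwardFormat" type="list") {
    constant(value="<")
    property(name="pri")
    constant(value=">")
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag" position.from="1" position.to="32")
    property(name="msg" spifno1stsp="on")
    property(name="msg")
}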
Hope that helps.
I am using Jekyll and Markdown for the first time to build a blog site. From what I understand about Markdown files, the pound key is what is used to comment lines, except it does the complete opposite for me. Anything within all of my .md files is commented out, and the things that are supposed to be comments are actual live text on the page. Here's what I mean:
Does anyone know what the problem is? It was working properly yesterday, so I'm thinking that it may be a problem with my text editor (Atom). Thanks!
From what I understand about markdown files, the pound key is what is used to comment lines, except it does the complete opposite for me.
Nope. Hashes are used to represent various levels of headers:
Atx-style headers use 1-6 hash characters at the start of the line, corresponding to header levels 1-6. For example:
# This is an H1
## This is an H2
###### This is an H6
Markdown doesn't have the concept of comments, although it does support inline HTML, so you can use HTML comments, e.g.
<!-- This is a comment -->
I've recently discovered, to my astonishment (having never really thought about it before), machine-sorting Japanese proper nouns is apparently not possible.
I work on an application that must allow the user to select a hospital from a 3-menu interface. The first menu is Prefecture, the second is City Name, and the third is Hospital. Each menu should be sorted, as you might expect, so the user can find what they want in the menu.
Let me outline what I have found, as preamble to my question:
The expected sort order for Japanese words is based on their pronunciation. Kanji do not have an inherent order (there are tens of thousands of Kanji in use), but the Japanese phonetic syllabaries do have an order: あ、い、う、え、お、か、き、く、け、こ... and on for the fifty traditional distinct sounds (a few of which are obsolete in modern Japanese). This sort order is called 五十音順 (gojuu on jun, or '50-sound order').
Therefore, Kanji words should be sorted in the same order as they would be if they were written in hiragana. (You can represent any kanji word in phonetic hiragana in Japanese.)
The kicker: there is no canonical way to determine the pronunciation of a given word written in kanji. You never know. Some kanji have ten or more different pronunciations, depending on the word. Many common words are in the dictionary, and I could probably hack together a way to look them up from one of the free dictionary databases, but proper nouns (e.g. hospital names) are not in the dictionary.
So, in my application, I have a list of every prefecture, city, and hospital in Japan. In order to sort these lists, which is a requirement, I need a matching list of each of these names in phonetic form (kana).
I can't come up with anything other than paying somebody fluent in Japanese (I'm only so-so) to manually transcribe them. Before I do so though:
Is it possible that I am totally high on fire, and there actually is some way to do this sorting without creating my own mappings of kanji words to phonetic readings, that I have somehow overlooked?
Is there a publicly available mapping of prefecture/city names, from the government or something? That would reduce the manual mapping I'd need to do to only hospital names.
Does anybody have any other advice on how to approach this problem? Any programming language is fine--I'm working with Ruby on Rails but I would be delighted if I could just write a program that would take the kanji input (say 40,000 proper nouns) and then output the phonetic representations as data that I could import into my Rails app.
Thanks in advance! (宜しくお願いします。)
For data, dig into Google's Japanese IME (Mozc) data files here:
https://github.com/google/mozc/tree/master/src/data
There is lots of interesting data there, including IPA dictionaries.
Edit:
You may also try MeCab; it can use the IPA dictionary and can convert kanji to katakana for most words:
https://taku910.github.io/mecab/
There are Ruby bindings for it too:
https://taku910.github.io/mecab/bindings.html
And here is someone who tested Ruby with MeCab using the tagger option -Oyomi:
http://hirai2.blog129.fc2.com/blog-entry-4.html
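If it helps, here is a minimal sketch of what that looks like with the official Ruby bindings (assumes MeCab and an IPA dictionary are installed; the sample string and its reading are just an illustration):
require 'MeCab'

# -Oyomi makes MeCab output the reading (full-width katakana) for its input.
tagger = MeCab::Tagger.new("-Oyomi")
puts tagger.parse("東京都渋谷区")   # => roughly トウキョウトシブヤク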
Just a quick follow-up to explain the eventual solution we used. Thanks to all who recommended MeCab--this appears to have done the trick.
We have a mostly-Rails backend, but in our circumstance we didn't need to solve this problem on the backend. For user-entered data, e.g. creating new entities with Japanese names, we modified the UI to require the user to enter the phonetic yomigana in addition to the kanji name. Users seem accustomed to this. The problem was the large corpus of data that is built into the app--hospital, company, and place names, mainly.
So, what we did is:
We converted all the source data (a list of 4000 hospitals with name, address, etc) into .csv format (encoded as UTF-8, of course).
Then, for developer use, we wrote a ruby script that:
Uses mecab to translate the contents of that file into Japanese phonetic readings
(the precise command used was mecab -Oyomi -o seed_hospitals.converted.csv seed_hospitals.csv, which outputs a new file with the kanji replaced by the phonetic equivalent, expressed in full-width katakana).
Standardizes all yomikata into hiragana (because users tend to enter hiragana when manually entering yomikata, and hiragana and katakana sort differently). Ruby makes this easy once you find it: NKF.nkf("-h1 -w", katakana_str) # -h1 means to hiragana, -w means output utf8
Using the awesomely convenient new Ruby 1.9.2 version of CSV, combine the input file with the mecab-translated file, so that the resulting file now has extra columns inserted, e.g. NAME, NAME_YOMIGANA, ADDRESS, ADDRESS_YOMIGANA, and so on (a rough sketch of this and the previous step follows the list).
Use the data from the resulting .csv file to seed our rails app with its built-in values.
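Here is a rough sketch of the hiragana-normalization and merge steps above. The two input file names are the ones from the mecab command; the output file name is illustrative, and the real script has more error handling:
require 'csv'
require 'nkf'

originals = CSV.read("seed_hospitals.csv")
readings  = CSV.read("seed_hospitals.converted.csv")

CSV.open("seed_hospitals.merged.csv", "w") do |out|
  originals.zip(readings) do |orig_row, yomi_row|
    # Normalize MeCab's full-width katakana output to hiragana (-h1), UTF-8 out (-w)
    yomi_row = yomi_row.map { |cell| NKF.nkf("-h1 -w", cell.to_s) }
    # Interleave original and yomigana columns: NAME, NAME_YOMIGANA, ADDRESS, ...
    out << orig_row.zip(yomi_row).flatten
  end
end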
From time to time the client updates the source data, so we will need to do this whenever that happens.
As far as I can tell, this output is good. My Japanese isn't good enough to be 100% sure, but a few of my Japanese coworkers skimmed it and said it looks all right. I put a slightly obfuscated sample of the converted addresses in this gist so that anybody who cared to read this far can see for themselves.
UPDATE: The results are in... it's pretty good, but not perfect. Still, it looks like it correctly phoneticized 95%+ of the quasi-random addresses in my list.
Many thanks to all who helped me!
Nice to hear people are working with Japanese.
I think you're spot on with your assessment of the problem difficulty. I just asked one of the Japanese guys in my lab, and the way to do it seems to be as you describe:
Take a list of Kanji
Infer (guess) the yomigana
Sort yomigana by gojuon.
The hard part is obviously step two. I have two guys in my lab: 高橋 and 高谷. Naturally, when sorting reports etc. by name they appear nowhere near each other.
EDIT
If you're fluent in Japanese, have a look here: http://mecab.sourceforge.net/
It's a pretty popular tool, so you should be able to find English documentation too (the man page for mecab has English info).
I'm not familiar with MeCab, but I think using MeCab is a good idea.
That said, I'll introduce another method.
If your app is written in Microsoft VBA, you can call the "GetPhonetic" function. It's easy to use.
See: http://msdn.microsoft.com/en-us/library/aa195745(v=office.11).aspx
Sorting prefectures by their pronunciation is not common. Most Japanese are used to prefectures sorted by 「都道府県コード」 (the prefecture codes).
e.g. 01:北海道, 02:青森県, …, 13:東京都, …, 27:大阪府, …, 47:沖縄県
These codes are defined in "JIS X 0401" or "ISO 3166-2:JP".
See (Japanese Wikipedia):
http://ja.wikipedia.org/wiki/%E5%85%A8%E5%9B%BD%E5%9C%B0%E6%96%B9%E5%85%AC%E5%85%B1%E5%9B%A3%E4%BD%93%E3%82%B3%E3%83%BC%E3%83%89
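If you go that route, the sort itself is trivial once each prefecture carries its JIS X 0401 code. A tiny Ruby sketch (the list is abbreviated; the codes shown are the standard ones):
PREFECTURES = {
  "01" => "北海道",
  "02" => "青森県",
  "13" => "東京都",
  "27" => "大阪府",
  "47" => "沖縄県",
}

# Sort by the two-digit code rather than by pronunciation.
PREFECTURES.sort_by { |code, _name| code }.each do |code, name|
  puts "#{code}: #{name}"
end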
Could anyone suggest a way to make long words (like serial numbers) wrap? I tried some commercial software and it has no such issue. Is it a FOP bug, or is there perhaps a solution available?
I can't insert a zero-length space after each character of every word in the document. That solution sounds insane to me.
You can specify the wrap-option attribute in your fo:block like so:
<fo:block wrap-option="wrap"> ... stuff </fo:block>
Here's the XSL-FO specification for this attribute:
XSL Definition:
Value: no-wrap | wrap | inherit
Initial: wrap
Applies to: fo:block, fo:inline, fo:page-number, fo:page-number-citation
Inherited: yes
Percentages: N/A
Media: visual
Values have the following meanings:
no-wrap: No line-wrapping will be performed. In the case when lines are longer than the available width of the content-rectangle, the overflow will be treated in accordance with the "overflow" property specified on the reference-area.
wrap: Line-breaking will occur if the line overflows the available block width. No special markers or other treatment will occur.
Specifies how line-wrapping (line-breaking) of the content of the formatting object is to be handled. Implementations must support the "no-wrap" value, as defined in this Recommendation, when the value of "linefeed-treatment" is "preserve".
You can also define the wrap-option attribute in an fo:table-cell
<fo:table-cell wrap-option="wrap"> ... </fo:table-cell>
and the fo:blocks within will inherit the property.
Zkoh's answer (wrapping) will help you only if the text contains multiple words separated by whitespace. In the case of long words (as mentioned in the question), hyphenation is the way to go (as Daniel suggested).
There can be quite a few problems with hyphenation in FOP:
FOP uses hyphenation algorithms from TeX, and because of some licensing issues, those algorithms (at least for some languages) are not part of the standard FOP binary distribution (as stated here) and must be downloaded separately from the OFFO web site. There are two kinds of hyphenation pattern files on the website: the XML format (which needs to be compiled first to be used with FOP) and a JAR file (already compiled). Be sure to download the compiled version! Installation is straightforward and well documented - just drop the OFFO binary into FOP's lib folder and that's it...
Don't forget to specify the language of your document and, if needed, enable hyphenation at the block level (it's inherited, so add it to the root element and you should be fine) - see the FOP FAQ.
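A minimal sketch of what that might look like on the root element (the language/country values are just an example; adjust them to your document):
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format"
         language="en" country="US" hyphenate="true">
  <!-- layout-master-set, page-sequences, ... -->
</fo:root>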
Would hyphenation solve your problem? You should be able to enable hyphenation with a hyphenate="true" attribute. Placement of this attribute will depend on where you want to enable hyphenation.
Here's a link to FOP's hyphenation compliance: Apache FOP Compliance Page
Here's a link to the XSL spec: XSL Spec #hyphenate
If not, you may need to experiment with some keeps properties (like keep-together.within-line).
Use keep-together.within-column="always" instead of keep-together="always" to keep long lines within a table cell.
The question is about serial numbers, not about dictionary words. Specifying hyphenate="true" is useful only when the hyphenation dictionary or hyphenation algorithm can successfully hyphenate the words in the text. Serial numbers would rarely generate sequences that can usefully be hyphenated as if they are words.
You can, of course, use XSLT to add zero-width spaces in text in table cells rather than doing it manually. StackOverflow likes duplicate questions (see https://stackoverflow.blog/2010/11/16/dr-strangedupe-or-how-i-learned-to-stop-worrying-and-love-duplication/), but, all the same, please see the answers in XSL-FO: Force Wrap on Table Entries.
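For what it's worth, here is a rough sketch of that approach, assuming an XSLT 2.0 processor (e.g. Saxon) and a hypothetical serial element holding the serial number; it inserts a zero-width space (&#x200B;) after every character so the formatter can break the line anywhere:
<xsl:template match="serial">
  <fo:block wrap-option="wrap">
    <!-- append a zero-width space to each character of the serial number -->
    <xsl:value-of select="replace(., '(.)', '$1&#x200B;')"/>
  </fo:block>
</xsl:template>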
For the text overflow problem, use the keep-together="auto" attribute.
[Screenshot: the text overflow issue]
[Screenshot: fixed version after using the keep-together="auto" attribute]
Is there a standard or open format which can be used to describe the formatting of a flat file? My company integrates many different customer file formats. With an XML file it's easy to get or create an XSD to describe the XML file format. I'm looking for something similar to describe a flat file format (fixed width, delimited, etc.). Stylus Studio uses a proprietary .conv format to do this. That .conv format can be used at runtime to transform an arbitrary flat file to an XML file. I was just wondering if there was any more open or standards-based method for doing the same thing.
I'm looking for one method of describing a variety of flat file formats whether they are fixed width or delimited, so CSV is not an answer to this question.
XFlat:
http://www.infoloom.com/gcaconfs/WEB/philadelphia99/lyons.HTM#N29
http://www.unidex.com/overview.htm
For complex cases (e.g. log files) you may consider a lexical parser.
About selecting existing flat file formats: There is the Comma-separated values (CSV) format. Or, more generally, DSV. But these are not "fixed-width", since there's a delimiter character (such as a comma) that separates individual cells. Note that though CSV is standardized, not everybody adheres to the standard. Also, CSV may be too simple for your purposes, since it doesn't allow a rich document structure.
In that respect, the standardized and only slightly more complex (but thus more useful) formats JSON and YAML are a better choice. Both are supported out of the box by plenty of languages.
Your best bet is to have a look at all languages listed as non-binary in this overview and then determine which works best for you.
About describing flat file formats: This could be very easy or difficult, depending on the format. Though in most cases easier solutions exist, one way that will work in general is to view the file format as a formal grammar, and write a lexer/parser for it. But I admit, that's quite† heavy machinery.
If you're lucky, a couple of advanced regular expressions may do the trick. Most formats will not lend themselves to that, however.‡ If you plan on writing a lexer/parser yourself, I can advise PLY (Python Lex-Yacc). But many other solutions exist, in many different languages, a lot of them more convenient than the old-school Lex & Yacc. For more, see What parser generator do you recommend?
†: Yes, that may be an understatement.
‡: Even properly describing the email address format is not trivial.
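Before reaching for a full parser, note that for plain fixed-width records a declarative field table plus simple slicing is often enough. A tiny Ruby sketch (the field names and widths are invented for illustration):
# Describe the fixed-width record layout as data...
FIELDS = [
  ["customer_id", 8],
  ["name",        20],
  ["balance",     10],
]

# ...then derive an unpack spec from it ("A8A20A10") and slice each line with it.
unpack_spec = FIELDS.map { |_name, width| "A#{width}" }.join

line   = "00012345" + "Jane Q. Customer".ljust(20) + "0000123.45"
values = line.unpack(unpack_spec)
record = Hash[FIELDS.map(&:first).zip(values)]
# => {"customer_id"=>"00012345", "name"=>"Jane Q. Customer", "balance"=>"0000123.45"}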
COBOL (whether you like it or not) has a standard format for describing fixed-width record formats in files.
Other file formats, however, are somewhat simpler to describe. A CSV file, for example, is just a list of strings. Often the first row of a CSV file is the column names -- that's the description.
There are examples of using JSON to formulate metadata for text files. This can be applied to JSON files, CSV files and fixed-format files.
Look at http://www.projectzero.org/sMash/1.1.x/docs/zero.devguide.doc/zero.resource/declaration.html
This is IBM's sMash (Project Zero) using JSON to encode metadata. You can easily apply this to flat files.
At the end of the day, you will probably have to define your own file standard that caters specifically to your storage needs. What I suggest is using XML, YAML or JSON as your internal container for all of the file types you receive. On top of this, you will have to implement some extra validation logic to maintain metadata such as the column sizes of the fixed-width files (for importing from and exporting to fixed width). Alternatively, you can store or link a set of metadata to each file you convert to the internal format.
There may be a standard out there, but it's too hard to create 'one size fits all' solutions for these problems. There are entity relationship management tools out there (Talend, others) that make creating these mappings easier, but you will still need to spend a lot of time maintaining file format definitions and rules.
As for enforcing column width, XML might be the best solution, as you can describe the formats using XML schemas (with the length restriction).
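For example, a fixed-width column can be pinned to an exact size in an XML Schema roughly like this (the element name is just illustrative):
<xs:element name="accountNumber">
  <xs:simpleType>
    <xs:restriction base="xs:string">
      <!-- the value must be exactly 10 characters long -->
      <xs:length value="10"/>
    </xs:restriction>
  </xs:simpleType>
</xs:element>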
See XML vs comma delimited text files for further reference.
I don't know if there is any standard or open format to describe a flat file format. But one industry has done this: the banking industry. Financial institutions do indeed communicate using standardized messages over a dedicated network called SWIFT. SWIFT messages were originally positional (before SWIFTML, the XML-ified version). I don't know if it's a good suggestion as it's kinda obscure, but maybe you could look at the SWIFT Formatting Guide; it may give you some ideas.
That said, check out Flatworm, a humble flat file parser. I've used it to parse positional and/or CSV files and liked its XML descriptor format. It may be a better suggestion than SWIFT :)
CSV
CSV is a delimited data format that has fields/columns separated by the comma character and records/rows separated by newlines. Fields that contain a special character (comma, newline, or double quote), must be enclosed in double quotes. However, if a line contains a single entry which is the empty string, it may be enclosed in double quotes. If a field's value contains a double quote character it is escaped by placing another double quote character next to it. The CSV file format does not require a specific character encoding, byte order, or line terminator format.
The CSV entry on wikipedia allowed me to find a comparison of data serialization formats that is pretty much what you asked for.
The only similar thing I know of is Hachoir, which can currently parse 70 file formats:
http://bitbucket.org/haypo/hachoir/wiki/Home
I'm not sure if it really counts as a declarative language, since it's plugin parser based, but it seems to work, and is extensible, which may meet your needs just fine.
As an aside, there are interesting standardised, extensible flat-file FORMATS, such as IFF (Interchange File Format).