Do you know where I can find more details on the description of the Stanford NERFeatureFactory?
I read the one at: https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/ie/NERFeatureFactory.html
but I do not understand them all (and some have no description).
For example: usePrev,
useWordPairs,
conjoinShapeNGrams,
useSum, ...
or
(pw,c) (t,c)
There was a similar question two years ago that didn't get a better description. I was wondering if something new has come out since then.
Thanks for your help!
If you look through the source code of NERFeatureFactory you can see what is going on.
The source code is available here: https://github.com/stanfordnlp/CoreNLP/blob/master/src/edu/stanford/nlp/ie/NERFeatureFactory.java
For example, useWordPairs creates features that conjoin the word under consideration with the previous and next words. You can see this in the code starting on line 1062...
As an example, consider the features for the word "New" in the text "...from New York...": the useWordPairs option produces the features New-from-W-PW and New-York-W-NW.
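To make the mechanics concrete, here is a minimal sketch of how such word-pair feature strings could be built. This is my own illustration, not the actual CoreNLP code; only the -W-PW / -W-NW naming is taken from the output above.

import java.util.ArrayList;
import java.util.List;

public class WordPairFeatures {
    // Builds the two useWordPairs-style features for the token at position i:
    // current word + previous word, and current word + next word.
    static List<String> wordPairFeatures(String[] tokens, int i) {
        List<String> feats = new ArrayList<>();
        String cw = tokens[i];
        if (i > 0) {
            feats.add(cw + "-" + tokens[i - 1] + "-W-PW");
        }
        if (i < tokens.length - 1) {
            feats.add(cw + "-" + tokens[i + 1] + "-W-NW");
        }
        return feats;
    }

    public static void main(String[] args) {
        String[] tokens = {"from", "New", "York"};
        // Prints [New-from-W-PW, New-York-W-NW]
        System.out.println(wordPairFeatures(tokens, 1));
    }
}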
A lot of the features have descriptions in that file as well.
It's helpful to look through the code and see what is being produced. For instance, the conjoinShapeNGrams option produces features that conjoin the overall shape of the word with character substrings (n-grams) of the word. You can see exactly what is going on by looking at the code.
As an example of conjoinShapeNGrams, consider the name Wordsworth, which would get features like worth-Xxxxxxxxxx-CNGram-CS, Words-Xxxxxxxxxx-CNGram-CS, etc.
This feature is capturing the presence of a certain substring and word shape together.
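In the same spirit, here is a sketch of how shape-conjoined n-gram features of this form could be generated. Again this is my own illustration, not the CoreNLP source; real CoreNLP word shapes are more sophisticated than this upper/lower/digit mapping.

import java.util.ArrayList;
import java.util.List;

public class ShapeNGramFeatures {
    // Crude word shape: X for upper case, x for lower case, d for digits.
    static String shape(String w) {
        StringBuilder sb = new StringBuilder();
        for (char ch : w.toCharArray()) {
            if (Character.isUpperCase(ch)) sb.append('X');
            else if (Character.isLowerCase(ch)) sb.append('x');
            else if (Character.isDigit(ch)) sb.append('d');
            else sb.append(ch);
        }
        return sb.toString();
    }

    // Conjoins every character n-gram (up to maxLen) with the word's shape.
    static List<String> shapeNGramFeatures(String w, int maxLen) {
        List<String> feats = new ArrayList<>();
        String sh = shape(w);
        for (int len = 1; len <= maxLen; len++) {
            for (int i = 0; i + len <= w.length(); i++) {
                feats.add(w.substring(i, i + len) + "-" + sh + "-CNGram-CS");
            }
        }
        return feats;
    }

    public static void main(String[] args) {
        // Output includes Words-Xxxxxxxxxx-CNGram-CS and worth-Xxxxxxxxxx-CNGram-CS
        System.out.println(shapeNGramFeatures("Wordsworth", 5));
    }
}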
(pw, c) refers to the previous word conjoined with the class of the current position; it is linked to the usePrev flag.
(t, c) refers to the part-of-speech tag conjoined with the current class; it is linked to the useTags flag.
It doesn't look like useSum does anything anymore...
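For reference, the flags above are normally switched on in the properties file you pass to CRFClassifier when training. Here is a hypothetical excerpt; the paths are placeholders, and you should check each flag name against the javadoc of your CoreNLP version:

# Hypothetical training properties for edu.stanford.nlp.ie.crf.CRFClassifier.
# trainFile/serializeTo are placeholder paths; map says column 0 holds the
# word and column 1 the gold label.
trainFile = train.tsv
serializeTo = ner-model.ser.gz
map = word=0,answer=1
# usePrev adds (pw,c) features; useTags adds (t,c) features (needs POS tags)
usePrev = true
useTags = true
# current word conjoined with the previous/next word
useWordPairs = true
# character n-grams conjoined with the overall word shape
conjoinShapeNGrams = true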
I was reading the template documentation of rsyslog to find better properties and I stumbled upon this one:
spifno1stsp - expert options for RFC3164 template processing
However, as you can see, the documentation is quite vague. Moreover, I have not been able to find a longer explanation anywhere. The only mentions found with Google are always about the same snippet or the same very short description.
Indeed, there is no explanation of this property:
on the entire rsyslog.com website,
or in the RFC3164,
or anywhere else actually.
It is as if everybody copies and pastes the same snippet here and there, but it is very difficult to understand what it actually does.
Any idea?
Think of it as somewhat like an if statement: if the field's content already starts with a space, output nothing; otherwise, output a single space.
It is useful for ensuring that just one space appears in the output, often between two strings.
For any cases like this where you find the docs can be improved, please feel free to open an issue requesting clarification in the official GitHub rsyslog documentation project. The documentation team is understaffed, but team members will assist where they can.
If you're looking for general help, the rsyslog-users mailing list is also a good resource. I've learned a lot over the years by going over the archives and reading prior threads.
Back to your question about the spifno1stsp option:
While you will get a few hits on that option, you'll probably find more results by searching for the older string template option, sp-if-no-1st-sp. Here is an example of its use from the documentation page you linked to:
template(name="forwardFormat" type="string"
string="<%PRI%>%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
)
Here is the specific portion that is relevant here:
`%msg:::sp-if-no-1st-sp%%msg%`
From the Property Replacer documentation:
sp-if-no-1st-sp
This option looks scary and should probably not be used by a user. For
any field given, it returns either a single space character or no
character at all. Field content is never returned. A space is returned
if (and only if) the first character of the field’s content is NOT a
space. This option is kind of a hack to solve a problem rooted in RFC
3164: 3164 specifies no delimiter between the syslog tag sequence and
the actual message text. Almost all implementation in fact delimit the
two by a space. As of RFC 3164, this space is part of the message text
itself. This leads to a problem when building the message (e.g. when
writing to disk or forwarding). Should a delimiting space be included
if the message does not start with one? If not, the tag is immediately
followed by another non-space character, which can lead some log
parsers to misinterpret what is the tag and what the message. The
problem finally surfaced when the klog module was restructured and the
tag correctly written. It exists with other message sources, too. The
solution was the introduction of this special property replacer
option. Now, the default template can contain a conditional space,
which exists only if the message does not start with one. While this
does not solve all issues, it should work good enough in the far
majority of all cases. If you read this text and have no idea of what
it is talking about - relax: this is a good indication you will never
need this option. Simply forget about it ;)
In short, sp-if-no-1st-sp (string template option) is analogous to spifno1stsp (standard template option).
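For completeness, here is what the forwardFormat template from above could look like as a list ("standard") template using spifno1stsp. This is adapted from the rsyslog documentation, but double-check the option names against your rsyslog version:

template(name="forwardFormat" type="list") {
    constant(value="<")
    property(name="pri")
    constant(value=">")
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag" position.from="1" position.to="32")
    property(name="msg" spifno1stsp="on")
    property(name="msg")
}

The property(name="msg" spifno1stsp="on") line emits either a single space or nothing at all, exactly like the %msg:::sp-if-no-1st-sp% fragment in the string version.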
Hope that helps.
I have a large codebase where most of the documentation and source code comments are in English. But one of the minor contributors wrote comments in a different language, spread across various places.
Is there a simple trick that will let me find them? I imagine first extracting all comments from the code into a single text file (with source file / line number info if possible), then piping this through some language detection tool.
If that matters, I'm on Linux and the current compiler on this project is Clang.
The only thing that comes to mind is to go through all of the code manually and check it yourself. If it's a similar language that doesn't use foreign characters, consider opening the files with a spellchecker enabled: the text that isn't recognized will get underlined and will be easy to spot.
Other than that, I don't see an easy way to go through with this.
You could make a program that reads the files and prints only the comments to another output file, which you then spell check; but this might be a waste of time, as you would easily be able to spot the comments yourself.
If you do make a program for that, however, keep in mind that there are three things to check for (a sketch follows the list):
If comment starts with /*, make sure it stops reading when encountering */
If comment starts with //, only read one line - unless:
If line starting with // ends with \, read next line as well
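Here is a minimal sketch of such an extractor in Java, implementing exactly the three rules above. It deliberately ignores string literals (a /* inside a string would fool it), so treat it as a rough first pass rather than a real C/C++ lexer:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CommentExtractor {
    public static void main(String[] args) throws IOException {
        String path = args[0];
        boolean inBlock = false;   // rule 1: inside a /* ... */ comment
        boolean contLine = false;  // rule 3: previous // line ended with '\'
        int lineNo = 0;
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                lineNo++;
                if (contLine) { // // comment continued by a trailing backslash
                    System.out.println(path + ":" + lineNo + ": " + line);
                    contLine = line.endsWith("\\");
                    continue;
                }
                int i = 0;
                while (i < line.length()) {
                    if (inBlock) {
                        int end = line.indexOf("*/", i);
                        if (end < 0) { // block comment continues on next line
                            System.out.println(path + ":" + lineNo + ": " + line.substring(i));
                            i = line.length();
                        } else {       // rule 1: stop reading at */
                            System.out.println(path + ":" + lineNo + ": " + line.substring(i, end));
                            inBlock = false;
                            i = end + 2;
                        }
                    } else {
                        int block = line.indexOf("/*", i);
                        int slash = line.indexOf("//", i);
                        if (slash >= 0 && (block < 0 || slash < block)) {
                            // rule 2: // runs to end of line (maybe continued, rule 3)
                            System.out.println(path + ":" + lineNo + ": " + line.substring(slash + 2));
                            contLine = line.endsWith("\\");
                            i = line.length();
                        } else if (block >= 0) {
                            inBlock = true;
                            i = block + 2;
                        } else {
                            i = line.length(); // no comment on the rest of this line
                        }
                    }
                }
            }
        }
    }
}

Run it over every file (e.g. with find . -name '*.cpp' in a shell loop), collect the output in one text file, and feed that to a spellchecker or language detector.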
While it is possible to detect a language from a string automatically, you need way more words than fit in a usual comment to do so.
Solution: Use your own eyes and your own brain...
I'm doing a part-of-speech & morphological analysis project for Japanese sentences. Each sentence will have its own webpage. To make this page more visual, I want to show one picture which is somehow related to the sentence. For example, for the sentence "私は学生です" ("I'm a student"), relevant pictures would be pictures of a school, a Japanese textbook, students, etc. What I have: part-of-speech tagging for every word. My approach now: use 2-3 nouns from every sentence and retrieve the first image from the search results using the Bing Images API. Note: all the sentence processing up to this point was done in Java.
Have a couple of questions though:
1) Which is better (richer corpus & more powerful search) for searching nouns in Japanese: the Google Images API, Bing Images API, Flickr API, etc.?
2) How do you select the most important noun from the sentence to use as the query for the image search engine, without doing complicated topic modeling, etc.?
Thanks!
Japanese WordNet has links to OpenClipart pictures. That could be another relevant source. They describe it in their paper called "Enhancing the Japanese WordNet".
My thought was that you would start by choosing any noun before は, が, or を and giving these priority, probably in that order.
But that assumes that your part-of-speech tagging is good enough to get は=subject identified properly (as I guess you know, は is not always the subject marker).
I looked at a bunch of sample sentences here with this technique in mind and found it worked about as well as could be expected, except where none of those particles are used, which is fairly rare.
There are also sentences like the one below, where you'd have to consider looking for で and the noun before it when there is no を or は. Notice that here the word 人 (people) really doesn't tell you anything about what's being said; without parsing the context properly, you don't even know whether the noun means person or people.
毎年 交通事故で 多くの人が 死にます
(many people die in traffic accidents every year)
But basically, couldn't you implement a priority/fallback type system like this?
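Here is a minimal sketch of that fallback chain in Java. The Token record and the 名詞 check are stand-ins for whatever your tagger actually outputs (IPAdic-style tagsets use 名詞 for nouns):

import java.util.List;

public class NounPicker {
    record Token(String surface, String pos) {}

    // Particles to look behind, in priority order: は, が, を, then で.
    static final List<String> PARTICLE_PRIORITY = List.of("は", "が", "を", "で");

    // Returns the noun directly before the highest-priority particle found.
    static String pickQueryNoun(List<Token> tokens) {
        for (String particle : PARTICLE_PRIORITY) {
            for (int i = 1; i < tokens.size(); i++) {
                if (tokens.get(i).surface().equals(particle)
                        && tokens.get(i - 1).pos().startsWith("名詞")) {
                    return tokens.get(i - 1).surface();
                }
            }
        }
        return null; // fall back to e.g. the first noun, or show no image
    }

    public static void main(String[] args) {
        List<Token> sent = List.of(
                new Token("私", "名詞"), new Token("は", "助詞"),
                new Token("学生", "名詞"), new Token("です", "助動詞"));
        System.out.println(pickQueryNoun(sent)); // prints 私
    }
}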
BTW, I hope your sentences all use kanji; otherwise, when you see はし (as in one of the sentences linked to) you won't know whether to show a bridge or chopsticks, and showing the wrong one will probably not be good.
To make the matter more specific:
How to detect people names (seems like simple case of named entity extraction?)
How to detect addresses: my best guess is to find postcodes (with regexes) and country and town names, and take some text around them.
As for phones and emails, they could probably be caught by various regexes plus preprocessing.
I don't care about education/work experience at this point.
Reasoning:
In order to build a full-text index on resumes, all sensitive information should be stripped out of them.
P.S. Any 3rd-party APIs/services won't do as a solution.
The problem you're interested in is information extraction from semi-structured sources: http://en.wikipedia.org/wiki/Information_extraction
I think you should download a couple of research papers in this area to get a sense of what can be done and what can't.
I feel it can't be done by a machine.
Every other resume will have a different format and layout.
The best you can do is to design an internal format and manually copy every resume's content into it. Or ask candidates to fill out your form (not many will bother).
I think that the problem should be broken up into two search domains:
Finding information relating to proper names
Finding information that is formulaic
Firstly, the information relating to proper names could probably best be found by searching for items that are grammatically important or significant. For example, English capitalizes only the first word of a sentence and proper nouns. Using that grammatical rule, you could look for all words that have their first letter capitalized and check each against a database that contains the word and its type [e.g. Bob - Name, Elon - Place, England - Place].
Secondly: information that is formulaic. This covers email addresses, phone numbers, and physical addresses. All of these have specific formats that don't change. Use regexes and an algorithm to rate the quality of the matches; a rough sketch follows.
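As a minimal sketch of both search domains in Java (the patterns are deliberately rough and would need hardening for real resumes):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResumeScrubber {
    // Formulaic items: rough email and phone patterns.
    static final Pattern EMAIL = Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");
    static final Pattern PHONE = Pattern.compile("\\+?\\d[\\d\\s().-]{7,}\\d");
    // Proper-name candidates: capitalized words, to be checked against a
    // name/place database before being treated as sensitive.
    static final Pattern CAPITALIZED = Pattern.compile("\\b[A-Z][a-z]+\\b");

    public static void main(String[] args) {
        String text = "John Smith, john.smith@example.com, +1 (555) 123-4567, London";
        for (Pattern p : new Pattern[] {EMAIL, PHONE, CAPITALIZED}) {
            Matcher m = p.matcher(text);
            while (m.find()) {
                System.out.println(p.pattern() + " -> " + m.group());
            }
        }
    }
}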
Watch out:
The grammatical rules change based on language: German capitalizes EVERY noun, for instance. It might be best to detect the language of the document before applying your rules. Another issue with this [and with my resume, sometimes] is the design: if the resume was produced with something other than a text editor [designer tools], the text may not line up, or may even be in a bitmap format.
TL;DR Version: NLP techniques can help you a lot.
I'm curious what programming term or methodology is used when Google shows you the "did you mean" link for a query that is actually made up of multiple words.
For example, if I type in "redflower.jpg", it knows to break that up into "red flower".
Is there a common paradigm for doing that sort of operation? Would a Lucene search give you that?
Thanks!
If Google does not see a lot of matching results for redflower.jpg, it might then try to split the string into multiple words until it finds a lot of matching results.
It might also recognize the extension (.jpg) as an image extension and then try to find images with a similar name.
If I had to make an algorithm like this, I would use a huge EXISTING database (either a dictionary or a search engine index) and then try what I said at the beginning of my post; a minimal sketch of the splitting step is below.
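Here is a minimal sketch of that dictionary-splitting step ("word break") in Java; the two-word dictionary is obviously a toy stand-in for the huge existing database:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class WordSplitter {
    // Toy stand-in for a huge existing word list.
    static final Set<String> DICT = Set.of("red", "flower");

    // Returns one segmentation of s into dictionary words, or null if none
    // exists. A real implementation would memoize and rank segmentations by
    // word frequency instead of returning the first one found.
    static List<String> split(String s) {
        if (s.isEmpty()) return new ArrayList<>();
        for (int i = 1; i <= s.length(); i++) {
            String head = s.substring(0, i);
            if (DICT.contains(head)) {
                List<String> rest = split(s.substring(i));
                if (rest != null) {
                    rest.add(0, head);
                    return rest;
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String query = "redflower.jpg";
        String base = query.replaceAll("\\.[A-Za-z0-9]+$", ""); // strip the extension
        System.out.println(split(base)); // prints [red, flower]
    }
}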
Perhaps they also look at what other people do when they have searched for redflower.jpg? Maybe a number of people searched for "redflower.jpg", didn't click on any links, and then searched for "red flower" and found some results worth clicking on.
Of course, they would have to take into account that the queries are similar (contain matching strings), otherwise some strange results might appear.