I'm using Stanford CoreNLP to make some annotations (I have also added some custom annotators) and I'd like to measure how much time each annotator takes. Is there a way to do so easily?
In addition, I noticed that in the source code of the annotator there is a boolean variable verbose, but I cannot understand how to set it. Is it documented anywhere?
StanfordCoreNLP has a method String timingInformation() which will return information on how much time is being used by each annotator.
(This is unrelated to any extra printing that may be controlled by verbose, but you can often set verbose flags for individual annotators, although not quite consistently: things like -parse.debug or -pos.verbose on the command line, or the corresponding entries in a properties file.)
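For reference, a minimal sketch of how this might be used; the exact signature of timingInformation() and the per-annotator debug/verbose property names can vary between CoreNLP versions, so treat the details as assumptions to check against your release:

import java.util.Properties;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class AnnotatorTimingDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse");
        // Per-annotator verbose/debug switch, as mentioned above (property name assumed).
        props.setProperty("parse.debug", "true");

        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation document = new Annotation("Stanford CoreNLP runs each annotator in turn.");
        pipeline.annotate(document);

        // Prints the time accumulated by each annotator in this pipeline.
        System.out.println(pipeline.timingInformation());
    }
}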
Related
When we are coding, it is really important to maintain our code base while preserving coding standards. There are plenty of code formatting tools that integrate with the respective IDEs; by using them we can format our code while preserving line indentation. So, is there any tool to measure the usage of code formatting in our code base? I have heard there are tools to measure test coverage: such a tool reports the percentage of methods covered by test cases, so we can find the places where we have not written tests. In the same way, I want to catch the places where code formatting has not been applied. When a code base has many source files and lines, it's a tedious task to check whether we forgot to run the code formatting tool, and as far as I know there's no way to run the tool on all the files at once. If you know a solution to my problem, please let me know.
I want to apply various operations to data files: set algebra, statistics, reporting, changes. But the format of the files is far from anything in the code examples and a bit weird. There are different sorts of items and item types, and some of them are grouped together as a collection. There is a simplistic example below.
I'm new to boost::spirit and I have tried writing code to split the items and get the basic information (name, version, date) required for most of the treatments. It turns out to be tricky for me. Is the problem my lack of skill, or is boost::spirit not suitable for this format?
Studying boost::spirit is not a waste of time; I am sure I will use it later. But I didn't find examples of code like mine, so I may not be going about it the right way.
>>>process_type_A
//name(typeA_1)
//version(A.1.99)
//date(2016.01.01)
//property1 "pA11"
//property2 "pA12"
//etc_A_1 (thousands of lines - a lot are "multiline" and/or multiline sub-records)
<<<process_type_A
>>>process_type_A
//name(typeA_2)
//version(A.2.99)
//date(2016.01.02)
//property1 "pA21"
//property2 "pA22"
//etc_A_2 (hundred or thousand of lines)
<<<process_type_A
>>>process_type_B
//name(typeB_1)
//version(B.1.99)
//date(2016.02.01)
//property1 "pB11"
//property2 "pB12"
//etc_B_1 (hundred or thousand of lines)
<<<process_type_B
>>>paramset_type_C
//>>paramlist
////name(typeC_1)
////version(C.1.99)
////date(2016.03.01)
////property1 "pC11"
////property2 "pC12"
////etc_C_1 (hundred or thousand of lines)
//<<paramlist
//>>paramlist
////name(typeC_2)
////version(C.2.99)
////date(2016.04.01)
////property1 "pC21"
////property2 "pC22"
////etc_C_2 (hundred or thousand of lines)
//<<paramlist
<<<paramset_type_C
Code::Blocks
Boost 1.60.0
GCC Compiler on Windows and Linux
I think @Orient is right: a regex with captures is enough here.
However, Spirit has the upside of coming without a linker dependency. Here are some approaches (using seek[] and raw[]) for inspiration:
Boost spirit revert parsing
rule to extract key+phrases from a text document
Parsing text file with binary envelope using boost Spririt (binary content)
much more involved logic: How to implement #ifdef in a boost::spirit::qi grammar?
Note that Spirit X3 (still experimental) also has a seek[] directive, and it compiles much faster.
The main advice I would give about Qi is that it is a very powerful and flexible tool for parsing. You can define quite complicated, possibly recursive structures, using boost::variant, boost::optional, etc., and associate these types with qi rules and it seemingly magically does the right thing, giving you a nice AST for your data.
The biggest sources of difficulty in my (limited) experience come when you try to make it do more than that and also process the data. It's sometimes tempting to "eagerly" do some processing at the same time that you are parsing, often in a semantic action or the like. Don't do it! It usually makes things harder to read, a bit harder to debug, and you can be surprised by what happens if the grammar has to backtrack across a semantic action it has already executed.
qi should work great if you can write a nice grammar for your data. If you can't write an unambiguous grammar, you might be able to use qi::eps to make it parseable but you don't want to have to do that too often IMO. I don't think "hundreds or thousands" of items will pose any particular problem.
Right now the question is rather opinion-oriented -- if you can post a more complete description of the data format you have, or better, a complete code example which is failing, it might make it easier to give precise answers.
I'm parsing over 60,000 sentences with CoreNLP to get dependency relations.
Because I only need collapsed dependencies, the other dependency types -- basic and collapsed-cc-processed -- are redundant for my use, and they make it harder to build my own code, which takes the XML output as input.
Can I get only collapsed dependencies?
If so, please let me know.
Thanks.
There is currently no way to do this. Computing the additional representations takes very little time, and so they are always reported. They should be marked specially in the XML output, however; hopefully it's not hard to pick out the correct representation in the downstream code.
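If it helps downstream, one option is to post-process the XML and drop the representations you don't need. A minimal sketch with the standard DOM API, assuming the type attribute values in the XML output are basic-dependencies, collapsed-dependencies and collapsed-ccprocessed-dependencies, and using placeholder file names (check your own output to confirm the exact strings):

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class KeepCollapsedOnly {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("corenlp-output.xml"));

        // Walk the <dependencies> elements and drop every representation
        // other than the collapsed one. Iterate backwards because the
        // NodeList is live and shrinks as children are removed.
        NodeList deps = doc.getElementsByTagName("dependencies");
        for (int i = deps.getLength() - 1; i >= 0; i--) {
            Element e = (Element) deps.item(i);
            if (!"collapsed-dependencies".equals(e.getAttribute("type"))) {
                e.getParentNode().removeChild(e);
            }
        }

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.transform(new DOMSource(doc), new StreamResult(new File("collapsed-only.xml")));
    }
}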
I am trying to use the Stanford Shift Reduce Parser with the supplied Spanish model. I am noticing, however, that unlike with the Lexicalized Parser, I cannot get the TypedDependencies, despite passing the appropriate flag -outputFormat typedDependencies, as can be seen in lexparser.bat/sh.
Just in case, this is the Java code I'm using to pass the flags and create the parser.
ShiftReduceParser model = ShiftReduceParser.loadModel(modelPath);
model.setOptionFlags("-factored", "-outputFormat", "penn,typedDependencies");
ArrayList<TaggedWord> taggedWords = new ArrayList<TaggedWord>();
Thank you
The problem here is not the ShiftReduceParser, but simply that we don't currently support typed dependencies for Spanish - we only have them for English and Chinese.
(Looking ahead, the most likely thing to appear first is support for Universal Dependencies in the Neural Network Dependency Parser. Indeed, you could probably train such a model yourself now.)
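For English, where the dependency converter does exist, the usual route is to convert the shift-reduce parser's constituency tree with a GrammaticalStructureFactory. A minimal sketch; the model paths are the standard ones shipped with CoreNLP 3.x and may differ in your distribution:

import java.util.List;
import edu.stanford.nlp.ling.HasWord;
import edu.stanford.nlp.ling.Sentence;
import edu.stanford.nlp.ling.TaggedWord;
import edu.stanford.nlp.parser.shiftreduce.ShiftReduceParser;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;
import edu.stanford.nlp.trees.GrammaticalStructure;
import edu.stanford.nlp.trees.PennTreebankLanguagePack;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreebankLanguagePack;

public class SrTypedDependenciesDemo {
    public static void main(String[] args) {
        ShiftReduceParser parser = ShiftReduceParser.loadModel(
                "edu/stanford/nlp/models/srparser/englishSR.ser.gz");
        MaxentTagger tagger = new MaxentTagger(
                "edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger");

        // The shift-reduce parser expects pre-tagged input.
        List<HasWord> words = Sentence.toWordList("The quick brown fox jumps over the lazy dog .".split(" "));
        List<TaggedWord> tagged = tagger.tagSentence(words);
        Tree tree = parser.apply(tagged);

        // Convert the constituency tree into typed dependencies (English/Chinese only).
        TreebankLanguagePack tlp = new PennTreebankLanguagePack();
        GrammaticalStructure gs = tlp.grammaticalStructureFactory().newGrammaticalStructure(tree);
        System.out.println(gs.typedDependenciesCollapsed());
    }
}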
I'm working on a project which will do some complicated analyzing on some user-supplied input. There will be 3 parts of the code:
1) Input supplied by user, such as keywords
2) Rules, such as "if keyword 1 is repeated 3 times in keyword 5, do this", etc.
3) And the analyzing itself which executes the rules and processes the user input, and generates the output necessary based on the processing.
Naturally this will lead to a lot of spaghetti code and many, many if statements in the processing code. I want to avoid that, and keep the rules (i.e. the if statements) separately from the code which loops through the user input and generates the output.
How can I do that, i.e. what is the best way?
If you have enough rules that you want to externalize them, you could try using a business rules engine, like Drools in Java.
A business rules engine is a software system that executes one or more business rules in a runtime production environment. The rules might come from legal regulation ("An employee can be fired for any reason or no reason but not for an illegal reason"), company policy ("All customers that spend more than $100 at one time will receive a 10% discount"), or other sources. (Wikipedia)
It could be a bit of overhead depending on what you're trying to do. In my company we use this kind of tool in our quality analysis tooling.
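To give a feel for what that looks like in code, here is a minimal sketch using the Drools 6+ KIE API; the session name and the fact class are hypothetical placeholders, and the actual rule logic would live in DRL files packaged on the classpath:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RulesEngineDemo {
    // Hypothetical fact class matched by the DRL rules.
    public static class KeywordInput {
        public final String keyword;
        public final int repetitions;
        public KeywordInput(String keyword, int repetitions) {
            this.keyword = keyword;
            this.repetitions = repetitions;
        }
    }

    public static void main(String[] args) {
        // Load the rules packaged on the classpath (kmodule.xml + .drl files).
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession("keywordRulesSession"); // session name assumed

        try {
            // Insert the user input as facts and let the engine evaluate the rules.
            session.insert(new KeywordInput("keyword1", 3));
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}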
Store it in XML. Easy to parse and update.
I once designed a code generator that could be controlled from an XML file.
For each command there was an entry in the XML. I processed the node to generate the opcode for that command; the node itself contained the actions needed to obtain the opcode. For some commands I had to look into a database, and all of that was also described in this XML file.
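As an illustration of keeping the rules in XML, here is a minimal sketch that reads hypothetical rule entries with the standard DOM API; the file name, element names and attribute names are made up for the example:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class XmlRuleLoader {
    public static void main(String[] args) throws Exception {
        // rules.xml is a hypothetical file such as:
        // <rules>
        //   <rule keyword="keyword1" minRepeats="3" action="doThis"/>
        // </rules>
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("rules.xml"));

        NodeList nodes = doc.getElementsByTagName("rule");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element rule = (Element) nodes.item(i);
            String keyword = rule.getAttribute("keyword");
            int minRepeats = Integer.parseInt(rule.getAttribute("minRepeats"));
            String action = rule.getAttribute("action");
            System.out.printf("if '%s' appears at least %d times -> %s%n", keyword, minRepeats, action);
        }
    }
}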
Well, I doubt that huge if statements are necessary if polymorphism is applied correctly.
Actually, what you need is a proper domain model for your rules. This goes somewhat in the direction of the command pattern and, depending on the complexity of your code, maybe in combination with the state machine pattern.
Once you have your model, defining rules is a matter of instantiating it correctly.
This could be done with an XML definition that is parsed and transformed into your model. But the more modern and even fancier way would be to use a DSL. If you program in Java and have a certain freedom in your choice of libraries, this would be a proper use case for an embedded DSL with Groovy. Basically you would need a builder that constructs your model, that's all.
You can always implement a factory that creates particular strategies according to the parameters passed in, and then use those strategies in your code without any if statements; see the sketch below.
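A minimal sketch of that idea; the Rule interface and the concrete strategies are hypothetical:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StrategyFactoryDemo {
    // Hypothetical strategy abstraction: each rule inspects the keywords and produces a result.
    interface Rule {
        String apply(List<String> keywords);
    }

    // Factory: strategies are registered and looked up by name, so the
    // processing loop never branches on what kind of rule it is running.
    private static final Map<String, Rule> RULES = new HashMap<>();
    static {
        RULES.put("countRepeats", keywords -> "keyword1 occurs "
                + keywords.stream().filter("keyword1"::equals).count() + " times");
        RULES.put("firstKeyword", keywords -> keywords.isEmpty()
                ? "no input" : "first keyword: " + keywords.get(0));
    }

    static Rule ruleFor(String name) {
        return RULES.getOrDefault(name, keywords -> "no rule named " + name);
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("keyword1", "keyword5", "keyword1", "keyword1");
        System.out.println(ruleFor("countRepeats").apply(input));
        System.out.println(ruleFor("firstKeyword").apply(input));
    }
}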
If it's just detecting keywords, a finite state machine or something similar will do. If it's doing more, then look at other pattern matching systems, such as rules engines.
Adding an embedded scripting language to your application might help. The rules would then be expressed in scripts executed by the application during processing.
The idea is that scripts are easy to change and contain the high-level logic, while your application carries out the details.
There are a lot of scripting languages available for this: Lua, Python, Falcon, Squirrel, AngelScript, etc.
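To make the idea concrete, here is a minimal sketch using the JDK's JSR-223 javax.script API with the built-in JavaScript engine (available on Java 8 through 14); the languages listed above can be embedded in a similar way through their respective bindings, and the rule script here is a made-up example:

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptedRuleDemo {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");

        // Expose application data to the script.
        engine.put("keyword1Count", 3);

        // A hypothetical rule kept outside the application, easy to change without recompiling.
        String rule = "keyword1Count >= 3 ? 'trigger special handling' : 'no action'";
        System.out.println(engine.eval(rule));
    }
}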
Have a look at rule engines!
The approach suggested by Lars may also be worth considering.