Warnings on Spanish text processing with Stanford CoreNLP: "Number in types column for ... is probably priority" - stanford-nlp

I've downloaded Stanford CoreNLP from https://stanfordnlp.github.io/CoreNLP/index.html (current version 3.9.2), downloaded the Spanish language models JAR
http://nlp.stanford.edu/software/stanford-spanish-corenlp-2018-10-05-models.jar
and put it in the application root folder.
Fired up the server with:
C:\Stanford>java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000
Loaded http://localhost:9000
Entered the text "Sí, sabes que ya llevo un rato mirándote" and selected "Spanish" and Submit.
In the console readout there are lots of warnings like:
[pool-1-thread-1] WARN
edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - Number in types
column for [ejecución] is probably priority: 1
The output suggests the defaults are working OK, but what misconfiguration is causing this warning?

This issue should be resolved in future versions. The warning doesn't mean anything regarding performance; the rules files for Spanish were simply missing a column. We've fixed the files, so from 4.0.0 on those warnings should go away.
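For reference, the same Spanish pipeline can be run from Java code instead of through the server UI. The sketch below is a minimal example, assuming CoreNLP 3.9.2 and the Spanish models JAR are on the classpath; it loads the StanfordCoreNLP-spanish.properties defaults bundled in the models JAR, and the class names are the standard CoreNLP API ones:

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class SpanishPipelineDemo {
    public static void main(String[] args) {
        // Load the Spanish defaults bundled in the Spanish models JAR
        // (stanford-spanish-corenlp-2018-10-05-models.jar must be on the classpath).
        StanfordCoreNLP pipeline = new StanfordCoreNLP("StanfordCoreNLP-spanish.properties");

        CoreDocument doc = new CoreDocument("Sí, sabes que ya llevo un rato mirándote");
        pipeline.annotate(doc);

        // Print each token with its part-of-speech tag.
        for (CoreLabel token : doc.tokens()) {
            System.out.println(token.word() + "\t" + token.tag());
        }
    }
}

With the ner annotator enabled by those defaults, the same TokensRegexNERAnnotator warnings may still appear on the console while the models load; they are harmless.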

Related

Error while generating report in apache-jmeter-5.4.1.tgz

sh jmeter.sh -n -t filePath.jmx -l outFilePath.jtl -e -o folderPath
Error generating the report: org.apache.jmeter.report.dashboard.GenerationException: Error while processing samples: Consumer failed with message :Consumer failed with message :Consumer failed with message :Consumer failed with message :Begin size 0 is not equal to fixed size 5
In summary:
Consumer failed with message :Begin size 0 is not equal to fixed size 5
Currently using Java version "17" 2021-09-14 LTS
macOS Big Sur version 11.4
JMeter 5.4.1
The properties files are fresh and the values are equal to the defaults.
outFile.jtl
timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect
1632430450882,1117,HTTP Request,200,OK,FIRST_Jmeter_Test 1-3,text,true,,3824,557,3,3,Url_hidden,1111,0,256
1632430450448,1755,HTTP Request,200,OK,FIRST_Jmeter_Test 1-2,text,true,,3836,557,3,3,Url_hidden,1755,0,690
1632430450448,1755,HTTP Request,200,OK,FIRST_Jmeter_Test 1-1,text,true,,3828,557,3,3,Url_hidden,1755,0,690
1632430452312,585,HTTP Request,200,OK,FIRST_Jmeter_Test 1-2,text,true,,3836,557,3,3,Url_hidden,585,0,144
1632430452238,758,HTTP Request,200,OK,FIRST_Jmeter_Test 1-3,text,true,,3832,557,3,3,Url_hidden,757,0,137
1632430452301,806,HTTP Request,200,OK,FIRST_Jmeter_Test 1-1,text,true,,3828,557,3,3,Url_hidden,805,0,136
1632430452962,550,HTTP Request,200,OK,FIRST_Jmeter_Test 1-2,text,true,,3824,557,3,3,Url_hidden,550,0,152
1632430453328,593,HTTP Request,200,OK,FIRST_Jmeter_Test 1-1,text,true,,3828,557,2,2,Url_hidden,592,0,135
1632430453276,815,HTTP Request,200,OK,FIRST_Jmeter_Test 1-3,text,true,,3840,557,1,1,Url_hidden,814,0,142
The threads run successfully and the .jtl file is created as well.
I'm quite new to JMeter and tried to find where that "size" attribute is located so I could change it, but could not find it in any *.properties file.
Any thoughts on how this can be fixed, or what the message is referring to?
Thanks.
This error is likely due to an incompatibility of JMeter with Java 17 (as mentioned by Dmitri T).
Whilst we wait for a fix, a workaround would be downgrading to Java 16. I can confirm this solved the issue for me.
I had the same issue with:
macOS Big Sur version 11.6
JMeter 5.4.1 (installed via brew)
Temurin 11 (LTS) OpenJDK & Temurin 8 (LTS) OpenJDK
Running JMeter with Java 8 solved my issue. The problem was that JMeter always used Java 11. I struggled for some days to find out how to set the JMeter Java version:
Set the correct Java 8 home in /usr/local/Cellar/jmeter/5.4.1/bin/jmeter:
JAVA_HOME=$JAVA_8_HOME exec "/usr/local/Cellar/jmeter/5.4.1/libexec/bin/jmeter" "$@"
Maybe there are easier ways to set Java 8 for JMeter, but this was the only solution that worked for me.
I had Java 8 installed; however, JMeter was picking up Java 17, which was nowhere on my system. Uninstalling and reinstalling JMeter worked like a charm for me.
I cannot reproduce your issue using:
openjdk:8-jre-alpine docker image
JMeter 5.4.1
Test plan Test.jmx from extras folder of JMeter
If you cannot reproduce the above behaviour, I think you made some changes to the Results File Configuration, to the Reporting Configuration, or to both, so you need to inspect all the JMeter properties which differ from the defaults and restore their original values.
If you need further support, share at least the first 2 lines of your outFilePath.jtl results file; better, if possible, the full file and all the .properties files from JMeter's "bin" folder.
Run /usr/libexec/java_home -V on the command line to list the installed JDKs,
then replace JAVA_HOME in JMeter's bin script accordingly,
and then it succeeds.

Starting Payara 5 has encountered a problem

I have built a very simple Hello World project with:
Payara 5 (5.181)
JSF 2.3
JDK 1.8
CDI 2.0
Maven
and encountered a problem:
Unable to start server due following issues: Launch process failed with exit code 1
In the console it throws an error:
Error: Could not find or load main class server\payara5\glassfish.lib.grizzly-npn-bootstrap.jar
[PIC] Payara 5 Error
It seems that the Payara Tools for Eclipse suffer from several bugs that may cause this. In my case, the following workarounds helped:
The Payara installation path should not contain spaces (e.g. Program Files\Payara).
It seems that only Java 8 is supported at this time.
Open the domain.xml configuration file for the domain you are trying to start (typically payara_install_path/glassfish/domains/domain1/config/domain.xml) and search for "Xbootclasspath". You should find a couple of lines like
<jvm-options>[1.8.0|1.8.0u120]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.6.jar</jvm-options>
<jvm-options>[1.8.0u121|1.8.0u160]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.7.jar</jvm-options>
<jvm-options>[1.8.0u161|1.8.0u190]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.jar</jvm-options>
<jvm-options>[1.8.0u191|1.8.0u500]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
Depending on your installed Java version (try running java -version), choose the appropriate line (most likely the last one). Remove the remaining lines and remove the [...] part at the beginning of the chosen line, so you get something like
<jvm-options>-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
After this, the tools seem to start normally.
The problem is with the Java version. The grizzly-npn-bootstrap-1.8.1.jar JAR is placed on the bootclasspath, which is why the proper Java version is required to start the Payara server. So remove the unnecessary bootstrap JAR entries from domain.xml.
In Windows:
1) Go to C:\Users\xxxx\payara5\glassfish\domains\domain1\config\domain.xml
2) According to my Java version (java version "1.8.0_191") I deleted the following lines from domain.xml. Delete the ones that do not match your Java version.
<jvm-options>[1.8.0|1.8.0u120]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.6.jar</jvm-options>
<jvm-options>[1.8.0u121|1.8.0u160]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.7.jar</jvm-options>
<jvm-options>[1.8.0u161|1.8.0u190]-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.jar</jvm-options>
3) Remove the [1.8.0u191|1.8.0u500] part from the remaining jvm-options entry and edit the line in your domain.xml (w.r.t. java -version) as shown below:
<jvm-options>-Xbootclasspath/p:${com.sun.aas.installRoot}/lib/grizzly-npn-bootstrap-1.8.1.jar</jvm-options>
4) Restart your server.
As Radkovo said, "The Payara installation path should not contain spaces (e.g. Program Files\Payara)", so I moved the Payara to the Documents folder.
Problem solved!

Too small initial heap error - stanford parser

I am trying out the Stanford dependency parser. I tried running the parser from the command line on Windows to extract the dependencies using this command:
java -mx100m -cp "stanford-parser.jar" edu.stanford.nlp.trees.EnglishGrammaticalStructure -sentFile english-onesent.txt -collapsedTree -CCprocessed -parserFile englishPCFG.ser.gz
I am getting the following error:
Error occurred during initialization of VM
Too small initial heap
I changed the memory size to -mx1024, -mx2048 as well as -mx4096. It didn't change anything and the error persists.
What am I missing?
Type -Xmx1024m instead of -mx1024.
See https://docs.oracle.com/javase/8/docs/technotes/tools/windows/java.html
It should be -mx1024m; I had skipped the m.
One more thing: in the -cp, the model jar should also be included.
... -cp "stanford-parser.jar;stanford-parser-3.5.2-models.jar"...
(assuming you are using the latest version).
Otherwise an IO Exception will be thrown.
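For reference, the same typed dependencies can also be produced from a short Java program rather than the command line, which sidesteps the heap-flag syntax entirely. The sketch below is modeled on the ParserDemo class shipped with the parser and assumes stanford-parser.jar plus the matching models JAR are on the classpath; the englishPCFG model path is the standard one:

import java.io.StringReader;
import java.util.Collection;
import java.util.List;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.process.Tokenizer;
import edu.stanford.nlp.process.TokenizerFactory;
import edu.stanford.nlp.trees.GrammaticalStructure;
import edu.stanford.nlp.trees.GrammaticalStructureFactory;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreebankLanguagePack;
import edu.stanford.nlp.trees.TypedDependency;

public class DependencyDemo {
    public static void main(String[] args) {
        // The parser model is read from the models JAR on the classpath.
        LexicalizedParser lp = LexicalizedParser.loadModel(
            "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");

        // Tokenize the raw sentence with the parser's PTB tokenizer.
        TokenizerFactory<CoreLabel> tokenizerFactory =
            PTBTokenizer.factory(new CoreLabelTokenFactory(), "");
        Tokenizer<CoreLabel> tokenizer =
            tokenizerFactory.getTokenizer(new StringReader("This is an easy sentence."));
        List<CoreLabel> tokens = tokenizer.tokenize();

        // Parse, then convert the constituency tree to CCprocessed dependencies,
        // roughly the -CCprocessed output from the command line.
        Tree parse = lp.apply(tokens);
        TreebankLanguagePack tlp = lp.treebankLanguagePack();
        GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
        GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
        Collection<TypedDependency> dependencies = gs.typedDependenciesCCprocessed();
        System.out.println(dependencies);
    }
}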
There may be some pre-existing arguments in the IDE.
In Eclipse:
Go to -> Run As -> Run Configurations -> Arguments,
then delete the arguments that were used previously.
Restart Eclipse.
Worked for me!

NullPointerException with Stanford NLP Spanish POS tagging

All -
Running Stanford CoreNLP 3.4.1, plus the Spanish models. I have a directory of approximately 100 Spanish raw-text documents, UTF-8 encoded. For each one, I execute the following command line:
java -cp stanford-corenlp-3.4.1.jar:stanford-spanish-corenlp-2014-08-26-models.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-0.23.jar -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props <propsfile> -file <txtfile>
The props file looks like this:
annotators = tokenize, ssplit, pos
tokenize.language = es
pos.model = edu/stanford/nlp/models/pos-tagger/spanish/spanish-distsim.tagger
For almost every file, I get the following error:
Exception in thread "main" java.lang.RuntimeException: Error annotating :
at edu.stanford.nlp.pipeline.StanfordCoreNLP$15.run(StanfordCoreNLP.java:1287)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.processFiles(StanfordCoreNLP.java:1347)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.run(StanfordCoreNLP.java:1389)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.main(StanfordCoreNLP.java:1459)
Caused by: java.lang.NullPointerException
at edu.stanford.nlp.tagger.maxent.ExtractorSpanishStrippedVerb.extract(ExtractorFramesRare.java:1626)
at edu.stanford.nlp.tagger.maxent.Extractor.extract(Extractor.java:153)
at edu.stanford.nlp.tagger.maxent.TestSentence.getExactHistories(TestSentence.java:465)
at edu.stanford.nlp.tagger.maxent.TestSentence.getHistories(TestSentence.java:440)
at edu.stanford.nlp.tagger.maxent.TestSentence.getHistories(TestSentence.java:428)
at edu.stanford.nlp.tagger.maxent.TestSentence.getExactScores(TestSentence.java:377)
at edu.stanford.nlp.tagger.maxent.TestSentence.getScores(TestSentence.java:372)
at edu.stanford.nlp.tagger.maxent.TestSentence.scoresOf(TestSentence.java:713)
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:91)
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:31)
at edu.stanford.nlp.tagger.maxent.TestSentence.runTagInference(TestSentence.java:322)
at edu.stanford.nlp.tagger.maxent.TestSentence.testTagInference(TestSentence.java:312)
at edu.stanford.nlp.tagger.maxent.TestSentence.tagSentence(TestSentence.java:135)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagSentence(MaxentTagger.java:998)
at edu.stanford.nlp.pipeline.POSTaggerAnnotator.doOneSentence(POSTaggerAnnotator.java:147)
at edu.stanford.nlp.pipeline.POSTaggerAnnotator.annotate(POSTaggerAnnotator.java:110)
at edu.stanford.nlp.pipeline.AnnotationPipeline.annotate(AnnotationPipeline.java:67)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.annotate(StanfordCoreNLP.java:847)
at edu.stanford.nlp.pipeline.StanfordCoreNLP$15.run(StanfordCoreNLP.java:1275)
Any ideas? I haven't even begun to track this down. I'm certain the problem is in POS; tokenize and ssplit run just fine.
P.S. Please don't say "Upgrade to 3.5.0"; I don't currently have Java 8 installed and don't want to install it yet.
Thanks in advance.
Yes, it seems like there's a bug in the 3.4.1 Spanish models.
The Spanish 3.5.0 models actually seem to be compatible with Java 7. You can download the models used in 3.5 (stanford-spanish-corenlp-2014-10-23-models.jar) and put that on your classpath instead. This fixed the problem for me running Java 7 locally.
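For reference, since the props file only runs tokenize, ssplit, and pos, you can also exercise the tagger directly to confirm the replacement models load. A minimal sketch, assuming stanford-spanish-corenlp-2014-10-23-models.jar is on the classpath and the spanish-distsim.tagger path inside it is unchanged:

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class SpanishTaggerCheck {
    public static void main(String[] args) {
        // Model path as in the props file above; the Spanish models JAR
        // must be on the classpath for this resource to resolve.
        MaxentTagger tagger = new MaxentTagger(
            "edu/stanford/nlp/models/pos-tagger/spanish/spanish-distsim.tagger");

        // tagString tokenizes the input and returns word_TAG pairs as one string.
        System.out.println(tagger.tagString("El niño corre por el parque."));
    }
}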

Sonar - OutOfMemoryError: Java heap space

I am deploying a large Java project on Sonar using "Findbugs" as the profile and getting the error below:
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
What I have tried to resolve this:
Replaced %SONAR_RUNNER_OPTS% with -Xms256m -Xmx1024m to increase the heap size in the sonar-runner bat file.
Set the "sonar.findbugs.effort" parameter to "Min" in the Sonar global parameters.
But neither of the above methods worked for me.
I had the same problem and found a very different solution, perhaps because I'm having a hard time swallowing the previous answers / comments. With 10 million lines of code (that's more code than is in an F16 fighter jet), if you have 100 characters per line (a generous size), you could load the whole code base into 1 GB of memory. I gave it 8 GB of memory and it still failed. Why?
Answer: Because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. Hence, the reason it's running out of memory is that it's trying to read some file that it thinks is 300 MB of pure code but really should be ignored.
Solution: Find the extensions used by all of the files in your project (see more here):
dir /s /b | perl -ne 'print $1 if m/\.([^^.\\\\]+)$/' | sort -u | grep c
Then add these other extensions as exclusions in your sonar.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts.
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...
This has worked for me:
SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
I set it directly in the sonar-runner(.bat) file.
I had the same problem when running sonar with maven. In my case it helped to call sonar separately:
mvn clean install && mvn sonar:sonar
instead of
mvn clean install sonar:sonar
http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven
Remark: Because my solution is connected to Maven, this is not a direct answer to the question. But it might help other users who stumble upon it.
What you can do is create your own quality profile with just some FindBugs rules at first, and then progressively add more and more until you reach the OutOfMemoryError. There's probably only a single rule that makes all this fail because your code violates it - and if you deactivate this rule, it will certainly work.
I know this thread is a bit old but this info might help someone.
For me the problem was not the C++ plugin as suggested by the top answer.
Instead, my problem was the XML plugin (https://docs.sonarqube.org/display/PLUG/SonarXML);
after I deactivated it, the analysis worked again.
You can solve this issue by increasing the maximum memory allocated to the appropriate process, i.e. by raising the -Xmx setting for the corresponding Java process in your sonar.properties file,
under SonarQube/conf/sonar.properties.
Uncomment the lines below and increase the memory as you want:
For Web: sonar.web.javaOpts=-Xmx5123m -Xms1536m -XX:+HeapDumpOnOutOfMemoryError
For Elasticsearch: sonar.search.javaOpts=-Xms512m -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError
For Compute Engine: sonar.ce.javaOpts=-Xmx1536m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
The problem is on the FindBugs side. I suppose you're analyzing a large project that probably has many violations. Take a look at two threads in Sonar's mailing list about the same issue. There are some ideas you can try for yourself.
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td4898141.html
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td5001587.html
I know this is old, but I am just posting my answer anyway. I realized I was using the 32-bit JDK (version 8), and after uninstalling it and then installing the 64-bit JDK (version 12), the problem disappeared.
