Can someone help me correct my settings for performing coreference annotation for French using CoreNLP? I have tried the basic suggestion by editing the properties file:
annotators = tokenize, ssplit, pos, parse, lemma, ner, parse, depparse, mention, coref
tokenize.language = fr
pos.model = edu/stanford/nlp/models/pos-tagger/french/french.tagger
parse.model = edu/stanford/nlp/models/lexparser/frenchFactored.ser.gz
The command:
java -cp "*" -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props frenchProps.properties -file frenchFile.txt
which gets the following output log:
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/french/french.tagger ... done [0.3 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[main] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/frenchFactored.ser.gz ...
done [2.2 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [2.0 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [0.7 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [0.9 sec].
[main] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
Aug 23, 2016 5:37:34 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 83 rules
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt
Aug 23, 2016 5:37:34 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 267 rules
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
Aug 23, 2016 5:37:34 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 25 rules
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
Loading depparse model file: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
PreComputed 100000, Elapsed Time: 1.639 (s)
Initializing dependency parser done [6.4 sec].
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator mention
Using mention detector type: rule
[main] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator coref
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Arrays.java:3664)
at java.lang.String.<init>(String.java:207)
at java.lang.StringBuilder.toString(StringBuilder.java:407)
at java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3097)
at java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:2892)
at java.io.ObjectInputStream.readString(ObjectInputStream.java:1646)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1344)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at java.util.HashMap.readObject(HashMap.java:1402)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
at edu.stanford.nlp.io.IOUtils.readObjectFromURLOrClasspathOrFileSystem(IOUtils.java:324)
at edu.stanford.nlp.scoref.SimpleLinearClassifier.<init>(SimpleLinearClassifier.java:30)
at edu.stanford.nlp.scoref.PairwiseModel.<init>(PairwiseModel.java:75)
at edu.stanford.nlp.scoref.PairwiseModel$Builder.build(PairwiseModel.java:57)
at edu.stanford.nlp.scoref.ClusteringCorefSystem.<init>(ClusteringCorefSystem.java:31)
at edu.stanford.nlp.scoref.StatisticalCorefSystem.fromProps(StatisticalCorefSystem.java:48)
at edu.stanford.nlp.pipeline.CorefAnnotator.<init>(CorefAnnotator.java:66)
at edu.stanford.nlp.pipeline.AnnotatorImplementations.coref(AnnotatorImplementations.java:220)
at edu.stanford.nlp.pipeline.AnnotatorFactories$13.create(AnnotatorFactories.java:515)
at edu.stanford.nlp.pipeline.AnnotatorPool.get(AnnotatorPool.java:85)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.construct(StanfordCoreNLP.java:375)
This made me think that some extra configuration is missing.
AFAIK CoreNLP doesn't offer coreference resolution for French (see also http://stanfordnlp.github.io/CoreNLP/coref.html). That also explains your log: since ner and coref have no French models, the pipeline falls back to the English defaults, and the stack trace shows the JVM running out of heap while deserializing the English statistical coref model, which does not fit in -Xmx2g.
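If you just need an analysis pipeline for French, a workable fallback is to drop coref and the annotators without French models (ner, and arguably lemma), and to deduplicate parse, which is listed twice in your annotators line. A minimal sketch using only the models from your own question (some releases of the French models jar also ship a UD dependency parser you could add back for depparse, but that is an assumption about your release):

annotators = tokenize, ssplit, pos, parse
tokenize.language = fr
pos.model = edu/stanford/nlp/models/pos-tagger/french/french.tagger
parse.model = edu/stanford/nlp/models/lexparser/frenchFactored.ser.gz

java -cp "*" -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP -props frenchProps.properties -file frenchFile.txt

And if you ever run coref (for English), give the JVM noticeably more than 2 GB of heap, e.g. -Xmx5g, or the model load will fail exactly as above.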
I am getting the below error while running import-hive.sh. Could you please help me out?
hadoop#0.0.0.0:~/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/hook-bin$ ./import-hive.sh
Using Hive configuration directory [/home/hadoop/hive/conf]
Log file for import is /home/hadoop/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/logs/import-hive.log
2021-07-13T15:43:21,449 INFO [main] org.apache.atlas.ApplicationProperties - Looking for atlas-application.properties in classpath
2021-07-13T15:43:21,452 INFO [main] org.apache.atlas.ApplicationProperties - Loading atlas-application.properties from file:/home/hadoop/hive/conf/atlas-application.properties
2021-07-13T15:43:21,505 INFO [main] org.apache.atlas.ApplicationProperties - Using graphdb backend 'janus'
2021-07-13T15:43:21,505 INFO [main] org.apache.atlas.ApplicationProperties - Using storage backend 'hbase2'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Using index backend 'solr'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Atlas is running in MODE: PROD.
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting solr-wait-searcher property 'true'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting index.search.map-name property 'false'
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Setting atlas.graph.index.search.max-result-set-size = 150
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache = true
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache-clean-wait = 20
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.db-cache-size = 0.5
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.tx-cache-size = 15000
2021-07-13T15:43:21,506 INFO [main] org.apache.atlas.ApplicationProperties - Property (set to default) atlas.graph.cache.tx-dirty-size = 120
Enter username for atlas :- admin
Enter password for atlas :-
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/security/authentication/client/ConnectionConfigurator
at org.apache.atlas.AtlasBaseClient.getClient(AtlasBaseClient.java:287)
at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:454)
at org.apache.atlas.AtlasBaseClient.initializeState(AtlasBaseClient.java:449)
at org.apache.atlas.AtlasBaseClient.<init>(AtlasBaseClient.java:132)
at org.apache.atlas.AtlasClientV2.<init>(AtlasClientV2.java:94)
at org.apache.atlas.hive.bridge.HiveMetaStoreBridge.main(HiveMetaStoreBridge.java:134)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.security.authentication.client.ConnectionConfigurator
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 6 more
Failed to import Hive Meta Data!!!
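The missing class org.apache.hadoop.security.authentication.client.ConnectionConfigurator ships in Hadoop's hadoop-auth jar, so import-hive.sh is apparently not seeing the Hadoop client libraries on its classpath. A hedged sketch of the usual workaround; the paths and jar version below are assumptions for illustration:

# assumption: HADOOP_HOME points at your Hadoop install
export HADOOP_CLASSPATH="$HADOOP_HOME/share/hadoop/common/lib/*"
# or copy the jar next to the hook's own libraries before re-running the import
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-*.jar \
   ~/apache-atlas-2.1.0/hook/apache-atlas-hive-hook-2.1.0/hook/hive/atlas-hive-plugin-impl/
./import-hive.sh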
I am using the pycorenlp client in order to talk to the Stanford CoreNLP Server. In my setup I am setting pipelineLanguage to german like this:
from pycorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('http://localhost:9000')
text = 'Das große Auto.'
output = nlp.annotate(text, properties={
'annotators': 'tokenize,ssplit,pos,depparse,parse',
'outputFormat': 'json',
'pipelineLanguage': 'german'
})
However, from the looks of it, it's not working:
output['sentences'][0]['tokens']
will return:
[{'after': ' ',
'before': '',
'characterOffsetBegin': 0,
'characterOffsetEnd': 3,
'index': 1,
'originalText': 'Das',
'pos': 'NN',
'word': 'Das'},
{'after': ' ',
'before': ' ',
'characterOffsetBegin': 4,
'characterOffsetEnd': 9,
'index': 2,
'originalText': 'große',
'pos': 'NN',
'word': 'große'},
{'after': '',
'before': ' ',
'characterOffsetBegin': 10,
'characterOffsetEnd': 14,
'index': 3,
'originalText': 'Auto',
'pos': 'NN',
'word': 'Auto'},
{'after': '',
'before': '',
'characterOffsetBegin': 14,
'characterOffsetEnd': 15,
'index': 4,
'originalText': '.',
'pos': '.',
'word': '.'}]
This should be more like:
Das große Auto
POS: DT JJ NN
It seems to me that setting 'pipelineLanguage': 'german' does not work for some reason.
I've executed
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -timeout 15000
in order to start the server.
I am getting the following from the logger:
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:9000
[pool-1-thread-3] ERROR CoreNLP - Failure to load language specific properties: StanfordCoreNLP-german.properties for german
[pool-1-thread-3] INFO CoreNLP - [/127.0.0.1:60700] API call w/annotators tokenize,ssplit,pos,depparse,parse
Das große Auto.
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - No tokenizer type provided. Defaulting to PTBTokenizer.
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-3] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [0.5 sec].
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
[pool-1-thread-3] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Loading depparse model file: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
[pool-1-thread-3] INFO edu.stanford.nlp.parser.nndep.Classifier - PreComputed 99996, Elapsed Time: 8.645 (s)
[pool-1-thread-3] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Initializing dependency parser ... done [9.8 sec].
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[pool-1-thread-3] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ... done [0.3 sec].
Apparently the server is loading the models for the English language; the only hint that something went wrong is the "Failure to load language specific properties" error above.
Alright, I just downloaded the German models jar from the website and moved it into the directory where I had extracted the server, e.g. ~/Downloads/stanford-corenlp-full-2017-06-09. After restarting the server, the models were loaded successfully:
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-3] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/german/german-hgc.tagger ... done [5.1 sec].
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
[pool-1-thread-3] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Loading depparse model file: edu/stanford/nlp/models/parser/nndep/UD_German.gz ...
[pool-1-thread-3] INFO edu.stanford.nlp.parser.nndep.Classifier - PreComputed 99984, Elapsed Time: 11.419 (s)
[pool-1-thread-3] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Initializing dependency parser ... done [12.2 sec].
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator parse
[pool-1-thread-3] INFO edu.stanford.nlp.parser.common.ParserGrammar - Loading parser from serialized file edu/stanford/nlp/models/lexparser/germanFactored.ser.gz ... done [1.0 sec].
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[pool-1-thread-3] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[pool-1-thread-3] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/german.conll.hgc_175m_600.crf.ser.gz ... done [0.7 sec].
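With the German jar on the classpath, the same request as before should now return German part-of-speech tags instead of the English defaults. A quick check (a sketch; it assumes the server from above is still listening on port 9000):

from pycorenlp import StanfordCoreNLP

nlp = StanfordCoreNLP('http://localhost:9000')
output = nlp.annotate('Das große Auto.', properties={
    'annotators': 'tokenize,ssplit,pos',
    'outputFormat': 'json',
    'pipelineLanguage': 'german'
})
# The german-hgc tagger uses the STTS tagset, so expect something like
# ART ADJA NN $. here rather than the 'NN NN NN .' from before.
print([(t['word'], t['pos']) for t in output['sentences'][0]['tokens']])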
I'm trying to parse the input below (there are two records in this input) using the Elephant Bird JSON loader:
[{"node_disk_lnum_1":36,"node_disk_xfers_in_rate_sum":136.40000000000001,"node_disk_bytes_in_rate_22":
187392.0, "node_disk_lnum_7": 13}]
[{"node_disk_lnum_1": 36, "node_disk_xfers_in_rate_sum":
105.2,"node_disk_bytes_in_rate_22": 123084.8, "node_disk_lnum_7":13}]
Here is my syntax:
register '/home/data/Desktop/elephant-bird-pig-4.1.jar';
a = LOAD '/pig/tc1.log' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad') AS (json:map[]);
b = FOREACH a GENERATE flatten(json#'node_disk_lnum_1')            AS node_disk_lnum_1,
                       flatten(json#'node_disk_xfers_in_rate_sum') AS node_disk_xfers_in_rate_sum,
                       flatten(json#'node_disk_bytes_in_rate_22')  AS node_disk_bytes_in_rate_22,
                       flatten(json#'node_disk_lnum_7')            AS node_disk_lnum_7;
DESCRIBE b;
Result of DESCRIBE b:
b: {node_disk_lnum_1: bytearray, node_disk_xfers_in_rate_sum: bytearray, node_disk_bytes_in_rate_22: bytearray, node_disk_lnum_7: bytearray}
c = FOREACH b GENERATE node_disk_lnum_1;
DESCRIBE c;
c: {node_disk_lnum_1: bytearray}
DUMP c;
Expected Result:
36, 136.40000000000001, 187392.0, 13
36, 105.2, 123084.8, 13
It throws the error below:
2017-02-06 01:05:49,337 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2017-02-06 01:05:49,386 [main] INFO org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2017-02-06 01:05:49,387 [main] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2017-02-06 01:05:49,390 [main] INFO org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Map key required for a: $0->[node_disk_lnum_1, node_disk_xfers_in_rate_sum, node_disk_bytes_in_rate_22, node_disk_lnum_7]
2017-02-06 01:05:49,395 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2017-02-06 01:05:49,398 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2017-02-06 01:05:49,398 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2017-02-06 01:05:49,425 [main] INFO org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2017-02-06 01:05:49,426 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2017-02-06 01:05:49,428 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. com/twitter/elephantbird/util/HadoopCompat
Please help: what am I missing?
You do not have any nested data in your JSON, so remove '-nestedLoad':
a = LOAD '/pig/tc1.log' USING com.twitter.elephantbird.pig.load.JsonLoader() as (json:map[]);
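Separately, the ERROR 2998 message (com/twitter/elephantbird/util/HadoopCompat) means a class from Elephant Bird's hadoop-compat artifact could not be loaded, so that jar has to be registered as well. A sketch of the full script with explicit casts, since the map values come back as bytearray; the extra jar names and paths are assumptions:

register '/home/data/Desktop/elephant-bird-pig-4.1.jar';
register '/home/data/Desktop/elephant-bird-hadoop-compat-4.1.jar'; -- assumed to sit next to the pig jar
register '/home/data/Desktop/json-simple-1.1.jar';                 -- JsonLoader's JSON parser dependency

a = LOAD '/pig/tc1.log' USING com.twitter.elephantbird.pig.load.JsonLoader() AS (json:map[]);

-- cast the bytearray map values to concrete types instead of flattening them
b = FOREACH a GENERATE (int)    json#'node_disk_lnum_1'            AS node_disk_lnum_1,
                       (double) json#'node_disk_xfers_in_rate_sum' AS node_disk_xfers_in_rate_sum,
                       (double) json#'node_disk_bytes_in_rate_22'  AS node_disk_bytes_in_rate_22,
                       (int)    json#'node_disk_lnum_7'            AS node_disk_lnum_7;

DUMP b;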
I'm trying to run JMeter for performance testing on a Magento 2 website. So far, I've been able to load the benchmark.jmx file provided by Magento into JMeter. But when I try to run it, it starts and ends immediately. This is the error I get:
2016/09/01 09:43:43 WARN - jmeter.testbeans.BeanInfoSupport: Localized strings not available for bean class kg.apc.jmeter.config.redis.RedisDataSet java.util.MissingResourceException: Can't find bundle for base name kg.apc.jmeter.config.redis.RedisDataSetResources, locale en_US
at java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:1499)
at java.util.ResourceBundle.getBundleImpl(ResourceBundle.java:1322)
at java.util.ResourceBundle.getBundle(ResourceBundle.java:795)
at org.apache.jmeter.testbeans.BeanInfoSupport.<init>(BeanInfoSupport.java:126)
at kg.apc.jmeter.config.redis.RedisDataSetBeanInfo.<init>(RedisDataSetBeanInfo.java:69)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:383)
at com.sun.beans.finder.InstanceFinder.instantiate(InstanceFinder.java:96)
at com.sun.beans.finder.InstanceFinder.find(InstanceFinder.java:66)
at java.beans.Introspector.findExplicitBeanInfo(Introspector.java:438)
at java.beans.Introspector.<init>(Introspector.java:388)
at java.beans.Introspector.getBeanInfo(Introspector.java:163)
at org.apache.jmeter.testbeans.gui.TestBeanGUI.<init>(TestBeanGUI.java:168)
at org.apache.jmeter.gui.util.MenuFactory.initializeMenus(MenuFactory.java:488)
at org.apache.jmeter.gui.util.MenuFactory.<clinit>(MenuFactory.java:160)
at org.apache.jmeter.control.gui.TestPlanGui.createPopupMenu(TestPlanGui.java:93)
at org.apache.jmeter.gui.tree.JMeterTreeNode.createPopupMenu(JMeterTreeNode.java:156)
at org.apache.jmeter.gui.action.EditCommand.doAction(EditCommand.java:47)
at org.apache.jmeter.gui.action.ActionRouter.performAction(ActionRouter.java:80)
at org.apache.jmeter.gui.action.ActionRouter.access$000(ActionRouter.java:40)
at org.apache.jmeter.gui.action.ActionRouter$1.run(ActionRouter.java:62)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:312)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:745)
at java.awt.EventQueue.access$300(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:706)
at java.awt.EventQueue$3.run(EventQueue.java:704)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:77)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:715)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
2016/09/01 09:43:44 INFO - jmeter.util.BSFTestElement: Registering JMeter version of JavaScript engine as work-round for BSF-22
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/html is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xhtml+xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for application/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/xml is org.apache.jmeter.protocol.http.parser.LagartoBasedHtmlParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/vnd.wap.wml is org.apache.jmeter.protocol.http.parser.RegexpHTMLParser
2016/09/01 09:43:45 INFO - jmeter.protocol.http.sampler.HTTPSamplerBase: Parser for text/css is org.apache.jmeter.protocol.http.parser.CssParser
2016/09/01 09:43:45 INFO - jorphan.exec.KeyToolUtils: keytool found at 'keytool'
2016/09/01 09:43:45 INFO - jmeter.protocol.http.proxy.ProxyControl: HTTP(S) Test Script Recorder SSL Proxy will use keys that support embedded 3rd party resources in file /home/yassar/Downloads/jmeter/apache-jmeter-3.0/bin/proxyserver.jks
2016/09/01 09:43:45 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.config.MongoSourceElement
2016/09/01 09:43:45 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.protocol.mongodb.sampler.MongoScriptSampler
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_qos]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_most_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_least_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_exactly_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_client_types]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_blocking_client]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_async_client]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_message_input_type]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_message_input_type_text]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_message_input_type_file]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_qos]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_most_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_at_least_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_exactly_once]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_client_types]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_blocking_client]
2016/09/01 09:43:45 WARN - jmeter.util.JMeterUtils: ERROR! Resource string not found: [mqtt_async_client]
2016/09/01 09:43:46 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.DistributionGraphVisualizer
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: Note: Sample TimeStamps are START times
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: sampleresult.default.encoding is set to ISO-8859-1
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: sampleresult.useNanoTime=true
2016/09/01 09:43:46 INFO - jmeter.samplers.SampleResult: sampleresult.nanoThreadSleep=5000
2016/09/01 09:43:46 INFO - jmeter.gui.util.MenuFactory: Skipping org.apache.jmeter.visualizers.SplineVisualizer
2016/09/01 10:25:41 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2016/09/01 10:25:41 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/09/01 10:25:41 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/09/01 10:25:41 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2016/09/01 10:25:42 INFO - jmeter.engine.StandardJMeterEngine: No enabled thread groups found
2016/09/01 10:25:42 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2016/09/01 10:25:42 INFO - jmeter.services.FileServer: Default base='/home/yassar/Downloads/jmeter/apache-jmeter-3.0/bin'
2016/09/01 10:25:42 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
2016/09/01 10:25:50 INFO - jmeter.gui.action.Load: Loading file: /home/yassar/Downloads/benchmark.jmx
2016/09/01 10:25:50 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Downloads'
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Using SaveService properties file encoding UTF-8
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Using SaveService properties version 2.9
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: All converter versions present and correct
2016/09/01 10:25:50 INFO - jmeter.save.SaveService: Loading file: /home/yassar/Downloads/benchmark.jmx
2016/09/01 10:25:50 WARN - jmeter.gui.action.Load: Unexpected error java.lang.IllegalArgumentException: Problem loading XML from:'/home/yassar/Downloads/benchmark.jmx', cannot determine class for element: com.thoughtworks.xstream.mapper.CannotResolveClassException: is-copy-enabled is-u2f-enabled
at org.apache.jmeter.save.SaveService.readTree(SaveService.java:533)
at org.apache.jmeter.save.SaveService.loadTree(SaveService.java:503)
at org.apache.jmeter.gui.action.Load.loadProjectFile(Load.java:130)
at org.apache.jmeter.gui.action.Load.loadProjectFile(Load.java:102)
at org.apache.jmeter.gui.action.Load.doAction(Load.java:89)
at org.apache.jmeter.gui.action.ActionRouter.performAction(ActionRouter.java:80)
at org.apache.jmeter.gui.action.ActionRouter.access$000(ActionRouter.java:40)
at org.apache.jmeter.gui.action.ActionRouter$1.run(ActionRouter.java:62)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:312)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:745)
at java.awt.EventQueue.access$300(EventQueue.java:103)
at java.awt.EventQueue$3.run(EventQueue.java:706)
at java.awt.EventQueue$3.run(EventQueue.java:704)
at java.security.AccessController.doPrivileged(Native Method)
at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:77)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:715)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:161)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:150)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:146)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:138)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:91)
Caused by: com.thoughtworks.xstream.mapper.CannotResolveClassException: is-copy-enabled is-u2f-enabled
at com.thoughtworks.xstream.mapper.DefaultMapper.realClass(DefaultMapper.java:79)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.DynamicProxyMapper.realClass(DynamicProxyMapper.java:55)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.PackageAliasingMapper.realClass(PackageAliasingMapper.java:88)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.ClassAliasingMapper.realClass(ClassAliasingMapper.java:79)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.ArrayMapper.realClass(ArrayMapper.java:74)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.SecurityMapper.realClass(SecurityMapper.java:71)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at org.apache.jmeter.save.SaveService$XStreamWrapper$1.realClass(SaveService.java:98)
at com.thoughtworks.xstream.mapper.MapperWrapper.realClass(MapperWrapper.java:30)
at com.thoughtworks.xstream.mapper.CachingMapper.realClass(CachingMapper.java:47)
at com.thoughtworks.xstream.core.util.HierarchicalStreams.readClassType(HierarchicalStreams.java:31)
at com.thoughtworks.xstream.core.TreeUnmarshaller.start(TreeUnmarshaller.java:133)
at com.thoughtworks.xstream.core.AbstractTreeMarshallingStrategy.unmarshal(AbstractTreeMarshallingStrategy.java:32)
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1206)
at com.thoughtworks.xstream.XStream.unmarshal(XStream.java:1190)
at com.thoughtworks.xstream.XStream.fromXML(XStream.java:1061)
at org.apache.jmeter.save.SaveService.readTree(SaveService.java:524)
... 21 more
2016/09/01 10:26:12 INFO - jmeter.gui.action.Load: Loading file: /home/yassar/Projects/m205/setup/performance-toolkit/benchmark.jmx
2016/09/01 10:26:12 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Projects/m205/setup/performance-toolkit'
2016/09/01 10:26:12 INFO - jmeter.save.SaveService: Loading file: /home/yassar/Projects/m205/setup/performance-toolkit/benchmark.jmx
2016/09/01 10:26:12 INFO - jmeter.protocol.http.control.CookieManager: Settings: Delete null: true Check: true Allow variable: true Save: false Prefix: COOKIE_
2016/09/01 10:26:13 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Projects/m205/setup/performance-toolkit'
2016/09/01 10:27:08 INFO - jmeter.services.FileServer: Set new base='/home/yassar/Projects/m205/setup/performance-toolkit'
2016/09/01 10:27:11 INFO - jmeter.engine.StandardJMeterEngine: Running the test!
2016/09/01 10:27:11 INFO - jmeter.samplers.SampleEvent: List of sample_variables: []
2016/09/01 10:27:11 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(true,*local*)
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting setUp thread groups
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting setUp ThreadGroup: 1 : setUp Thread Group
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting 1 threads for group setUp Thread Group.
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Test will stop on error
2016/09/01 10:27:12 INFO - jmeter.threads.ThreadGroup: Starting thread group number 1 threads 1 ramp-up 1 perThread 1000.0 delayedStart=false
2016/09/01 10:27:12 INFO - jmeter.threads.ThreadGroup: Started thread group number 1
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Waiting for all setup thread groups to exit
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Thread started: setUp Thread Group 1-1
2016/09/01 10:27:12 ERROR - jmeter.util.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``Boolean stopTestOnError (String error) { log.error(error); System.out.pr . . . '' : Method Invocation path.substring
2016/09/01 10:27:12 WARN - jmeter.protocol.java.sampler.BeanShellSampler: org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: ``Boolean stopTestOnError (String error) { log.error(error); System.out.pr . . . '' : Method Invocation path.substring
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Stop Test detected by thread: setUp Thread Group 1-1
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Thread finished: setUp Thread Group 1-1
2016/09/01 10:27:12 INFO - jmeter.threads.JMeterThread: Stopping: setUp Thread Group 1-1
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: All Setup Threads have ended
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: No enabled thread groups found
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Starting tearDown thread groups
2016/09/01 10:27:12 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2016/09/01 10:27:12 INFO - jmeter.gui.util.JMeterMenuBar: setRunning(false,*local*)
It seems you are missing some plug-ins that are used by this JMX. You need to copy these plug-ins into JMeter's /lib/ext folder and it should work.
Check which plug-ins are used by benchmark.jmx.
Success, finally!
I have been able to launch it. The issue was mostly with the URLs. I don't know why, but the 'host' and 'admin_path' variables work in funny ways with Magento. I found a workaround by manually going through the HTTP requests and adding the required variables. Now it is running.
I think you're also missing the Redis Data Set plugin.
Please take a look at: https://jmeter-plugins.org/wiki/RedisDataSet/
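For completeness, installing a plugin is just a jar drop plus a restart; the jar name and paths below are assumptions for illustration:

# from the JMeter install seen in the log above
cd ~/Downloads/jmeter/apache-jmeter-3.0
cp ~/Downloads/jmeter-plugins-redis-0.2.jar lib/ext/
./bin/jmeter   # restart so the new test beans are registered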
I'm running the dedicated CoreNLP server on AWS and trying to make a request from Ruby. The server seems to be receiving the request correctly, but it ignores the annotators list in the properties and always defaults to all annotators. My Ruby code for the request looks like this:
uri = URI.parse(URI.encode('http://ec2-************.compute.amazonaws.com//?properties={"tokenize.whitespace": "true", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}'))
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new("/v1.1/auth")
request.add_field('Content-Type', 'application/json')
request.body = text
response = http.request(request)
json = JSON.parse(response.body)
In the nohup.out logs on the server I see the following:
[/38.122.182.107:53507] API call w/annotators tokenize,ssplit,pos,depparse,lemma,ner,mention,coref,natlog,openie
....
INPUT TEXT BLOCK HERE
....
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokenizerAnnotator - TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [2.0 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
Loading depparse model file: edu/stanford/nlp/models/parser/nndep/english_UD.gz ...
PreComputed 100000, Elapsed Time: 2.259 (s)
Initializing dependency parser done [5.1 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [2.6 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [1.2 sec].
Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [7.2 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/defs.sutime.txt
Feb 22, 2016 11:37:20 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 83 rules
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.sutime.txt
Feb 22, 2016 11:37:20 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 267 rules
Reading TokensRegex rules from edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
Feb 22, 2016 11:37:20 PM edu.stanford.nlp.ling.tokensregex.CoreMapExpressionExtractor appendRules
INFO: Read 25 rules
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator mention
Using mention detector type: dependency
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator coref
etc etc.
When I run test queries using wget on the command line it seems to work fine.
wget --post-data 'the quick brown fox jumped over the lazy dog' 'ec2-*******.compute.amazonaws.com/?properties={"tokenize.whitespace": "true", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}' -O -
Any help as to why this is happening would be appreciated, thanks!
It turns out the request was being constructed incorrectly: the path (including the properties query string) must be passed as the argument to Post.new. Corrected code below in case it helps anyone:
host = "http://ec2-***********.us-west-2.compute.amazonaws.com"
path = '/?properties={"tokenize.whitespace": "true", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}'
encoded_path = URI.encode(path)
uri = URI.parse(URI.encode(host))
http = Net::HTTP.new(uri.host, uri.port)
http.set_debug_output($stdout)
# request = Net::HTTP::Post.new("/v1.1/auth")
request = Net::HTTP::Post.new(encoded_path)
request.add_field('Content-Type', 'application/json')
request.body = text
response = http.request(request)
json = JSON.parse(response.body)
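A small variant: URI.encode is deprecated (and removed on recent Rubies), so the properties string can be percent-encoded with URI.encode_www_form instead. A sketch, with the host and the input text as placeholders:

require 'net/http'
require 'uri'
require 'json'

props = '{"tokenize.whitespace": "true", "annotators": "tokenize,ssplit,pos", "outputFormat": "json"}'
uri   = URI.parse('http://localhost:9000')                 # placeholder host
path  = '/?' + URI.encode_www_form('properties' => props)  # percent-encodes the JSON value

request = Net::HTTP::Post.new(path)
request.body = text                                        # the text to annotate
response = Net::HTTP.new(uri.host, uri.port).request(request)
json = JSON.parse(response.body)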