I'm using the following code:
ArrayList<String> tabs2 = new ArrayList<String> (WDS.browser.getWindowHandles());
WDS.browser.switchTo().window(tabs2.get(1));
It works in plain WebDriver code with the driver variable, but produces the following error in the WebDriver Sampler:
javax.script.ScriptException: In file: inline evaluation of: ``import org.openqa.selenium.*; import org.openqa.selenium.support.ui.*; import ja . . . '' Encountered "=" at line 24, column 25.
in inline evaluation of: ``import org.openqa.selenium.*; import org.openqa.selenium.support.ui.*; import ja . . . '' at line number 24
at bsh.engine.BshScriptEngine.evalSource(BshScriptEngine.java:81)
at bsh.engine.BshScriptEngine.eval(BshScriptEngine.java:46)
at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:264)
at com.googlecode.jmeter.plugins.webdriver.sampler.WebDriverSampler.sample(WebDriverSampler.java:76)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:475)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:418)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:249)
at java.lang.Thread.run(Thread.java:745)
Please help.
You are using a very outdated scripting engine that does not support the diamond operator, which was introduced in Java 7. Beanshell (bsh) is stuck at the Java 5 language level, so you won't be able to use modern language features there.
You have the following options:
Remove the diamond operator and use the raw type:
ArrayList tabs2 = new ArrayList(WDS.browser.getWindowHandles());
Switch to the Groovy scripting engine. Groovy is far more Java-compliant: valid Java code is valid Groovy code in 99% of cases, so problems like this will go away. Moreover, Groovy performs much better; for the WebDriver Sampler that is not so important, but it is vital for other scripting tasks. See Apache Groovy - Why and How You Should Use It for more details; a sketch that works under either engine is shown below.
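For reference, here is a minimal sketch of the same window-switching logic with the diamond operator removed, so it should be accepted by both the Beanshell and Groovy engines (it assumes a second window or tab has actually been opened before the switch):

import java.util.ArrayList;
import java.util.List;

// Raw type instead of ArrayList<String>, so the Java 5-level Beanshell engine accepts it
List tabs2 = new ArrayList(WDS.browser.getWindowHandles());

// getWindowHandles() returns a Set, so index 1 is simply "some other window", not a guaranteed order;
// the explicit String cast is needed because a raw List returns Object
if (tabs2.size() > 1) {
    WDS.browser.switchTo().window((String) tabs2.get(1));
}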
How can a Chez Scheme program or library find out which operating system and machine architecture it's running on (from within Scheme code)?
From the Chez Scheme Version 9 User's Guide (each procedure is listed with the library it is imported from):
Section 6.10. Bytevectors
(native-endianness) import (rnrs) or (rnrs bytevectors)
Section 12.4. Compilation, Evaluation, and Loading
(machine-type) import (chezscheme)
Section 12.15. Environmental Queries and Settings
(scheme-version) import (chezscheme)
(scheme-version-number) import (chezscheme)
(petite?) import (chezscheme)
(threaded?) import (chezscheme)
(interactive?) import (chezscheme)
Unfortunately (machine-type) is a cryptic string idiomatic to Chez (instead of a standard symbol like x86-64) and may change from version to version. The other procedures work in the obvious manner.
I found these in the r7rs-benchmarks repo.
Parsing the machine type
The machine type string is constructed as follows:
Start with an empty string.
For a build that supports threads, append the letter t.
Append the machine architecture.
Append the operating system.
For example, ta6le denotes a threaded build for the a6 (amd64) architecture on le (Linux). The current architecture and operating system codes are:
(define arch-pairs
'(("a6" . amd64)
("arm32" . arm32)
("i3" . i386)
("ppc32" . ppc32)))
(define os-pairs
'(("fb" . freebsd)
("le" . linux)
("nb" . netbsd)
("nt" . windows)
("ob" . openbsd)
("osx" . macos)
("qnx" . qnx)
("s2" . solaris)))
To find all the machine types, look for all the makefiles named Mf-* in the c directory of the Chez Scheme source repo.
While reading the docs of System.Process and trying to use callCommand I discovered that it is not available:
test.hs:1:24: Module `System.Process' does not export `callCommand'
Why?
callCommand was added to the process library in version 1.2.0.0. I suspect you are using an earlier version.
All -
Running Stanford CoreNLP 3.4.1, plus the Spanish models. I have a directory of approximately 100 Spanish raw text documents, UTF-8 encoded. For each one, I execute the following command line:
java -cp stanford-corenlp-3.4.1.jar:stanford-spanish-corenlp-2014-08-26-models.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-0.23.jar -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props <propsfile> -file <txtfile>
The props file looks like this:
annotators = tokenize, ssplit, pos
tokenize.language = es
pos.model = edu/stanford/nlp/models/pos-tagger/spanish/spanish-distsim.tagger
For almost every file, I get the following error:
Exception in thread "main" java.lang.RuntimeException: Error annotating :
at edu.stanford.nlp.pipeline.StanfordCoreNLP$15.run(StanfordCoreNLP.java:1287)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.processFiles(StanfordCoreNLP.java:1347)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.run(StanfordCoreNLP.java:1389)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.main(StanfordCoreNLP.java:1459)
Caused by: java.lang.NullPointerException
at edu.stanford.nlp.tagger.maxent.ExtractorSpanishStrippedVerb.extract(ExtractorFramesRare.java:1626)
at edu.stanford.nlp.tagger.maxent.Extractor.extract(Extractor.java:153)
at edu.stanford.nlp.tagger.maxent.TestSentence.getExactHistories(TestSentence.java:465)
at edu.stanford.nlp.tagger.maxent.TestSentence.getHistories(TestSentence.java:440)
at edu.stanford.nlp.tagger.maxent.TestSentence.getHistories(TestSentence.java:428)
at edu.stanford.nlp.tagger.maxent.TestSentence.getExactScores(TestSentence.java:377)
at edu.stanford.nlp.tagger.maxent.TestSentence.getScores(TestSentence.java:372)
at edu.stanford.nlp.tagger.maxent.TestSentence.scoresOf(TestSentence.java:713)
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:91)
at edu.stanford.nlp.sequences.ExactBestSequenceFinder.bestSequence(ExactBestSequenceFinder.java:31)
at edu.stanford.nlp.tagger.maxent.TestSentence.runTagInference(TestSentence.java:322)
at edu.stanford.nlp.tagger.maxent.TestSentence.testTagInference(TestSentence.java:312)
at edu.stanford.nlp.tagger.maxent.TestSentence.tagSentence(TestSentence.java:135)
at edu.stanford.nlp.tagger.maxent.MaxentTagger.tagSentence(MaxentTagger.java:998)
at edu.stanford.nlp.pipeline.POSTaggerAnnotator.doOneSentence(POSTaggerAnnotator.java:147)
at edu.stanford.nlp.pipeline.POSTaggerAnnotator.annotate(POSTaggerAnnotator.java:110)
at edu.stanford.nlp.pipeline.AnnotationPipeline.annotate(AnnotationPipeline.java:67)
at edu.stanford.nlp.pipeline.StanfordCoreNLP.annotate(StanfordCoreNLP.java:847)
at edu.stanford.nlp.pipeline.StanfordCoreNLP$15.run(StanfordCoreNLP.java:1275)
Any ideas? I haven't even begun to track this down. I'm certain the problem is in POS; tokenize and ssplit run just fine.
P.S. Please don't say "Upgrade to 3.5.0"; I don't currently have Java 8 installed and don't want to install it yet.
Thanks in advance.
Yes, it seems like there's a bug in the 3.4.1 Spanish models.
The Spanish 3.5.0 models actually seem to be compatible with Java 7. You can download the models used in 3.5 (stanford-spanish-corenlp-2014-10-23-models.jar) and put that on your classpath instead. This fixed the problem for me running Java 7 locally.
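For reference, this is the command line from the question with only the models jar swapped out (the jar name is the one from the 3.5.0 download mentioned above; everything else, including the props file, is assumed to stay the same):

java -cp stanford-corenlp-3.4.1.jar:stanford-spanish-corenlp-2014-10-23-models.jar:xom.jar:joda-time.jar:jollyday.jar:ejml-0.23.jar -Xmx2g edu.stanford.nlp.pipeline.StanfordCoreNLP -props <propsfile> -file <txtfile>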
I just downloaded and extracted JMeter on my PC.
When I double-click jmeter.bat, this error occurs:
An error occurred: class com.google.common.collect.AbstractMapBasedMultimap overrides final method setMap.(Ljava/util/Map;)V
errorlevel=1
Press any key to continue . . .
Why does JMeter have anything to do with com.google.common.collect.AbstractMapBasedMultimap?
Any idea what is wrong with my environment?
I'm using apache-jmeter-2.11 and Java version "1.7.0_51".
I have also extracted 5 plugin sets: Standard, Extras with Libs, Extras, WebDriver, and Hadoop.
Update:
I sort of found the problem: I didn't extract the plugins in the correct order.
They must be extracted into the JMeter directory in this order: Extras > ExtrasLib > Hadoop > Standard > WebDriver. Then JMeter starts without error.
Your error may be related to the presence of two different versions of the Guava library.
I suppose you are using some third-party JMeter plugin (possibly JMeter-Plugins + the Selenium/WebDriver set).
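A quick way to check (assuming the stock JMeter layout, where third-party jars end up under lib and lib/ext) is to list every Guava jar below the JMeter home and see whether more than one version shows up:

cd apache-jmeter-2.11
dir /s /b guava*.jar

If two different guava-*.jar versions are listed, that duplication is the likely cause.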
You may find this interesting:
http://www.simplehooman.co.uk/2013/03/jmeter-could-not-initialize-class-org-apache-jmeter-gui-util-menufactory/
I am new to Antlr and my setup is as follows: Windows 7, Java JDK 1.7.0_17, AntlrWorks 1.5, Antlr 3.5.
The AntlrWorks Help-About shows the following information:
ANTLRWorks 1.5
ANTLR 3.5
StringTemplate v3 3.2.1
StringTemplate v4 4.0.7-SNAPSHOT
Java 1.7.0_17 (Oracle Corporation)
Chapter 3 of The Definitive ANTLR Reference introduces a sample grammar for expression evaluation (Expr.g), which I downloaded from the hyperlink in the PDF version of the book.
The book recommends using ANTLRWorks, and I am; however, when I Generate Code (Ctrl+Shift+G) in ANTLRWorks, it produces code without a complete "throws" clause.
For example, the following is generated in AntlrWorks:
// $ANTLR start "prog"
// C:\\Users\\Mark\\Documents\\output\\Expr.g:12:1: prog : ( stat )+ ;
public final void prog() throws {
try {
Note the missing code after the throws keyword...
If I generate from the command prompt using this command line:
java -cp antlr-3.5-complete.jar org.antlr.Tool Expr.g
I get this output:
// $ANTLR start "prog"
// Expr.g:12:1: prog : ( stat )+ ;
public final void prog() throws RecognitionException {
try {
My question is this - how do I get AntlrWorks to generate the same code?
This is a known issue in ANTLRWorks 1.5, which has been resolved for the next release:
#5: ANTLRworks fails to generate proper Java Code