Using UltimateThreadGroup with the Java API gives NullPointerException - jmeter

I am trying to use UltimateThreadGroup with the JMeter Java API. I create the UltimateThreadGroup object as in the code below [2], add it to the HashTree, and finally hand it over to the JMeter engine to execute.
But it throws the NullPointerException below [1] in the middle of execution. Debugging the code, the issue seems to come from the following JMeterThread constructor:
public JMeterThread(HashTree test, JMeterThreadMonitor monitor, ListenerNotifier note, Boolean isSameUserOnNextIteration)
However, the exception is thrown at different code lines from execution to execution, so it's difficult to figure out what is causing the NullPointerException.
Does anybody have an idea what's going on here? I appreciate your answers.
[1]
2020-11-03 13:08:20 DEBUG TestCompiler:273 - adding controller: kg.apc.jmeter.threads.UltimateThreadGroup#30b2b76f to sampler config
2020-11-03 13:08:22 ERROR JMeterThread:319 - Test failed!
java.lang.NullPointerException
at org.apache.jmeter.threads.AbstractThreadGroup.addTestElement(AbstractThreadGroup.java:122)
at org.apache.jmeter.threads.AbstractThreadGroup.addTestElementOnce(AbstractThreadGroup.java:131)
at org.apache.jmeter.threads.TestCompiler.subtractNode(TestCompiler.java:151)
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:997)
at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994)
at org.apache.jorphan.collections.HashTree.traverse(HashTree.java:976)
at org.apache.jmeter.threads.JMeterThread.initRun(JMeterThread.java:704)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:252)
at java.lang.Thread.run(Thread.java:748)
[2]
UltimateThreadGroup ultimateThreadGroup = new UltimateThreadGroup();
ultimateThreadGroup.setName(threadGroupName);
ultimateThreadGroup.setProperty(AbstractThreadGroup.ON_SAMPLE_ERROR, AbstractThreadGroup.ON_SAMPLE_ERROR_CONTINUE);
PowerTableModel dataModel = new PowerTableModel(UltimateThreadGroupGui.columnIdentifiers, UltimateThreadGroupGui.columnClasses);
dataModel.addRow(new Integer[]{2, 4, 10, 60, 10});
dataModel.addRow(new Integer[]{3, 4, 10, 120, 10});
CollectionProperty prop = JMeterPluginsUtils.tableModelRowsToCollectionProperty(dataModel, UltimateThreadGroup.DATA_PROPERTY);
ultimateThreadGroup.setData(prop);
ultimateThreadGroup.setEnabled(setEnabled);
ultimateThreadGroup.setProperty(TestElement.TEST_CLASS, UltimateThreadGroup.class.getName());
ultimateThreadGroup.setProperty(TestElement.GUI_CLASS, UltimateThreadGroupGui.class.getName());

First, you need to create a Loop Controller instance:
LoopController loopController = new LoopController();
loopController.setLoops(1);
loopController.setFirst(true);
loopController.setProperty(TestElement.TEST_CLASS, LoopController.class.getName());
loopController.setProperty(TestElement.GUI_CLASS, LoopControlPanel.class.getName());
loopController.initialize();
Second, you need to set the Loop Controller from step 1 as the sampler controller of your Ultimate Thread Group:
ultimateThreadGroup.setSamplerController(loopController);
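Putting it together, here is a minimal sketch (not the full program) of wiring the thread group into a test plan tree and running it; it assumes JMeter properties have already been loaded (e.g. via JMeterUtils.loadJMeterProperties()), as covered in the links below:
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jorphan.collections.HashTree;

// Build the tree: Test Plan -> Ultimate Thread Group -> (your samplers)
HashTree testPlanTree = new HashTree();
TestPlan testPlan = new TestPlan("Ultimate Thread Group Test Plan");
testPlanTree.add(testPlan);
HashTree threadGroupHashTree = testPlanTree.add(testPlan, ultimateThreadGroup);
// add your HTTP samplers etc. to threadGroupHashTree here

StandardJMeterEngine engine = new StandardJMeterEngine();
engine.configure(testPlanTree);
engine.run();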
More information:
Five Ways To Launch a JMeter Test without Using the JMeter GUI
jmeter-from-code
Going forward, just compare the .jmx you generate programmatically with the one created by the JMeter GUI; you will be able to see which fields, properties, etc. are missing. A sketch of saving the generated tree follows.
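For example, a sketch of saving the programmatically built tree with SaveService so it can be diffed against a GUI-generated file (assuming JMeter's saveservice.properties is reachable, as in a standard JMeter installation):
import java.io.FileOutputStream;
import org.apache.jmeter.save.SaveService;

// Write the generated test plan to a .jmx file for comparison with the GUI version
try (FileOutputStream out = new FileOutputStream("generated-test-plan.jmx")) {
    SaveService.saveTree(testPlanTree, out);
}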

Related

Mulesoft EC2 *describeInstances* with *filter* option

I'm having problems using the EC2 connector with filters for DescribeInstances. Specifically, I'm trying to find all instances that have the tag "classId" set.
I've also tried to find all instances that have the classId tag with a specific value, e.g. "123".
Below are the XMLs of the describe-instances operation for both scenarios.
tag-key:
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
    <ec2:filters>
        <ec2:filter name="tag-key" values="#[['classId']]">
        </ec2:filter>
    </ec2:filters>
</ec2:describe-instances>
tag:classId:
<ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
    <ec2:filters>
        <ec2:filter name="tag:classId">
            <ec2:values>
                <ec2:value value="#['123']" />
            </ec2:values>
        </ec2:filter>
    </ec2:filters>
</ec2:describe-instances>
Each time I receive an error like the following (for tag:classId):
ERROR 2021-03-29 08:32:49,693 [[MuleRuntime].uber.04: [ec2-play].ec2-playFlow.BLOCKING #1092a5bc] [processor: ; event: df5e2df0-908a-11eb-94b5-38f9d38da5c3] org.mule.runtime.core.internal.exception.OnErrorPropagateHandler: 
********************************************************************************
Message        : The filter 'null' is invalid (Service: AmazonEC2; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 33e3bbfb-99ea-4382-932f-647662810c92; Proxy: null)
Element        : ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances)
Element DSL      : <ec2:describe-instances doc:name="Describe instances" doc:id="ca64b7d4-99bb-4045-bbb4-16c0c27b1df5" config-ref="Amazon_EC2_Configuration">
<ec2:filters>
<ec2:filter name="tag:classId">
<ec2:values>
<ec2:value value="#['123']"></ec2:value>
</ec2:values>
</ec2:filter>
</ec2:filters>
</ec2:describe-instances>
Error type      : EC2:INVALID_PARAMETER_VALUE
FlowStack       : at ec2-playFlow(ec2-playFlow/processors/0 # ec2-play:ec2-play.xml:33 (Describe instances))
 (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
NOTE: The code works without a filter, returning all instances. But that isn't what I want or need. The more filtering I can do, the faster the response.
Does anyone have samples of the filter option working? Can you tell me what I'm doing wrong?
Thanks!
This surely is a bug. I tried the same and it was not working for me either. I enabled debug logging and found that the connector is not sending Filter.1.Name=tag:classId as a query parameter in the request. Here is the debug log I found (notice there is no Filter.1.Name=tag:classId in the query string):
DEBUG 2021-04-02 21:55:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Value.1=123"
However, I tried to use the Expression or Bean Reference option and set the expression directly as [{name: 'tag:classId', values: ['123']}], and it worked correctly. Here is the same debug log after this change:
DEBUG 2021-04-02 21:59:17,198 [[MuleRuntime].uber.03: [test-aws-connector].test-aws-connectorFlow.BLOCKING #2dff3afe] [processor: ; event: 91a34891-93d0-11eb-af49-606dc73d31d1] org.apache.http.wire: http-outgoing-0 >> "Action=DescribeInstances&Version=2016-11-15&Filter.1.Name=tag%3AclassId&Filter.1.Value.1=123"
Also, I want to point out a very weird behaviour: this does not work if you format [{name: 'tag:classId', values: ['123']}] across multiple lines in the expression; it will give an error during deployment.

Dumping Beam-scores to an HDF File

I have a translation model (TM), which synthesizes its hypotheses using beam search. For analysis purposes, I would like to study all hypotheses in each beam emitted by the TM's ChoiceLayer. I'm able to fetch the hypotheses for each input sequence from the TM's ChoiceLayer and write them to my file system using an HDFDumpLayer:
'__SEARCH_dump_beam__': {
    'class': 'hdf_dump',
    'from': ['output'],
    'filename': "<my-path>/beams.hdf",
    'is_output_layer': True
}
But besides the hypotheses, I would also like to store the score of each hypothesis. I'm able to fetch the beam scores from the ChoiceLayer using a ChoiceGetBeamScoresLayer, but I was not able to dump the scores using an HDFDumpLayer:
'get_scores': {'class': 'choice_get_beam_scores', 'from': ['output']},
'__SEARCH_dump_scores__': {
    'class': 'hdf_dump',
    'from': ['get_scores'],
    'filename': "<my-path>/beam_scores.hdf",
    'is_output_layer': True
}
Running a config like this makes RETURNN complain about the ChoiceGetBeamScoresLayer output not having a time axis:
Exception creating layer root/'__SEARCH_dump_scores__' of class HDFDumpLayer with opts:
{'filename': '<my-path>/beam_scores.hdf',
'is_output_layer': True,
'name': '__SEARCH_dump_scores__',
'network': <TFNetwork 'root' train=False search>,
'output': Data(name='__SEARCH_dump_scores___output', shape=(), time_dim_axis=None, beam=SearchBeam(name='output/output', beam_size=12, dependency=SearchBeam(name='output/prev:output', beam_size=12)), batch_shape_meta=[B]),
'sources': [<ChoiceGetBeamScoresLayer 'get_scores' out_type=Data(shape=(), time_dim_axis=None, beam=SearchBeam(name='output/output', beam_size=12, dependency=SearchBeam(name='output/prev:output', beam_size=12)), batch_shape_meta=[B])>]}
Unhandled exception <class 'AssertionError'> in thread <_MainThread(MainThread, started 139964674299648)>, proc 31228.
...
File "<...>/returnn/repository/returnn/tf/layers/basic.py", line 6226, in __init__
line: assert self.sources[0].output.have_time_axis()
locals:
self = <local> <HDFDumpLayer '__SEARCH_dump_scores__' out_type=Data(shape=(), time_dim_axis=None, beam=SearchBeam(name='output/output', beam_size=12, dependency=SearchBeam(name='output/prev:output', beam_size=12)), batch_shape_meta=[B])>
self.sources = <local> [<ChoiceGetBeamScoresLayer 'get_scores' out_type=Data(shape=(), time_dim_axis=None, beam=SearchBeam(name='output/output', beam_size=12, dependency=SearchBeam(name='output/prev:output', beam_size=12)), batch_shape_meta=[B])>]
output = <not found>
output.have_time_axis = <not found>
AssertionError
I tried to alter the shape of the score data using ExpandDimsLayer and EvalLayer with several different configurations, but those all led to different errors.
I'm sure I am not the first person trying to dump beam scores. Can anybody tell me how to do that properly?
To answer the question:
The HDFDumpLayer assert you are hitting is simply due to the fact that this is not implemented yet (support for dumping data without a time axis).
You could create a pull request and add this support to HDFDumpLayer.
If you just want to dump this information in any way, not necessarily via HDFDumpLayer, there are a couple of other options:
With task="search", as the search_output_layer, just select a layer which still includes the beam information, i.e. not one after a DecideLayer. That will simply dump all hypotheses including their beam scores.
Use a custom EvalLayer and dump it in whatever way you want.

Getting Content is not allowed in prolog when running in Java

I am facing a Content is not allowed in prolog exception when I run my xxx.jmx file from Java (JMeter 5.0).
I tested the jmx in GUI mode and everything works fine, and in Java I am just following the standard way to call the jmx file and execute it.
The jmx just has some normal stuff: sending an HTTP request and validating the expected and received XML (I am using this snippet to validate):
import org.apache.commons.io.FileUtils

// Read the expected XML from disk and parse both documents for comparison
expect = FileUtils.readFileToString(new File('some_path'))
expectedXML = new XmlSlurper().parseText(expect)
actualXML = new XmlSlurper().parseText(prev.getResponseDataAsString())
if (expectedXML != actualXML) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Mismatch between expected and actual XML \n' + prev.getResponseDataAsString())
}
and the stack trace:
2018/10/24 15:18:03,386 12675 [ERROR ] [Thread Group 1-1] (JSR223Assertion.java:52) – Problem in JSR223 script: Validate resposne
javax.script.ScriptException: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:320)
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:72)
at javax.script.CompiledScript.eval(CompiledScript.java:92)
at org.apache.jmeter.util.JSR223TestElement.processFileOrScript(JSR223TestElement.java:221)
at org.apache.jmeter.assertions.JSR223Assertion.getResult(JSR223Assertion.java:49)
at org.apache.jmeter.threads.JMeterThread.processAssertion(JMeterThread.java:901)
at org.apache.jmeter.threads.JMeterThread.checkAssertions(JMeterThread.java:892)
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:565)
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:486)
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:253)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; Content is not allowed in prolog.
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
at groovy.util.XmlSlurper.parse(XmlSlurper.java:207)
at groovy.util.XmlSlurper.parse(XmlSlurper.java:260)
at groovy.util.XmlSlurper.parseText(XmlSlurper.java:286)
at groovy.util.XmlSlurper$parseText.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
at Script1.run(Script1.groovy:9)
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:317)
... 10 more
UPDATE 1:
The problem with using a Response Assertion is that it cannot ignore spaces or tabs, so if the formatting is not exactly the same, an "Equals" comparison will always fail. Any ideas how to ignore these things using a Response Assertion?
UPDATE 2:
I found out that the issue is not related to the BOM. It's because if I run the jmx from my Java application,
prev.getResponseDataAsString()
always returns
${__FileToString(${inputFilePath},,)}
and not the actual response. This function comes from the body data of the HTTP Request sampler! If I put the actual body there, then I am able to run the jmx. Any idea how to deal with this dynamic body data?
It might be the case that your "expected" XML file contains a BOM which causes your code to fail.
The BOM is basically the first 3 bytes, so you can remove them using code like:
def expect = FileUtils.readFileToString(new File('some_path')).getBytes().flatten()
// the UTF-8 BOM is the first 3 bytes - drop them
1.upto(3) {
    expect.remove(0)
}
XmlParser parser = new XmlParser()
def expectedXML = parser.parseText(new String(expect.toArray(new Byte[0]) as byte[]))
The rest of your code should work fine.
Check out The Groovy Templates Cheat Sheet for JMeter article to learn more Groovy tips and tricks.
Also be aware that in the majority of cases it's easier to use a Response Assertion or, when it comes to XML, an XPath Assertion; these built-in test elements are implemented in Java and will work faster than Groovy in any case.

CQ 5.6.1 getWorkflowSession causes Uncaught Throwable java.lang.NullPointerException

at com.cuso.Mao.doGet(Mao.java:97)
at org.apache.sling.api.servlets.SlingSafeMethodsServlet.mayService(SlingSafeMethodsServlet.java:268)
at org.apache.sling.api.servlets.SlingSafeMethodsServlet.service(SlingSafeMethodsServlet.java:344)
at org.apache.sling.api.servlets.SlingSafeMethodsServlet.service(SlingSafeMethodsServlet.java:375)
at org.apache.sling.engine.impl.request.RequestData.service(RequestData.java:508)
at org.apache.sling.engine.impl.filter.SlingComponentFilterChain.render(SlingComponentFilterChain.java:45)
at org.apache.sling.engine.impl.filter.AbstractSlingFilterChain.doFilter(AbstractSlingFilterChain.java:64)
at com.day.cq.wcm.core.impl.WCMDebugFilter.doFilter(WCMDebugFilter.java:146)
at org.apache.sling.engine.impl.filter.AbstractSlingFilterChain.doFilter(AbstractSlingFilterChain.java:60)
at com.day.cq.wcm.core.impl.WCMComponentFilter.filterRootInclude(WCMComponentFilter.java:356)
at com.day.cq.wcm.core.impl.WCMComponentFilter.doFilter(WCMComponentFilter.java:168)
at org.apache.sling.engine.impl.filter.AbstractSlingFilterChain.doFilter(AbstractSlingFilterChain.java:60)
at com.day.cq.personalization.impl.TargetComponentFilter.doFilter(TargetComponentFilter.java:96)
at org.apache.sling.engine.impl.filter.AbstractSlingFilterChain.doFilter(AbstractSlingFilterChain.java:60)
at org.apache.sling.engine.impl.SlingRequestProcessorImpl.processComponent(SlingRequestProcessorImpl.java:254)
Line 97 is just calling:
WorkflowSession wf = workflowServiceObject.getWorkflowSession(jcrsessionObject);
Should I use a Jackrabbit session instead of the JCR session? Which one is right?
I was facing a similar situation. There is nothing wrong with the model node that you are passing.
Getting a new WorkflowSession from the WorkflowService was giving a NullPointerException, so I had to get the WorkflowService inside my Activator using getServiceReference and assign it to a static variable in a utility class. Still, the following was getting logged:
Cannot read workflow model from node: /etc/workflow/models/deletecontent/jcr:content/model
And there was one more "session already closed" issue. For that I also made the resolverFactory part of my utility class, through which I could get an administrative ResourceResolver in my servlet.
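For reference, a minimal sketch of the approach described above, assuming a plain OSGi BundleActivator; the class and field names are hypothetical:
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

import com.day.cq.workflow.WorkflowService;

public class Activator implements BundleActivator {

    // Hypothetical static holder, as described above
    public static WorkflowService workflowService;

    @Override
    public void start(BundleContext context) throws Exception {
        // Look up the WorkflowService once at bundle start
        ServiceReference ref = context.getServiceReference(WorkflowService.class.getName());
        if (ref != null) {
            workflowService = (WorkflowService) context.getService(ref);
        }
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        workflowService = null;
    }
}

// Later, e.g. in the servlet's doGet(), with a live JCR session:
// WorkflowSession wf = Activator.workflowService.getWorkflowSession(jcrSession);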

How can I use S3InboundFileSynchronizer to synchronize an S3Bucket organized with directories?

I'm trying to use the S3InboundFileSynchronizer to synchronize an S3Bucket to a local directory. The bucket is organised with sub-directories such as:
bucket ->
    2016 ->
        08 ->
            daily-report-20160801.csv
            daily-report-20160802.csv
            etc...
Using this configuration:
@Bean
public S3InboundFileSynchronizer s3InboundFileSynchronizer() {
    S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(amazonS3());
    synchronizer.setDeleteRemoteFiles(true);
    synchronizer.setPreserveTimestamp(true);
    synchronizer.setRemoteDirectory("REDACTED");
    synchronizer.setFilter(new S3RegexPatternFileListFilter(".*\\.csv$"));
    Expression expression = PARSER.parseExpression("#this.substring(#this.lastIndexOf('/')+1)");
    synchronizer.setLocalFilenameGeneratorExpression(expression);
    return synchronizer;
}
I'm able to get as far as connecting to the bucket and listing its contents. When it comes time to read from the bucket the following exception is thrown:
org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is
org.springframework.messaging.MessagingException: Failed to execute on session;
nested exception is
java.lang.IllegalStateException: 'path' must in pattern [BUCKET/KEY].
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:266)
Reviewing the code, it seems it would be impossible to ever synchronize an S3 bucket with sub-directories:
private String[] splitPathToBucketAndKey(String path) {
    Assert.hasText(path, "'path' must not be empty String.");
    String[] bucketKey = path.split("/");
    Assert.state(bucketKey.length == 2, "'path' must in pattern [BUCKET/KEY].");
    Assert.state(bucketKey[0].length() >= 3, "S3 bucket name must be at least 3 characters long.");
    bucketKey[0] = resolveBucket(bucketKey[0]);
    return bucketKey;
}
Is there some configuration I'm missing or is this a bug?
(I'm assuming it's a bug 'till I'm told otherwise so I've submitted a pull request with a proposed fix.)
Yes, it is a bug, and the submitted pull request is a good fix.
The only workaround for now is a custom SessionFactory<S3ObjectSummary> which returns a custom S3Session extension containing the fix from the PR.
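For illustration, a minimal sketch of the direction such a fix takes: split the path on the first slash only, so everything after the bucket name stays in the key (the helper class below is hypothetical; the real change belongs inside S3Session):
import org.springframework.util.Assert;

final class S3PathSplitter {

    // Splits "bucket/2016/08/daily-report-20160801.csv" into
    // ["bucket", "2016/08/daily-report-20160801.csv"]
    static String[] splitPathToBucketAndKey(String path) {
        Assert.hasText(path, "'path' must not be empty String.");
        String[] bucketKey = path.split("/", 2); // limit of 2 keeps sub-directories in the key
        Assert.state(bucketKey.length == 2, "'path' must be in pattern [BUCKET/KEY].");
        Assert.state(bucketKey[0].length() >= 3, "S3 bucket name must be at least 3 characters long.");
        return bucketKey;
    }
}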
