JMeter - Construct Parameter Value Based on Dataset

In JMeter, I want to construct the request parameter value from a dataset file based on the PropertyCount.
Dataset
PropertyCount propertyid1 propertyid2 propertyid3
2 13029526 15763743
3 13029526 15763743 12345645
2 13029526 15763743
Request Input Parameter
"values":["13029526","15763743"]
"values":[${outputString}]
PreProcessor Script
With the preprocessor script below I am getting the following output, but I am looking to get the values as in the Request Input Parameter above, i.e. with quotes.
2021-08-29 22:15:04,706 INFO o.a.j.m.J.JSR223 PreProcessor: Required output: 13029526,15763743,
2021-08-29 22:15:04,785 INFO o.a.j.m.J.JSR223 PreProcessor: Required output: 13029526,15763743,
JSR223 PreProcessor
def requiredOutput = new StringBuilder()
1.upto(vars.get('propertycount') as int, {
requiredOutput.append(vars.get('propertyid' + it))
requiredOutput
requiredOutput.append(',')
vars.put("outputString",requiredOutput.toString());
})

You seem to be constructing a JSON Array, therefore it makes more sense to use Groovy's JsonBuilder instead of doing manual string concatenation:
def outputString = []
1.upto(vars.get('PropertyCount') as int, {
outputString.add(vars.get("propertyid$it"))
})
vars.put('outputString', new groovy.json.JsonBuilder(outputString).toPrettyString())
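For the first dataset row above (PropertyCount = 2) the variable would end up holding a JSON array along the lines of ["13029526","15763743"] (toPrettyString() spreads it over several lines; use toString() if you prefer a compact one-liner). Because the generated value already includes the square brackets, reference it in the request body as:
"values":${outputString}
rather than "values":[${outputString}].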
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It

Related

Getting error in accessing JMeter variables (variable_MarchNr) in YAML in Beanshell script

I am trying to bulk-edit one entity called issues. I call the GetIssues API and, via a JSON Extractor, get all issue IDs into the variable "issueIds".
json extractor to extract issueIds
Now I want to pass these IDs to another API, bulk edit issues. If I directly use this array in the next API, I get the below error:
{"details":"Unexpected token $ in JSON at position 19","metadata":{}}
So I used the below code in a Beanshell PostProcessor:
var mylist;
props.put("mylist", new ArrayList());
log.info("Detected " + vars.get("issueIds_matchNr") + " issueIds");
for (int i=1; i<= Integer.parseInt(vars.get("issueIds_matchNr")); i++) {
log.info("IDs # " + i + ": " + vars.get("issueIds_" + i));
props.get("mylist").add('"' + vars.get("issueIds_" + i) + '"' );
}
log.info(props.get("mylist").toString());
var issueIdList;
vars.put("issueIdList", props.get("mylist").toString());
log.info(vars.get("issueIdList"));
If I pass the issueIdList variable in my next API call, this works fine in JMeter.
Sample variable values in the Debug Sampler look like:
issueIdList=["555bcfc2", "33974d2c", "e58db1d6"]
issueIds_1=555bcfc2
issueIds_2=33974d2c
issueIds_3=e58db1d6
issueIds_matchNr=3
The problem I am facing: if I convert my JMX to YAML with jmx2yaml and try to run the file with
bzt issues.yml
then while executing the above Beanshell script, the issueIds_matchNr and issueIds_3 variables are not detected, and I get the below error:
2022-05-29 08:26:10,785 INFO o.a.j.e.J.JSR223PostProcessor: Detected null issueIds
2022-05-29 08:26:10,795 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223PostProcessor
javax.script.ScriptException: Sourced file: eval stream : Method Invocation Integer.parseInt : at Line: 4 : in file: eval stream : Integer .parseInt ( vars .get ( "issueIds_matchNr" ) )
Target exception: java.lang.NumberFormatException: null
in eval stream at line number 4
at bsh.engine.BshScriptEngine.evalSource(BshScriptEngine.java:87) ~[bsh-2.0b6.jar:2.0b6 2016-02-05 05:16:19]
My Yaml script is:
- extract-jsonpath:
    issueIds:
      default: NotFound
      jsonpath: $..id
  follow-redirects: true
  jsr223:
  - compile-cache: true
    execute: after
    language: beanshell
    parameters: issueIds
    script-file: script.bsh
  label: Get Issue Id's
  method: GET
  url: https://${BASE_URL1}/${PROJECT_ID}/issues?limit=5&sortBy=-displayId&filter%5Bvalid%5D=true
You're missing one important bit: setting the Match Nr in your Taurus YAML
The correct definition of the JSON Extractor would be something like:
extract-jsonpath:
  issueIds:
    jsonpath: $..id
    match-no: -1 # this is what you need to add
Also be informed that starting from JMeter 3.1 it's recommended to use Groovy as the scripting language, so consider migrating; it will be as simple as removing the first line from your script.bsh.
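If you do switch to Groovy, a minimal sketch of an equivalent post-processor (an illustration only, assuming the extractor keeps populating issueIds_matchNr and issueIds_1..n; it uses JsonBuilder instead of manual quoting) could look like:
def ids = []
1.upto(vars.get('issueIds_matchNr') as int) { i ->
    // collect each extracted id, e.g. issueIds_1, issueIds_2, ...
    ids.add(vars.get('issueIds_' + i))
}
// produces the same ["...", "..."] style value as the Beanshell version
vars.put('issueIdList', new groovy.json.JsonBuilder(ids).toString())
log.info(vars.get('issueIdList'))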

Error on accessing Output Variable in Beanshell PreProcessor in JMeter

I am unable to print the 'Output Variable' value of the ForEach Controller in a Beanshell Pre/Post-Processor in JMeter.
log.info("inside hash"+ ${current_file} ); //current_file is the Output variable name defined in foreach controller and has the value of current file path.
File file=new File(${current_file});
byte[] content = FileUtils.readFileToByteArray(file);
Whenever I execute the tests, I get this error:
2021-12-15 19:58:25,208 ERROR o.a.j.u.BeanShellInterpreter: Error invoking bsh method: eval In file: inline evaluation of: ``import org.apache.commons.io.FileUtils; import org.apache.jmeter.services.FileSe . . . '' Encountered "( "inside hash" + C :" at line 4, column 9.
Can anyone help me fix this error?
Don't inline JMeter functions or variables in the form of ${current_file} into scripts; use the vars shorthand for the JMeterVariables class instance instead.
Something like:
String current_file = vars.get("current_file");
log.info("inside hash"+ current_file );
File file=new File(current_file);
Don't use Beanshell; since JMeter 3.1 it's recommended to use JSR223 Test Elements and the Groovy language for scripting. There is a chance that your code will just start working after switching to Groovy, or at least you will get more informative errors.
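For completeness, a minimal Groovy sketch of the whole PreProcessor (an illustration only; it reuses commons-io, which ships with JMeter, for reading the file as in your original code):
import org.apache.commons.io.FileUtils

String current_file = vars.get("current_file") // Output variable of the ForEach Controller
log.info("inside hash " + current_file)
File file = new File(current_file)
byte[] content = FileUtils.readFileToByteArray(file)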

How to access JSON from external data source in Terraform?

I am receiving JSON from an http Terraform data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the Ruby function:
#!/usr/bin/env ruby
require 'json'
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access the data inside of the external data source, but I am unable to.
Ultimately what I want to do is create a list of the componentID variables which are located within the external data source.
Some things I have tried
* output.external: key "0" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think through the solution.
Terraform can't decode JSON directly in 0.11.x and earlier (the jsondecode() function only arrives in 0.12), but there is a workaround that operates on nested lists.
Your Ruby script needs to be adjusted so that its output looks like the variable http below; then you should be fine to get what you need.
$ cat main.tf
variable "http" {
type = "list"
default = [{componentID = "k8QEbeuHdDnU", name = "Jenkins"}]
}
output "http" {
value = "${lookup(var.http[0], "componentID")}"
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
http = k8QEbeuHdDnU

Illegal characters at line 1 and col 1 in XML response in JMeter

I am trying to validate the XML response from a REST API using JMeter. I am using an XML Schema Assertion to validate the response against an XSD.
I am getting the following error on the XML Schema Assertion:
Assertion error: true
Assertion failure: false
Assertion failure message: fatal: line=1 col=1 Content is not allowed in prolog.
When I look at the response I have received, I see some illegal characters added at the beginning of the response XML.
I have tried to modify the jmeter.properties file and changed the following values:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
Please help me understand how to remove the illegal characters from the response so that the XSD validation can pass.
These characters indicate a Byte Order Mark (BOM), so you can use BOMInputStream in a JSR223 PostProcessor in order to remove them from the response and replace the response data with the "sanitized" XML.
Add a JSR223 PostProcessor as a child of the HTTP Request sampler where you want to remove this BOM.
Put the following code into the "Script" area:
def is = new ByteArrayInputStream(prev.getResponseData()) // raw response bytes
def bOMInputStream = new org.apache.commons.io.input.BOMInputStream(is) // detects and skips the BOM
def bom = bOMInputStream.getBOM()
def charsetName = bom == null ? 'UTF-8' : bom.getCharsetName() // fall back to UTF-8 if no BOM is present
def reader = new InputStreamReader(new BufferedInputStream(bOMInputStream), charsetName)
prev.setResponseData(org.apache.commons.io.IOUtils.toByteArray(reader, 'UTF-8')) // overwrite the response with the BOM-free XML
That's it, your assertion shouldn't be failing anymore.
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

Is there any difference between hmacSha1Hex and hmacSha1?

I am using org.apache.commons.codec.digest.HmacUtils.hmacSha1Hex("secretkey", "message");
and getting a long string as output.
I tried executing org.apache.commons.codec.digest.HmacUtils.hmacSha1("secretkey", "message"); but am facing an error:
ERROR - jmeter.util.BeanShellInterpreter: Error invoking bsh method: eval Sourced file: inline evaluation of: ``String hmac_Sha1 = org.apache.commons.codec.digest.HmacUtils.hmacSha1("secretkey . . . '' : Typed variable declaration
2016/11/29 17:09:07 WARN - jmeter.modifiers.BeanShellPreProcessor: Problem in BeanShell script org.apache.jorphan.util.JMeterException: Error invoking bsh method: eval Sourced file: inline evaluation of: String hmac_Sha1 = org.apache.commons.codec.
Basically I want to know the length of the output for both functions.
For hmacSha1Hex the output is like: HMAC SHA1 HASH: 0ff4e6a0b47baebe19c392e706fffaa13664a1df
I am expecting output like btuU9CPfMQMswNgxPIMjRkTjfks%3D; the difference is in the length.
The answer you're looking for is in the HmacUtils JavaDoc:
HmacUtils.hmacSha1Hex - is a String
HmacUtils.hmacSha1 - is a byte[]
You can convert the byte array to a string like:
String s = new String (your byte array here);
I would also recommend using the JSR223 Sampler and the Groovy language instead of Beanshell; Groovy is compliant with modern Java features and has better performance.
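For illustration, a small Groovy sketch: the btuU9CPfMQMswNgxPIMjRkTjfks%3D value you expect looks like a Base64-encoded (and then URL-encoded) HMAC rather than a plain new String() conversion, so something like this may be closer to what you're after:
import org.apache.commons.codec.binary.Base64
import org.apache.commons.codec.digest.HmacUtils

byte[] raw = HmacUtils.hmacSha1("secretkey", "message") // 20 raw bytes for SHA-1
String hex = HmacUtils.hmacSha1Hex("secretkey", "message") // 40-character hex string
String b64 = Base64.encodeBase64String(raw) // 28-character Base64 string
String urlEncoded = URLEncoder.encode(b64, "UTF-8") // '=' becomes %3D, as in your expected output
log.info("hex: " + hex + " | base64: " + b64 + " | url-encoded: " + urlEncoded)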
