Concatenating responses in JMeter

I searched a lot on Google and did not find a solution; if I missed one, I am sorry.
In JMeter, I am running the same request in a loop, 'n' number of times. For every request, I need to extract the JSON response and pass it to the next request. I am able to extract the response of the last request, save it to a variable, and pass it to the next request. I used the JSON Path Extractor, and I also figured out how to extract the response using BeanShell and JSR223 Pre- and PostProcessors.
The thing is, I need to extract the data of all previous responses and build the request body for the next request from it, not just from the last response.
I do not want to append the extracted responses to a file and then pass the data to the request from the file.
Request1 (Requestbody: []). Response1: Product A
Request2 (Requestbody: [Product A]). Response: Product B
Request3 (Requestbody: [Product A, Product B]). Response: Product C
Request4 (Requestbody: [Product A, Product B, Product C]). Response: Product D
...
Requestn (Requestbody: [Product A, Product B, Product C, Product D, ...]). Response: no more products
Any thoughts, please?
Thanks, Jack

If you need to build a JSON Array from previous responses, I would recommend considering the JSR223 PostProcessor (assumes Groovy language) and the JsonBuilder class for this.
Groovy has built-in JSON support, therefore you will have full flexibility in reading and writing arbitrary JSON structures.
Example:
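A minimal sketch of such a PostProcessor. It assumes each response carries a hypothetical product field ({"product": "Product A"}); adjust the JsonSlurper path and the variable names (products, requestBody) to your actual payload:

import groovy.json.JsonBuilder
import groovy.json.JsonSlurper

// Thread-scoped list that survives loop iterations
List products = vars.getObject('products') ?: []

// Hypothetical response shape: {"product": "Product A"}
def json = new JsonSlurper().parseText(prev.getResponseDataAsString())
products << json.product
vars.putObject('products', products)

// Expose the accumulated JSON Array as ${requestBody} for the next sampler
vars.put('requestBody', new JsonBuilder(products).toString())

The next request then references ${requestBody} in its Body Data.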
References:
Parsing and producing JSON
Creating JSON using Groovy
Groovy Is the New Black

Hmm.. what a requirement. I wonder what the use case would be. :) I achieved this with a LinkedList, passing the object back and forth between the pre- and post-processors using getObject and putObject. The code below contains plenty of debug statements; please discard them. You can also optimize it further.
HTTP Sampler
with "BODY DATA" tab having just ${request}
--> BeanShell PreProcessor
log.info("Entering preprocessor..");
// Fetch the list accumulated so far (null on the first iteration)
LinkedList itemsArrayLocal = (LinkedList) vars.getObject("itemsArrayLocal");
if (itemsArrayLocal == null) {
    try {
        itemsArrayLocal = new LinkedList();
        vars.putObject("itemsArrayLocal", itemsArrayLocal);
        // First request goes out with an empty body
        vars.put("request", "(Requestbody:[])");
    }
    catch (Exception e) {
        e.printStackTrace();
        log.info(e);
    }
}
else {
    // Join all previously collected items into a comma-separated list
    String s = "";
    for (int i = 0; i < itemsArrayLocal.size(); i++) {
        if (i >= 1) {
            s = s + ",";
        }
        s = s + itemsArrayLocal.get(i).toString();
        log.info("i=" + String.valueOf(i) + " " + s);
    }
    log.info("s=" + s);
    vars.put("request", "(Requestbody:[" + s + "])");
}
--> BeanShell PostProcessor
log.info("Entering POST PROCESSOR..");
LinkedList itemsArrayLocal = (LinkedList) vars.getObject("itemsArrayLocal");
// Brittle demo extraction: grabs characters 2-10 of the response body;
// replace this with a real JSON extraction for anything beyond a demo
String o = prev.getResponseDataAsString().substring(2, 10);
try {
    log.info("Added..");
    itemsArrayLocal.add(o);
    log.info("Size=" + String.valueOf(itemsArrayLocal.size()));
}
catch (Exception e) {
    e.printStackTrace();
    log.info(e);
}

Related

Simultaneous HTTP POST requests in Spring Boot

Hi,
I have a list of 500k items, and I have to make requests to a server with hash parameters.
The server accepts a JSON Array of 200 objects, so I can send 200 items at a time.
But I still need to split the list each time and send each part to the server.
I have a method for this which makes the HTTP POST request, and I want to use Spring Boot facilities (if available) to call the method from different threads, get the responses back, and merge them into one.
I did it using Java's CompletableFuture class without any Spring Boot annotations, but you could use @Async for your method too. Sample code:
// Lists.partition comes from Guava and splits recordsList into chunks of 200
var futures = new ArrayList<CompletableFuture<List<Composite>>>();
var executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
for (List<CompositeRecord> records : Lists.partition(recordsList, 200)) {
    var future = CompletableFuture.supplyAsync(() -> /* call your method here */, executor);
    futures.add(future);
    Thread.sleep(2000); // crude throttle between submissions
}
// Wait for all futures; exceptionally(ex -> null) avoids throwing in the join() call
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).exceptionally(ex -> null).join();
// Partition the futures: true = completed exceptionally, false = completed normally
var futureMap = futures.stream().collect(Collectors.partitioningBy(CompletableFuture::isCompletedExceptionally));
var compositeWithErrorList = new ArrayList<Composite>();
futureMap.get(false).forEach(l -> {
    try {
        compositeWithErrorList.addAll(l.get());
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
});
After the code has executed you will have a map of done and undone futures.
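For completeness, a minimal sketch of the @Async route mentioned above (service and method names are hypothetical; requires @EnableAsync on a configuration class):

import org.springframework.scheduling.annotation.Async
import org.springframework.stereotype.Service
import java.util.concurrent.CompletableFuture

@Service
class CompositeClient {
    // Spring runs this on its task executor and hands the caller a future.
    // postChunk() stands in for your existing HTTP POST method (hypothetical).
    @Async
    CompletableFuture<List<Composite>> send(List<CompositeRecord> chunk) {
        return CompletableFuture.completedFuture(postChunk(chunk))
    }
}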

How to update a tag value for an exported metric in Micrometer?

I'm using Micrometer to export a summary of third-party API consumption.
Now I want to count failed requests precisely and export each failed request's id.
The method below is invoked for each restTemplate exchange call:
private DistributionSummary incFailedCounter(String requestId) {
    this.registry = beanProvider.getRegistry();
    // summarys and myCounter are fields defined elsewhere in the class
    DistributionSummary summary = summarys.get(myCounter);
    if (summary == null) {
        Builder tags = DistributionSummary.builder("failed.test").tags("req_id", requestId, "count", "1");
        summary = tags.register(registry);
        summarys.put(myCounter, summary);
    } else {
        // Append the new id to the existing tag value and register a new meter
        String tag = summary.getId().getTag("req_id");
        String[] split = tag.split(",");
        summary.close();
        summarys.put(myCounter,
                DistributionSummary.builder("failed.test")
                        .tags("req_id", tag + ", " + requestId, "count", String.valueOf(split.length + 1))
                        .register(registry));
    }
    return summary;
}
This code inserts a new line into the metric for each request:
failed_test_count{count="1",instance="localhost:8080",job="monitor-app",req_id="1157408321"}
failed_test_count{count="2",instance="localhost:8080",job="monitor-app",req_id="1157408321, 1157408321"}
failed_test_count{count="3",instance="localhost:8080",job="monitor-app",req_id="1157408321, 1157408321, 1157408321"}
The problem is that the metric grows with the number of requests.
Is there a way to remove or replace the same tag and export only one dynamic metric with updated req_ids?
Tags cannot be removed or updated because they are immutable. One way is to unregister the current meter: use the method below to remove the registered meter before applying the new one.
registry.remove(summary.getId());
This produces a one-line metric:
failed_test_count{count="4",instance="localhost:8080",job="monitor-app",req_id="1157408321, 58500184, 58500184, 58500184"}

Jenkins Groovy Active Choice parameter: how to pass first dropdown values to a second dropdown box

How do I pass the selected value from the first dropdown to the second dropdown?
This is the link I am following: [link][1]
// Import the JsonSlurper class to parse Dockerhub API response
import groovy.json.JsonSlurper
// Set the URL we want to read from, it is MySQL from official Library for this example, limited to 20 results only.
docker_image_tags_url = "https://hub.docker.com/v2/repositories/library/mysql/tags/?page_size=20"
try {
    // Set requirements for the HTTP GET request; you can add Content-Type headers and so on...
    def http_client = new URL(docker_image_tags_url).openConnection() as HttpURLConnection
    http_client.setRequestMethod('GET')
    // Run the HTTP request
    http_client.connect()
    // Prepare a variable where we save the parsed JSON as a HashMap; good for our use case, as we just need the 'name' of each tag
    def dockerhub_response = [:]
    // Check if we got HTTP 200, otherwise exit
    if (http_client.responseCode == 200) {
        dockerhub_response = new JsonSlurper().parseText(http_client.inputStream.getText('UTF-8'))
    } else {
        println("HTTP response error")
        System.exit(0)
    }
    // Prepare a List to collect the tag names into
    def image_tag_list = []
    // Iterate the HashMap of all tags and grab only their "names" into our List
    dockerhub_response.results.each { tag_metadata ->
        image_tag_list.add(tag_metadata.name)
    }
    // The returned value MUST be a Groovy List or a related type (inherited from List);
    // this is necessary for the Active Choice plugin to display results in a combo-box
    return image_tag_list.sort()
} catch (Exception e) {
    // Handle exceptions like timeout, connection errors, etc.
    println(e)
}
I have another Active Choice box whose Groovy script expects a value from the above dropdown box. I have tried:
env = params.mysql_image_version
// def env = "dev" // this works, but hard-codes the value instead of getting it dynamically from the dropdown above
env_list.each { env ->
    stack_list.add(github_response.get("dev"))
}
print stack_list
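What typically makes this work is configuring the second box as an Active Choices Reactive Parameter and listing the first parameter's name under "Referenced parameters"; the plugin then binds that value as a plain variable inside the script. A minimal sketch (the version-to-stack mapping is hypothetical):

// Second parameter: Active Choices Reactive Parameter with
// "Referenced parameters" = mysql_image_version.
// The value selected in the first dropdown is bound as a script variable.
def stacks_by_version = [
    '8.0'   : ['stack-a', 'stack-b'],  // hypothetical mapping
    'latest': ['stack-c']
]
return stacks_by_version.get(mysql_image_version, ['none'])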

How to validate a JSON response in BeanShell and perform some action when the response is not as expected in JMeter?

I want to extract the JSON response in JMeter, and if the response is not as expected, I need to print it to a CSV file.
I tried using the contains method to check whether the response contains an expected keyword, but it doesn't seem to work. Is there any other way I can do this?
Sample code:
log.info(ctx.getPreviousResult().getResponseDataAsString());
r = ctx.getPreviousResult().getResponseCode();
d = ctx.getPreviousResult().getResponseDataAsString();
// p is a PrintStream opened elsewhere and pointed at the CSV file
if (!d.contains("valid")) {
    p.println(r + "," + vars.get("email") + ",");
}
This is my JSON response:
{
    "isBlueLinkServicePinValid": "valid"
}
I'm checking for the keyword "valid" with
if (!d.contains("valid"))
but it doesn't seem to work.
TIA
Since JMeter 3.1 it is not recommended to use Beanshell for scripting; you should go for JSR223 Test Elements and the Groovy language instead. The main reason is that Groovy has much better performance than Beanshell.
Groovy has built-in JSON support, therefore you can extract the isBlueLinkServicePinValid attribute value in an easy way:
String response = prev.getResponseDataAsString();
log.info("Response: " + response)
String valid = new groovy.json.JsonSlurper().parseText(response).isBlueLinkServicePinValid
log.info("Valid: " + valid);
if (valid.equals("valid")) {
    log.info("Do something");
}
else {
    log.info("Do something else");
}
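Since the end goal is a CSV record for failures, a one-line sketch for the else branch (file path and columns are assumptions):

// Append response code and email to a CSV on unexpected responses (hypothetical path)
new File('/tmp/failures.csv') << "${prev.getResponseCode()},${vars.get('email')}\n"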

How to extract and manipulate data within a NiFi processor

I'm trying to write a custom NiFi processor which will take in the contents of the incoming flow file, perform some math operations on it, then write the results into an outgoing flow file. Is there a way to dump the contents of the incoming flow file into a string or something? I've been searching for a while now and it doesn't seem that simple. If anyone could point me toward a good tutorial that deals with doing something like that, it would be greatly appreciated.
The Apache NiFi Developer Guide documents the process of creating a custom processor very well. In your specific case, I would start with the Component Lifecycle section and the Enrich/Modify Content pattern. Any other processor which does similar work (like ReplaceText or Base64EncodeContent) would be good examples to learn from; all of the source code is available on GitHub.
Essentially you need to implement the #onTrigger() method in your processor class, read the flowfile content and parse it into your expected format, perform your operations, and then re-populate the resulting flowfile content. Your source code will look something like this:
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    FlowFile flowFile = session.get();
    if (flowFile == null) {
        return;
    }
    final ComponentLog logger = getLogger();
    AtomicBoolean error = new AtomicBoolean();
    AtomicReference<String> result = new AtomicReference<>(null);
    // This uses a lambda function in place of a callback for InputStreamCallback#process()
    session.read(flowFile, in -> {
        long start = System.nanoTime();
        // Read the flowfile content into a String
        // TODO: May need to buffer this if the content is large
        try {
            final String contents = IOUtils.toString(in, StandardCharsets.UTF_8);
            result.set(new MyMathOperationService().performSomeOperation(contents));
            long stop = System.nanoTime();
            if (getLogger().isDebugEnabled()) {
                final long durationNanos = stop - start;
                DecimalFormat df = new DecimalFormat("#.###");
                getLogger().debug("Performed operation in " + durationNanos + " nanoseconds (" + df.format(durationNanos / 1_000_000_000.0) + " seconds).");
            }
        } catch (Exception e) {
            error.set(true);
            getLogger().error(e.getMessage() + " Routing to failure.", e);
        }
    });
    if (error.get()) {
        session.transfer(flowFile, REL_FAILURE);
    } else {
        // Again, a lambda takes the place of the OutputStreamCallback#process()
        FlowFile updatedFlowFile = session.write(flowFile, (in, out) -> {
            final String resultString = result.get();
            final byte[] resultBytes = resultString.getBytes(StandardCharsets.UTF_8);
            // TODO: This can use a while loop for performance
            out.write(resultBytes, 0, resultBytes.length);
            out.flush();
        });
        session.transfer(updatedFlowFile, REL_SUCCESS);
    }
}
Daggett is right that the ExecuteScript processor is a good place to start, because it shortens the development lifecycle (no building NARs, deploying, and restarting NiFi to use it); once you have the correct behavior, you can easily copy/paste it into the generated skeleton and deploy it once.
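For reference, a minimal ExecuteScript (Groovy) sketch of the same read-transform-write pattern; transform() is a hypothetical stand-in for your math operation:

import org.apache.nifi.processor.io.StreamCallback
import java.nio.charset.StandardCharsets

def flowFile = session.get()
if (!flowFile) return

flowFile = session.write(flowFile, { inputStream, outputStream ->
    // Read the whole content as a String, transform it, write the result back
    String contents = inputStream.getText(StandardCharsets.UTF_8.name())
    String result = transform(contents) // hypothetical math operation
    outputStream.write(result.getBytes(StandardCharsets.UTF_8))
} as StreamCallback)

session.transfer(flowFile, REL_SUCCESS)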
