Keep original order of requests in JMeter reports

I use the following code to label requests by their response times:
if (prev.getTime() > 170 && prev.getTime() < 340) {
    prev.setSampleLabel(prev.getSampleLabel() + " > 170")
} else if (prev.getTime() > 340 && prev.getTime() < 4000) {
    prev.setSampleLabel(prev.getSampleLabel() + " > 340")
} else if (prev.getTime() > 4000 && prev.getTime() < 8000) {
    prev.setSampleLabel(prev.getSampleLabel() + " > 4000")
} else if (prev.getTime() > 8000) {
    prev.setSampleLabel(prev.getSampleLabel() + " > 8000")
}
The Aggregate Report and Summary Report contain the request names in a different order than the original one in the Thread Group, so the total number of samples per request is not visible this way.

JMeter's Aggregate Report and Summary Report listeners always store the Sample Results in their execution order, and JMeter executes Samplers from top to bottom (or according to the Logic Controllers).
So, for example, Sampler 4 can appear first in the report simply because it was executed first; however, since JMeter 3.2 it is possible to sort requests by label by clicking the column header.
Another option to get the requests sorted by label is generating the HTML Reporting Dashboard.
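If you go this way, the dashboard can be generated from an existing results file on the command line, for example (the paths here are placeholders):

jmeter -g /path/to/results.jtl -o /path/to/dashboard-output-folder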

Related

ArangoDB: How to run 2 queries in parallel in community edition

Hi, I have written the 2 queries below and would like to run them in parallel rather than sequentially. Is it possible to execute them in parallel in the Community Edition of ArangoDB?
FOR d IN Transaction
    FILTER d._to == "Account/123"
    COLLECT AGGREGATE length = COUNT_UNIQUE(d._id),
                      totamnt = SUM(d.Amount),
                      daysactive = COUNT_UNIQUE(DATE_TRUNC(d.Time, "day"))
    RETURN {
        "Incoming Accounts": length,
        "Days Active": LENGTH(daysactive),
        "Total Amount": totamnt
    }

FOR d IN Transaction
    FILTER d._from == "Account/123"
    COLLECT AGGREGATE length = COUNT_UNIQUE(d._id),
                      totamnt = SUM(d.Amount),
                      daysactive = COUNT_UNIQUE(DATE_TRUNC(d.Time, "day"))
    RETURN {
        "Outgoing Accounts": length,
        "Days Active": LENGTH(daysactive),
        "Total Amount": totamnt
    }
Of course it is possible to run multiple requests in parallel. Just fire 2 curl calls to _api/cursor, or use 2 different arangosh shells.
Or run 2 curl calls in the same shell and use the x-arango-async header on each request to retrieve the results asynchronously, as documented here: https://www.arangodb.com/docs/stable/http/async-results-management.html#async-execution-and-later-result-retrieval
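As a rough sketch of the async approach (the server URL, database name and the shortened query body are placeholders, not part of the original answer):

# submit a query without waiting for it; ArangoDB replies with 202 Accepted
# and puts a job id into the "x-arango-async-id" response header
curl -s -D - -o /dev/null \
  -H "x-arango-async: store" \
  -d '{"query": "FOR d IN Transaction FILTER d._to == \"Account/123\" ... "}' \
  http://localhost:8529/_db/_system/_api/cursor

# repeat for the second query, then fetch each result later by its job id
curl -s -X PUT http://localhost:8529/_db/_system/_api/job/<job-id>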

Can I use variables across all the threads in the thread groups in JMeter?

I'm trying to create a test plan for rate-limiting behavior.
I set a rule that blocks after X requests per minute, and I want to check that I get response code 200 until I reach X requests, and from then on, 429. I created a counter that is shared between all the threads, but it turns into a mess because it's not thread-safe.
This is my Beanshell script under a Once Only Controller:
String props_pre_fix = ${section_id} + "-" + ${START.HMS};
props.remove("props_pre_fix" + ${section_id}, props_pre_fix);
props.put("props_pre_fix" + ${section_id}, props_pre_fix);
props.put(props_pre_fix + "_last_response_code", "200");
props.put(props_pre_fix + "_my_counter", "0");
and this is the beanshell assertion:
String props_pre_fix = props.get("props_pre_fix" + ${section_id});
//log.info("props_pre_fix " + props_pre_fix);
//extract my counter from props
int my_counter = Integer.parseInt(props.get(props_pre_fix + "_my_counter"));
//extract last response code
String last_response_code = props.get(props_pre_fix + "_last_response_code");
log.info("last_response_code " + last_response_code);
//if the previous response was 429 and the current one is 200 we are in a new minute - set counter to zero
if (last_response_code.equals("429") && ResponseCode.equals("200")) {
    log.info("we moved to a new minute - my_counter should be zero");
    my_counter = 0;
}
//increase counter
my_counter++;
log.info("set counter with value: " + my_counter);
//save counter
props.put(props_pre_fix + "_my_counter", my_counter + "");
log.info("counter has set with value: " + my_counter);
if (ResponseCode.equals("200")) {
    props.put(props_pre_fix + "_last_response_code", "200");
    if (my_counter <= ${current_limit}) {
        Failure = false;
    } else {
        Failure = true;
        FailureMessage = "leakage of " + (my_counter - ${current_limit}) + " requests";
    }
} else if (ResponseCode.equals("429")) {
    props.put(props_pre_fix + "_last_response_code", "429");
    if (my_counter > ${current_limit}) {
        Failure = false;
    }
}
I'm using props to share the counter, but I feel that this is not the right way to do it.
Can you suggest how to do it properly?
I don't think it is possible to automatically test this requirement using JMeter Assertions because you don't have access to the current throughput, so I would rather recommend cross-checking the Response Codes per Second and Transactions per Second charts (they can be installed using the JMeter Plugins Manager).
All the 200 and 429 responses can be marked as successful using a Response Assertion that tests the Response Code field against a pattern such as 200|429 with the "Ignore Status" box ticked.
If for some reason you still want to do this programmatically, you might want to take a look at the Summariser class source code, which is used for displaying the current throughput in STDOUT.
Also be aware that since JMeter 3.1 you should be using JSR223 Test Elements and the Groovy language for scripting instead of Beanshell.
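If you still want a shared, thread-safe counter, a minimal JSR223 Assertion sketch in Groovy could look like the one below. The my_counter property name is made up for illustration, current_limit is the JMeter variable from the question, and the per-minute reset is deliberately left out:

import java.util.concurrent.atomic.AtomicInteger

// shared, thread-safe counter stored in JMeter properties (visible to every thread)
def counter = (AtomicInteger) props.computeIfAbsent('my_counter', { new AtomicInteger(0) })
int count = counter.incrementAndGet()
int limit = vars.get('current_limit') as int
String code = prev.getResponseCode()

if (code == '200' && count > limit) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('leakage of ' + (count - limit) + ' requests')
} else if (code == '429' && count <= limit) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('got 429 after only ' + count + ' requests')
}

Even with such a counter, cross-checking the charts mentioned above is the more reliable way to validate the limit, because the order in which parallel threads reach the assertion is not guaranteed.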

How to loop CSV file values using Ultimate Thread Group?

I have this links.csv file:
METHOD,HOST,PATH,HITS
GET,google.com,/,7
GET,facebook.com,/,3
I want to create a JMeter test plan using Ultimate Thread Group (UTG) that randomizes the hits based on the last column (HITS) in the CSV above.
When viewing the results tree, I want to see something like this:
1. google.com
2. google.com
3. facebook.com
4. google.com
5. google.com
6. google.com
7. google.com
8. google.com
9. facebook.com
10. facebook.com
Ideally, I want to set the UTG to use the following settings:
Start Threads Count = sum of all hits in the CSV file (e.g. 7 + 3)
Initial Delay = 0
Startup Time = 60
Hold Load For = 30
Shutdown Time = 0
How can I achieve this? I would appreciate code samples and screenshots since I'm still new to JMeter.
I can only think of generating a new CSV file out of your original one in order to:
Get the "sum" of "HITS"
Generate a line containing method, host and path per "hit"
In order to achieve this:
Add setUp Thread Group to your Test Plan
Add JSR223 Sampler to the Thread Group
Put the following code into "Script" area:
def entries = new File('/path/to/original.csv').readLines().drop(1)   // skip the CSV header line
def sum = 0
def newCSV = new File('/path/to/generated.csv')
newCSV << 'METHOD,HOST,PATH' << System.getProperty('line.separator')
entries.each { entry ->
    def values = entry.split(',')
    def hits = values[3] as int          // the HITS column
    sum += hits
    1.upto(hits, {                       // write one line per hit
        newCSV << values[0] << ',' << values[1] << ',' << values[2] << System.getProperty('line.separator')
    })
}
props.put('threads', sum as String)      // expose the total as a JMeter property
Use the __P() function, like ${__P(threads,)}, in the "Start Threads Count" field of the Ultimate Thread Group
Use the new "generated" CSV file in the CSV Data Set Config in the Ultimate Thread Group
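With the sample links.csv above, the generated file would contain one line per hit (7 for google.com and 3 for facebook.com) and the threads property would be set to 10:

METHOD,HOST,PATH
GET,google.com,/
GET,google.com,/
GET,google.com,/
GET,google.com,/
GET,google.com,/
GET,google.com,/
GET,google.com,/
GET,facebook.com,/
GET,facebook.com,/
GET,facebook.com,/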

Why is Google Translate API giving me so many 403s?

I've posted the relevant code below. I have a quota of 100 requests / second and a total quota of 50M characters daily (the latter of which I've never hit). I'm including 75 requests in each batch (i.e. in the below, there are 75 strings in each group).
I'm constantly running into 403s, usually after a very short time span of less than a minute of firing off requests. After that, no amount of backoff works until the next day. This is really debilitating and I'm very unsure why it's happening. So far, their response team hasn't been helpful for diagnosing the issue.
Here's an example error:
"Google Translate Error on checksum 48af8c32261d9cb8911d99168a6f5b21: https://www.googleapis.com/language/translate/v2?q=QUERYSTRING&source=ja&target=en&key=MYKEY&format=text&alt=json returned "User Rate Limit Exceeded">"
def _google_translate_callback(self, request_id, response, err):
    if err:
        print 'Google Translate Error on request_id %s: %s' % (request_id, err)
        print 'Backing off for %d seconds.' % self.backoff
        sleep(self.backoff)
        if self.backoff < 4096:
            self.backoff = self.backoff * 2
        self._translate_array_google_helper()
    else:
        translation = response['translations'][0]['translatedText'] \
            .replace('&quot;', '"') \
            .replace('&#39;', "'")
        self.translations.append((request_id, translation))
        if is_done():
            self.is_translating = False
        else:
            self.current_group += 1
            self._translate_array_google_helper()

def _translate_array_google_helper(self):
    if self.current_group >= len(self.groups):
        self.is_translating = False
        return
    service = self.google_translator.translations()
    group = self.groups[self.current_group]
    batch = self.google_translator.new_batch_http_request(
        callback=self._google_translate_callback
    )
    for text, request_id in group:
        format_ = 'text'
        if is_html(text):
            format_ = 'html'
        batch.add(
            service.list(q=text, format=format_,
                         target=self.to_lang, source=self.from_lang),
            request_id=request_id
        )
    batch.execute()

AWK performance while processing big files

I have an awk script that I use to calculate how long some transactions take to complete. The script gets the unique ID of each transaction and stores the minimum and maximum timestamp of each one. Then it calculates the difference and at the end it shows those results that are over 60 seconds.
It works very well with a few hundred thousand lines (200k), but it takes much longer against real-world data. I tested it several times and it takes about 15 minutes to process about 28 million lines. Can I consider this good performance, or is it possible to improve it?
I'm open to any kind of suggestion.
Here is the complete code:
zgrep -E "\(([a-z0-9]){15,}:" /path/to/very/big/log | awk '{
    gsub("[()]|:.*","",$4);                        #just removing ugly chars
    ++cont
    min=$4"min"                                    #name for the minimum value of the current transaction
    max=$4"max"                                    #same as previous, just for readability
    split($2,secs,/[:,]/)                          #split hours, minutes and seconds
    seconds = 3600*secs[1] + 60*secs[2] + secs[3]  #turn everything into seconds
    if (arr[min] > seconds || arr[min] == 0)
        arr[min] = seconds
    if (arr[max] < seconds)
        arr[max] = seconds
    dif = arr[max] - arr[min]
    if (dif > 60)
        result[$4] = dif
}
END {
    for (x in result)
        print x" - "result[x]
    print ":Processed "cont" lines"
}'
You don't need to calculate the dif every time you read a record. Just do it once in the END section.
You don't need that cont variable, just use NR.
You don't need to populate separate min and max names by string concatenation; string concatenation is slow in awk.
You shouldn't change $4 as that will force the record to be recompiled.
Try this:
awk '{
    name = $4
    gsub(/[()]|:.*/,"",name)                       #just removing ugly chars
    split($2,secs,/[:,]/)                          #split hours, minutes and seconds
    seconds = 3600*secs[1] + 60*secs[2] + secs[3]  #turn everything into seconds
    if ( !(name in min) ) {                        #first time we see this transaction id
        min[name] = max[name] = seconds
    }
    else {
        if (min[name] > seconds) {
            min[name] = seconds
        }
        if (max[name] < seconds) {
            max[name] = seconds
        }
    }
}
END {
    for (name in min) {
        diff = max[name] - min[name]
        if (diff > 60) {
            print name, "-", diff
        }
    }
    print ":Processed", NR, "lines"
}'
After making some tests, and with the suggestions given by Ed Morton (both for code improvement and for performance testing), I found that the bottleneck was the zgrep command. Here is an example that does several things:
Checks if we have a transaction line (first if)
Cleans the transaction id
Checks if it has already been registered (second if) by looking it up in the array
If it is not registered, checks if it is the appropriate type of transaction and, if so, registers the timestamp in seconds
If it is already registered, saves the new timestamp as the maximum
After all that, it performs the necessary operations to calculate the time difference
Thank you very much to all who helped me.
zcat /veryBigLog.gz | awk '
{
    if ($4 ~ /^\([[:alnum:]]/) {
        name = $4; gsub(/[()]|:.*/,"",name)
        if (!(name in min)) {
            if ($0 ~ /TypeOFTransaction/) {
                split($2,secs,/[:,]/)
                seconds = 3600*secs[1] + 60*secs[2] + secs[3]
                max[name] = min[name] = seconds
                print length(min)" new "name" start at "seconds
            }
        } else {
            split($2,secs,/[:,]/)
            seconds = 3600*secs[1] + 60*secs[2] + secs[3]
            if (max[name] < seconds) max[name] = seconds
            print name" new max "max[name]
        }
    }
}
END {
    for (x in min) {
        dif = max[x] - min[x]
        print max[x]" max - min "min[x]" : "dif
    }
    print "Processed "NR" Records"
    print "Found "length(min)" MOs"
}'
