I have a POST request that takes its input from a JSR223 PreProcessor.
This is so I can include 50 rows from a CSV per request.
I would now like to iterate over the CSV so I can send all of its rows, 50 at a time per request.
The PreProcessor code below works great to take the first 50 records:
def rows = 50 // how many rows you need to collect from the CSV
def content = [:]
def body = []
1.upto(rows, { index ->
    def values = new File('locations.csv').readLines().get(index).split(',')
    def entry = [:]
    entry.put("sAddress", values[0])
    entry.put("sPostcode", values[1])
    entry.put("sID", values[2])
    body.add(entry)
})
content.put('data', body)
sampler.addNonEncodedArgument('', new groovy.json.JsonBuilder(content).toPrettyString(), '')
sampler.setPostBodyRaw(true)
This works amazingly well.
However, I would like to iterate through this, taking the next 50 records on the next Thread Group loop, so that after, say, 100 loops all 5000 rows of the CSV will have been sent. Is there a way of doing this?
You can get the current iteration number via vars.getIteration().
Given that, you can use a calculated offset instead of the hard-coded 1.
Something like:
def rows = 50
def iteration = vars.getIteration()
def start = (iteration - 1) * rows + 1 // first CSV line for this iteration (line 0 stays skipped, as in your script)
def end = iteration * rows             // last CSV line for this iteration
start.upto(end, { index ->
    // do what you need here
})
In the above example, vars stands for the JMeterVariables class instance; see the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more information on this and other JMeter API shorthands which are available for the JSR223 Test Elements.
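Putting it together with your PreProcessor, a minimal sketch (assuming the same locations.csv layout and that line 0 should still be skipped, as in your original script) could look like this:

def rows = 50
def lines = new File('locations.csv').readLines()
def iteration = vars.getIteration()
def start = (iteration - 1) * rows + 1                 // skip line 0, as before
def end = Math.min(iteration * rows, lines.size() - 1) // don't run past the end of the file
def body = []
start.upto(end, { index ->
    def values = lines.get(index).split(',')
    body.add([sAddress: values[0], sPostcode: values[1], sID: values[2]])
})
def content = [data: body]
sampler.addNonEncodedArgument('', new groovy.json.JsonBuilder(content).toPrettyString(), '')
sampler.setPostBodyRaw(true)

With 5000 rows this gives 100 iterations of 50 rows each, so setting the Thread Group loop count to 100 covers the whole file.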
I have a requirement where I need to create 2 million rows of data for the past 3 months, from April till July. Based on that calculation I need to create 2,222 rows every day. I have an INSERT statement where I need to pass a timestamp, which is working perfectly fine. If I use the timeshift function, for example ${__timeShift(yyyy-MM-dd HH:mm:ss:SSS,now,-PT2H,,)}, it successfully inserts the data, but once the 2,222 loop is complete I don't want to wait 24 hours because of the timeshift. Once the loop is complete it should take the next date and start the loop again. Can someone help me solve this problem?
You can iterate each day for the past 3 months using any suitable JSR223 Test Element and the following Groovy code:
def format = 'yyyy-MM-dd HH:mm:ss:SSS'
def now = new Date()
use(groovy.time.TimeCategory) {
    def april = now - 3.month
    april.upto(now) {                 // iterates day by day from 3 months ago until today
        def minus2hours = it - 2.hour
        println(minus2hours.format(format))
        1.upto(2222) {
            // here will be your current loop
        }
    }
}
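If you need the shifted value in a JMeter variable rather than on stdout, a minimal sketch (the variable name timestamp is an assumption) replaces the println line:

// expose the shifted timestamp so other elements can reference it as ${timestamp}
vars.put('timestamp', minus2hours.format(format))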
More information:
Groovy TimeCategory
Creating and Testing Dates in JMeter - Learn How
I'm currently trying to figure out how to get a count of unique records to display using dc.js and d3.js.
The data set looks like this:
id,name,artists,genre,danceability,energy,key,loudness,mode,speechiness,acousticness,instrumentalness,liveness,valence,tempo,duration_ms,time_signature
6DCZcSspjsKoFjzjrWoCd,God's Plan,Drake,Hip-Hop/Rap,0.754,0.449,7,-9.211,1,0.109,0.0332,8.29E-05,0.552,0.357,77.169,198973,4
3ee8Jmje8o58CHK66QrVC,SAD!,XXXTENTACION,Hip-Hop/Rap,0.74,0.613,8,-4.88,1,0.145,0.258,0.00372,0.123,0.473,75.023,166606,4
There are 100 records in the data set, and I would expect the count to display 70 for the count of unique artists.
var ndx = crossfilter(spotifyData);
totalArtists(ndx);
....
function totalArtists(ndx) {
    // Select the artists
    var totalArtistsND = dc.numberDisplay("#unique-artists");
    // Count them
    var dim = ndx.dimension(dc.pluck("artists"));
    var uniqueArtist = dim.groupAll();
    totalArtistsND.group(uniqueArtist).valueAccessor(x => x);
    totalArtistsND.render();
}
I am only getting 100 as a result when I should be getting 70.
Thanks a million, any help would be appreciated
You are on the right track - a groupAll object is usually the right kind of object to use with dc.numberDisplay.
However, dimension.groupAll doesn't use the dimension's key function. Like any groupAll, it looks at all the records and returns one value; the only difference between dimension.groupAll() and crossfilter.groupAll() is that the former does not observe the dimension's filters while the latter observes all filters.
If you were going to use dimension.groupAll, you'd have to write reduce functions that watch the rows as they are added and removed, and keep a count of how many unique artists they have seen. Sounds kind of tedious and possibly buggy.
Instead, we can write a "fake groupAll", an object whose .value() method returns a value dynamically computed according to the current filters.
The ordinary group object already has a unique count: the number of bins. So we can create a fake groupAll which wraps an ordinary group and returns the length of the array returned by group.all():
function unique_count_groupall(group) {
    return {
        value: function() {
            return group.all().filter(kv => kv.value).length;
        }
    };
}
Note that we also have to filter out any bins of value zero before counting.
Use the fake groupAll like this:
var uniqueArtist = unique_count_groupall(dim.group());
Demo fiddle.
I just added this to the FAQ.
I am looking to see if it's possible to easily compute the total time taken to complete a Thread Group. In my case, I have a thread group with 100 concurrent users and 1 HTTP request. I would like to know how long it took to complete the requests from all 100 users.
I tried using a Transaction Controller with an Aggregate Report but it doesn't seem to capture the value across all concurrent users.
Thanks,
J
Add a tearDown Thread Group to your Test Plan. It is executed after all other thread groups, so it is a good place to measure the test duration.
Add a JSR223 Sampler as a child of the tearDown Thread Group.
Put the following Groovy code into the "Script" area:
def testStart = new Date(vars.get('TESTSTART.MS') as long) // TESTSTART.MS is a pre-defined JMeter variable holding the test start time
def testEnd = new Date()
use(groovy.time.TimeCategory) {
    def duration = testEnd - testStart
    log.info("Test duration: ${duration.seconds}")
}
Once the test finishes you will see the seconds component of its duration in the jmeter.log file.
You can also use ${duration.hours}, ${duration.minutes}, ${duration.days}, etc.
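If a single total figure is more convenient than the split components, a minimal alternative sketch computes the elapsed milliseconds directly:

// total test duration without TimeCategory
def elapsedMs = System.currentTimeMillis() - (vars.get('TESTSTART.MS') as long)
log.info("Test duration: ${elapsedMs.intdiv(1000)} seconds")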
I have a table with around 50,000,000 records.
I would like to fetch one column of the whole table
SELECT id FROM `project.dataset.table`
Running this code in the Web Console takes around 80 seconds.
However, when doing this with the Ruby gem, I'm limited to fetching only 100,000 records per query. With the #next method I can access the next 100,000 records.
require "google/cloud/bigquery"

@big_query = Google::Cloud::Bigquery.new(
  project: "project",
  keyfile: "keyfile"
)
@dataset = @big_query.dataset("dataset")
@table = @dataset.table("table")

queue = @big_query.query("SELECT id FROM `project.dataset.table`", max: 1_000_000)
stash = queue
loop do
  queue = queue.next
  unless queue
    break
  else
    O.timed stash.size
    stash += queue
  end
end
The problem with this is that each request takes around 30 seconds. max: 1_000_000 is of no use; I'm stuck at 100,000. This way the query takes over 4 hours, which is not acceptable.
What am I doing wrong?
You should rather run an export job; this way you will have the result as file(s) on GCS.
Downloading from there is easy.
https://cloud.google.com/bigquery/docs/exporting-data
Ruby example here https://github.com/GoogleCloudPlatform/google-cloud-ruby/blob/master/google-cloud-bigquery/lib/google/cloud/bigquery.rb
I have a test plan with 1 Transaction Controller; inside the controller I have 2 HTTP samplers.
When I generate the Summary Report table, I see this output:
What does Total mean in the JMeter listener Summary Report table for a transaction controller?
Why do I have 1500 in the Total row, when the transaction controller has 500 (combined from the 2 samplers)?
My understanding is the total should be 1000 (from the 2 samplers) or 500 (from the 1 transaction controller).
The Total row just sums up all the rows reported (HTTP samplers and transaction controllers, if any).
The Total row values are calculated as follows:
#Total Samples = sum of all rows' Samples (= 500 + 500 + 500 = 1500)
#Total min = min of all rows' min: min(153, 239, 418) = 153
#Total max = max of all rows' max: max(3788, 2218, 4008) = 4008
#Total throughput = sum of all rows' throughput (= 4.2 + 4.2 + 4.2)
A transaction is defined as a collection of multiple HTTP requests (samplers). You use a Transaction Controller to get the collective response time for a bunch of requests related to one transaction.
Real-world example: loading the home page of any web application triggers multiple HTTP requests to load resources like images, .js and .css files. In JMeter, each HTTP request is represented as an HTTP Sampler. You get those response times by default at the sampler level, but you want to know the overall response time to load the page. So you group all those requests under one Transaction Controller, which calculates the overall metrics based on all its child samplers/requests to give the overall response time to load the page, i.e., at the transaction level.
Transaction Controller (TC) row values are defined as follows:
#TC Samples = how many times the transaction is performed (= the number of times any of its child samplers, i.e., HTTP requests, is sent: 500)
#TC min = sum of the min response times of all child samplers (153 + 239) // min response time to perform the transaction
#TC max = sum of the max response times of all child samplers (3788 + 2218) // max response time to perform the transaction
#TC throughput = each child sampler's throughput (= 4.2)