I am trying to calculate SSE traffic latency in a simple load test using the following JSR223 Sampler:
EventHandler eventHandler = eventText -> {
    count++;
    // extract the server-side timestamp from the event payload
    def result = eventText.substring(eventText.indexOf("data='") + 6, eventText.indexOf("', event")).trim() as Long;
    def currentTime = System.currentTimeMillis();
    def diff = currentTime - result;
    list.add(diff);
    resp = resp + "Time from server: " + result + ", JMeter time: " + currentTime + ", diff: " + diff + "\n";
};
SSEClient sseClient = SSEClient.builder().url(pURL).eventHandler(eventHandler).build();
sseClient.start();
sleep(SLEEP_TIME);
sseClient.shutdown();
The time from the server (Node.js/JavaScript) comes from Date.now(), and the time in JMeter from System.currentTimeMillis().
Both Server and JMeter are on the same computer.
It seems that the time methods are not aligned as I can see that in some cases the JMeter time is earlier than the server time:
So I cannot trust the results...
Any other methods I should use on the JavaScript side or the JMeter side?
You cannot trust the results in any case, because having the system under test and the load generator on the same machine is not the best idea: you won't get reliable results due to contention for resources between the two. Moreover, it will be much harder to analyze bottlenecks, even with the PerfMon Plugin.
Also, per the System.currentTimeMillis() JavaDoc:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
So if you want to measure the time difference between the previous and the next SSE, you can consider using System.nanoTime().
However, it's better to move JMeter to another machine, preferably a Linux one, as the precision of System.currentTimeMillis() there is much higher.
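To illustrate the point about clocks: for measuring an interval on one machine, the monotonic System.nanoTime() clock avoids comparing two wall clocks that may disagree. This is a minimal sketch, not tied to JMeter; the class and method names are made up for illustration:

```java
public class NanoTimeDemo {
    // elapsed milliseconds since a System.nanoTime() reading;
    // nanoTime is monotonic, so this never goes negative
    static long elapsedMillis(long startNanos) {
        return (System.nanoTime() - startNanos) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(50); // stand-in for waiting on the next SSE event
        System.out.println("elapsed: " + elapsedMillis(start) + " ms");
    }
}
```

Note this only works when start and end are taken in the same JVM; it does not help compare a Node.js timestamp against a JMeter one.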
I am attempting to write a simulation that reads from a config file for a set of APIs that each have a set of properties.
I read the config for n active scenarios and create requests from a CommonRequest class.
Those requests are then built into scenarios from a CommonScenario.
CommonScenarios have attributes that are used to create their injection profiles.
That all seems to work without issue. But when I try to use the properties / CommonScenario request to build a set of Assertions, it does not work as expected.
// get active scenarios from the config
val activeApiScenarios: List[String] = Utils.getStringListProperty("my.active_scenarios")
// build all active scenarios from config
var activeScenarios: Set[CommonScenario] = Set[CommonScenario]()
activeApiScenarios.foreach { scenario =>
activeScenarios += CommonScenarioBuilder()
.withRequestName(Utils.getProperty("my." + scenario + ".request_name"))
.withRegion(Utils.getProperty("my." + scenario + ".region"))
.withConstQps(Utils.getDoubleProperty("my." + scenario + ".const_qps"))
.withStartQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps").head)
.withPeakQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(1))
.withEndQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(2))
.withFeeder(Utils.getProperty("my." + scenario + ".feeder"))
.withAssertionP99(Utils.getDoubleProperty("my." + scenario + ".p99_lte_assertion"))
.build
}
// build population builder set by adding inject profile values to scenarios
var injectScenarios: Set[PopulationBuilder] = Set[PopulationBuilder]()
var assertions : Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
// create injection profiles from CommonScenarios
injectScenarios += scenario.getCommonScenarioBuilder
.inject(nothingFor(5 seconds),
rampUsersPerSec(scenario.startQps).to(scenario.rampUpQps).during(rampOne seconds),
rampUsersPerSec(scenario.rampUpQps).to(scenario.peakQps).during(rampTwo seconds),
rampUsersPerSec(scenario.peakQps).to(scenario.rampDownQps).during(rampTwo seconds),
rampUsersPerSec(scenario.rampDownQps).to(scenario.endQps).during(rampOne seconds)).protocols(httpProtocol)
// create scenario assertions; this does not work for some reason
assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
}
setUp(injectScenarios.toList)
.assertions(assertions)
Note: scenario.requestName comes straight from the built scenario:
.feed(feederBuilder)
.exec(commonRequest)
I would expect the Assertions to be built from their scenarios into an iterable and passed into setUp().
What I get:
When I print everything out, the scenarios and injection profiles all look good, but when I print my "assertions" I get 4 assertions for the same scenario name with 4 different Lte() values. This is generalized; I actually configured 12 APIs, all with different names, Lte() values, etc.
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1000.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(2000.0)
After the simulation the assertions all run like normal:
Request Name: 4th percentile of response time is less than or equal to 500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1000.0 : false
Request Name: 4th percentile of response time is less than or equal to 2000.0 : false
Not sure what I am doing wrong when building my assertions. Is this even a valid approach? I wanted to ask for help before I abandon this for a different approach.
Disclaimer: Gatling creator here.
It should work.
Then, there are several things I'm really not fond of.
assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
You shouldn't be using the internal AST here. You should use the DSL like you've done for the injection profile.
var assertions : Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
You should use map on activeScenarios (similar to Java's Stream API), not use a mutable accumulator.
val activeScenarios = activeApiScenarios.map(???)
val injectScenarios = activeScenarios.map(???)
val assertions = activeScenarios.map(???)
Also, since you don't seem to be familiar with Scala, you could consider switching to Java (which Gatling has supported for more than a year).
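The map-over-a-collection shape recommended above looks much the same in Java's Stream API. A sketch under stated assumptions: the ScenarioKeys class is hypothetical, and the key format merely echoes the question's `my.<scenario>.request_name` properties:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ScenarioKeys {
    // derive one value per scenario with map(), instead of
    // accumulating into a mutable Set with +=
    static List<String> requestNameKeys(List<String> scenarios) {
        return scenarios.stream()
                .map(s -> "my." + s + ".request_name")
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(requestNameKeys(List.of("login", "search")));
    }
}
```

The same pattern applies to building the injection profiles and assertions: one immutable `map` per derived collection, no shared mutable state.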
I'm using the JMeter API to develop a load-testing tool. How can I modify JMeter parameters at runtime, such as the number of threads or ConstantThroughputTimer.throughput? I looked at the demos on GitHub but couldn't find an answer.
You cannot change the number of threads at runtime (at least not as of JMeter 5.5).
What you can do is to use Constant Throughput Timer in combination with Beanshell Server to control requests execution rate.
I tried and found the answer by writing my own code. Parameters can be modified dynamically via properties: just call JMeterUtils.getJMeterProperties().setProperty("throughput", prop).
ConstantThroughputTimer:
ConstantThroughputTimer timer = new ConstantThroughputTimer();
long rpsCalc = (long) (rps * 60); // the timer works in samples per minute
// read the target throughput from the "throughput" property, defaulting to 50
String paramStr = "${__P(throughput,50)}";
timer.setProperty("calcMode", 2);
StringProperty stringProperty = new StringProperty();
stringProperty.setName("throughput");
stringProperty.setValue(paramStr);
timer.setProperty(stringProperty);
timer.setEnabled(true);
timer.setProperty(TestElement.TEST_CLASS, ConstantThroughputTimer.class.getName());
timer.setProperty(TestElement.GUI_CLASS, TestBeanGUI.class.getName());
return timer;
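One detail worth calling out from the snippet above: Constant Throughput Timer expresses throughput in samples per minute, which is why the code multiplies requests per second by 60. A trivial sketch of that conversion (the ThroughputCalc name is made up):

```java
public class ThroughputCalc {
    // JMeter's Constant Throughput Timer takes its target in samples
    // per MINUTE, so a requests-per-second figure must be scaled by 60
    static long toSamplesPerMinute(double rps) {
        return (long) (rps * 60);
    }

    public static void main(String[] args) {
        System.out.println(toSamplesPerMinute(50)); // 50 rps -> 3000 samples/minute
    }
}
```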
I have the following scenario to emulate in jMeter:
10 users (in a Thread Group) are logging in, and each user should wait/delay for 10 seconds before the next user starts. How do I implement this?
Right now I have something like this:
ThreadGroup(10usrs)
Http Sampler Request(LogIn)
Http Sampler Request(LookUpStatement)
Http Sampler Request(ControlPanel)
Http Sampler Request(CapAvailableList)
Http Sampler Request(LoadAllChatCount)
Http Sampler Request(ReturnNotificationCount)
Timer (10 sec)?
Which timer should I use: Constant Throughput Timer or Stepping Throughput Timer?
Is it even possible or do I have to use some workaround?
Any help with tutorial or links much appreciated.
You can start a new user (up to 10 users) each second by using the "Stepping Thread Group"
http://jmeter-plugins.org/wiki/SteppingThreadGroup/
If you only need to create a timer between requests, putting a Constant Timer will do the trick (although I would prefer the Gaussian Random Timer).
The Constant Throughput Timer creates a dynamic delay to limit the hits per second produced by your script; I don't think this is what you meant.
You can use the Test Action sampler along with a BeanShell Timer for this. In the steps below we use a pacing of 4500 milliseconds. Irrespective of how much time the previous request took, it applies only the remaining time: if the request took 1000 ms, it applies 4500 - 1000 = 3500 ms as the pacing.
Add a Test Action Sampler
In the "Duration (milliseconds)" field, set the value to simply ${mydelay}
Right-click the Test Action Sampler > Add > Timer > BeanShell Timer, then paste the following code:
Long pacing = 4500 - prev.getTime();
if (pacing > 0) {
Integer iPacing = pacing != null ? pacing.intValue() : null;
log.info(String.valueOf(iPacing));
vars.put("mydelay", String.valueOf(iPacing));
return iPacing;
} else {
vars.put("mydelay", "0");
return 0;
}
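The arithmetic in that timer (pacing minus the previous request's duration, floored at zero) can be captured in a small pure function. This is an illustrative sketch; the Pacing class and remaining method are hypothetical names, not part of JMeter:

```java
public class Pacing {
    // remaining delay so that each iteration lasts pacingMs in total;
    // zero if the request already took longer than the pacing interval
    static long remaining(long pacingMs, long elapsedMs) {
        return Math.max(0, pacingMs - elapsedMs);
    }

    public static void main(String[] args) {
        System.out.println(remaining(4500, 1000)); // request took 1000 ms -> wait 3500 ms
        System.out.println(remaining(4500, 5000)); // request overran the pacing -> wait 0 ms
    }
}
```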
Use a Test Action with a JSR223 Timer at the beginning and at the end of the main loop for which you want to maintain the pace (the timer at the beginning only records the start time), and use the code below to achieve interval pacing. The Action in each Test Action should be set to Pause for a duration of 0 ms.
Also create a JMeter variable called pacing, holding the pacing value you require.
Use the following code in the JSR223 Timer under the first Test Action:
/**
* PACING START
* Set the start time for pacing calculation
*
*/
def d = new Date()
try {
vars.put("pacingStartTime", "${d.getTime()}")
return 1
}
catch (Exception e) {
log.warn("[ Pacing: Failed to set the start time ]", e)
throw e;
}
Use the following in the timer at the end:
/**
* PACING END
* Calculate the pacing and apply // return!
*
*/
def d = new Date()
try {
def pacing = Long.parseLong(vars.get("pacing")) // get the required pacing value from jmeter variable.
String startTime = vars.get("pacingStartTime") // get the start time which was set in the beginning of the loop
def diff = d.getTime() - Long.parseLong(startTime) // current time minus start time
def sleep = pacing > diff ? pacing - diff : 0 // logic for sleep time
log.info("[ Pacing: ${pacing}ms, Remaining time: ${sleep}ms ]")
return sleep
}
catch (Exception e) {
    log.warn("[ Pacing: Failed to calculate pacing ]", e)
    return 1000 // fall back to a 1-second delay if the calculation fails
}
I have something like a microtime() function at the very start of my node.js / express app.
function microtime (get_as_float) {
// Returns either a string or a float containing the current time in seconds and microseconds
//
// version: 1109.2015
// discuss at: http://phpjs.org/functions/microtime
// + original by: Paulo Freitas
// * example 1: timeStamp = microtime(true);
// * results 1: timeStamp > 1000000000 && timeStamp < 2000000000
var now = new Date().getTime() / 1000;
var s = parseInt(now, 10);
return (get_as_float) ? now : (Math.round((now - s) * 1000) / 1000) + ' ' + s;
}
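For comparison, the same "seconds since the epoch, with sub-second precision" value is easy to produce in Java. A hedged sketch; the Microtime class name is made up and this mirrors only the `microtime(true)` float variant:

```java
public class Microtime {
    // epoch time in seconds as a double, analogous to microtime(true):
    // millisecond precision, expressed in fractional seconds
    static double microtime() {
        return System.currentTimeMillis() / 1000.0;
    }

    public static void main(String[] args) {
        System.out.println(microtime());
    }
}
```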
The code of the actual app looks something like this:
application.post('/', function(request, response) {
    var t1 = microtime(true);
    //code
    //code
    response.send(something);
    console.log("Time elapsed: " + (microtime(true) - t1));
});
Time elapsed: 0.00599980354309082
My question is, does this mean that from the time a POST request hits the server to the time a response is sent out is give or take ~0.005s?
I've measured it client-side but my internet is pretty slow so I think there's some lag that has nothing to do with the application itself. What's a quick and easy way to check how quickly the requests are being processed?
Shameless plug here. I've written an agent that tracks the time usage for every Express request.
http://blog.notifymode.com/blog/2012/07/17/profiling-express-web-framwork-with-notifymode/
In fact, when I first started writing the agent, I took the same approach, but I soon realized that it was not accurate. My implementation tracks the time difference between request and response by substituting the Express router, which allowed me to add tracker functions. Feel free to give it a try.
C# .NET 2.0 if that turns out to be applicable.
I'm going to start getting to the bottom of why a web service we have is acting slow. This web service jumps over a proxy into another domain, queries a stored procedure for some data, and then returns an int/string/DataSet, depending on what I asked for. I just wrote a console app that queries it repeatedly in the same fashion so that I can gather some statistics to start with.
Keep-alive is turned off for each request, for some legacy reason nobody documented, so there's an immediate smell.
When looping through the same request multiple times, I noticed some strange behavior. Here's my output that reflects how long each iteration took to make the query and return data.
Beginning run #1...completed in 4859.3128 ms
Beginning run #2...completed in 3812.4512 ms
Beginning run #3...completed in 3828.076 ms
Beginning run #4...completed in 3828.076 ms
Beginning run #5...completed in 546.868 ms
Beginning run #6...completed in 3828.076 ms
Beginning run #7...completed in 546.868 ms
Beginning run #8...completed in 3828.076 ms
Beginning run #9...completed in 3828.076 ms
Beginning run #10...completed in 578.1176 ms
Beginning run #11...completed in 3796.8264 ms
Beginning run #12...completed in 3828.076 ms
Beginning run #13...completed in 3828.076 ms
Beginning run #14...completed in 3828.076 ms
Beginning run #15...completed in 3828.076 ms
Beginning run #16...completed in 3828.076 ms
Beginning run #17...completed in 546.868 ms
Beginning run #18...completed in 3828.076 ms
Beginning run #19...completed in 3828.076 ms
Beginning run #20...completed in 546.868 ms
Total time: 61165 ms
Average time per request: 3058 ms
I find it odd that there are multiple repeated values, down to a very small level. Is there some bottleneck that would cause it to be returned in the same amount of time repeatedly?
...hopefully my code for figuring out and displaying the millisecond duration isn't off, but the TimeSpan object tracking it is local to each loop iteration, so I don't think it's that.
EDIT: Jon asked for the timing code, so here ya go (variable names changed to protect the proprietary, so might have fat-fingered something that would make this not compile)...
int totalRunTime = 0;
for (int i = 0; i < numberOfIterations; i++)
{
Console.Write("Beginning run #" + (i + 1).ToString() + "...");
DateTime start = DateTime.Now;
SimpleService ws = new SimpleService();
DataSet ds = ws.CallSomeMethod();
DateTime end = DateTime.Now;
TimeSpan runTime = end - start;
totalRunTime += (int)runTime.TotalMilliseconds;
Console.Write("completed in " + runTime.TotalMilliseconds.ToString() + " ms\n");
}
Console.WriteLine("Total time: " + totalRunTime.ToString() + " ms");
Console.WriteLine("Average time per request: " + (totalRunTime / numberOfIterations).ToString() + " ms\n");
The simplest way, without running a profiler etc., is to make the web app log the exact time (as near as you can get it, obviously) it starts the operation, various times within the call, and the time it finishes. Then you can see where it's taking the time. (Using a Stopwatch will give you more accuracy, but it'll be slightly harder to do.)
I agree that it's odd that you've got repeated times. Could you post the code that's measuring it? I wouldn't be hugely surprised to see some sort of captured variable problem which is confusing your timings.
EDIT: Your timing code looks okay. That's very strange. I suggest you record the times at the web service as well, and see whether it looks the same. It's almost as if there's something deliberately throttling it.
When you run it, does it look like it's taking the amount of time it says it is - i.e. when it says it's taken 3 seconds, is that about 3 seconds after the last line was written?
Now you need to get some benchmark values for the other steps in the chain. Check the server logs to get the time your request hit the web server, and add some logging to the web service code to see when the web server hands off to the actual "working" code.
Once you've done that you can start to narrow down the performance of the slowest part, repeat as much as you like.
Could creating (and timing the creation) of the SimpleService be skewing your numbers?
What happens if you pull that out of the loop?
int totalRunTime = 0;
SimpleService ws = new SimpleService();
for (int i = 0; i < numberOfIterations; i++)
{
Console.Write("Beginning run #" + (i + 1).ToString() + "...");
DateTime start = DateTime.Now;
DataSet ds = ws.CallSomeMethod();
DateTime end = DateTime.Now;
TimeSpan runTime = end - start;
totalRunTime += (int)runTime.TotalMilliseconds;
Console.Write("completed in " + runTime.TotalMilliseconds.ToString() + " ms\n");
}
Console.WriteLine("Total time: " + totalRunTime.ToString() + " ms");
Console.WriteLine("Average time per request: " + (totalRunTime / numberOfIterations).ToString() + " ms\n");