I am attempting to write a simulation that can read from a config file for a set of APIs that each have a set of properties.
I read the config for n active scenarios and create requests from a CommonRequest class.
Those requests are then built into scenarios from a CommonScenario.
CommonScenarios have attributes that are used to create their injection profiles.
That all seems to work without issue, but when I try to use the properties / CommonScenario request to build a set of Assertions, it does not work as expected.
// get active scenarios from the config
val activeApiScenarios: List[String] = Utils.getStringListProperty("my.active_scenarios")
// build all active scenarios from config
var activeScenarios: Set[CommonScenario] = Set[CommonScenario]()
activeApiScenarios.foreach { scenario =>
activeScenarios += CommonScenarioBuilder()
.withRequestName(Utils.getProperty("my." + scenario + ".request_name"))
.withRegion(Utils.getProperty("my." + scenario + ".region"))
.withConstQps(Utils.getDoubleProperty("my." + scenario + ".const_qps"))
.withStartQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps").head)
.withPeakQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(1))
.withEndQps(Utils.getDoubleListProperty("my." + scenario + ".toth_qps")(2))
.withFeeder(Utils.getProperty("my." + scenario + ".feeder"))
.withAssertionP99(Utils.getDoubleProperty("my." + scenario + ".p99_lte_assertion"))
.build
}
// build population builder set by adding inject profile values to scenarios
var injectScenarios: Set[PopulationBuilder] = Set[PopulationBuilder]()
var assertions : Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
// create injection profiles from CommonScenarios
injectScenarios += scenario.getCommonScenarioBuilder
.inject(nothingFor(5 seconds),
rampUsersPerSec(scenario.startQps).to(scenario.rampUpQps).during(rampOne seconds),
rampUsersPerSec(scenario.rampUpQps).to(scenario.peakQps).during(rampTwo seconds),
rampUsersPerSec(scenario.peakQps).to(scenario.rampDownQps) during (rampTwo seconds),
rampUsersPerSec(scenario.rampDownQps).to(scenario.endQps).during(rampOne seconds)).protocols(httpProtocol)
// create scenario assertions this does not work for some reason
assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
}
setUp(injectScenarios.toList)
.assertions(assertions)
Note: scenario.requestName comes straight from the built scenario:
.feed(feederBuilder)
.exec(commonRequest)
I would expect the assertions to be built from their scenarios into an iterable and passed into setUp().
What I get:
When I print everything out, the scenarios and injection profiles all look good, but when I print my "assertions" I get 4 assertions for the same scenario name with 4 different Lte() values. The output below is generalized, but I actually configured 12 APIs, all with different names, Lte() values, etc.
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1500.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(1000.0)
Details(List(Request Name)) - TimeTarget(ResponseTime,Percentiles(4.0)) - Lte(2000.0)
After the simulation the assertions all run like normal:
Request Name: 4th percentile of response time is less than or equal to 500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1500.0 : false
Request Name: 4th percentile of response time is less than or equal to 1000.0 : false
Request Name: 4th percentile of response time is less than or equal to 2000.0 : false
I'm not sure what I'm doing wrong when building my assertions. Is this even a valid approach? I wanted to ask for help before abandoning this for a different approach.
Disclaimer: Gatling creator here.
It should work.
That said, there are several things I'm really not fond of.
assertions += Assertion(Details(List(scenario.requestName)), TimeTarget(ResponseTime, Percentiles(4)), Lte(scenario.assertionP99))
You shouldn't be using the internal AST here. You should use the DSL, just like you've done for the injection profile.
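For example, inside the loop over activeScenarios, the equivalent assertion can be written with the public DSL. A minimal sketch, assuming Gatling 3.x method names (percentile4 refers to the 4th percentile rank configured in gatling.conf, 99th by default; exact signatures depend on your Gatling version):
// sketch: public assertion DSL instead of the internal AST (Gatling 3.x names assumed)
val p99Assertion =
  details(scenario.requestName)   // same request name the scenario was built with
    .responseTime
    .percentile4                  // 4th configured percentile rank, 99 by default
    .lte(scenario.assertionP99)   // may take an Int in older Gatling 3 versions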
var assertions : Set[Assertion] = Set[Assertion]()
activeScenarios.foreach { scenario =>
You should use map on activeScenarios (similar to Java's Stream API) rather than a mutable accumulator.
val activeScenarios = activeApiScenarios.map(???)
val injectScenarios = activeScenarios.map(???)
val assertions = activeScenarios.map(???)
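A rough, abbreviated sketch of how those three map calls might be filled in, reusing the builders, property keys, and DSL assertion from above (all identifiers come from the question; verify the DSL signatures against your Gatling version):
// sketch: immutable pipeline built with map instead of mutable Sets
val activeScenarios: List[CommonScenario] = activeApiScenarios.map { scenario =>
  CommonScenarioBuilder()
    .withRequestName(Utils.getProperty("my." + scenario + ".request_name"))
    .withAssertionP99(Utils.getDoubleProperty("my." + scenario + ".p99_lte_assertion"))
    // ... the remaining withXxx calls from the question ...
    .build
}

val injectScenarios: List[PopulationBuilder] = activeScenarios.map { scenario =>
  scenario.getCommonScenarioBuilder
    .inject(
      nothingFor(5.seconds),
      rampUsersPerSec(scenario.startQps).to(scenario.peakQps).during(rampOne.seconds)
      // ... the remaining ramp steps from the question ...
    )
    .protocols(httpProtocol)
}

val assertions: List[Assertion] = activeScenarios.map { scenario =>
  details(scenario.requestName).responseTime.percentile4.lte(scenario.assertionP99)
}

setUp(injectScenarios).assertions(assertions)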
Also, as you don't seem to be familiar with Scala, you should maybe switch to Java (supported for more than a year).
I'm coding an application that uses MapKit features. When I make requests in a for loop to get directions for many locations using MapKit's MKDirections, it gives the error "Directions Not Available" with the following details:
Error Domain=MKErrorDomain Code=3 "Directions Not Available" UserInfo={NSLocalizedFailureReason=Route information is not available at this moment., MKErrorGEOError=-3, MKErrorGEOErrorUserInfo={
details = (
{
intervalType = short;
maxRequests = 50;
"throttler.keyPath" = "app:lszlp.nobetciEczane/0x20200/short(default/any)";
timeUntilReset = 54;
windowSize = 60;
}
);
timeUntilReset = 54;
What are the possible causes?
I've found that two factors must be taken into account to avoid this type of error.
First, the Apple Maps server does not allow more than one location request within 60 seconds, so you have to check the time between two consecutive location requests.
Second, the maximum number of requests is set to 50 by the Apple Maps server, as written in the error details, so you have to limit your for loop to 50 iterations. I could not find any documentation explaining why this limit exists.
With these two approaches, the problem went away.
I am trying to calculate SSE traffic latency in a simple load test using the following JSR223 Sampler:
EventHandler eventHandler = eventText -> {
count++;
// get the time from the server
def result = eventText.substring(eventText.indexOf("data='") + 6, eventText.indexOf("', event")).trim() as Long;
def currentTime = System.currentTimeMillis();
def diff = currentTime - result;
list.add(diff);
resp = resp + "Time from server: " + result + ", JMeter time: " + currentTime + ", diff: " + diff + "\n";
};
SSEClient sseClient = SSEClient.builder().url(pURL).eventHandler(eventHandler).build();
sseClient.start();
sleep(SLEEP_TIME);
sseClient.shutdown();
The time from the server (Node.js / JavaScript) comes from Date.now(), and the time on the JMeter side from System.currentTimeMillis().
Both Server and JMeter are on the same computer.
It seems that the two time sources are not aligned, as in some cases the JMeter time is earlier than the server time.
So I cannot trust the results...
Any other methods I should use on the JavaScript side or the JMeter side?
You cannot trust the results in any case, because having the system under test and the load generator on the same machine is not a good idea: the two processes race each other for resources, so you won't get reliable results. Moreover, it will be much harder to analyze bottlenecks, even with the PerfMon Plugin.
Also, as per the System.currentTimeMillis() JavaDoc:
Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.
So if you want to measure the time difference between the previous and the next SSE event, you can consider using System.nanoTime() instead.
However, it is better to move JMeter to another machine, preferably a Linux one, as the precision of the System.currentTimeMillis() function there is much higher.
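For illustration, a minimal JVM sketch of timing consecutive events with System.nanoTime() (written in Scala here; from a JSR223 sampler the same calls would be made in Groovy, and nanoTime readings are only comparable within a single JVM process):
// sketch: measure the gap between consecutive SSE events within one JVM.
// System.nanoTime() has an arbitrary origin, so only differences between
// readings taken in the same process are meaningful.
object EventGapTimer {
  private var lastEventNanos: Option[Long] = None

  def onEvent(): Unit = {
    val now = System.nanoTime()
    lastEventNanos.foreach { previous =>
      val gapMillis = (now - previous) / 1e6
      println(f"Gap since previous event: $gapMillis%.3f ms")
    }
    lastEventNanos = Some(now)
  }
}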
How can I get an overall PASS/FAIL result for a JMeter thread group without using a post processor on every sampler?
I've tried using a beanshell listener, but it doesn't work for instances where there are multiple samplers inside a transaction controller with "Generate Parent Sample" enabled. In that case, the listener only gets called once per transaction controller and I'm only able to access the result of the last sampler inside the transaction controller.
Edit:
I would like to be able to save a pass/fail value as Jmeter variable or property for the thread group. If one or more components of the thread group fail or return an error, then that would be an overall fail. This variable will then be used for reporting purposes.
My current beanshell listener code:
SampleResult sr = ctx.getPreviousResult();
log.info(Boolean.toString(sr.isSuccessful()));
if (!sr.isSuccessful()){
props.put("testPlanResult", "FAIL");
testPlanResultComment = props.get("testPlanResultComment");
if(testPlanResultComment == ""){
testPlanResultComment = sr.getSampleLabel();
}else {
testPlanResultComment = testPlanResultComment + ", " + sr.getSampleLabel();
}
props.put("testPlanResultComment", testPlanResultComment);
log.info(testPlanResultComment);
}
If you call prev.getParent(), you will be able to fetch the individual sub-results via the getSubResults() function, something like:
prev.getParent().getSubResults().each {result ->
log.info('Sampler: ' + result.getSampleLabel() + ' Elapsed time: ' + result.getTime() )
}
log.info('Total: ' + prev.getParent().getTime())
More information: Apache Groovy - Why and How You Should Use It
I am trying to port tests from using FakeRequest to using WithServer.
In order to simulate a session with FakeRequest, it is possible to use WithSession("key", "value") as suggested in this post: Testing controller with fake session
However when using WithServer, the test now looks like:
"render the users page" in WithServer {
val users = await(WS.url("http://localhost:" + port + "/users").get)
users.status must equalTo(OK)
users.body must contain("Users")
}
Since there is no WithSession(..) method available, I tried WithHeaders(..) instead (does that even make sense?), to no avail.
Any ideas?
Thanks
So I found this question, which is relatively old:
Add values to Session during testing (FakeRequest, FakeApplication)
The first answer to that question seems to have been a pull request to add .WithSession(...) to FakeRequest, but it was not applicable to WS.url
The second answer seems to give me what I need:
Create cookie:
val sessionCookie = Session.encodeAsCookie(Session(Map("key" -> "value")))
Create and execute request:
val users = await(WS.url("http://localhost:" + port + "/users")
.withHeaders(play.api.http.HeaderNames.COOKIE -> Cookies.encodeCookieHeader(Seq(sessionCookie))).get())
users.status must equalTo(OK)
users.body must contain("Users")
With this, the assertions finally pass properly instead of the request being redirected to the login page.
Note: I am using Play 2.4, so I use Cookies.encodeCookieHeader, because Cookies.encode is deprecated
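Putting the pieces together, a rough end-to-end sketch of the test under Play 2.4, assuming a specs2 PlaySpecification (which provides await and OK) and WithServer (which provides port); the imports may need adjusting to your setup:
import play.api.http.HeaderNames
import play.api.libs.ws.WS
import play.api.mvc.{Cookies, Session}
import play.api.test.{PlaySpecification, WithServer}

class UsersPageSpec extends PlaySpecification {

  "render the users page" in new WithServer {
    // encode the fake session into a signed session cookie
    val sessionCookie = Session.encodeAsCookie(Session(Map("key" -> "value")))

    // send the cookie with the request so the server sees the session
    val users = await(
      WS.url("http://localhost:" + port + "/users")
        .withHeaders(HeaderNames.COOKIE -> Cookies.encodeCookieHeader(Seq(sessionCookie)))
        .get())

    users.status must equalTo(OK)
    users.body must contain("Users")
  }
}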
I have something like a microtime() function at the very start of my node.js / express app.
function microtime (get_as_float) {
// Returns either a string or a float containing the current time in seconds and microseconds
//
// version: 1109.2015
// discuss at: http://phpjs.org/functions/microtime
// + original by: Paulo Freitas
// * example 1: timeStamp = microtime(true);
// * results 1: timeStamp > 1000000000 && timeStamp < 2000000000
var now = new Date().getTime() / 1000;
var s = parseInt(now, 10);
return (get_as_float) ? now : (Math.round((now - s) * 1000) / 1000) + ' ' + s;
}
The code of the actual app looks something like this:
application.post('/', function(request, response) {
var t1 = microtime(true);
//code
//code
response.send(something);
console.log("Time elapsed: " + (microtime(true) - t1));
});
Time elapsed: 0.00599980354309082
My question is: does this mean that the time from when a POST request hits the server to when the response is sent out is roughly ~0.005 s?
I've measured it client-side, but my internet is pretty slow, so I think there's some lag that has nothing to do with the application itself. What's a quick and easy way to check how quickly the requests are being processed?
Shameless plug here. I've written an agent that tracks the time usage for every Express request.
http://blog.notifymode.com/blog/2012/07/17/profiling-express-web-framwork-with-notifymode/
In fact, when I first started writing the agent, I took the same approach, but I soon realized that it is not accurate. My implementation tracks the time difference between the request and the response by substituting the Express router, which allowed me to add tracker functions. Feel free to give it a try.