JMeter test not stopping after duration ends in distributed mode

I'm running a very simple JMeter test with a Basic Thread Group and an HTTP Sampler. The duration set for execution is 10 min (600 sec).
The test ran and stopped itself (after 10 min) successfully on my local machine, in both JMeter GUI and CLI mode.
However, when I run the same test in distributed mode, the test does not stop itself and hangs. I've been observing this issue mostly with a thread count of 200 and above.
Some of the JMeter properties I have overridden:
"server.rmi.ssl.disable": "true",
"jmeterengine.nongui.maxport": JMETER_EXEC_PORT,
"jmeterengine.nongui.port": JMETER_EXEC_PORT,
"client.tries": str(3),
"client.retries_delay": str(5000),
"client.rmi.localport": CLIENT_RMI_LOCALPORT,
"server.rmi.localport": SERVER_RMI_LOCALPORT,
"server_port": SERVER_PORT,
"server.exitaftertest": "true",
"jmeterengine.stopfail.system.exit": "true",
"jmeterengine.remote.system.exit": "true",
"jmeterengine.force.system.exit": "true",
"jmeter.save.saveservice.output_format": "csv",
"jmeter.save.saveservice.autoflush": "true",
"beanshell.server.file": "./extras/startup.bsh",
"jmeter.save.saveservice.connect_time": "true",
"jpgc.repo.sendstats": "false",
Here are the JMeter CLI commands I'm using for the JMeter client and server respectively:
// JMeter Client
jmeter.sh -n -f -t {testPlan} -j jmeter.log -l report.csv -LINFO -Lorg.apache.http=DEBUG -Lorg.apache.http.wire=ERROR -Ljmeter.engine=DEBUG -X -R {serverIPs}
// JMeter Server
jmeter.sh -s -Jbeanshell.server.port={beanshellServerPort}
Any pointers to make sure the test ends after the specified duration?
Can this be controlled/enforced by any JMeter setting/property?
Is it something related to the Basic Thread Group vs. thread group plugins like the Concurrency/Ultimate Thread Group?
Example test run:
I tried to run this test plan with 10 workers, out of which 6 finished successfully while 4 of them hung. Please find the logs and test plan here.
Also, why does the Summariser show more Active + Finished threads than Started?

Looking into the log, it seems that JMeter didn't finish all the threads after reporting the test completion:
2020-12-16 08:19:48,933 INFO o.a.j.t.JMeterThread: Thread finished: 10.244.11.4-Thread Group 1-86
2020-12-16 08:25:22,926 INFO o.a.j.e.RemoteJMeterEngineImpl: Shutting test ...
2020-12-16 08:25:22,927 INFO o.a.j.e.RemoteJMeterEngineImpl: ... stopped
2020-12-16 08:25:22,928 INFO o.a.j.t.JMeterThread: Stopping: 10.244.11.4-Thread Group 1-78
2020-12-16 08:25:22,928 INFO o.a.j.t.JMeterThread: Stopping: 10.244.11.4-Thread Group 1-190
You need to compare the log files from the slaves which are failing with the ones which are passing; that will allow you to detect any inconsistencies.
One possible reason I can think of is that you allocated too little JVM heap space: at least one node is using the default 1 GB of heap, which might not be sufficient for executing your test with 200+ virtual users. Consider raising this to a higher value. See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for more details on this and other JMeter performance and tuning tips.
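As a sketch, JMeter's startup scripts read the HEAP environment variable, so a server node can be given a bigger heap like this (4g is an arbitrary example value):

```shell
# Raise the JVM heap before starting a JMeter server node.
# jmeter.sh reads the HEAP environment variable at startup.
export HEAP="-Xms4g -Xmx4g"
echo "$HEAP"       # the startup script passes this to the JVM
# ./jmeter.sh -s   # then start the server as usual
```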
If the issue persists, the only way of finding out why the threads are stuck is taking a thread dump.
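A minimal sketch of taking that dump with the JDK's own tools (assumes a JDK with jps/jstack on PATH; the process-name pattern and output file name are illustrative):

```shell
# Capture a thread dump from a running JMeter server process.
PID=$(jps -l 2>/dev/null | awk '/jmeter|ApacheJMeter/ {print $1; exit}')
if [ -n "$PID" ]; then
  jstack -l "$PID" > "threaddump-$PID.txt" && echo "dump written to threaddump-$PID.txt"
else
  echo "no JMeter process found"
fi
```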

Related

Jmeter || Error generating the report: java.lang.NullPointerException while running test from CMD

I am trying to execute a test from CMD but I'm getting the following error:
Command : jmeter -n -t D:\Users\load.test\Desktop\Performance\apache-jmeter-5.5\bin\UserServices.jmx -l D:\Users\load.test\Desktop\Performance\apache-jmeter-5.5\PerformanceData\MICR_Project\MICR_TestResults\DebugOOM\DebugOOM_1.csv -e -o D:\Users\load.test\Desktop\Performance\apache-jmeter-5.5\PerformanceData\MICR_Project\MICR_TestResults\DebugOOM\DebugOOM_1 -JURL=localhost -JPort=7058 -JUser=5 -JRampUp=1 -JDuration=900 -JRampUpSteps=1 -JOutputFileName=OutputOOM.jtl -JErrorFileName=ErrorOOM.jtl -JFlow1=100
What could be the possible reasons for this error, as it's not very informative?
The NullPointerException is related to generating the HTML Reporting Dashboard; the dashboard generation fails because your test failed to execute at all: no Samplers were run.
The reason why no Samplers were run can be found in the jmeter.log file; the most "popular" reasons are:
The number of threads in the Thread Group is 0.
The test uses a CSV Data Set Config for parameterization and the CSV file is not present.
The test uses a JMeter plugin and the plugin is not installed.
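When triaging, a quick scan of jmeter.log for error markers usually narrows it down; a minimal sketch (the log line below is fabricated purely for illustration):

```shell
# Write a fabricated sample log line, then count common failure markers in it
printf '2023-01-01 10:00:00,000 ERROR o.a.j.JMeter: Error in NonGUIDriver\n' > sample-jmeter.log
grep -cE 'ERROR|FATAL|Exception' sample-jmeter.log   # prints 1 for this sample
```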

Spring Scheduler not working in google cloud run with cpu throttling off

Hello all, I have a Spring scheduler job which has to run on Google Cloud Run at a scheduled time interval.
It works perfectly fine with a local docker-compose deployment. It gets triggered without any issue.
However, in the Google Cloud Run service, with CPU throttling off (which keeps the CPU always allocated), it stops working after the first run.
I will paste the Dockerfile for reference, but I am pretty sure it is working fine.
FROM maven:3-jdk-11-slim AS build-env
# Set the working directory to /app
WORKDIR /app
COPY pom.xml ./
COPY src ./src
COPY css-common ./css-common
RUN echo $(ls -1 css-common/src/main/resources)
# Build and create the common jar
RUN cd css-common && mvn clean install
# Build the job
RUN mvn package -DskipTests
# It's important to use OpenJDK 8u191 or above that has container support enabled.
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM openjdk:11-jre-slim
# Copy the jar to the production image from the builder stage.
COPY --from=build-env /app/target/css-notification-job-*.jar /app.jar
# Run the web service on container startup
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
And below is the terraform script used for the deployment
resource "google_cloud_run_service" "job-staging" {
name = var.cloud_run_job_name
project = var.project
location = var.region
template {
spec {
containers {
image = "${var.docker_registry}/${var.project}/${var.cloud_run_job_name}:${var.docker_tag_notification_job}"
env {
name = "DB_HOST"
value = var.host
}
env {
name = "DB_PORT"
value = 3306
}
}
}
metadata {
annotations = {
"autoscaling.knative.dev/maxScale" = "4"
"run.googleapis.com/vpc-access-egress" = "all-traffic"
"run.googleapis.com/cpu-throttling" = false
}
}
}
timeouts {
update = "3m"
}
}
Something I noticed in the logs:
2022-01-04T00:19:39.178057Z2022-01-04 00:19:39.177 INFO 1 --- [ionShutdownHook] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-01-04T00:19:39.182017Z2022-01-04 00:19:39.181 INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-01-04T00:19:39.194117Z2022-01-04 00:19:39.193 INFO 1 --- [ionShutdownHook] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
It is shutting down the entity manager. I provided -Xmx1024m of heap memory to make sure it has enough.
Although the Google documentation mentions it should work, for some reason the scheduler is not getting triggered. Any help would be really nice.
TL;DR: Using Spring Scheduler on Cloud Run is a bad idea. Prefer Cloud Scheduler instead
In fact, you have to understand the lifecycle of a Cloud Run instance. First of all, CPU is allocated to the process ONLY while a request is being processed.
The immediate effect of that is that a background process, like a scheduler, can't work, because no CPU is allocated outside of request processing.
Except if you set CPU throttling to off. You did that? Great, but there are other caveats!
An instance is created when a request comes in, and lives for up to 15 minutes without processing any request. Then the instance is offloaded and you scale to 0.
Here again, the scheduler can't work if the instance is shut down. The solution is to set min instances to 1 AND CPU throttling to off, to keep one instance up 100% of the time and let the scheduler do its job.
The final issue with Cloud Run is scalability. You set 4 in your Terraform; that means you can have up to 4 instances in parallel, and therefore 4 schedulers running in parallel, one on each instance. Is that really what you want? If not, you can set max instances to 1 to limit the number of parallel instances to 1.
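In Terraform, that combination maps to the Knative autoscaling annotations; a sketch against the metadata block above (these values pin exactly one always-on instance):

```
metadata {
  annotations = {
    "autoscaling.knative.dev/minScale"  = "1"
    "autoscaling.knative.dev/maxScale"  = "1"
    "run.googleapis.com/cpu-throttling" = false
  }
}
```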
In the end, you have one instance, up full time, that can't scale up and down. Because it can't scale, I don't recommend performing the processing on this instance; instead, call another API running on another Cloud Run service, which will be able to scale up and down according to the workload.
That way, you will have only one scheduler performing API calls to other Cloud Run services to do the actual work. That's exactly the purpose of Cloud Scheduler.
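A sketch of that setup in the same Terraform style (the resource name, schedule, URL, and service-account variable are all illustrative placeholders, not values from this question):

```
resource "google_cloud_scheduler_job" "notification-trigger" {
  name     = "notification-trigger"                  # illustrative name
  project  = var.project
  region   = var.region
  schedule = "*/15 * * * *"                          # every 15 minutes, as an example
  http_target {
    http_method = "POST"
    uri         = "https://your-cloud-run-url/run"   # placeholder endpoint
    oidc_token {
      service_account_email = var.scheduler_sa_email # placeholder variable
    }
  }
}
```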

Why isn't JMeter saving some requests statistics on non-GUI mode?

I'm doing some tests using JMeter, but it seems that when running the test in GUI mode, some HTTPS requests' response statistics can be seen via listeners, while when running the same test in non-GUI mode the same responses aren't saved in the .jtl file, and thus aren't shown in listeners when loading the .jtl file in GUI mode.
After running the test in GUI mode:
Results after running test
And then, running the same test in non-GUI mode:
Command:
path/to/jmeter -n -t path/to/test.jmx -l path/to/results.jtl -j path/to/logfile.log -JnumUsers=10 -Jjmeterengine.force.system.exit=true -Dnashorn.args=--no-deprecation-warning
Results after loading the jtl file into a listener
You can see that the /buscarAvaliacaoAluno and /alterarAvaliacaoAluno responses aren't there anymore.
Edit, with the error in the log:
It seems that it can't find the JavaScript engine used by a post-processor.
After reading this post I understood that if I'm using Java 11 or above, using JavaScript shouldn't work; but when running java -version I get "openjdk version 1.8.0_292", and echo ${JAVA_HOME} gives /usr/java/jdk1.8.0_91.
javax.script.ScriptException: Cannot find engine named: 'javascript', ensure you set language field in JSR223 Test Element: Pega id's questionários
at org.apache.jmeter.util.JSR223TestElement.getScriptEngine(JSR223TestElement.java:101) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.extractor.JSR223PostProcessor.process(JSR223PostProcessor.java:44) [ApacheJMeter_components.jar:5.3]
at org.apache.jmeter.threads.JMeterThread.runPostProcessors(JMeterThread.java:940) [ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:572) [ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:489) [ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256) [ApacheJMeter_core.jar:5.3]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
We cannot provide a comprehensive response without seeing the test plan, the full listener output, and the jmeter.log file.
If you don't see a sampler result in the .jtl file, most probably it hasn't been executed, and there could be various reasons for not executing the sampler:
You're looking at old results. Given you provide the number of users as 50, I would expect at least 50 sampler results, and you have only 10. Try adding the -f command-line argument to your JMeter startup command so it overwrites the existing .jtl file with the new data.
You have logic controllers, like the If Controller, conditionally executing your /buscarAvaliacaoAluno and /alterarAvaliacaoAluno samplers, and the condition is not met.
You have an "Action to be taken after a Sampler error" other than Continue in the Thread Group, so your test fails somewhere before these samplers and hence they are not executed.
etc.

Parsing concurrency '${addressThread}' in group 'ThreadGroup' failed, choose 1

I am trying to define a percentage of threads for each Thread Group in my load testing .jmx file, and pass the total number of threads from the Taurus config .yaml file.
However, Taurus fails to parse the expression, even though when I try to debug it using JMeter I can see that the expression works. (I am setting the total number of users in the user.properties file in JMeter.)
This is my yaml config file:
---
scenarios:
  student_service:
    script: ~/jmeter/TestPlan.jmx
    variables:
      addressThread: 100
    think-time: 500ms
execution:
- scenario: student_service
  hold-for: 5m
Versions I am using:
Taurus CLI tool
macOS 10.13.6
JMeter 5.0
You're mixing properties and variables.
It should be:
---
scenarios:
  student_service:
    script: ~/jmeter/TestPlan.jmx
    properties:
      addressThread: 100
    think-time: 500ms
execution:
- scenario: student_service
  hold-for: 5m
And in JMeter, you should be using __P function:
${__P(addressThread)}
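If it helps, the __P function also takes an optional second argument used as a default when the property isn't set, which keeps the plan debuggable in the JMeter GUI without Taurus (1 is an arbitrary fallback value):

```
${__P(addressThread,1)}
```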
Still, there is a bug in the current version of Taurus (1.13.2), so you need to wait for the next version:
https://groups.google.com/d/msg/codename-taurus/QggRz9QDnO0/_FEGllDoGAAJ

How to get better error messaging from nightwatch when running tests in parallel

We have a problem when we run our Nightwatch tests in parallel and there is an issue with the setup, for example the Selenium Grid not being available: the tests execute very quickly and we get no error messages.
Started child process for: folder1/test1
Started child process for: folder1/test2
Started child process for: folder1/test3
Started child process for: folder1/test4
>> folder1/test1 finished.
>> folder1/test2 finished.
>> folder1/test3 finished.
>> folder1/test4 finished.
But when I run the tests serially, I get a good error message like
Error retrieving a new session from the selenium server
Connection refused! Is selenium server started?
{ status: 13,
value: { message: 'Error forwarding the new session Empty pool of VM for setup Capabilities [{acceptSslCerts=true, name=Test1, browserName=chrome, javascriptEnabled=true, uuid=ab54872b-10ee-43a1-bf65-7676262fa647, platform=ANY}]',
class: 'org.openqa.grid.common.exception.GridException' } }
Why don't I get the good error message when running in parallel mode? Is there something I can change so I get the good error message in parallel mode?
By setting
live_output: true
in your nightwatch config file, you'll see logs while running in parallel.
More information: config-basic
