JMeter in distributed mode: JVM doesn't stop because of BeanShell server

WHAT I HAVE:
A JMeter network created for tests in distributed mode: one JMeter master plus a few JMeter slaves. The BeanShell server is disabled. Everything works fine.
WHAT I WANT TO DO:
I want to enable the BeanShell server to be able to modify properties on the fly.
ACTUAL RESULT:
BeanShell server starts and works successfully.
Once the test is done, the following message appears in the log:
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[Thread-7,5,main], stackTrace:java.net.PlainSocketImpl#socketAccept
java.net.AbstractPlainSocketImpl#accept at line:409
java.net.ServerSocket#implAccept at line:545
java.net.ServerSocket#accept at line:513
bsh.util.Sessiond#run at line:65
java.lang.Thread#run at line:748
Thread[Thread-5,5,main], stackTrace:java.net.PlainSocketImpl#socketAccept
java.net.AbstractPlainSocketImpl#accept at line:409
java.net.ServerSocket#implAccept at line:545
java.net.ServerSocket#accept at line:513
bsh.util.Httpd#run at line:64
java.lang.Thread#run at line:748
It's clear that this happens because of the BeanShell server, which keeps running and doesn't exit for some reason.
As a result, the java process never exits and just hangs.
QUESTION:
Any ideas why this happens? How can I avoid it? I don't connect to the BeanShell server at all, neither by HTTP nor by telnet.
MORE DETAILS:
All the nodes are running as Docker containers.
All nodes are deployed in AWS.
I can't reproduce the same issue locally on my machine. Even with the BeanShell server enabled, everything works smoothly and java exits as it should.
WHAT I TRIED:
I tried setting the following properties:
jmeterengine.remote.system.exit=true
jmeterengine.stopfail.system.exit=true
jmeterengine.force.system.exit=true
It doesn't help; the java process still hangs.
The same for
bsh.system.shutdownOnExit = true;
Any ideas are very much appreciated.

I don't think there is a way of stopping the BeanShell server along with the test, as it runs in an endless-loop thread.
I would recommend stopping your test and the BeanShell server using a .bsh file like:
stopEngine();
Thread.sleep(5000L); // just in case wait for 5 seconds for graceful shutdown
System.exit(0);
where:
stopEngine() is a shorthand for StandardJMeterEngine.stopEngine(); it's defined in startup.bsh
System.exit(0); basically shuts down the whole JVM with a 0 exit status code
So you will be able to turn everything off the same way you amend the properties at runtime.
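For example, assuming the BeanShell server was started on port 9000 (beanshell.server.port=9000) and the snippet above was saved as stop.bsh (both names are just placeholders), it could be sent to the running server with the BeanShell client jar shipped in JMeter's lib directory, with something like (run from JMeter's bin directory):
java -jar ../lib/bshclient.jar localhost 9000 stop.bsh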
You can also achieve the same by executing the System.exit(0); command automatically from e.g. a tearDown Thread Group using an OS Process Sampler; however, in this case make sure to set the following property:
jmeter.save.saveservice.autoflush=true
otherwise you may lose some results which have not been written to disk yet.

Finally, I found a workaround, so I'm posting it as an answer here.
It doesn't actually answer the original question, but I don't want to waste time hunting for the root cause, as it seems to be related to the AWS setup (I simply can't reproduce it locally inside the same container I launch remotely).
So, the final scheme is:
To update properties on the fly, I don't have to launch the BeanShell server on both slave and master. The real property change happens on the slave, so the master is not required. That solves my specific case.
To stop a test, you can use the existing shutdown.sh and stoptest.sh scripts. There is no need to do it over the BeanShell server.
If the test was stopped properly, the -X option (server.exitaftertest=true) also works fine, so a forced JVM shutdown is not required and you don't have to add a tearDown Thread Group that does System.exit. But the idea is nice.
It's still worth having a shutdown.bsh script as a tool to force-kill it anyway (a minimal sketch follows below).
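A minimal shutdown.bsh sketch, reusing the snippet from the answer above (the 5-second pause is just an arbitrary grace period before forcing the exit):
stopEngine();        // ask StandardJMeterEngine to stop the test
Thread.sleep(5000L); // give the engine a few seconds to shut down gracefully
System.exit(0);      // force the JVM to exit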

Related

JMeter tests freeze at o.a.j.p.h.s.h.LazyLayeredConnectionSocketFactory: Setting up HTTP TrustAll Socket Factory

I am executing performance tests using a JMeter + Docker setup. When I try to run the tests after the Docker setup, execution starts but freezes at
INFO o.a.j.p.h.s.h.LazyLayeredConnectionSocketFactory: Setting up HTTP TrustAll Socket Factory
This is before starting the first test.
If I kill the process and restart the execution, it moves past the above step, executes the first Thread Group, and freezes again on the second Thread Group.
FYI, I have added trust certificates and am invoking them via the command line:
-Djavax.net.ssl.keyStore=path of certificates
-Djavax.net.ssl.keyStorePassword=password
Can anyone please help me figure out what I am missing here? TIA.
Unfortunately your question doesn't provide a sufficient level of detail.
Try the following:
First of all, try reproducing the issue without Docker virtualization. If it cannot be reproduced on the host machine, double-check your Docker setup: inspect Docker logs, container logs, and so on.
If it can be reproduced without Docker:
Increase JMeter's logging level to DEBUG
Add the following line to the system.properties file:
javax.net.debug=all
This will enable debug output for SSL/TLS connections.
When JMeter "freezes" next time, take a thread dump (a sample command is shown below).
This way you will know at which line of code JMeter is stuck, and hence the exact reason.
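For example, a thread dump can be taken with the JDK's jstack tool (the pid below is a placeholder for the JMeter java process id):
jstack -l <jmeter_pid> > jmeter-threaddump.txt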

JMeter non-GUI mode does not show stats for the script, but it does work for other scripts

While running the script in non-GUI mode for JMeter tests, the test completes but shows min: 0, max: 0...
note:
JMeter 5.4 is installed without any 3rd party plugin.
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
Please help
It means that none of the Samplers were executed, and the reasons could be:
The number of threads in Thread Group is 0
The test contains CSV Data Set Config and the .csv file is not there
The test relies on a JMeter Plugin which is not installed
etc.
The reason should be in the jmeter.log file; check it and I believe you will find either the cause or at least a clue there.
Also be aware that, according to the 9 Easy Solutions for a JMeter Load Test "Out of Memory" Failure article, you should always use the latest version of JMeter, so consider upgrading to JMeter 5.4.1 (or whatever the latest stable version available at the JMeter Downloads page is) at the next available opportunity.
I guess you probably found the answer - but maybe this response will be useful to someone.
I've been battling a similar issue: locally all was working fine (in GUI and non-GUI mode), but as soon as I tried to run the .jmx on the server, I could see it would start threads and all logs seemed normal (the standard info logs within jmeter.log at start), yet no requests were actually made (confirmed by observing logs from the target server). I tried running trace logs, but still nothing out of the ordinary: I could see the proper HTTP config and the threads looked normal... but the generated .jtl file only contained a header, and no requests were actually made.
After some hair pulling I finally found my issue: I'm using variables for threads, rampup and loop, which I invoke using the ${__P(threads,)} syntax, and when running the non-GUI load test I pass those with -J on the command line (e.g. jmeter -Jthreads=50 ...).
I didn't pass the loop parameter, since I figured it would just default to 1 (and that is also the impression I got from the logs, where I saw all thread started ... entries). It turns out that, for some reason, it logs but doesn't actually execute anything until I pass a value for that loop property.
So if you run a test plan and notice it only appears to be doing something but you get no results in the target log file, confirm you're setting ALL variables used for Thread Groups, including those you expect to default (see the example below).
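As an illustration (the plan and property names below simply mirror the ones mentioned above and are placeholders, not authoritative), the Thread Group fields can be given a fallback default via the second argument of the __P function, and every property can still be passed explicitly on the command line:
Number of Threads: ${__P(threads,1)}
Ramp-up period:    ${__P(rampup,1)}
Loop Count:        ${__P(loop,1)}
jmeter -n -t plan.jmx -Jthreads=50 -Jrampup=60 -Jloop=1 -l results.jtl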

The JVM should have exited but did not

During distributed testing with JMeter 3.3 in non-GUI mode I'm getting the error below. How can I fix it?
I'm using the same version of JMeter and the JDK on the master as well as the slave machines.
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[main,5,main],
stackTrace:java.net.SocketInputStream#socketRead0
java.net.SocketInputStream#socketRead
java.net.SocketInputStream#read
java.net.SocketInputStream#read
java.io.BufferedInputStream#fill
java.io.BufferedInputStream#read
java.io.DataInputStream#readByte
sun.rmi.transport.StreamRemoteCall#executeCall
sun.rmi.server.UnicastRef#invoke
java.rmi.server.RemoteObjectInvocationHandler#invokeRemoteMethod
java.rmi.server.RemoteObjectInvocationHandler#invoke
com.sun.proxy.$Proxy19#rrunTest
org.apache.jmeter.engine.ClientJMeterEngine#runTest at line:149
org.apache.jmeter.engine.DistributedRunner#start at line:132
org.apache.jmeter.engine.DistributedRunner#start at line:149
org.apache.jmeter.JMeter#runNonGui at line:1005
org.apache.jmeter.JMeter#startNonGui at line:910
org.apache.jmeter.JMeter#start at line:538
sun.reflect.NativeMethodAccessorImpl#invoke0
sun.reflect.NativeMethodAccessorImpl#invoke
sun.reflect.DelegatingMethodAccessorImpl#invoke
java.lang.reflect.Method#invoke
org.apache.jmeter.NewDriver#main at line:248
I strongly recommend using this JMeter property:
jmeterengine.force.system.exit=true
documented here. A couple of Chinese-language web pages tipped me off.
You can add -Jjmeterengine.force.system.exit=true on the command line when launching JMeter, or add jmeterengine.force.system.exit=true to JMETER_HOME/bin/jmeter.properties.
How I Confirmed This Fix
With JMeter 5.1 and java version "1.8.0_231" on MS-Win10, we're using a customized version of this JMeter InfluxDB backend Listener.
After my 60-second test run from the command line (jmeter.bat -n -t plan.jtl), the command line hung after displaying this output (very similar to the OP's):
Tidying up ... # Wed Jan 29 14:41:04 CST 2020 (1580330464874)
... end of run
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[DestroyJavaVM,5,main], stackTrace:
Thread[pool-2-thread-3,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#parkNanos
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#awaitNanos
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run
Thread[pool-2-thread-4,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#parkNanos
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#awaitNanos
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run
Thread[pool-2-thread-1,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#parkNanos
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#awaitNanos
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run
After modifying my command line as follows, jmeter.bat exited cleanly instead of hanging, and the ugly stack trace went away too:
jmeter.bat -n -Jjmeterengine.force.system.exit=true -t plan.jtl
To confirm that the problem was caused by our customized JMeter InfluxDB backend Listener, I removed it from the .jmx and I also removed the jmeterengine.force.system.exit=true. No hang, no ugly stacktrace (I actually love stacktraces).
I have not taken the next step to discover whether the problem is with the official JMeter InfluxDB backend Listener or with our customized variant, which is not (and will never be) available publicly.
I should mention one gap in this story. I feel this test conclusively points to our customized backend listener (or JMeter's). However, it's odd that none of the threads in the above thread dump seem to belong to the backend listener. So I applaud that JMeter did the right thing by dumping the stack trace; few other apps go to the extent of auto-dumping when it's appropriate for troubleshooting. But in this case, perhaps that JMeter auto-dump code needs to be enhanced, because it did not point to the culprit backend listener code. Anyone over there at Apache listening in on this?
Good luck.
Most probably your JMeter engine(s) is (are) overloaded and therefore cannot gracefully shut down running threads when you request them to do so.
Make sure you follow JMeter Best Practices
The very first "best practice" states Always use the latest version of JMeter, so consider migrating to JMeter 5.0 or whatever the latest version available at the JMeter Downloads page is.
Make sure your JMeter instances have enough headroom to operate in terms of CPU, RAM and so on. You can use JMeter PerfMon Plugin for this if you don't have other monitoring software in place/in mind.
Take a thread dump and examine it - this way you will know where exactly your test is stuck
Introduce reasonable timeout values in HTTP Request Defaults so that when the server fails to respond, JMeter won't wait infinitely but will fail with an error instead.
And finally (though I wouldn't recommend this), you can suppress this check by adding the following line to the user.properties file:
jmeter.exit.check.pause=-1
If you go for this, keep in mind that you may run into a situation where the JMeter slaves are still trying to execute something even after your test ends, so you will need to kill and restart the processes manually or using a script (a rough sketch follows below).
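As a rough, untested sketch of such a cleanup for a slave machine (the path is a placeholder), it can be as simple as killing the hung JMeter process and starting the jmeter-server script again:
pkill -f ApacheJMeter.jar                    # kill the hung JMeter java process
cd /path/to/jmeter/bin && ./jmeter-server    # start the slave again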

JMeter: how to modify properties on the fly while a distributed test is running?

WHAT I HAVE:
A huge JMeter agent network: 1 master + many slaves.
The master sends tasks to the slaves; they start test execution and report data back to the master.
WHAT I WANT:
Be able to modify inputs (passed to the master as global properties with -G) on the fly, while the test is running, without having to stop/restart the test.
WHAT I KNOW:
I can use the BeanShell server to modify JMeter properties while running the test. It works fine. But the BeanShell server starts only on the master, not on the slaves.
The master sends -G properties to the slaves only once, before test execution. Even if the master's properties are updated, the slaves never get this information until the test is restarted.
QUESTION:
How can I modify properties on the slaves as well? Is there a proper solution?
WHAT I THOUGHT ABOUT:
Well, I can do the same trick with a BeanShell server for every slave node too. But this solution doesn't look ideal to me, as I have a lot of agents and would have to update all of them. It takes time.
I can change my test logic to update properties from a file periodically, and then modify the files on the slaves on the fly. This looks even easier to me than #1, as it's cheaper to modify a file remotely over ssh than to launch a separate server on every node.
Actually, all I need is to write a proper method/function for the BeanShell server that not only changes properties locally but also informs the slaves about it. I'm not very familiar with the JMeter source code yet (just started), but I know this is already implemented in the code as part of the remote launching procedure. So, if you can point me to the right class to look into, it would help me a lot.
Any ideas appreciated. Thanks in advance.
You can add a Property File Reader to your Test Plan, which will read and update properties on the slaves, and you can add properties at runtime on the master using a BeanShell/JSR223 element (see the sketch below).
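As a minimal illustration of the runtime part (the property name and value are made up), a BeanShell/JSR223 element exposes the props object, so a property can be updated with something like:
props.put("myProperty", "newValue");                        // update the JMeter property on the node where this runs
log.info("myProperty is now: " + props.get("myProperty"));  // confirm the change in jmeter.log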

Jenkins + Yandex-tank + JMeter and hung jobs

I am using Jenkins CI to automate load testing with yandex-tank + jmeter. I am using distributed testing and starting 10k threads in total. The problem is that when the test should finish, it doesn't, because (I think) some threads on the remote machines are stuck.
Also, I tried using these settings in the jmeter.properties file:
jmeterengine.threadstop.wait=1000
jmeterengine.remote.system.exit=true
jmeterengine.stopfail.system.exit=true
jmeterengine.force.system.exit=true
jmeter.exit.check.pause=1000
But it does not help. Is there another way to force-stop JMeter without killing the java process?
I am having a similar problem; I've tried the Jenkins option:
Abort the build if it's stuck, Strategy: Absolute, 120 minutes
It didn't help for me, but it would be worth a try.

Resources