I am using Jenkins CI to automate load testing with yandex-tank + JMeter. I am using distributed testing and starting 10k threads in total. The problem is that the test should finish but it doesn't, because (I think) some threads on the remote machines are stuck.
I also tried these settings in the jmeter.properties file:
jmeterengine.threadstop.wait=1000
jmeterengine.remote.system.exit=true
jmeterengine.stopfail.system.exit=true
jmeterengine.force.system.exit=true
jmeter.exit.check.pause=1000
But it does not help. Is there another way to force-stop JMeter without killing the Java process?
I am having a similar problem. I've tried the Jenkins option:
Abort the build if it's stuck, Strategy: Absolute, 120 minutes
It didn't help in my case, but it may be worth a try.
Related
I am facing a performance bottleneck when initiating a process if I set customSLADueDate in a jBPM workflow, like below:
<bpmn2:extensionElements>
  <tns:metaData name="customSLADueDate">
    <tns:metaValue><![CDATA[4d]]></tns:metaValue>
  </tns:metaData>
</bpmn2:extensionElements>
But if I remove the above snippet from the workflow XML, then I do not face any performance issue. I am using the PER_REQUEST runtime strategy to deploy the container containing the said process.
Can anyone please help identify what should be done to resolve the performance issue when setting customSLADueDate?
Note: I am initiating the workflow through JMeter, running 25 threads with a 25 s ramp-up time. The 95th percentile is approximately 5 seconds with customSLADueDate and approximately 500 ms without it.
Thanks in advance for your help.
I am trying to run long soak tests (24 h) in JMeter while monitoring server CPU/RAM utilization, using the PerfMon server agent and plugin. The tests run headless, using the JMeter Docker image. I have everything set up and the job runs fine in Jenkins. Measurements are sent to the server every 10 minutes, and the test results are saved to CSV during the test.
However, although the test runs for 24 hours, the PerfMon agent seems to send data for only around 2 hours; that is how much data I can see saved in the CSV file. Regardless of whether the test runs for 5 hours or 24, only 2 hours of data is saved.
I wonder what causes this; ideally I would like to see all the data saved for the whole duration of the test. Would appreciate comments. Cheers
The issue was caused by a 'packet_write_wait: connection to 3.126.218.168 port 22: broken pipe' error.
This was resolved by keeping the SSH session alive, i.e. modifying the ~/.ssh/config file and setting the required values for 'Host *', 'ServerAliveInterval' and 'ServerAliveCountMax'.
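For reference, a minimal sketch of such a ~/.ssh/config entry (the interval and count values here are illustrative; tune them to your environment):
Host *
    # send a keep-alive probe after 60 seconds of inactivity
    ServerAliveInterval 60
    # drop the connection only after 10 unanswered probes
    ServerAliveCountMax 10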
WHAT I HAVE:
A JMeter network created for tests in distributed mode: one JMeter master plus a few JMeter slaves. BeanShell server disabled. Everything works fine.
WHAT I WANT TO DO:
I want to enable the BeanShell server to be able to modify properties on the fly.
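(For reference, the server is enabled via the standard jmeter.properties entries; the port below is the usual example value, not necessarily the one I use:
beanshell.server.port=9000
beanshell.server.file=../extras/startup.bsh)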
ACTUAL RESULT:
BeanShell server starts and works successfully.
Once the test is done, the following message appears in the log:
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[Thread-7,5,main], stackTrace:java.net.PlainSocketImpl#socketAccept
java.net.AbstractPlainSocketImpl#accept at line:409
java.net.ServerSocket#implAccept at line:545
java.net.ServerSocket#accept at line:513
bsh.util.Sessiond#run at line:65
java.lang.Thread#run at line:748
Thread[Thread-5,5,main], stackTrace:java.net.PlainSocketImpl#socketAccept
java.net.AbstractPlainSocketImpl#accept at line:409
java.net.ServerSocket#implAccept at line:545
java.net.ServerSocket#accept at line:513
bsh.util.Httpd#run at line:64
java.lang.Thread#run at line:748
It's clear that this happens because of the BeanShell server, which keeps running and doesn't exit for some reason.
As a result, the Java process never exits and hangs.
QUESTION:
Any ideas why this happens? How can I avoid it? I don't connect to the BeanShell server at all, neither over HTTP nor over telnet.
MORE DETAILS:
All the nodes are running as Docker containers.
All nodes are deployed in AWS.
I can't reproduce the same issue locally on my machine. Even with the BeanShell server enabled, everything works smoothly and Java exits as it should.
WHAT I TRIED:
I tried setting the following properties:
jmeterengine.remote.system.exit=true
jmeterengine.stopfail.system.exit=true
jmeterengine.force.system.exit=true
This doesn't help; the Java process still hangs.
The same goes for
bsh.system.shutdownOnExit = true;
Any ideas are very much appreciated.
I don't think there is a way of stopping the BeanShell server along with the test, as it runs in an endless loop in its own thread.
I would recommend stopping your test and the BeanShell server using a .bsh file like:
stopEngine();
Thread.sleep(5000L); // just in case wait for 5 seconds for graceful shutdown
System.exit(0);
where:
stopEngine() is a shorthand for StandardJMeterEngine.stopEngine(), it's defined in startup.bsh
System.exit(0); - basically shuts down the whole JVM with 0 exit status code
This way you will be able to turn everything off the same way you amend the properties at runtime.
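For example, assuming the BeanShell server listens on its usual example port 9000 and the snippet above is saved as stop.bsh, you could submit it with the bshclient.jar utility that ships with JMeter:
java -jar lib/bshclient.jar localhost 9000 stop.bsh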
You can also achieve the same by executing the System.exit(0); command automatically from e.g. a tearDown Thread Group using an OS Process Sampler; however, in this case make sure to set the following property:
jmeter.save.saveservice.autoflush=true
otherwise you can lose some results which have not been written to disk yet.
Finally, I found a workaround, so I am posting it as an answer here.
It doesn't actually answer the original question, but I don't want to waste time trying to find the root cause, as it seems to be related to the AWS infrastructure (I can't reproduce it locally inside the same container I launch remotely).
So, the final scheme is:
To update properties on the fly, I don't have to launch the BeanShell server on both slave and master. The real property change happens on the slave, so the master is not required. This solves my specific case.
To stop a test, you can use the existing shutdown.sh and stoptest.sh scripts (see the note after this list). There is no need to do it over the BeanShell server.
If the test was properly stopped, then the -X option (server.exitaftertest=true) also works fine, so a forced JVM shutdown is not required, and you don't have to add a tearDown Thread Group that does System.exit. But the idea is nice.
It's still worth having a shutdown.bsh script as a tool to force-kill it anyway.
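For reference, a sketch of how the two bundled scripts differ, assuming the default shutdown port (4445) and a non-GUI run started from the JMeter bin directory:
./shutdown.sh   # graceful shutdown: lets running samplers complete before stopping
./stoptest.sh   # abrupt shutdown: stops the test immediately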
During distributed testing with JMeter 3.3 in non-GUI mode I'm getting the error below. How can I fix it?
I'm using the same versions of JMeter and the JDK on the master as well as the slave machines.
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[main,5,main],
stackTrace:java.net.SocketInputStream#socketRead0
java.net.SocketInputStream#socketRead
java.net.SocketInputStream#read
java.net.SocketInputStream#read
java.io.BufferedInputStream#fill
java.io.BufferedInputStream#read
java.io.DataInputStream#readByte
sun.rmi.transport.StreamRemoteCall#executeCall
sun.rmi.server.UnicastRef#invoke
java.rmi.server.RemoteObjectInvocationHandler#invokeRemoteMethod
java.rmi.server.RemoteObjectInvocationHandler#invoke
com.sun.proxy.$Proxy19#rrunTest
org.apache.jmeter.engine.ClientJMeterEngine#runTest at line:149
org.apache.jmeter.engine.DistributedRunner#start at line:132
org.apache.jmeter.engine.DistributedRunner#start at line:149
org.apache.jmeter.JMeter#runNonGui at line:1005
org.apache.jmeter.JMeter#startNonGui at line:910
org.apache.jmeter.JMeter#start at line:538
sun.reflect.NativeMethodAccessorImpl#invoke0
sun.reflect.NativeMethodAccessorImpl#invoke
sun.reflect.DelegatingMethodAccessorImpl#invoke
java.lang.reflect.Method#invoke
org.apache.jmeter.NewDriver#main at line:248
I strongly recommend using this JMeter property:
jmeterengine.force.system.exit=true
documented here. These Chinese-language web pages (link, link) tipped me off.
You can add -Jjmeterengine.force.system.exit=true on the command line when launching JMeter, or add jmeterengine.force.system.exit=true to JMETER_HOME/bin/jmeter.properties.
How I Confirmed This Fix
With JMeter 5.1 and Java version "1.8.0_231" on MS Windows 10, we're using a customized version of this JMeter InfluxDB backend listener.
After my 60-second test run from the command line (jmeter.bat -n -t plan.jmx), the command line hung after displaying this output (very similar to the OP's):
Tidying up ... # Wed Jan 29 14:41:04 CST 2020 (1580330464874)
... end of run
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[DestroyJavaVM,5,main], stackTrace:
Thread[pool-2-thread-3,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#parkNanos
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#awaitNanos
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run
Thread[pool-2-thread-4,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#parkNanos
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#awaitNanos
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run
Thread[pool-2-thread-1,5,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#parkNanos
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#awaitNanos
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue#take
java.util.concurrent.ThreadPoolExecutor#getTask
java.util.concurrent.ThreadPoolExecutor#runWorker
java.util.concurrent.ThreadPoolExecutor$Worker#run
java.lang.Thread#run
After modifying my command line as follows, jmeter.bat exited cleanly instead of hanging, and the ugly stack trace went away too:
jmeter.bat -n -Jjmeterengine.force.system.exit=true -t plan.jmx
To confirm that the problem was caused by our customized JMeter InfluxDB backend Listener, I removed it from the .jmx and I also removed the jmeterengine.force.system.exit=true. No hang, no ugly stacktrace (I actually love stacktraces).
I have not taken the next step to discover whether the problem is with the official JMeter InfluxDB backend Listener or with our customized variant, which is not (and will never be) available publicly.
I should mention one gap in this story. I feel this test conclusively points to our customized backend listener (or JMeter's). However, it's odd that none of the threads in the above thread dump seem to belong to the backend listener. I applaud JMeter for doing the right thing by dumping the stack trace; few other apps go to the extent of auto-dumping when appropriate for troubleshooting. But in this case, perhaps that JMeter auto-dump code needs to be enhanced, because it did not point to the culprit backend listener code. Anyone over there at Apache listening in on this?
Good luck.
Most probably your JMeter engine(s) are overloaded and therefore cannot gracefully shut down running threads when you request them to do so.
Make sure you follow JMeter Best Practices
The very first best practice states "Always use the latest version of JMeter", so consider migrating to JMeter 5.0 or whatever latest version is available at the JMeter Downloads page.
Make sure your JMeter instances have enough headroom to operate in terms of CPU, RAM and so on. You can use the JMeter PerfMon Plugin for this if you don't have other monitoring software in place or in mind.
Take a thread dump and examine it; this way you will know where exactly your test is stuck (see the example after this list).
Introduce reasonable timeout values in the HTTP Request Defaults so that if the server fails to respond, JMeter doesn't wait infinitely but rather fails with an error.
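For example, a thread dump can be taken with the jstack utility that ships with the JDK (the PID lookup pattern below is an assumption; adjust it to how JMeter runs on your machines):
# find the JMeter JVM's process id, then dump its threads to a file
jps -l | grep -i jmeter
jstack -l <pid> > jmeter_thread_dump.txt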
And finally (although I wouldn't recommend this) you can suppress this check by adding the following line to the user.properties file:
jmeter.exit.check.pause=-1
If you go for this, keep in mind that you may run into a situation where the JMeter slaves are still trying to execute something even after your test ends, so you will need to kill and restart the processes manually or using a script.
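A minimal sketch of such a cleanup script, assuming Linux slaves launched via the bundled jmeter-server script (the process-matching pattern is an assumption; match it to your actual launch command):
# kill any leftover JMeter server JVMs, then start a fresh one
pkill -f ApacheJMeter || true
cd $JMETER_HOME/bin && ./jmeter-server > /dev/null 2>&1 &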
I am fighting an issue where my Selenium IDE test succeeds when searching for specific elements on a page (Command=waitForElementPresent and Target=link=Related Sites), but that same command fails when I run it from within the Jenkins 'SeleniumHQ htmlSuite Run'.
As an example;
waitForElementPresent link=Related Sites Timed out after 30000ms
I don't want to give the impression that all 'waitForElementPresent' commands are failing, because some are succeeding.
I just don't know whether I'm dealing with a timing issue or whether I have to code the test differently when it runs within the Jenkins 'SeleniumHQ htmlSuite Run'.
This is perplexing me and I am unsure how to solve it; any advice to help me understand why I get different behavior for the same command would be appreciated.
Thank you in advance for your assistance.
I guess your script has a default timeout set; that's why you are facing the problem.
Just try using selenium.setTimeout("0") where you construct and start Selenium.
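A minimal sketch of what that looks like with the old Selenium RC Java client (the host, port, browser and URL are placeholders):

import com.thoughtworks.selenium.DefaultSelenium;

public class NoTimeoutExample {
    public static void main(String[] args) {
        // connect to a Selenium RC server; all four arguments are illustrative
        DefaultSelenium selenium =
                new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
        selenium.start();
        // "0" disables the default command timeout, so waitFor* commands
        // rely on their own explicit timeouts instead
        selenium.setTimeout("0");
    }
}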