How do I get jconsole working quickly over X11?

Scenario:
I am trying to get some real-time analysis across our threads to isolate the cause of a thread deadlock.
JConsole is very slow when running via SSH with X11 forwarding.
I was experiencing up to 30 seconds of delay per click.
What can be done to make JConsole run at a reasonable speed?

Launch jconsole in the following way:
jconsole -J-Dsun.java2d.xrender=True
jconsole is faster, but it is still very slow to respond.
The next step is to minimize any graphs.
The table containing the threads can now be browsed quickly.
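If the GUI stays too sluggish to click through, the same deadlock information jconsole shows can also be pulled programmatically with the standard ThreadMXBean API. A minimal sketch, assuming you can run it inside (or attach it to) the monitored JVM:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            // IDs of threads deadlocked on monitors or ownable synchronizers, or null if none
            long[] ids = bean.findDeadlockedThreads();
            if (ids == null) {
                System.out.println("No deadlocked threads found.");
                return;
            }
            // Dump each deadlocked thread with its locked monitors and synchronizers
            for (ThreadInfo info : bean.getThreadInfo(ids, true, true)) {
                System.out.println(info);
            }
        }
    }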

Related

JMeter threads stuck during the load test

I am running a load test using JMeter with 200 users for approximately 1 hour. The observation is that some threads are stuck even after the duration completes: around 60 out of 200. When I take a thread dump, I observe that these threads are in a RUNNABLE state. I do not see anything meaningful in the JMeter log file either. Any suggestions for resolving this issue?
You will find an unexpected increase in response time at the end of the test.
This is caused by insufficient ramp-down time for the threads. Some of your threads were still active and had sent requests to the server but had not yet received responses when they were closed forcefully. If a JMeter test is stopped forcefully, all active threads are closed immediately, so the requests generated by those threads show a higher response time.
You can use the Ultimate Thread Group plugin to give threads a graceful shutdown (ramp-down) time, just like the ramp-up time.
Here is an example setting:
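The plugin's schedule row takes a thread count plus timing columns. These values are illustrative only, assuming the 200-user, roughly 1-hour test above; tune them to your own plan:

    Start Threads Count: 200
    Initial Delay, sec:  0
    Startup Time, sec:   300    (ramp-up)
    Hold Load For, sec:  3000
    Shutdown Time, sec:  300    (ramp-down)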
This is not normal behaviour for a JMeter test; most probably it indicates that either the JMeter engine is overloaded (not properly configured for high load) or the machine where JMeter is running is overloaded (i.e. it lacks RAM and starts swapping intensively).
Make sure to follow JMeter Best Practices (run your test in non-GUI mode, remove all Listeners and test elements you don't need, increase the JMeter heap size, etc.); see the example command after this list.
Make sure to monitor the essential health metrics of the machine where JMeter is running (CPU, RAM, network and disk IO, swap file usage). You can use the JMeter PerfMon Plugin for this if you don't have better software.
It might be the case that you'll have to switch to Distributed Testing. 200 virtual users doesn't seem like a "high" load to me, but it depends on what exactly these users are doing; if they're uploading or downloading large files, that may be enough to cause the problems.
Going forward, consider adding the thread dump and the jmeter.log file contents to your question; as it stands it doesn't contain any clues, so we can only come up with "blind shot" answers.
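As a concrete example, a non-GUI run with an increased heap might look like the line below. The heap sizes and file names are placeholders to adapt to your environment; the jmeter.sh startup script honours the JVM_ARGS variable:

    JVM_ARGS="-Xms1g -Xmx4g" ./jmeter.sh -n -t test.jmx -l results.jtl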
You may want to check your HTTP timeouts.
I usually set Connect Timeout to 5000 milliseconds and Response Timeout to 30000.
Your values may vary for your specific environment/application.
This way, if things go bad on the server under test, all requests terminate within the timeout (with errors).
You also have to consider that if you are retrieving an HTML page with all of its embedded objects and the web server is stuck, you need to wait for multiple timeouts to expire before the operation terminates.
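For reference, these two settings correspond to the standard connect/read timeout pair in Java's own HTTP stack; a minimal sketch of what each one means (the URL is a placeholder):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class TimeoutExample {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5000);  // give up if the TCP connection takes longer than 5 s
            conn.setReadTimeout(30000);    // give up if the server stalls for more than 30 s mid-response
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }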

Threads keep running even after the test finishes in JMeter

I am running a 24-hour load test in JMeter with 3256 threads, but even after 28 hours some of the threads keep running and do not ramp down. There are several errors in the run.
Even when I choose to stop the threads, the "Shutting down all the threads, please be patient" pop-up appears and stays forever, and no threads ramp down.
For your information: Number of Threads: 3256; Ramp-Up Period: 300; Loop Count: 192.
Considering all the think/wait time in the script, the scenario should run for 24 hours.
How can I close all the threads forcefully?
The following options are available:
JMeter listens for shutdown messages on port 4445. There are two scripts in the /bin folder of your JMeter installation:
shutdown.cmd (or .sh) - sends a graceful shutdown request to all threads
stoptest.cmd (or .sh) - force-stops the threads
Use the Test Action sampler's "Stop Now" option for "All Threads".
Use a Beanshell Sampler with the following code:
SampleResult.setStopTestNow(true);
However, this way you can get lots of errors in your test results, caused by the forced shutdown of the test threads.
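If you would rather give in-flight samplers a chance to finish first, SampleResult also exposes a graceful variant; a short sketch for the same Beanshell Sampler context:

    // Request a graceful stop: threads finish their current samplers first
    SampleResult.setStopTest(true);
    // The forceful equivalent shown above:
    // SampleResult.setStopTestNow(true);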
Actually, I think the behaviour you're experiencing is caused by a lack of resources on your load generator (JMeter) side. Try following the recommendations from the JMeter Performance and Tuning Tips guide to see if that helps (you don't need to wait the full 24 hours; it is enough to wait until all the threads are ramped up).
If adjusting the JMeter parameters doesn't help, it looks like you'll need to consider distributed testing and generate the load from more than one host.

JMeter remote/distributed test throughput error

I have created a simple test (just downloading a file from a well-known site like Flickr or Google). When I run the test locally (either from JMeter directly or against the locally running jmeter-server), the average time is 250 ms and the throughput 29.4/s. When I remote-start this test on a host (which has a much better internet connection), the resulting average time is 225 ms but the throughput is extremely low: around 2/s or even below 1/s. The average time looks reasonable; the throughput number is totally useless. It appears that JMeter is somehow counting the time between the local JMeter driver and the jmeter-server, rather than aggregating the throughput as experienced by each jmeter-server. How can we get the right throughput numbers in remote/distributed tests?
One more addition (after removing the inactive slaves from jmeter.properties):
Time must be synchronized between all the machines: the master and all the slaves. If the clocks are not in sync, throughput will plummet. As Hacking Bear said, JMeter is not smart enough to aggregate results on the slave machines and sum them up on the master. Instead, the slaves send all the start and finish times to the master, and the master does the aggregation. So if time is not synchronized between all the machines, you won't get the proper throughput.
If you want to push the date/time of one machine (machine A) to all the others, run
sudo ntpdate <machine-A-ip-address>
on all machines where you are running JMeter (the slaves) and also on the master machine.
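To check the current offset without actually changing a clock, ntpdate also has a query-only mode:

    ntpdate -q <machine-A-ip-address>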
Figured it out. The reason is that when you have multiple remote JMeter servers configured but start only one of them, JMeter is not smart enough to know that! It keeps waiting for the servers that never started to reply, causing the stats to plummet. The workaround is to ensure that all the JMeter servers are started and working.
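Alternatively, instead of trimming remote_hosts in jmeter.properties, you can name only the slaves you actually started on the command line with the -R option (the host names here are placeholders):

    jmeter -n -t test.jmx -R host1,host2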

Log file writing extremely delayed in WebSphere App Server

I am experiencing an issue with delayed writes to the application logs for a Java EE web application running in IBM WebSphere v7.x. Logging statements are taking up to an hour to appear in the application logs.
The problem doesn't appear to be related to heavy load; WAS is responding to page requests almost instantly, and I am testing against a box that isn't used for performance testing, on a holiday no less; there is very little activity on the server.
My guess would be that the thread associated with logging has been configured with a very low priority, but I can't figure out where that would be configured via the admin console or the configuration files.
Has anyone else experienced this sort of issue with WebSphere?
It's possible you don't even have enough available threads in the thread pool. That would be consistent with the page requests being fast, as those are handled by the WebContainer threads.
Try increasing it:
Servers > Application Servers > Thread pools > ...
I'm not sure exactly which one's maximum value to increase. In the worst case, increase them all, and increase them heavily, just to be sure.
Other options:
make sure you have enough disk space / try to connect with JConsole to investigate.
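For the JConsole route, you can attach either by PID on the same host or over a remote JMX port, assuming JMX remote access is enabled on the server (the PID and host:port below are placeholders):

    jconsole 12345
    jconsole app-server-host:9010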

Is there any reason not to reduce Ping Maximum Response Time in IIS 7

IIS includes a worker process health check "ping" function that pings worker processes every 90 seconds by default and recycles them if they don't respond. I have an application that is chronically putting app pools into a bad state, and I'm curious whether there is any reason not to lower this time so that IIS recycles a failed worker process more quickly. Searching the web, all I can find are people increasing the time to allow for debugging. It seems like 90 seconds is far too high for a web application, but perhaps I'm missing something.
Well, the obvious answer is that in some situations it takes the worker process longer than 90 seconds to respond. If you can't imagine a situation where this would be appropriate, then feel free to lower it.
I wouldn't recommend going much lower than 30 seconds; I can see situations where you could get into recycle loops. However, you can do some testing and see what makes sense in your situation. I would recommend Siege for load testing, to see how your application behaves.
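If you do decide to lower it, the value lives on the application pool's processModel element. One hedged example using appcmd (the pool name and timespan are placeholders; the timespan format is hh:mm:ss):

    appcmd set apppool /apppool.name:"MyAppPool" /processModel.pingResponseTime:00:00:45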
