I created a JMeter test plan with 2000 threads and a 10-second ramp-up time.
When I ran the test against an Apache server, some of my samples returned a connection refused error.
The connection refused error occurred after 21 seconds.
So my question is: does this 21-second delay originate from JMeter or from the Apache web server?
As far as I know, the Apache server's default timeout is 30 seconds, and I didn't change it.
This means your Apache server is refusing connections, which suggests it is either overloaded or misconfigured.
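If it does turn out to be the server, one thing worth checking is whether the MPM worker and backlog limits can absorb roughly 200 new connections per second (2000 threads over a 10-second ramp-up). A minimal sketch, assuming the event MPM; the directive values here are purely illustrative, not recommendations:

<IfModule mpm_event_module>
    ServerLimit         16
    ThreadsPerChild     64
    MaxRequestWorkers   1024
    ListenBacklog       1024
</IfModule>

If these are still at their defaults, new connections can be refused or left waiting once the backlog fills up, independently of the 30-second Timeout directive (which applies to already-established connections).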
I am using JMeter with the MQTT JMeter Plugin to do a load test.
Here is my use case:
Start 8000 users (threads) over a 30-minute ramp-up
Each user sends one MQTT connect message
Each user runs 720 loops that publish a message, with a 5-second timer
Here is my JMeter test plan:
My threads:
My loop controller:
My timer:
After starting JMeter, everything is fine:
But after 20 minutes, I am getting many errors for my publish messages:
Here is the error message:
My MQTT server is up and there is no problem with it.
JMeter logs:
Aug 01, 2021 3:04:33 PM java.util.Optional ifPresent
INFO: MQTT client is not connected.
Aug 01, 2021 3:04:33 PM net.xmeter.samplers.PubSampler sample
INFO: ** [clientId: ps303411a2200c4e1ca4f34, topic: /test/, payload: 1627830273593ts Publish failed for connection HiveMQTTConnection{clientId='ps303411a2200c4e1ca4f34'}.
Aug 01, 2021 3:04:33 PM java.util.Optional ifPresent
INFO: MQTT client is not connected.
What is the problem? Is it related to the JMeter test plan, or to my local machine? I am using an EC2 x3 large machine to run JMeter in the background.
Since your ramp-up period is 1800 seconds, you have roughly 5300 threads at the 20-minute mark (8000 × 1200 / 1800 ≈ 5333), which is where I think your server starts to saturate. The 501 return code may indicate that some kind of fallback mechanism could give more details about the error, but I'm not sure...
MQTT client is not connected.
This indicates that the connection is down. If you don't see anything suspicious in the JMeter logs, it most probably means that your server is overloaded and cannot handle that many concurrent connections/messages.
Use a combination of listeners like Active Threads Over Time and Response Codes per Second to see the exact number of users at which the problems start occurring
Monitor resource usage (CPU, RAM, network sockets, disk I/O, etc.) to ensure that the MQTT server has enough headroom to operate; you can use the JMeter PerfMon Plugin for this
Check your server logs
Increase JMeter logging verbosity for the MQTT plugin by adding the following line to the log4j2.xml file:
<Logger name="net.xmeter" level="debug" />
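For context, that line goes inside the <Loggers> element of JMETER_HOME/bin/log4j2.xml. A sketch of the placement; the surrounding elements are taken from a stock JMeter install and may differ slightly in your version:

<Loggers>
    <!-- debug logging for the xmeter MQTT plugin -->
    <Logger name="net.xmeter" level="debug" />
    <!-- ... existing loggers ... -->
    <Root level="info">
        <AppenderRef ref="jmeter-log" />
    </Root>
</Loggers>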
All the required changes have been made in the respective files, for example:
stalecheck=true,
keepalive is checked in HTTP Request Defaults,
retrycount=1,
hc.parameters file changes,
socket timeout is 240000
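For reference, a sketch of where those settings typically live (property names assume JMeter's HttpClient4 implementation and the legacy hc.parameters mechanism the list above refers to; the keepalive and the 240000 ms timeouts are configured in HTTP Request Defaults rather than in a properties file):

# user.properties
httpclient4.retrycount=1
hc.parameters.file=hc.parameters

# hc.parameters
http.connection.stalecheck$Boolean=true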
Still, we see "java.net.SocketException: Connection reset" in the response data, even though I can see valid requests being passed to the server.
The issue did not appear until we reached 3000 users; everything worked smoothly up to that point.
Connection reset can mean a lot of things; possible reasons are:
One of the server components is not able to handle the load, so it closes connections on its side
On the JMeter side, check that you are running in non-GUI mode and that neither the JMeter JVM nor the injector machine is overloaded, which could explain this. See:
https://jmeter.apache.org/usermanual/get-started.html#non_gui
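For example, a minimal non-GUI run looks like this (file names are placeholders):

jmeter -n -t test_plan.jmx -l results.jtl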
I am running a JMeter test in distributed mode. I have set up SSH tunneling for the slaves since they are not in the same region. I have executed 2700 users, which ran fine. When we try to run 5200 users, users go into finished status even though the steady state is 1 hour. I am using the Ultimate Thread Group.
Out of 5200 users, 4500 run fine and 600 go into the finished state.
I am seeing the error below in the JMeter server logs:
ERROR o.a.j.t.JMeterThread: Test failed!
org.apache.jorphan.util.JMeterError: Could not return sample
at org.apache.jmeter.samplers.StandardSampleSender.sampleOccurred(StandardSampleSender.java:70) ~[ApacheJMeter_core.jar:3.3 r1808647]
Caused by: java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
java.net.ConnectException: Connection refused: connect
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(Unknown Source) ~[?:1.8.0_151]
Any idea? What is causing this?
It seems from the logs that your tunnel is broken:
Caused by: java.rmi.ConnectException: Connection refused to host: 127.0.0.1;
See this:
https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
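For example, keepalive options on the tunnel help it survive long, mostly idle stretches; the ports and hosts below are placeholders, so substitute whatever your JMeter RMI setup actually forwards:

ssh -N -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o ExitOnForwardFailure=yes -L 1099:localhost:1099 user@master-host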
The error is about the JMeter slave being unable to establish a connection to deliver results to the master, so the problem is on the SSH server side; you can look into its logs to see what went wrong.
The options are:
SSH stands for "Secure Shell", so all the traffic gets encrypted. It also gets compressed in order to decrease the footprint. Under high load the SSH server and/or client may consume a lot of resources and, for example, get killed by the OOM killer. You can try configuring your SSH server to disable compression and to use a weaker encryption algorithm, for example arcfour.
It might be easier to move the "master" to the "other region" than to pass a lot of test results over SSH.
Another option could be using a VPN with no encryption; this way all JMeter nodes will be in the same subnet.
And finally, you can disable automatic sending of test results from the slaves to the master:
set the mode=DiskStore property on the JMeter slaves
when your test is finished, collect the results from the slaves and copy them over to the master
use the Merge Results Tool to combine the multiple result files into a single .jtl file; the Merge Results Tool can be installed using the JMeter Plugins Manager
I have a 50-thread test in JMeter with multiple sessions, but when I run it, half of the threads fail and I get this connection error:
Response code: 500
Response message: Connection refused: connect Aborting action - session 656255658 was closed
Check 2 things:
Are you sure you're not reusing the same session across threads? Are you correctly correlating the session ID?
If the issue only happens above some limit (for example, not at 25 users but at 50), then it's a load issue or a configuration limit on the server side.
When the Oracle 10 databases are up and running fine, OCILogon2() will connect immediately. When the databases are turned off or inaccessible due to network issues - it will fail immediately.
However, when our DBAs go into emergency maintenance and block incoming connections, it can take 5 to 10 minutes to time out.
This is problematic for me since I've found that OCILogon2() isn't thread safe and we can only use it serially - and I connect to quite a few Oracle DBs. 3 blocked servers x 5-10 minutes = 15 to 30 minutes of lockup time.
Does anyone know how to set the OCILogon2 connection timeout?
Thanks.
I'm currently playing with OCI and it seems to me that it's impossible.
The only way I can think of is to use non-blocking mode. You'll need OCIServerAttach() and OCISessionBegin() instead of OCILogon() in this case. But when I tried this, OCISessionBegin() kept returning OCI_ERROR with the following error code:
ORA-03123 operation would block
Cause: The attempted operation cannot complete now.
Action: Retry the operation later.
It looks strange and I don't yet know how to deal with it.
A possible workaround is to run your logon in another process, which you can kill after a timeout...
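A rough sketch of that workaround in POSIX C (the OCI call itself is left as a placeholder and error handling is trimmed; since the child process owns whatever connection it opens, in practice this works best as a pre-flight check or with the database work moved into the worker process):

/* Sketch only: run the blocking logon in a child process and kill it
 * if it has not finished within timeout_seconds. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int logon_with_timeout(int timeout_seconds)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;                      /* fork failed */
    if (pid == 0) {
        /* Child: do the blocking OCILogon2() call here and exit with
           0 on success, non-zero on failure. */
        _exit(0);                       /* placeholder for the real logon */
    }

    /* Parent: poll the child once a second until it exits or we give up. */
    for (int waited = 0; waited < timeout_seconds; waited++) {
        int status;
        if (waitpid(pid, &status, WNOHANG) == pid)
            return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
        sleep(1);
    }

    kill(pid, SIGKILL);                 /* timed out: kill the blocked logon */
    waitpid(pid, NULL, 0);              /* reap the child */
    return -1;
}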
We think we found the right file setting - but it's one of those problems where we have to wait until something rare and horrible occurs before we can verify it :-/
[sqlnet.ora]
SQLNET.OUTBOUND_CONNECT_TIMEOUT=60
From the Oracle docs:
http://download.oracle.com/docs/cd/B28359_01/network.111/b28317/sqlnet.htm#BIIFGFHI
5.2.35 SQLNET.OUTBOUND_CONNECT_TIMEOUT
Purpose
Use the SQLNET.OUTBOUND_CONNECT_TIMEOUT parameter to specify the time, in seconds, for a client to establish an Oracle Net connection to the database instance.
If an Oracle Net connection is not established in the time specified, the connect attempt is terminated. The client receives an ORA-12170: TNS:Connect timeout occurred error.
The outbound connect timeout interval is a superset of the TCP connect timeout interval, which specifies a limit on the time taken to establish a TCP connection. Additionally, the outbound connect timeout interval includes the time taken to be connected to an Oracle instance providing the requested service.
Without this parameter, a client connection request to the database server may block for the default TCP connect timeout duration (approximately 8 minutes on Linux) when the database server host system is unreachable.
The outbound connect timeout interval is only applicable for TCP, TCP with SSL, and IPC transport connections.
Default
None
Example
SQLNET.OUTBOUND_CONNECT_TIMEOUT=10