I get the following error on the master machine while running a distributed load test in non-GUI mode with JMeter. How can I resolve it?
Message on the master:
C:\apache-jmeter-5.4.1\bin>jmeter -Djava.rmi.server.hostname=xx.xx.xx.xx -n -t C:\apache-jmeter-5.4.1\bin\examples\masterslavetest.jmx -l C:\apache-jmeter-5.4.1\bin\examples\result.jtl -R xx.xx.xx.xx
Creating summariser <summary>
Created the tree successfully using C:\apache-jmeter-5.4.1\bin\examples\masterslavetest.jmx
Configuring remote engine: xx.xx.xx.xx
Using local port: 4000
Starting distributed test with remote engines: [xx.xx.xx.xx] # Thu Mar 04 18:53:43 GMT 2021 (1614884023471)
Error in rconfigure() method java.rmi.MarshalException: error marshalling arguments; nested exception is:
java.io.NotSerializableException: org.apache.jmeter.JMeter$ListenToTest
Remote engines have been started:[]
The following remote engines have not started:[xx.xx.xx.xx]
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
Message on the slave:
Using local port: 4000
Created remote object: UnicastServerRef2 [liveRef: [endpoint:[xx.xx.xx.xx:4000,SSLRMIServerSocketFactory(host=lhr4-pegajm-03/xx.xx.xx.xx, keyStoreLocation=rmi_keystore.jks, type=JKS, trustStoreLocation=rmi_keystore.jks, type=JKS, alias=rmi),SSLRMIClientSocketFactory(keyStoreLocation=rmi_keystore.jks, type=JKS, trustStoreLocation=rmi_keystore.jks, type=JKS, alias=rmi)](local),objID:[79aa42b8:177fe8fb2b5:-7fff, 5964228045381296735]]]
Below are a few additional details:
JMeter: 5.4.1
Java: 15
Running the tests on Windows 10 VMs
Opened server.rmi.localport, client.rmi.localport and server.rmi.port
The slave doesn't show any logs
Related
I have an issue when I run JMeter distributed:
Error in rconfigure() method java.rmi.ConnectException: Connection refused to host: 192.168.200.22; nested exception is: java.net.ConnectException: Connection timed out: connect
Master: 192.168.200.21
Slave: 192.168.200.22
I have configured remote_hosts in jmeter.properties and started jmeter-server.bat on both the master machine and the slave machine.
How do I fix it?
I am trying to connect to the slave machine.
Most probably you need to open port 1099 (or whatever the value of your server_port property is) in your operating system firewall.
Also be aware that the slave will need to send the test results back to the master, so you need to open client.rmi.localport plus up to 3 consecutive ports starting from it on the master machine.
And finally, any JMeter configuration overrides should go into the user.properties file.
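As an illustration, here is a minimal user.properties sketch, assuming 60000 is a free base port on the master (the port numbers are placeholders; any free ports work as long as the firewall rules match):

# user.properties on the master
# base port for receiving results from the slaves; JMeter may use
# up to 3 ports starting here, so open 60000-60002 inbound
client.rmi.localport=60000

# user.properties on each slave
server_port=1099
# fix the slave RMI traffic to a known port instead of a dynamic one
server.rmi.localport=4000

After editing the files, restart jmeter-server.bat on the slave and re-run the test from the master.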
More information:
Remote hosts and RMI configuration
Configuring JMeter
How to Perform Distributed Testing in JMeter
I am trying to run a distributed test using JMeter; I have 2 EC2 instances:
Master Public IP: 54.xxx.xx.xx
Slave Public IP: 204.xxx.xxx.xxx
I have opened all the necessary ports that were used in the configuration.
I can ping each EC2 from the other one and the ping is successful.
But when I try to start the test, the server fails and returns [No route to host (Host unreachable)].
My plan is to use more than 1 slave.
Error returned from the master server:
As per the NoRouteToHostException description:
Signals that an error occurred while attempting to connect a socket to a remote address and port. Typically, the remote host cannot be reached because of an intervening firewall, or if an intermediate router is down.
So make sure that the RMI ports which the JMeter slave is listening on are (see the sketch after this list):
Not dynamic
Open in your operating system firewall
Open in EC2 Security Groups
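For example, a minimal user.properties sketch for the slave that pins the RMI ports to fixed values (1099 and 4000 are common defaults; any free ports work as long as the firewall and Security Group rules match):

# user.properties on the slave
server_port=1099
# use a fixed local RMI port instead of a dynamically allocated one
server.rmi.localport=4000

Then open TCP 1099 and 4000 both in the operating system firewall and as inbound rules in the slave's EC2 Security Group.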
More information:
Remote hosts and RMI configuration
How to Perform Distributed Testing in JMeter
Apache JMeter Distributed Testing Step-by-step
Restarting a Windows server that is a swarm worker causes Windows containers to get stuck in a "Preparing" state indefinitely once the server and Docker daemon are back online.
Image of tasks/containers stuck in preparing state:
https://user-images.githubusercontent.com/4528753/65180353-4e5d6e80-da22-11e9-8060-451150865177.png
Steps to reproduce the issue:
1. Create a swarm (in my case I have CentOS 7 managers and a few Windows Server 1903 workers)
2. Create a "global" docker service that only runs on the Windows machines. They should start up fine initially and work just fine.
3. Drain one or more of the Windows nodes that is running the Windows container(s) from step 2 (docker node update --availability=drain nodename)
4. Restart one or more of the nodes that were drained in step 3 and wait for them to come back up
5. Set the Windows node(s) back to active (docker node update --availability=active nodename)
At this point, observe that the docker service created in step 2 will be "Preparing" the containers to start up on these nodes, and there it will stay (docker service ps servicename --no-trunc); you can observe this and run these commands from any manager node.
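For reference, here is the repro sequence from steps 3-5 condensed into commands (nodename and servicename are placeholders for your node and service):

docker node update --availability=drain nodename    # step 3: drain, run on a manager
# reboot the drained worker and wait for it to come back up (step 4)
docker node update --availability=active nodename   # step 5: reactivate it
docker service ps servicename --no-trunc            # tasks sit in "Preparing"

After the node is reactivated, the worker (10.60.3.40 below) logs errors like the following: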
memberlist: Refuting a suspect message (from: c9347e85405d)
memberlist: Failed to send ping: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
grpc: addrConn.createTransport failed to connect to {10.60.3.110:2377 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.60.3.110:2377: connectex: A socket operation was attempted to an unreachable host.". Reconnecting... [module=grpc]
memberlist: Failed to send ping: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
grpc: addrConn.createTransport failed to connect to {10.60.3.110:2377 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.60.3.110:2377: connectex: A socket operation was attempted to an unreachable host.". Reconnecting... [module=grpc]
agent: session failed [node.id=wuhifvg9li3v5zuq2xu7c6hxa module=node/agent error=rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.60.3.69:2377: connectex: A socket operation was attempted to an unreachable host." backoff=6.3s]
Failed to send gossip to 10.60.3.110: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.186: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.186: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.105: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.109: write udp 10.60.3.40:7946->10.60.3.109:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.69: write udp 10.60.3.40:7946->10.60.3.69:7946: wsasendto: The requested address is not valid in its context.
Failed to send gossip to 10.60.3.110: write udp 10.60.3.40:7946->10.60.3.110:7946: wsasendto: The requested address is not valid in its context.
memberlist: Failed to send gossip to 10.60.3.105:7946: write udp 10.60.3.40:7946->10.60.3.105:7946: wsasendto: The requested address is not valid in its context.
memberlist: Failed to send gossip to 10.60.3.186:7946: write udp 10.60.3.40:7946->10.60.3.186:7946: wsasendto: The requested address is not valid in its context.
Many of these errors are odd. For example, port 7946 is completely open between the cluster nodes; telnet confirms this.
I expect the docker service containers to start promptly, not get stuck in a Preparing state. The docker image is already pulled, so it should be fast.
docker version output
Client: Docker Engine - Enterprise
Version: 19.03.2
API version: 1.40
Go version: go1.12.8
Git commit: c92ab06ed9
Built: 09/03/2019 16:38:11
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Enterprise
Engine:
Version: 19.03.2
API version: 1.40 (minimum version 1.24)
Go version: go1.12.8
Git commit: c92ab06ed9
Built: 09/03/2019 16:35:47
OS/Arch: windows/amd64
Experimental: false
docker info output
Client:
Debug Mode: false
Plugins:
cluster: Manage Docker clusters (Docker Inc., v1.1.0-8c33de7)
Server:
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 4
Server Version: 19.03.2
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics l2bridge l2tunnel nat null overlay transparent
Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
Swarm: active
NodeID: wuhifvg9li3v5zuq2xu7c6hxa
Is Manager: false
Node Address: 10.60.3.40
Manager Addresses:
10.60.3.110:2377
10.60.3.186:2377
10.60.3.69:2377
Default Isolation: process
Kernel Version: 10.0 18362 (18362.1.amd64fre.19h1_release.190318-1202)
Operating System: Windows Server Datacenter Version 1903 (OS Build 18362.356)
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 8GiB
Name: SWARMWORKER1
ID: V2WJ:OEUM:7TUQ:WPIO:UOK4:IAHA:KWMN:RQFF:CAUO:LUB6:DJIJ:OVBX
Docker Root Dir: E:\docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: this node is not a swarm manager - check license status on a manager node
Additional Details
These nodes are not using Docker Desktop for Windows. I am provisioning Docker on the box primarily based on the PowerShell instructions here: https://docs.docker.com/install/windows/docker-ee/
Windows firewall is disabled
iptables/firewalld is disabled
Communication is completely open between the cluster nodes
Totally up-to-date on cumulative updates
I posted on the moby repo issues but never heard a peep:
https://github.com/moby/moby/issues/39955
The ONLY way I've found to temporarily fix the issue is to drain the node, remove it from the swarm, delete the docker files, reinstall the Windows "Containers" feature, and then rejoin it to the swarm (sketched below). But it happens again on reboot.
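For reference, a rough sketch of that workaround, using the node name and data directory from the docker info output above; <worker-token> is the join token obtained from a manager, and the feature reinstall is done through the usual Windows Server tooling:

docker node update --availability=drain SWARMWORKER1    # on a manager
docker swarm leave                                      # on the worker
# stop the docker service, delete E:\docker, reinstall the Windows
# "Containers" feature, reboot, then rejoin the swarm:
docker swarm join --token <worker-token> 10.60.3.110:2377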
What's interesting is that when I see a swarm task in a "Preparing" state on the Windows worker, the server doesn't seem to be doing anything at all; it's as if the manager thinks the worker is preparing the container, but it isn't...
Anyone have any suggestions?
I am trying to do distributed testing on a Linux server using apache-jmeter 2.9.
The default port (1099) is already in use (by JBoss), so I changed the port to 1097.
For now I start jmeter-server on one machine and run the test from a single machine.
jmeter-server seems to start successfully, but whenever I try to execute the script it shows the following error.
[jboss#StagingSvr2 bin]$ ./jmeter -n -t CBL_Load/CBL_Admin_Load.jmx -l .jtl -R 172.16.0.2
Creating summariser <summary>
Created the tree successfully using CBL_Load/CBL_Admin_Load.jmx
Configuring remote engine for 172.16.0.2
Failure connecting to remote host: 172.16.0.2
java.rmi.ConnectIOException: non-JRMP server at remote endpoint
Failed to configure 172.16.0.2
No remote engines were started.
I have searched Google but have not been able to find exactly where I am blundering!
Make sure nothing else is listening on port 1097, using netstat, nc or telnet (examples below). Judging by the non-JRMP server at remote endpoint message, something is present on that port which is not a JMeter RMI endpoint. Try locating a free port using the aforementioned tools and bind the JMeter slave to it.
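For example (on a typical Linux box; the exact flags may differ per distribution):

netstat -tlnp | grep 1097     # shows what, if anything, listens on 1097
nc -zv 172.16.0.2 1097        # checks whether the port is reachable remotely

If netstat reports a process other than JMeter (e.g. JBoss in your case), pick a different port.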
With regards to binding the JMeter slave, I would recommend amending your startup command to something like:
./jmeter-server -Dserver_port=xxxx
where xxxx is a free port on your Linux system.
Amend your master startup command to include the port as well, like:
./jmeter -R 172.16.0.2:xxxx -n -t CBL_Load/CBL_Admin_Load.jmx -l result.jtl
More information:
JMeter Remote Testing: Using a different port
How to Perform Distributed Testing in JMeter
Setup:
JMeter master: machine1
JMeter slaves: machine1, machine2
Sometimes I get a
java.rmi.ConnectException: Connection refused to host
when the JMeter master (machine1) tries to connect to the slave (machine1):
Configuring remote engine for XX.XX.XX.XX
[info] Failure connecting to remote host: XX.XX.XX.XX java.rmi.ConnectException: Connection refused to host: XX.XX.XX.XX; nested exception is:
[info] java.net.ConnectException: Connection refused
Any idea? Is it even OK for the JMeter master and slave to be on the same machine?
I'm using the JMeter Maven plugin and I manually start the jmeter-server process before each test.
A JMeter distributed setup should be spread across separate machines; otherwise it defeats the purpose.
The ideal setup is (a sample master configuration follows the list):
Master (machine 1)
Slave 1 (machine 2)
Slave 2 (machine 3)
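As an illustration, a minimal sketch of the master's user.properties for that layout, assuming machine2 and machine3 are placeholders for resolvable slave hostnames or IPs:

# user.properties on the master (machine 1)
remote_hosts=machine2,machine3

With that in place, ./jmeter -n -r -t test.jmx -l result.jtl (test.jmx and result.jtl being placeholder file names) starts the test on every slave listed in remote_hosts; -r is the flag that tells JMeter to use all configured remote hosts.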