I am setting up distributed testing on Linux machines with a master and a slave. I need your advice to overcome the error message below:
"Exception creating a connection to 192.xx.xx.xx; nested exception is java.net.NoRouteToHostException: No route to host (Host unreachable)"
I did the following steps:
1. Ensured the master and slave have the same version of JMeter.
2. Added the slave machine's IP to the master's jmeter.properties.
3. Created a keystore file on the master and copied the generated rmi_keystore.jks to the slave machine's bin folder.
4. Ran the jmeter-server file on the slave machine (there was an error at first, hence I added the slave machine's IP in RMI_HOST_DEF=-Djava.rmi.server.hostname=192.xx.xx.xx); after that it was up and running.
5. Ran the intended jmx file on the master via Run - Remote Start - slave machine.
Error: Exception creating a connection to 192.xx.xx.xx; nested exception is java.net.NoRouteToHostException: No route to host (Host unreachable)
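For reference, the relevant configuration lines are (IPs masked as above):

# jmeter.properties on the master: the slave's address goes in remote_hosts
remote_hosts=192.xx.xx.xx

# jmeter-server on the slave
RMI_HOST_DEF=-Djava.rmi.server.hostname=192.xx.xx.xx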
I did check the connectivity between the two machines: when I ping, each machine can reach the other.
Could it be an issue with a firewall or a port? I'm not sure.
I have been banging my head against this; any pointers would be helpful.
Thanks in advance.
Check that your slave is reachable from the master, for example:
nc -zvw3 192.168.1.31 1099
If you get Ncat: No route to host, then disable the firewall on both the slave and the master:
systemctl stop firewalld
systemctl disable firewalld
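Alternatively, rather than disabling the firewall outright, you can open just the JMeter RMI port; a sketch for firewalld, assuming the default server_port of 1099 (adjust if you changed it):

firewall-cmd --permanent --add-port=1099/tcp
firewall-cmd --reload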
I have an issue when I run JMeter distributed:
Error in rconfigure() method java.rmi.ConnectException: Connection refused to host: 192.168.200.22; nested exception is: java.net.ConnectException: Connection timed out: connect
Master: 192.168.200.21
Slave: 192.168.200.22
I have configured remote_hosts in jmeter.properties and started jmeter-server.bat on both the master and the slave machine.
How do I fix it? I am trying to connect to the slave machine.
Most probably you need to open port 1099 (or whatever the value of your server_port property is) in your operating system firewall.
Also be aware that the slave will need to send the test results back to the master, so you need to open the client.rmi.localport port and up to 3 consecutive ports above it on the master machine.
And finally, for any JMeter configuration overrides you should use the user.properties file, as shown below.
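For example, a minimal user.properties on the master might look like this (the port value is a placeholder, not mandated by JMeter):

# user.properties: overrides jmeter.properties without editing it
remote_hosts=192.168.200.22
# results come back on this port plus up to 3 consecutive ports above it
client.rmi.localport=4000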
More information:
Remote hosts and RMI configuration
Configuring JMeter
How to Perform Distributed Testing in JMeter
I am trying to run a distributed test using JMeter; I have 2 EC2 instances:
Master Public IP: 54.xxx.xx.xx
Slave Public IP: 204.xxx.xxx.xxx
I have opened all the necessary ports that were used in the configuration.
I can ping each EC2 from the other one and the ping is successful.
But when I try to start the test, the server fails and returns [No route to host (Host unreachable)].
My plan is to use more than 1 slave.
(Screenshot: error returned from the master server)
As per the NoRouteToHostException description:
Signals that an error occurred while attempting to connect a socket to a remote address and port. Typically, the remote host cannot be reached because of an intervening firewall, or if an intermediate router is down.
So make sure that the RMI ports which the JMeter slave is listening on are (see the sketch after this list):
Not dynamic
Open in your operating system firewall
Open in EC2 Security Groups
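A sketch of what that can look like, assuming the default server_port of 1099 and pinning server.rmi.localport to 50000 (the security group ID and port values below are placeholders):

# user.properties on the slave: pin the RMI port instead of letting it be chosen dynamically
server.rmi.localport=50000

# open both ports in the EC2 Security Group, e.g. via the AWS CLI
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1099 --cidr 54.xxx.xx.xx/32
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 50000 --cidr 54.xxx.xx.xx/32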
More information:
Remote hosts and RMI configuration
How to Perform Distributed Testing in JMeter
Apache JMeter Distributed Testing Step-by-step
I'm trying to set up a master/slave configuration of JMeter in the cloud.
Things I have done for the setup in the CLOUD for master and slave:
I have created the rmi_keystore.jks file on the master and copied it to the slave machine.
I have installed the same versions of Java and JMeter on both machines.
I have added the IP address of the slave machine to the jmeter.properties file of the master.
Please help me understand why I'm facing a connection timed out error when trying to execute the JMeter script on the cloud machine. Receiving:
Connection refused to host: 10.XXX.XX.XXX; nested exception is: java.net.ConnectException: Connection timed out: connect
If this is really in the CLOUD then:
My expectation is that you need to use external IP addresses, as I can only see a private-range (RFC 1918) address, which most probably cannot be reached from the outside world.
You need to ensure that the following ports are open in the CLOUD (whatever you mean by this term) and in the operating system firewalls/security groups (a quick nc check follows the list):
1099 (or whatever port you define as the server_port)
the port you define as the server.rmi.localport
the port(s) you define as the client.rmi.localport
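As a quick check, you can then probe each of those ports from the master with nc; 1099 and 50000 below are placeholders for your server_port and server.rmi.localport values:

nc -zvw3 <slave-external-ip> 1099
nc -zvw3 <slave-external-ip> 50000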
More information:
Remote hosts and RMI configuration
How to Perform Distributed Testing in JMeter
So I am running VMs on VirtualBox to try to get Docker to work in distributed mode. As per this tutorial (https://docs.docker.com/get-started/part4/#configure-a-docker-machine-shell-to-the-swarm-manager), I set a VM called "myvm1" to be the swarm manager with docker-machine ssh myvm1 "docker swarm init --advertise-addr 10.0.2.15",
however, when I try to add workers to that swarm, I get an error:
Error response from daemon: rpc error: code = Unavailable desc =
all SubConns are in TransientFailure, latest connection error:
connection error: desc = "transport: Error while dialing dial tcp
10.0.2.15:2377: connect: connection refused"
exit status 1
where 10.0.2.15 is the IP of the manager VM I got from running VBoxManage guestproperty get myvm1 "/VirtualBox/GuestInfo/Net/0/V4/IP"
Anyone know what could be the cause? Is my IP wrong? Do I need to open ports?
FYI: to attempt adding a worker, I tried:
docker-machine ssh myvm2 "docker swarm join --token [token returned by swarm init on myvm1] 10.0.2.15:2377"
Not sure what else I can do.
This is probably because you are running VirtualBox. This means that some of your interfaces are shared between the VMs and the host.
Run ifconfig on your VMs and on the host, and choose the interface which shows a different IP for every machine.
I had this problem too and figured out that the eth0 IPs were the same on every machine (10.0.2.15 is VirtualBox's default NAT address, which every VM sees as its own). Of course, this cannot work.
eth1, on the other hand, had a different IP for every machine.
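Assuming that is the case here, the fix is to re-initialise the swarm advertising the eth1 address instead; a sketch, with the eth1 IP you find via ifconfig substituted in:

docker-machine ssh myvm1 "docker swarm leave --force"
docker-machine ssh myvm1 "docker swarm init --advertise-addr <eth1-ip-of-myvm1>"

Then re-run the docker swarm join on the workers against that same address and port 2377.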
Hope this helps.
I'm setting up Hadoop (0.20.2). For starters, I just want it to run on a single machine - I'll probably need a cluster at some point, but I'll worry about that when I get there. I got it to the point where my client code can connect to the job tracker and start jobs, but there's one problem: the job tracker is only accessible from the same machine that it's running on. I actually did a port scan with nmap, and it shows port 9001 open when scanning from the Hadoop machine, and closed when scanning from somewhere else.
I tried this on three machines (one Mac, one Ubuntu, and an Ubuntu VM running in VirtualBox), and it's the same on all of them. None of them have any firewalls set up, so I'm pretty sure it's a Hadoop problem. Any suggestions?
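For reference, the scan was along these lines (the hostname is a placeholder):

nmap -p 9001 my-hadoop-host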
In your Hadoop configuration files, do fs.default.name and mapred.job.tracker refer to localhost?
If so, then Hadoop will only listen on ports 9000 and 9001 on the loopback interface, which is inaccessible from any other host. Make sure fs.default.name and mapred.job.tracker refer to your machine's externally accessible host name.
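A sketch of what that looks like in Hadoop 0.20.x, with myhost standing in for your externally resolvable host name:

<!-- conf/core-site.xml -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://myhost:9000</value>
</property>

<!-- conf/mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>myhost:9001</value>
</property>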
Make sure that you have not double listed your master in the /etc/hosts file.
I had the following, which only allowed the master to listen on 127.0.1.1:
127.0.1.1 hostname master
192.168.x.x hostname master
192.168.x.x slave-1
192.168.x.x slave-2
The hosts file above caused the problem. I changed my /etc/hosts file to the following to make it work:
127.0.1.1 hostname
192.168.x.x hostname master
192.168.x.x slave-1
192.168.x.x slave-2
Use the command netstat -an | grep :9000 to verify your connections are working!
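If the Local Address column in that output shows 127.0.1.1:9000, the daemon is still bound to loopback only; after the fix it should show your machine's LAN address or 0.0.0.0. The same check works for the job tracker port:

netstat -an | grep :9001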
In addition to the above answer: I found that /etc/hosts on the master (running Ubuntu) had the line:
127.0.1.1 master
This meant that running nslookup master on the master returned a local address - so in spite of using master in mapred-site.xml I suffered the same problem. My solution (there are probably better ones) was to create an alias in my DNS server and use that instead. I guess you could probably also change the IP address in /etc/hosts to the external one, but I haven't tried this - I'm not sure what implications it would have for other services.