I am trying to start remote testing using my computer as the Client/Master and an EC2 instance as the Slave.
I have done all of the following:
Disabled the firewall on both Master and Slave.
Installed the same version of Java and JMeter on both Master and Slave.
Set all communication to go over port 4000.
My configuration for Master:
remote_hosts=10.xx.xx.xxx
server_port=4000
server.rmi.port=4000
server.rmi.localport=4000
server.rmi.ssl.disable=true
My configuration for the Slave [EC2 instance]:
server_port=4000
server.rmi.port=4000
server.rmi.localport=4000
server.rmi.ssl.disable=true
Command to start the JMeter server on the Slave [EC2 instance]:
./jmeter-server -Gjava.rmi.server.hostname:10.xx.xx.xxx
Command to start the JMeter server on the Master [my computer]:
./jmeter-server -Gjava.rmi.server.hostname:192.xx.xx.xxx
After running the test from the Master, the test started on the Slave and finished.
My issue is that the Client/Master didn't get any results or summary; it is stuck, frozen on this line:
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445.
Your 10.xx.xx.xxx and 192.xx.xx.xxx addresses belong to class A and class C private (local) networks, which means they are not accessible from anywhere else, only from within their respective local networks.
So you won't be able to reach the EC2 instance's internal IP from your computer and vice versa.
In order to be able to connect to the EC2 instance you need to:
Use the external (public) IP address or DNS hostname
Open port 4000 in the AWS Security Groups
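For example, port 4000 can be opened with the AWS CLI (the security group ID and your machine's public IP below are placeholders):
# open TCP port 4000 to your machine's public IP only
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 4000 \
  --cidr 203.0.113.10/32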
In order to get the results back to your computer from the EC2 machine you need a static external IP address; reach out to your ISP or network administrator to get this configured and assigned.
An example of master/slave configuration with custom ports can be found in the JMeter Distributed Testing with Docker article.
More information: Remote hosts and RMI configuration
If you have only one slave machine it doesn't make sense to invest in a master/slave configuration at all; just run JMeter in command-line non-GUI mode on the EC2 instance and analyze the results locally.
If you plan to use more than one slave, it makes sense to move the master to EC2 as well; this way you will be able to use the internal IP addresses.
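For the single-slave scenario, a minimal non-GUI run on the EC2 instance could look like this (the test plan, results file and report directory are placeholders):
# -n: non-GUI, -t: test plan, -l: results file, -e/-o: generate the HTML report at the end
./jmeter -n -t test-plan.jmx -l results.jtl -e -o report/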
Related
I am trying to set up an Apache Airflow server on EC2. I managed to get it running and verified its status by hitting the /health endpoint using curl on http://localhost:8989. Airflow listens on port 8989 here.
The next thing I want is to be able to connect to the admin dashboard/UI with a browser on the EC2's public IP, so I added an inbound rule to the AWS security group the EC2 instance belongs to.
While connecting to Airflow, I am getting the following error:
Failed to connect to ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com port 8989: Operation timed out
Not sure what else I need to do to reach the server running on EC2.
If you can SSH to an EC2 instance and you've added a security group rule for ingress on another port, but still can't reach the instance on that port, here are some other things to check:
Firewall running on the instance. Amazon Linux and recent official Ubuntu AMIs shouldn't have iptables or any other firewall running by default, but if you're using another AMI or someone else has configured the EC2 instance, it's possible that iptables/ufw or some other firewall is running. Check the processes on your instance to make sure you don't have a firewall (see the example commands after this list).
Network ACL on the VPC subnet. The default ACL permits traffic on all ports. It's possible that the default has been changed to allow traffic only on selected ports.
Multiple security groups assigned to the EC2 instance. It's possible to assign more than one security group to the instance. Check that the ingress rule you added is in a security group that is actually attached to the instance.
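The firewall check from the first item boils down to a few commands (they are distro-dependent and not all of them will be installed; the last one is just a sanity check that the service is listening at all):
# list raw iptables rules, if any
sudo iptables -L -n
# Ubuntu's ufw front end, if installed
sudo ufw status
# firewalld on Amazon Linux / RHEL-family AMIs
sudo systemctl status firewalld
# confirm something is actually listening on the port from the question
sudo ss -tlnp | grep 8989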
Should I run Consul slaves alongside Nomad slaves or inside them?
The latter might not make sense at all, but I'm asking just in case.
I brought up my own Nomad cluster with Consul slaves running alongside the Nomad slaves (inside the worker nodes); my deployable artifacts are Docker containers (Java Spring applications).
The issue with my current setup is that my applications can't access the Consul slaves (to read configuration); none of 0.0.0.0, localhost or the worker node IP worked.
Let's say my service exposes 8080; I configured the Docker part (in the HCL file) to use bridge as the network mode. Nomad maps 8080 to 43210.
Everything is fine until my service tries to reach the Consul slave to read its configuration. Ideally, giving the Nomad worker node IP as the Consul host to Spring should suffice, but for some reason it doesn't.
I'm using the latest version of Nomad.
I configured my nomad slaves like https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl
And the link below shows how I configured/ran my Consul slave:
https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Note: if I use static port mapping and host as the network mode for Docker (in Nomad) I'll be fine, but then I can't deploy more than one instance of each application on each worker node (due to port conflicts).
Nomad jobs listen on a specific host/port pair.
You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on.
a93c5cb46a3e image-name bash 2 hours ago Up 2 hours 10.0.47.2:21435->8000/tcp, 10.0.47.2:21435->8000/udp foo-bar
Additionally, you will need to ensure that the Consul Nomad job is listening on 0.0.0.0 (all interfaces), or on the specific IP of the machine. I believe that is this config value: https://www.consul.io/docs/agent/options.html#_bind
All of those will need to match up in order for Consul to be reachable.
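As a hedged sketch (the node IP, join address and data directory are placeholders, not taken from the question): -bind is the gossip/RPC address controlled by the #_bind option above, while -client decides where the HTTP/DNS APIs listen, which is what a Spring application actually talks to:
# -client=0.0.0.0 exposes the HTTP/DNS APIs on all interfaces instead of loopback only
consul agent \
  -bind=<node-ip> \
  -client=0.0.0.0 \
  -retry-join=<consul-server-ip> \
  -data-dir=/opt/consul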
More generally, I might recommend: if you're going to run Consul with Nomad, you might want to switch to host networking so that you don't have to deal with the specifics of networking within a container. Additionally, you could schedule Consul as a system job so that it is automatically present on every host.
So I managed to solve the issue like this:
nomad.job.group.network.mode = host
nomad.job.group.network.port: port "http" {}
nomad.job.group.task.driver = docker
nomad.job.group.task.config.network_mode = host
nomad.job.group.task.config.ports = ["http"]
nomad.job.group.task.service.connect: connect { native = true }
nomad.job.group.task.env: SERVER_PORT= "${NOMAD_PORT_http}"
nomad.job.group.task.env: SPRING_CLOUD_CONSUL_HOST = "localhost"
nomad.job.group.task.env: SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
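Put together, those settings correspond roughly to the HCL sketch below (the job/group/task names and the image are placeholders; the location-update-publisher.hcl linked further down is the real, working example):
job "example" {
  datacenters = ["dc1"]
  group "app" {
    network {
      mode = "host"
      port "http" {}
    }
    task "app" {
      driver = "docker"
      config {
        image        = "example/app:latest"   # placeholder image
        network_mode = "host"
        ports        = ["http"]
      }
      service {
        name = "app"
        port = "http"
        connect {
          native = true
        }
      }
      env {
        SERVER_PORT              = "${NOMAD_PORT_http}"
        SPRING_CLOUD_CONSUL_HOST = "localhost"
        SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
      }
    }
  }
}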
Running the Consul agent (slave) via docker-compose alongside the Nomad agent (slave) with host as the network mode, and exposing all the required ports.
Example of nomad job: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/location-update-publisher.hcl
Example of consul agent config (docker-compose file): https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Disclaimer: the LAB is part of a cluster visualization framework called LiteArch Trafik, which I created as an interesting exercise to understand Nomad and Consul.
It took me a long time to shift my mind from K8s to Nomad and Consul.
Integrating them was one of the efforts I spent the last year on.
When service resolution doesn't work, I have found it is more often than not the DNS configuration on the servers.
There is a section for it in the HashiCorp documentation called DNS Forwarding:
Hashicorp DNS Forwarding
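For example, following the pattern from that documentation with dnsmasq (the config file path is distro-dependent and the queried service name is a placeholder):
# forward the .consul domain to the local Consul agent's DNS port (8600)
echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul
sudo systemctl restart dnsmasq
# verify directly against the agent
dig @127.0.0.1 -p 8600 my-service.service.consul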
I have created a LAB which explains how to set up Nomad and Consul.
But you can use the LAB separately.
I created the LAB after learning the hard way how to install the cluster and how to integrate Nomad and Consul.
For the LAB you need Ubuntu Multipass installed.
You execute one script and you get a fully functional cluster locally, with three servers and three nodes.
It also shows you how to install Docker and integrate the services with Consul and DNS services on Ubuntu.
After running the LAB you will get the links to Nomad, Fabio, Consul.
Hopefully it will guide you through the learning process of Nomad and Consul.
LAB: LAB
Trafik: Trafik Visualizer
I have a Kubernetes cluster running on GKE and a Jenkins server running on a GCP instance.
I am using the Kubernetes plugin to dynamically create pods on the Kubernetes cluster. I created a pipeline (declarative syntax) for the same.
So I am aware that the Jenkins slave agents communicate with the Jenkins master on port 50000.
A snip of the configuration
But for some reason, when I viewed the logs of the JNLP container created by Jenkins, I saw an exception - tcpSlaveAgentListener not found.
A snip of the container log
According to the above image, I assume the tunneling is unsuccessful as it is trying to connect to http://34.90.46.204:8080/tcpSlaveAgentListener/ whereas it should connect to http://34.90.46.204:50000/tcpSlaveAgentListener/.
It was a lazy question for me to ask, but I solved the issue.
In the Manage Jenkins -> Configure Global Security settings:
For the option that sets a port for TCP inbound agents: unselect the "Disable" option (which is selected by default) and then provide a fixed port for the inbound agents to communicate on (50000).
A snip of the configuration
Jenkins uses a TCP port to communicate with agents connected inbound. If you're going to use inbound agents, you can allow the system to randomly select a port at launch (this avoids interfering with other programs, including other Jenkins instances). As it's hard for firewalls to secure a random port, you can instead specify a fixed port number and configure your firewall accordingly.
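A quick check (the Jenkins host is a placeholder and 8080 is assumed to be its HTTP port) is to hit the endpoint the JNLP container complained about:
# typically returns 404 while the inbound TCP port is disabled and 200 once it is enabled
curl -i http://<jenkins-host>:8080/tcpSlaveAgentListener/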
Hope this helps someone.
We are facing an issue while trying to run JMeter distributed testing with a master and slave configuration on different machines. The JMeter distributed test runs fine on the same machine, but we are getting: Connection refused to host: xxx.xxx.xxx.xx; nested exception is: java.net.ConnectException: Connection timed out: connect Failed to configure xxx.xxx.xxx.xxx
Most probably your networking configuration is not correct. Make sure that:
The JMeter master and slaves reside in the same subnet, i.e. you should be able to reach any machine from any other machine
The network ports are open in the firewalls so the JMeter master can communicate with the slaves; the ports are:
1099
the port you define as server.rmi.localport
the ports you define as client.rmi.localport
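As a rough example of pinning those ports so they can be opened in the firewall (the port numbers are placeholders):
# on each slave (the property can also go into user.properties)
./jmeter-server -Jserver.rmi.localport=50000
# on the master, when launching the test
./jmeter -n -t test.jmx -R xxx.xxx.xxx.xxx -Jclient.rmi.localport=60000
Port 1099 (the default RMI registry port) must be reachable on the slaves as well.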
Check out the following materials:
Remote hosts and RMI configuration
Apache JMeter Distributed Testing Step-by-step
How to Perform Distributed Testing in JMeter
In case of any problems, look into the jmeter.log file; normally it contains enough information to get to the bottom of the issue.
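For example (jmeter.log is written to the directory JMeter was started from by default):
# pull out the most recent errors and exceptions
tail -n 200 jmeter.log | grep -iE 'error|exception'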
I would like to use one EC2 instance as a controller for one or more remote server instances in Amazon. I can start and run the test (it runs on the remote side) but the controller never exits. It fails with:
2015/02/12 17:34:25 ERROR - jmeter.JMeter: Error in NonGUIDriver
java.lang.IllegalStateException: Engine is busy - please try later
at org.apache.jmeter.engine.RemoteJMeterEngineImpl.rconfigure(RemoteJMeterEngineImpl.java:151)
If I run the test without the -R option it works fine.
The same test setup works in SoftLayer, so I think it is a firewall issue, but I believe I have put all the ports into my security group.
JMeter uses Java RMI for distributed tests. This protocol is sensitive to crossing different networks and firewalls.
For EC2 I suggest:
Using Amazon VPC to create your own virtual network for the JMeter instances.
Using SSH tunnels between the JMeter machines - this type of connection is usually more stable and easier to debug than plain RMI.
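A hedged sketch of such a tunnel (user, host and port numbers are placeholders; the ports must match server_port / server.rmi.localport on the slave and client.rmi.localport on the master):
# -L carries master->slave RMI traffic (registry on 1099 and server.rmi.localport),
# -R carries slave->master traffic (client.rmi.localport) that returns the results
ssh -N -f \
  -L 1099:localhost:1099 \
  -L 50000:localhost:50000 \
  -R 60000:localhost:60000 \
  user@ec2-slave-public-dns
With the tunnels up, remote_hosts on the master would point at 127.0.0.1 so that all RMI traffic stays inside the SSH connection.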