Change of master server IP in a Jenkins master/child cluster - Windows

We have a Jenkins cluster with a master node and six child nodes (4 Windows, 2 Linux).
Recently the IP of the master node changed due to network updates.
The Linux child nodes reconnect to the master successfully, whereas the Windows child nodes cannot.
On Windows, the client JAR file regenerated after the master IP change still fails to connect from the child node to the master.
Does anyone know how to reconfigure the system so the Windows child nodes can connect after the Jenkins master node IP has changed?
Jenkins version is 2.9.
What I have tried:
Changed the configs on the Windows child nodes.
Regenerated the client JAR files after the master node IP change.
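A JNLP-launched Windows agent keeps the old master URL in its launch settings, so after the IP change the agent has to be pointed at the new address explicitly. A minimal sketch of reconnecting one agent, assuming a hypothetical new master IP, agent name, and secret (your values will differ; on Jenkins of this vintage the client JAR is slave.jar):

```shell
# Hypothetical values -- substitute your own master IP, agent name, and secret.
NEW_MASTER_IP="10.0.0.50"
AGENT_NAME="win-node-1"
JNLP_URL="http://${NEW_MASTER_IP}:8080/computer/${AGENT_NAME}/slave-agent.jnlp"
echo "$JNLP_URL"

# On the Windows child node (cmd/PowerShell), fetch a fresh client JAR from
# the NEW master and relaunch the agent against the new URL:
#   curl -O http://10.0.0.50:8080/jnlpJars/slave.jar
#   java -jar slave.jar -jnlpUrl "<JNLP_URL>" -secret <agent-secret>
```

If the agent runs as a Windows service, the old master URL may also persist in the service's jenkins-slave.xml; updating the -jnlpUrl argument there and restarting the service is typically needed as well.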

Related

How to disjoin a datacenter from a two-datacenter Consul WAN federation

I have Consul servers in two datacenters, three servers in each. When I execute consul members -wan I can see all six servers.
I want to separate these into two independent clusters with no connection between them.
I tried the force-leave and leave commands as described in the Consul documentation:
https://www.consul.io/commands/force-leave: this returned a 500 error saying no node was found. I tried the node name as server.datacenter, the full FQDN of the server, and the IP of the server; none of them worked.
https://www.consul.io/commands/leave: when I ran this from the node I want to remove, the response was a success, but consul members -wan still shows the node.
I also tried another approach: I stopped Consul on the node I want to remove, then ran consul force-leave node-name, and consul members -wan showed the node as left. But as soon as I started Consul on that node again, it rejoined the cluster.
What steps am I missing here?
I think I solved the problem I had. I followed the instructions here:
https://support.hashicorp.com/hc/en-us/articles/1500011413401-Remove-WAN-federation-between-Consul-clusters
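The key point in that article is that a node keeps rejoining the WAN pool as long as its own configuration tells it to. A rough sketch of the procedure, with hypothetical node and datacenter names (WAN member names are the node name suffixed with the datacenter, which is likely why a bare node name returned "no node found"):

```shell
# Hypothetical names -- adjust to your cluster.
NODE="consul-server-1"
DATACENTER="dc2"
WAN_NAME="${NODE}.${DATACENTER}"
echo "$WAN_NAME"

# 1. On every server in the datacenter being detached, remove the WAN join
#    settings (e.g. retry_join_wan / start_join_wan) from the server config
#    and restart Consul, so it no longer re-federates on startup.
# 2. From a server in the remaining datacenter, prune the departed members
#    (depending on the Consul version, a -wan flag may be required):
#      consul force-leave "$WAN_NAME"
# 3. Verify the WAN pool:
#      consul members -wan
```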

Server 2019 Datacenter failover cluster role not working as expected when port-forwarding public traffic from the firewall/router

I have a three-node Windows failover cluster; each node runs Windows Server 2019 Datacenter.
Each node has a mail server (hMailServer) installed, and each node's mail server service connects to a central MySQL database and uses cluster shared storage that is visible to every node.
The IP addresses in my network are:
Firewall/Router 192.168.1.1
Node 1 192.168.1.41
Node 2 192.168.1.42
Node 3 192.168.1.43
Mailserver Role 192.168.1.51 (virtual IP)
I added port forwards on my firewall/router for all standard mail server ports, pointed them at the physical IP of node 1, and confirmed that the public could reach the mail server. I then changed the port forwards to point at node 2 and then node 3, and each node's mail server worked perfectly.
Next I set up a cluster role using a GENERIC SERVICE, pointed it at the mail server service, and gave the role the IP 192.168.1.51. Initially the role attached itself to node 3 as its host node.
I changed the port forwards on the router to point at this virtual IP. From inside the LAN I could fail the role over and the mail server kept working as it moved from node to node, but from the public internet the mail server was only reachable while the cluster role was hosted on node 3. When it moved to node 1 or 2, the connection broke.
I tried turning off, disabling, and resetting the Windows firewall on each node, and ensured the firewall settings were identical on every node, but this had no effect on the problem. What am I missing?
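The failover behaviour is easier to reason about with the forwarding rules written out: every rule targets the virtual IP, never a node IP. A sketch of the intended mapping, assuming the standard SMTP/POP3/IMAP port set (which ports you actually publish is your choice):

```shell
# The role's virtual IP from the setup above.
VIRTUAL_IP="192.168.1.51"

# Standard mail ports: SMTP, POP3, IMAP, submission, IMAPS, POP3S.
FORWARDS=""
for PORT in 25 110 143 587 993 995; do
  FORWARDS="${FORWARDS}tcp/${PORT}->${VIRTUAL_IP}:${PORT} "
done
echo "$FORWARDS"
```

Since all rules already target the virtual IP, one thing worth checking in a setup like this is the router's ARP cache: on failover the virtual IP moves to a different node's MAC address, and a router holding a stale ARP entry will keep sending traffic to the old node.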

How do I change a slave node's SSH port in Hadoop?

Currently I'm trying to change the data node's SSH port in Hadoop. I have a master and one data node, and each host uses a different SSH port.
What I did:
Generated an SSH key, and I can connect to the data node without a password.
Added both hosts to /etc/hosts as master and worker1.
Changed the SSH port for the master node in the hadoop-env.sh file.
Changed the same file on the data node as well.
The problem is that Hadoop uses the same SSH port for both the master and the data node.
How do I make Hadoop use a different port for the master and the data node?
Any help would be appreciated :)
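One thing worth noting: HADOOP_SSH_OPTS in hadoop-env.sh applies one set of SSH options to every host, which would explain both nodes ending up on the same port. A common workaround is to leave hadoop-env.sh alone and give each host its own port in the master's ~/.ssh/config, which the ssh client consults per host. A sketch with hypothetical ports:

```
# ~/.ssh/config on the master (ports are hypothetical)
Host master
    Port 22

Host worker1
    Port 2222
```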

DC/OS - Dashboard showing 0 connected nodes

After restarting the 3 masters in my DC/OS cluster, the DC/OS dashboard shows 0 connected nodes. However, from the DC/OS CLI I can see all six of my agent nodes:
$ dcos node
HOSTNAME IP ID
172.16.1.20 172.16.1.20 a7af5134-baa2-45f3-892e-5e578cc00b4d-S7
172.16.1.21 172.16.1.21 a7af5134-baa2-45f3-892e-5e578cc00b4d-S12
172.16.1.22 172.16.1.22 a7af5134-baa2-45f3-892e-5e578cc00b4d-S8
172.16.1.23 172.16.1.23 a7af5134-baa2-45f3-892e-5e578cc00b4d-S6
172.16.1.24 172.16.1.24 a7af5134-baa2-45f3-892e-5e578cc00b4d-S11
172.16.1.25 172.16.1.25 a7af5134-baa2-45f3-892e-5e578cc00b4d-S10
I am still able to schedule tasks in Marathon, both from the dcos CLI and from the Marathon GUI; they are properly scheduled and executed on the agents. Also, from the Mesos interface on :5050 I can see all of the agents on the slaves page.
I have restarted the agent nodes and the master nodes. I have also rerun the DC/OS GUI installer and run the preflight check, which of course fails with an "already installed" error.
Is there a way to re-register the nodes with the DC/OS GUI short of uninstalling/reinstalling a node?
For anyone who runs into this: my problem was related to our corporate proxy. To get the Universe working in my cluster I had to add proxy settings to /opt/mesosphere/environment, then restart dcos-cosmos.service, and life was good. However, after a server restart, dcos-history-service.service was also running with the new environment and was unable to resolve my local names through our proxy server. To solve it, I added a NO_PROXY entry to /opt/mesosphere/environment, and the DC/OS dashboard is happy again.
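The fix above amounts to making sure every DC/OS unit that sources /opt/mesosphere/environment can still resolve in-cluster names directly. A sketch of the file, with a hypothetical proxy host and subnet (the NO_PROXY list must cover your local names and agent addresses):

```shell
# /opt/mesosphere/environment -- proxy host and subnet are hypothetical.
HTTP_PROXY=http://proxy.corp.example:3128
HTTPS_PROXY=http://proxy.corp.example:3128
NO_PROXY=127.0.0.1,localhost,.mesos,.thisdcos.directory,172.16.1.0/24

# After editing, restart the affected units so they pick up the change:
#   systemctl restart dcos-cosmos.service dcos-history-service.service
```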

Setting up Ganglia on spark running on EC2

I am trying to set up Ganglia on EC2 servers, but I can't seem to receive info from any host other than my master.
The master's metrics are showing fine, but I have 4 more nodes that aren't listed in the web interface.
My question is: could it be that, because I'm using unicast, I can only see the master node, and it is aggregating all of the other nodes' data?
I have run both gmetad and gmond in the foreground and saw that the node and the master are communicating with each other, but I still can't see the node in the web UI.
Any help would be appreciated.
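For context, in a unicast setup every worker's gmond sends to one aggregating gmond, and gmetad polls only that aggregator; the workers then appear in the web UI only if they report under the same cluster name as the aggregator. A sketch of the relevant config fragments, with a hypothetical master IP and cluster name:

```
/* gmond.conf on each worker node (master IP and cluster name hypothetical) */
cluster {
  name = "spark-cluster"
}
udp_send_channel {
  host = 10.0.0.10   /* the aggregating master */
  port = 8649
}

/* gmond.conf on the master: also listen for the workers' unicast traffic */
udp_recv_channel {
  port = 8649
}

/* gmetad.conf on the master: poll only the aggregator */
data_source "spark-cluster" 10.0.0.10:8649
```

Two things worth checking against a setup like this: a cluster-name mismatch between workers and master, and a missing udp_recv_channel (or a deaf = yes setting) on the aggregating gmond.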
