Server 2019 Data Center Failover Cluster Role not working as expected when port forwarding public traffic from firewall router - high-availability

I have a 3-node Windows failover cluster, and each node runs Windows Server 2019 Datacenter.
Each node has a mail server installed (hMailServer), and each node's mail server service connects to a central MySQL database and uses cluster shared storage that is visible to every node.
The IP addresses in my network are:
Firewall/Router 192.168.1.1
Node 1 192.168.1.41
Node 2 192.168.1.42
Node 3 192.168.1.43
Mailserver Role 192.168.1.51 (virtual IP)
I added port forwards in my firewall router for all standard mail server ports, pointed them at the physical IP of node 1, and tested that the public could access the mail server properly. I then changed the port forwards to point to node 2 and then node 3 to verify that each node had a working mail server; all worked perfectly.
Then I set up a cluster role using a Generic Service, pointed it at the mail server service, and gave the role an IP of 192.168.1.51. Initially, the role attached itself to node 3 as its host node.
I changed the port forwards in the router to point to this virtual IP. While connected inside the same LAN, I could fail the role over and the mail server kept working as it moved from node to node. But when I connected through the public internet again, the mail server was only reachable while the cluster role was hosted on node 3. If it moved to node 1 or 2, the connection broke.
I tried turning off, disabling, and resetting the Windows Firewall on each node, and ensured the firewall settings were identical on every node, but this had no effect on the problem. What am I missing?
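One way to narrow this down is to repeat the external test systematically after each failover: probe the forwarded mail ports on the virtual IP and note exactly which ones stop answering when the role moves off node 3. A minimal sketch in Python (the VIP 192.168.1.51 is from the setup above; the exact port list is an assumption about which standard mail ports were forwarded):

```python
import socket

def probe(host, ports, timeout=3):
    """Attempt a TCP connect to each port and report which ones answer."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Example against the virtual IP above (SMTP, POP3, IMAP, submission):
#   probe("192.168.1.51", [25, 110, 143, 587])
```

Running this from outside after each failover shows whether all forwarded ports die together (suggesting the forward/NAT path) or only some of them (suggesting per-service listeners on the nodes).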

Related

Unable to ping local IP address back from Alibaba ECS

I have an ECS instance running in Alibaba Cloud. The ECS is in a VPC that has an SSL VPN server. I downloaded the SSL client certificate, which lets me connect over OpenVPN and ping the ECS instance from my local box while connected to the VPN.
However, when I log in to the ECS instance, I am unable to ping my local box back. My security group is a basic one that allows all connections; I didn't touch the outbound rules.
Here are the details of my SSL VPN server and a screenshot of the successful ping (my ECS primary private IP address is 192.168.0.201).
Here is a screenshot of my unsuccessful attempt to ping my local home IP address from the ECS instance (the IP in the screenshot below, 192.168.10.190, is an arbitrary one for illustration purposes).
When you connect to the VPN, you are assigned a private client IP from 192.168.2.0/24, as per your SSL VPN settings. This is the network used for your VPN connection. From your screenshot I see that you are pinging your local 192.168.10.190; your cloud server does not have access to that network.
You can try pinging your client 192.168.2.0/24 IP from the ECS instead. You will probably need to add the route under your VPC > Route Tables. I haven't tried connecting a cloud server via SSL VPN myself, but I have used IPsec for a two-way site-to-site connection, which is more suitable for this situation.
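The routing logic above can be sanity-checked locally: a destination is only reachable over the tunnel if it falls inside a network the VPN actually routes. A quick sketch using Python's `ipaddress` module (the 192.168.0.0/24 VPC range and the 192.168.2.15 client address are assumptions for illustration; only 192.168.2.0/24, 192.168.0.201, and 192.168.10.190 come from the question):

```python
import ipaddress

vpn_clients = ipaddress.ip_network("192.168.2.0/24")  # SSL VPN client pool
vpc_net = ipaddress.ip_network("192.168.0.0/24")      # assumed ECS VPC range

for dest in ("192.168.0.201", "192.168.2.15", "192.168.10.190"):
    addr = ipaddress.ip_address(dest)
    routed = addr in vpn_clients or addr in vpc_net
    print(dest, "routed over the VPN" if routed else "not routed over the VPN")
```

The 192.168.10.190 home address falls in neither network, which is why the reverse ping from the ECS has nowhere to go.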

Pull data from icinga satellite to master behind firewall

I have the following situation:
A private enterprise network with an Icinga2 master monitoring the internal servers. The firewall blocks all inbound access, but all servers do have outbound internet access (multiple protocols, such as SSH, HTTP, and HTTPS).
We also have an environment in Azure with one publicly accessible VM (nginx) and, behind it in a private network, application servers. I'd also like to monitor these servers. I read that I can set up an Icinga2 satellite (in Azure) that monitors the Azure environment and sends the data to the master.
This would be a great solution. However, my master is in our private enterprise network, so the Icinga2 satellite can't push any data to it. The only option would be for the master to pull the data periodically from the satellite(s). It is possible for the master to log in via SSH agent forwarding to the servers in Azure. Is this possible, or is there a better solution? I'd rather not create a reverse SSH tunnel.
You might just use the Icinga2 client and let the master connect to the client (ensure that the Endpoint object contains host/port attributes). Once the connection is established, the client will send its check results (and replay its historical logs, if there are any).
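A sketch of what the host/port attributes can look like on the master's side (zones.conf), assuming the satellite is reachable on a public name and the default Icinga2 API port 5665; the zone and endpoint names here are placeholders:

```
object Endpoint "azure-satellite" {
  host = "satellite.example.com"  // public address the master can reach
  port = 5665                     // Icinga2 API port (default)
}

object Zone "azure" {
  endpoints = [ "azure-satellite" ]
  parent = "master"
}
```

With host/port set on the endpoint, the master initiates the outbound TCP connection, which fits the "outbound only" firewall policy described above.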

Google Cloud Network Load Balancer - Health checks always unhealthy

I tried to set up a network load balancer on Google Cloud, but the health check always returns unhealthy.
Here are the steps I followed:
I created two Windows Server 2012 R2 instances.
I checked that port 80 is open to the public on both instances.
I created the forwarding rules, and Google Cloud gave me an external IP.
I set up the external IP on a network loopback interface on both server instances.
I created a network route that forwards the traffic to both instances (Routes menu).
I created another network route for 169.254.169.254/32 (the source of network load balancer traffic), pointing to both Windows server instances.
I created the same site (example.com) in IIS 8 on both instances, and the site runs correctly.
The DNS record for example.com points to the external Google Cloud IP that I am using for the network load balancer.
I configured the health check:
Path: /
Protocol: HTTP
Host: example.com
Session Affinity: Client IP
I created a target pool, added both server instances and the health check, and assigned the target pool to the forwarding rule.
When I select the Target Pool option, both instances are marked as Unhealthy for the external IP that Google Cloud gave me, and I don't know why this happens.
I also see the web page switching between the server instances randomly all the time.
Your help is appreciated, thank you!
You don't need to add any GCE network routes.
The GCE agent takes care of adding the load balancer IP to the VM's network configuration; there is no need to do it manually. https://github.com/GoogleCloudPlatform/compute-image-windows
IIS must respond to requests on the LB IP:
Check the IIS bindings from IIS manager. Reset IIS.
Confirm from netstat that IIS is listening on 0.0.0.0 or the load balanced IP.
Access the LB IP from one of the servers. It should work.
The GCE firewall must allow traffic from the clients' IPs and also from the metadata server (169.254.169.254), which is the source of the health checks.
Network load balancing tutorial: https://cloud.google.com/compute/docs/load-balancing/network/example
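The "IIS must respond on the LB IP" check can be scripted rather than done in a browser. A rough sketch of the probe the health checker effectively performs, given the settings in the question (GET / with a Host: example.com header, expecting an HTTP 200):

```python
import http.client

def health_check(ip, port=80, path="/", host_header=None, timeout=5):
    """Return True if the target answers the HTTP probe with a 200."""
    headers = {"Host": host_header} if host_header else {}
    try:
        conn = http.client.HTTPConnection(ip, port, timeout=timeout)
        conn.request("GET", path, headers=headers)
        ok = conn.getresponse().status == 200
        conn.close()
        return ok
    except OSError:
        return False

# Example from one of the instances ("LB_EXTERNAL_IP" is a placeholder):
#   health_check("LB_EXTERNAL_IP", host_header="example.com")
```

If this returns False from the instance itself, the problem is the IIS binding or loopback configuration, not the GCE firewall.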

MariaDB Galera Cluster

I have a working cluster with three nodes on the 192.168.14.x subnet.
I wanted to add an external machine with IP 78.3.157.x to the cluster.
The external machine fails to join the cluster with "failed to open gcom backend connection 110".
Is such a configuration actually possible, and if so, how?
I think you have a network-level problem. The servers are not able to reach each other, probably because there is some sort of non-transparent NAT device between them: one of the IPs is public and the others are private.
Fix your networking layer so that all your nodes have full, non-NATed connectivity to each other. They do not need to be in the same subnet as long as the connectivity between them works in every direction.

Windows Server 2012 MSMQ Issue

I have set up a failover cluster in Windows Server 2012 with two nodes and use Microsoft MSMQ for communication. If I write to the queue remotely using the node name (DIRECT=OS:mymachine\private$\MyQueueQueue), it works, but when I use the virtual IP of the cluster the messages get stuck in Outgoing Queues.
I have disabled the firewall.
I am able to telnet to MSMQ port 1801 on the node IPs, but not on the virtual IP used by the failover cluster.
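The telnet comparison can be done in one pass with a short script: probe port 1801 on each node IP and on the virtual IP. If only the virtual IP fails while the node IPs answer, the problem is specific to the clustered IP address resource rather than to MSMQ itself. A rough sketch (the addresses in the example are placeholders, not the real cluster's):

```python
import socket

def reachable(host, port=1801, timeout=3):
    """TCP connect test, equivalent to the telnet check above."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with placeholder node IPs and a placeholder virtual IP:
#   for host in ("10.0.0.11", "10.0.0.12", "10.0.0.50"):
#       print(host, reachable(host))
```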