I have set up a two-node failover cluster on Windows Server 2012 and use Microsoft MSMQ for communication. If I write to the queue remotely using the node name (DIRECT=OS:mymachine\private$\MyQueueQueue), the write succeeds, but when I use the virtual IP of the cluster the messages get stuck in Outgoing Queues.
I have disabled the firewall.
I am able to telnet to the MSMQ port 1801 on the node IPs, but not on the virtual IP used by the failover cluster.
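For a quick send test, here is a minimal sketch using the MSMQ COM API through pywin32; the cluster network name is a placeholder. With a clustered Message Queuing resource, the format name should target the cluster resource's network name rather than a physical node:

```python
# Minimal MSMQ send test via the COM API (requires pywin32).
# "myclustername" is a placeholder for the clustered MSMQ network name.
import win32com.client

qinfo = win32com.client.Dispatch("MSMQ.MSMQQueueInfo")
qinfo.FormatName = r"DIRECT=OS:myclustername\private$\MyQueueQueue"

MQ_SEND_ACCESS = 2   # open the queue for sending
MQ_DENY_NONE = 0     # no share restrictions

queue = qinfo.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)

msg = win32com.client.Dispatch("MSMQ.MSMQMessage")
msg.Label = "cluster send test"
msg.Body = "hello"
msg.Send(queue)
queue.Close()
```

If the same send succeeds against the node name but stalls in Outgoing Queues against the cluster name, that points at the clustered MSMQ resource's network listener rather than at the queue itself.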
I have a 3-node Windows failover cluster; each node runs Windows Server 2019 Datacenter.
Each node has a mail server (hMailServer) installed, and each node's mail server service connects to a central MySQL database and uses cluster shared storage that is visible to every node.
The IP addresses in my network are:
Firewall/Router 192.168.1.1
Node 1 192.168.1.41
Node 2 192.168.1.42
Node 3 192.168.1.43
Mailserver Role 192.168.1.51 (virtual IP)
I added port forwards in my firewall/router for all standard mail server ports and pointed them at the physical IP of node 1, then confirmed that the public internet could reach the mail server properly. I then repointed the port forwards at node 2 and then node 3 to verify that each node had a working mail server, and all worked perfectly.
Then I set up a cluster role using a Generic Service pointing at the mail server service and gave this role the IP 192.168.1.51. Initially, the role attached itself to node 3 as its host node.
I changed the port forwards in the router to point at this virtual IP. While connected inside the same LAN, I could fail the role over and the mail server kept working as it moved from node to node. But when I connected through the public internet again, the mail server was only reachable while the cluster role was hosted on node 3; if it moved to node 1 or 2, the connection broke.
I tried turning off, disabling, and resetting the Windows Firewall on each node, and ensured the firewall settings were identical on every node, but this had no effect on the problem. What am I missing?
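For diagnosis, a rough sketch that connects to the role's virtual IP and prints the SMTP greeting (which typically includes the answering host's name); running it from outside after a failover shows whether traffic is still being delivered to the old node, for example because the router caches a stale ARP entry for 192.168.1.51:

```python
# Connect to the clustered role's virtual IP and print the SMTP banner,
# to see which physical node is actually answering after a failover.
# IP and port are taken from the setup described above.
import socket

with socket.create_connection(("192.168.1.51", 25), timeout=5) as s:
    banner = s.recv(1024).decode(errors="replace")
    print(banner.strip())
```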
I want to create a failover web server using Windows Server 2016. If the first machine fails, the server should move to the second machine. However, I'm not sure which clustered role I should use.
Clustered Role | Role or Feature Prerequisite
DFS Namespace Server | DFS Namespaces (part of File Server role)
DHCP Server | DHCP Server role
Distributed Transaction Coordinator (DTC) | None
File Server | File Server role
Generic Application | Not applicable
Generic Script | Not applicable
Generic Service | Not applicable
Hyper-V Replica Broker | Hyper-V role
iSCSI Target Server | iSCSI Target Server (part of File Server role)
iSNS Server | iSNS Server Service feature
Message Queuing | Message Queuing Services feature
Other Server | None
Virtual Machine | Hyper-V role
WINS Server | WINS Server feature
https://learn.microsoft.com/en-us/windows-server/failover-clustering/create-failover-cluster#create-clustered-roles. PS: I am interested in both scenarios: a web server with a database and one without.
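As a rough way to watch the failover from a client's perspective, a small availability probe; the address below is a hypothetical client access point for the clustered role:

```python
# Poll the clustered web server once per second and report up/down,
# so the outage window during a failover is visible.
# 192.168.1.50 is a placeholder for the role's client access point.
import time
import urllib.request

URL = "http://192.168.1.50/"

while True:
    try:
        with urllib.request.urlopen(URL, timeout=3) as resp:
            print(time.strftime("%H:%M:%S"), "up", resp.status)
    except Exception as exc:
        print(time.strftime("%H:%M:%S"), "DOWN:", exc)
    time.sleep(1)
```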
I need to block all IPs on a network and give unrestricted access to 3-4 computers.
So I have created an IPsec policy as follows:
IP List 1: contains the 4 allowed IPs, with a filter action of Permit
IP List 2: Any IP Address, with a filter action of Block
After adding this policy, the application (which uses socket communication for IPC on the local machine) works fine on the Windows 7 machine, but not on the Windows Server 2008 machine.
Please note that for interprocess communication within a machine we use sockets.
When we enable the block filter for all IPs in the IPsec policy, programs that use the local machine's own IP, e.g. 10.78.78.78 connecting to 10.78.78.78 (server and client application on the same machine) for interprocess communication, stop working; we had to add a firewall exception exempting authentication between the machine's own IPs.
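To make the same-machine pattern concrete, a toy sketch where server and client talk over the machine's own address instead of 127.0.0.1; 10.78.78.78 is the placeholder from above, and this is exactly the traffic an "Any IP Address" block filter matches:

```python
# Server and client in one process, both using the machine's own IP
# (placeholder 10.78.78.78) rather than the loopback address, mirroring
# the IPC pattern described above.
import socket
import threading
import time

HOST, PORT = "10.78.78.78", 9000  # substitute the machine's real IP

def server():
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print("server received:", conn.recv(64).decode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)  # give the listener a moment to start

with socket.create_connection((HOST, PORT), timeout=5) as client:
    client.sendall(b"ping")
time.sleep(0.5)  # let the server print before the process exits
```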
I have the following situation:
A private enterprise network with an Icinga2 master, monitoring the internal servers. The firewall blocks all inbound access; however, all servers have outbound internet access (multiple protocols, such as SSH, HTTP, HTTPS).
We also have an environment in Azure with one publicly accessible VM (nginx) and, behind it in a private network, application servers. I'd also like to monitor these servers. I read that I can set up an Icinga2 satellite (in Azure) that monitors the Azure environment and sends the data to the master.
This would be a great solution. However, my master is in our private enterprise network, so the Icinga2 satellite can't push any data to the master. The only option would be for the master to pull the data periodically from the satellite(s). It is possible for the master to log in to the servers in Azure via SSH agent forwarding. Is this possible, or is there a better solution? I'd rather not create a reverse SSH tunnel.
You might just use the Icinga2 client and let the master connect to the client (ensure that the Endpoint object contains the host/port attributes). Once the connection is established, the client will send its check results (and will replay historical logs, if there are any).
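A sketch of the relevant zones.conf objects on the master, with placeholder names; because the Endpoint carries host/port, the master initiates the TCP connection to the satellite on port 5665, which fits the outbound-only firewall described above:

```
// On the master: placeholder endpoint/zone for the Azure satellite.
object Endpoint "azure-satellite.example.com" {
  host = "azure-satellite.example.com"  // the publicly reachable address
  port = "5665"
}

object Zone "azure" {
  endpoints = [ "azure-satellite.example.com" ]
  parent = "master"
}
```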
I recently installed Cloudera's CDH distribution to create a 2-node cluster. In the Cloudera Manager UI, all services are running fine.
All the command-line tools (hive, etc.) are also working fine, and I am able to read and write data to HDFS.
However, the namenode (and datanode) web UI alone is not opening. Checking netstat -a | grep LISTEN, the processes are listening on the assigned ports, and there are no firewall rules blocking the connections (I already disabled iptables).
I initially thought it could be a DNS issue, but even the IP address is not working. Cloudera Manager, installed on the same machine on another port, opens fine.
Any pointers on how to debug this problem?
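One quick sanity check is whether the web UI port accepts connections on each of the host's addresses, since a service can listen on a single interface while netstat still shows it as LISTEN; a sketch, using the Hadoop 2.x default namenode HTTP port and placeholder addresses:

```python
# Try the namenode web UI port on each of the host's addresses.
# Replace the placeholders with the machine's actual IPs; 50070 is the
# Hadoop 2.x default namenode HTTP port.
import socket

for addr in ("127.0.0.1", "10.0.0.1", "192.168.0.1"):
    try:
        socket.create_connection((addr, 50070), timeout=3).close()
        print(addr, "port 50070 open")
    except OSError as exc:
        print(addr, "port 50070 unreachable:", exc)
```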
I had faced the same issue.
First it was because the NAMENODE was in safe mode.
Then it was because of two IP addresses (I have two NICs configured on the CDH cluster: one for internal connectivity between the servers (10.0.0.1) and the other for connecting to the servers from the Internet (192.168.0.1)).
When I try to open the NAMENODE GUI from any server connected to the cluster on the 10.0.0.1 network, the GUI opens and works fine, but from any other machine connected to the servers via the 192.168.0.1 network it fails.
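One common fix for this dual-NIC case, assuming the default ports, is to bind the namenode's web UI to the wildcard address in hdfs-site.xml (or via the corresponding override in Cloudera Manager) and restart HDFS:

```xml
<!-- Bind the namenode web UI to all interfaces instead of a single NIC.
     50070 is the Hadoop 2.x default HTTP port; adjust if yours differs. -->
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:50070</value>
</property>
```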