How can I ensure that software installed on a cluster is always available?
I understand that I can install the software on a shared drive, and if one node goes down, the other node will take over.
But what about Windows system dependencies such as registry entries, the Windows directory, services, etc.?
Will these also be shared across the nodes?
Basically, if I have software written in C++/C# with many Windows OS resource dependencies (registry, services, etc.), how can I ensure it is highly available through a cluster? Is this possible?
Thanks & Regards
Sunil
For this scenario, let's assume:
There are two servers in the cluster: ServerA and ServerB.
Each server has its own local drive (C:).
Each server has access to a shared/common drive called F:\ (probably on an external SAN)
When installing or updating your application on the Failover Cluster, first ensure ServerA is the cluster owner/active node. Install your application as usual, ensuring the install path is a folder on the shared F: drive.
Once the install is complete on ServerA, go into Failover Cluster Manager and make ServerB the cluster owner/active node. Repeat the install on ServerB, using the same folder on F:\ as the installation path.
If your application is a Windows service (or set of services), make sure that after the application installation you configure each service as a Generic Service resource in the Failover Cluster. Then always stop/start the service via Failover Cluster Manager.
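If you prefer to script that last step, a minimal PowerShell sketch (run on one of the cluster nodes with the FailoverClusters module available; the service, role, and disk names below are placeholders, not anything your installer creates) might look like this:

```powershell
Import-Module FailoverClusters

# Register the already-installed Windows service as a clustered Generic Service
# role, attaching the cluster disk that backs F:\ to the same role.
Add-ClusterGenericServiceRole -ServiceName "MyAppService" `
                              -Name "MyAppRole" `
                              -Storage "Cluster Disk 2"

# From then on, control and move the service through the cluster, not services.msc:
Start-ClusterGroup -Name "MyAppRole"
Move-ClusterGroup  -Name "MyAppRole" -Node "ServerB"
```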
I am migrating a standard all-Linux Nomad/Consul cluster. The Nomad/Consul servers use almost no resources with our workloads, so spinning up dedicated Linux VMs just for them in our new environment seems wasteful, especially since the environment I am moving to has multiple Windows VMs with spare capacity that I could use for the Nomad server and Consul server processes to get the necessary redundancy.
So my question boils down to this: if I have the Consul server and Nomad server processes exclusively on Windows, and the Nomad agent and Consul agent processes exclusively on Linux, will they all just get along? The Nomad jobs are all Dockerized except for a native system Prometheus exporter.
Both Consul and Nomad are operating-system agnostic; you can use a mix of operating systems within your cluster without issue. The main requirements are direct IP connectivity between the agents (i.e., no NAT), low latency (sub-10 ms), and the required ports opened for Consul and/or Nomad agent communication.
See https://www.consul.io/docs/install/ports and https://www.nomadproject.io/docs/install/production/requirements#ports-used for more detail.
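As an illustrative sketch only, assuming the Linux machines run firewalld and you are using the default ports from the links above, opening the Consul and Nomad agent ports could look like this:

```sh
# Consul defaults: 8300 (server RPC), 8301 (Serf LAN, tcp+udp),
# 8500 (HTTP API), 8600 (DNS, tcp+udp).
sudo firewall-cmd --permanent --add-port=8300/tcp --add-port=8301/tcp --add-port=8301/udp
sudo firewall-cmd --permanent --add-port=8500/tcp --add-port=8600/tcp --add-port=8600/udp

# Nomad defaults: 4646 (HTTP API), 4647 (RPC), 4648 (Serf, tcp+udp).
sudo firewall-cmd --permanent --add-port=4646/tcp --add-port=4647/tcp
sudo firewall-cmd --permanent --add-port=4648/tcp --add-port=4648/udp

sudo firewall-cmd --reload
```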
I don't see a way to configure the cluster FQDN for an on-premises Service Fabric installation.
I created a 6-node cluster (each node running on a physical server), and I'm only able to contact each node on its own IP instead of contacting the cluster via a "general FQDN". With this model, I have to be aware of which nodes are up and which are down.
Does anybody know how to achieve this, based on the sample configuration files provided with the Service Fabric standalone installation package?
You need to add a network load balancer to your infrastructure for that. This will be used to route traffic to healthy nodes.
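As a minimal sketch of that idea, assuming HAProxy as the load balancer, three of your nodes at the placeholder addresses 10.0.0.1-10.0.0.3, and Service Fabric's default HTTP gateway port 19080, the configuration could look like the snippet below; your "general FQDN" would then be a DNS name pointing at the HAProxy box (or a redundant pair of them):

```sh
# Append a simple TCP frontend/backend that spreads traffic across the
# cluster nodes and drops any node that stops answering its health check.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend sf_gateway
    bind *:19080
    mode tcp
    default_backend sf_nodes

backend sf_nodes
    mode tcp
    balance roundrobin
    server node1 10.0.0.1:19080 check
    server node2 10.0.0.2:19080 check
    server node3 10.0.0.3:19080 check
EOF

systemctl reload haproxy
```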
I need to set up a two-node web cluster for an Apache website. I have a Hyper-V infrastructure and only two nodes.
The goals are load balancing and high availability.
I installed and configured two VMs with CentOS 7, a Pacemaker cluster, and MariaDB 10. I configured a master/slave ocf::percona:mysql resource in Pacemaker.
Next, I need shared storage for the website content.
I created a DRBD device in dual-primary mode with GFS2 on top of it. I tested it without adding it to Pacemaker and everything worked fine, but to have it promoted automatically, I need to manage these resources via Pacemaker.
The problem is that Pacemaker needs fencing to create the DRBD resource, but there are no STONITH agents for Hyper-V.
I read that in the previous version for CentOS 6 it was possible to create an SSH STONITH agent. I tried to do this, but pcs does not work with it.
Is it possible to use Pacemaker on top of Hyper-V right now? Or is there another way to use DRBD in dual-primary mode?
I have tried many solutions, but none of them worked well.
I set up two-way file replication using lsyncd instead.
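For what it's worth, a minimal sketch of the lsyncd setup (assuming the web root is /var/www/html and the peer node is reachable over SSH as nodeb; both names are placeholders) looks roughly like this on each node, with the mirror-image config on the other node:

```sh
# Push local changes to the peer over rsync+ssh; the other node runs the
# same config with host pointed back at this one.
cat > /etc/lsyncd.conf <<'EOF'
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.status"
}
sync {
    default.rsyncssh,
    source    = "/var/www/html",
    host      = "nodeb",
    targetdir = "/var/www/html",
    delay     = 1
}
EOF

systemctl enable --now lsyncd
```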
Newbie w/ etcd/zookeeper type services ...
I'm not quite sure how to handle cluster installation for etcd. Should the service be installed on each client or on a group of independent servers? I ask because, if I'm on a client, how would I query the cluster? Every tutorial I've read shows a curl command running against localhost.
For an etcd cluster, you install the etcd service on a group of independent servers and have them form a cluster. Cluster information can then be queried by logging onto one of those machines and running curl against localhost, or remotely by specifying the IP address of one of the cluster member nodes.
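For example, assuming three members at the placeholder addresses 10.0.0.11-13 and the default client port 2379, you can query any member remotely instead of localhost:

```sh
# Hit one member's HTTP API directly...
curl http://10.0.0.11:2379/version

# ...or ask for the member list through etcdctl (v3 API), passing every
# endpoint so any healthy member can answer.
ETCDCTL_API=3 etcdctl \
  --endpoints=http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379 \
  member list
```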
For more information on how to set it up, follow this article
I spun up a Mesosphere cluster on Digital Ocean (development), and it won't allow external (non-VPN) connections to containers or apps. How can this be solved?
To ensure that the world doesn't have access to your cluster, iptables rules are installed as part of the setup. By default, these allow full access inside the cluster and nothing externally.
If you're interested in running real applications, I'd recommend the following:
Put HAProxy on a single node.
Set up the haproxy-marathon-bridge script.
On the same box where you installed HAProxy, set up iptables to allow access to the port that HAProxy is listening on (see the sketch below).
By doing this, you'll have a single place to refer to when giving access to applications running on your Mesos cluster. No matter where the app or container is scheduled (with Marathon), you'll always be able to reach it via HAProxy.
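A sketch of the iptables step, assuming HAProxy is listening on port 80 on that node (adjust the port to whatever you configured):

```sh
# Allow inbound traffic to the HAProxy listener on this node only; the
# default cluster-wide restrictions stay in place everywhere else.
iptables -I INPUT -p tcp --dport 80 -j ACCEPT

# Persist the rule across reboots (the file/tool for this varies by distro;
# /etc/iptables/rules.v4 is the Debian/Ubuntu iptables-persistent location).
iptables-save > /etc/iptables/rules.v4
```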