patroni.yml configuration for PostgreSQL HA with etcd

I'm setting up a 3-node PostgreSQL HA cluster with Patroni and etcd for high availability. When I set the data directory variable in each node's patroni.yml to the same local path on all 3 nodes, it works fine. But if I set different values per server, like below:
server A (/etc/patroni/patroni.yml): data directory: /nas/serverA/data
server B (/etc/patroni/patroni.yml): data directory: /nas/serverB/data
server C (/etc/patroni/patroni.yml): data directory: /nas/serverC/data
the replica nodes stay in the "start" state and never move to the "running" state.
Question 1: Can we use a NAS location in patroni.yml (for the data directory)? If yes, should I give the same NAS location on all 3 servers, or should the location differ per server?
Right now I expect patroni.yml to be the same on all 3 servers. But if I want to use a NAS location, can I point all 3 servers at a single path?
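For reference, here is a minimal per-node patroni.yml sketch of the layout described above; the cluster name, host names and ports are hypothetical placeholders, and only server A is shown (servers B and C would use their own name, addresses and data directory):

scope: postgres-ha                  # same cluster name on all 3 nodes (hypothetical)
name: serverA                       # unique per node
etcd3:
  hosts: serverA:2379,serverB:2379,serverC:2379
restapi:
  listen: 0.0.0.0:8008
  connect_address: serverA:8008
postgresql:
  data_dir: /nas/serverA/data       # this node's data directory, per the example above
  listen: 0.0.0.0:5432
  connect_address: serverA:5432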

Related

Is there a built-in processor in Apache NiFi which can create a password-enabled SSH connection?

I have an HDInsight cluster set up on Azure Cloud, and Apache NiFi installed on a separate VM. Please note I have SCP & SSH access enabled from the VM to my cluster. I am trying to set up some processors as per my requirement; the first one in the list is an "ExecuteProcess" processor. What I am trying to achieve through it is to establish an SSH connection with my HDInsight cluster and, once that's successful, pass that result (connection established = 'Y') through a FlowFile to my second processor, a "GetFile" processor that will basically read a JSON file from a particular path in that HDInsight cluster.
I have added the "ExecuteProcess" processor and, in the Configure option -> Properties section, have set the below:
Command : ssh sshdepuser@demodepdata-ssh.azurehdinsight.net
Command Arguments : sshdepuser@demodepdata-ssh.azurehdinsight.net
Batch Duration : No Value Set
Redirect Error Stream : True
Working Directory : No Value Set
Argument Delimiter : No Value Set
Please note sshdepuser@demodepdata-ssh.azurehdinsight.net is the server hostname for my HDInsight cluster to which I am trying to establish connectivity from my VM (Server DNS Name : dep-hadoop.eastus.cloudapp.azure.com)
I am afraid it doesn't work this way: you are not going to be able to pass an SSH connection as a FlowFile. But you can try a workaround: in the ExecuteProcess processor, instead of making only an SSH connection, also copy the file to a local folder; then you can use the GetFile processor.
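A hedged sketch of that workaround using the same property layout as above (the local staging folder, key file and remote file path are hypothetical, and it assumes key-based SSH so no interactive password prompt is needed):

Command : scp
Command Arguments : -i;/home/nifi/.ssh/id_rsa;sshdepuser@demodepdata-ssh.azurehdinsight.net:/path/on/cluster/data.json;/tmp/nifi-staging/
Batch Duration : No Value Set
Redirect Error Stream : True
Working Directory : No Value Set
Argument Delimiter : ;

The GetFile processor can then point its Input Directory at /tmp/nifi-staging to pick up the copied file.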

CTDB Samba failover not highly available

My Setup
3 nodes running ceph + cephfs
2 of these nodes running CTDB & Samba
1 client (not one of the 3 servers)
It is a lab setup, so there is only one NIC per server/node and a single subnet, and all Ceph components plus Samba run on the same servers. I'm aware that this is not the way to go.
The problem
I want to host a clustered Samba file share on top of Ceph with CTDB. I followed the CTDB documentation (https://wiki.samba.org/index.php/CTDB_and_Clustered_Samba#Configuring_Clusters_with_CTDB) and parts of this: https://wiki.samba.org/index.php/Samba_CTDB_GPFS_Cluster_HowTo.
The cluster is running, and a share is reachable, readable and writeable on both nodes. My smb.conf looks as follows:
[global]
netbios name = CEPHFS
workgroup = SIMPLE
clustering = yes
idmap config * : backend = autorid
idmap config * : range = 1000000-1999999
log file = /var/log/samba/smb.log
# Set files creation permissions
create mask = 664
force create mode = 664
# Set directory creation mask
directory mask = 2775
force directory mode = 2775
[public]
comment = public share
path = /mnt/mycephfs/testshare
public = yes
writeable = yes
only guest = yes
ea support = yes
CTDB manages Samba and reports both nodes as OK.
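For completeness, the two CTDB files that drive the IP takeover in this kind of setup look roughly like this on a typical install (the addresses and interface name are hypothetical placeholders, not my actual config):

/etc/ctdb/nodes (private cluster addresses, one per line, identical on every node):
192.168.10.1
192.168.10.2

/etc/ctdb/public_addresses (floating addresses CTDB moves to a healthy node on failover):
192.168.20.100/24 eth0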
But when I read or write to one of the nodes via the public IP and let that node fail (by restarting ctdb), the in-flight read or write fails. A second attempt succeeds (the public IP is taken over by the other host successfully).
But CTDB should be able to handle this according to https://ctdb.samba.org/ -> IP Takeover?
I have a tcpdump of the new server (the one taking over the public IP) sending a TCP RST to my client after the client sends retransmissions to the server.
Any idea what the problem could be?
PS: I'm more than happy to provide you with more information (ctdb config file, firewall configuration, pcap, whatever ;) ) but this is already long enough...

Unable to start Redis Cluster servers

I'm trying to start the Redis Cluster servers by starting 6 servers on ports 7000 to 7005, each with a redis.conf in its own directory, on my macOS Sierra. I can start the first server fine (any one of the 6). Here's an example of one of the commands I run, using Redis 3.2.1:
redis-server /private/etc/redis-3.2.1/src/7002/redis.conf
but starting another would give this error:
11245:M 06 Mar 22:45:22.536 * Increased maximum number of open files to 10032 (it was originally set to 7168).
11245:M 06 Mar 22:45:22.537 # Sorry, the cluster configuration file nodes.conf is already used by a different Redis Cluster node. Please make sure that different nodes use different cluster configuration files.
Following the docs, I have each redis.conf configured like this, with its corresponding port number:
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
This used to work for me. I don't know for certain if it's related, but since then I have built these files into Docker images and containers. However, as far as I can tell I have deleted them, and also this file: /Users/MyUserAccount/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux
I also just deleted all the directories and recreated them in a different location, but it still does not work. What can I do to make these ports available for Redis Cluster again?
UPDATE:
Also, my nodes.conf file is not being recreated in any of the port folders, and all of them only have the redis.conf file. Before, when it worked, a nodes.conf file was generated along with 2 other files (I think a dump file and one other).
It looks like nodes.conf is generated in whatever directory I call redis-server from, and I am able to start the servers if I cd into each directory first. This seems kind of inconvenient, since before I just had a script that launched each redis.conf by its absolute path from a single location. But at least I have some solution.
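A sketch of the redis.conf adjustments that avoid having to cd into each directory before starting a node (directory paths follow the layout above and are otherwise hypothetical): the cluster config file is resolved relative to the server's working directory (the dir setting), so either pin that directory per node, give each node a uniquely named cluster config file, or both.

port 7000
cluster-enabled yes
# Pin this node's working directory so nodes.conf, dump and AOF files land here
dir /private/etc/redis-3.2.1/src/7000
# A unique name per node also guarantees no two nodes ever share a cluster config file
cluster-config-file nodes-7000.conf
cluster-node-timeout 5000
appendonly yes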

Why is Windows Server 2003 addressing Directories and files like so --- C:\Directory\file\something.htm instead of C:/Directory/file/etc?

The service that houses my server recently had to reprovision the server after a failure. The server is Windows and I'm running Apache to power a website.
Prior to the redo, the directories and files were addressed with / as the path separator. Note the drive is partitioned, and what used to be simple drive letters C, D, E are now C:\, etc. How can this be changed?
Apache httpd should always do the directory-separator translation for you. See the httpd filesystem documentation.
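As a minimal illustration (paths are hypothetical and the syntax assumes Apache 2.4), httpd.conf on Windows accepts forward slashes, so directives can keep the / form regardless of how Windows itself displays the paths:

# httpd.conf on Windows - forward slashes are fine in directives
DocumentRoot "C:/Apache24/htdocs"
<Directory "C:/Apache24/htdocs">
    Require all granted
</Directory>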

Changing hostname breaks Rabbitmq when running on Kubernetes

I'm trying to run RabbitMQ on Kubernetes on AWS, using the official RabbitMQ Docker container. Each time the pod restarts, the RabbitMQ container gets a new hostname. I've set up a Service (of type LoadBalancer) for the pod with a resolvable DNS name.
But when I use an EBS volume to make the RabbitMQ config/messages/queues persistent between restarts, it breaks with:
exception exit: {{failed_to_cluster_with,
['rabbitmq@rabbitmq-deployment-2901855891-nord3'],
"Mnesia could not connect to any nodes."},
{rabbit,start,[normal,[]]}}
in function application_master:init/4 (application_master.erl, line 134)
rabbitmq-deployment-2901855891-nord3 is the previous hostname of the rabbitmq container. It is almost as if Mnesia saved the old hostname :-/
The container's info looks like this:
Starting broker...
=INFO REPORT==== 25-Apr-2016::12:42:42 ===
node : rabbitmq@rabbitmq-deployment-2770204827-cboj8
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : XXXXXXXXXXXXXXXX
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbitmq
I'm only able to set the first part of the node name to rabbitmq using the RABBITMQ_NODENAME environment variable.
Setting RABBITMQ_NODENAME to a resolvable DNS name breaks with:
Can't set short node name!\nPlease check your configuration\n"
Setting RABBITMQ_USE_LONGNAME to true breaks with:
Can't set long node name!\nPlease check your configuration\n"
Update:
Setting RABBITMQ_NODENAME to rabbitmq@localhost works, but that negates any possibility of clustering instances.
Starting broker...
=INFO REPORT==== 26-Apr-2016::11:53:19 ===
node : rabbitmq@localhost
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : 9WtXr5XgK4KXE/soTc6Lag==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbitmq@localhost
Setting RABBITMQ_NODENAME to the service name, in this case rabbitmq-service (i.e. rabbitmq@rabbitmq-service), also works, since Kubernetes service names are internally resolvable via DNS.
Starting broker...
=INFO REPORT==== 26-Apr-2016::11:53:19 ===
node : rabbitmq@rabbitmq-service
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : 9WtXr5XgK4KXE/soTc6Lag==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbitmq@rabbitmq-service
Is this the right way though? Will I still be able to cluster multiple instances if the node names are the same?
The idea is to use a different Service and Deployment for each of the nodes you want to create.
As you said, you have to create a custom NODENAME for each, e.g.:
RABBITMQ_NODENAME=rabbit@rabbitmq-1
Also, rabbitmq-1, rabbitmq-2 and rabbitmq-3 have to be resolvable from each node. For that you can use kube-dns. The /etc/resolv.conf will look like:
search rmq.svc.cluster.local
and /etc/hosts must contain:
127.0.0.1 rabbitmq-1 # or rabbitmq-2 on node 2...
The Services are there to create a stable network identity for each node:
rabbitmq-1.svc.cluster.local
rabbitmq-2.svc.cluster.local
rabbitmq-3.svc.cluster.local
The different Deployment resources will allow you to mount a different volume on each node.
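A hedged sketch of one such Service/Deployment pair (written with current apps/v1 syntax rather than the 2016-era API; names, image tag and claim name are hypothetical, and the pair is repeated for rabbitmq-2 and rabbitmq-3 with their own volumes):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-1
spec:
  selector:
    app: rabbitmq-1
  ports:
  - name: amqp
    port: 5672
  - name: clustering
    port: 25672
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq-1
  template:
    metadata:
      labels:
        app: rabbitmq-1
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3
        env:
        - name: RABBITMQ_NODENAME
          value: rabbit@rabbitmq-1        # stable node name matching the Service name
        volumeMounts:
        - name: data
          mountPath: /var/lib/rabbitmq
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: rabbitmq-1-data      # hypothetical pre-created PVC (e.g. EBS-backed)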
I'm working on a deployment tool to simplify those actions; I've done a demo of how I scale and deploy RabbitMQ from 1 to 3 nodes on Kubernetes:
https://asciinema.org/a/2ktj7kr2d2m3w25xrpz7mjkbu?speed=1.5
More generally, the complexity you're facing when deploying a clustered application is addressed in the 'PetSet proposal': https://github.com/kubernetes/kubernetes/pull/18016
In addition to the first reply by @ant31:
Kubernetes now allows you to set a hostname via a pod annotation, e.g. in YAML:
template:
  metadata:
    annotations:
      "pod.beta.kubernetes.io/hostname": rabbit-rc1
See https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns, section "A Records and hostname Based on Pod Annotations - A Beta Feature in Kubernetes v1.2".
It seems that the whole configuration survives multiple restarts or re-schedules. I've not set up a cluster yet; however, I'm going to follow the tutorial for MongoDB, see https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes
The approach will probably be almost the same from a Kubernetes point of view.
