File sync between n web servers in a cluster - Go

There are n nodes in a web cluster. Files may be uploaded to any node and must then be distributed to every other node. This distribution does not have to happen in a transaction (in fact it must not; distributed transactions don't scale), and some latency is acceptable, although it must be minimal. Conflicts can be resolved arbitrarily (typically last write wins), provided that the resolution is also distributed to all nodes, so that eventually all nodes have the same set of files. Nodes can be added and removed dynamically without having to reconfigure the existing nodes. There must be no single point of failure and no additional boxes (such as RabbitMQ) required to solve this.
I am thinking along the lines of using consul.io for dynamic configuration, so that each node can ask Consul which other nodes are available, and writing a daemon in Go that monitors the relevant folders and communicates with the other nodes using ZeroMQ.
It feels like I would be re-inventing the wheel, though. This is a common problem, and I expect there are solutions available already that I don't know about. Or perhaps my approach is wrong and there is another way to solve this?
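For reference, here is a minimal sketch of such a daemon, under several assumptions: each node runs a local Consul agent, the nodes register themselves under a service I've called filesync, and every node exposes an HTTP endpoint /files/<name> that accepts uploads (the service name and endpoint are invented for illustration). It uses plain HTTP rather than ZeroMQ to keep the sketch short, and it omits conflict resolution and retries:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// peers asks the local Consul agent for all registered instances of the
// assumed "filesync" service and returns their host:port addresses.
func peers() ([]string, error) {
	resp, err := http.Get("http://127.0.0.1:8500/v1/catalog/service/filesync")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var entries []struct {
		Address     string
		ServicePort int
	}
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		return nil, err
	}
	addrs := make([]string, 0, len(entries))
	for _, e := range entries {
		addrs = append(addrs, fmt.Sprintf("%s:%d", e.Address, e.ServicePort))
	}
	return addrs, nil
}

// replicate pushes one file to every peer via the assumed upload endpoint.
func replicate(path string) {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Println("read:", err)
		return
	}
	addrs, err := peers()
	if err != nil {
		log.Println("consul:", err)
		return
	}
	for _, addr := range addrs {
		url := fmt.Sprintf("http://%s/files/%s", addr, filepath.Base(path))
		if _, err := http.Post(url, "application/octet-stream", bytes.NewReader(data)); err != nil {
			log.Println("push:", err)
		}
	}
}

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/var/uploads"); err != nil {
		log.Fatal(err)
	}
	// React to new and modified files; deletions would need similar handling.
	for ev := range w.Events {
		if ev.Op&(fsnotify.Create|fsnotify.Write) != 0 {
			replicate(ev.Name)
		}
	}
}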

Yes, there has been some stuff going on with distributed synchronization lately:
You could use syncthing (open source) or BitTorrent Sync.
Syncthing is node-based, i.e. you add nodes to a cluster and choose which folders to synchronize.
BTSync is folder-based, i.e. you obtain a "secret" for a folder and can synchronize with everyone in the swarm for that folder.
In my experience, BTSync has better discovery and connectivity, but the whole synchronization process is closed source and nobody really knows what happens inside it. Syncthing is written in Go, but sometimes has trouble discovering peers.
Both Syncthing and BTSync discover peers via LAN broadcast and a tracker, AFAIK.
EDIT: Or, if you're really cool, use IPFS to host the latest version, IPNS to "name" that and mount the IPNS on the servers. You can set the IPFS bootstrap list to some of your servers, which would even make you independent of external trackers. :)
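The IPFS variant might look roughly like this (the directory path is made up, and <hash> and <peer-id> are placeholders for the values your own nodes print):

ipfs add -r /var/www/shared          # add the current file set; note the printed hash
ipfs name publish <hash>             # point this node's IPNS name at that hash
ipfs mount                           # expose /ipfs and /ipns as FUSE mounts
ipfs bootstrap rm --all              # drop the default public bootstrap list
ipfs bootstrap add /ip4/10.0.0.2/tcp/4001/p2p/<peer-id>   # bootstrap off your own servers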

Related

ActiveMQ Artemis - how does slave become live if there's only one master in the group?

I'm trying to understand a couple of items relating to failover in the ActiveMQ Artemis documentation. Specifically, there is a section (I'm sure I'm reading it wrong) that makes it sound as if it's impossible for a slave to take over for a master:
Specifically, the backup will become active when it loses connection to its live server. This can be problematic because this can also happen because of a temporary network problem. In order to address this issue, the backup will try to determine whether it still can connect to the other servers in the cluster. If it can connect to more than half the servers, it will become active, if more than half the servers also disappeared with the live, the backup will wait and try reconnecting with the live. This avoids a split brain situation
If there is only one other server, the master, the slave will not be able to connect to it. Since that is 100% of the other servers, it will stay passive. How can this work?
I did see that a pluggable quorum vote replication could be configured, but before I delve into that, I'd like to know what I'm missing here.
When using replication with only a single primary/backup pair there is no mitigation against split brain. When the backup loses its connection with the primary it will activate since it knows there are no other primary brokers in the cluster. Otherwise it would never activate, as you note.
The documentation should be clarified to remove this ambiguity.
Lastly, the documentation you referenced does not appear to be the most recent; the latest documentation differs slightly from what you quoted (although it still contains this ambiguity).
Generally speaking, a single primary/backup pair with replication is only recommended with the new pluggable quorum voting since the risk of split brain is so high otherwise.
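To make the quorum rule concrete, here is an illustrative sketch in Go (not Artemis code) of the decision the quoted documentation describes a backup making when it loses its connection to the live server:

package main

import "fmt"

// shouldActivate reports whether a backup should become active after losing
// its live server, given how many of the other cluster members it can reach.
func shouldActivate(reachableOthers, totalOthers int) bool {
	if totalOthers == 0 {
		// A lone primary/backup pair: there is nobody else to consult,
		// so the backup activates; split brain cannot be mitigated here.
		return true
	}
	// Activate only if strictly more than half of the other servers are
	// still reachable; otherwise assume a partition and keep waiting.
	return 2*reachableOthers > totalOthers
}

func main() {
	fmt.Println(shouldActivate(0, 0)) // true: single pair, backup takes over
	fmt.Println(shouldActivate(1, 4)) // false: looks like a partition, wait
	fmt.Println(shouldActivate(3, 4)) // true: majority still visible
}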

Does the kubernetes scheduler support anti-affinity?

I'm looking at deploying Kubernetes on top of a CoreOS cluster, but I think I've run into a deal breaker of sorts.
If I'm using just CoreOS and fleet, I can specify within the unit files that I want certain services to not run on the same physical machine as other services (anti-affinity). This is sort of essential for high availability. But it doesn't look like kubernetes has this functionality yet.
In my specific use-case, I'm going to need to run a few clusters of elasticsearch machines that need to always be available. If, for any reason, kubernetes decides to schedule all of my elasticsearch node containers for a given ES cluster on a single machine, (or even the majority on a single machine), and that machine dies, then my elasticsearch cluster will die with it. That can't be allowed to happen.
It seems like there could be work-arounds. I could set up the resource requirements and machine specs such that only one elasticsearch instance could fit on each machine. Or I could probably use labels in some way to specify that certain elasticsearch containers should go on certain machines. I could also just provision way more machines than necessary, and way more ES nodes than necessary, and assume kubernetes will spread them out enough to be reasonably certain of high availability.
But all of that seems awkward. It's much more elegant from a resource-management standpoint to just specify required hardware and anti-affinity, and let the scheduler optimize from there.
So does Kubernetes support anti-affinity in some way I couldn't find? Or does anyone know if it will any time soon?
Or should I be thinking about this another way? Do I have to write my own scheduler?
Looks like there are a few ways that kubernetes decides how to spread containers, and these are in active development.
Firstly, of course there have to be the necessary resources on any machine for the scheduler to consider bringing up a pod there.
After that, kubernetes spreads pods by replication controller, attempting to keep the different instances created by a given replication controller on different nodes.
It seems a method of scheduling that considers services and various other parameters was recently implemented: https://github.com/GoogleCloudPlatform/kubernetes/pull/2906. Though I'm not completely clear on exactly how to use it. Perhaps in coordination with this scheduler config? https://github.com/GoogleCloudPlatform/kubernetes/pull/4674
Probably the most interesting issue to me is that none of these scheduling priorities are considered during scale-down, only scale-up: https://github.com/GoogleCloudPlatform/kubernetes/issues/4301. That's a bit of a big deal; it seems that over time you could end up with weird distributions of pods, because they stay wherever they were originally placed.
Overall, I think the answer to my question at the moment is that this is an area of kubernetes that is in flux (as to be expected with pre-v1). However, it looks like much of what I need will be done automatically with sufficient nodes, and proper use of replication controllers and services.
Anti-affinity is when you don't want certain pods to run on certain nodes. For the present scenario, I think taints and tolerations can come in handy. You can mark nodes with a taint; then only those pods which explicitly tolerate that taint are allowed to run on those nodes.
Below I describe how the anti-affinity concept can be implemented:
Taint any node you want.
kubectl taint nodes gke-cluster1-default-pool-4db7fabf-726z env=dev:NoSchedule
Here, env=dev is the key=value pair of the taint (we will also reuse it as a node label below), and NoSchedule is the effect, which means no pod will be scheduled on this node unless it carries a matching toleration.
Create a deployment
kubectl run newdep1 --image=nginx
Now give the deployment's pod template a toleration for that taint so its pods are allowed onto the node, plus a node label and matching nodeSelector so they are actually attracted to it (note that labeling the deployment object itself does not affect scheduling, and the taint only keeps other pods away):
kubectl label nodes gke-cluster1-default-pool-4db7fabf-726z env=dev
kubectl patch deployment newdep1 -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"env","operator":"Equal","value":"dev","effect":"NoSchedule"}],"nodeSelector":{"env":"dev"}}}}}'
kubectl scale deployment newdep1 --replicas=5
kubectl get pods -o wide
Running this, you will see that all pods of newdep1 land on your tainted node, and that node will not host any other pod that lacks the matching toleration.

Configuring a DataStax cluster dynamically

I want to create a connection routine that excludes nodes which are down at connection time and does not create pools to those nodes. If a node goes down later, it should automatically be blacklisted from the round robin. Is there a way to do this?
The whitelist policy starts from a fixed set of nodes but does not change it dynamically (to my knowledge). I did not find a way to construct a "Host" object, nor a way to get up/down status through Java code the way the nodetool utility reports it - and I'd like to do so without starting a session, just as nodetool manages without going into cqlsh.
Are you using the DataStax Java driver? I'm guessing you are from your mention of the whitelist policy. The driver already does automatic node discovery and transparent failover.
If a node goes down, the driver will detect it and stop issuing queries to this node until it comes back up. If you are looking for different functionality can you elaborate?

Is it safe to use etcd across multiple data centers?

Is it safe to use etcd across multiple data centers, given that this exposes the etcd port to the public internet?
Do I have to use client certificates in this case, or does etcd have some sort of authentication?
Yes, but there are two big issues you need to tackle:
Security. This all depends on what type of info you are storing in etcd. Using a point-to-point VPN is probably preferable to exposing the entire cluster to the internet. Client certificates can also be used.
Tuning. etcd relies on replication between machines for two things: aliveness and consensus. Since a successful write must be committed to a majority of the cluster before it returns as successful, your write performance will degrade as the distance between the machines increases. Aliveness is measured with periodic heartbeats between the machines. By default, etcd has a fairly aggressive 50ms heartbeat timeout, which is optimized for bare-metal servers running on a local network. Without tuning this timeout value, your cluster will constantly think that members have disappeared and trigger frequent master elections. This gets worse if both of your environments are on cloud providers with variable networks plus disk writes that traverse the network, a double whammy.
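For example, you might start each member along these lines (illustrative values only; measure the actual round-trip time between your data centers first, and note the common rule of thumb of an election timeout around ten times the heartbeat interval):

etcd --heartbeat-interval=300 --election-timeout=3000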
More info on etcd tuning: https://etcd.io/docs/latest/tuning/

Is there a way for Asterisk to reconnect calls when the internet connection is lost?

To be specific, I am using Asterisk with a Heartbeat active/passive cluster. There are two nodes in the cluster; let's call them Asterisk1 and Asterisk2. Everything is well configured in my cluster. When one of the nodes loses its internet connection, the Asterisk service fails, or Asterisk1 is turned off, the Asterisk service and the failover IP migrate to the surviving node (Asterisk2).
The problem is that if we were actually processing a call when Asterisk1 went down, the call is dropped, and I can redial only once the Asterisk service is up on Asterisk2 (5 seconds, not a bad time).
But my question is: is there a way to make Asterisk behave like Skype when it loses the connection during a call? I mean, not dropping the call, but trying to reconnect it, and reconnecting it once the Asterisk service is up on Asterisk2?
There are some commercial systems that support such behaviour.
If you want to do it on a non-commercial system, there are two ways:
1) Force a call back to all phones with the auto-answer flag set. Requirement: guru-level Asterisk knowledge.
2) Use Xen and a memory mapping/mirroring system to maintain, on the other node, a VPS with the same memory state (the same running Asterisk). Requirement: guru-level Xen knowledge. See for example: http://adrianotto.com/2009/11/remus-project-full-memory-mirroring/
Sorry, both methods require guru knowledge.
Note: if you run SIP through an OpenVPN tunnel, you will very likely not lose calls inside the tunnel if the internet goes down for up to 20 seconds. That is not exactly what you asked, but it can work.
Since there is no accepted answer after almost 2 years I'll provide one: NO. Here's why.
If you fail over from Asterisk server 1 to Asterisk server 2, then Asterisk server 2 has no idea what calls (i.e., endpoint to endpoint) were in progress (even if you share a database of called numbers, use Asterisk Realtime, etc.). If Asterisk tried to bring up both legs of the call to the same numbers, these might not be the same endpoints of the call.
Another server cannot resume the SIP TCP session of the failed server, since that session was closed when the server went down.
The source/destination addresses and ports may be identical, but your firewall will not know you are trying to continue the same session.
etc.....
If your goal is high availability of phone services, take a look at the VoIP Info web site. All the rest (network redundancy, disk redundancy, shared block storage devices, router failover protocols, etc.) is a distraction; focus instead on early DETECTION of failures across all trunks/routes/devices involved in providing phone service, and then on providing the highest degree of recovery without sharing ANY DEVICES. (Too many HA solutions share a disk, channel bank, etc., which creates a single point of failure.)
Your solution would require a shared database that is updated in real time on both servers. The database would be managed by an event logger that keeps track of all calls in progress, flagged as LINEUP perhaps. In the event a failure is detected, all calls that were on the failed server are flagged as DROPPEDCALL. When your fail-over server spins up and takes over (using heartbeat monitoring or some such), the first thing it does is generate a set of call files from all database records flagged as DROPPEDCALL. These calls can then be conferenced together.
The hardest part is the event monitor: ensuring that you don't miss any RING or HANGUP events, which could leave a "ghost" call in the system to be erroneously dialed in a recovery operation.
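As a sketch of just the recovery step described above (the DSN, table, column, and flag names are all invented to match the description; a real system would also need the dialplan context that conferences the recovered legs):

package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/go-sql-driver/mysql" // assumed driver; any shared DB would do
)

func main() {
	// Assumed shared database maintained by the event logger.
	db, err := sql.Open("mysql", "asterisk:secret@tcp(db.example)/ha")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	rows, err := db.Query(`SELECT id, channel FROM calls WHERE state = 'DROPPEDCALL'`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var id int
		var channel string
		if err := rows.Scan(&id, &channel); err != nil {
			log.Fatal(err)
		}
		// A classic Asterisk call file: re-originate the stored channel into
		// an assumed "recovery" dialplan context that conferences the call.
		body := fmt.Sprintf("Channel: %s\nContext: recovery\nExtension: s\nPriority: 1\n", channel)
		// Write elsewhere first and rename into the outgoing spool, so
		// Asterisk never picks up a half-written call file.
		tmp := fmt.Sprintf("/var/spool/asterisk/tmp/recover-%d.call", id)
		out := fmt.Sprintf("/var/spool/asterisk/outgoing/recover-%d.call", id)
		if err := os.WriteFile(tmp, []byte(body), 0600); err != nil {
			log.Fatal(err)
		}
		if err := os.Rename(tmp, out); err != nil {
			log.Fatal(err)
		}
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}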
You likely should also have a mechanism to build your Asterisk config on a "management" machine that then pushes changes out to your farm of call-manager AST boxen. That way any node is replaceable with any other.
What you should likely have is two DB servers using replication techniques and Linux High-Availability (LHA) (1). Alternately, DNS round-robin or load balancing with a "public" IP would do well too. These machines will likely be under a light enough load to host your configuration manager as well, with the benefit of getting LHA for "free".
Then, at least N+1 AST boxen for call handling, where N is the number of calls you plan on handling per second divided by 300 and the "+1" is your fail-over node. (For example, planning for 600 calls per second gives N = 2, so three boxes in total.) Using node polling, you can then set up a mechanism where the fail-over node adopts the identity of the failed machine by pulling the correct configuration from the config manager.
If hardware is cheap/free, then 1:1 LHA node redundancy is always an option. Generally speaking, though, the failure rate for PC hardware and Asterisk software is fairly low; three or four "9s" out of the can. So, really, you're trying to cover that last bit of distance to the fifth "9".
I hope that gives you some ideas about which way to go. Let me know if you have any questions, and please take the time to "accept" whichever answer does what you need.
(1) http://www.linuxjournal.com/content/ahead-pack-pacemaker-high-availability-stack
