I have to move a Windows-based multi-threaded application (which uses global variables as well as an RDBMS for storage) to an NLB (network load balancer) cluster. The common architectural issues that immediately come to mind are:
Global variables (which are both read and written) will have to be moved to shared storage. What are the best practices here? Is there anything available in the Windows Clustering API to manage such things?
My application uses sockets, and persistent connections are the norm in the field I work in. I believe persistent connections cannot be load balanced. Again, what are the architectural recommendations in this regard?
I'll answer the persistent connection part of the question first since it's easier. All good network load-balancing solutions (including Microsoft's NLB service built into Windows Server, but also including load balancing devices like F5 BigIP) have the ability to "stick" individual connections from clients to particular cluster nodes for the duration of the connection. In Microsoft's NLB this is called "Single Affinity", while other load balancers call it "Sticky Sessions". Sometimes there are caveats (for example, Microsoft's NLB will break connections if a new member is added to the cluster, although a single connection is never moved from one host to another).
Re: global variables, they are the bane of load-balanced systems. Most designers of load-balanced apps will do a lot of re-architecture to minimize dependence on shared state, since it impedes the scalability and availability of a load-balanced application. Most of these approaches come down to a two-step strategy: first, move shared state to a highly-available location, and second, change the app to minimize the number of times that shared state must be accessed.
Most clustered apps I've seen will store shared state (even shared, volatile state like global variables) in an RDBMS. This is mostly out of convenience. You can also use an in-memory database for maximum performance. But the simplicity of using an RDBMS for all shared state (transient and durable), plus the use of existing database tools for high-availability, tends to work out for many services. Perf of an RDBMS is of course orders of magnitude slower than global variables in memory, but if shared state is small you'll be reading out of the RDBMS's cache anyways, and if you're making a network hop to read/write the data the difference is relatively less. You can also make a big difference by optimizing your database schema for fast reading/writing, for example by removing unneeded indexes and using NOLOCK for all read queries where exact, up-to-the-millisecond accuracy is not required.
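As a rough illustration (the SharedState table, connection string, and the use of pyodbc are my assumptions, not part of the original setup), reading and writing a small piece of shared state through SQL Server with a NOLOCK read might look like this:

```python
import pyodbc

# Hypothetical table: SharedState(Name NVARCHAR(100) PRIMARY KEY, Value NVARCHAR(MAX))
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbserver;DATABASE=AppState;Trusted_Connection=yes"
)

def read_shared(name):
    # NOLOCK (read uncommitted) avoids blocking behind writers; acceptable when
    # up-to-the-millisecond accuracy is not required.
    cur = conn.cursor()
    cur.execute("SELECT Value FROM SharedState WITH (NOLOCK) WHERE Name = ?", name)
    row = cur.fetchone()
    return row[0] if row else None

def write_shared(name, value):
    cur = conn.cursor()
    cur.execute("UPDATE SharedState SET Value = ? WHERE Name = ?", value, name)
    conn.commit()
```

The point is not the specific client library; it's that a shared-state read with a dirty-read hint stays cheap as long as the working set fits in the database's cache.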
I'm not saying an RDBMS will always be the best solution for shared state, only that improving shared-state access times is usually not how load-balanced apps get their performance. Instead, they get performance by removing the need to synchronously access (and, especially, write to) shared state on every request. That's the second thing I noted above: changing your app to reduce dependence on shared state.
For example, for simple "counters" and similar metrics, apps will often queue up their updates and have a single thread in charge of updating shared state asynchronously from the queue.
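A minimal sketch of that pattern (all names are illustrative): request threads only enqueue increments, and a single background thread periodically coalesces them and writes one batched update to the shared store.

```python
import queue
import threading
import time

updates = queue.Queue()

def bump(counter_name, amount=1):
    # Called from request-handling threads; never touches shared storage directly.
    updates.put((counter_name, amount))

def flush_worker(write_batch_to_store, interval=1.0):
    # Single writer: drains the queue, coalesces increments, and persists one
    # batched update per interval to the shared store (RDBMS, Redis, ...).
    while True:
        time.sleep(interval)
        batch = {}
        while True:
            try:
                name, amount = updates.get_nowait()
            except queue.Empty:
                break
            batch[name] = batch.get(name, 0) + amount
        if batch:
            write_batch_to_store(batch)

threading.Thread(
    target=flush_worker,
    args=(lambda batch: print("would persist", batch),),  # replace with a real writer
    daemon=True,
).start()
```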
For more complex cases, apps may switch from Pessimistic Concurrency (checking that a resource is available beforehand) to Optimistic Concurrency (assuming it's available, and then backing out the work later if you ended up, for example, selling the same item to two different clients!).
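A hedged sketch of optimistic concurrency against an RDBMS, assuming a hypothetical Inventory table with a Version column and a pyodbc-style connection: the UPDATE only succeeds if nobody changed the row since we read it, and a zero row count means we lost the race and must retry or compensate.

```python
def reserve_item(conn, item_id):
    cur = conn.cursor()
    cur.execute("SELECT Quantity, Version FROM Inventory WHERE ItemId = ?", item_id)
    quantity, version = cur.fetchone()
    if quantity < 1:
        return False

    # Optimistic: assume nobody else touched the row; the WHERE clause on
    # Version makes the update a no-op if they did.
    cur.execute(
        "UPDATE Inventory SET Quantity = Quantity - 1, Version = Version + 1 "
        "WHERE ItemId = ? AND Version = ?",
        item_id, version,
    )
    conn.commit()
    if cur.rowcount == 0:
        # Lost the race: retry, or back out the work (e.g. apologize to one buyer).
        return False
    return True
```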
Net-net, in load-balanced situations, brute-force solutions often don't work as well as thinking creatively about your dependency on shared state and coming up with inventive ways to avoid waiting on synchronous reads or writes of shared state on every request.
I would not bother with MSCS (Microsoft Cluster Service) in your scenario. MSCS is a failover solution, meaning it's good at keeping a one-server app highly available even if one of the cluster nodes goes down, but you won't get the scalability and simplicity you'll get from a true load-balanced service. I suspect MSCS does have ways to share state (on a shared disk), but they require setting up an MSCS cluster, which involves configuring failover, a shared disk, and other complexity that isn't appropriate for most load-balanced apps. You're better off using a database or a specialized in-memory solution to store your shared state.
Regarding persistent connections, look into port rules, because port rules determine which TCP/IP ports are handled and how.
MSDN:
When a port rule uses multiple-host load balancing, one of three client affinity modes is selected. When no client affinity mode is selected, Network Load Balancing load-balances client traffic from one IP address and different source ports on multiple cluster hosts. This maximizes the granularity of load balancing and minimizes response time to clients. To assist in managing client sessions, the default single-client affinity mode load-balances all network traffic from a given client's IP address on a single cluster host. The Class C affinity mode further constrains this to load-balance all client traffic from a single Class C address space.
In an ASP.NET app, what allows session state to persist is enabling the client affinity parameter: NLB then directs all TCP connections from one client IP address to the same cluster host, which allows session state to be maintained in host memory.
The client affinity parameter makes sure that a connection is always routed to the server it initially landed on, thereby maintaining application state.
Therefore I believe the same would happen for your Windows-based multi-threaded app if you use the affinity parameter.
Network Load Balancing Best Practices and Web Farming with the Network Load Balancing Service in Windows Server 2003 might give you some insight.
Concurrency (check out Apache Cassandra, et al.)
Speed-of-light issues (if going cross-country or international, you'll want heavy use of transactions)
Backups and deduplication (companies like FalconStor or EMC can help here in a distributed system; I wouldn't underestimate the need for consulting here)
I have integrated Apache Geode into a web application to store HTTP session data in it. This web application runs load-balanced, i.e. there are multiple instances of it sharing session data. Each web application instance has its own local Geode cache (locator and server), and the data is distributed to the other Geode nodes in the cluster via a replicated region. All instances are on the same network; there is no multi-site usage. The number of GET operations is around 5000 per second; the number of PUT operations is approximately half that.
Testing this setup with only one web application instance, the performance is very promising (in the area of 20-30 ms). However, when adding an instance there is a significant performance drop, up to a few seconds.
It turned out that disabling TCP SYN cookies led to an improvement in processing time of up to 50%, though the performance is still not acceptable.
I am wondering how a potential bottleneck (e.g. in the communication between Geode nodes) could be identified. Mainly I am thinking of extracting metrics/statistics from Geode, although I have not found anything helpful in that regard yet. I'd appreciate any hints on how to investigate and eliminate performance problems with Apache Geode.
According to the following video, a dedicated cache is a cache process hosted on a separate server, while a colocated cache is a cache process hosted directly on the service hosts. Is that the standard definition? I cannot find anything more on this topic online.
In the colocated cache scenario, would the service always reference the cache on that specific host, or would it need to query other hosts as well? Is it possible to route requests only to hosts that have a colocated cache for that partition of data, in order to avoid the extra network hop to a cache server that would be needed in the dedicated cache host scenario?
In a colocated cache, querying only the service instance hosting the data can be accomplished by sharding the data (typically by some key). Given a key, a requestor can resolve which shard owns the data, then resolve which instance owns the shard and direct the request at that instance. As long as everyone agrees on both levels of ownership, this works really well. It's also possible to have each instance of the service be able to forward requests so that even if a request gets misdirected it will with some likelihood eventually reach the correct instance.
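A minimal sketch of that two-level resolution (the hash scheme and the shard-to-instance table are illustrative; in practice the ownership map comes from the cluster membership layer):

```python
import hashlib

NUM_SHARDS = 64

# Illustrative shard -> instance assignment; in a real system this comes from
# the cluster membership layer (e.g. Akka Cluster Sharding, a coordination service).
shard_owners = {shard: f"http://service-{shard % 3}:8080" for shard in range(NUM_SHARDS)}

def shard_for(key: str) -> int:
    # Stable hash so every instance agrees on ownership for a given key.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def instance_for(key: str) -> str:
    return shard_owners[shard_for(key)]

# A requestor (or an instance forwarding a misdirected call) sends the request
# for this key straight to the owning instance:
print(instance_for("customer-42"))
```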
Adya (2019) describes the broad approach, calling it a LInK store, which can take the approach of embedding the cache into the actual service (which doesn't even require local interprocess communication). When this is done, there's basically no line between your service and the cache: the service is the cache, and databases/object stores only exist to allow cold data to be evicted and to provide durability. One benefit of this approach is that your database/object store in the sunny-day case mostly handles writes, and it ends up having a lot of mechanical sympathy with CQRS.
I personally have had a lot of success using Akka to implement services following this approach (Akka Cluster manages cluster membership and failure detection, Akka Cluster Sharding handles shard distribution and resolution, and Akka Persistence provides durability).
I'm looking to create a distributed lock within Redis on Azure for our multi-instance Worker Role. I need a way of creating "critical sections" that only a single thread can access at a time across multiple instances of the Worker Role.
I am using the StackExchange.Redis client to do this and, helpfully, it already has an implementation of transactional TakeLock/ReleaseLock, and this answer on SO gives me a good idea of the pattern to use and details about how to create a lock.
Reading further around the subject, I also read this Redis article regarding distlock which describes the weaknesses of failover-based Redis nodes when trying to implement a distributed lock mechanism.
The Azure Redis Cache implements master/slave failover (apart from in the Basic tier), so does this mean that I will need to implement the Redlock pattern in order to guarantee that only one thing ever holds the lock?
Additionally, I am wondering:
Why do Azure Redis example connection strings not seem to list the master and slave in them? Has Azure implemented the master/slave failover in a different way?
Why has one .NET implementation of Redlock chosen not to support master/slave setups (see the Usage section, first paragraph)? Is this just a choice, or is it because master/slave is not a valid usage of Redlock (that would not seem to be the case in the Redis article)?
I'm the author of the RedLock.net library that you linked in your question. The reason the documentation specifies connecting to independent redis instances is based on the reasoning in the Redis Distlock documentation. By forcing writes only to master nodes, we hopefully avoid the situation where a user might misconfigure Redlock to connect to multiple replicated hosts.
According to Azure Redis Cache 103 - Failover and Monitoring there is a load balancer in front of an Azure Redis Cache (at the standard tier and above) that ensures that you are always connected to the master.
Connecting to multiple redis instances (either replicated or not) should give a fairly good guarantee that no two processes end up running at the same time (more so than a single replicated instance).
In order for another process to 'steal' the lock before the first has finished, more than half of the independent redis instances would need to lose their lock keys (e.g. by restarting without persistence), and process two would then have to gain the lock before process one's extend timer reacquired it.
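For reference, the single-instance building block that Redlock repeats across N independent masters looks roughly like this (a hedged sketch using redis-py rather than StackExchange.Redis; key names and timings are made up): acquire with SET NX PX and a random token, and release only if the token still matches.

```python
import uuid
import redis

r = redis.Redis(host="localhost", port=6379)

RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""

def acquire(lock_key, ttl_ms=30000):
    token = str(uuid.uuid4())
    # NX: only set if not already held; PX: auto-expire so a crashed holder
    # cannot wedge the lock forever.
    if r.set(lock_key, token, nx=True, px=ttl_ms):
        return token
    return None

def release(lock_key, token):
    # Compare-and-delete so we never delete a lock someone else acquired
    # after our TTL expired.
    r.eval(RELEASE_SCRIPT, 1, lock_key, token)
```

Redlock then only treats the lock as held if a majority of the independent instances granted it within the lock's validity time.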
According to my reading of the JBoss documentation, it says:
We define high availability as the ability for the system to continue functioning after failure of one or more of the servers. A part of high availability is failover which we define as the ability for client connections to migrate from one server to another in event of server failure so client applications can continue to operate.
Is failover part of high availability? How can we differentiate failover from high availability?
Failover is a means of achieving high availability (HA). Think of HA as a feature and failover as one possible implementation of that feature. Failover is not always the only consideration when achieving HA.
For example, Cassandra achieves HA through replication, but the degree of availability is determined by data consistency settings. In essence, these settings dictate how many nodes need to respond for an action (a read or a write) to succeed. Requiring more nodes to respond means less availability, and requiring fewer nodes means more availability. That's an example of HA that has nothing to do with failover, strictly speaking.
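To make that concrete, here is a hedged sketch with the DataStax Python driver (keyspace, table, and addresses are made up): QUORUM needs a majority of replicas to answer, so it trades availability for consistency, while ONE does the opposite.

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
session = cluster.connect("app_keyspace")  # hypothetical keyspace

# Stricter: a majority of replicas must respond -> lower availability, stronger reads.
quorum_read = SimpleStatement(
    "SELECT balance FROM accounts WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)

# Looser: any single replica suffices -> higher availability, possibly stale data.
fast_read = SimpleStatement(
    "SELECT balance FROM accounts WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE,
)

row = session.execute(quorum_read, ["acct-1"]).one()
```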
High Availability
Refers to the fact that the server system is in some way tolerant to failure.
Most of the time this is done with hardware redundancy. Assume a machine has redundant power supplies: if one fails, the machine will keep running.
Failover
Then you have application redundancy (failover), which usually refers to the ability for an application running on multiple hardware installations to respond to clients in a consistent manner from any of those hardware installations. That way, if the hardware does totally fail, or the O/S dies on a particular machine, another machine can carry on.
SQL Server deals with application redundancy in four ways:
Clustering
Mirroring
Replication
Log Shipping
High availability (HA for short) is a broad term, so when I think about it I tend to think of HA clusters.
From Wikipedia High-availability cluster:
High-availability clusters are groups of computers that support server applications that can be reliably utilized with a minimum amount of down-time. They operate by using high availability software to harness redundant computers in groups or clusters that provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed.
So the takeaway from the description above is that HA clusters will provide you with the minimum amount of down-time during a failover. Let me explain the two types of failover that HA clusters can provide:
Hot-Hot / Active-Active: The redundant computers are truly operating in parallel, producing the exact same state, and the exact same output. They are all active nodes, operating as a perfect mirror of each other. In this scenario, your failover down-time is zero, and you can simply pull the power plug from any machine in the cluster without any downtime or disruption to your service.
Hot-Warm / Active-Passive: Only one primary computer is active, while the other computers in the cluster are passively rebuilding the same state as the primary. When the primary computer fails, it has to be disabled or killed (automatically or by an operator) and then a passive computer from the cluster needs to be made active (automatically or by an operator).
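As a toy illustration of the Active-Passive case (names and thresholds are mine, and a real cluster also needs fencing so the old primary can't come back and write concurrently), the passive node's watchdog is conceptually just this:

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before we assume the primary is dead

def watchdog(read_primary_heartbeat, promote_self, poll_interval=1.0):
    """Run on the passive node: promote it when the primary stops heartbeating."""
    while True:
        last_seen = read_primary_heartbeat()  # e.g. a timestamp written to shared storage
        if time.time() - last_seen > HEARTBEAT_TIMEOUT:
            promote_self()  # take over the virtual IP, start serving, etc.
            return
        time.sleep(poll_interval)
```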
So what is the catch? The catch is that applications that can operate in a HA cluster are not trivial to design as they need to be true deterministic finite-state machines. A classic problem is when your application needs to use the clock to build state based on time, as clocks are very non-deterministic by nature.
Disclaimer: I am one of the developers of CoralSequencer.
Is it safe to use etcd across multiple data centers, given that it exposes the etcd port to the public internet?
Do I have to use client certificates in this case, or does etcd have some sort of authentication?
Yes, but there are two big issues you need to tackle:
Security. This all depends on what type of info you are storing in etcd. Using a point-to-point VPN is probably preferable to exposing the entire cluster to the internet. Client certificates can also be used (see the sketch at the end of this answer).
Tuning. etcd relies on replication between machines for two things: aliveness and consensus. Since a successful write must be committed to a majority of the cluster before it returns as successful, your write performance will degrade as the distance between the machines increases. Aliveness is measured with periodic heartbeats between the machines. By default, etcd has a fairly aggressive 50ms heartbeat timeout, which is optimized for bare-metal servers running on a local network. Without tuning this timeout value, your cluster will constantly think that members have disappeared and trigger frequent master elections. This gets worse if both of your environments are on cloud providers that have variable networks plus disk writes that traverse the network, a double whammy.
More info on etcd tuning: https://etcd.io/docs/latest/tuning/
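On the security point above, a hedged sketch of connecting with TLS client certificates using the python-etcd3 client (endpoint and file paths are placeholders):

```python
import etcd3

# Mutual TLS: the server presents a certificate signed by ca.crt, and the client
# presents client.crt/client.key, so only holders of a valid client certificate
# can talk to the cluster even if the port is reachable from the internet.
client = etcd3.client(
    host="etcd-dc1.example.com",
    port=2379,
    ca_cert="/etc/etcd/ca.crt",
    cert_cert="/etc/etcd/client.crt",
    cert_key="/etc/etcd/client.key",
)

client.put("/config/feature-flag", "on")
print(client.get("/config/feature-flag"))
```

On the tuning point, the relevant server-side knobs are the --heartbeat-interval and --election-timeout flags, which typically need to be raised well above their LAN-oriented defaults when the members are separated by WAN links.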