How to handle a save failure after Raft has committed - etcd

When using Raft, after log entries are committed, we should write the data proposed by a node into our storage. What if one of the nodes fails to write, say because its disk has gone bad? Should the failing node terminate itself?
The process is like the following.
1. Node A proposes data "abc".
2. The Raft log entry is committed.
3. A writes data "abc" to its file: OK.
B writes data "abc" to its file: OK.
C writes data "abc" to its file: failed.
What should we do now, since C won't have data "abc"?

Don't forget that those changes are already persisted in the Raft log. Raft doesn't even guarantee that once a change is committed, X (e.g. writing the change to another file) will happen within any time frame. So
C won't have data "abc"
This is not accurate. The data has been persisted in the Raft log, it just hasn't been written to some other file after it was committed. What you're describing here is the behavior of a persistent state machine, wherein data is persisted in some separate store after it's been committed in the Raft log. But don't forget that committing data in the Raft log is tantamount to persisting it.
Persistent state machines have requirements beyond just the basic Raft protocol, and more on them can be found in the raft dissertation. Typically, in a persistent state machine you need to persist the lastApplied index in addition to term and votedFor. As entries are committed and applied to the persistent state machine (e.g. written to the data file on each node), the lastApplied index is persisted. Entries are not removed from the Raft log until they've been successfully applied. This is how you ensure your data "abc" is not lost even if it can't be written to a file on node C.
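To make the persistent state machine idea concrete, here is a minimal sketch (all type and method names are hypothetical, not etcd's actual code) of an apply loop that only advances a persisted lastApplied index after the external write succeeds, and never truncates the Raft log past that index:

```java
import java.io.IOException;

// Hypothetical sketch of a persistent state machine's apply loop: a committed
// entry is only considered applied once the external write succeeds and
// lastApplied has been persisted. On failure (e.g. a bad disk), the entry
// stays in the Raft log and the apply is retried later, so data like "abc"
// is never lost even though it has not reached node C's data file yet.
class ApplyLoop {
    interface RaftLog   { byte[] entryAt(long index); void truncateUpTo(long index); }
    interface DataFile  { void append(byte[] data) throws IOException; }
    interface MetaStore { long readLastApplied(); void writeLastApplied(long index); }

    private final RaftLog raftLog;   // durable Raft log (entries already persisted)
    private final DataFile dataFile; // the "other file" the question writes to
    private final MetaStore meta;    // persists lastApplied alongside term/votedFor

    ApplyLoop(RaftLog raftLog, DataFile dataFile, MetaStore meta) {
        this.raftLog = raftLog;
        this.dataFile = dataFile;
        this.meta = meta;
    }

    void applyCommitted(long commitIndex) {
        long lastApplied = meta.readLastApplied();
        while (lastApplied < commitIndex) {
            byte[] entry = raftLog.entryAt(lastApplied + 1);
            try {
                dataFile.append(entry);                  // write "abc" to the data file
                meta.writeLastApplied(lastApplied + 1);  // only now does it count as applied
                lastApplied++;
            } catch (IOException e) {
                // Disk write failed: do NOT advance lastApplied or truncate the
                // log; the entry remains in the Raft log and will be retried.
                break;
            }
        }
        // The Raft log may only be compacted up to lastApplied.
        raftLog.truncateUpTo(lastApplied);
    }
}
```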

Related

Is this Redis Race Condition Scenario Possible?

I'm debugging an issue in an application and I'm running into a scenario where I'm out of ideas, but I suspect a race condition might be in play.
Essentially, I have two API routes - let's call them A and B. Route A generates some data and Route B is used to poll for that data.
Route A first creates an entry in the redis cache under a given key, then starts a background process to generate some data. The route immediately returns a polling ID to the caller, while the background data thread continues to run. When the background data is fully generated, we write it to the cache using the same cache key. Essentially, an overwrite.
Route B is a polling route. We simply query the cache using that same cache key - we expect one of 3 scenarios in this case:
The object is in the cache but contains no data - this indicates that the data is still being generated by the background thread and isn't ready yet.
The object is in the cache and contains data - this means that the process has finished and we can return the result.
The object is not in the cache - we assume that this means you are trying to poll for an ID that never existed in the first place.
For the most part, this works as intended. However, every now and then we see scenario 3 being hit, where an error is being thrown because the object wasn't in the cache. Because we add the placeholder object to the cache before the creation route ever returns, we should be able to safely assume this scenario is impossible. But that's clearly not the case.
Is it possible that there is some delay between when a Redis write operation returns and when the data is actually available for querying? That is, is it possible that even though the call to add the cache entry has completed, the data would briefly not be returned by queries? It seems to be the only thing that can explain the behavior we are seeing.
If that is a possibility, how can I avoid this scenario? Is there some way to force Redis to wait until the data is available for query before returning?
Is it possible that there is some delay between when a Redis write operation returns and when the data is actually available for querying?
Yes, and it may depend on your Redis topology and on your network configuration. Only a standalone Redis server provides strong consistency, albeit with some considerations - see below.
Redis replication
When using replication in Redis, writes which happen on the master need some time to propagate to its replica(s), and the whole process is asynchronous. Your client may happen to issue read-only commands to replicas, a common approach used to distribute the load among the available nodes of your topology. If that is the case, you may want to lower the chance of an inconsistent read by:
directing your read queries to the master node; and/or,
issuing a WAIT command right after the write operation and ensuring all the replicas acknowledged it (see the sketch below): this makes replication effectively synchronous from the client's standpoint, but it should be used only if absolutely needed because of its performance cost.
There would still be the (tiny) possibility of an inconsistent read if, during a failover, the replication process promotes a replica which did not receive the write operation.
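For example, with the Jedis client (host and key names are just placeholders), the write-then-WAIT pattern could look like this:

```java
import redis.clients.jedis.Jedis;

// Sketch: write the placeholder entry, then block until at least one replica
// has acknowledged the write or the timeout expires.
public class WriteThenWait {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("redis-master", 6379)) {
            jedis.set("poll:1234", "");           // placeholder cache entry
            // WAIT numreplicas timeout(ms): returns how many replicas acked.
            long acked = jedis.waitReplicas(1, 1000);
            if (acked < 1) {
                // The write may not have reached any replica yet: retry, fail,
                // or fall back to reading from the master.
                System.err.println("write not yet replicated");
            }
        }
    }
}
```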
Standalone Redis server
With a standalone Redis server, there is no need to synchronize data with replicas and, on top of that, your read-only commands will always be handled by the same server that processed the write commands. This is the only strongly consistent option, provided you are also persisting your data accordingly: in fact, you may end up having a server restart between your write and read operations.
Persistence
Redis supports several different persistence options; in your scenario, you may want to configure your server so that it
logs every write operation to disk (AOF), and
fsyncs on every write.
Of course, every configuration setting is a trade-off between performance and durability.
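As a sketch, the relevant redis.conf directives for that configuration would be along these lines:

```
# Append-only file: log every write operation to disk
appendonly yes
# fsync the AOF after every write command (safest, slowest)
appendfsync always
```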

Regarding fault-tolerance for suppress operator in KTable

We are planning to use the suppress operator over a session-windowed KTable.
We are wondering about fault-tolerance when using the suppress operator.
We understand that a buffer is used to store events/aggregations until the window closes.
Now let us say a rebalance has happened, and the active task is moved to a different machine. We are wondering what happens to this (in-memory?) buffer.
Let us say we are tracking click count by user. We configured the session window's inactivity period to be 3 minutes, a session window has started for the key alice, and aggregations have happened for that key for 2 minutes. For example, in the buffer we have an (alice -> 5) entry, representing that alice has made 5 clicks in this session so far.
And say there is no activity after that from alice.
If things are working fine, then once the session is over, the downstream processor will get the event alice -> 5.
But what if there is a rebalance now, and the active task that is maintaining the session window for alice is moved to a new machine?
Since there is no further activity from alice, will the downstream processor, which is now running on the new machine, miss this event alice -> 5?
The suppress operator provides fault tolerance similarly to any other state store in Streams. Although the active data structure is in memory, the suppression buffer maintains a changelog (an internal Kafka topic).
So, when you have that rebalance, the previous active task flushes its state to the changelog and discards the in-memory buffer. The new active task re-creates the state by replaying the changelog topic, resulting in the exact same buffered contents as if there had been no rebalance.
In other words, just like in-memory state stores, the suppression buffer is made durable (in a Kafka topic) even though it is not persistent (on the local disk).
Does that make sense?
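For illustration, a minimal sketch of such a topology (topic names are made up); the buffer behind suppress() is exactly the state that gets restored from the changelog after a rebalance:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Suppressed;

public class ClickSessionTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(SessionWindows.with(Duration.ofMinutes(3)))  // 3-minute inactivity gap
               .count()
               // Emit exactly one final result per session, once the window closes.
               .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
               .toStream((windowedKey, count) -> windowedKey.key())     // drop the window from the key
               .to("clicks-per-session", Produced.with(Serdes.String(), Serdes.Long()));

        // builder.build() would then be passed to new KafkaStreams(...) as usual.
    }
}
```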

Eventual consistency - how to avoid phantoms

I am new to the topic. Having read a handful of articles on it, and asked a couple of people, I still do not understand what you people do with regard to one problem.
There are UI clients making requests to several backend instances (for now it's irrelevant whether sessions are sticky or not), and those instances are connected to some highly available DB cluster (be it Cassandra or something else, even Elasticsearch). Say the backend instance is not specifically tied to one of the cluster's machines, and instead each of its requests to the DB may be served by a different machine.
One client creates some record; it's synchronously or asynchronously stored to one of the cluster's machines, then eventually gets replicated to the rest of the DB machines. Then another client requests the list of records; the request ends up served by a distant machine that has not yet received the replicated changes, and so the client does not see the record. Well, that's bad but not yet ugly.
Consider however that the second client hits the machine which has the record, displays it in a list, then refreshes the list and this time hits the distant machine and again does not see the record. That's very weird behavior to observe, isn't it? It might even get worse: the client successfully requests the record, starts some editing on it, then tries to store the updates to the DB and this time hits the distant machine, which says "I know nothing about this record you are trying to update". That's an error which the user will see while doing something completely legitimate.
So what's the common practice to guard against this?
So far, I only see three solutions.
1) Not actually a solution but rather a policy: ignore the problem and instead speed up the cluster enough to guarantee that 99.999% of changes will be replicated across the whole cluster in, say, 0.5 seconds (it's hard to imagine a user will try to make several consecutive requests to one record in that time; they can of course issue several read requests, but in that case they'll probably not notice the inconsistency between results). And even if sometimes something goes wrong and the user faces the problem, well, we just embrace that. If the unlucky user gets unhappy and writes us a complaint (which will happen maybe once a week or once an hour), we just apologize and move on.
2) Introduce an affinity between a user's session and a specific DB machine. This helps, but it needs explicit support from the DB, hurts load-balancing, and invites complications when the DB machine goes down and the session needs to be re-bound to another machine (however, with proper support from the DB I think that's possible; say, Elasticsearch can accept a routing key, and I believe that if the target shard goes down it will just switch the affinity link to another shard - though I am not entirely sure; but even if re-binding happens, the other machine may contain older data :) ).
3) Rely on monotonic consistency, i.e. some method to be sure that the next request from a client will get results no older than the previous one. But, as I understand it, this approach also requires explicit support from the DB, like being able to pass some "global version timestamp" to the cluster's balancer, which it will compare with its latest data on all machines' timestamps to determine which machines can serve the request.
Are there other good options? Or are those three considered good enough to use?
P.S. My specific problem right now is with Elasticsearch; AFAIK there is no support for monotonic reads there, though it looks like option #2 may be available.
Apache Ignite has a primary partition for a key and backup partitions. Unless you have the readFromBackup option set, you will always be reading from the primary partition, whose contents are expected to be reliable.
If a node goes away, a transaction (or operation) should be either propagated by the remaining nodes or rolled back.
Note that Apache Ignite doesn't do Eventual Consistency but instead Strong Consistency. It means that you can observe delays during node loss, but you will not observe inconsistent data.
In Cassandra, if using at least quorum consistency for both reads and writes, you will get monotonic reads. This was not the case pre 1.0, but that's a long time ago. There are some gotchas if using server timestamps, but that's not the default, so it likely won't be an issue if using C* 2.1+.
What can get funny, since C* uses timestamps, is things that occur at the "same time". Since Cassandra is Last Write Wins, the times and clock drift do matter. But concurrent updates to records will always have race conditions, so if you require strong read-before-write guarantees you can use lightweight transactions (essentially CAS operations using Paxos) to ensure no one else updates between your read and your update; these are slow though, so I would avoid them unless critical.
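As a sketch with the DataStax Java driver (keyspace, table and values are made up), quorum reads and writes plus an IF clause for the read-before-write case might look like this:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

public class QuorumSketch {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Write at QUORUM...
            session.execute(SimpleStatement
                .newInstance("UPDATE app.users SET email = 'a@b.c' WHERE id = 42")
                .setConsistencyLevel(DefaultConsistencyLevel.QUORUM));

            // ...and read at QUORUM: together this gives monotonic reads.
            ResultSet rs = session.execute(SimpleStatement
                .newInstance("SELECT email FROM app.users WHERE id = 42")
                .setConsistencyLevel(DefaultConsistencyLevel.QUORUM));
            System.out.println(rs.one().getString("email"));

            // Lightweight transaction: only apply the update if the value is
            // still what we read earlier (slow, use only when really needed).
            session.execute(SimpleStatement.newInstance(
                "UPDATE app.users SET email = 'new@b.c' WHERE id = 42 IF email = 'a@b.c'"));
        }
    }
}
```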
In a true distributed system, it does not matter where your record is stored in the remote cluster as long as your clients are connected to that remote cluster. In Hazelcast, a record is always stored in a partition, and each partition is owned by one of the servers in the cluster. There can be X number of partitions in the cluster (by default 271), and all those partitions are equally distributed across the cluster. So a 3-member cluster will have a partition distribution like 91-90-90.
Now when a client sends a record to store in the Hazelcast cluster, it already knows which partition the record belongs to by using a consistent hashing algorithm. And with that, it also knows which server is the owner of that partition. Hence, the client sends its operation directly to that server. This approach applies to all client operations - put or get. So in your case, you may have several UI clients connected to the cluster, but the record for a particular user is stored on one server in the cluster, and all your UI clients will be approaching that server for their operations related to that record.
As for consistency, Hazelcast is by default a strongly consistent distributed cache, which implies that all your updates to a particular record happen synchronously, in the same thread, and the application waits until it has received acknowledgement from the owner server (and the backup server, if backups are enabled) in the cluster.
When you connect a DB layer (this could be one or many different types of DBs running in parallel) to the cluster, the Hazelcast cluster returns data even if it's not currently present in the cluster, by reading it from the DB. So you never get a null value. On updates, you can configure the cluster to send the updates downstream synchronously or asynchronously.
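A minimal client-side sketch (map name and values are illustrative) of what that looks like with Hazelcast:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

// The client routes each put/get to the member that owns the key's partition,
// so every UI client talks to the same server for a given record.
public class HazelcastSketch {
    public static void main(String[] args) {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        IMap<String, Long> records = client.getMap("user-records");
        records.put("alice", 5L);                 // synchronous: waits for the owner (and backup) ack
        System.out.println(records.get("alice")); // served by the partition owner
        client.shutdown();
    }
}
```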
Ah-ha, after some even more thorough study of ES discussions I found this: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-preference.html
Note how they specifically highlight the "custom value" case, recommending its use to solve exactly my problem.
So, given that's their official recommendation, we can summarise it like this.
To fight volatile reads, we are supposed to use "preference", with "custom" or some other approach.
To also get "read your writes" consistency, we can have all clients use "preference=_primary", because the primary shard is the first to get all writes. This however will probably have worse performance than "custom" mode due to no distribution. And that's quite similar to what other people here said about Ignite and Hazelcast.
Right?
Of course that's a solution specifically for ES. Reverting to my initial question, which is a bit more generic, it turns out that options #2 and #3 are really considered good enough for many distributed systems, with #3 being achievable via #2 (even without direct support for #3 by the DB).
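For completeness, a sketch of the preference parameter with the Elasticsearch high-level REST client (index name and preference value are made up):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

// Pinning the preference to a per-user value makes that user's searches hit
// the same shard copies, avoiding the "record appears, then disappears" effect.
public class PreferenceSearch {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            SearchRequest request = new SearchRequest("records")
                    .preference("user-42");   // custom value: sticky routing per user
            SearchResponse response = client.search(request, RequestOptions.DEFAULT);
            System.out.println(response.getHits().getTotalHits());
        }
    }
}
```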

Exactly-once guarantee in Storm Trident in network partitioning and/or failure scenarios

So, Apache Storm + Trident provide exactly-once semantics. Imagine I have the following topology:
TridentSpout -> SumMoneyBolt -> SaveMoneyBolt -> Persistent Storage.
SumMoneyBolt sums monetary values in memory, then passes the result to SaveMoneyBolt, which should save the final value to remote storage/a database.
Now it is very important that we calculate these values and store them only once in the database. We do not want to accidentally double count the money.
So how does Storm with Trident handle network partitioning and/or failure scenarios when the write request to the database has been successfully sent, the database has successfully received the request, logged the transaction, and, while responding to the client, the SaveMoneyBolt has either died or been partitioned from the network before receiving the database response?
I assume that if SaveMoneyBolt had died, Trident would retry the batch, but we cannot afford double counting.
How are such scenarios handled?
Thanks.
Trident gives a unique transaction id to each batch. If a batch is retried, it will have the same txid. Also, the batch updates are ordered, i.e. the state update for a batch will not happen until the update for the previous batch is complete. So by storing the txid along with the values in the state, Trident can de-duplicate the updates and provide exactly-once semantics.
Trident comes with a few built-in Map state implementations which handle all this automatically.
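To illustrate the de-duplication idea itself (this is a plain sketch of the technique, not the actual Trident state API):

```java
import java.util.HashMap;
import java.util.Map;

// Stores the last applied transaction id alongside each value, so a retried
// batch carrying the same txid is applied at most once.
class TransactionalValueStore {
    private static final class Stored {
        final long txid;
        final long value;
        Stored(long txid, long value) { this.txid = txid; this.value = value; }
    }

    private final Map<String, Stored> store = new HashMap<>();

    // Adds `delta` to the value for `key`, but only once per txid.
    void applyUpdate(String key, long txid, long delta) {
        Stored current = store.get(key);
        if (current != null && current.txid == txid) {
            return; // this batch was already applied; the retry is a no-op
        }
        long base = (current == null) ? 0L : current.value;
        store.put(key, new Stored(txid, base + delta));
    }

    Long get(String key) {
        Stored s = store.get(key);
        return (s == null) ? null : s.value;
    }
}
```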
For more information, take a look at the docs:
http://storm.apache.org/releases/1.0.1/Trident-tutorial.html
http://storm.apache.org/releases/current/Trident-state.html

Are staggered remote GETs implemented for replicated caches as well?

The release notes of Infinispan 8 describe a new feature: staggered remote GETs.
These are described in the user guide:
11.4. Distribution Mode
The remote GET requests are staggered: we request the value from the primary owner, but if it doesn’t respond in a reasonable amount of time, we request the value from the backup owners as well.
This feature is documented for the Distribution Mode only.
Is this feature used for Replicated Mode as well?
Generally speaking: Is it safe to assume that replicated caches are a special case of distributed caches?
Generally speaking, yes, it's true that Replicated Mode is a special case of Distributed Caches. The code is pretty much the same, with the exception that Replicated Mode keeps a number of replicas equal to the size of the cluster: each node will also be a full replica.
A Get operation will not issue a Remote Get when the current node is also an owning replica of the entry. So while it's true that a Remote Get would also be "staggered" if the method was invoked, in practice when you have Replication you will never actually perform a Remote Get.
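For intuition, here is a generic sketch of the staggered-request pattern itself (not Infinispan's actual implementation): ask the primary owner first, and only fan out to the backups if it has not answered within a small delay.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;

class StaggeredGet {
    // `remoteGet` stands in for whatever RPC fetches the value from a node.
    static <V> CompletableFuture<V> get(String primaryOwner,
                                        List<String> backupOwners,
                                        Function<String, CompletableFuture<V>> remoteGet,
                                        long staggerDelayMillis) {
        CompletableFuture<V> result = new CompletableFuture<>();
        remoteGet.apply(primaryOwner).thenAccept(result::complete);

        // If the primary has not replied after the stagger delay, ask the backups
        // too; whichever response arrives first completes the result.
        CompletableFuture.delayedExecutor(staggerDelayMillis, TimeUnit.MILLISECONDS).execute(() -> {
            if (!result.isDone()) {
                for (String backup : backupOwners) {
                    remoteGet.apply(backup).thenAccept(result::complete);
                }
            }
        });
        return result;
    }
}
```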

Resources