Distributed systems: keeping event timestamps consistent between different nodes

Context
We have a distributed system. One system emits events, which are read by another system for report generation.
Logical order is guaranteed: even if the emitting system has N nodes, there is an underlying finite state machine that makes concurrent emission of an event for a single aggregate impossible.
These events are marked with a timestamp. The N nodes may not always be in sync about the time.
We care a lot about the timestamp because the downstream system that generates reports almost always needs one: the reporting people rely on this kind of data to check that things are going the right way.
The problem
The fact that two nodes could have a small discrepancy gives us pause. Consider the following example.
The logical order of the events is this:
Event 1 => Event 2 => Event 3
But in the Database we could have this situation:
-----------------------------------------
| Name    | Timestamp | Logical order   |
-----------------------------------------
| Event 1 |     2     |        1        |
| Event 2 |     1     |        2        |
| Event 3 |     3     |        3        |
-----------------------------------------
As you can see, Event 2 logically happened after Event 1, but their timestamps are out of order.
This is not going to happen every two seconds, but it can happen because the timestamps come from different nodes, and from a reporting point of view it is an anomaly.
Possible solutions
Make the reporting people aware of the possible problem. We are not able to have one global source of time (NTP is not a good solution for us, for some good reasons), so a discrepancy of a very small amount of time is not a problem; it just means "this event happened around that time".
Ensure timestamp consistency by checking that the next event in the logical flow never gets a timestamp lower than the previous event's, making them equal if necessary. This is not the truth, but it keeps the flow consistent even from a non-developer point of view (a sketch of this idea follows).
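A minimal sketch of that second option in Python, assuming events are persisted in logical order; the Event class and the clamping helper are illustrative only, not our actual code:

from dataclasses import dataclass

@dataclass
class Event:
    name: str
    logical_order: int
    timestamp: float  # node-local clock, may drift slightly between nodes

def clamp_timestamp(previous_ts, event):
    # If the new event's timestamp is older than the previous one in the
    # logical flow, raise it so the stored sequence never goes backwards.
    if event.timestamp < previous_ts:
        event.timestamp = previous_ts
    return event

# Usage: events arrive in logical order 1, 2, 3 but with drifting clocks.
events = [Event("Event 1", 1, 2.0), Event("Event 2", 2, 1.0), Event("Event 3", 3, 3.0)]
previous = float("-inf")
for e in events:
    e = clamp_timestamp(previous, e)
    previous = e.timestamp
    print(e)  # Event 2 ends up stored with timestamp 2.0, equal to Event 1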
Have you got experiences on this topic?

If you can ensure the causality relationship and have a partial order, I don't see many problems in presenting a "useful business representation" with modified timestamps. I think the underlying distributed architecture is out of scope for the business domain.
They probably understand the system as a whole, and forcing a shift in their mental model may cause some friction.
On the other hand, I would not normalize the timestamps in the log; you can use them to track clock drift between subsystems.

Based on your question, I assume the timestamp is generated before the event is read by the finite state machine. I'd suggest sorting your events by timestamp instead of using the logical order. When working on distributed systems, it's recommended to have one, and just one, way to sort events.
With regard to distributed, sequential ID generation, I recommend you take a look at this answer and at Snowflake, which is mentioned in the previous link. The latter provides a distributed service that you can use as a centralized marker generator. The IDs generated by Snowflake are a composition of a timestamp, a worker number and a sequence number.
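As a rough illustration of that composition (not Snowflake's actual implementation; the bit widths and the lack of a custom epoch here are simplifying assumptions), an ID can pack a millisecond timestamp, a worker number and a per-millisecond sequence number into one integer that sorts by time first:

import time

WORKER_BITS = 10    # assumed layout: 41-bit timestamp | 10-bit worker | 12-bit sequence
SEQUENCE_BITS = 12

def make_id(timestamp_ms, worker_id, sequence):
    # Pack the three parts into a single sortable integer.
    return (timestamp_ms << (WORKER_BITS + SEQUENCE_BITS)) | (worker_id << SEQUENCE_BITS) | sequence

now_ms = int(time.time() * 1000)
print(make_id(now_ms, worker_id=3, sequence=0))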
TL;DR
If the timestamp is reliable enough to guarantee event order, I'd suggest using it instead of the logical order, which I'm assuming is generated after the timestamp.
Hope this helps.

Related

What kind of statistics to use to compare logFold values?

I recently did an RNAseq experiment in which I had controls and experiments at 3 different time periods. My samples were distributed as follows, for a total of 10 samples:
T1_control1, T2_control1, T2_control2, T3_control1,
T1_exp1, T1_exp2, T2_exp1, T2_exp2, T3_exp1, T3_exp2
I did differential expression analysis with DESeq2, and from it I obtained 3 files, one per time period (T1, T2 and T3), showing the logFold change values from control to experimental for each gene. My question is how I can statistically compare the logFold change value for one gene in one time period vs another time period. I am not sure what test to use, since there is only one logFold change value per time period for each gene.
Thank you in advance.
I am not sure what test to use since there is only one logFold change value per time period for each gene.
Since RNA sequencing is costly (for many purposes), I think the way most groups ensure their sequencing run is accurate is by combining multiple biological replicates in each group or by sequencing deeply. An argument could be made that showing only one data point for each gene at each time point is appropriate, given that standard protocols were followed.
One option, though, is to increase the sample size for each time period by determining the fold change between each control-experimental pairing, as sketched below. It would be important, however, to consult the literature and colleagues on whether this is appropriate for the specific type of analysis you are doing.
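A minimal sketch of that pairing idea, assuming you already have normalized counts per sample; the sample names and numbers are made up, and whether this is statistically sound depends on your design:

import itertools
import numpy as np

# Hypothetical normalized counts for one gene at time period T2
# (two controls and two experimental samples, as in the question).
controls = {"T2_control1": 120.0, "T2_control2": 135.0}
experiments = {"T2_exp1": 240.0, "T2_exp2": 300.0}

# One log2 fold change per control/experimental pairing instead of a single value.
pair_lfcs = [np.log2(e / c) for c, e in itertools.product(controls.values(), experiments.values())]
print(pair_lfcs)  # four values for T2; repeat per period, then compare across periods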

VW contextual bandits: historical data and online learning

I'd like to test CB for an e-commerce task: personal offer recommendations (like "last chance to buy", "similar positions", "consumers recommend", "bestsellers", etc.). My task is to order them (the more relevant an offer is, the higher it appears in the list of recommendations).
So, there are 5 possible offers.
I have some historical data collected without using any model: context (user and web-session features), action id (one of my 5 offers), and reward (1 if the user clicked the offer, 0 if not). So I have N users and 5 offers with known reward, 5*N rows in total in my historical data.
Ex:
1:1:1 | user_id:1 f1:... f2:...
2:-1:1 | user_id:1 f1:... f2:...
3:-1:1 | user_id:1 f1:... f2:...
This means that user 1 has seen 3 offers (1, 2, 3); the cost of offer 1 is equal to 1 (the user didn't click it), and the user clicked on offers 2 and 3 (cost is negative -> reward is positive). Probabilities are equal to 1, since all offers were shown and we know the rewards.
The global task is to increase CTR. I'd like to use this data for training a CB model and then improve the model with exploration/exploitation policies. I set the probabilities equal to 1 in this data (is that right?). Next I'd like to set the order of offers according to rewards.
Should I use warm start in VW CB for this? Will this work correctly with data collected without using CB? Maybe you can advise more relevant CB methods for this data and task?
Thanks a lot.
If there are only 5 possible offers and if you (as indicated) have data of the form "I have N users and 5 offers with known reward, totally 5*N rows in my historical data." then your historical data is supervised multilabel data and the warm-start functionality would apply; make sure you use the cost-sensitive version to accommodate the multilabel aspect of your historical data (i.e., there is more than one offer that would result in a click).
Will this work correctly with data collected without using CB?
Because every action's reward is specified for every user in the data set, you only have to ensure that the sample of users is representative of the population you care about.
Maybe you can advise more relevant methods in CB for this data and task?
The first paragraph started with "if" because the more typical case is 1) there are many possible offers and 2) users have only seen a few of them historically.
In that case what you have is a combination of a degenerate logging policy and multiple rewards being revealed. If there are k possible actions but each user has only seen n <= k of them historically, then you could try making n lines for each user, as you did. Theoretically this does not necessarily work, but in practice it might help.
Out of the box: change the data
If the data you have was collected as the result of running an existing policy, then an alternative would be to start randomizing the decisions made by that system in order to collect a dataset that conforms to CB. For example, use your current system to pick the "best" action 96% of the time and one of the other 4 actions at random 4% of the time, log the probability along with the reward (either 0.96 or 0.01, depending upon whether it was the considered best), and then set up a proper CB-style training set for vw (a rough sketch follows). With this you can also counterfactually estimate the value of both your current policy and the policy vw generates, and only switch to vw when it is winning.
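A minimal sketch of that randomize-and-log idea, assuming 5 offers and a 4% exploration budget; current_best_action stands in for whatever your existing system would have chosen and is not a real API:

import random

ACTIONS = [1, 2, 3, 4, 5]   # the 5 offers
EPSILON = 0.04              # explore 4% of the time

def choose_and_log(context, current_best_action):
    # Pick an action and return (action, probability); the probability is what
    # you later write into the VW CB example (action:cost:probability | features).
    if random.random() < EPSILON:
        others = [a for a in ACTIONS if a != current_best_action]
        action = random.choice(others)
        prob = EPSILON / len(others)          # 0.01 with 4 alternatives
    else:
        action = current_best_action
        prob = 1.0 - EPSILON                  # 0.96
    return action, prob

action, prob = choose_and_log({"user_id": 1}, current_best_action=2)
print(action, prob)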
The fastest way to implement this randomize-and-log approach is to just start using APS.

Algorithm/Heuristic for grouping chat message histories by 'conversation'/implicit sessions from time stamps?

The problem: I have a series of chat messages -- between two users -- with time stamps. I could present, say, an entire day's worth of chat messages at once. During the entire day, however, there were multiple, discrete conversations/sessions... and it would be more useful to the user to see these divided up, rather than the whole day as one continuous stream.
Is there an algorithm or heuristic that can 'deduce' implicit session/conversation starts and breaks from the time stamps, other than an arbitrary 'if the gap is more than x minutes, it's a separate session'? And if that is the only approach, how is the interval determined? In any case, I'd like to avoid an arbitrary global cutoff.
For example, suppose fifty messages are sent between 2:00 and 3:00, then there's a break, and then twenty messages are sent between 4:00 and 5:00. A break would be inserted in between... but how would it be determined?
I'm sure that there is already literature on this subject, but I just don't know what to search for.
I was playing around with things like edge detection algorithms and gradient-based approaches for a while.
(see comments for more clarification)
EDIT (Better idea):
You can view each message as being of two types:
A continuation of a previous conversation
A brand new conversation
You can model these two types of messages as independent Poisson processes, where the time difference between adjacent messages follows an exponential distribution.
You can then empirically determine the exponential parameters for these two types of messages by hand (wouldn't be too hard to do given some initial data). Now you have a model for these two events.
Finally when a new message comes along, you can calculate the probability of the message being of type 1 or type 2. If type 2, then you have a new conversation.
Clarification:
The probability of the message being a new conversation, given that the delay is some time T.
P(new conversation | delay=T) = P(new conversation AND delay=T)/P(delay=T)
Using Bayes' Rule:
= P(delay=T | new conversation)*P(new conversation)/P(delay=T)
The same calculation goes for P(old conversation | delay=T).
P(delay=T | new conversation) comes from the model. P(new conversation) is easily calculable from the data used to generate your model. P(delay=T) you don't need to calculate at all since all you want to do is compare the two probabilities.
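A toy sketch of that comparison, with rate parameters and a prior that are pure assumptions here; in practice you would estimate them from your own chat history:

import math

LAMBDA_CONT = 1 / 60.0     # continuation: mean gap of ~60 seconds (assumed)
LAMBDA_NEW = 1 / 7200.0    # new conversation: mean gap of ~2 hours (assumed)
P_NEW = 0.05               # prior probability that a message starts a conversation

def exp_pdf(rate, t):
    return rate * math.exp(-rate * t)

def is_new_conversation(delay_seconds):
    # Compare P(delay | new) * P(new) against P(delay | continuation) * P(continuation);
    # the shared denominator P(delay) cancels, as noted above.
    score_new = exp_pdf(LAMBDA_NEW, delay_seconds) * P_NEW
    score_cont = exp_pdf(LAMBDA_CONT, delay_seconds) * (1 - P_NEW)
    return score_new > score_cont

print(is_new_conversation(30))     # short gap -> continuation (False)
print(is_new_conversation(5400))   # 90-minute gap -> new conversation (True)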
The difference in timestamps between adjacent messages depends on the type of conversation and the people participating. Thus you'll want an algorithm that takes into account local characteristics, as opposed to a global threshold parameter.
My proposition would be as follows:
Get the time difference between the last 10 adjacent messages.
Compute the mean (or median)
If the delay until the next message is more than 30 times the mean, it's a new conversation.
Of course, I came up with these numbers on the spot; they would have to be tuned to fit your purpose (a small sketch is shown below).
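A small sketch of that heuristic, with the same window of 10 gaps and the 30x multiplier; both numbers would need tuning:

from statistics import mean

WINDOW = 10
MULTIPLIER = 30

def split_sessions(timestamps):
    # timestamps: sorted message times in seconds.
    # Returns lists of message indices, one list per detected conversation.
    sessions = [[0]]
    gaps = []
    for i in range(1, len(timestamps)):
        gap = timestamps[i] - timestamps[i - 1]
        recent = gaps[-WINDOW:]
        if recent and gap > MULTIPLIER * mean(recent):
            sessions.append([i])      # large gap relative to recent activity
        else:
            sessions[-1].append(i)
        gaps.append(gap)
    return sessions

# Fifty messages around 2:00-3:00, then a break, then twenty messages after 4:00.
times = [7200 + i * 70 for i in range(50)] + [14400 + i * 60 for i in range(20)]
print(len(split_sessions(times)))  # -> 2 conversations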

Which is more efficient - computing results using a function in realtime, or reading the results directly from a database?

Let us take this example scenario:
There exists a really complex function that involves mathematical square roots and cube roots (which are slow to process) to compute its output. As an example, let us assume the function accepts two parameters a and b, and the input range for both values is well-defined. Let us assume the input values a and b can range from 0 to 100.
So essentially fn(a,b) can be either computed in real time or its results can be pre-filled in a database and fetched as and when required.
Method 1: Compute in realtime
function fn(a, b) {
  result = compute_using_cuberoots(a, b)
  return result
}
Method 2: Fetch the function result from a database
We have a database pre-filled with the input values mapped to the corresponding result:
a | b | result
0 | 0 | 12.4
1 | 0 | 14.8
2 | 0 | 18.6
. | . | .
. | . | .
100 | 100 | 1230.1
And we can
function fn(a, b) {
  result = fetch_from_db(a, b)
  return result
}
My question:
Which method would you advocate and why? Why do you think one method is more efficient than the other?
I believe this is a scenario that most of us will face at some point during our programming life and hence this question.
Thank you.
Question Background (might not be relevant)
Example: In scenarios like image processing it is possible to come across such situations more often, where the range of values for the input (R, G, B) is known (0-255), and the mathematical computation of square roots and cube roots adds too much time for server requests to complete quickly.
Take, for example, building an app like Instagram: the time taken to process an image sent to the server by a user, and the time taken to return the processed image, must be kept minimal for an optimal user experience. In such situations it is important to minimize the time taken to process the image. Worse yet, scalability problems appear when the number of such processing requests grows large.
Hence it is necessary to choose between one of the methods described above that will also be the most optimal method in such situations.
More details on my situation (if required):
Framework: Ruby on Rails, Database: MongoDB
I wouldn't advocate either method, I'd test them both (if I thought they were both reasonable) and get some data.
Having written that, I'll rise to the bait: given the relative speed of computation vs I/O I would expect computation to be faster than retrieving the function values from a database. I'll acknowledge the possibility (and no more) that in some special cases an in-memory database will be able to outperform (re-)computation, but as a general rule, no.
"More efficient" is a fuzzy term. "Faster" is more concrete.
If you're talking about a few million rows in a SQL database table, then selecting a single row might well be faster than calculating the result. On commodity hardware, using an untuned server, I can usually return a single row from an indexed table of millions of rows in just a few tenths of a millisecond. But I'd think hard before installing a dbms server and building a database only for this one purpose.
To make "faster" a little less concrete, when you're talking about user experience, and within certain limits, actual speed is less important than apparent speed. The right kind of feedback at the right time makes people either feel like things are running fast, or at least makes them feel like waiting just a little bit is not a big deal. For details about exactly how to do that, I'd look at User Experience on the Stack Exchange network.
The good thing is that it's pretty simple to test both ways. For speed testing just this particular issue, you don't even need to store the right values in the database. You just need to have the right keys and indexes. I'd consider doing that if calculating the right values is going to take all day.
You should probably test over an extended period of time. I'd expect there to be more variation in speed from the dbms. I don't know how much variation you should expect, though.
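For what it's worth, here is a minimal sketch of such a test in Python, using an in-memory dict as a stand-in for the database (a real database fetch would add network and query overhead on top of this second number):

import math
import timeit

def compute(a, b):
    # Stand-in for the "really complex" function with square and cube roots.
    return math.sqrt(a + 1) + (b + 1) ** (1 / 3)

# Pre-fill a lookup table for the whole 0..100 x 0..100 input range.
table = {(a, b): compute(a, b) for a in range(101) for b in range(101)}

print(timeit.timeit(lambda: compute(57, 42), number=100_000))
print(timeit.timeit(lambda: table[(57, 42)], number=100_000))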
Computing the results ahead of time and reading them from a table can be a good solution if the inputs are fixed values. Computing in real time and caching the results for an optimal period can be a good solution if the inputs vary between situations.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" Donald Knuth
I'd consider using a hash as a combination of calculating and storing. With the really complex function represented as a**b:
# Lazily compute and cache: the block only runs on a cache miss.
lazy = Hash.new { |h, (a, b)| h[[a, b]] = a**b }
lazy[[4, 4]]  # computed on first access, cached afterwards
p lazy        #=> {[4, 4]=>256}
I'd think about storing the values in the code itself:
class MyCalc
  RESULTS = [
    [12.4, 14.8, 18.6, ...],
    ...
    [..., 1230.1]
  ]

  def self.fn(a, b)
    RESULTS[a][b]
  end
end

MyCalc.fn(0, 1) #=> 14.8

Algorithm for most recently/often contacts for auto-complete?

We have an auto-complete list that's populated when you send an email to someone, which is all well and good until the list gets really big and you need to type more and more of an address to get to the one you want, which goes against the purpose of auto-complete.
I was thinking that some logic should be added so that the auto-complete results should be sorted by some function of most recently contacted or most often contacted rather than just alphabetical order.
What I want to know is whether there are any known good algorithms for this kind of search, or if anyone has any suggestions.
I was thinking just a point system thing, with something like same day is 5 points, last three days is 4 points, last week is 3 points, last month is 2 points and last 6 months is 1 point. Then for most often, 25+ is 5 points, 15+ is 4, 10+ is 3, 5+ is 2, 2+ is 1. No real logic other than those numbers "feel" about right.
Other than just arbitrarily picked numbers, does anyone have any input? Other numbers are also welcome if you can give a reason why you think they're better than mine.
Edit: This would be primarily in a business environment where recentness (yay for making up words) is often just as important as frequency. Also, past a certain point there really isn't much difference between say someone you talked to 80 times vs say 30 times.
Take a look at Self organizing lists.
A quick and dirty look:
Move to Front Heuristic:
A linked list such that whenever a node is selected, it is moved to the front of the list.
Frequency Heuristic:
A linked list, such that whenever a node is selected, its frequency count is incremented, and then the node is bubbled towards the front of the list, so that the most frequently accessed is at the head of the list.
It looks like the move to front implementation would best suit your needs.
EDIT: When an address is selected, add one to its frequency and move it to the front of the group of nodes with the same weight (or (weight div x) for coarser groupings). I see aging as a real problem with your proposed implementation, in that it requires calculating a weight on each and every item. A self-organizing list is a good way to go, but the algorithm needs a bit of tweaking to do what you want.
Further Edit:
Aging refers to the fact that weights decrease over time, which means you need to know each and every time an address was used. That means you have to have the entire email history available when you construct your list.
The issue is that we want to perform calculations (other than search) on a node only when it is actually accessed -- This gives us our statistical good performance.
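A rough sketch of the frequency-count variant described above, using a plain Python list rather than a linked list; it is only meant to illustrate the idea, not to be a tuned implementation:

# Each entry is [address, count]; the list stays sorted by count, descending.
contacts = [["alice@example.com", 3], ["bob@example.com", 3], ["carol@example.com", 1]]

def select(address):
    # On selection, bump the count and bubble the entry ahead of every entry
    # it now ties with or outranks (the front of its weight group).
    i = next(i for i, (addr, _) in enumerate(contacts) if addr == address)
    contacts[i][1] += 1
    while i > 0 and contacts[i - 1][1] <= contacts[i][1]:
        contacts[i - 1], contacts[i] = contacts[i], contacts[i - 1]
        i -= 1

select("bob@example.com")
print(contacts)  # bob (now 4 uses) moves ahead of alice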
This kind of thing seems similar to what Firefox does when suggesting the site you are typing.
Unfortunately I don't know exactly how Firefox does it. A point system seems good as well; maybe you'll just need to balance your points :)
I'd go for something similar to:
NoM = Number of Mail
(NoM sent to X today) + 1/2 * (NoM sent to X during the last week)/7 + 1/3 * (NoM sent to X during the last month)/30
Contacts you did not write to during the last month (the window could be changed) will have 0 points. You could sort those by total NoM sent (since they are on the contact list :). They will be shown after the contacts with points > 0.
It's just an idea; anyway, the aim is to give different weight to the most recently and most frequently mailed contacts.
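A small sketch of that scoring idea, using exactly the weights proposed above and assuming you can count mails per contact over each window:

def score(sent_today, sent_last_week, sent_last_month):
    # Today's mails count fully, last week's average daily volume counts half,
    # last month's counts a third.
    return sent_today + 0.5 * sent_last_week / 7 + (1 / 3) * sent_last_month / 30

# Someone mailed twice today outranks someone mailed 25 times last month but not recently.
print(score(2, 4, 10))   # ~2.40
print(score(0, 0, 25))   # ~0.28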
If you want to get crazy, mark the most 'active' emails in one of several ways:
Last access
Frequency of use
Contacts with pending sales
Direct bosses
Etc
Then, present the active emails at the top of the list. Pay attention to which "group" your user uses most. Switch to that sorting strategy exclusively after enough data is collected.
It's a lot of work but kind of fun...
Maybe count the number of emails sent to each address. Then:
ORDER BY EmailCount DESC, LastName, FirstName
That way, your most-often-used addresses come first, even if they haven't been used in a few days.
I like the idea of a point-based system, with points for recent use, frequency of use, and potentially other factors (prefer contacts in the local domain?).
I've worked on a few systems like this, and neither "most recently used" nor "most commonly used" works very well. "Most recent" can be a real pain if you accidentally mis-type something once. "Most used", on the other hand, doesn't evolve much over time: you may have had a lot of contact with somebody last year, but now your job has changed, for example.
Once you have the set of measurements you want to use, you could create an interactive application to test out different weights and see which ones give you the best results for some sample data.
This paper describes a single-parameter family of cache eviction policies that includes least recently used and least frequently used policies as special cases.
The parameter, lambda, ranges from 0 to 1. When lambda is 0 it performs exactly like an LFU cache, when lambda is 1 it performs exactly like an LRU cache. In between 0 and 1 it combines both recency and frequency information in a natural way.
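Just to convey the spirit of a single knob (this is NOT the exact policy from the paper), here is a toy score where lambda = 0 degenerates to a pure frequency count and lambda close to 1 is dominated by recency:

LAMBDA = 0.1   # 0 -> frequency only (LFU-like), near 1 -> recency only (LRU-like)
scores = {}

def touch(address):
    # Decay everyone a little, then credit the item that was just used.
    for key in scores:
        scores[key] *= (1 - LAMBDA)
    scores[address] = scores.get(address, 0.0) + 1.0

for addr in ["alice", "bob", "alice", "alice", "carol"]:
    touch(addr)

# Ranked suggestions: frequent AND recent contacts float to the top.
print(sorted(scores, key=scores.get, reverse=True))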
In spite of an answer having been chosen, I want to submit my approach for consideration, and feedback.
I would account for frequency by incrementing a counter on each use, but by some larger-than-one value, like 10 (to preserve precision through the decay step described next).
I would account for recency by multiplying all counters at regular intervals (say, 24 hours) by some diminisher (say, 0.9).
Each use:
UPDATE `addresslist` SET `favor` = `favor` + 10 WHERE `address` = 'foo@bar.com'
Each interval:
UPDATE `addresslist` SET `favor` = FLOOR(`favor` * 0.9)
In this way I collapse both frequency and recency into one field, avoid the need to keep a detailed history to derive {last day, last week, last month}, and keep the math (mostly) integer.
The increment and diminisher would have to be adjusted to preference, of course.
