Concurrent database MVCC timestamp generation method

I need to generate database timestamps for MVCC snapshot isolation. The typical method is the one used in SI-TM:
"Transactional actions are implemented in SI-TM as follows.
TM BEGIN: A logical snapshot for the transaction is generated
by obtaining a unique timestamp using an atomic increment
to the global timestamp counter."
The problem with this approach in a system with hundreds of cores is that it doesn't scale: there is a hardware limit of roughly 10M atomic increments per second on a contended memory location.
Any ideas?

Here are two simple ideas, and a paper reference:
1) Instead of incrementing the counter by 1, increment it by N, effectively giving each client a range of transaction identifiers [c, c+N). For instance, if N=5 and the initial value of the counter is 1, then clients A, B, and C would get the following:
A: [1, 2, 3, 4, 5]
B: [6, 7, 8, 9, 10]
C: [11, 12, 13, 14, 15]
While this reduces the pressure on the atomic counter by a factor of N, as the example shows, some clients (like client C) get a relatively high range of ids while others (like client A) get lower ranges, and this leads to higher abort rates in the system.
2) Use ranges of interleaved transaction identifiers. This is like idea 1, but with an added step variable, S. A simple example: if N=5 and S=3, and the initial value of the counter is 1, then clients A, B, and C would get the following:
A: [1, 4, 7, 10, 13]
B: [2, 5, 8, 11, 14]
C: [3, 6, 9, 12, 15]
This seems to solve the problem with idea 1, but consider client D:
D: [16, 19, 22, 25, 28]
Now we're back to the same problem that scheme #1 had; tricks must be played with this technique to "get it right". (Both allocation schemes are sketched in code after the reference below.)
3) An interesting, but more complex, decentralized way of assigning transaction IDs is described here:
Tu, Stephen, Wenting Zheng, Eddie Kohler, Barbara Liskov, and Samuel Madden. "Speedy transactions in multicore in-memory databases." In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pp. 18-32. ACM, 2013.
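
To make ideas 1 and 2 concrete, here is a minimal C++ sketch of both allocation schemes (the function names and the 0-based ids are my own, not from the papers). Each call costs one atomic read-modify-write and hands out N timestamps:

#include <atomic>
#include <cstdint>
#include <vector>

std::atomic<uint64_t> counter{0};  // global timestamp counter (0-based here)

// Idea 1: one atomic op hands out the contiguous range [c, c+N).
std::vector<uint64_t> alloc_contiguous(uint64_t N) {
    uint64_t c = counter.fetch_add(N);  // single contended RMW per N ids
    std::vector<uint64_t> ids;
    for (uint64_t i = 0; i < N; ++i) ids.push_back(c + i);
    return ids;
}

// Idea 2: one atomic op hands out N ids interleaved with stride S.
// Allocation t (0-based) gets { (t/S)*N*S + (t%S) + i*S : 0 <= i < N },
// so S consecutive allocations interleave within one block of N*S ids.
std::atomic<uint64_t> ticket{0};
std::vector<uint64_t> alloc_interleaved(uint64_t N, uint64_t S) {
    uint64_t t = ticket.fetch_add(1);
    uint64_t base = (t / S) * N * S + (t % S);
    std::vector<uint64_t> ids;
    for (uint64_t i = 0; i < N; ++i) ids.push_back(base + i * S);
    return ids;
}

With N=5 and S=3, the first three calls to alloc_interleaved reproduce clients A, B, and C above (shifted to start at 0), and the fourth call jumps to the next block, which is exactly client D's problem.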

How to calculate the optimal time interval in multiple time series forecasts?

First things first: I am new to the world of statistics.
Problem statement:
I have three predicted time series representing three independent scores. I want to select a timeslot, of a given length, over which the sum of these scores is minimized. I have read that there is confidence-based selection using prediction intervals for such problems, but I used an LSTM to predict the time series, which may rule that approach out; as I understand it, prediction intervals apply to a single time series.
E.g., the arrays below represent the three predicted time series:
arr1 = [23, 34, 16, 5, 45, 10, 2, 34, 56, 11]
arr2 = [123, 100, 124, 245, 125, 120, 298, 124, 175, 200]
arr3 = [1, 3, 10, 7, 2, 2, 10, 7, 8, 12]
time slot length = 3
As you can see, the optimal timeslot for arr1 is [5, 7], for arr2 it is [0, 2], and for arr3 it is [3, 5], but I need a single timeslot for all three time series.
Questions:
Which error paradigm should I employ to select the optimal timeslot?
I have also been given weights (positive real numbers in [0, 1]) that represent the importance of each time series in deciding the timeslot. How do I incorporate them into the error paradigm?
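
I can't say which error paradigm is "right", but here is a minimal C++ sketch of the mechanical part, assuming the objective is simply the weighted sum of the three series over the window (the function name and the equal example weights are mine):

#include <cstddef>
#include <iostream>
#include <vector>

// Returns the start index of the length-L window minimizing the weighted
// sum of the predicted series. The weighted-sum objective is an assumption.
std::size_t best_window(const std::vector<std::vector<double>>& series,
                        const std::vector<double>& weights, std::size_t L) {
    std::size_t n = series[0].size();
    // Combine the series pointwise using the importance weights.
    std::vector<double> combined(n, 0.0);
    for (std::size_t s = 0; s < series.size(); ++s)
        for (std::size_t t = 0; t < n; ++t)
            combined[t] += weights[s] * series[s][t];
    // Slide a window of length L, tracking the minimum sum.
    double sum = 0.0;
    for (std::size_t t = 0; t < L; ++t) sum += combined[t];
    double best = sum;
    std::size_t bestStart = 0;
    for (std::size_t t = L; t < n; ++t) {
        sum += combined[t] - combined[t - L];
        if (sum < best) { best = sum; bestStart = t - L + 1; }
    }
    return bestStart;
}

int main() {
    std::vector<std::vector<double>> series = {
        {23, 34, 16, 5, 45, 10, 2, 34, 56, 11},
        {123, 100, 124, 245, 125, 120, 298, 124, 175, 200},
        {1, 3, 10, 7, 2, 2, 10, 7, 8, 12}};
    std::vector<double> weights = {1.0, 1.0, 1.0};  // equal importance
    std::cout << best_window(series, weights, 3) << "\n";  // prints 0
}

Note that with equal weights, arr2 dominates the choice simply because its values are an order of magnitude larger than the others, which is one argument for normalizing each series before applying the weights.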

Scattered indices in MPI

I'm trying to divide up an array between processors such that each one takes points from different parts in the array. For example, if
A = {1, 2, 3, 4, 5, 6, 7, 8}
and I'm using 2 processors, I want P1 to handle {1, 3, 5, 7}, and P2 to handle {2, 4, 6, 8}.
When scaling up to very large numbers of points (millions) and processors (128), this is tricky. In previous versions of my function, I simply gave P1 the first chunk of points, P2 the next chunk, and so on (which is easy with MPI_Gatherv).
Is there some way to use MPI_Gatherv to make this work, or a way to use MPI_Send and MPI_Recv to achieve it? The trouble with MPI_Gatherv is that while you can specify the displacement of each processor's data, it still places all of P1's data before P2's, before P3's, and so on.
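
One standard trick is to keep MPI_Gatherv but describe the interleaved layout on the root with a derived datatype: an MPI_Type_vector whose stride is the number of ranks, resized with MPI_Type_create_resized so that consecutive ranks start one element apart. A minimal sketch, assuming each rank's points are contiguous locally and the total count divides evenly:

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int local_n = 4;  // points per rank (2 ranks x 4 points = 8 total)
    // Rank r owns global indices r, r + nprocs, r + 2*nprocs, ...
    std::vector<int> local(local_n);
    for (int j = 0; j < local_n; ++j)
        local[j] = rank + j * nprocs + 1;  // reproduces A = {1..8} on 2 ranks

    // One rank's elements inside the global array: local_n blocks of one
    // int, with stride nprocs...
    MPI_Datatype vec, strided;
    MPI_Type_vector(local_n, 1, nprocs, MPI_INT, &vec);
    // ...resized so that rank r's data can be placed at element offset r.
    MPI_Type_create_resized(vec, 0, sizeof(int), &strided);
    MPI_Type_commit(&strided);

    std::vector<int> global(local_n * nprocs);
    std::vector<int> recvcounts(nprocs, 1), displs(nprocs);
    for (int r = 0; r < nprocs; ++r) displs[r] = r;

    MPI_Gatherv(local.data(), local_n, MPI_INT,
                global.data(), recvcounts.data(), displs.data(), strided,
                0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int x : global) printf("%d ", x);  // 1 2 3 4 5 6 7 8
        printf("\n");
    }
    MPI_Type_free(&vec);
    MPI_Type_free(&strided);
    MPI_Finalize();
}

The same resized datatype works in the other direction with MPI_Scatterv, for handing the interleaved points out in the first place.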

Efficiently finding overlapping segments from a set of lists

Suppose I have the following lists:
[1, 2, 3, 20, 23, 24, 25, 32, 31, 30, 29]
[1, 2, 3, 20, 23, 28, 29]
[1, 2, 3, 20, 21, 22]
[1, 2, 3, 14, 15, 16]
[16, 17, 18]
[16, 17, 18, 19, 20]
Order matters here. These are the nodes resulting from a depth-first search in a weighted graph. What I want to do is break down the lists into unique paths (where a path has at least 2 elements). So, the above lists would return the following:
[1, 2, 3]
[20, 23]
[24, 25, 32, 31, 30, 29]
[28, 29]
[20, 21, 22]
[14, 15, 16]
[16, 17, 18]
[19, 20]
The general idea I have right now is this:
Step 1: Look through all pairs of lists and collect the overlapping segments at the beginning of each pair. In the above example, this would be the output:
[1, 2, 3, 20, 23]
[1, 2, 3, 20]
[1, 2, 3]
[16, 17, 18]
Step 2: Reduce the overlaps from step 1 to a set of unique common prefixes. The output would be this:
[1, 2, 3]
[16, 17, 18]
Step 3: Once I have the lists from step 2, I look through each input list and chop off the front if it matches one of the lists from step 2. The new lists look like this:
[20, 23, 24, 25, 32, 31, 30, 29]
[20, 23, 28, 29]
[20, 21, 22]
[14, 15, 16]
[19, 20]
I then go back and apply step 1 to the truncated lists from step 3. When step 1 doesn't output any overlapping lists, I'm done.
Step 2 is the tricky part here. What's silly is that it's actually equivalent to solving the original problem, only on smaller lists.
What's the most efficient way to solve this problem? Looking at all pairs obviously requires O(N^2) time, and step 2 seems wasteful since I need to run the same procedure to solve these smaller lists. I'm trying to figure out if there's a smarter way to do this, and I'm stuck.
It seems like the solution is to modify a trie to serve the purpose. Trie compression gives clues, but the kind of compression needed here won't yield any performance benefit.
The first list you add becomes its own node (rather than k nodes). If there is any overlap, nodes split, but they never get smaller than two elements of the array.
A simple example of the graph structure looks like this:
insert (1,2,3,4,5)
graph: (1,2,3,4,5)->None
insert (1,2,3)
graph: (1,2,3)->(4,5), (4,5)->None
insert (3,32)
graph: (1,2,3)->(4,5), (4,5)->None, (3,32)->None
segments
output: (1,2,3), (4,5), (3,32)
The child nodes should also be stored as an actual trie, at least when there are enough of them, to avoid a linear search when adding/removing from the data structure (which could increase the runtime by a factor of N). If that is implemented, the data structure has the same big-O performance as a trie, with somewhat higher hidden constants: insertion takes O(L*N), where L is the average size of the lists and N is the number of lists. Obtaining the segments is linear in the number of segments.
The final data structure for your example is basically a directed graph, with the start node at the bottom.
Note that this data structure can be built as you run the DFS rather than afterwards.
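
Here is a rough C++ sketch of the structure described above (names are mine, and the children are kept in a plain vector with linear search, i.e. without the nested-trie optimization mentioned earlier):

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <memory>
#include <vector>

// A compressed-trie node whose label holds a run of at least two elements.
struct Node {
    std::vector<int> label;
    std::vector<std::unique_ptr<Node>> children;  // linear scan, see above
};

// Length of the common prefix of a and b.
static std::size_t common(const std::vector<int>& a, const std::vector<int>& b) {
    std::size_t k = 0;
    while (k < a.size() && k < b.size() && a[k] == b[k]) ++k;
    return k;
}

static void insert(Node* node, std::vector<int> seq) {
    if (seq.empty()) return;
    // Find the child sharing the longest prefix with seq.
    Node* best = nullptr;
    std::size_t bestK = 0;
    for (auto& c : node->children) {
        std::size_t k = common(c->label, seq);
        if (k > bestK) { bestK = k; best = c.get(); }
    }
    if (best && bestK == best->label.size()) {
        // seq continues past this node's label; descend with the remainder.
        insert(best, {seq.begin() + bestK, seq.end()});
        return;
    }
    // Split point: never leave either half with fewer than two elements.
    std::size_t p = best ? std::min(bestK, best->label.size() - 2) : 0;
    if (best && p >= 2) {
        auto suffix = std::make_unique<Node>();
        suffix->label.assign(best->label.begin() + p, best->label.end());
        suffix->children = std::move(best->children);
        best->label.resize(p);
        best->children.push_back(std::move(suffix));
        insert(best, {seq.begin() + p, seq.end()});
    } else {
        // No usable split: seq becomes its own node, possibly duplicating a
        // shared element (e.g. the 20 shared by [20,23] and [20,21,22]).
        auto fresh = std::make_unique<Node>();
        fresh->label = std::move(seq);
        node->children.push_back(std::move(fresh));
    }
}

static void segments(const Node* node, std::vector<std::vector<int>>& out) {
    if (!node->label.empty()) out.push_back(node->label);
    for (auto& c : node->children) segments(c.get(), out);
}

int main() {
    std::vector<std::vector<int>> lists = {
        {1, 2, 3, 20, 23, 24, 25, 32, 31, 30, 29},
        {1, 2, 3, 20, 23, 28, 29},
        {1, 2, 3, 20, 21, 22},
        {1, 2, 3, 14, 15, 16},
        {16, 17, 18},
        {16, 17, 18, 19, 20}};
    Node root;
    for (auto& l : lists) insert(&root, l);
    std::vector<std::vector<int>> segs;
    segments(&root, segs);
    for (auto& s : segs) {
        for (int x : s) std::cout << x << ' ';
        std::cout << '\n';
    }
}

On the six example lists this prints exactly the eight segments listed in the question.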
I ended up solving this by thinking about the problem slightly differently. Instead of thinking about sequences of nodes (where an edge is implicit between each successive pair of nodes), I'm thinking about sequences of edges. I basically use the algorithm I posted originally. Step 2 is simply an iterative step where I repeatedly identify prefixes until there are no more prefixes left to identify. This is pretty quick, and dealing with edges instead of nodes really simplified everything.
Thanks for everyone's help!

How do I add a random offset to values in a Pseq?

Given a Pseq similar to the following:
Pseq([1, 2, 3, 4, 5, 6, 7, 8], inf)
How would I randomise the values slightly each time? That is, not just randomly alter the 8 values once at initialisation time, but have a random offset added each time a value is sent to the stream?
Here's a neat way:
(Pseq([1, 2, 3, 4, 5, 6, 7, 8], inf) + Pgauss(0, 0.1))
First you need to know that Pgauss is just a pattern that generates gaussian random numbers. You can use any other kind of pattern such as Pwhite.
Then you need to know the really pleasant bit: performing basic math operations on Patterns (as above) composes the patterns (by wrapping them in Pbinop).

How to solve this variation of Kirkman's schoolgirls

I am trying to implement an app which assigns s students to l labs in g lab groups. The constraints are:
1: Students shall work with new students in every lab.
2: All students shall be lab leader exactly once.
Constraint 2 is not solvable if the students can't be divided evenly into the lab groups. Therefore it is acceptable if the "odd" students never get to be lab leader.
I have tried two approaches, but I am not happy yet:
Tabu search, which solves constraint 1 but has problems solving constraint 2. (I actually first solve 1 and then try to solve 2, which might be the wrong approach; any suggestions?)
A simple solution where I divide the students into #labs rows [0..6][7..13][14..20], then rotate each row and transpose the matrix, repeating this #labs times with increasing per-row rotations: (0,0,0) for lab 1, (1,2,3) for lab 2, and (2,4,6) for lab 3. For 21 students in 3 labs with lab groups of 7, the result looks like this:
lab 1: [0, 7, 14], [1, 8, 15], [2, 9, 16], [3, 10, 17], [4, 11, 18], [5, 12, 19], [6, 13, 20]
lab 2: [6,12, 18], [0, 13, 19], [1, 7, 20], [2, 8, 14], [3, 9, 15], [4, 10, 16], [5, 11, 17]
lab 3: [5, 10, 15], [6, 11, 16], [0, 12, 17], [1, 13, 18], [2, 7, 19], [3, 8, 20], [4, 9, 14]
The lab leaders are the first column for lab 1, the second column for lab 2, and so on.
This solution works decently, but it fails, for instance, for 12 students in 3 labs or 150 students in 6 labs. Any suggestions?
Approach 2 seems to handle about the same number of cases as approach 1, and is lightning fast by comparison. Maybe I should get a Nobel Prize :-)
Constraint #1 alone is usually referred to as the social golfer problem. (Let g be the number of groups, s the size of each group, and w the number of weeks. A grouping is a partition of g * s golfers into g groups of size s. Determine whether w groupings can be found such that each pair of golfers is grouped together at most once.) The social golfer problem has been studied in the combinatorial optimization literature, and the approaches are of three types (you can use your favorite search engine to find the research articles):
Local search. This is effective when w is well below its maximum feasible value. Dotú and Van Hentenryck have a paper applying tabu search to the social golfer problem.
Complete search. This is necessary when w is above or just below its maximum feasible value, but it does not scale very well.
Algebraic constructions. This is how the notorious g=8 s=4 w=10 instance was solved. Unfortunately, for many parameter sets there is no construction known.
To assign lab leaders, compute a maximum matching between students and lab groups, where there is an edge between a student and a lab group if that student belongs to that lab group.
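
A minimal C++ sketch of that matching step, using the standard augmenting-path (Kuhn's) algorithm; the adjacency in main is a made-up toy instance with one slot per (lab, group) pair:

#include <cstddef>
#include <iostream>
#include <vector>

// Maximum bipartite matching between students and lab-group slots.
// adj[s] lists the slot indices student s is a member of.
struct Matching {
    const std::vector<std::vector<int>>& adj;
    std::vector<int> leader;     // slot -> student leading it, or -1
    std::vector<char> visited;   // slots seen in the current augmenting pass

    Matching(const std::vector<std::vector<int>>& a, int nSlots)
        : adj(a), leader(nSlots, -1) {}

    bool tryAssign(int s) {
        for (int g : adj[s]) {
            if (visited[g]) continue;
            visited[g] = 1;
            // Take a free slot, or evict its leader if the leader can move.
            if (leader[g] == -1 || tryAssign(leader[g])) {
                leader[g] = s;
                return true;
            }
        }
        return false;
    }

    int solve(int nStudents) {
        int matched = 0;
        for (int s = 0; s < nStudents; ++s) {
            visited.assign(leader.size(), 0);
            if (tryAssign(s)) ++matched;
        }
        return matched;
    }
};

int main() {
    // Toy: 4 students, 2 labs x 2 groups = 4 slots (membership made up).
    std::vector<std::vector<int>> adj = {{0, 2}, {0, 3}, {1, 2}, {1, 3}};
    Matching m(adj, 4);
    std::cout << m.solve(4) << " slots got a leader\n";  // prints 4
    for (std::size_t g = 0; g < m.leader.size(); ++g)
        std::cout << "slot " << g << ": student " << m.leader[g] << "\n";
}

Students left unmatched by the maximum matching are exactly the "odd" students who never get to be lab leader.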
