Creating a decision tree based on two columns - algorithm

I am using Spark for some large data processing, but I think this problem is fairly independent of it. I have the following data set (along with some other columns):
| Name  | Corrected_Name |
|-------|----------------|
| Pawan | Varun          |
| Varun | Naresh         |
| Dona  | Pia            |
Now I am trying to correct all the names, so in this case I will have to find the chain Pawan -> Varun -> Naresh. Is there a way to handle this in Spark, or some other algorithm?

First of all, note that names are usually a poor identifier because of frequent duplication. If you eventually have to "squash" a chain (collapse two rows into one), reducing by the name itself will cause chaos.
Regarding the original question, this is a common case that calls for iterative computation, and this type of use case has two possible directions:
(1) In memory (requires assumptions about the data size) - collect all the data onto a single machine, resolve the mapping in memory, and broadcast the result to the other machines (see the sketch after this list).
(2) Distributed mapping (assumes nothing about the data, but very expensive) - perform a distributed next-step lookup; this can be optimized down to roughly log(n) join-cache-count operations.
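A rough sketch of (1), assuming the (Name, Corrected_Name) pairs fit comfortably in driver memory; df is the input DataFrame and spark an active SparkSession (illustrative names, no cycle handling):
pairs = dict(df.select("Name", "Corrected_Name").rdd.map(tuple).collect())

def resolve(name):
    # follow the chain until its last element
    while name in pairs:
        name = pairs[name]
    return name

# map every Name straight to the end of its chain and broadcast the result
final_mapping = {name: resolve(corrected) for name, corrected in pairs.items()}
mapping_bc = spark.sparkContext.broadcast(final_mapping)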
PySpark code example for (2):
from pyspark.sql.functions import col, coalesce

forward = get_all_data()  # DataFrame with columns Name, Corrected_Name
current_count = -1
while current_count != 0:
    # self-join to look up the next element of every chain
    forward = forward.selectExpr("Name", "Corrected_Name as original_last_name", "Corrected_Name as connection") \
        .join(forward.selectExpr("Corrected_Name as Corrected_Name_Tmp", "Name as connection"), "connection", "left")
    # coalesce plays the role of the original merge_udf: take the next hop when one exists
    forward_clean = forward.withColumn("Corrected_Name", coalesce(col("Corrected_Name_Tmp"), col("original_last_name"))).cache()
    current_count = forward_clean.filter(forward_clean.Corrected_Name_Tmp.isNotNull()).count()
    forward = forward_clean.drop("original_last_name", "Corrected_Name_Tmp", "connection")
This code yields all rows, each one mapping the original "Name" to the last element of its "Corrected_Name" chain.
Note: (2) is very wasteful but assumes nothing; it can be optimized to roughly log(n) iterations by making the lookup step smarter, and the lookup becomes simpler still if you only need the first element of each chain. (1) is preferable in terms of computation, but you will have to benchmark its memory footprint.

Related

Can Index Sorting improve the performance of the sorting in the table?

I have an index with 0.5M records. In my UI I want to show this data in a paginated table.
+---+---+---+---+
| A | B | C | D |
+---+---+---+---+
| | | | |
The user can sort by, for example, columns A, C, and D (asc/desc): not in conjunction, but by any of these 3 columns separately.
From what I can see, Index Sorting allows the data in each segment to be stored ordered by a specified set of fields.
From my understanding, I can specify a sort setting for the index so that column A is stored sorted, and that should make sorting by exactly this field faster. Or I can specify A + C, and then sorting by exactly A in conjunction with C should be faster.
Can I benefit from Index Sorting in my scenario, or should I simply rely on the ES default configuration?
Create another index with a similar data set and try it out; use the Reindex API for this. That way you can see for yourself whether it improves performance or not.
Do you even need the optimization, considering that index sorting adds overhead at index time?
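One possible way to try this out, sketched in Python with the requests library (index and field names are placeholders; index sorting has to be set when the index is created, hence the reindex into a new index):
import requests

ES = "http://localhost:9200"

# new index with index sorting enabled on column A
requests.put(f"{ES}/my_index_sorted", json={
    "settings": {"index.sort.field": "A", "index.sort.order": "asc"},
    "mappings": {"properties": {"A": {"type": "keyword"}}}
})

# copy the existing data over with the Reindex API, then benchmark both indices
requests.post(f"{ES}/_reindex", json={
    "source": {"index": "my_index"},
    "dest": {"index": "my_index_sorted"}
})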

How do range queries work with Sorted String Tables?

I'm a bit confused. I cannot find any information about how to execute a range query against a sorted string table.
LevelDB and RocksDB support a range iterator which allows you to query between ranges, which is perfect for NoSQL. What I don't understand is how it is implemented to be efficient.
The tables are sorted in memory (and on disk) - what algorithm or data structure allows one to query a Sorted String Table efficiently in a range query? Do you just loop through the entries and rely on the cache lines being full of data?
Usually I would put a prefix tree in front, and this gives me indexing of keys. But I am guessing Sorted String Tables do something different and take advantage of sorting in some way.
Each layer of the LSM (except for the first one) is internally sorted by the key, so you can just keep an iterator into each layer and use the one pointing to the lexicographically smallest element. The files of a layer look something like this on disk:
Layer N
---------------------------------------
File1 | File2 | File3 | ... | FileN <- filename
n:File2 |n:File3|n:File4| ... | <- next file
a-af | af-b | b-f | ... | w-z <- key range
---------------------------------------
aaron | alex | brian | ... | walter <- value omitted for brevity, but these are key:value records
abe | amy | emily | ... | xena
... | ... | ... | ... | ...
aezz | azir | erza | ... | zoidberg
---------------------------------------
First Layer (either 0 or 1)
---------------------------------------
File1 | File2 | ... | FileK
alex | amy | ... | andy
ben | chad | ... | dick
... | ... | ... | ...
xena | yen | ... | zane
---------------------------------------
...
Assume that you are looking for everything in the range ag-d (exclusive). A "range scan" is just to find the first matching element and then iterate the files of the layer. So you find that File2 is the first to contain any matching elements, and scan up to the first element starting with 'ag'. You iterate over File2, then look at the next file for File2 (n:File3). You check the key-range it contains and find that it contains more elements from the range you are interested in, so you iterate it until you hit the first entry starting with 'd'. You do the same thing in every layer, except the first. The first layer has files which are not sorted among each other, but they are internally sorted, so you can just keep an iterator per file. You also keep one more for the current memtables (in-memory data, only persisted in a log).
This never becomes too expensive, because the first layer is typically compacted on a small constant threshold. As the files in every layer are sorted and the files are internally sorted by the key too, you can just advance the smallest iterator until all iterators are exhausted. Apart from the initial search, every step has to look at a fixed number of iterators (assuming a naive approach) and is thus O(1). Most LSMs employ a block cache, and thus the sequential reads typically hit the cache most of the time.
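As a toy illustration of the merging step (not how a real LSM engine is written), here is a Python sketch that k-way merges several internally sorted sources with heapq.merge and stops once the range is exhausted; the file contents are made up:
import heapq

def range_scan(sorted_sources, lo, hi):
    # one iterator per file / memtable; heapq.merge always advances the
    # iterator whose current key is smallest
    for key in heapq.merge(*sorted_sources):
        if key < lo:
            continue   # a real SSTable would seek to lo instead of skipping
        if key >= hi:
            break      # past the end of the range, stop
        yield key

file2 = ["alex", "amy", "azir"]
file3 = ["brian", "chad"]
memtable = ["aggie", "bo"]
print(list(range_scan([file2, file3, memtable], "ag", "d")))
# -> ['aggie', 'alex', 'amy', 'azir', 'bo', 'brian', 'chad']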
Last but not least, be aware that this is mostly a conceptual explanation, because most implementations have a few extra tricks up their sleeves that make these things more efficient. You have to know which data is contained in which file-ranges anyway when you do a major compaction, i.e., merge layer N into layer N + 1. Even the file-level operation may look quite different: RocksDB, e.g., maintains a coarse index with the key offsets at the beginning of each file to avoid scanning over the often much larger key/value pair portion of the file.

Versioned writeahead log - Does this data structure exist?

Background:
I have read that many DBMSs use write-ahead logging to preserve atomicity and durability of transactions by storing updates as a group of write operations. What I'm trying to accomplish is to create a dbms model with improved concurrency by allowing reads to proceed on 'old' data while writes are pending.
Question:
Is there a data structure that allows me to efficiently (ideally O(1) amortized, at most O(log n)) look up array elements (or memory locations, if you like), which may or may not have been overwritten by write operations, with reference to some point in time? This would be for about 1 TB of data in total.
Here is some ascii art to make this a little clearer. The dashes are data, with version 0 being the oldest version. The arrows indicate write operations.
^ ___________________________________Snapshot 2
| V | | V
| -- --- | | -------- Version 2
| | | __________________Snapshot 1
| V | | V
T| -------- | | --------- Version 1
I| | | ___________Snapshot 0
M| V V V V
E|------------------------------------- Version 0
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>
SPACE/ADDRESS
Attempts at solution:
Let N be the data size, M be the number of versions, and P be the average number of updates per version.
The naive algorithm (searching each update) is O(M*P).
Dividing the data into buckets, updating only entire buckets, and searching a bitmask of buckets would be O(N/B*M), where B is bucket size, which isn't much better.
A Bloom filter seems like a good candidate at first glance, except that it requires more data than a simple bitmask of each memory location (which would be bad anyway, since it requires M*N/8 bytes to store.)
A standard hash table also comes to mind, but what would the key be?
Actually, now that I've gone to the trouble of writing this all up, I've thought of a solution that uses a binary search tree. I'll submit it as an answer in a bit, but it's still O(M*log2(P)) in space and time which is not ideal. See below.
The following is the best solution I could come up with, though it is still suboptimal.
The idea is to place each region into a binary search tree, one tree per version, where each inner node contains a memory location and each leaf node is either Hit or Miss (and possibly lookup information), depending on whether updated data exists there. This is O(P*log(P)) to construct for each version, and O(M*log(P)) to look up in.
This is suboptimal for two reasons:
The tree is balanced, but Misses are much more likely than Hits in practice, so it would make sense to put Miss nodes higher in the tree, or arrange nodes by their size. Some kind of Huffman coding comes to mind, but Huffman's algorithm does not preserve the search tree invariants.
It requires M trees (hence O(M*log(P)) lookup). Maybe there is some way to combine the trees.
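For illustration only, here is a minimal Python sketch of the per-version lookup structure described above, using a sorted list of update ranges plus binary search in place of an explicit tree (names such as base_storage_read are hypothetical):
import bisect

class VersionIndex:
    # one sorted list of non-overlapping (start, end, data) update ranges per version
    def __init__(self, updates):
        self.updates = sorted(updates)
        self.starts = [start for start, _end, _data in self.updates]

    def lookup(self, addr):
        i = bisect.bisect_right(self.starts, addr) - 1   # O(log P)
        if i >= 0 and addr < self.updates[i][1]:
            return self.updates[i][2]                    # Hit
        return None                                      # Miss

def read(addr, version_indexes):
    # version_indexes: the VersionIndex objects visible at the chosen snapshot,
    # oldest first; walk them newest to oldest, so worst case is O(M log P)
    for index in reversed(version_indexes):
        hit = index.lookup(addr)
        if hit is not None:
            return hit
    return base_storage_read(addr)  # hypothetical fallback to Version 0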

Synchronize two ordered lists

We have two offline systems that normally can not communicate with each other. Both systems maintain the same ordered list of items. Only rarely will they be able to communicate with each other to synchronize the list.
Items are marked with a modification timestamp to detect edits. Items are identified by UUIDs to avoid conflicts when inserting new items (as opposed to using auto-incrementing integers). When synchronizing, new UUIDs are detected and copied to the other system. Likewise for deletions.
The above data structure is fine for an unordered list, but how can we handle ordering? If we added an integer "rank", that would need renumbering when inserting a new item (thus requiring synchronizing all successor items due to only 1 insertion). Alternatively, we could use fractional ranks (use the average of the ranks of the predecessor and successor item), but that doesn't seem like a robust solution as it will quickly run into accuracy problems when many new items are inserted.
We also considered implementing this as a doubly linked list, with each item holding the UUIDs of its predecessor and successor items. However, that would still require synchronizing 3 items when 1 new item is inserted (or synchronizing the 2 remaining items when 1 item is deleted).
Preferably, we would like to use a data structure or algorithm where only the newly inserted item needs to be synchronized. Does such a data structure exist?
Edit: we need to be able to handle moving an existing item to a different position too!
There is really no problem with the interpolated rank approach. Just define your own numbering system based on variable length bit vectors representing binary fractions between 0 and 1 with no trailing zeros. The binary point is to the left of the first digit.
The only inconvenience of this system is that the minimum possible key is 0 given by the empty bit vector. Therefore you use this only if you're positive the associated item will forever be the first list element. Normally, just give the first item the key 1. That's equivalent to 1/2, so random insertions in the range (0..1) will tend to minimize bit usage. To interpolate an item before and after,
01 < newly interpolated = 1/4
1
11 < newly interpolated = 3/4
To interpolate again:
001 < newly interpolated = 1/8
01
011 < newly interpolated = 3/8
1
101 < newly interpolated = 5/8
11
111 < newly interpolated = 7/8
Note that if you wish you can omit storing the final 1! All keys (except 0, which you won't normally use) end in 1, so storing it is superfluous.
Comparison of binary fractions is a lot like lexical comparison: 0<1 and the first bit difference in a left-to-right scan tells you which is less. If no differences occur, i.e. one vector is a strict prefix of the other, then the shorter one is smaller.
With these rules it's pretty simple to come up with an algorithm that accepts two bit vectors and computes a result that's roughly (or exactly in some cases) between them. Just add the bit strings, and shift right 1, dropping unnecessary trailing bits, i.e. take the average of the two to split the range between.
In the example above, if deletions had left us with:
01
111
and we need to interpolate these, add 01(0) and 111 to obtain 1.001, then shift to get 1001. This works fine as an interpolant. But note the final 1 unnecessarily makes it longer than either of the operands. An easy optimization is to drop the final 1 bit along with trailing zeros to get simply 1. Sure enough, 1 is about halfway between, as we'd hope.
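A small Python sketch of that interpolation rule (pad, add, halve, then trim), with keys kept as plain bit strings; lexicographic string comparison matches the fraction order described above:
def midpoint(lo, hi):
    # keys are bit strings after the binary point, e.g. "01" == 1/4, "111" == 7/8
    width = max(len(lo), len(hi)) + 1   # one extra bit for the halving
    mid = (int(lo.ljust(width, "0"), 2) + int(hi.ljust(width, "0"), 2)) // 2
    bits = format(mid, "b").rjust(width, "0").rstrip("0")
    # optimization from above: keep chopping the last bit while the shorter
    # key still falls strictly between lo and hi
    while len(bits) > 1:
        shorter = bits[:-1].rstrip("0")
        if shorter and lo < shorter < hi:
            bits = shorter
        else:
            break
    return bits

print(midpoint("01", "111"))   # -> "1", the trimmed form of 1001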
Of course if you do many inserts in the same location (think e.g. of successive inserts at the start of the list), the bit vectors will get long. This is exactly the same phenomenon as inserting at the same point in a binary tree. It grows long and stringy. To fix this, you must "rebalance" during a synchronization by renumbering with the shortest possible bit vectors, e.g. for 14 you'd use the sequence above.
Addition
Though I haven't tried it, the Postgres bit string type seems to suffice for the keys I've described. The thing I'd need to verify is that the collation order is correct.
Also, the same reasoning works just fine with base-k digits for any k>=2. The first item gets key k/2. There is also a simple optimization that prevents the very common cases of appending and prepending elements at the end and front respectively from causing keys of length O(n). It maintains O(log n) for those cases (though inserting at the same place internally can still produce O(p) keys after p insertions). I'll let you work that out. With k=256, you can use indefinite length byte strings. In SQL, I believe you'd want varbinary(max). SQL provides the correct lexicographic sort order. Implementation of the interpolation ops is easy if you have a BigInteger package similar to Java's. If you like human-readable data, you can convert the byte strings to e.g. hex strings (0-9a-f) and store those. Then normal UTF8 string sort order is correct.
You can add two fields to each item - 'creation timestamp' and 'inserted after' (containing the id of the item after which the new item was inserted). Once you synchronize a list, send all the new items. That information is enough for you to be able to construct the list on the other side.
With the list of newly added items received, do this (on the receiving end): sort by creation timestamp, then go one by one, and use the 'inserted after' field to add the new item in the appropriate place.
You may face trouble if an item A is added, then B is added after A, then A is removed. If this can happen, you will need to sync A as well (basically syncing the operations that took place on the list since the last sync, and not just the content of the current list). It's basically a form of log-shipping.
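Ignoring that caveat for a moment, a minimal sketch of the receiving side could look like this (field names such as created_at and inserted_after are assumptions, not an established schema):
def apply_new_items(local_list, new_items):
    # local_list: items already on this system, each a dict with a "uuid" key
    for item in sorted(new_items, key=lambda i: i["created_at"]):
        if item["inserted_after"] is None:
            local_list.insert(0, item)   # new head of the list
        else:
            pos = next(i for i, existing in enumerate(local_list)
                       if existing["uuid"] == item["inserted_after"])
            local_list.insert(pos + 1, item)
    return local_list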
You could have a look at "lenses", which is a bidirectional programming concept.
For instance, your problem seems to be solved by "matching lenses", described in this paper.
I think the data structure that is appropriate here is an order statistic tree. In an order statistic tree you also maintain the size of each subtree along with the other data; the size field makes it easy to find an element by rank, which is what you need. Operations like rank, delete, change position, and insert are all O(log n).
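A bare-bones sketch of the select-by-rank idea (balancing and the insert/delete paths are omitted; in a real order statistic tree the size field is maintained on every update and rotation):
class Node:
    def __init__(self, item, left=None, right=None):
        self.item, self.left, self.right = item, left, right
        self.size = 1 + subtree_size(left) + subtree_size(right)

def subtree_size(node):
    return node.size if node else 0

def select(node, k):
    # return the k-th item (0-based) in list order, in O(height) time
    left = subtree_size(node.left)
    if k < left:
        return select(node.left, k)
    if k == left:
        return node.item
    return select(node.right, k - left - 1)

root = Node("B", Node("A"), Node("C"))
print(select(root, 1))   # -> "B"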
I think you can try a kind of transactional approach here. For example, you do not delete items physically but mark them for deletion, and commit changes only during synchronization. I'm not absolutely sure which data type you should choose; it depends on which operations you want to be most efficient (insertions, deletions, search or iteration).
Say we have the following initial state on both systems:
|1| |2|
--- ---
|A| |A|
|B| |B|
|C| |C|
|D| |D|
After that, the first system marks element B for deletion and the second system inserts element BC between B and C:
|1         |    |2           |
------------    --------------
|A         |    |A           |
|B[deleted]|    |B           |
|C         |    |BC[inserted]|
|D         |    |C           |
                |D           |
Both systems continue processing, taking local changes into account: System 1 ignores element B and System 2 treats element BC as a normal element.
When synchronization occurs:
As I understand it, each system receives the list snapshot from the other system and both systems freeze processing until synchronization is finished.
Each system then iterates sequentially through the received snapshot and the local list, writing changes to the local list (resolving possible conflicts according to the modification timestamp). After that the 'transaction is committed': all local changes are finally applied and the information about them is erased.
For example for system one:
|1 pre-sync|               |2-SNAPSHOT  |    |1 result|
------------               --------------    ----------
|A         |  <the same>   |A           |    |A       |
|B[deleted]|  <delete B>   |B           |
              <insert BC>  |BC[inserted]|    |BC      |
|C         |  <same>       |C           |    |C       |
|D         |  <same>       |D           |    |D       |
Systems wake up and continue processing.
Items are sorted by insertion order; moving can be implemented as a simultaneous deletion and insertion. Also, I think it would be possible not to transfer the whole list snapshot but only the list of items that were actually modified.
I think, broadly, Operational Transformation could be related to the problem you are describing here. For instance, consider the problem of Real-Time Collaborative text editing.
We essentially have a sorted list of items (words) which needs to be kept synchronized, and whose items can be added/modified/deleted at random positions within the list. The only major difference I see is in the frequency of modifications to the list (you say it does not happen often).
Operational Transformation happens to be a well-studied field. I could find this blog article giving pointers and an introduction. Plus, for all the problems Google Wave had, it actually made significant advancements to the domain of Operational Transformation. Check this out. There is quite a bit of literature available on the subject; look at this Stack Overflow thread, and also at Differential Synchronisation.
Another parallel that struck me was the data structure used in Text Editors - Ropes.
So if you have a log of operations, let's say "Index 5 deleted", "Index 6 modified to ABC", "Index 8 inserted", what you might have to do is transmit the log of changes from System A to System B, and then replay the operations sequentially on the other side.
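As a toy illustration of that log-replay idea (the operation format here is invented for the example):
def replay(items, op_log):
    for op in op_log:
        if op["kind"] == "delete":
            del items[op["index"]]
        elif op["kind"] == "modify":
            items[op["index"]] = op["value"]
        elif op["kind"] == "insert":
            items.insert(op["index"], op["value"])
    return items

print(replay(["a", "b", "c"], [{"kind": "insert", "index": 1, "value": "ABC"},
                               {"kind": "delete", "index": 0}]))
# -> ['ABC', 'b', 'c']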
The other "pragmatic Engineer" 's choice would be to simply reconstruct the entire list on System B when System A changes. Depending on actual frequency and size of changes, this might not be as bad as it sounds.
I have tentatively solved a similar problem by including a PrecedingItemID (which can be null if the item is the top/root of the ordered list) on each item, and then having a sort of local cache that keeps a list of all items in sorted order (this is purely for efficiency—so you don't have to recursively query for or build the list based on PrecedingItemIDs every time there is a re-ordering on the local client). Then when it comes time to sync I do the slightly more expensive operation of looking for cases where two items are requesting the same PrecedingItemID. In those cases, I simply order by creation time (or however you want to reconcile which one wins and comes first), put the second (or others) behind it, and move on ordering the list. I then store this new ordering in the local ordering cache and go on using that until the next sync (just making sure to keep the PrecedingItemID updated as you go).
I haven't unit tested this approach yet—so I'm not 100% sure I'm not missing some problematic conflict scenario—but it appears at least conceptually to handle my needs, which sound similar to those of the OP.

Which is more efficient - Computing results using a function in realtime or reading the results directly from a database?

Let us take this example scenario:
There exists a really complex function involving mathematical square roots and cube roots (which are slow to compute). As an example, let us assume the function accepts two parameters a and b, and that the input ranges for both a and b are well-defined: say the input values can range from 0 to 100.
So essentially fn(a,b) can be either computed in real time or its results can be pre-filled in a database and fetched as and when required.
Method 1: Compute in realtime
function fn(a, b) {
  result = compute_using_cuberoots(a, b)
  return result
}
Method 2: Fetch the function result from a database
We have a database pre-filled with the input values mapped to the corresponding result:
a | b | result
0 | 0 | 12.4
1 | 0 | 14.8
2 | 0 | 18.6
. | . | .
. | . | .
100 | 100 | 1230.1
And we can
function fn(a, b) {
  result = fetch_from_db(a, b)
  return result
}
My question:
Which method would you advocate and why? Why do you think one method is more efficient than the other?
I believe this is a scenario most of us will face at some point in our programming lives, hence this question.
Thank you.
Question Background (might not be relevant)
Example: in scenarios like image processing it is common to come across such situations, where the range of input values (R, G, B) is known (0-255) and the mathematical computation of square roots and cube roots adds too much time for server requests to complete quickly.
Say, for example, you're building an app like Instagram: the time taken to process an image the user sends to the server and to return the processed image must be kept minimal for an optimal user experience. In such situations it is important to minimize the time taken to process the image. Worse yet, scalability problems appear when the number of such processing requests grows large.
Hence it is necessary to choose whichever of the methods described above is the most efficient in such situations.
More details on my situation (if required):
Framework: Ruby on Rails, Database: MongoDB
I wouldn't advocate either method, I'd test them both (if I thought they were both reasonable) and get some data.
Having written that, I'll rise to the bait: given the relative speed of computation vs I/O I would expect computation to be faster than retrieving the function values from a database. I'll acknowledge the possibility (and no more) that in some special cases an in-memory database will be able to outperform (re-)computation, but as a general rule, no.
"More efficient" is a fuzzy term. "Faster" is more concrete.
If you're talking about a few million rows in a SQL database table, then selecting a single row might well be faster than calculating the result. On commodity hardware, using an untuned server, I can usually return a single row from an indexed table of millions of rows in just a few tenths of a millisecond. But I'd think hard before installing a dbms server and building a database only for this one purpose.
To make "faster" a little less concrete, when you're talking about user experience, and within certain limits, actual speed is less important than apparent speed. The right kind of feedback at the right time makes people either feel like things are running fast, or at least makes them feel like waiting just a little bit is not a big deal. For details about exactly how to do that, I'd look at User Experience on the Stack Exchange network.
The good thing is that it's pretty simple to test both ways. For speed testing just this particular issue, you don't even need to store the right values in the database. You just need to have the right keys and indexes. I'd consider doing that if calculating the right values is going to take all day.
You should probably test over an extended period of time. I'd expect there to be more variation in speed from the dbms. I don't know how much variation you should expect, though.
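To make the "test both ways" advice concrete, here is a rough, illustrative benchmark in Python using SQLite and a stand-in function (the OP's stack is Rails + MongoDB, so this only demonstrates the shape of the test, not the real numbers):
import math
import sqlite3
import time

def fn(a, b):  # stand-in for the "really complex" function
    return math.sqrt(a + 1) + (b + 1) ** (1 / 3)

# pre-fill a keyed table with every (a, b) result, as in Method 2
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (a INTEGER, b INTEGER, result REAL, PRIMARY KEY (a, b))")
db.executemany("INSERT INTO results VALUES (?, ?, ?)",
               [(a, b, fn(a, b)) for a in range(101) for b in range(101)])

t0 = time.perf_counter()
for a in range(101):
    for b in range(101):
        fn(a, b)                                   # Method 1: compute
t1 = time.perf_counter()
for a in range(101):
    for b in range(101):
        db.execute("SELECT result FROM results WHERE a = ? AND b = ?", (a, b)).fetchone()
t2 = time.perf_counter()
print(f"compute: {t1 - t0:.4f}s  lookup: {t2 - t1:.4f}s")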
Pre-computing results and reading them from a table can be a good solution if the inputs are fixed values. Computing in real time and caching the results for an appropriate period can be a good solution if the inputs vary between situations.
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" Donald Knuth
I'd consider using a hash as a combination of calculating and storing. With the really complex function represented as a**b:
lazy = Hash.new { |h, (a, b)| h[[a, b]] = a**b }
lazy[[4, 4]]
p lazy #=> {[4, 4]=>256}
I'd think about storing the values in the code itself:
class MyCalc
  RESULTS = [
    [12.4, 14.8, 18.6, ...],
    ...
    [..., 1230.1]
  ]

  def self.fn(a, b)
    RESULTS[a][b]
  end
end

MyCalc.fn(0, 1) #=> 14.8
