Sorting a 20GB file with one string per line - sorting

In question 11.5 of Gayle Laakmann's book, Cracking the Technical Interview:
"Imagine you have a 20GB file with one string per line. Explain how you would sort the file."
My initial reaction was exactly the solution that she proposes: split the file into smaller chunks by reading in X MB of data, sorting it, and writing it to disk, and at the very end merge the sorted files.
I decided not to pursue this approach because the final merge would involve holding all the data in main memory - and we're assuming that's not possible. If that's the case, how exactly does this solution hold up?
My other approach is based on the assumption that we have near-unlimited disk space, or at least enough to hold twice the data we already have. We can read in X MB of data and then generate hash keys for it - each key corresponding to a line in a file. We'll continue doing this until all values have been hashed. Then we just have to write the values of that file back into the original file.
Let me know what you think.

http://en.wikipedia.org/wiki/External_sorting gives a more detailed explanation of how external sorting works. It addresses your concern about eventually having to bring the entire 20 GB into memory: the final merge of the N sorted chunks is performed by reading small blocks from each sorted chunk rather than reading all the sorted chunks into memory at the same time.
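For a concrete picture of that final merge, here is a minimal Python sketch (the file names are placeholders, and heapq.merge is just one convenient way to do the k-way merge) that streams the N sorted chunk files while buffering only one line per chunk at a time:

    import heapq

    def merge_sorted_chunks(chunk_paths, output_path):
        # Open every sorted chunk file; iterating a file object yields lines
        # lazily, so only one line per chunk is buffered at a time.
        files = [open(p) for p in chunk_paths]
        try:
            with open(output_path, "w") as out:
                # heapq.merge repeatedly emits the smallest remaining line
                # across all chunks, assuming each chunk is already sorted.
                for line in heapq.merge(*files):
                    out.write(line)
        finally:
            for f in files:
                f.close()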

Related

Spark: Minimize task/partition size skew with textFile's minPartitions option?

I'm reading tens of thousands of files into an RDD via something like sc.textFile("/data/*/*/*"). One problem is that most of these files are tiny, whereas others are huge. That leads to imbalanced tasks, which causes all sorts of well-known problems.
Can I break up the largest partitions by instead reading in my data via sc.textFile("/data/*/*/*", minPartitions=n_files*5), where n_files is the number of input files?
As covered elsewhere on Stack Overflow, minPartitions gets passed way down the Hadoop rabbit hole and is used in org.apache.hadoop.mapred.TextInputFormat.getSplits. My question is whether this is implemented such that the largest files are split first. In other words, does the splitting strategy try to produce evenly sized partitions?
I would prefer an answer that points to wherever the splitting strategy is actually implemented in a recent version of Spark/Hadoop.
Nobody's posted an answer so I dug into this myself and will post an answer to my own question:
It appears that, if your input file(s) are splittable, then textFile will indeed try to balance partition size if you use the minPartitions option.
The partitioning strategy is implemented here, i.e., in the getSplits method of org.apache.hadoop.mapred.TextInputFormat. The strategy is somewhat involved: it first sets goalSize, which is simply the total size of the input divided by numSplits (minPartitions is passed down to set the value of numSplits). It then splits up the files in a way that tries to keep each partition's size (in terms of its input's byte size) as close as possible to goalSize.
If your input file(s) are not splittable, then this splitting will not take place: see the source code here.
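For reference, my reading of the old-API getSplits logic boils down to roughly the following Python sketch. This is a paraphrase, not the Hadoop code: the 128 MB block size and min_size default are illustrative placeholders, and split_slop mirrors what I recall as Hadoop's SPLIT_SLOP constant.

    # Simplified paraphrase of the split computation (not the actual Hadoop code).
    # file_sizes, block_size, and min_size are illustrative placeholders.
    def compute_split_sizes(file_sizes, num_splits,
                            block_size=128 * 1024 * 1024, min_size=1,
                            split_slop=1.1):
        total_size = sum(file_sizes)
        goal_size = total_size // max(num_splits, 1)  # numSplits comes from minPartitions
        splits = []
        for size in file_sizes:
            # Each splittable file is cut into pieces near goal_size,
            # capped by the block size and floored by min_size.
            split_size = max(min_size, min(goal_size, block_size))
            remaining = size
            while remaining / split_size > split_slop:
                splits.append(split_size)
                remaining -= split_size
            if remaining > 0:
                splits.append(remaining)
        return splits

In this sketch a huge file contributes many splits of roughly goal_size each, while a tiny file still contributes exactly one split, which matches the balancing behaviour described above.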

About multipass sort algorithm

I am reading Programming Pearls by Jon Bentley (reference).
In it, the author discusses various sorting algorithms, such as merge sort and multipass sort.
Questions:
How does the merge sort algorithm work by reading the input file only once, using work files, and writing the output file only once?
How can the author claim that the 40-pass (multipass) sort algorithm works by writing to the output file only once and using no work files?
Can someone explain the above with a simple example, say with enough memory to store 3 digits and 10 digits to sort, e.g. 9, 0, 8, 6, 5, 4, 1, 2, 3, 7?
This is from Chapter 1 of Jon Bentley's Programming Pearls, 2nd Edn (1999), which is an excellent book. The equivalent example from the first edition is slightly different; the multipass algorithm only made 27 passes over the data (and there was less memory available).
The sort described by Jon Bentley has special setup constraints.
File contains at most 10 million records.
Each record is a 7 digit number.
There is no other data associated with the records.
There is only 1 MiB of memory available when the sort must be done.
Question 1
The single read of the input file slurps as many lines from the input as will fit in memory, sorts that data, and writes it out to a work file. Rinse and repeat until there is no more data in the input file.
Then, complete the process by reading the work files and merging the sorted contents into a single output file. In extreme cases, it might be necessary to create new, bigger work files because the program can't read all the work files at once. If that happens, you arrange for the final pass to have the maximum number of inputs that can be handled, and have the intermediate passes merge appropriate numbers of files.
This is a general purpose algorithm.
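As a rough sketch of that first phase (assuming line-oriented input, with max_lines standing in for "as much as fits in memory"), each batch of input becomes one sorted work file:

    import itertools

    def make_sorted_runs(input_path, max_lines=1_000_000):
        # Read the input in memory-sized batches, sort each batch,
        # and write it out as its own work file.
        run_paths = []
        with open(input_path) as src:
            while True:
                batch = list(itertools.islice(src, max_lines))
                if not batch:
                    break
                batch.sort()
                run_path = "run_{}.tmp".format(len(run_paths))
                with open(run_path, "w") as run:
                    run.writelines(batch)
                run_paths.append(run_path)
        return run_paths  # these work files are then k-way merged into the output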
Question 2
This is where the peculiar properties of the data are exploited. Since the numbers are unique and limited in range, the algorithm can read the file the first time, extracting numbers from the first fortieth of the range, sorting and writing those; then it extracts the second fortieth of the range, then the third, ..., then the last fortieth.
This is a special-purpose algorithm, exploiting the nature of the numbers.
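To make the multipass idea concrete, here is a hedged sketch (not Bentley's code; the 7-digit range and 40 passes simply mirror his setup): on pass p, only the numbers falling in the p-th fortieth of the range are kept, sorted in memory, and appended to the output, so no work files are needed.

    def multipass_sort(input_path, output_path, passes=40, max_value=10_000_000):
        # Band width: each pass handles one fortieth of the 7-digit range.
        band = max_value // passes
        with open(output_path, "w") as out:
            for p in range(passes):
                lo, hi = p * band, (p + 1) * band
                # Re-read the entire input, keeping only this band's numbers;
                # because the records are unique 7-digit numbers, each band
                # holds at most band-many values and fits in memory.
                with open(input_path) as src:
                    keep = sorted(n for n in map(int, src) if lo <= n < hi)
                out.writelines("{}\n".format(n) for n in keep)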

Find common words from two files

Given two files, each containing a list of words (around a million), we need to find the words that are common to both.
Use an efficient algorithm; there is not enough memory available (certainly not for a million words). Some basic C programming code, if possible, would help.
The files are not sorted. We can use some sort of algorithm; please support it with basic code.
How can sorting of an external file with minimal memory be implemented in C?
Is anybody game for external sorting of a file? Please share some code for this.
Yet another approach.
General. First, notice that doing this naively, comparing every word in one list against every word in the other, takes O(N^2). With N = 1,000,000, this is a LOT. Sorting each list would take O(N*log(N)); then you can find the intersection in one pass by merging the files (see below). So the total is O(2N*log(N) + 2N) = O(N*log(N)).
Sorting a file. Now let's address the fact that working with files is much slower than working with memory, especially when sorting, where you need to move things around. One way to solve this: decide the size of a chunk that can be loaded into memory, load the file one chunk at a time, sort each chunk efficiently, and save it into a separate temporary file. The sorted chunks can then be merged (again, see below) into one sorted file in a single pass.
Merging. When you have 2 sorted lists (files or not), you can merge them into one sorted list easily in one pass: have 2 "pointers", initially pointing to the first entry in each list. In each step, compare the values the pointers point to. Move the smaller value to the merged list (the one you are constructing) and advance its pointer.
You can easily modify the merge algorithm to make it find the intersection: if the pointed-to values are equal, move the value to the results (and consider how you want to deal with duplicates).
For merging more than 2 lists (as in sorting the file above) you can generalize the algorithm for using k pointers.
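Here is a minimal sketch of that merge-style intersection over two already-sorted word files (one word per line assumed; duplicates are emitted only once):

    def common_words(sorted_path_a, sorted_path_b, output_path):
        # Two-pointer merge over two sorted word files, one word per line.
        with open(sorted_path_a) as a, open(sorted_path_b) as b, \
                open(output_path, "w") as out:
            wa, wb = a.readline(), b.readline()
            last = None
            while wa and wb:
                if wa < wb:
                    wa = a.readline()
                elif wb < wa:
                    wb = b.readline()
                else:
                    if wa != last:      # emit each common word only once
                        out.write(wa)
                        last = wa
                    wa, wb = a.readline(), b.readline()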
If you had enough memory to read the first file completely into RAM, I would suggest reading it into a dictionary (word -> index of that word), looping over the words of the second file, and testing whether each word is contained in that dictionary. Memory for a million words is not much today.
If you do not have enough memory, split the first file into chunks that fit into memory and do as I said above for each chunk. For example, fill the dictionary with the first 100,000 words, find every common word for those, then read the file a second time extracting words 100,001 up to 200,000, find the common words for that part, and so on.
And now the hard part: you need a dictionary structure, and you said "basic C". If you are willing to use "basic C++", there is the hash_map data structure provided as an extension to the standard library by common compiler vendors. In basic C, you should also try to use a ready-made library for that; read this SO post to find a link to a free library which seems to support that.
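In Python terms, the chunked-dictionary idea above might look roughly like this (a sketch only, not the requested C; chunk_size stands in for however many words actually fit in memory, and a set plays the role of the dictionary):

    import itertools

    def common_words_chunked(path_a, path_b, chunk_size=100_000):
        common = set()
        with open(path_a) as a:
            words_a = (w.strip() for w in a)
            while True:
                # Load the next chunk of the first file into a set
                # (the "dictionary" of the answer above).
                chunk = set(itertools.islice(words_a, chunk_size))
                if not chunk:
                    break
                # Re-read the second file, testing each word against this chunk.
                with open(path_b) as b:
                    common.update(w for w in (line.strip() for line in b)
                                  if w in chunk)
        return common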
Your problem is: given two sets of items, find the intersection (items common to both), while staying within the constraints of inadequate RAM (less than the size of either set).
Since finding an intersection requires comparing/searching each item in the other set, you must have enough RAM to store at least one of the sets (the smaller one) to have an efficient algorithm.
Assume that you know for a fact that the intersection is much smaller than both sets and fits completely inside available memory; otherwise you'll have to do further work in flushing the results to disk.
If you are working under memory constraints, partition the larger set into parts that fit inside 1/3 of the available memory. Then partition the smaller set into parts that fit in the second 1/3. The remaining 1/3 of memory is used to store the results.
Optimize by finding the max and min of the partition for the larger set. This is the set that you are comparing from. Then when loading the corresponding partition of the smaller set, skip all items outside the min-max range.
First find the intersection of both partitions through a double loop, storing common items in the results set and removing them from the original sets to save on comparisons further down the loop.
Then replace the partition in the smaller set with the second partition (skipping items outside the min-max). Repeat. Notice that the partition in the larger set is reduced -- with common items already removed.
After running through the entire smaller set, repeat with the next partition of the larger set.
Now, if you do not need to preserve the two original sets (e.g. you can overwrite both files), then you can further optimize by removing common items from disk as well. This way, those items no longer need to be compared in further partitions. You then partition the sets by skipping over removed ones.
I would give prefix trees (aka tries) a shot.
My initial approach would be to determine a maximum depth for the trie that would fit nicely within my RAM limits. Pick an arbitrary depth (say 3, you can tweak it later) and construct a trie up to that depth, for the smaller file. Each leaf would be a list of "file pointers" to words that start with the prefix encoded by the path you followed to reach the leaf. These "file pointers" would keep an offset into the file and the word length.
Then process the second file by reading each word from it and trying to find it in the first file using the trie you constructed. It would allow you to fail faster on words that don't match. The deeper your trie, the faster you can fail, but the more memory you would consume.
Of course, like Stephen Chung said, you still need RAM to store enough information to describe at least one of the files, if you really need an efficient algorithm. If you don't have enough memory -- and you probably don't, because I estimate my approach would require approximately the same amount of memory you would need to load a file whose words were 14-22 characters long -- then you have to process even the first file by parts. In that case, I would actually recommend using the trie for the larger file, not the smaller. Just partition it in parts that are no bigger than the smaller file (or no bigger than your RAM constraints allow, really) and do the whole process I described for each part.
Despite the length, this is sort of off the top of my head. I might be horribly wrong in some details, but this is how I would initially approach the problem and then see where it would take me.
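A rough sketch of that approach (the depth, layout, and helper names are all illustrative, and a flat dict of prefixes stands in for a real trie): the index maps each word's first MAX_DEPTH characters to a list of (offset, length) "file pointers" back into the smaller file, and words from the second file whose prefix has no entry fail fast.

    MAX_DEPTH = 3  # arbitrary prefix depth; tune it against your RAM budget

    def build_prefix_index(path):
        # Map each word's first MAX_DEPTH bytes to a list of
        # (offset, length) "file pointers" back into the file.
        index = {}
        with open(path, "rb") as f:
            offset = 0
            for line in f:
                word = line.rstrip(b"\n")
                index.setdefault(word[:MAX_DEPTH], []).append((offset, len(word)))
                offset += len(line)
        return index

    def common_words_via_index(small_path, big_path):
        index = build_prefix_index(small_path)
        found = set()
        with open(small_path, "rb") as small, open(big_path, "rb") as big:
            for line in big:
                word = line.rstrip(b"\n")
                # Fail fast: no candidate in the small file shares this prefix.
                for offset, length in index.get(word[:MAX_DEPTH], ()):
                    if length == len(word):
                        small.seek(offset)
                        if small.read(length) == word:
                            found.add(word)
                            break
        return found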
If you're looking for memory efficiency with this sort of thing, you'll be hard pushed to get time efficiency. My example will be written in Python, but it should be relatively easy to implement in any language.
    with open(file1) as file_1:
        current_word_1 = read_to_delim(file_1, delim)
        while current_word_1:
            with open(file2) as file_2:
                current_word_2 = read_to_delim(file_2, delim)
                while current_word_2:
                    if current_word_2 == current_word_1:
                        print(current_word_2)
                    current_word_2 = read_to_delim(file_2, delim)
            current_word_1 = read_to_delim(file_1, delim)
I leave read_to_delim to you, but this is the extreme case: optimal for memory, least optimal for time.
Depending on your application, of course, you could load the two files into a database, perform a left outer join, and discard the rows for which one of the two columns is null.

Best way to store and retrieve a DAWG data structure for fast loading

I have a 500k+ wordlist that I loaded into a DAWG data structure. My app is for mobile phones. I of course don't want to repeat all the conversion steps to load this wordlist into a DAWG every time, since it would take too much storage space to keep the raw wordlist on the phone and too much time to load it into a DAWG every time. So I am looking for a way to store the data in my DAWG to a file or DB in a format that will both conserve space and let me quickly load it back into my DAWG data structure.
I received one suggestion that I could store each node in a SQLite DB, but I am not sure how exactly that would work, and if I did that, how I would retrieve it quickly. I certainly wouldn't want to run lots of queries. Would some other type of storage method be better? I also received suggestions to create a serialised file or to store it as a bitmap.
You can basically do a memory dump, just use offsets instead of pointers (in Java terms, put all nodes in an array, and use array index to refer to a node).
500k words doesn't seem like an amount that would be problematic for modern phones, especially since a DAWG is quite efficient already. If you mmap the file, you would be able to work with the data structure even if it doesn't fit in memory.
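A hedged sketch of that offsets-instead-of-pointers dump (the node layout here is invented for illustration, not any particular DAWG library): nodes are flattened into a list, each edge refers to its child by list index, and fixed-size records make it easy to read the whole thing back (or mmap it) without rebuilding anything.

    import struct

    # Assumed in-memory form (an assumption, not a standard layout): nodes is a
    # list where nodes[i] == (is_terminal, [(char, child_index), ...]),
    # with index 0 as the root. List indices replace pointers.

    NODE_HEADER = struct.Struct("<BH")  # terminal flag + edge count
    EDGE = struct.Struct("<cI")         # 1-byte char + 4-byte child index

    def dump_dawg(nodes, path):
        with open(path, "wb") as out:
            out.write(struct.pack("<I", len(nodes)))
            for is_terminal, edges in nodes:
                out.write(NODE_HEADER.pack(1 if is_terminal else 0, len(edges)))
                for ch, child in edges:
                    out.write(EDGE.pack(ch.encode("ascii"), child))

    def load_dawg(path):
        with open(path, "rb") as f:
            (count,) = struct.unpack("<I", f.read(4))
            nodes = []
            for _ in range(count):
                terminal, n_edges = NODE_HEADER.unpack(f.read(NODE_HEADER.size))
                edges = []
                for _ in range(n_edges):
                    ch, child = EDGE.unpack(f.read(EDGE.size))
                    edges.append((ch.decode("ascii"), child))
                nodes.append((bool(terminal), edges))
        return nodes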
Did you try to reduce the wordlist? Are you saving only the word stem, if that is possible for your application?
On the other hand: you should never need to rebuild the data structure, because the wordlist is constant. Try to use a memory dump as suggested. Use mmap for the file, Java serialization, or pickling techniques to load a ready-made data structure into memory.
I guess you are using the DAWG for fast searching of words in a dictionary. A DAWG has O(LEN) search complexity.
Many years ago I developed a J2ME app and faced the same problem. But at that time phones definitely couldn't provide that amount of RAM to store 500K+ strings. The solution I used is the following:
Read all words, sort them, and write them to a file line by line, and for each word precompute skipBytes - the number of bytes before this word. Computing skipBytes is trivial; the pseudocode is

    skipBytes[0] = words[0].bytesLen;
    for i = 1 to n: skipBytes[i] = skipBytes[i-1] + words[i].getBytesLength
When the app starts, read the 500k skipBytes values into an int array. This is much smaller than 500K strings.
Searching for a word in the dictionary is then a binary search. Imagine you are performing it on a sorted array, but instead of accessing array[i] you do something like RandomAccessFile.read(skipBytes[i]). Google "Java random access files"; my pseudocode is of course rough, it just shows the direction.
Complexity - O(LEN*LOG(N)): LOG(N) from the binary search, and comparing strings is linear in LEN. LOG(500000) ~ 19, and LEN ~ 50 is a generous upper bound on average word length in the worst case, so a search is only ~1000 operations; it will be done in microseconds. Advantage - small memory usage.
I should mention that in the case of a web app, where many users perform searches, LOG(N) becomes important, but if your app provides a service for only one person, LOG(500000) doesn't change much as long as it isn't performed inside a loop.
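A small Python sketch of the same idea (the original used Java's RandomAccessFile; here plain seek/readline stands in, and the offsets list plays the role of skipBytes):

    def build_offsets(sorted_words_path):
        # offsets[i] = byte position of the i-th word; this plays skipBytes' role.
        offsets, pos = [], 0
        with open(sorted_words_path, "rb") as f:
            for line in f:
                offsets.append(pos)
                pos += len(line)
        return offsets

    def contains(sorted_words_path, offsets, word):
        # Binary search over the file itself, touching only O(log N) words.
        target = word.encode("utf-8")
        with open(sorted_words_path, "rb") as f:
            lo, hi = 0, len(offsets) - 1
            while lo <= hi:
                mid = (lo + hi) // 2
                f.seek(offsets[mid])
                current = f.readline().rstrip(b"\n")
                if current == target:
                    return True
                if current < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
        return False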

Log combing algorithm

We get these ~50GB data files consisting of 16 byte codes, and I want to find any code that occurs 1/2% of the time or more. Is there any way I can do that in a single pass over the data?
Edit: There are tons of codes - it's possible that every code is different.
EPILOGUE: I've selected Darius Bacon's as the best answer, because I think the best algorithm is a modification of the majority-element algorithm he linked to. The majority algorithm should be modifiable to use only a tiny amount of memory - 201 codes to get 1/2%, I think. Basically you just walk the stream counting up to 201 distinct codes. As soon as you find 201 distinct codes, you drop one of each code (deduct 1 from the counters, forgetting anything that becomes 0). At the end, you have dropped at most N/201 times, so any code occurring more times than that must still be around.
But it's a two-pass algorithm, not one. You need a second pass to tally the counts of the candidates. It's actually easy to see that any solution to this problem must use at least 2 passes (the first batch of elements you load could all be different, and one of those codes could end up being exactly 1/2%).
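For illustration, that candidate-finding pass might look like the following sketch (a Misra-Gries-style generalisation of the majority algorithm; k = 201 corresponds to the 1/2% threshold, and the second, verification pass over the candidates is not shown):

    def frequent_candidates(codes, k=201):
        # One pass: any code occurring more than N/k times is guaranteed
        # to survive among the (at most k-1) counters kept here.
        counters = {}
        for code in codes:
            if code in counters:
                counters[code] += 1
            elif len(counters) < k - 1:
                counters[code] = 1
            else:
                # A k-th distinct code appeared: drop one occurrence of each,
                # forgetting counters that reach zero (the new code included).
                for c in list(counters):
                    counters[c] -= 1
                    if counters[c] == 0:
                        del counters[c]
        return set(counters)

Each time the decrement branch fires it discards k distinct occurrences (the new code plus the k-1 counted ones), so it can fire at most N/k times; a code with more than N/k occurrences therefore still has a positive counter at the end, exactly as argued above.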
Thanks for the help!
Metwally et al., Efficient Computation of Frequent and Top-k Elements in Data Streams (2005). There were some other relevant papers I read for my work at Yahoo that I can't find now; but this looks like a good start.
Edit: Ah, see this Brian Hayes article. It sketches an exact algorithm due to Demaine et al., with references. It does it in one pass with very little memory, yielding a set of items including the frequent ones you're looking for, if they exist. Getting the exact counts takes a (now-tractable) second pass.
This will depend on the distribution of the codes. If there is a small enough number of distinct codes, you can build a frequency distribution (http://en.wikipedia.org/wiki/Frequency_distribution) in core with a map. Otherwise you will probably have to build a histogram (http://en.wikipedia.org/wiki/Histogram) and then make multiple passes over the data, examining the frequencies of codes in each bucket.
Sort chunks of the file in memory, as if you were performing an external sort. Rather than writing out all of the sorted codes in each chunk, however, you can just write each distinct code and the number of occurrences in that chunk. Finally, merge these summary records to find the number of occurrences of each code.
This process scales to any size data, and it only makes one pass over the input data. Multiple merge passes may be required, depending on how many summary files you want to open at once.
Sorting the file allows you to count the number of occurrences of each code using a fixed amount of memory, regardless of the input size.
You also know the total number of codes (either by dividing the input size by a fixed code size, or by counting the number of variable length codes during the sorting pass in a more general problem).
So, you know the proportion of the input associated with each code.
This is basically the pipeline sort * | uniq -c
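A sketch of that summarize-then-merge flow (fixed 16-byte codes as in the question; a Counter per chunk stands in for sorting, and the summaries are merged in memory here rather than via on-disk summary files, purely for brevity):

    from collections import Counter

    CODE_SIZE = 16  # bytes per code, per the question

    def summarize_chunks(path, codes_per_chunk=1_000_000):
        # One pass over the input: count the codes chunk by chunk, keeping
        # only (code, count) summaries instead of fully sorted chunks.
        summaries = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CODE_SIZE * codes_per_chunk)
                if not chunk:
                    break
                summaries.append(Counter(chunk[i:i + CODE_SIZE]
                                         for i in range(0, len(chunk), CODE_SIZE)))
        return summaries

    def frequent_codes(path, threshold=0.005):
        summaries = summarize_chunks(path)
        total = sum(sum(c.values()) for c in summaries)
        merged = Counter()
        for c in summaries:
            merged.update(c)  # merging the per-chunk summary records
        return {code for code, count in merged.items() if count >= threshold * total}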
If every code appears just once, that's no problem; you just need to be able to count them.
That depends on how many different codes exist, and how much memory you have available.
My first idea would be to build a hash table of counters, with the codes as keys. Loop through the entire file, increasing the counter of the respective code, and counting the overall number. Finally, filter all keys with counters that exceed (* overall-counter 1/200).
If the files consist solely of 16-byte codes, and you know how large each file is, you can calculate the number of codes in each file. Then you can find the 0.5% threshold and follow any of the other suggestions to count the occurrences of each code, recording each one whose frequency crosses the threshold.
Do the contents of each file represent a single data set, or is there an arbitrary cutoff between files? In the latter case, and assuming a fairly constant distribution of codes over time, you can make your life simpler by splitting each file into smaller, more manageable chunks. As a bonus, you'll get preliminary results faster and can pipeline them into the next process earlier.
