Shortest sequence of operations transforming a file tree to another - algorithm

Given two file trees A and B, is it possible to determine the shortest sequence of operations (or at least a short one) necessary to transform A into B?
An operation can be:
Create a new, empty folder
Create a new file with any contents
Delete a file
Delete an empty folder
Rename a file
Rename a folder
Move a file inside another existing folder
Move a folder inside another existing folder
A and B are identical when they have the same files with the same contents (or the same size and CRC) and the same names, in the same folder structure.
This question has been puzzling me for some time. For the moment I have the following, basic idea:
Compute a database:
Store file names and their CRCs
Then, find all folders with no subfolders, and compute a CRC from the CRCs of the files they contain, and a size from the total size of the files they contain
Ascend the tree to make a CRC for each parent folder
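To make the database step concrete, here is a minimal sketch of how I imagine it in Python (zlib.crc32 for the checksums; the helper names and the path -> (crc, size) storage format are just one possible choice):

import os, zlib

def file_crc(path, chunk_size=1 << 20):
    # CRC32 of the file contents, read in chunks to bound memory use
    crc = 0
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            crc = zlib.crc32(chunk, crc)
    return crc

def build_database(root):
    # Maps relative path -> (crc, size) for every file and folder under root.
    # Walking bottom-up means a folder's children are already in the map
    # when the folder itself is processed.
    db = {}
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        child_crcs, total_size = [], 0
        for name in filenames:
            full = os.path.join(dirpath, name)
            crc, size = file_crc(full), os.path.getsize(full)
            db[os.path.relpath(full, root)] = (crc, size)
            child_crcs.append(crc)
            total_size += size
        for name in dirnames:
            crc, size = db[os.path.relpath(os.path.join(dirpath, name), root)]
            child_crcs.append(crc)
            total_size += size
        # Folder CRC derived only from the children's CRCs, as described above
        folder_crc = zlib.crc32(repr(sorted(child_crcs)).encode())
        db[os.path.relpath(dirpath, root)] = (folder_crc, total_size)
    return db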
Use the following loop having database A and database B:
Compute A ∩ B and remove this intersection from both databases.
Use an inner join to find matching CRCs in A and B, folders first, order by size desc
While there is a result, use the first result to perform a folder or file move (creating new folders if necessary), then remove the source rows of that result from both databases. If a move was performed, update the CRCs of the new location's parent folders in database A.
Then remove all files and folders referenced in database A and create those referenced in database B.
However I think that this is really a suboptimal way to do that. What could you give me as advice?
Thank you!

This problem is a special case of the tree edit distance problem, for which finding an optimal solution is (unfortunately) known to be NP-hard. This means that there probably aren't any good, fast, and accurate algorithms for the general case.
That said, the paper I linked does contain several nice discussions of approximation algorithms and algorithms that work in restricted cases of the problem. You may find the discussion interesting, as it illuminates many of the issues that actually arise in solving this problem.
Hope this helps! And thanks for posting an awesome question!

You might want to check out tree-edit distance algorithms. I don't know if this will map neatly to your file system, but it might give you some ideas.
https://github.com/irskep/sleepytree (code and paper)

The first step is to figure out which files need to be created/renamed/deleted.
A) Create a hash map of the files of Tree B
B) Go through the files of Tree A
B.1) If there is an identical (name and contents) file in the hash map, leave it alone
B.2) If the contents are identical but the name is different, rename the file to the name in the hash map
B.3) If the file's contents don't exist in the hash map, remove the file
B.4) (if 1 or 2 was true) Remove the matched file from the hash map
The files left over in the hash map are those that must be created. This should be the last step, after the directory structure has been resolved.
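A rough sketch of that pass, assuming Python and treating each tree as a flat name -> path mapping for simplicity (hashlib.sha1 stands in for whatever checksum you prefer):

import hashlib

def content_hash(path):
    # Stand-in checksum; any sufficiently strong hash or CRC works here
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

def plan_file_operations(tree_a_files, tree_b_files):
    # tree_*_files: flat dicts mapping file name -> full path
    b_by_hash = {}                        # the hash map of the files of Tree B
    for name, path in tree_b_files.items():
        b_by_hash.setdefault(content_hash(path), set()).add(name)

    renames, deletions = [], []
    for name, path in tree_a_files.items():
        names_in_b = b_by_hash.get(content_hash(path))
        if not names_in_b:
            deletions.append(name)        # contents not present (or no longer needed) in B
        elif name in names_in_b:
            names_in_b.discard(name)      # identical name and contents: leave alone
        else:
            renames.append((name, names_in_b.pop()))  # same contents, different name

    # Whatever is still in the hash map has to be created from scratch
    creations = [n for names in b_by_hash.values() for n in names]
    return renames, deletions, creations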
After the file differences have been resolved, it gets rather tricky. I wouldn't be surprised if there is no efficient optimal solution to this problem (NP-complete/hard).
The difficulty lies in that the problem doesn't naturally subdivide itself. Each step you do must consider the entire file tree. I'll think about it some more.
EDIT: It seems that the most studied tree edit distance algorithms consider only creating/deleting nodes and relabeling of nodes. This isn't directly applicable to this problem because this problem allows moving entire subtrees around which makes it significantly more difficult. The current fastest run-time for the "easier" edit distance problem is O(N^3). I'd imagine the run-time for this will be significantly slower.
Helpful Links/References
An Optimal Decomposition Algorithm for Tree Edit Distance - Demaine, Mozes, Weimann

Enumerate all files in B and their associated sizes and checksums; sort by size/checksum.
Enumerate all files in A and their associated sizes and checksums; sort by size/checksum.
Now, doing an ordered list comparison, do the following:
a. for every file in A but not B, delete it.
b. for every file in B but not A, create it.
c. for every file in both A and B, rename as many as you encounter from their A paths to their B paths, then make copies for the remaining B paths. If you are going to overwrite an existing file, save it off to the side in a separate list first; if you later need that file as a source, use the copy from that list. (A sketch of steps a-c appears below.)
Do the same for directories, deleting ones in A but not in B and adding those in B but not in A.
You iterate by checksum/size to ensure you never have to visit files twice or worry about deleting a file you will later need to resynchronize. I'm assuming you are trying to keep two directories in sync without unnecessary copying?
The overall complexity is O(N log N) plus however long it takes to read in all those files and their metadata.
This isn't the tree edit distance problem; it's more of a list synchronization problem that happens to generate a tree.
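A sketch of steps a-c as a sorted two-pointer comparison, assuming Python (the CRC32 checksum and the way results are reported are placeholders; the directory pass and the overwrite bookkeeping from step c are left out):

import os, zlib

def checksum(path):
    # CRC32 of the file contents (a stand-in for whatever checksum you use)
    with open(path, 'rb') as f:
        return zlib.crc32(f.read())

def snapshot(root):
    # Sorted list of ((size, checksum), relative path) for every file under root
    entries = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            entries.append(((os.path.getsize(full), checksum(full)),
                            os.path.relpath(full, root)))
    entries.sort()
    return entries

def diff(a_entries, b_entries):
    # Ordered two-pointer comparison of the sorted snapshots
    deletions, creations, renames = [], [], []
    i = j = 0
    while i < len(a_entries) or j < len(b_entries):
        a_key = a_entries[i][0] if i < len(a_entries) else None
        b_key = b_entries[j][0] if j < len(b_entries) else None
        if b_key is None or (a_key is not None and a_key < b_key):
            deletions.append(a_entries[i][1])        # in A but not in B
            i += 1
        elif a_key is None or b_key < a_key:
            creations.append(b_entries[j][1])        # in B but not in A
            j += 1
        else:
            renames.append((a_entries[i][1], b_entries[j][1]))  # same content: rename/move
            i += 1
            j += 1
    return deletions, creations, renames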

The only non-trivial problem is moving folders and files. Renaming, deleting and creating are trivial and can be done in the first step (or better, in the last step, when you finish).
You can then transform this problem into the problem of transforming between two trees that have the same leaves but different topology.
You decide which files will be moved out of a given folder/bucket and which will stay; the decision is based on the number of identical files in the source and destination.
You apply the same strategy to move folders into the new topology.
I think you should be near-optimal, or optimal, if you forget about folder names and think just about the files and the topology.

Related

Data Structure Selection - Printing Filepaths

I had a question regarding printing the names of files. Say I start with something like a list of strings such as
files = [['documents', 'pics', 'cool.zip'], ['documents', 'homework'], ['Desktop', 'documents', 'file.jpg'], ['awesome.jpg'], ['turtles', 'homework']]
Essentially this is a list of lists of file paths. I'd like to try to take this and organize it into a data structure that will help to identify the links between the file paths.
I was thinking that a graph may be the best way to represent this, but typically I've seen graphs start out with adjacency lists, which is also a list of lists, but where each sub-list is a pair of items. Does anyone have feedback on the best data structure to use here? I'd ultimately like to construct a graph and then print out its contents, depth first.
Usually, files are organised in a tree. You start with a "root" directory, which has a set of children, each of which is either a file or a directory with its own set of children (or a link/shortcut, but those make things more complicated than it sounds like you need here).
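A minimal sketch of that idea in Python, using nested dictionaries as the tree and printing it depth first (the indentation format is arbitrary):

files = [['documents', 'pics', 'cool.zip'], ['documents', 'homework'],
         ['Desktop', 'documents', 'file.jpg'], ['awesome.jpg'],
         ['turtles', 'homework']]

def build_tree(paths):
    # Each node is a dict mapping a name to its (possibly empty) children
    root = {}
    for path in paths:
        node = root
        for part in path:
            node = node.setdefault(part, {})
    return root

def print_depth_first(node, indent=0):
    for name, children in node.items():
        print(' ' * indent + name)
        print_depth_first(children, indent + 2)

print_depth_first(build_tree(files))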

What is the fastest way to use a binary search tree when inserting a large number of sequential files?

I'm writing a program which sorts through folders of files and checks them against each other for duplicate names. To do this I get a list of all the file names and then run them through a binary tree. If the name exists in the tree, it marks the file as a duplicate and if it doesn't exist it adds the name to the tree.
The problem I'm running into is when a large batch of files is sequential (e.g. picture files where the entire name is identical except for the final number, sequentially going up), which causes the files to continually be placed on the right, which in turn causes the depth of the tree to balloon. I'm looking for a way to reduce the time to process these files.
I've tried an AVL Tree but the time it takes to continually balance the tree as hundreds of thousands of files are added (and again, constantly rebalancing due to the sequential nature of the file names) ends up taking longer than simply allowing the depth to reach the tens of thousands. Any help would be greatly appreciated.
Shihab Shahriar suggested randomly shuffling the array and that did the trick beautifully.
The tests were run on a folder containing 233,738 picture files. Prior to shuffling, the sequential nature of the picture files' names resulted in a binary tree depth of 34,227 and just over 26 minutes of processing time; various batches of picture files were effectively giving O(n) insertion and search on the binary tree. After simply shuffling the array containing all the files prior to inserting them into the binary tree, the depth was reduced to the mid-40s and the time to process the files dropped to around 2 minutes.
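For reference, the fix boils down to something like the sketch below (Python, with a synthetic list of sequential file names standing in for the real folder; the BST is a deliberately plain, unbalanced one):

import random

def insert(root, name):
    # Plain unbalanced BST insert; node = [name, left, right]
    if root is None:
        return [name, None, None]
    if name < root[0]:
        root[1] = insert(root[1], name)
    elif name > root[0]:
        root[2] = insert(root[2], name)
    else:
        print('duplicate:', name)
    return root

file_names = ['IMG_%06d.jpg' % i for i in range(100000)]  # worst case: sequential names
random.seed(42)                 # fixed seed so runs are repeatable
random.shuffle(file_names)      # break the sequential order before inserting
tree = None
for name in file_names:
    tree = insert(tree, name)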
Thanks for the help!

Find common words from two files

Given two files each containing a list of words (around a million), we need to find the words they have in common.
We need an efficient algorithm, and there is not enough memory available (certainly not enough for a million words). Some basic C programming code, if possible, would help.
The files are not sorted. We can use some sorting algorithm... please support it with basic code.
How can sorting of an external file be implemented in C with minimal memory available?
Is anybody game for external sorting of a file? Please share some code for this.
Yet another approach.
General. First, notice that doing this naively (comparing each word against every word in the other file) takes O(N^2). With N = 1,000,000, this is a LOT. Sorting each list would take O(N*log(N)); then you can find the intersection in one pass by merging the files (see below). So the total is O(2N*log(N) + 2N) = O(N*log(N)).
Sorting a file. Now let's address the fact that working with files is much slower than with memory, especially when sorting where you need to move things around. One way to solve this is - decide the size of the chunk that can be loaded into memory. Load the file one chunk at a time, sort it efficiently and save into a separate temporary file. The sorted chunks can be merged (again, see below) into one sorted file in one pass.
Merging. When you have 2 sorted lists (files or not), you can merge them into one sorted list easily in one pass: have 2 "pointers", initially pointing to the first entry in each list. In each step, compare the values the pointers point to. Move the smaller value to the merged list (the one you are constructing) and advance its pointer.
You can modify the merge algorithm easily to make it find the intersection: if the pointed-to values are equal, move the value to the results (and consider how you want to deal with duplicates).
For merging more than 2 lists (as in sorting the file above) you can generalize the algorithm for using k pointers.
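A sketch of the merge-based intersection over two already-sorted word files, assuming Python and one word per line:

def common_words(sorted_file_a, sorted_file_b):
    # Both files must already be sorted, one word per line
    common = []
    with open(sorted_file_a) as fa, open(sorted_file_b) as fb:
        a, b = fa.readline(), fb.readline()
        while a and b:
            wa, wb = a.strip(), b.strip()
            if wa < wb:
                a = fa.readline()
            elif wb < wa:
                b = fb.readline()
            else:
                if not common or common[-1] != wa:   # skip duplicates
                    common.append(wa)
                a, b = fa.readline(), fb.readline()
    return common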
If you had enough memory to read the first file completely into RAM, I would suggest reading it into a dictionary (word -> index of that word ), loop over the words of the second file and test if the word is contained in that dictionary. Memory for a million words is not much today.
If you do not have enough memory, split the first file into chunks that fit into memory and do as I said above for each of those chunks. For example, fill the dictionary with the first 100,000 words, find every common word for that, then read the file a second time extracting words 100,001 up to 200,000, find the common words for that part, and so on.
And now the hard part: you need a dictionary structure, and you said "basic C". When you are willing to use "basic C++", there is the hash_map data structure provided as an extension to the standard library by common compiler vendors. In basic C, you should also try to use a ready-made library for that, read this SO post to find a link to a free library which seems to support that.
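The chunked variant might look like the following Python sketch (chunk_size and the one-word-per-line format are assumptions, and a plain set is used where the word -> index dictionary would go, since only membership is needed here; in C, a hash-table library would play the role of the set):

def common_words_chunked(file_a, file_b, chunk_size=100000):
    common = set()
    with open(file_a) as fa:
        while True:
            # Load the next chunk of the first file into an in-memory set
            chunk = set(line.strip() for _, line in zip(range(chunk_size), fa))
            if not chunk:
                break
            # Stream the second file and test membership against the chunk
            with open(file_b) as fb:
                for line in fb:
                    word = line.strip()
                    if word and word in chunk:
                        common.add(word)
    return common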
Your problem is: given two sets of items, find the intersection (items common to both) while staying within the constraints of inadequate RAM (less than the size of either set).
Since finding an intersection requires comparing/searching each item against the other set, you must have enough RAM to store at least one of the sets (the smaller one) to have an efficient algorithm.
Assume that you know for a fact that the intersection is much smaller than both sets and fits completely inside available memory -- otherwise you'll have to do further work in flushing the results to disk.
If you are working under memory constraints, partition the larger set into parts that fit inside 1/3 of the available memory. Then partition the smaller set into parts that fit the second 1/3. The remaining 1/3 of memory is used to store the results.
Optimize by finding the max and min of the partition for the larger set. This is the set that you are comparing from. Then when loading the corresponding partition of the smaller set, skip all items outside the min-max range.
First find the intersection of both partitions through a double-loop, storing common items to the results set and removing them from the original sets to save on comparisons further down the loop.
Then replace the partition in the smaller set with the second partition (skipping items outside the min-max). Repeat. Notice that the partition in the larger set is reduced -- with common items already removed.
After running through the entire smaller set, repeat with the next partition of the larger set.
Now, if you do not need to preserve the two original sets (e.g. you can overwrite both files), then you can further optimize by removing common items from disk as well. This way, those items no longer need to be compared in further partitions. You then partition the sets by skipping over removed ones.
I would give prefix trees (aka tries) a shot.
My initial approach would be to determine a maximum depth for the trie that would fit nicely within my RAM limits. Pick an arbitrary depth (say 3, you can tweak it later) and construct a trie up to that depth, for the smaller file. Each leaf would be a list of "file pointers" to words that start with the prefix encoded by the path you followed to reach the leaf. These "file pointers" would keep an offset into the file and the word length.
Then process the second file by reading each word from it and trying to find it in the first file using the trie you constructed. It would allow you to fail faster on words that don't match. The deeper your trie, the faster you can fail, but the more memory you would consume.
Of course, like Stephen Chung said, you still need RAM to store enough information to describe at least one of the files, if you really need an efficient algorithm. If you don't have enough memory -- and you probably don't, because I estimate my approach would require approximately the same amount of memory you would need to load a file whose words were 14-22 characters long -- then you have to process even the first file by parts. In that case, I would actually recommend using the trie for the larger file, not the smaller. Just partition it in parts that are no bigger than the smaller file (or no bigger than your RAM constraints allow, really) and do the whole process I described for each part.
Despite the length, this is sort of off the top of my head. I might be horribly wrong in some details, but this is how I would initially approach the problem and then see where it would take me.
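A rough Python sketch of the depth-limited trie idea, using dicts as trie nodes and in-memory word lists at the leaves instead of file offsets, just to keep it short:

def build_trie(words, max_depth=3):
    # Trie limited to max_depth; each leaf bucket lists the words sharing that prefix
    root = {}
    for word in words:
        node = root
        for ch in word[:max_depth]:
            node = node.setdefault(ch, {})
        node.setdefault('$', []).append(word)   # '$' marks the leaf bucket
    return root

def contains(trie, word, max_depth=3):
    node = trie
    for ch in word[:max_depth]:
        node = node.get(ch)
        if node is None:
            return False        # fail fast: no word shares this prefix
    return word in node.get('$', [])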
If you're looking for memory efficiency with this sort of thing you'll be hard pushed to get time efficiency. My example will be written in python, but should be relatively easy to implement in any language.
with open(file1) as file_1:
    current_word_1 = read_to_delim(file_1, delim)
    while current_word_1:
        with open(file2) as file_2:
            current_word_2 = read_to_delim(file_2, delim)
            while current_word_2:
                if current_word_2 == current_word_1:
                    print(current_word_2)
                current_word_2 = read_to_delim(file_2, delim)
        current_word_1 = read_to_delim(file_1, delim)
I leave read_to_delim to you, but this is the extreme case that is memory-optimal but time-least-optimal.
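For completeness, here is one possible read_to_delim, assuming delim is a single character (this is only a guess at the intended helper: read one character at a time and return the next delimiter-separated word, or an empty string at end of file):

def read_to_delim(f, delim):
    # Collect characters until the delimiter or EOF; skip runs of delimiters
    chars = []
    while True:
        c = f.read(1)
        if not c or c == delim:
            if chars or not c:
                return ''.join(chars)
            continue            # consecutive delimiters: keep reading
        chars.append(c)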
Depending on your application, of course, you could load the two files into a database, perform a left outer join, and discard the rows for which one of the two columns is null.
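A quick sketch of that database variant in Python with sqlite3 (table and column names are made up):

import sqlite3

def common_words_sql(file_a, file_b):
    db = sqlite3.connect(':memory:')
    db.execute('CREATE TABLE a (word TEXT PRIMARY KEY)')
    db.execute('CREATE TABLE b (word TEXT PRIMARY KEY)')
    for table, path in (('a', file_a), ('b', file_b)):
        with open(path) as f:
            db.executemany('INSERT OR IGNORE INTO %s VALUES (?)' % table,
                           ((line.strip(),) for line in f))
    # Left outer join, then drop the rows where the right-hand column is null
    rows = db.execute('SELECT a.word FROM a LEFT OUTER JOIN b ON a.word = b.word '
                      'WHERE b.word IS NOT NULL')
    return [word for (word,) in rows]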

Building a directory tree from a list of file paths

I am looking for a time efficient method to parse a list of files into a tree. There can be hundreds of millions of file paths.
The brute force solution would be to split each path on occurrences of the directory separator and traverse the tree, adding directory and file entries by doing string comparisons, but this would be exceptionally slow.
The input data is usually sorted alphabetically, so the list would be something like:
C:\Users\Aaron\AppData\Amarok\Afile
C:\Users\Aaron\AppData\Amarok\Afile2
C:\Users\Aaron\AppData\Amarok\Afile3
C:\Users\Aaron\AppData\Blender\alibrary.dll
C:\Users\Aaron\AppData\Blender\and_so_on.txt
From this ordering my natural reaction is to partition the directory listings into groups... somehow... before doing the slow string comparisons. I'm really not sure. I would appreciate any ideas.
Edit: It would be better if this tree were lazy loaded from the top down if possible.
You have no choice but to do full string comparisons since you can't guarantee where the strings might differ. There are a couple tricks that might speed things up a little:
As David said, form a tree, but search for the new insertion point from the previous one (perhaps with the aid of some sort of matchingPrefix routine that will tell you where the new one differs).
Use a hash table for each level of the tree if there may be very many files within and you need to count duplicates. (Otherwise, appending to a stack is fine.)
If it's possible, you can generate your tree structure with the tree command, here
To take advantage of the "usually sorted" property of your input data, begin your traversal at the directory where your last file was inserted: compare the directory name of the current pathname to the previous one. If they match, you can just insert here; otherwise pop up a level and try again.
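A Python sketch of that idea, assuming backslash-separated paths as in the example and nested dictionaries for the tree (the previous path's chain of nodes is reused for as long as the leading components match):

def build_tree(sorted_paths):
    root = {}
    prev_parts, prev_nodes = [], [root]
    for path in sorted_paths:
        parts = path.split('\\')
        # How many leading directory components are shared with the previous path?
        i = 0
        while i < len(parts) - 1 and i < len(prev_parts) - 1 and parts[i] == prev_parts[i]:
            i += 1
        node = prev_nodes[i]            # resume from the deepest shared directory
        nodes = prev_nodes[:i + 1]
        for part in parts[i:-1]:
            node = node.setdefault(part, {})
            nodes.append(node)
        node[parts[-1]] = None          # leaf: the file itself
        prev_parts, prev_nodes = parts, nodes
    return root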

Pseudorandom directory tree generation?

I'm trying to write a program which will pseudorandomly autogenerate (based on a seed value so I can re-run the same test more than once) a growing directory structure consisting of files. (this is to stress test a source control database installation)
I was wondering if any of you were aware of something similar to the quasirandom "space-filling" sequences (e.g. van der Corput sequences or Halton sequences) that might work here.
edit: Or a fractal algorithm. This sounds suspiciously like a fractal algorithm.
edit 2: Never mind, I think I figured out the obvious solution: start with an empty tree, and just use the sequential outputs of a pseudorandom generator to deterministically (based on the generated number and the state of the tree generated so far) do one of N actions, e.g. make a new subdirectory, add a new file, rename a file, delete a file, etc.
I want to do it this way rather than just sequentially dump files into a folder structure, because we're running into a situation where we are having some problems with large #s of files, and are not sure exactly what the cause is. (tree depth, # of renames, # of deletes, etc.)
It's not just 1 fixed tree I need to generate, the use strategy is: grow the tree structure a little bit, evaluate some performance statistics, grow the tree structure a little more, evaluate some performance statistics, etc.
If this is just for testing, what is wrong with some simple, naive generation algorithm? For example, generate a random (1-10) number of subdirectories, generate names for them, then for each directory recursively generate subdirectories and some number of files.
This is easily customizable and you can control the seed for rand. For funkier needs, the distribution of the number of files/directories can be non-linear, or whatever suits your needs more.
Sounds like something that can be whipped up in half an hour and be done with. I fail to see the need for something mathematical or complex. Unless this is just for fun, of course :-)
As you mention in your second edit, I would probably implement the whole thing as a file tree traversal, with the PRNG deciding "change to directory", "create directory", "move up one level", "create file", "delete file", and with further values determining which file to delete, which directory to change to, and the names generated for files and directories.
I used a similar method to stress-test a workflow server I wrote (though I didn't need to keep track of where work items were, just needed to randomly pick one to operate on).
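A sketch of that seeded action-picking loop in Python (the action set, the naming scheme, and the file sizes are placeholders you would tune for your stress test):

import os, random

def grow_tree(root, seed, steps):
    rng = random.Random(seed)           # same seed -> same sequence of actions
    os.makedirs(root, exist_ok=True)
    dirs, files = [root], []
    for _ in range(steps):
        action = rng.choice(['mkdir', 'create', 'rename', 'delete'])
        if action == 'mkdir':
            path = os.path.join(rng.choice(dirs), 'dir%06d' % rng.randrange(10**6))
            os.makedirs(path, exist_ok=True)
            dirs.append(path)
        elif action == 'create':
            path = os.path.join(rng.choice(dirs), 'file%06d' % rng.randrange(10**6))
            with open(path, 'w') as f:
                f.write('x' * rng.randrange(1, 4096))
            files.append(path)
        elif action == 'rename' and files:
            old = rng.choice(files)
            new = old + '.renamed'
            os.rename(old, new)
            files[files.index(old)] = new
        elif action == 'delete' and files:
            path = files.pop(rng.randrange(len(files)))
            os.remove(path)

With a fixed seed the run is reproducible, and you can call it repeatedly with more steps to grow the tree in increments between measurements.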
This is a set of different problems which makes it a fun puzzle.
First we have the pseudorandom number generator. There is a lot of stuff available. I only expect a function that creates a number in the range 0..n-1.
Then we have an algorithm to determine the number of subnodes on a single node. It is tempting to use a linear function but that is not a fair representation of reality. So you can create the following function:
randomsize() {
    int n = Random(0, 10);      // assuming Random(a, b) yields an integer in a..b inclusive
    if (n < 10) return n;
    return Random(0, 9) + 10 * randomsize();   // recurse: the tail is unbounded
}
This function produces small numbers. Most will be in the range 0..9, but the top is virtually endless. If you want to have bigger numbers you could also use a bigger range, so the recursive branch is taken more often:
randomsize() {
    int n = Random(0, 100);
    if (n < 10) return n;
    return Random(0, 9) + 10 * randomsize();
}
The last problem is how to create the tree. This is rather simple, but you should keep in mind that the algorithm has to end. So you need to do one of the following:
use a max depth
decrement the generated number based on the nesting level
determine the number of leaves as a percentage of the total subnodes. This percentage should increase at higher levels (10-50 at the first level, 20-60 at the second, ... 50-100 at the fifth, 60-100 at the sixth, up to 90-100 at the ninth and higher).
Of course you can tweak the parameters to create the tree you require.
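Putting the pieces together, a Python sketch of the construction (randomsize is translated from the pseudocode above, assuming an inclusive Random(a, b); the leaf-percentage schedule is just one possible choice):

import random

def randomsize(rng):
    # Mostly 0..9, occasionally much larger (see the pseudocode above)
    n = rng.randrange(0, 11)
    if n < 10:
        return n
    return rng.randrange(0, 10) + 10 * randomsize(rng)

def generate(level, rng, max_depth=6):
    # Returns a nested structure: a list of children, each either 'file' or a subtree
    children = []
    # The chance of a child being a leaf grows with the nesting level
    leaf_percentage = min(90, 10 + 20 * level)
    for _ in range(randomsize(rng)):
        if level >= max_depth or rng.randrange(100) < leaf_percentage:
            children.append('file')
        else:
            children.append(generate(level + 1, rng, max_depth))
    return children

tree = generate(0, random.Random(1234))   # fixed seed for repeatable runs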
