Build trie faster - performance

I'm making a mobile app which needs to do thousands of fast string lookups and prefix checks. To speed this up, I made a trie out of my word list, which has about 180,000 words.
Everything works, but the only problem is that building this huge trie (it has about 400,000 nodes) currently takes about 10 seconds on my phone, which is really slow.
Here's the code that builds the trie.
public SimpleTrie makeTrie(String file) throws Exception {
    String line;
    SimpleTrie trie = new SimpleTrie();
    BufferedReader br = new BufferedReader(new FileReader(file));
    while ((line = br.readLine()) != null) {
        trie.insert(line);
    }
    br.close();
    return trie;
}
The insert method, which runs in O(length of key):
public void insert(String key) {
    TrieNode crawler = root;
    for (int level = 0; level < key.length(); level++) {
        int index = key.charAt(level) - 'A';
        if (crawler.children[index] == null) {
            crawler.children[index] = getNode();
        }
        crawler = crawler.children[index];
    }
    crawler.valid = true;
}
I'm looking for intuitive methods to build the trie faster. Maybe I could build the trie just once on my laptop, store it to disk somehow, and load it from a file on the phone? But I don't know how to implement this.
Or are there any other prefix data structures that take less time to build but have similar lookup time complexity?
Any suggestions are appreciated. Thanks in advance.
EDIT
Someone suggested using Java Serialization. I tried it, but it was very slow with this code:
public void serializeTrie(SimpleTrie trie, String file) {
    try {
        ObjectOutput out = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream(file)));
        out.writeObject(trie);
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
public SimpleTrie deserializeTrie(String file) {
    try {
        ObjectInput in = new ObjectInputStream(new BufferedInputStream(new FileInputStream(file)));
        SimpleTrie trie = (SimpleTrie) in.readObject();
        in.close();
        return trie;
    } catch (IOException | ClassNotFoundException e) {
        e.printStackTrace();
        return null;
    }
}
Can the above code be made faster?
My trie: http://pastebin.com/QkFisi09
Word list: http://www.isc.ro/lists/twl06.zip
Android IDE used to run code: http://play.google.com/store/apps/details?id=com.jimmychen.app.sand

Double-array tries are very fast to save and load because all data is stored in linear arrays. They are also very fast to look up, but insertions can be costly. I bet there is a Java implementation somewhere.
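To see why the lookups stay fast, note that each transition is just a couple of array reads. A minimal double-array lookup sketch in Java (an illustration, not any particular library's API; it assumes base/check/valid arrays were built offline, state 0 is the root, and character codes are 1..26 for 'A'..'Z'):

// Double-array trie lookup sketch. Assumes base[], check[] and valid[]
// were produced by an offline build; building them is the expensive part.
static boolean contains(int[] base, int[] check, boolean[] valid, String key) {
    int s = 0; // root state
    for (int i = 0; i < key.length(); i++) {
        int t = base[s] + (key.charAt(i) - 'A' + 1);
        if (t < 0 || t >= check.length || check[t] != s) {
            return false; // no transition for this character
        }
        s = t;
    }
    return valid[s]; // true if this state ends a word
}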
Also, if your data is static (i.e. you don't update it on the phone), consider a DAFSA for your task. It is one of the most efficient data structures for storing words (it should beat "standard" tries and radix tries in both size and speed, beat succinct tries in speed, and often beat succinct tries in size). There is a good C++ implementation: dawgdic - you can use it to build a DAFSA from the command line and then use a Java reader for the resulting data structure (an example implementation is here).

You could store your trie as an array of nodes, with references to child nodes replaced by array indices. Your root node would be the first element. That way, you can easily store/load your trie from a simple binary or text format.
public class SimpleTrie {
    public class TrieNode {
        boolean valid;
        int[] children; // child indices into the nodes array
    }

    private TrieNode[] nodes;      // preallocated up front; nodes[0] is the root
    private int numberOfNodes = 1; // slot 0 is already taken by the root

    private TrieNode getNode() {
        TrieNode t = nodes[numberOfNodes++];
        return t;
    }
}
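To make the store/load part concrete, here is a hedged sketch of a flat binary writer/reader for that layout (it assumes the conventions above: children always has 26 slots and nodes[0] is the root; wrap the streams in BufferedOutputStream/BufferedInputStream or this will be slow):

import java.io.*;

void save(DataOutputStream out) throws IOException {
    out.writeInt(numberOfNodes);
    for (int i = 0; i < numberOfNodes; i++) {
        out.writeBoolean(nodes[i].valid);
        for (int c = 0; c < 26; c++) {
            out.writeInt(nodes[i].children[c]); // child index, e.g. -1 = none
        }
    }
}

void load(DataInputStream in) throws IOException {
    numberOfNodes = in.readInt();
    nodes = new TrieNode[numberOfNodes];
    for (int i = 0; i < numberOfNodes; i++) {
        nodes[i] = new TrieNode();
        nodes[i].valid = in.readBoolean();
        nodes[i].children = new int[26];
        for (int c = 0; c < 26; c++) {
            nodes[i].children[c] = in.readInt();
        }
    }
}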

Just build a large String[] and sort it. Then you can use binary search to find the location of a String. You can also do a query based on prefixes without too much work.
Prefix look-up example:
Compare method:
private static int compare(String string, String prefix) {
    int limit = Math.min(string.length(), prefix.length());
    for (int i = 0; i < limit; i++) {
        char s = string.charAt(i);
        char p = prefix.charAt(i);
        if (s != p) {
            if (p < s) {
                // prefix is before string
                return -1;
            }
            // prefix is after string
            return 1;
        }
    }
    // Every compared character matched: either the prefix fits within the
    // string (a match) or the prefix is longer and sorts after the string.
    return prefix.length() <= string.length() ? 0 : 1;
}
Finds an occurrence of the prefix in the array and returns its location (Integer.MAX_VALUE means not found):
private static int recursiveFind(String[] strings, String prefix, int start, int end) {
    if (start > end) {
        return Integer.MAX_VALUE; // search range is empty
    }
    if (start == end) {
        String lastValue = strings[start]; // start == end
        if (compare(lastValue, prefix) == 0)
            return start; // start == end
        return Integer.MAX_VALUE;
    }
    int low = start;
    int high = end + 1; // zero indexed, so add one.
    int middle = low + ((high - low) / 2);
    String middleValue = strings[middle];
    int comp = compare(middleValue, prefix);
    if (comp == 0)
        return middle;
    if (comp > 0)
        return recursiveFind(strings, prefix, middle + 1, end);
    return recursiveFind(strings, prefix, start, middle - 1);
}
Takes a String array and a prefix, and prints out the occurrences of the prefix in the array:
private static boolean testPrefix(String[] strings, String prefix) {
    int i = recursiveFind(strings, prefix, 0, strings.length - 1);
    if (i == Integer.MAX_VALUE) {
        // not found
        return false;
    }
    // Found an occurrence, now search up and down for other occurrences
    int up = i + 1;
    int down = i;
    while (down >= 0) {
        String string = strings[down];
        if (compare(string, prefix) == 0) {
            System.out.println(string);
        } else {
            break;
        }
        down--;
    }
    while (up < strings.length) {
        String string = strings[up];
        if (compare(string, prefix) == 0) {
            System.out.println(string);
        } else {
            break;
        }
        up++;
    }
    return true;
}

Here's a reasonably compact format for storing a trie on disk. I'll specify it by its (efficient) deserialization algorithm. Initialize a stack whose initial contents are the root node of the trie. Read characters one by one and interpret them as follows. The meaning of a letter A-Z is "allocate a new node, make it a child of the current top of stack, and push the newly allocated node onto the stack". The letter indicates which position the child is in. The meaning of a space is "set the valid flag of the node on top of the stack to true". The meaning of a backspace (\b) is "pop the stack".
For example, the input
TREE \b\bIE \b\b\bOO \b\b\b
gives the word list
TREE
TRIE
TOO
On your desktop, construct the trie using whichever method you like, then serialize it with the following recursive algorithm (pseudocode):
serialize(node):
    if node is valid: put(' ')
    for letter in A-Z:
        if node has a child under letter:
            put(letter)
            serialize(child)
            put('\b')
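For completeness, a sketch of the matching reader in Java (assuming the stream contains only 'A'-'Z', ' ' and '\b', and that TrieNode has the children/valid layout from the question):

import java.io.*;
import java.util.ArrayDeque;
import java.util.Deque;

static TrieNode readTrie(Reader in) throws IOException {
    TrieNode root = new TrieNode();
    Deque<TrieNode> stack = new ArrayDeque<>();
    stack.push(root);
    int ch;
    while ((ch = in.read()) != -1) {
        if (ch == ' ') {
            stack.peek().valid = true;   // top of stack ends a word
        } else if (ch == '\b') {
            stack.pop();                 // done with this subtree
        } else {                         // 'A'..'Z': allocate a child, descend
            TrieNode child = new TrieNode();
            stack.peek().children[ch - 'A'] = child;
            stack.push(child);
        }
    }
    return root;
}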

This isn't a magic bullet, but you can probably reduce your runtime slightly by doing one big memory allocation instead of a bunch of little ones.
I saw a ~10% speedup in the test code below (C++, not Java, sorry) when I used a "node pool" instead of relying on individual allocations:
#include <string>
#include <fstream>

#define USE_NODE_POOL

#ifdef USE_NODE_POOL
struct Node;
Node *node_pool;
int node_pool_idx = 0;
#endif

struct Node {
    void insert(const std::string &s) { insert_helper(s, 0); }

    void insert_helper(const std::string &s, int idx) {
        if (idx >= s.length()) return;
        int char_idx = s[idx] - 'A';
        if (children[char_idx] == nullptr) {
#ifdef USE_NODE_POOL
            children[char_idx] = &node_pool[node_pool_idx++];
#else
            children[char_idx] = new Node();
#endif
        }
        children[char_idx]->insert_helper(s, idx + 1);
    }

    Node *children[26] = {};
};

int main() {
#ifdef USE_NODE_POOL
    node_pool = new Node[400000];
#endif
    Node n;
    std::ifstream fin("TWL06.txt");
    std::string word;
    while (fin >> word) n.insert(word);
}

Tries that preallocate space for all possible children (256 of them) waste a huge amount of space. You are making your cache cry. Store those pointers to children in a resizable data structure.
Some tries optimize by having one node represent a long string, and break that string up only when needed.
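For illustration, a node that only stores the children it actually has might look like this (a hedged sketch; the linear scan is fine in practice because most trie nodes have only a few children):

class CompactTrieNode {
    boolean valid;
    char[] childKeys = {};
    CompactTrieNode[] childNodes = {};

    CompactTrieNode child(char c) {
        for (int i = 0; i < childKeys.length; i++) {
            if (childKeys[i] == c) return childNodes[i];
        }
        return null;
    }

    CompactTrieNode addChild(char c) {
        childKeys = java.util.Arrays.copyOf(childKeys, childKeys.length + 1);
        childNodes = java.util.Arrays.copyOf(childNodes, childNodes.length + 1);
        childKeys[childKeys.length - 1] = c;
        return childNodes[childNodes.length - 1] = new CompactTrieNode();
    }
}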

Instead of a simple file you can use a database like SQLite with a nested-set or Celko tree to store the trie. You can also build a faster and smaller (fewer nodes) structure with a ternary search trie.
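To give a flavor of the ternary search trie option, insertion is only a few lines (a sketch with illustrative names; call it as root = insert(root, word, 0)):

class TSTNode {
    char c;
    boolean valid; // true if a word ends at this node
    TSTNode lo, eq, hi;
}

static TSTNode insert(TSTNode node, String key, int i) {
    char c = key.charAt(i);
    if (node == null) {
        node = new TSTNode();
        node.c = c;
    }
    if (c < node.c) {
        node.lo = insert(node.lo, key, i);
    } else if (c > node.c) {
        node.hi = insert(node.hi, key, i);
    } else if (i < key.length() - 1) {
        node.eq = insert(node.eq, key, i + 1);
    } else {
        node.valid = true;
    }
    return node;
}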

I don't like the idea of addressing nodes by array index, but only because it requires one more addition (index to pointer). However, with an array of preallocated nodes you will probably save some time on allocation and initialization, and you can also save a lot of space by reserving the first 26 indices for leaf nodes. That way you won't need to allocate and initialize 180,000 leaf nodes.
Also, with indices you will be able to read the prepared node array from disk in binary format. That has to be several times faster. But I'm not sure how to do this in your language. Is this Java?
If your source vocabulary is sorted, you may also save some time by comparing a prefix of the current string with the previous one, e.g. the first 4 characters. If they are equal, you can start your
for(int level=0 ; level < key.length() ; level++) {
loop from the 5th level.
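A sketch generalizing this to the full shared prefix (assuming the word list is sorted and words are at most 32 letters; root, getNode() and TrieNode are the ones from the question's code):

TrieNode[] path = new TrieNode[32]; // nodes visited for the previous word
String prev = "";

void insertSorted(String key) {
    // Length of the prefix shared with the previous word.
    int common = 0;
    int max = Math.min(prev.length(), key.length());
    while (common < max && prev.charAt(common) == key.charAt(common)) {
        common++;
    }
    // Resume crawling at the last shared node instead of the root.
    TrieNode crawler = (common == 0) ? root : path[common - 1];
    for (int level = common; level < key.length(); level++) {
        int index = key.charAt(level) - 'A';
        if (crawler.children[index] == null) {
            crawler.children[index] = getNode();
        }
        crawler = crawler.children[index];
        path[level] = crawler;
    }
    crawler.valid = true;
    prev = key;
}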

Is it space-inefficient or time-inefficient? If you are rolling your own plain trie, then space may be part of the problem on a mobile device. Check out Patricia/radix tries, especially if you are using it as a prefix look-up tool.
Trie:
http://en.wikipedia.org/wiki/Trie
Patricia/Radix trie:
http://en.wikipedia.org/wiki/Radix_tree
You didn't mention a language but here are two implementations of prefix tries in Java.
Regular trie:
http://github.com/phishman3579/java-algorithms-implementation/blob/master/src/com/jwetherell/algorithms/data_structures/Trie.java
Patricia/Radix (space-efficient) trie:
http://github.com/phishman3579/java-algorithms-implementation/blob/master/src/com/jwetherell/algorithms/data_structures/PatriciaTrie.java

Generally speaking, avoid creating a lot of objects from scratch in Java; it is slow and carries a massive overhead. Better to implement your own pooling class for memory management, one that allocates e.g. half a million entries at a time in one go.
Also, serialization is too slow for large lexicons. Use a binary read to quickly populate the array-based representations proposed above.
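A sketch of what that pooling can look like for this trie (assumption: a safe upper bound on the node count is known in advance; the flat arrays double as the binary-ready representation mentioned above):

// All nodes live in two big preallocated arrays, so building the trie
// performs exactly two large allocations. Slot 0 is the root and a child
// value of 0 means "no child".
static final int MAX_NODES = 500_000;     // assumed bound; ~400k are needed
int[] children = new int[MAX_NODES * 26];
boolean[] valid = new boolean[MAX_NODES];
int nodeCount = 1;                        // slot 0 is the root

void insert(String key) {
    int node = 0;
    for (int level = 0; level < key.length(); level++) {
        int index = key.charAt(level) - 'A';
        if (children[node * 26 + index] == 0) {
            children[node * 26 + index] = nodeCount++;
        }
        node = children[node * 26 + index];
    }
    valid[node] = true;
}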

Related

sum and max values in a single iteration

I have a List of custom CallRecord objects:
public class CallRecord {
    private String callId;
    private String aNum;
    private String bNum;
    private int seqNum;
    private byte causeForOutput;
    private int duration;
    private RecordType recordType;
    // ...
}
There are two logical conditions and the output of each is:
Highest seqNum, sum(duration)
Highest seqNum, sum(duration), highest causeForOutput
As per my understanding, Stream.max(), Collectors.summarizingInt() and so on will require several iterations for the above result. I also came across a thread suggesting a custom collector, but I am unsure.
Below is the simple, pre-Java 8 code that is serving the purpose:
if (...) {
    for (CallRecord currentRecord : completeCallRecords) {
        highestSeqNum = currentRecord.getSeqNum() > highestSeqNum ? currentRecord.getSeqNum() : highestSeqNum;
        sumOfDuration += currentRecord.getDuration();
    }
} else {
    byte highestCauseForOutput = 0;
    for (CallRecord currentRecord : completeCallRecords) {
        highestSeqNum = currentRecord.getSeqNum() > highestSeqNum ? currentRecord.getSeqNum() : highestSeqNum;
        sumOfDuration += currentRecord.getDuration();
        highestCauseForOutput = currentRecord.getCauseForOutput() > highestCauseForOutput ? currentRecord.getCauseForOutput() : highestCauseForOutput;
    }
}
Your desire to do everything in a single iteration is irrational. You should strive for simplicity first, performance if necessary, but insisting on a single iteration is neither.
The performance depends on too many factors to make a prediction in advance. The process of iterating (over a plain collection) itself is not necessarily an expensive operation, and a simpler loop body may even make multiple traversals with a straightforward operation more efficient than a single traversal trying to do everything at once. The only way to find out is to measure, using the actual operations.
Converting the operation to Stream operations may simplify the code, if you use it straight-forwardly, i.e.
int highestSeqNum =
    completeCallRecords.stream().mapToInt(CallRecord::getSeqNum).max().orElse(-1);
int sumOfDuration =
    completeCallRecords.stream().mapToInt(CallRecord::getDuration).sum();
if (!condition) {
    byte highestCauseForOutput = (byte)
        completeCallRecords.stream().mapToInt(CallRecord::getCauseForOutput).max().orElse(0);
}
If you still feel uncomfortable with the fact that there are multiple iterations, you could try to write a custom collector performing all operations at once, but the result will not be better than your loop, neither in terms of readability nor efficiency.
Still, I’d prefer avoiding code duplication over trying to do everything in one loop, i.e.
for (CallRecord currentRecord : completeCallRecords) {
    int nextSeqNum = currentRecord.getSeqNum();
    highestSeqNum = nextSeqNum > highestSeqNum ? nextSeqNum : highestSeqNum;
    sumOfDuration += currentRecord.getDuration();
}
if (!condition) {
    byte highestCauseForOutput = 0;
    for (CallRecord currentRecord : completeCallRecords) {
        byte next = currentRecord.getCauseForOutput();
        highestCauseForOutput = next > highestCauseForOutput ? next : highestCauseForOutput;
    }
}
With Java 8 you can solve it with a Collector, with no redundant iteration.
Normally we can use the factory methods from Collectors, but in your case you need to implement a custom Collector that reduces a Stream<CallRecord> to an instance of SummarizingCallRecord, which contains the attributes you require.
Mutable accumulation/result type:
class SummarizingCallRecord {
    private int highestSeqNum = 0;
    private int sumDuration = 0;
    // getters/setters ...
}
Custom collector:
BiConsumer<SummarizingCallRecord, CallRecord> myAccumulator = (a, callRecord) -> {
    a.setHighestSeqNum(Math.max(a.getHighestSeqNum(), callRecord.getSeqNum()));
    a.setSumDuration(a.getSumDuration() + callRecord.getDuration());
};

BinaryOperator<SummarizingCallRecord> myCombiner = (a1, a2) -> {
    a1.setHighestSeqNum(Math.max(a1.getHighestSeqNum(), a2.getHighestSeqNum()));
    a1.setSumDuration(a1.getSumDuration() + a2.getSumDuration());
    return a1;
};

Collector<CallRecord, SummarizingCallRecord, SummarizingCallRecord> myCollector =
    Collector.of(
        () -> new SummarizingCallRecord(),
        myAccumulator,
        myCombiner
        // Collector.Characteristics.CONCURRENT/IDENTITY_FINISH/UNORDERED
    );
Execution example:
List<CallRecord> callRecords = new ArrayList<>();
callRecords.add(new CallRecord(1, 100));
callRecords.add(new CallRecord(5, 50));
callRecords.add(new CallRecord(3, 1000));

SummarizingCallRecord summarizingCallRecord = callRecords.stream()
    .collect(myCollector);

// Result:
// summarizingCallRecord.highestSeqNum = 5
// summarizingCallRecord.sumDuration = 1150
You don't need to, and should not, implement the logic with the Stream API, because the traditional for-loop is simple enough and the Java 8 Stream API can't make it simpler:
int highestSeqNum = 0;
long sumOfDuration = 0;
byte highestCauseForOutput = 0; // compute it even if it may not be used; there is no performance hit.
for (CallRecord currentRecord : completeCallRecords) {
    highestSeqNum = Math.max(highestSeqNum, currentRecord.getSeqNum());
    sumOfDuration += currentRecord.getDuration();
    highestCauseForOutput = (byte) Math.max(highestCauseForOutput, currentRecord.getCauseForOutput());
}
// Do something with or without highestCauseForOutput.

Implementing an Iterative Single Stack Binary Tree Copy Function

As a thought exercise I am trying to implement an iterative tree (binary or binary search tree) copy function.
It is my understanding that it can be achieved trivially:
with a single stack
without using a wrapper (that contains references to the copy and original nodes)
without a node having a reference to its parent (would a parent reference in a node run counter to the true definition of a tree [which I believe is a DAG]?)
I have written different implementations that meet the inverse of the above constraints but I am uncertain how to approach the problem with the constraints.
I did not see anything in Algorithms 4/e and have not seen anything online (beyond statements of how trivial it is). I considered using the current/previous-variable concept from in-order and post-order traversal, but I did not see a way to track accurately when popping the stack. I also briefly considered a hash map, but I feel this is still just extra storage, like the extra stack.
Any help in understanding the concepts/idioms behind the approach that I am not seeing is gratefully received.
Thanks in advance.
Edit:
Some requests for what I've tried so far. Here is the 2-stack solution, which I believe should be the most trivial to turn into the 1-stack version.
It's written in C++. I am new to the language (but not programming) and teaching myself using C++ Primer 5/e (Lippman, Lajoie, Moo) [C++11] and the internet. If any of the code is wrong from a language perspective, please let me know (although I'm aware Code Review Stack Exchange is the place for an actual review).
I have a template Node that is used by other parts of the code.
template<typename T>
struct Node;

typedef Node<std::string> tree_node;
typedef std::shared_ptr<tree_node> shared_ptr_node;

template<typename T>
struct Node final {
public:
    const T value;
    const shared_ptr_node &left = m_left;
    const shared_ptr_node &right = m_right;

    Node(const T value, const shared_ptr_node left = nullptr, const shared_ptr_node right = nullptr)
        : value(value), m_left(left), m_right(right) {}

    void updateLeft(const shared_ptr_node node) {
        m_left = node;
    }

    void updateRight(const shared_ptr_node node) {
        m_right = node;
    }

private:
    shared_ptr_node m_left;
    shared_ptr_node m_right;
};
And then the 2-stack implementation.
shared_ptr_node iterativeCopy2Stacks(const shared_ptr_node &node) {
    const shared_ptr_node newRoot = std::make_shared<tree_node>(node->value);
    std::stack<shared_ptr_node> s;
    s.push(node);
    std::stack<shared_ptr_node> copyS;
    copyS.push(newRoot);
    shared_ptr_node original = nullptr;
    shared_ptr_node copy = nullptr;
    while (!s.empty()) {
        original = s.top();
        s.pop();
        copy = copyS.top();
        copyS.pop();
        if (original->right) {
            s.push(original->right);
            copy->updateRight(std::make_shared<tree_node>(original->right->value));
            copyS.push(copy->right);
        }
        if (original->left) {
            s.push(original->left);
            copy->updateLeft(std::make_shared<tree_node>(original->left->value));
            copyS.push(copy->left);
        }
    }
    return newRoot;
}
I'm not fluent in C++, so you'll have to settle for pseudocode:
node copy(treenode n):
    if n == null:
        return null
    node tmp = clone(n) // no deep clone!
    stack s
    s.push(tmp)
    while !s.empty():
        node cur = s.pop()
        if cur.left != null:
            cur.left = clone(cur.left)
            s.push(cur.left)
        if cur.right != null:
            cur.right = clone(cur.right)
            s.push(cur.right)
    return tmp
Note that clone(node) is not a deep clone. The basic idea is to start with a shallow clone of the root, then iterate over all children of that node and replace those children (still references to the original nodes) with shallow copies, then replace those nodes' children, etc. This algorithm traverses the tree in a DFS manner. If you prefer BFS (for whatever reason), you can just replace the stack with a queue. Another advantage of this code: it can be altered with a few minor changes to work for arbitrary trees.
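The same idea in Java, for comparison with the question's two-stack version (a sketch; this simplified Node class is hypothetical, not the question's template):

import java.util.ArrayDeque;
import java.util.Deque;

class Node {
    String value;
    Node left, right;

    Node(String value, Node left, Node right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }
}

static Node copy(Node n) {
    if (n == null) return null;
    Node root = new Node(n.value, n.left, n.right); // shallow clone
    Deque<Node> stack = new ArrayDeque<>();
    stack.push(root);
    while (!stack.isEmpty()) {
        Node cur = stack.pop();
        // Each child is still a reference into the original tree; replace
        // it with a shallow clone, then keep going below that clone.
        if (cur.left != null) {
            cur.left = new Node(cur.left.value, cur.left.left, cur.left.right);
            stack.push(cur.left);
        }
        if (cur.right != null) {
            cur.right = new Node(cur.right.value, cur.right.left, cur.right.right);
            stack.push(cur.right);
        }
    }
    return root;
}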
A recursive version of this algorithm (in case you prefer recursive code over my horrible prose):
node copyRec(node n):
    if n.left != null:
        n.left = clone(n.left)
        copyRec(n.left)
    if n.right != null:
        n.right = clone(n.right)
        copyRec(n.right)
    return n

node copy(node n):
    return copyRec(clone(n))
EDIT:
If you want to have a look at working code, I've created an implementation in Python.

splitting up the contents of a single line

I just went through a problem where the input is a string containing a single run-on word.
The line is not readable; for example, I want to leave is written as Iwanttoleave.
The problem is to separate out each of the tokens (words, numbers, abbreviations, etc.).
I have no idea where to start.
The first thought that came to my mind is making a dictionary and then mapping accordingly, but I think making a dictionary is not at all a good idea.
Can anyone suggest an algorithm to do it?
First of all, create a dictionary which helps you identify whether some string is a valid word or not.
boolean isValidString(String s) {
    if (dictionary.contains(s))
        return true;
    return false;
}
Now you can write recursive code to split the string and create an array of actually useful words.
ArrayList<String> usefulWords = new ArrayList<String>(); // global declaration

void split(String s) {
    int l = s.length();
    for (int i = l - 1; i >= 0; i--) {
        if (isValidString(s.substring(i, l))) { // substring from index i through l-1
            usefulWords.add(s.substring(i, l));
            split(s.substring(0, i));
        }
    }
}
Now, use these usefulWords to generate all possible strings. Maybe something like this:
ArrayList<String>[] splits = new ArrayList[10]; // assuming max 10 possible outputs

void allPossibleStrings(String s, int level) {
    for (int i = 0; i < s.length(); i++) {
        if (usefulWords.contains(s.substring(0, i))) {
            splits[level].add(s.substring(0, i));
            allPossibleStrings(s.substring(i, s.length()), level);
            level++;
        }
    }
}
Now, this code gives you all possible splits in a somewhat arbitrary manner, e.g.:
dictionary = {cat, dog, i, am, pro, gram, program, programmer, grammer}
input:
string = program
output:
splits[0] = {pro, gram}
splits[1] = {program}
input:
string = iamprogram
output:
splits[0] = {i, am, pro, gram} //since `mer` is not in dictionary
splits[1] = {program}
I did not give much thought to the last part, but I think you should be able to formulate code from there as per your requirement.
Also, since no language is tagged, I've taken the liberty of writing the code in Java-like syntax, as it is really easy to understand.
Instead of using a Dictionary, I'd suggest you use a Trie with all your valid words (the whole English dictionary?). Then you can start moving one letter at a time in your input line and the trie at the same time. If the letter leads to more results in the trie, you can continue expanding the current word, and if not, you can start looking for a new word in the trie.
This won't be a forward-only search for sure, so you'll need some sort of backtracking.
// This method generates a list with all the matching phrases for the given input
List<string> CandidatePhrases(string input) {
    Trie validWords = BuildTheTrieWithAllValidWords();

    List<string> currentWords = new List<string>();
    List<string> possiblePhrases = new List<string>();

    // The root of the trie has an empty key that points to all the first letters of all words
    Trie currentWord = validWords;
    int currentLetter = -1;

    // Calls a backtracking method that creates all possible phrases
    FindPossiblePhrases(input, validWords, currentWords, currentWord, currentLetter, possiblePhrases);

    return possiblePhrases;
}

// The Trie structure could be something like
class Trie {
    char key;
    bool valid;
    List<Trie> children;
    Trie parent;

    Trie Next(char nextLetter) {
        return children.FirstOrDefault(c => c.key == nextLetter);
    }

    string WholeWord() {
        Debug.Assert(valid);
        string word = "";
        Trie current = this;
        while (current.key != '\0') {
            word = current.key + word;
            current = current.parent;
        }
        return word;
    }
}

void FindPossiblePhrases(string input, Trie validWords, List<string> currentWords, Trie currentWord, int currentLetter, List<string> possiblePhrases) {
    if (currentLetter == input.Length - 1) {
        if (currentWord.valid) {
            string phrase = "";
            foreach (string word in currentWords) {
                phrase += word;
                phrase += " ";
            }
            phrase += currentWord.WholeWord();
            possiblePhrases.Add(phrase);
        }
    }
    else {
        // The currentWord may be a valid word. If that's the case, the next letter could be the
        // first of a new word, or the next letter of a bigger word that begins with currentWord
        if (currentWord.valid) {
            // Try to match phrases when the currentWord is a valid word
            currentWords.Add(currentWord.WholeWord());
            FindPossiblePhrases(input, validWords, currentWords, validWords, currentLetter, possiblePhrases);
            currentWords.RemoveAt(currentWords.Count - 1);
        }

        // Whether or not the currentWord is a valid word, try to match a longer word that begins with it
        int nextLetter = currentLetter + 1;
        Trie nextWord = currentWord.Next(input[nextLetter]);

        // If nextWord is null, no word begins with currentWord followed by input[nextLetter].
        if (nextWord != null) {
            FindPossiblePhrases(input, validWords, currentWords, nextWord, nextLetter, possiblePhrases);
        }
    }
}

Lossless compression in small blocks with precomputed dictionary

I have an application where I am reading and writing small blocks of data (a few hundred bytes) hundreds of millions of times. I'd like to generate a compression dictionary based on an example data file and use that dictionary forever as I read and write the small blocks. I'm leaning toward the LZW compression algorithm. The Wikipedia page (http://en.wikipedia.org/wiki/Lempel-Ziv-Welch) lists pseudocode for compression and decompression. It looks fairly straightforward to modify it such that the dictionary creation is a separate block of code. So I have two questions:
Am I on the right track or is there a better way?
Why does the LZW algorithm add to the dictionary during the decompression step? Can I omit that, or would I lose efficiency in my dictionary?
Thanks.
Update: Now I'm thinking the ideal case would be to find a library that lets me store the dictionary separately from the compressed data. Does anything like that exist?
Update: I ended up taking the code at http://www.enusbaum.com/blog/2009/05/22/example-huffman-compression-routine-in-c and adapting it. I am Chris in the comments on that page. I emailed my modifications back to the blog author, but I haven't heard back yet. The compression rates I'm seeing with that code are not at all impressive. Maybe that is due to the 8-bit tree size.
Update: I converted it to 16 bits and the compression is better. It's also much faster than the original code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace Book.Core
{
  public class Huffman16
  {
    private readonly double log2 = Math.Log(2);

    private List<Node> HuffmanTree = new List<Node>();

    internal class Node
    {
      public long Frequency { get; set; }
      public byte Uncoded0 { get; set; }
      public byte Uncoded1 { get; set; }
      public uint Coded { get; set; }
      public int CodeLength { get; set; }
      public Node Left { get; set; }
      public Node Right { get; set; }

      public bool IsLeaf
      {
        get { return Left == null; }
      }

      public override string ToString()
      {
        var coded = "00000000" + Convert.ToString(Coded, 2);
        return string.Format("Uncoded={0}, Coded={1}, Frequency={2}", (Uncoded1 << 8) | Uncoded0, coded.Substring(coded.Length - CodeLength), Frequency);
      }
    }

    public Huffman16(long[] frequencies)
    {
      if (frequencies.Length != ushort.MaxValue + 1)
      {
        throw new ArgumentException("frequencies.Length must equal " + (ushort.MaxValue + 1));
      }
      BuildTree(frequencies);
      EncodeTree(HuffmanTree[HuffmanTree.Count - 1], 0, 0);
    }

    public static long[] GetFrequencies(byte[] sampleData, bool safe)
    {
      if (sampleData.Length % 2 != 0)
      {
        throw new ArgumentException("sampleData.Length must be a multiple of 2.");
      }
      var histogram = new long[ushort.MaxValue + 1];
      if (safe)
      {
        for (int i = 0; i <= ushort.MaxValue; i++)
        {
          histogram[i] = 1;
        }
      }
      for (int i = 0; i < sampleData.Length; i += 2)
      {
        histogram[(sampleData[i] << 8) | sampleData[i + 1]] += 1000;
      }
      return histogram;
    }

    public byte[] Encode(byte[] plainData)
    {
      if (plainData.Length % 2 != 0)
      {
        throw new ArgumentException("plainData.Length must be a multiple of 2.");
      }

      Int64 iBuffer = 0;
      int iBufferCount = 0;

      using (MemoryStream msEncodedOutput = new MemoryStream())
      {
        //Write Final Output Size 1st
        msEncodedOutput.Write(BitConverter.GetBytes(plainData.Length), 0, 4);

        //Begin Writing Encoded Data Stream
        iBuffer = 0;
        iBufferCount = 0;
        for (int i = 0; i < plainData.Length; i += 2)
        {
          Node FoundLeaf = HuffmanTree[(plainData[i] << 8) | plainData[i + 1]];

          //How many bits are we adding?
          iBufferCount += FoundLeaf.CodeLength;

          //Shift the buffer
          iBuffer = (iBuffer << FoundLeaf.CodeLength) | FoundLeaf.Coded;

          //Are there at least 8 bits in the buffer?
          while (iBufferCount > 7)
          {
            //Write to output
            int iBufferOutput = (int)(iBuffer >> (iBufferCount - 8));
            msEncodedOutput.WriteByte((byte)iBufferOutput);
            iBufferCount = iBufferCount - 8;
            iBufferOutput <<= iBufferCount;
            iBuffer ^= iBufferOutput;
          }
        }

        //Write remaining bits in buffer
        if (iBufferCount > 0)
        {
          iBuffer = iBuffer << (8 - iBufferCount);
          msEncodedOutput.WriteByte((byte)iBuffer);
        }
        return msEncodedOutput.ToArray();
      }
    }

    public byte[] Decode(byte[] bInput)
    {
      long iInputBuffer = 0;
      int iBytesWritten = 0;

      //Establish Output Buffer to write unencoded data to
      byte[] bDecodedOutput = new byte[BitConverter.ToInt32(bInput, 0)];

      var current = HuffmanTree[HuffmanTree.Count - 1];

      //Begin Looping through Input and Decoding
      iInputBuffer = 0;
      for (int i = 4; i < bInput.Length; i++)
      {
        iInputBuffer = bInput[i];

        for (int bit = 0; bit < 8; bit++)
        {
          if ((iInputBuffer & 128) == 0)
          {
            current = current.Left;
          }
          else
          {
            current = current.Right;
          }

          if (current.IsLeaf)
          {
            bDecodedOutput[iBytesWritten++] = current.Uncoded1;
            bDecodedOutput[iBytesWritten++] = current.Uncoded0;

            if (iBytesWritten == bDecodedOutput.Length)
            {
              return bDecodedOutput;
            }

            current = HuffmanTree[HuffmanTree.Count - 1];
          }

          iInputBuffer <<= 1;
        }
      }
      throw new Exception();
    }

    private static void EncodeTree(Node node, int depth, uint value)
    {
      if (node != null)
      {
        if (node.IsLeaf)
        {
          node.CodeLength = depth;
          node.Coded = value;
        }
        else
        {
          depth++;
          value <<= 1;
          EncodeTree(node.Left, depth, value);
          EncodeTree(node.Right, depth, value | 1);
        }
      }
    }

    private void BuildTree(long[] frequencies)
    {
      var tiny = 0.1 / ushort.MaxValue;
      var fraction = 0.0;

      SortedDictionary<double, Node> trees = new SortedDictionary<double, Node>();
      for (int i = 0; i <= ushort.MaxValue; i++)
      {
        var leaf = new Node()
        {
          Uncoded1 = (byte)(i >> 8),
          Uncoded0 = (byte)(i & 255),
          Frequency = frequencies[i]
        };
        HuffmanTree.Add(leaf);
        if (leaf.Frequency > 0)
        {
          trees.Add(leaf.Frequency + (fraction += tiny), leaf);
        }
      }

      while (trees.Count > 1)
      {
        var e = trees.GetEnumerator();
        e.MoveNext();
        var first = e.Current;
        e.MoveNext();
        var second = e.Current;

        //Join smallest two nodes
        var NewParent = new Node();
        NewParent.Frequency = first.Value.Frequency + second.Value.Frequency;
        NewParent.Left = first.Value;
        NewParent.Right = second.Value;

        HuffmanTree.Add(NewParent);

        //Remove the two that just got joined into one
        trees.Remove(first.Key);
        trees.Remove(second.Key);

        trees.Add(NewParent.Frequency + (fraction += tiny), NewParent);
      }
    }
  }
}
Usage examples:
To create the dictionary from sample data:
var freqs = Huffman16.GetFrequencies(File.ReadAllBytes(@"D:\nodes"), true);
To initialize an encoder with a given dictionary:
var huff = new Huffman16(freqs);
And to do some compression:
var encoded = huff.Encode(raw);
And decompression:
var raw = huff.Decode(encoded);
The hard part in my mind is how you build your static dictionary. You don't want to use the LZW dictionary built from your sample data. LZW wastes a bunch of time learning, since it can't build the dictionary faster than the decompressor can (a token will only be used the second time it's seen by the compressor, so the decompressor can add it to its dictionary the first time it's seen). The flip side of this is that it's adding things to the dictionary that may never get used, just in case the string shows up again. (e.g., to have a token for 'stackoverflow' you'll also have entries for 'ac', 'ko', 've', 'rf', etc.)
However, looking at the raw token stream from an LZ77 algorithm could work well. You'll only see tokens for strings seen at least twice. You can then build a list of the most common tokens/strings to include in your dictionary.
Once you have a static dictionary, using LZW sans the dictionary update seems like an easy implementation, but to get the best compression I'd consider a static Huffman table instead of the traditional 12-bit fixed-size token (as George Phillips suggested). An LZW dictionary will burn tokens for all the sub-strings you may never actually encode (e.g., if you can encode 'stackoverflow', there will be tokens for 'st', 'sta', 'stac', 'stack', 'stacko', etc.).
At this point it really isn't LZW - what makes LZW clever is how the decompressor can build the same dictionary the compressor used while seeing only the compressed data stream. Something you won't be using. But all LZW implementations have a state where the dictionary is full and is no longer updated; this is how you'd use it with your static dictionary.
LZW adds to the dictionary during decompression to ensure it has the same dictionary state as the compressor. Otherwise the decoding would not function properly.
However, if you were in a state where the dictionary was fixed then, yes, you would not need to add new codes.
Your approach will work reasonably well and it's easy to use existing tools to prototype and measure the results. That is, compress the example file and then the example and test data together. The size of the latter less the former will be the expected compressed size of a block.
LZW is a clever way to build up a dictionary on the fly and gives decent results. But a more thorough analysis of your typical data blocks is likely to generate a more efficient dictionary.
There's also room for improvement in how LZW represents compressed data. For instance, each dictionary reference could be Huffman encoded to a closer to optimal length based on the expected frequency of their use. To be truly optimal the codes should be arithmetic encoded.
I would look at your data to see if there's an obvious reason it's so easy to compress. You might be able to do something much simpler than LZ78. I've done both LZ77 (lookback) and LZ78 (dictionary).
Try running an LZ77 on your data. There's no dictionary with LZ77, so you could use a library without alteration. Deflate is an implementation of LZ77.
Your idea of using a common dictionary is a good one, but it's hard to know whether the files are similar to each other or just self-similar without doing some tests.
The right track is to use a library -- almost every modern language has a compression library: C#, Python, Perl, Java, VB.NET, whatever you use.
LZW saves some space by making the dictionary depend on previous input. It has an initial dictionary, and as you decompress you add entries to it -- so the dictionary keeps growing. (I am omitting some details here, but this is the general idea.)
You can omit this step by supplying the whole (complete) dictionary as the initial one. But this would cost some space.
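For instance, zlib's preset-dictionary feature (exposed in Java through Deflater/Inflater) gives you exactly this behaviour: the dictionary is stored separately and supplied to both sides up front. A hedged sketch:

import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Compress/decompress small blocks against a preset dictionary that is
// distributed once, separately from the compressed blocks.
static byte[] compress(byte[] block, byte[] dictionary) {
    Deflater deflater = new Deflater();
    deflater.setDictionary(dictionary); // must be set before compressing
    deflater.setInput(block);
    deflater.finish();
    byte[] buf = new byte[block.length * 2 + 64];
    int n = deflater.deflate(buf);
    deflater.end();
    return Arrays.copyOf(buf, n);
}

static byte[] decompress(byte[] data, byte[] dictionary, int maxLength)
        throws DataFormatException {
    Inflater inflater = new Inflater();
    inflater.setInput(data);
    byte[] out = new byte[maxLength];
    int n = inflater.inflate(out); // returns 0: a preset dictionary is required
    if (n == 0 && inflater.needsDictionary()) {
        inflater.setDictionary(dictionary);
        n = inflater.inflate(out);
    }
    inflater.end();
    return Arrays.copyOf(out, n);
}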
I find this approach quite interesting for repeated log entries, and something I would like to explore.
Can you share the compression statistics for using this approach for your use case, so I can compare it with other alternatives?
Have you considered having the common dictionary grow over time or is that not a valid option?

Algorithm to format text to Pascal or camel casing

Using this question as the base, is there an algorithm or coding example to change some text to Pascal or camel casing?
For example:
mynameisfred
becomes
Camel: myNameIsFred
Pascal: MyNameIsFred
I found a thread with a bunch of Perl guys arguing the toss on this question over at http://www.perlmonks.org/?node_id=336331.
I hope this isn't too much of a non-answer to the question, but I would say you have a bit of a problem in that it would be a very open-ended algorithm which could have a lot of 'misses' as well as hits. For example, say you inputted:-
camelCase("hithisisatest");
The output could be:-
"hiThisIsATest"
Or:-
"hitHisIsATest"
There's no way the algorithm would know which to prefer. You could add some extra code to specify that you'd prefer more common words, but again misses would occur (Peter Norvig wrote a very small spelling corrector over at http://norvig.com/spell-correct.html which might help algorithm-wise; I wrote a C# implementation, if C# is your language).
I'd agree with Mark and say you'd be better off having an algorithm that takes a delimited input, i.e. this_is_a_test and converts that. That'd be simple to implement, i.e. in pseudocode:-
SetPhraseCase(phrase, CamelOrPascal):
    if no delimiters
        if camelCase
            return lowerFirstLetter(phrase)
        else
            return capitaliseFirstLetter(phrase)
    words = splitOnDelimiter(phrase)
    if camelCase
        ret = lowerFirstLetter(first word)
    else
        ret = capitaliseFirstLetter(first word)
    for i in 2 to len(words): ret += capitaliseFirstLetter(words[i])
    return ret

capitaliseFirstLetter(word):
    if len(word) <= 1 return upper(word)
    return upper(word[0]) + word[1..len(word)]

lowerFirstLetter(word):
    if len(word) <= 1 return lower(word)
    return lower(word[0]) + word[1..len(word)]
You could also replace my capitaliseFirstLetter() function with a proper case algorithm if you so wished.
A C# implementation of the above described algorithm is as follows (complete console program with test harness):-
using System;

class Program {
    static void Main(string[] args) {
        var caseAlgorithm = new CaseAlgorithm('_');

        while (true) {
            string input = Console.ReadLine();
            if (string.IsNullOrEmpty(input)) return;

            Console.WriteLine("Input '{0}' in camel case: '{1}', pascal case: '{2}'",
                input,
                caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.CamelCase),
                caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.PascalCase));
        }
    }
}

public class CaseAlgorithm {
    public enum CaseMode { PascalCase, CamelCase }

    private char delimiterChar;

    public CaseAlgorithm(char inDelimiterChar) {
        delimiterChar = inDelimiterChar;
    }

    public string SetPhraseCase(string phrase, CaseMode caseMode) {
        // You might want to do some sanity checks here like making sure
        // there's no invalid characters, etc.
        if (string.IsNullOrEmpty(phrase)) return phrase;

        // .Split() will simply return a string[] of size 1 if no delimiter present so
        // no need to explicitly check this.
        var words = phrase.Split(delimiterChar);

        // Set first word accordingly.
        string ret = setWordCase(words[0], caseMode);

        // If there are other words, set them all to pascal case.
        if (words.Length > 1) {
            for (int i = 1; i < words.Length; ++i)
                ret += setWordCase(words[i], CaseMode.PascalCase);
        }

        return ret;
    }

    private string setWordCase(string word, CaseMode caseMode) {
        switch (caseMode) {
            case CaseMode.CamelCase:
                return lowerFirstLetter(word);
            case CaseMode.PascalCase:
                return capitaliseFirstLetter(word);
            default:
                throw new NotImplementedException(
                    string.Format("Case mode '{0}' is not recognised.", caseMode.ToString()));
        }
    }

    private string lowerFirstLetter(string word) {
        return char.ToLower(word[0]) + word.Substring(1);
    }

    private string capitaliseFirstLetter(string word) {
        return char.ToUpper(word[0]) + word.Substring(1);
    }
}
The only way to do that would be to run each section of the word through a dictionary.
"mynameisfred" is just an array of characters, splitting it up into my Name Is Fred means understanding what the joining of each of those characters means.
You could do it easily if your input was separated in some way, e.g. "my name is fred" or "my_name_is_fred".
