What is the best way to solve this:
I have a group of arrays with 3-4 characters inside each like so:
{p, q, r, s}
{a, b, c}
{t, u, v}
{m, n, o}
I also have an array of dictionary words.
What is the best/fastest way to find out if the arrays of characters can combine to form one of the dictionary words? For example, the above arrays could make the words:
"pat", "rat", "at", "to", "bum" (lol), but not "nub" or "mat". Should I loop through the dictionary to see if words can be made, or get all the combinations from the letters and then compare those to the dictionary?
I had some Scrabble code lying around, so I was able to throw this together. The dictionary I used is sowpods (267,751 words). The code below reads the dictionary as a text file with one uppercase word on each line.
The code is C#:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Diagnostics;
namespace SO_6022848
{
public struct Letter
{
public const string Chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
public static implicit operator Letter(char c)
{
return new Letter() { Index = Chars.IndexOf(c) };
}
public int Index;
public char ToChar()
{
return Chars[Index];
}
public override string ToString()
{
return Chars[Index].ToString();
}
}
public class Trie
{
public class Node
{
public string Word;
public bool IsTerminal { get { return Word != null; } }
public Dictionary<Letter, Node> Edges = new Dictionary<Letter, Node>();
}
public Node Root = new Node();
public Trie(string[] words)
{
for (int w = 0; w < words.Length; w++)
{
var word = words[w];
var node = Root;
for (int len = 1; len <= word.Length; len++)
{
var letter = word[len - 1];
Node next;
if (!node.Edges.TryGetValue(letter, out next))
{
next = new Node();
node.Edges.Add(letter, next);
}
// Mark the terminal node even when it already exists, so that words
// which are prefixes of previously inserted words are not lost.
if (len == word.Length)
{
next.Word = word;
}
node = next;
}
}
}
}
class Program
{
static void GenWords(Trie.Node n, HashSet<Letter>[] sets, int currentArrayIndex, List<string> wordsFound)
{
if (currentArrayIndex < sets.Length)
{
foreach (var edge in n.Edges)
{
if (sets[currentArrayIndex].Contains(edge.Key))
{
if (edge.Value.IsTerminal)
{
wordsFound.Add(edge.Value.Word);
}
GenWords(edge.Value, sets, currentArrayIndex + 1, wordsFound);
}
}
}
}
static void Main(string[] args)
{
const int minArraySize = 3;
const int maxArraySize = 4;
const int setCount = 10;
const bool generateRandomInput = true;
var trie = new Trie(File.ReadAllLines("sowpods.txt"));
var watch = new Stopwatch();
var trials = 10000;
var wordCountSum = 0;
var rand = new Random(37);
for (int t = 0; t < trials; t++)
{
HashSet<Letter>[] sets;
if (generateRandomInput)
{
sets = new HashSet<Letter>[setCount];
for (int i = 0; i < setCount; i++)
{
sets[i] = new HashSet<Letter>();
var size = minArraySize + rand.Next(maxArraySize - minArraySize + 1);
while (sets[i].Count < size)
{
sets[i].Add(Letter.Chars[rand.Next(Letter.Chars.Length)]);
}
}
}
else
{
sets = new HashSet<Letter>[] {
new HashSet<Letter>(new Letter[] { 'P', 'Q', 'R', 'S' }),
new HashSet<Letter>(new Letter[] { 'A', 'B', 'C' }),
new HashSet<Letter>(new Letter[] { 'T', 'U', 'V' }),
new HashSet<Letter>(new Letter[] { 'M', 'N', 'O' }) };
}
watch.Start();
var wordsFound = new List<string>();
for (int i = 0; i < sets.Length - 1; i++)
{
GenWords(trie.Root, sets, i, wordsFound);
}
watch.Stop();
wordCountSum += wordsFound.Count;
if (!generateRandomInput && t == 0)
{
foreach (var word in wordsFound)
{
Console.WriteLine(word);
}
}
}
Console.WriteLine("Elapsed per trial = {0}", new TimeSpan(watch.Elapsed.Ticks / trials));
Console.WriteLine("Average word count per trial = {0:0.0}", (float)wordCountSum / trials);
}
}
}
Here is the output when using your test data (generateRandomInput set to false):
PA
PAT
PAV
QAT
RAT
RATO
RAUN
SAT
SAU
SAV
SCUM
AT
AVO
BUM
BUN
CUM
TO
UM
UN
Elapsed per trial = 00:00:00.0000725
Average word count per trial = 19.0
And the output when using random data (does not print each word):
Elapsed per trial = 00:00:00.0002910
Average word count per trial = 62.2
EDIT: I made it much faster with two changes: storing the word at each terminal node of the trie, so that it doesn't have to be rebuilt, and storing the input letters as an array of hash sets instead of an array of arrays, so that the Contains() call is fast.
There are probably many ways of solving this.
What you are interested in is the number of each character you have available to form a word, and how many of each character is required for each dictionary word. The trick is how to efficiently look up this information in the dictionary.
Perhaps you can use a prefix tree (a trie), some kind of smart hash table, or similar.
Anyway, you will probably have to try out all your possibilities and check them against the dictionary. I.e., if you have three arrays of three values each, there will be 3^3+3^2+3^1=39 combinations to check out. If this process is too slow, then perhaps you could stick a Bloom filter in front of the dictionary, to quickly check if a word is definitely not in the dictionary.
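To make the generate-and-test idea concrete, here is a minimal sketch in Java (the names are mine, and the dictionary is assumed to be a plain set of words): it enumerates one letter per array for every prefix length, which yields exactly the 3^1+3^2+3^3 candidates described above.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
public class GenerateAndTest
{
    static List<String> findWords(char[][] arrays, Set<String> dictionary)
    {
        List<String> found = new ArrayList<>();
        collect(arrays, 0, "", dictionary, found);
        return found;
    }
    // Depth-first enumeration: extend the prefix with each letter of the
    // current array and test every intermediate candidate against the set.
    private static void collect(char[][] arrays, int index, String prefix,
            Set<String> dictionary, List<String> found)
    {
        if (index == arrays.length) return;
        for (char c : arrays[index])
        {
            String candidate = prefix + c;
            if (dictionary.contains(candidate)) found.add(candidate);
            collect(arrays, index + 1, candidate, dictionary, found);
        }
    }
}
A Bloom filter, as suggested above, could sit in front of the dictionary set to reject most non-words before the exact lookup.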
EDIT: Anyway, isn't this essentially the same as Scrabble? Perhaps Googling for "scrabble algorithm" will give you some good clues.
The reformulated question can be answered just by generating and testing. Since you have 4 letters per array and 10 arrays, you've only got about 1 million possible combinations (10 million if you allow a blank character). You'll need an efficient way to look them up; use Berkeley DB or some sort of disk-based hash.
The trie solution posted previously should work as well; you are just restricted more in which characters you can choose at each step of the search. It should be faster, too.
I just made a very large nested for loop like this:
for (NSString *s1 in [letterList objectAtIndex:0]) {
    for (NSString *s2 in [letterList objectAtIndex:1]) {
        // ...8 more nested loops...
    }
}
Then I do a binary search on each combination to see if it is in the dictionary, and add it to an array if it is.
I built a data structure for the two-sum question. In this data structure I built add and find methods.
add - Add the number to an internal data structure.
find - Find if there exists any pair of numbers whose sum is equal to the value.
For example:
add(1); add(3); add(5);
find(4) // return true
find(7) // return false
The following is my code; what is wrong with it?
http://www.lintcode.com/en/problem/two-sum-data-structure-design/
This is the test website; some test cases do not pass.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class TwoSum {
private List<Integer> sets;
TwoSum() {
this.sets = new ArrayList<Integer>();
}
// Add the number to an internal data structure.
public void add(int number) {
// Write your code here
this.sets.add(number);
}
// Find if there exists any pair of numbers which sum is equal to the value.
public boolean find(int value) {
// Write your code here
Collections.sort(sets);
for (int i = 0; i < sets.size(); i++) {
if (sets.get(i) > value) break;
for (int j = i + 1; j < sets.size(); j++) {
if (sets.get(i) + sets.get(j) == value) {
return true;
}
}
}
return false;
}
}
There does not seem to be anything wrong with your code.
However, a coding challenge could possibly require a more performant solution. (You check every item against every item, which takes O(N^2).)
The best way to implement find is to use a HashMap, which takes O(N). It's explained in more detail here.
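For illustration, here is a minimal sketch of the HashMap approach (my own sketch, with a hypothetical class name): add counts occurrences, and find checks for each stored number whether its complement is also present.
import java.util.HashMap;
import java.util.Map;
public class TwoSumFast {
    private final Map<Integer, Integer> counts = new HashMap<>();
    // Add the number to an internal data structure.
    public void add(int number) {
        counts.merge(number, 1, Integer::sum);
    }
    // Find if there exists any pair of numbers whose sum equals the value.
    public boolean find(int value) {
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            int complement = value - e.getKey();
            if (complement == e.getKey()) {
                if (e.getValue() >= 2) return true; // needs the same number twice
            } else if (counts.containsKey(complement)) {
                return true;
            }
        }
        return false;
    }
}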
Let's say I have a map like this:
#####
..###
W.###
. is a discovered cell.
# is an undiscovered cell.
W is a worker. There can be many workers. Each of them can move once per turn, by one cell in 4 directions (up, right, down or left). A worker discovers all 8 cells around him (turns # into .). At any time, there can be at most one worker on a cell.
Maps are not always rectangular. In the beginning, all cells are undiscovered except the neighbours of W.
The goal is to discover all the cells in as few turns as possible.
First approach
Find the nearest # and go towards it. Repeat.
To find the nearest # I start a BFS from W and finish it when the first # is found.
On exemplary map it can give such solution:
##### ##### ##### ##### ##... #.... .....
..### ...## ....# ..... ...W. ..W.. .W...
W.### .W.## ..W.# ...W. ..... ..... .....
6 turns. Pretty far from optimal:
##### ..### ...## ....# .....
..### W.### .W.## ..W.# ...W.
W.### ..### ...## ....# .....
4 turns.
Question
What is the algorithm that discovers all the cells in as few turns as possible?
Here is a basic idea that uses A*. It is probably quite time- and memory-consuming, but it is guaranteed to return an optimal solution and is definitely better than brute force.
The nodes for A* will be the various states, i.e. where the workers are positioned and the discovery state of all cells. Each unique state represents a different node.
Edges will be all possible transitions. One worker has four possible transitions. For more workers, you will need every possible combination (about 4^n edges). This is the part where you can constrain the workers to remain within the grid and not to overlap.
The cost will be the number of turns. The heuristic to approximate the distance to the goal (all cells discovered) can be developed as follows:
A single worker can discover at most three cells per turn. Thus, n workers can discover at most 3*n cells. The minimum number of remaining turns is therefore "number of undiscovered cells / (3 * worker count)". This is the heuristic to use. It could even be improved by determining the maximum number of cells each worker can actually discover in the next turn (at most 3 per worker). The overall heuristic would then be "(undiscovered cells - discoverable cells) / (3 * workers) + 1".
In each step you examine the node with the least overall cost (turns so far + heuristic). For the examined node, you calculate the costs for each surrounding node (possible movements of all workers) and go on.
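As a small illustration, the base heuristic could look like this (a sketch under the assumptions above; ceiling division, because a partial turn still costs a full turn):
class Heuristic {
    // Admissible estimate: each worker uncovers at most 3 new cells per turn.
    static int estimateRemainingTurns(int undiscoveredCells, int workerCount) {
        if (undiscoveredCells == 0) return 0;
        int perTurn = 3 * workerCount;
        return (undiscoveredCells + perTurn - 1) / perTurn; // ceiling division
    }
}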
Strictly speaking, the main part of this answer may be considered as "Not An Answer". So to first cover the actual question:
What is the algorithm that discovers all the cells in as few turns as possible?
Answer: In each step, you can compute all possible successors of the current state. Then the successors of these successors. This can be repeated recursively, until one of the successors contains no more #-fields. The sequence of states through which this successor was reached is optimal regarding the number of moves that have been necessary to reach this state.
So far, this is trivial. But of course, this is not feasible for a "large" map and/or a "large" number of workers.
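As a sketch, this exhaustive search can be phrased as a breadth-first search over states, assuming a hypothetical State type that knows its successors and whether any #-field remains. Because every move costs one turn, the first complete state that the BFS dequeues was reached with the minimum number of turns (duplicate-state detection is omitted for brevity):
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
interface State {
    boolean isComplete();       // no '#' fields remain
    List<State> successors();   // all combinations of worker moves
}
class ExhaustiveSearch {
    static State search(State start) {
        Queue<State> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            State state = queue.poll();
            if (state.isComplete()) return state;
            queue.addAll(state.successors());
        }
        return null; // only reachable if the map cannot be fully discovered
    }
}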
As mentioned in the comments: I think that finding the optimal solution may be an NP-complete problem. In any case, it's most likely at least a tremendously complicated optimization problem where you may employ some rather sophisticated techniques to find the optimal solution in optimal time.
So, IMHO, the only feasible approach for tackling this are heuristics.
Several approaches can be imagined here. However, I wanted to give it a try, with a very simple approach. The following MCVE accepts the definition of the map as a rectangular string (empty spaces represent "invalid" regions, so it's possible to represent non-rectangular maps with that). The workers are simply enumerated, from 0 to 9 (limited to this number, at the moment). The string is converted into a MapState that consists of the actual map, as well as the paths that the workers have gone through until then.
The actual search here is a "greedy" version of the exhaustive search that I described in the first paragraph: Given an initial state, it computes all successor states. These are the states where each worker has moved in either direction (e.g. 64 states for 3 workers - of course these are "filtered" to make sure that workers don't leave the map or move to the same field).
These successor states are stored in a list. Then it searches the list for the "best" state, and again computes all successors of this "best" state and stores them in the list. Sooner or later, the list contains a state where no fields are missing.
The definition of the "best" state is where the heuristics come into play: A state is "better" than another when there are fewer fields missing (unvisited). When two states have an equal number of missing fields, then the average distance of the workers to the next unvisited fields serves as the criterion to decide which one is "better".
This finds a solution for the example that is contained in the code below rather quickly, and prints it as the lists of positions that each worker has to visit in each turn.
Of course, this will also not be applicable to "really large" maps or "many" workers, because the list of states will grow rather quickly (one could consider dropping the "worst" solutions to speed this up a little, but this may have caveats, like being stuck in local optima). Additionally, one can easily think of cases where the "greedy" strategy does not give optimal results. But until someone posts an MCVE that always computes the optimal solution in polynomial time, maybe someone will find this interesting or helpful.
import java.awt.Point;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
public class MapExplorerTest
{
public static void main(String[] args)
{
String mapString =
" ### ######"+"\n"+
" ### ###1##"+"\n"+
"###############"+"\n"+
"#0#############"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"##### #######"+"\n"+
"##### #######"+"\n"+
"##### #######"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"###############"+"\n"+
"### ######2##"+"\n"+
"### #########"+"\n";
MapExplorer m = new MapExplorer(mapString);
MapState solution = m.computeSolutionGreedy();
System.out.println(solution.createString());
}
}
class MapState
{
private int rows;
private int cols;
private char map[][];
List<List<Point>> workerPaths;
private int missingFields = -1;
MapState(String mapString)
{
workerPaths = new ArrayList<List<Point>>();
rows = countLines(mapString);
cols = mapString.indexOf("\n");
map = new char[rows][cols];
String s = mapString.replaceAll("\\n", "");
for (int r=0; r<rows; r++)
{
for (int c=0; c<cols; c++)
{
int i = c+r*cols;
char ch = s.charAt(i);
map[r][c] = ch;
if (Character.isDigit(ch))
{
int workerIndex = ch - '0';
while (workerPaths.size() <= workerIndex)
{
workerPaths.add(new ArrayList<Point>());
}
Point p = new Point(r, c);
workerPaths.get(workerIndex).add(p);
}
}
}
}
MapState(MapState other)
{
this.rows = other.rows;
this.cols = other.cols;
this.map = new char[other.map.length][];
for (int i=0; i<other.map.length; i++)
{
this.map[i] = other.map[i].clone();
}
this.workerPaths = new ArrayList<List<Point>>();
for (List<Point> otherWorkerPath : other.workerPaths)
{
this.workerPaths.add(MapExplorer.copy(otherWorkerPath));
}
}
int distanceToMissing(Point p0)
{
if (getMissingFields() == 0)
{
return -1;
}
List<Point> points = new ArrayList<Point>();
Map<Point, Integer> distances = new HashMap<Point, Integer>();
distances.put(p0, 0);
points.add(p0);
while (!points.isEmpty())
{
Point p = points.remove(0);
List<Point> successors = MapExplorer.computeSuccessors(p);
for (Point s : successors)
{
// validity and the '#'-check must apply to the successor s, not the dequeued point p
if (!isValid(s))
{
continue;
}
if (map[s.x][s.y] == '#')
{
return distances.get(p)+1;
}
if (!distances.containsKey(s))
{
distances.put(s, distances.get(p)+1);
points.add(s);
}
}
}
return -1;
}
double averageDistanceToMissing()
{
double d = 0;
for (List<Point> workerPath : workerPaths)
{
Point p = workerPath.get(workerPath.size()-1);
d += distanceToMissing(p);
}
return d / workerPaths.size();
}
int getMissingFields()
{
if (missingFields == -1)
{
missingFields = countMissingFields();
}
return missingFields;
}
private int countMissingFields()
{
int count = 0;
for (int r=0; r<rows; r++)
{
for (int c=0; c<cols; c++)
{
if (map[r][c] == '#')
{
count++;
}
}
}
return count;
}
void update()
{
for (List<Point> workerPath : workerPaths)
{
Point p = workerPath.get(workerPath.size()-1);
for (int dr=-1; dr<=1; dr++)
{
for (int dc=-1; dc<=1; dc++)
{
if (dr == 0 && dc == 0)
{
continue;
}
int nr = p.x + dr;
int nc = p.y + dc;
if (!isValid(nr, nc))
{
continue;
}
if (map[nr][nc] != '#')
{
continue;
}
map[nr][nc] = '.';
}
}
}
}
public void updateWorkerPosition(int w, Point p)
{
List<Point> workerPath = workerPaths.get(w);
Point old = workerPath.get(workerPath.size()-1);
char oc = map[old.x][old.y];
char nc = map[p.x][p.y];
map[old.x][old.y] = nc;
map[p.x][p.y] = oc;
}
boolean isValid(int r, int c)
{
if (r < 0) return false;
if (r >= rows) return false;
if (c < 0) return false;
if (c >= cols) return false;
if (map[r][c] == ' ')
{
return false;
}
return true;
}
boolean isValid(Point p)
{
return isValid(p.x, p.y);
}
private static int countLines(String s)
{
int count = 0;
while (s.contains("\n"))
{
s = s.replaceFirst("\\\n", "");
count++;
}
return count;
}
public String createMapString()
{
StringBuilder sb = new StringBuilder();
for (int r=0; r<rows; r++)
{
for (int c=0; c<cols; c++)
{
sb.append(map[r][c]);
}
sb.append("\n");
}
return sb.toString();
}
public String createString()
{
StringBuilder sb = new StringBuilder();
for (List<Point> workerPath : workerPaths)
{
Point p = workerPath.get(workerPath.size()-1);
int d = distanceToMissing(p);
sb.append(workerPath).append(", distance: "+d+"\n");
}
sb.append(createMapString());
sb.append("Missing "+getMissingFields());
return sb.toString();
}
}
class MapExplorer
{
MapState mapState;
public MapExplorer(String mapString)
{
mapState = new MapState(mapString);
mapState.update();
computeSuccessors(mapState);
}
static List<Point> copy(List<Point> list)
{
List<Point> result = new ArrayList<Point>();
for (Point p : list)
{
result.add(new Point(p));
}
return result;
}
public MapState computeSolutionGreedy()
{
Comparator<MapState> comparator = new Comparator<MapState>()
{
@Override
public int compare(MapState ms0, MapState ms1)
{
int m0 = ms0.getMissingFields();
int m1 = ms1.getMissingFields();
if (m0 != m1)
{
return m0-m1;
}
double d0 = ms0.averageDistanceToMissing();
double d1 = ms1.averageDistanceToMissing();
return Double.compare(d0, d1);
}
};
Set<MapState> handled = new HashSet<MapState>();
List<MapState> list = new ArrayList<MapState>();
list.add(mapState);
while (true)
{
MapState best = list.get(0);
for (MapState mapState : list)
{
if (!handled.contains(mapState))
{
if (comparator.compare(mapState, best) < 0)
{
best = mapState;
}
}
}
if (best.getMissingFields() == 0)
{
return best;
}
handled.add(best);
list.addAll(computeSuccessors(best));
System.out.println("List size "+list.size()+", handled "+handled.size()+", best\n"+best.createString());
}
}
List<MapState> computeSuccessors(MapState mapState)
{
int numWorkers = mapState.workerPaths.size();
List<Point> oldWorkerPositions = new ArrayList<Point>();
for (int i=0; i<numWorkers; i++)
{
List<Point> workerPath = mapState.workerPaths.get(i);
Point p = workerPath.get(workerPath.size()-1);
oldWorkerPositions.add(p);
}
List<List<Point>> successorPositionsForWorkers = new ArrayList<List<Point>>();
for (int w=0; w<oldWorkerPositions.size(); w++)
{
Point p = oldWorkerPositions.get(w);
List<Point> ps = computeSuccessors(p);
successorPositionsForWorkers.add(ps);
}
List<List<Point>> newWorkerPositionsList = new ArrayList<List<Point>>();
int numSuccessors = (int)Math.pow(4, numWorkers);
for (int i=0; i<numSuccessors; i++)
{
String s = Integer.toString(i, 4);
while (s.length() < numWorkers)
{
s = "0"+s;
}
List<Point> newWorkerPositions = copy(oldWorkerPositions);
for (int w=0; w<numWorkers; w++)
{
int index = s.charAt(w) - '0';
Point newPosition = successorPositionsForWorkers.get(w).get(index);
newWorkerPositions.set(w, newPosition);
}
newWorkerPositionsList.add(newWorkerPositions);
}
List<MapState> successors = new ArrayList<MapState>();
for (int i=0; i<newWorkerPositionsList.size(); i++)
{
List<Point> newWorkerPositions = newWorkerPositionsList.get(i);
if (workerPositionsValid(newWorkerPositions))
{
MapState successor = new MapState(mapState);
for (int w=0; w<numWorkers; w++)
{
Point p = newWorkerPositions.get(w);
successor.updateWorkerPosition(w, p);
successor.workerPaths.get(w).add(p);
}
successor.update();
successors.add(successor);
}
}
return successors;
}
private boolean workerPositionsValid(List<Point> workerPositions)
{
Set<Point> set = new HashSet<Point>();
for (Point p : workerPositions)
{
if (!mapState.isValid(p.x, p.y))
{
return false;
}
set.add(p);
}
return set.size() == workerPositions.size();
}
static List<Point> computeSuccessors(Point p)
{
List<Point> result = new ArrayList<Point>();
result.add(new Point(p.x+0, p.y+1));
result.add(new Point(p.x+0, p.y-1));
result.add(new Point(p.x+1, p.y+0));
result.add(new Point(p.x-1, p.y+0));
return result;
}
}
I have a DataTable of 200,000 rows and want to validate each row against the codes in a list, codesList.
It is taking a very long time; I want to improve the performance.
for (int i = 0; i < dataTable.Rows.Count; i++)
{
bool isCodeValid = CheckIfValidCode(codevar, codesList,out CodesCount);
}
private bool CheckIfValidCode(string codevar, List<Codes> codesList, out int count)
{
List<Codes> tempcodes = codesList.Where(code => code.StdCode.Equals(codevar)).ToList();
count = tempcodes.Count;
bool retVal;
if (tempcodes.Count == 0)
{
retVal = false;
}
else
{
retVal = true;
}
return retVal;
}
codesList is a list which also contains 200,000 records. Please suggest improvements. I used FindAll, which takes the same time, and also a LINQ query, which takes the same time.
A few optimizations come to mind:
You could start by removing the ToList() altogether.
Replace the Count() with .Any(), which returns true if there are any items in the result.
It's probably also a lot faster when you replace the List with a HashSet<Codes> (this requires your Codes class to implement GetHashCode and Equals properly). Alternatively, you could populate a HashSet<string> with the contents of Codes.StdCode.
It looks like you're not using the out count at all. Removing it would make this method a lot faster, because computing a count requires you to check all codes.
You could also split the list into a Dictionary<char, List<Codes>> which you populate by taking the first character of each code. That would reduce the number of codes to check drastically, since you can exclude 95% of the codes by their first character.
Tell string.Equals to use a StringComparison of type Ordinal or OrdinalIgnoreCase to speed up the comparison.
It looks like you can stop processing a lot earlier as well; the use of .Any takes care of that in the second method. A similar construct can be used in the first: instead of using for and looping through each row, you could short-circuit after the first failure is found (unless this code is incomplete and you mark each row as invalid individually).
Something like:
private bool CheckIfValidCode(string codevar, List<Codes> codesList)
{
HashSet<string> codes = new HashSet<string>(codesList.Select(c => c.StdCode));
return codes.Contains(codevar);
// or: return codes.Any(c => string.Equals(codevar, c, StringComparison.Ordinal));
}
If you're adamant about the count:
private bool CheckIfValidCode(string codevar, List<Codes> codesList, out int count)
{
HashSet<string> codes = new HashSet<string>(codesList.Select(c => c.StdCode));
// the set holds each code at most once, so count will be 0 or 1
count = codes.Count(c => string.Equals(codevar, c, StringComparison.Ordinal));
return count > 0;
}
You can optimize further by creating the HashSet outside of the call and re-using the instance:
InCallingCode
{
...
HashSet<string> codes = new HashSet<string>(codesList.Select(c => c.StdCode));
for (/*loop*/) {
bool isValid = CheckIfValidCode(codevar, codes, out int count);
}
...
}
private bool CheckIfValidCode(string codevar, HashSet<string> codes, out int count)
{
count = codes.Count(c => string.Equals(codevar, c, StringComparison.Ordinal));
return count > 0;
}
Simply put, I want to check if a specified word exists or not.
The lookup needs to be very fast, which is why I decided to store the dictionary in a trie. So far so good! My trie works without issues. The problem is filling the trie with a dictionary. What I'm currently doing is looping through every line of a plain text file that is the dictionary and adding each word to my trie.
This is, understandably, an extremely slow process. The file contains just about 120,000 lines. If anyone could point me in the right direction for what I could do, it would be much appreciated!
This is how I add words to the trie (in Boo):
trie = Trie()
saol = Resources.Load("saol") as TextAsset
text = saol.text.Split(char('\n'))
for new_word in text:
trie.Add(new_word)
And this is my trie (in C#):
using System.Collections.Generic;
public class TrieNode {
public char letter;
public bool word;
public Dictionary<char, TrieNode> child;
public TrieNode(char letter) {
this.letter = letter;
this.word = false;
this.child = new Dictionary<char, TrieNode>();
}
}
public class Trie {
private TrieNode root;
public Trie() {
root = new TrieNode(' ');
}
public void Add(string word) {
TrieNode node = root;
bool found_letter;
int c = 1;
foreach (char letter in word) {
found_letter = false;
// if current letter is in child list, set current node and break loop
foreach (var child in node.child) {
if (letter == child.Key) {
node = child.Value;
found_letter = true;
break;
}
}
// if current letter is not in child list, add child node and set it as current node
if (!found_letter) {
TrieNode new_node = new TrieNode(letter);
node.child.Add(letter, new_node);
node = new_node;
}
// mark the node for the last letter even when the path already existed,
// so that words which are prefixes of earlier words are stored too
if (c == word.Length) node.word = true;
c ++;
}
}
public bool Find(string word) {
TrieNode node = root;
bool found_letter;
int c = 1;
foreach (char letter in word) {
found_letter = false;
// check if current letter is in child list
foreach (var child in node.child) {
if (letter == child.Key) {
node = child.Value;
found_letter = true;
break;
}
}
if (found_letter && node.word && c == word.Length) return true;
else if (!found_letter) return false;
c ++;
}
return false;
}
}
Assuming that you don't have any serious implementation problems, pay the price for populating the trie. After you've populated the trie, serialize it to a file. For future runs, just load the serialized version. That should be faster than reconstructing the trie.
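For illustration, a minimal sketch of that build-once/load-later idea, in Java (the trie class is assumed to be serializable; the names are hypothetical):
import java.io.*;
class TrieCache {
    // Pay the construction cost once, then persist the result.
    static void save(Serializable trie, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(trie);
        }
    }
    // On later runs, load the ready-made trie instead of rebuilding it.
    static Object load(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            return in.readObject();
        }
    }
}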
-- ADDED --
Looking closely at your TrieNode class, you may want to replace the Dictionary you used for child with an array. You may consume more space, but will have a faster lookup time.
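For example (a Java sketch, assuming a plain 26-letter lowercase alphabet):
class ArrayTrieNode {
    boolean word;                                     // a word ends at this node
    final ArrayTrieNode[] child = new ArrayTrieNode[26];
    ArrayTrieNode get(char letter) {
        return child[letter - 'a'];                   // O(1) lookup, no hashing
    }
}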
Anything you do with the CLI yourself will be slower than using the built-in functions.
120k lines is not that much for a dictionary.
The first thing I would do is fire up the code performance tool.
But just some wild guesses: you have a lot of function calls, starting with the Boo-to-C# binding in a for loop. Try to pass the whole text block and tear it apart in C#.
Second, do not loop through the Dictionary yourself. The way your code works now, you waste just about as many resources as the Dictionary is supposed to save; use its lookup directly.
Third, sort the text before you go inserting - you can probably make some optimizations that way. Maybe just construct a suffix table.
Given a set of strings (a large set) and an input string, you need to find all the anagrams of the input string efficiently. What data structure would you use, and using that, how would you find the anagrams?
Things that I have thought of are these:
Using maps
a) Eliminate all words with more/fewer letters than the input.
b) Put the input characters in a map.
c) For each remaining string, traverse the map and see if all its letters are present with the right counts.
Using Tries
a) Put all strings which have the right number of characters into a trie.
b) Traverse each branch and go deeper if the letter is contained in the input.
c) If a leaf is reached, the word is an anagram.
Can anyone find a better solution?
Are there any problems that you find in the above approaches?
Build a frequency-map from each word and compare these maps.
Pseudo code:
class Word
string word
map<char, int> frequency
Word(string w)
word = w
for char in word
int count = frequency.get(char)
if count == null
count = 0
count++
frequency.put(char, count)
boolean is_anagram_of(that)
return this.frequency == that.frequency
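A runnable Java version of this sketch (the class name is mine) could look like this:
import java.util.HashMap;
import java.util.Map;
class Word {
    final String word;
    final Map<Character, Integer> frequency = new HashMap<>();
    Word(String w) {
        word = w;
        // count how often each character occurs
        for (char c : w.toCharArray()) {
            frequency.merge(c, 1, Integer::sum);
        }
    }
    // two words are anagrams exactly when their frequency maps are equal
    boolean isAnagramOf(Word that) {
        return this.frequency.equals(that.frequency);
    }
}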
You could build a hashmap where the key is sorted(word), and the value is a list of all the words that, when sorted, give the corresponding key:
private Map<String, List<String>> anagrams = new HashMap<String, List<String>>();
void buildIndex(){
for(String word : words){
String sortedWord = sortWord(word);
if(!anagrams.containsKey(sortedWord)){
anagrams.put(sortedWord, new ArrayList<String>());
}
anagrams.get(sortedWord).add(word);
}
}
// helper assumed by the snippet above: sort the word's characters
private String sortWord(String word) {
char[] chars = word.toCharArray();
Arrays.sort(chars);
return new String(chars);
}
Then you just do a lookup for the sorted word in the hashmap you just built, and you'll have the list of all the anagrams.
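For example (hypothetical usage, assuming the index has been built and "listen" is the input):
List<String> result = anagrams.get(sortWord("listen")); // all stored anagrams of "listen"
if (result == null) { /* no anagrams of "listen" in the set */ }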
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
/*
* Program for finding anagrams in a given array of strings.
*
* The program's maximum time complexity is O(n) + O(k log k), where k is the length of a word.
*
* With the removal of sorting, the program's complexity is O(n).
* **/
public class FindAnagramsOptimized {
public static void main(String[] args) {
String[] words = { "gOd", "doG", "doll", "llod", "lold", "life",
"sandesh", "101", "011", "110" };
System.out.println(getAnaGram(words));
}
// Space Complexity O(n)
// Time Complexity O(nLogn)
static Set<String> getAnaGram(String[] allWords) {
// Internal Data Structure for Keeping the Values
class OriginalOccurence {
int occurence;
int index;
}
Map<String, OriginalOccurence> mapOfOccurence = new HashMap<>();
int count = 0;
// Loop Time Complexity is O(n)
// Space Complexity O(K), where K is the number of unique words after sorting on characters
for (String word : allWords) {
String key = sortedWord(word);
if (key == null) {
count++; // keep index aligned with allWords even when a word is skipped
continue;
}
if (!mapOfOccurence.containsKey(key)) {
OriginalOccurence original = new OriginalOccurence();
original.index = count;
original.occurence = 1;
mapOfOccurence.put(key, original);
} else {
OriginalOccurence tempVar = mapOfOccurence.get(key);
tempVar.occurence += 1;
mapOfOccurence.put(key, tempVar);
}
count++;
}
Set<String> finalAnagrams = new HashSet<>();
// Loop works in O(K), here K is unique words after sorting on
// characters
for (Map.Entry<String, OriginalOccurence> anaGramedWordList : mapOfOccurence.entrySet()) {
if (anaGramedWordList.getValue().occurence > 1) {
finalAnagrams.add(allWords[anaGramedWordList.getValue().index]);
}
}
return finalAnagrams;
}
// Arrays.sort works in O(n log n).
// Customized sorting for characters only works in O(n) time.
private static String sortedWord(String word) {
// int[] asciiArray = new int[word.length()];
int[] asciiArrayOf26 = new int[26];
// char[] lowerCaseCharacterArray = new char[word.length()];
// int characterSequence = 0;
// Ignore Case Logic written in lower level
for (char character : word.toCharArray()) {
if (character >= 97 && character <= 122) {
// asciiArray[characterSequence] = character;
if (asciiArrayOf26[character - 97] != 0) {
asciiArrayOf26[character - 97] += 1;
} else {
asciiArrayOf26[character - 97] = 1;
}
} else if (character >= 65 && character <= 90) {
// asciiArray[characterSequence] = character + 32;
if (asciiArrayOf26[character + 32 - 97] != 0) {
asciiArrayOf26[character + 32 - 97] += 1;
} else {
asciiArrayOf26[character + 32 - 97] = 1;
}
} else {
return null;
}
// lowerCaseCharacterArray[characterSequence] = (char)
// asciiArray[characterSequence];
// characterSequence++;
}
// Arrays.sort(lowerCaseCharacterArray);
StringBuilder sortedWord = new StringBuilder();
int asciiToIndex = 0;
// This Logic uses for reading the occurrences from array and copying
// back into the character array
for (int asciiValueOfCharacter : asciiArrayOf26) {
if (asciiValueOfCharacter != 0) {
if (asciiValueOfCharacter == 1) {
sortedWord.append((char) (asciiToIndex + 97));
} else {
for (int i = 0; i < asciiValueOfCharacter; i++) {
sortedWord.append((char) (asciiToIndex + 97));
}
}
}
asciiToIndex++;
}
// return new String(lowerCaseCharacterArray);
return sortedWord.toString();
}
}