4x4 2D character matrix permutations - algorithm

I have a 4x4 2D array of characters like this:
A B C D
U A L E
T S U G
N E Y I
Now, I need to find all the permutations of 3 characters, 4 characters, and so on up to 10.
So, some words that one could "find" out of this are TEN, BALD, BLUE, GUYS.
I searched SO and Googled for this, but found no concrete help. Can you push me in the right direction as to which algorithm I should learn (A* maybe?)? Please be gentle, as I'm no algorithms guy (aren't we all, well, at least a majority :)), but I'm willing to learn; I just don't know where exactly to start.

Ahhh, that's the game Boggle isn't it... You don't want permutations, you want a graph and you want to find words in the graph.
Well, I would start by arranging the characters as graph nodes, and join them to their immediate and diagonal neighbours.
Now you just want to search the graph. For each of the 16 starting nodes, you're going to do a recursion. As you move to a new node, you must flag it as being used so that you can't move to it again. When you leave a node (having completely searched it) you unflag it.
I hope you see where this is going...
For each node, you will visit each of its neighbours and add that character to a string. If you have built your dictionary with this search in mind, you will immediately be able to see whether the characters you have so far are the beginning of a word. This narrows the search nicely.
The kind of dictionary I'm talking about is where you have a tree whose nodes have one child for each letter of the alphabet. The beauty of these is that you only need to store which tree node you're currently up to in the search. If you decide you've found a word, you just backtrack via the parent nodes to work out which word it is.
Using this tree style along with a depth-first graph search, you can search ALL possible word lengths at the same time. That's about the most efficient way I can think of.
Let me just write a pseudocodish function for your graph search:
function FindWords( graphNode, dictNode, wordsList )
    # can't use a letter twice
    if graphNode.used then return

    # don't continue if the letter is not part of any word
    if not dictNode.hasChild(graphNode.letter) then return

    nextDictNode = dictNode.getChild(graphNode.letter)

    # if this dictionary node is flagged as a word, add it to our list
    if nextDictNode.isWord() then
        wordsList.addWord( nextDictNode.getWord() )
    end

    # now do a recursion on all our neighbours
    graphNode.used = true
    foreach nextGraphNode in graphNode.neighbours do
        FindWords( nextGraphNode, nextDictNode, wordsList )
    end
    graphNode.used = false
end
And of course, to kick the whole thing off:
foreach graphNode in graph do
    FindWords( graphNode, dictionary, wordsList )
end
All that remains is to build the graph and the dictionary. And I just remembered what that dictionary data structure is called! It's a Trie. If you need more space-efficient storage, you can compress into a Radix Tree or similar, but by far the easiest (and fastest) is to just use a straight Trie.
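To make the Trie part concrete, here is a minimal sketch in Python; the node layout and names are my own illustration rather than anything from the answer above:

class TrieNode:
    def __init__(self):
        self.children = {}   # letter -> TrieNode
        self.word = None     # set to the full word at terminal nodes

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.word = w        # flag this node as the end of a word
    return root

# Example: build_trie(["ten", "bald", "blue", "guys"]) gives a dictionary
# that a FindWords-style search can walk one letter at a time, pruning as
# soon as the current prefix has no children.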

Since you didn't specify a preferred language, I implemented it in C#:
// Offsets of the 8 neighbouring cells (horizontal, vertical and diagonal).
private static readonly int[] dx = new int[] { 1, 1, 1, 0, 0, -1, -1, -1 };
private static readonly int[] dy = new int[] { -1, 0, 1, 1, -1, -1, 0, 1 };
private static List<string> words;

private static List<string> GetAllWords(char[,] matrix, int d)
{
    words = new List<string>();
    bool[,] visited = new bool[4, 4];
    char[] result = new char[d];
    // Start a walk of length d from every cell of the board.
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            Go(matrix, result, visited, d, i, j);
    return words;
}

private static void Go(char[,] matrix, char[] result, bool[,] visited, int d, int x, int y)
{
    // Stop at cells that are off the board or already used on this path.
    if (x < 0 || x >= 4 || y < 0 || y >= 4 || visited[x, y])
        return;
    visited[x, y] = true;
    // Characters are stored back to front; since every path can also be walked
    // in the opposite direction, the full set of strings is still produced.
    result[d - 1] = matrix[x, y];
    if (d == 1)
    {
        // Path complete: record the collected characters.
        words.Add(new String(result));
    }
    else
    {
        // Try all 8 neighbours for the next character.
        for (int i = 0; i < 8; i++)
            Go(matrix, result, visited, d - 1, x + dx[i], y + dy[i]);
    }
    visited[x, y] = false;
}
Code to get results:
char[,] matrix = new char[,] { { 'A', 'B', 'C', 'D' }, { 'U', 'A', 'L', 'E' }, { 'T', 'S', 'U', 'G' }, { 'N', 'E', 'Y', 'I' } };
List<string> list = GetAllWords(matrix, 3);
Change parameter 3 to required text length.

It seems you just use the 4x4 matrix as an array of length 16. If that is the case, you can try the following recursive approach to generate permutations up to length k:
findPermutations(chars, i, highLim, downLim, candidate):
    if (i > downLim):
        print candidate
    if (i == highLim): //stop clause
        return
    for j in range(i, length(chars)):
        curr <- chars[j]
        candidate.append(curr)
        swap(chars, i, j) // make it unavailable for repicking
        findPermutations(chars, i+1, highLim, downLim, candidate)
        //clean up environment after recursive call:
        candidate.removeLast()
        swap(chars, i, j)
The idea is to print each "candidate" that has more chars than downLim (3 in your case), and to terminate when you reach the upper limit (highLim), which is 10 in your case.
At each time, you "guess" which character is the next to put - and you append it to the candidate, and recursively invoke to find the next candidate.
Repeat the process for all possible guesses.
Note that there are choose(16,10)*10! + choose(16,9)*9! + ... + choose(16,3)*3! different such permutations, so it might be time consuming...
If you want meaningful words, you are going to need some kind of dictionary (or to statistically extract one from some context) in order to match the candidates with the "real words".
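To get a feel for how large that search space is, here is a quick back-of-the-envelope check (my own, purely illustrative):

from math import comb, factorial

# Number of ordered selections of k cells out of 16, summed for k = 3..10:
# sum of C(16, k) * k!
total = sum(comb(16, k) * factorial(k) for k in range(3, 11))
print(total)  # about 3.4e10, so enumerating them blindly is hopeless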

Related

Quick way of finding how many substrings have their first and last character repeated inside

This is a problem about substrings that I created. I am wondering how to implement an O(n log(n)) solution to it, because the naive approach is pretty easy. Here is how it goes. You have a string S. S has many substrings. In some substrings, the first character and the last character each appear more than once. Count how many substrings have their first and last character appearing more than once.
Input: "ABCDCBE"
Expected output: 2
Explanation: "BCDCB" and "CDC" are two such substrings
In that test case, "BCDCB" and "CDC" happen to have the same first and last character.
There can be cases unlike the sample, e.g. the substring "ABABCAC", where the first character "A" appears 3 times and the last character "C" appears twice. "AAAABB" is another such substring.
"AAAAB" does not satisfy.
One O(n log(n)) technique I have learned that might or might not contribute to a solution is the Binary Indexed Tree. Binary Indexed Trees can somehow be used to solve this. There is also sorting and binary search, but first I want to focus on Binary Indexed Trees.
I am looking for a space complexity of O(n log(n)) or better.
Also Characters are in UTF-16
The gist of my solution is as follows:
Iterate over the input array, and, for each position, compute the amount of 'valid' substrings that end on that position. The sum of these values is the total amount of valid substrings. We achieve this by counting the amount of valid starts to a substring, that come before the current position, using a Binary Indexed Tree.
Now for the full detail:
As we iterate over the array we think of the current element as the end of a substring, and we say that the positions that are a valid start are those such that its value appears again between it, and the position we are currently iterating over. (i.e. if the value at the start of a substring appears at least twice in it)
For example:
data  = [1, 2, 3, 4, 1, 4, 3, 2]
valid = [1, 0, 1, 1, 0, 0, 0, 0]
index =  0  1  2  3  4  5  6  7
                           ^-- current index (6)
The first 1 (at index 0) is a valid start, because there is another 1 (at index 4) after it, but before the current index (index 6).
Now, counting the amount of valid starts that come before the current index gives us something pretty close to what we wanted, except that we may grab some substrings that don't have two appearances of the last value of the substring (i.e. the one we are currently iterating over)
For example:
data  = [1, 2, 3, 4, 1, 4, 3, 2]
valid = [1, 0, 1, 1, 0, 0, 0, 0]
index =  0  1  2  3  4  5  6  7
                  ^--------^
Here, the 4 is marked as a valid start (because there is another 4 that comes after it), but the corresponding substring does not have two 3s.
To fix this, we shall only consider valid starts up to the previous appearance of the current value. (this means that the substring will contain both the current value, and its previous appearance, thus, the last element will be in the substring at least twice)
The pseudocode goes as follows:
fn solve(arr) {
    answer := 0
    for i from 1 to length(arr) {
        previous_index := find_previous(arr, i)
        if there is a previous_index {
            arr[previous_index].is_valid_start = true
            answer += count_valid_starts_up_to_and_including(arr, previous_index)
        }
    }
    return answer
}
To implement these operations efficiently, we use a hash table for looking up the previous position of a value, and a Binary Indexed Tree (BIT) to keep track of and count the valid positions.
Thus, a more fleshed out pseudocode would look like
fn solve(arr) {
    n := length(arr)
    prev := hash_table{}
    bit := bit_indexed_tree{length = n}
    answer := 0
    for i from 1 to length(arr) {
        value := arr[i]
        previous_index := prev[value]
        if there is a previous_index {
            bit.update(previous_index, 1)
            answer += bit.query(previous_index)
        }
        prev[value] = i
    }
    return answer
}
Finally, since a pseudocode is not always enough, here is an implementation in C++, where the control flow is a bit munged, to ensure efficient usage of std::unordered_map (C++'s built-in hash table)
class Bit {
    std::vector<int> m_data;
public:
    // initialize BIT of size `n` with all 0s
    Bit(int n);
    // add `value` to index `i`
    void update(int i, int value);
    // sum from index 0 to index `i` (inclusive)
    int query(int i);
};

long long solve(std::vector<int> const& arr) {
    int const n = arr.size();
    std::unordered_map<int, int> prev_index;
    Bit bit(n);
    long long answer = 0;
    int i = 0;
    for (int value : arr) {
        auto insert_result = prev_index.insert({value, i});
        if (!insert_result.second) { // there is a previous index
            int j = insert_result.first->second;
            bit.update(j, 1);
            answer += bit.query(j);
            insert_result.first->second = i;
        }
        ++i;
    }
    return answer;
}
EDIT: For transparency, here is the Fenwick tree implementation I used to test this code
struct Bit {
    std::vector<int> m_data;
    Bit(int n) : m_data(n + 2, 0) { }
    int query(int i) {
        int res = 0;
        for (++i; i > 0; i -= i & -i) res += m_data[i];
        return res;
    }
    void update(int i, int x) {
        for (++i; i < m_data.size(); i += i & -i) m_data[i] += x;
    }
};
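If you want to sanity-check the BIT solution on small inputs, a brute-force reference like the following (my own sketch, deliberately naive) can be compared against it:

def solve_naive(arr):
    # Count substrings arr[i..j] whose first and last values each occur
    # at least twice inside the substring. Only meant for small tests.
    n = len(arr)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            sub = arr[i:j + 1]
            if sub.count(sub[0]) >= 2 and sub.count(sub[-1]) >= 2:
                count += 1
    return count

print(solve_naive(list("ABCDCBE")))  # 2, matching the expected output above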

Printing the permutations of a string using BFS or DFS

I am trying to print all the permutations of a string using recursion, as below. But I was wondering whether we can also use BFS or DFS to do this; am I thinking along the right lines?
If yes, then can you please give me an idea?
My idea is: if string = "abcd"
start node: 'a'
end node: 'd'
intermediate nodes: 'b' and 'c'
We can then change the start nodes to 'b','c' and 'd'.
I am having difficulty visualizing it in order to put it into an algorithm.
#include <stdio.h>
#include <string.h>   /* for strlen */

void swap(char *s, int i, int j)
{
    char temp = s[i];
    s[i] = s[j];
    s[j] = temp;
}

void foo(char *s, int j, int len)
{
    int i;
    if (j == len - 1) {
        printf("%s\n", s);
        return;
    }
    for (i = j; i < len; i++) {
        swap(s, i, j);
        foo(s, j + 1, len);
        swap(s, i, j);
    }
}

int main()
{
    char s[] = "abc";
    foo(s, 0, strlen(s));
}
Based on the logic given by Serge Rogatch, the problem can be solved as below:
def swap_two(s, i, j):
    return s[:i] + s[j] + s[i+1:j] + s[i] + s[j+1:]

def swaps(s):
    for i in range(1, len(s)):
        yield swap_two(s, 0, i)

def print_permutations(input, q):
    seen_list = []
    q.enqueue(input)
    while not q.isempty():
        data = q.dequeue()
        for i in swaps(data):
            if i not in seen_list:
                q.enqueue(i)
                seen_list.append(i)
    return seen_list

q = queue(512)
seen_list = print_permutations("abcd", q)
print(sorted(seen_list), len(seen_list))
The queue implementation is linked in the original post; any FIFO queue with enqueue, dequeue and isempty will do.
Your algorithm seems to already implement backtracking, which is one of the correct things to do for permuting. There is also a non-recursive algorithm based on tail inversion (I can't find the link; I don't remember its name precisely), and the QuickPerm algorithm: http://www.quickperm.org/quickperm.html
DFS and BFS visit every vertex exactly once. So if you really want to use them, then as vertices you should view permutations (whole strings like "abcd", "abdc", etc.) rather than separate characters like 'a', 'b', etc. Starting with some initial vertex like "abcd", you should try to swap each pair of characters and see whether the resulting vertex has already been visited. You can store the set of visited vertices in an unordered_set. So e.g. in "abcd", if you swap 'b' and 'c' you get "acbd", etc. This algorithm should produce each permutation, because for Heap's algorithm it suffices to swap just one pair of elements in each step: https://en.wikipedia.org/wiki/Heap%27s_algorithm
If you strictly want to emulate a graph traversal algorithm... here's an intuitive (probably not the most graceful) approach:
Think of the string as a graph, where each character is connected to every other character
Instead of trying to find a "path" from source to destination, frame the problem as follows: "find all paths of a specific length - from every source"
So start from the first character, use it as the "source"; then find all paths with length = length of the entire String... Then use the next character as the source...
Here's an implementation in python:
def permutations(s):
    g = _str_to_graph(s)  # {'a': ['b', 'c'], 'b': ['c', 'a'], 'c': ['a', 'b'] }
    branch = []
    visited = set()
    for i in s:  # use every character as a source
        dfs_all_paths_of_certain_length(i, len(s), branch, visited, g)

def _str_to_graph(s):
    from collections import defaultdict
    g = defaultdict(list)
    for i in range(len(s)):
        for j in range(len(s)):
            if i != j:
                g[s[i]].append(s[j])
    return g

def dfs_all_paths_of_certain_length(u, ll, branch, visited, g):
    visited.add(u)
    branch.append(u)
    if len(branch) == ll:  # if the branch is as long as the string, print it
        print("".join(branch))
    else:
        for n in g[u]:
            if n not in visited:
                dfs_all_paths_of_certain_length(n, ll, branch, visited, g)
    # backtrack
    visited.remove(u)
    branch.remove(u)
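A quick usage note (my own observation, not part of the answer): because the graph is keyed by character, this version assumes all characters in the string are distinct; repeated letters would collapse into a single node.

permutations("abc")  # prints abc, acb, bac, bca, cab, cba, one per line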
You can read this article:
http://en.cppreference.com/w/cpp/algorithm/next_permutation
Although this is a C++ implementation, you can easily transform it into a C version.
By the way, your method can be called a DFS!

How to write an iterative algorithm to generate all subsets of a set?

I wrote a recursive backtracking algorithm for finding all subsets of a given set.
#include <iostream>
#include <climits>

void backtracke(int* a, int k, int n)
{
    if (k == n)
    {
        for (int i = 1; i <= k; ++i)
        {
            if (a[i] == true)
            {
                std::cout << i << " ";
            }
        }
        std::cout << std::endl;
        return;
    }
    bool c[2];
    c[0] = false;
    c[1] = true;
    ++k;
    for (int i = 0; i < 2; ++i)
    {
        a[k] = c[i];
        backtracke(a, k, n);
        a[k] = INT_MAX;
    }
}
Now we have to write the same algorithm in an iterative form. How can it be done?
You can use the binary counter approach. Any unique binary string of length n represents a unique subset of a set of n elements. If you start with 0 and end with 2^n-1, you cover all possible subsets. The counter can be easily implemented in an iterative manner.
The code in Java:
public static void printAllSubsets(int[] arr) {
    byte[] counter = new byte[arr.length];
    while (true) {
        // Print combination
        for (int i = 0; i < counter.length; i++) {
            if (counter[i] != 0)
                System.out.print(arr[i] + " ");
        }
        System.out.println();
        // Increment counter
        int i = 0;
        while (i < counter.length && counter[i] == 1)
            counter[i++] = 0;
        if (i == counter.length)
            break;
        counter[i] = 1;
    }
}
Note that in Java one can use BitSet, which makes the code really shorter, but I used a byte array to illustrate the process better.
There are a few ways to write an iterative algorithm for this problem. The most commonly suggested would be to:
Count (i.e. a simple for-loop) from 0 to 2^numberOfElements - 1
If we look at the variable used above for counting in binary, the digit at each position can be thought of as a flag indicating whether or not the element at the corresponding index in the set should be included in this subset. Simply loop over each bit (by taking the remainder by 2, then dividing by 2), including the corresponding elements in our output.
Example:
Input: {1,2,3,4,5}.
We'd start counting at 0, which is 00000 in binary, which means no flags are set, so no elements are included (this would obviously be skipped if you don't want the empty subset) - output {}.
Then 1 = 00001, indicating that only the last element would be included - output {5}.
Then 2 = 00010, indicating that only the second last element would be included - output {4}.
Then 3 = 00011, indicating that the last two elements would be included - output {4,5}.
And so on, all the way up to 31 = 11111, indicating that all the elements would be included - output {1,2,3,4,5}.
* Actually code-wise, it would be simpler to turn this on its head - output {1} for 00001, considering that the first remainder by 2 will then correspond to the flag of the 0th element, the second remainder, the 1st element, etc., but the above is simpler for illustrative purposes.
More generally, any recursive algorithm could be changed to an iterative one as follows (a rough code sketch follows this list):
Create a loop consisting of parts (think switch-statement), with each part consisting of the code between any two recursive calls in your function
Create a stack where each element contains each necessary local variable in the function, and an indication of which part we're busy with
The loop would pop elements from the stack, executing the appropriate section of code
Each recursive call would be replaced by first adding its own state to the stack, and then the called state
Replace return with appropriate break statements
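As a rough illustration of that conversion (the function name and state layout are my own, and this is only a sketch of the idea, not a drop-in replacement for the C++ above), the subset recursion could be driven by an explicit stack like this:

def subsets_iterative(elements):
    # Each stack entry is (index, chosen_so_far): the state a recursive
    # call would otherwise carry on the call stack.
    results = []
    stack = [(0, [])]
    while stack:
        i, chosen = stack.pop()
        if i == len(elements):
            results.append(chosen)
            continue
        # The two "recursive calls": exclude or include elements[i].
        stack.append((i + 1, chosen))
        stack.append((i + 1, chosen + [elements[i]]))
    return results

print(subsets_iterative([1, 2, 3]))  # all 8 subsets, in some order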
A little Python implementation of George's algorithm. Perhaps it will help someone.
def subsets(S):
    l = len(S)
    for x in range(2**l):
        yield {s for i, s in enumerate(S) if (x >> i) & 1}  # is the i-th bit of x set?
Basically what you want is P(S) = S_0 U S_1 U ... U S_n, where S_i is the set of all sets obtained by taking i elements from S. In other words, if S = {a, b, c} then S_0 = {{}}, S_1 = {{a},{b},{c}}, S_2 = {{a, b}, {a, c}, {b, c}} and S_3 = {{a, b, c}}.
The algorithm we have so far is
set P(set S) {
    PS = {}
    for i in [0..|S|]
        PS = PS U Combination(S, i)
    return PS
}
We know that |S_i| = nCi where |S| = n. So basically we know that we will be looping nCi times. You may use this information to optimize the algorithm later on. To generate combinations of size i the algorithm that I present is as follows:
Suppose S = {a, b, c}; then you can map 0 to a, 1 to b and 2 to c. The permutations of these digits are (if i = 2): 0-0, 0-1, 0-2, 1-0, 1-1, 1-2, 2-0, 2-1, 2-2. To check whether a sequence is a combination, you check that the numbers are all unique and that, if you permute the digits, the sequence doesn't appear elsewhere; this filters the above list down to just 0-1, 0-2 and 1-2, which are later mapped back to {a,b}, {a,c}, {b,c}. To generate the long sequence above you can follow this algorithm:
set Combination(set S, integer l) {
    CS = {}
    for x in [0..|S|^l) {
        n = {}
        for i in [0..l) {
            n = n U {floor(x / |S|^i) mod |S|} // get the i-th digit of x in base |S|
        }
        CS = CS U {S[n]}
    }
    return filter(CS) // filtering described above
}
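For comparison, the P(S) = S_0 U S_1 U ... U S_n idea maps almost directly onto the standard library; a small sketch (mine, not part of the answer):

from itertools import combinations

def power_set(S):
    # Union of all i-element combinations, i = 0..|S|.
    return [set(c) for i in range(len(S) + 1) for c in combinations(S, i)]

print(power_set({'a', 'b', 'c'}))  # 8 subsets, from set() up to {'a', 'b', 'c'}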

Add the least amount of characters to make a palindrome

The question:
Given any string, add the least amount of characters possible to make it a palindrome in linear time.
I'm only able to come up with an O(N^2) solution.
Can someone help me with an O(N) solution?
Reverse the string
Use a modified Knuth-Morris-Pratt to find the latest match (the simplest modification would be to just append the original string to the reversed string and ignore matches after len(string)).
Append the unmatched rest of the reversed string to the original.
Steps 1 and 3 are obviously linear, and step 2 is linear because Knuth-Morris-Pratt is.
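As an illustration of that recipe, here is a rough Python sketch that uses the KMP prefix function on reverse(s) + '#' + s to find the longest palindromic suffix and then appends the rest, reversed (the '#' separator and all names are my own; it assumes '#' does not occur in the input):

def prefix_function(t):
    # Standard KMP failure table: pi[i] = length of the longest proper
    # prefix of t[:i+1] that is also a suffix of it.
    pi = [0] * len(t)
    k = 0
    for i in range(1, len(t)):
        while k and t[i] != t[k]:
            k = pi[k - 1]
        if t[i] == t[k]:
            k += 1
        pi[i] = k
    return pi

def make_palindrome_by_appending(s):
    if not s:
        return s
    # Longest palindromic suffix of s = longest prefix of reversed(s)
    # that is also a suffix of s.
    pi = prefix_function(s[::-1] + "#" + s)
    keep = pi[-1]
    return s + s[:len(s) - keep][::-1]

print(make_palindrome_by_appending("aaba"))  # aabaa (one character appended)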
If only appending is allowed
A Scala solution:
def isPalindrome(s: String) = s.view.reverse == s.view
def makePalindrome(s: String) =
s + s.take((0 to s.length).find(i => isPalindrome(s.substring(i))).get).reverse
If you're allowed to insert characters anywhere
Every palindrome can be viewed as a set of nested letter pairs.
a n n a        b o b
| | | |        | * |
| +-+ |        +---+
+-----+
If the palindrome length n is even, we'll have n/2 pairs. If it is odd, we'll have n/2 full pairs and one single letter in the middle (let's call it a degenerated pair).
Let's represent them by pairs of string indexes - the left index counted from the left end of the string, and the right index counted from the right end of the string, both ends starting with index 0.
Now let's write pairs starting from the outer to the inner. So in our example:
anna: (0, 0) (1, 1)
bob: (0, 0) (1, 1)
In order to make any string a palindrome, we will go from both ends of the string one character at a time, and with every step, we'll eventually add a character to produce a correct pair of identical characters.
Example:
Assume the input word is "blob"
Pair (0, 0) is (b, b) ok, nothing to do, this pair is fine. Let's increase the counter.
Pair (1, 1) is (l, o). Doesn't match. So let's add "o" at position 1 from the left. Now our word became "bolob".
Pair (2, 2). We don't need to look even at the characters, because we're pointing at the same index in the string. Done.
Wait a moment, but we have a problem here: in point 2. we arbitrarily chose to add a character on the left. But we could as well add a character "l" on the right. That would produce "blolb", also a valid palindrome. So does it matter? Unfortunately it does because the choice in earlier steps may affect how many pairs we'll have to fix and therefore how many characters we'll have to add in the future steps.
Easy algorithm: search all the possibilities. That would give us an O(2^n) algorithm.
Better algorithm: use Dynamic Programming approach and prune the search space.
In order to keep things simpler, now we decouple inserting of new characters from just finding the right sequence of nested pairs (outer to inner) and fixing their alignment later. So for the word "blob" we have the following possibilities, both ending with a degenerated pair:
(0, 0) (1, 2)
(0, 0) (2, 1)
The more such pairs we find, the less characters we will have to add to fix the original string. Every full pair found gives us two characters we can reuse. Every degenerated pair gives us one character to reuse.
The main loop of the algorithm will iteratively evaluate pair sequences in such a way, that in step 1 all valid pair sequences of length 1 are found. The next step will evaluate sequences of length 2, the third sequences of length 3 etc. When at some step we find no possibilities, this means the previous step contains the solution with the highest number of pairs.
After each step, we will remove the pareto-suboptimal sequences. A sequence is suboptimal compared to another sequence of the same length, if its last pair is dominated by the last pair of the other sequence. E.g. sequence (0, 0)(1, 3) is worse than (0, 0)(1, 2). The latter gives us more room to find nested pairs and we're guaranteed to find at least all the pairs that we'd find for the former. However sequence (0, 0)(1, 2) is neither worse nor better than (0, 0)(2, 1). The one minor detail we have to beware of is that a sequence ending with a degenerated pair is always worse than a sequence ending with a full pair.
After bringing it all together:
def makePalindrome(str: String): String = {
/** Finds the pareto-minimum subset of a set of points (here pair of indices).
* Could be done in linear time, without sorting, but O(n log n) is not that bad ;) */
def paretoMin(points: Iterable[(Int, Int)]): List[(Int, Int)] = {
val sorted = points.toSeq.sortBy(identity)
(List.empty[(Int, Int)] /: sorted) { (result, e) =>
if (result.isEmpty || e._2 <= result.head._2)
e :: result
else
result
}
}
/** Find all pairs directly nested within a given pair.
* For performance reasons tries to not include suboptimal pairs (pairs nested in any of the pairs also in the result)
* although it wouldn't break anything as prune takes care of this. */
def pairs(left: Int, right: Int): Iterable[(Int, Int)] = {
val builder = List.newBuilder[(Int, Int)]
var rightMax = str.length
for (i <- left until (str.length - right)) {
rightMax = math.min(str.length - left, rightMax)
val subPairs =
for (j <- right until rightMax if str(i) == str(str.length - j - 1)) yield (i, j)
subPairs.headOption match {
case Some((a, b)) => rightMax = b; builder += ((a, b))
case None =>
}
}
builder.result()
}
/** Builds sequences of size n+1 from sequence of size n */
def extend(path: List[(Int, Int)]): Iterable[List[(Int, Int)]] =
for (p <- pairs(path.head._1 + 1, path.head._2 + 1)) yield p :: path
/** Whether full or degenerated. Full-pairs save us 2 characters, degenerated save us only 1. */
def isFullPair(pair: (Int, Int)) =
pair._1 + pair._2 < str.length - 1
/** Removes pareto-suboptimal sequences */
def prune(sequences: List[List[(Int, Int)]]): List[List[(Int, Int)]] = {
val allowedHeads = paretoMin(sequences.map(_.head)).toSet
val containsFullPair = allowedHeads.exists(isFullPair)
sequences.filter(s => allowedHeads.contains(s.head) && (isFullPair(s.head) || !containsFullPair))
}
/** Dynamic-Programming step */
@tailrec
def search(sequences: List[List[(Int, Int)]]): List[List[(Int, Int)]] = {
val nextStage = prune(sequences.flatMap(extend))
nextStage match {
case List() => sequences
case x => search(nextStage)
}
}
/** Converts a sequence of nested pairs to a palindrome */
def sequenceToString(sequence: List[(Int, Int)]): String = {
val lStr = str
val rStr = str.reverse
val half =
(for (List(start, end) <- sequence.reverse.sliding(2)) yield
lStr.substring(start._1 + 1, end._1) + rStr.substring(start._2 + 1, end._2) + lStr(end._1)).mkString
if (isFullPair(sequence.head))
half + half.reverse
else
half + half.reverse.substring(1)
}
sequenceToString(search(List(List((-1, -1)))).head)
}
Note: The code does not list all the palindromes, but gives only one example, and it is guaranteed it has the minimum length. There usually are more palindromes possible with the same minimum length (O(2^n) worst case, so you probably don't want to enumerate them all).
O(n) time solution.
Algorithm:
We need to find the longest palindrome within the given string that contains the last character. Then add all the characters that are not part of that palindrome to the back of the string in reverse order.
Key point:
In this problem, the longest palindrome in the given string MUST contain the last character.
ex:
input: abacac
output: abacacaba
Here the longest palindrome in the input that contains the last letter is "cac". Therefore add all the letters before "cac" to the back in reverse order to make the entire string a palindrome.
Written in C#, with a few test cases commented out:
static public void makePalindrome()
{
//string word = "aababaa";
//string word = "abacbaa";
//string word = "abcbd";
//string word = "abacac";
//string word = "aBxyxBxBxyxB";
//string word = "Malayal";
string word = "abccadac";
int j = word.Length - 1;
int mark = j;
bool found = false;
for (int i = 0; i < j; i++)
{
char cI = word[i];
char cJ = word[j];
if (cI == cJ)
{
found = true;
j--;
if(mark > i)
mark = i;
}
else
{
if (found)
{
found = false;
i--;
}
j = word.Length - 1;
mark = j;
}
}
for (int i = mark-1; i >=0; i--)
word += word[i];
Console.Write(word);
}
}
Note that this code will give you the solution for the least number of letters to APPEND TO THE BACK to make the string a palindrome. If you want to append to the front, just have a second loop that goes the other way. This will make the algorithm O(n) + O(n) = O(n). If you want a way to insert letters anywhere in the string to make it a palindrome, then this code will not work for that case.
I believe @Chronical's answer is wrong, as it seems to be for the best-case scenario, not the worst case, which is what is used to compute big-O complexity. I welcome the proof, but the "solution" doesn't actually describe a valid answer.
KMP finds a matching substring in O(n + 2k) time, where n is the length of the input string and k is the length of the substring we're searching for, but it does not tell you in O(n) time what the longest palindrome in the input string is.
To solve this problem, we need to find the longest palindrome at the end of the string. If this longest suffix palindrome is of length x, the minimum number of characters to add is n - x. E.g. the string aaba's longest palindromic suffix is aba, of length 3, thus our answer is 1. The algorithm to find out whether a string is a palindrome takes O(n) time, whether using KMP or the more efficient and simpler algorithm (O(n/2)):
Take two pointers, one at the first character and one at the last character
Compare the characters at the pointers, if they're equal, move each pointer inward, otherwise return false
When the pointers point to the same index (odd string length), or have overlapped (even string length), return true
Using the simple algorithm, we start from the entire string and check if it's a palindrome. If it is, we return 0, and if not, we check the string string[1...end], string[2...end] until we have reached a single character and return n - 1. This results in a runtime of O(n^2).
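In code, that simple quadratic scan might look roughly like this (my own sketch, just to make the idea concrete):

def is_palindrome(s):
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

def chars_to_add(s):
    # Find the longest palindromic suffix s[i:]; we then need to append
    # the first i characters (reversed) to the back.
    for i in range(len(s)):
        if is_palindrome(s[i:]):
            return i
    return 0  # empty string

print(chars_to_add("aaba"))  # 1 (append "a" to get "aabaa")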
Splitting up the KMP algorithm into
Build table
Search for longest suffix palindrome
Building the table takes O(n) time, and then each check of "are you a palindrome?" for the substrings string[0...end], string[1...end], ..., string[end - 2...end] takes O(n) time. k in this case is the same factor of n that the simple algorithm takes to check each substring, because it starts as k = n, then goes through k = n - 1, k = n - 2..., just as the simple algorithm does.
TL;DR:
KMP can tell you whether a string is a palindrome in O(n) time, but that doesn't supply an answer to the question, because you still have to check whether each of the substrings string[0...end], string[1...end], ..., string[end - 2...end] is a palindrome, resulting in the same (but actually worse) runtime as a simple palindrome-check algorithm.
#include <iostream>
#include <string>
using std::cout;
using std::endl;
using std::cin;

int main() {
    std::string word, left("");
    cin >> word;
    size_t start, end;
    for (start = 0, end = word.length() - 1; start < end; end--) {
        if (word[start] != word[end]) {
            left.append(word.begin() + end, 1 + word.begin() + end);
            continue;
        }
        left.append(word.begin() + start, 1 + word.begin() + start), start++;
    }
    cout << left << (start == end ? std::string(word.begin() + end, 1 + word.begin() + end) : "")
         << std::string(left.rbegin(), left.rend()) << endl;
    return 0;
}
Don't know if it appends the minimum number, but it produces palindromes
Explained:
We will start at both ends of the given string and iterate inwards towards the center.
At each iteration, we check if each letter is the same, i.e. word[start] == word[end]?.
If they are the same, we append a copy of word[start] to another string called left, which, as its name suggests, will serve as the left-hand side of the new palindrome string when iteration is complete. Then we move both indices towards the center: (start)++ and (end)--.
In the case that they are not the same, we append a copy of word[end] to the same string left.
And this is the basics of the algorithm until the loop is done.
When the loop is finished, one last check is done to make sure that if we got an odd length palindrome, we append the middle character to the middle of the new palindrome formed.
Note that if you decide to append the opposite characters to the string left, the opposite of everything in the code becomes true; i.e. which index is advanced at each iteration, which one is advanced when a match is found, the order of printing the palindrome, etc. I don't want to go through it all again, but you can try it and see.
The running complexity of this code should be O(N) assuming that append method of the std::string class runs in constant time.
If someone wants to solve this in Ruby, the solution can be very simple:
str = 'xcbc' # Any string that you want.
arr1 = str.split('')
arr2 = arr1.reverse
count = 0
while (str != str.reverse)
  count += 1
  arr1.insert(count - 1, arr2[count - 1])
  str = arr1.join('')
end
puts str
puts str.length - arr2.count
I am assuming that you cannot replace or remove any existing characters?
A good start would be reversing one of the strings and finding the longest-common-substring (LCS) between the reversed string and the other string. Since it sounds like this is a homework or interview question, I'll leave the rest up to you.
Here, see this solution.
It is better than O(N^2).
The problem is subdivided into many smaller subproblems.
ex:
original "tostotor"
reversed "rototsot"
Here the 2nd position is 'o', so we divide into two problems by breaking the original string into "t" and "ostot".
For 't': the solution is 1.
For 'ostot': the solution is 2, because the LCS is "tot" and the characters that need to be added are "os".
So the total is 2 + 1 = 3.
# Python 2 code (uses xrange and integer division).
def shortPalin(S):
    k = 0
    lis = len(S)
    for i in range(len(S)/2):
        if S[i] == S[lis-1-i]:
            k = k+1
        else:
            break
    S = S[k:lis-k]
    lis = len(S)
    prev = 0
    w = len(S)
    tot = 0
    for i in range(len(S)):
        if i >= w:
            break
        elif S[i] == S[lis-1-i]:
            tot = tot+lcs(S[prev:i])
            prev = i
            w = lis-1-i
    tot = tot+lcs(S[prev:i])
    return tot

def lcs(S):
    if (len(S) == 1):
        return 1
    li = len(S)
    X = [0 for x in xrange(len(S)+1)]
    Y = [0 for l in xrange(len(S)+1)]
    for i in range(len(S)-1, -1, -1):
        for j in range(len(S)-1, -1, -1):
            if S[i] == S[li-1-j]:
                X[j] = 1+Y[j+1]
            else:
                X[j] = max(Y[j], X[j+1])
        Y = X
    return li-X[0]
print shortPalin("tostotor")
Using Recursion
#include <iostream>
using namespace std;

int length(char str[])
{
    int l = 0;
    for (int i = 0; str[i] != '\0'; i++, l++);
    return l;
}

int palin(char str[], int len)
{
    static int cnt;
    int s = 0;
    int e = len - 1;
    while (s < e) {
        if (str[s] != str[e]) {
            cnt++;
            return palin(str + 1, len - 1);
        }
        else {
            s++;
            e--;
        }
    }
    return cnt;
}

int main() {
    char str[100];
    cin.getline(str, 100);
    int len = length(str);
    cout << palin(str, len);
}
Solution with O(n) time complexity
public static void main(String[] args) {
    String givenStr = "abtb";
    String palindromeStr = covertToPalindrome(givenStr);
    System.out.println(palindromeStr);
}

private static String covertToPalindrome(String str) {
    char[] strArray = str.toCharArray();
    int low = 0;
    int high = strArray.length - 1;
    int subStrIndex = -1;
    while (low < high) {
        if (strArray[low] == strArray[high]) {
            high--;
        } else {
            high = strArray.length - 1;
            subStrIndex = low;
        }
        low++;
    }
    return str + (new StringBuilder(str.substring(0, subStrIndex + 1))).reverse().toString();
}
// string to append to convert it to a palindrome
public static void main(String args[])
{
    // read the input string from standard input
    String s = new java.util.Scanner(System.in).nextLine();
    System.out.println(min_operations(s));
}

static String min_operations(String str)
{
    int i = 0;
    int j = str.length() - 1;
    String ans = "";
    while (i < j)
    {
        if (str.charAt(i) != str.charAt(j))
        {
            ans = ans + str.charAt(i);
        }
        if (str.charAt(i) == str.charAt(j))
        {
            j--;
        }
        i++;
    }
    StringBuffer sd = new StringBuffer(ans);
    sd.reverse();
    return (sd.toString());
}

Algorithm to find the smallest snippet from searching a document?

I've been going through Skiena's excellent "The Algorithm Design Manual" and got hung up on one of the exercises.
The question is:
"Given a search string of three words, find the smallest snippet of the document that contains all three of the search words—i.e. , the snippet with smallest number of words in it. You are given the index positions where these words in occur search strings, such as word1: (1, 4, 5), word2: (4, 9, 10), and word3: (5, 6, 15). Each of the lists are in sorted order, as above."
Anything I come up with is O(n^2)... This question is in the "Sorting and Searching" chapter, so I assume there is a simple and clever way to do it. I'm trying something with graphs right now, but that seems like overkill.
Ideas?
Thanks
Unless I've overlooked something, here's a simple, O(n) algorithm:
We'll represent the snippet by (x, y) where x and y are where the snippet begins and ends respectively.
A snippet is feasible if it contains all 3 search words.
We will start with the infeasible snippet (0,0).
Repeat the following until y reaches end-of-string:
If the current snippet (x, y) is feasible, proceed to the snippet (x+1, y)
Else (the current snippet is infeasible) proceed to the snippet (x, y+1)
Choose the shortest snippet among all feasible snippets we went through.
Running time - in each iteration either x or y is increased by 1; clearly x can't exceed y and y can't exceed the string length, so the total number of iterations is O(n). Also, feasibility can be checked in O(1) in this case, since we can track how many occurrences of each word are within the current snippet. We can maintain this count in O(1) with each increase of x or y by 1.
Correctness - For each x, we calculate the minimal feasible snippet (x, ?). Thus we must pass over the overall minimal snippet. Also, if y is the smallest y such that (x, y) is feasible, then for any feasible snippet (x+1, y') we have y' >= y (this bit is why this algorithm is linear and the others aren't).
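A rough sketch of that scan in Python, written over merged (position, word) events rather than raw string positions (the data layout and names are my own, not the answer's):

from collections import Counter

def smallest_snippet(occurrences):
    # occurrences: one sorted position list per search word.
    # Merge into (position, word_id) events sorted by position.
    events = sorted((pos, w) for w, positions in enumerate(occurrences)
                    for pos in positions)
    need = len(occurrences)
    counts = Counter()
    have = 0
    best = None
    left = 0
    for pos_r, word_r in events:
        counts[word_r] += 1
        if counts[word_r] == 1:
            have += 1
        # Shrink from the left while the window still contains every word.
        while have == need:
            pos_l, word_l = events[left]
            length = pos_r - pos_l + 1
            if best is None or length < best:
                best = length
            counts[word_l] -= 1
            if counts[word_l] == 0:
                have -= 1
            left += 1
    return best

print(smallest_snippet([[1, 4, 5], [4, 9, 10], [5, 6, 15]]))  # 2 (positions 4..5)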
I already posted a rather straightforward algorithm that solves exactly that problem in this answer
Google search results: How to find the minimum window that contains all the search keywords?
However, in that question we assumed that the input is represented by a text stream and the words are stored in an easily searchable set.
In your case the input is represented slightly differently: as a bunch of vectors with sorted positions for each word. This representation is easily transformable to what is needed for the above algorithm by simply merging all these vectors into a single vector of (position, word) pairs ordered by position. It can be done literally, or it can be done "virtually", by placing the original vectors into the priority queue (ordered in accordance with their first elements). Popping an element from the queue in this case means popping the first element from the first vector in the queue and possibly sinking the first vector into the queue in accordance with its new first element.
Of course, since your statement of the problem explicitly fixes the number of words as three, you can simply check the first elements of all three arrays and pop the smallest one at each iteration. That gives you a O(N) algorithm, where N is the total length of all arrays.
Also, your statement of the problem seems to suggest that target words can overlap in the text, which is rather strange (given that you use the term "word"). Is it intentional? In any case, it doesn't present any problem for the above linked algorithm.
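The "virtual merge" described above can be sketched with a heap of (position, list id, index) triples; the following is my own illustration of the idea:

import heapq

def merged_positions(lists):
    # Lazily merge k sorted position lists into one (position, word_id)
    # stream, popping the globally smallest head each time.
    heap = [(lst[0], w, 0) for w, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    while heap:
        pos, w, i = heapq.heappop(heap)
        yield pos, w
        if i + 1 < len(lists[w]):
            heapq.heappush(heap, (lists[w][i + 1], w, i + 1))

print(list(merged_positions([[1, 4, 5], [4, 9, 10], [5, 6, 15]])))
# [(1, 0), (4, 0), (4, 1), (5, 0), (5, 2), (6, 2), (9, 1), (10, 1), (15, 2)]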
From the question, it seems that you're given the index locations for each of your n “search words” (word1, word2, word3, ..., word n) in the document. Using a sorting algorithm, the n independent arrays associated with search words can readily be represented as a single array of all the index locations in ascending numerical order and a word label associated with each index in the array (the index array).
The Basic Algorithm:
(Designed to work whether or not the poster of this question intended to allow two different search words to coexist at the same index number.)
First, we define a simple function for measuring the length of a snippet that contains all n labels given a starting point in the index array. (It is obvious from the definition of our array that any starting point on the array will necessarily be the indexed location of one of the n search labels.) The function simply keeps track of the unique search labels seen as the function iterates through the elements in the array until all n labels have been observed. The length of the snippet is defined as the difference between the index of the last unique label found and the index of the starting point in the index array (the first unique label found). If all n labels aren't observed before the end of the array the function returns a null value.
Now, the snippet length function can be run for each element in your array to associate a snippet size containing all n search words starting from each element in the array. The smallest non-Null value returned by the snippet length function over the whole index array is the snippet in your document that you're looking for.
Necessary Optimizations:
Keep track of the value of the current shortest snippet length so that the value will be known immediately after iterating once through the index array.
When iterating through your array terminate the snippet length function if the current snippet under inspection ever surpasses the length of the shortest snippet length previously seen.
When the snippet length function returns null for not locating all n search words in the remaining index array elements, associate a null snippet length to all successive elements in the index array.
If the snippet length function is applied to a word label and the label immediately following it is identical to the starting label, assign a null value to the starting label and move on to the next label.
Computational Complexity:
Obviously the sorting part of the algorithm can be arranged in O(n log n).
Here's how I would work out the time complexity of the second part of the algorithm (any critiques and corrections would be greatly appreciated).
In the best-case scenario, the algorithm only applies the snippet length function to the first element in the index array and finds that no snippet containing all the search words exists. This scenario would be computed in just n calculations, where n is the size of the index array. Slightly worse than that is if the smallest snippet turns out to be equal to the size of the whole array. In this case the computational complexity will be a little less than 2n (once through the array to find the smallest snippet length, a second time to demonstrate that no other snippets exist). The shorter the average computed snippet length, the more times the snippet length function will need to be applied over the index array. We can assume that our worst-case scenario will be the case where the snippet length function needs to be applied to every element in the index array. To develop a case where the function will be applied to every element in the index array, we need to design an index array where the average snippet length over the whole index array is negligible in comparison to the size of the index array as a whole. Using this case we can write out our computational complexity as O(C n), where C is some constant that is significantly smaller than n. Giving a final computational complexity of:
O(n log n + C n)
Where:
C << n
Edit:
AndreyT correctly points out that instead of sorting the word indices in n log n time, one might just as well merge them (since the sub-arrays are already sorted) in n log m time, where m is the number of search-word arrays to be merged. This will obviously speed up the algorithm in cases where m < n.
O(n log k) solution, where n is the total number of indices and k is the number of words. The idea is to use a heap to identify the smallest index at each iteration, while also keeping track of the maximum index in the heap. I also put the coordinates of each value in the heap, in order to be able to retrieve the next value in constant time.
#include <algorithm>
#include <cassert>
#include <limits>
#include <queue>
#include <vector>
using namespace std;
int snippet(const vector< vector<int> >& index) {
// (-index[i][j], (i, j))
priority_queue< pair< int, pair<size_t, size_t> > > queue;
int nmax = numeric_limits<int>::min();
for (size_t i = 0; i < index.size(); ++i) {
if (!index[i].empty()) {
int cur = index[i][0];
nmax = max(nmax, cur);
queue.push(make_pair(-cur, make_pair(i, 0)));
}
}
int result = numeric_limits<int>::max();
while (queue.size() == index.size()) {
int nmin = -queue.top().first;
size_t i = queue.top().second.first;
size_t j = queue.top().second.second;
queue.pop();
result = min(result, nmax - nmin + 1);
j++;
if (j < index[i].size()) {
int next = index[i][j];
nmax = max(nmax, next);
queue.push(make_pair(-next, make_pair(i, j)));
}
}
return result;
}
int main() {
int data[][3] = {{1, 4, 5}, {4, 9, 10}, {5, 6, 15}};
vector<vector<int> > index;
for (int i = 0; i < 3; i++) {
index.push_back(vector<int>(data[i], data[i] + 3));
}
assert(snippet(index) == 2);
}
Sample implementation in Java (tested only on the example input; there might be bugs). The implementation is based on the replies above.
import java.util.Arrays;
public class SmallestSnippet {
WordIndex[] words; //merged array of word occurences
public enum Word {W1, W2, W3};
public SmallestSnippet(Integer[] word1, Integer[] word2, Integer[] word3) {
this.words = new WordIndex[word1.length + word2.length + word3.length];
merge(word1, word2, word3);
System.out.println(Arrays.toString(words));
}
private void merge(Integer[] word1, Integer[] word2, Integer[] word3) {
int i1 = 0;
int i2 = 0;
int i3 = 0;
int wordIdx = 0;
while(i1 < word1.length || i2 < word2.length || i3 < word3.length) {
WordIndex wordIndex = null;
Word word = getMin(word1, i1, word2, i2, word3, i3);
if (word == Word.W1) {
wordIndex = new WordIndex(word, word1[i1++]);
}
else if (word == Word.W2) {
wordIndex = new WordIndex(word, word2[i2++]);
}
else {
wordIndex = new WordIndex(word, word3[i3++]);
}
words[wordIdx++] = wordIndex;
}
}
//determine which word has the smallest index
private Word getMin(Integer[] word1, int i1, Integer[] word2, int i2, Integer[] word3,
int i3) {
Word toReturn = Word.W1;
if (i1 == word1.length || (i2 < word2.length && word2[i2] < word1[i1])) {
toReturn = Word.W2;
}
if (toReturn == Word.W1 && i3 < word3.length && word3[i3] < word1[i1])
{
toReturn = Word.W3;
}
else if (toReturn == Word.W2){
if (i2 == word2.length || (i3 < word3.length && word3[i3] < word2[i2])) {
toReturn = Word.W3;
}
}
return toReturn;
}
private Snippet calculate() {
int start = 0;
int end = 0;
int max = words.length;
Snippet minimum = new Snippet(words[0].getIndex(), words[max-1].getIndex());
while (start < max)
{
end = start;
boolean foundAll = false;
boolean found[] = new boolean[Word.values().length];
while (end < max && !foundAll) {
found[words[end].getWord().ordinal()] = true;
boolean complete = true;
for (int i=0 ; i < found.length && complete; i++) {
complete = found[i];
}
if (complete)
{
foundAll = true;
}
else {
if (words[end].getIndex()-words[start].getIndex() == minimum.getLength())
{
// we won't find a minimum no need to search further
break;
}
end++;
}
}
if (foundAll && words[end].getIndex()-words[start].getIndex() < minimum.getLength()) {
minimum.setEnd(words[end].getIndex());
minimum.setStart(words[start].getIndex());
}
start++;
}
return minimum;
}
/**
* #param args
*/
public static void main(String[] args) {
Integer[] word1 = {1,4,5};
Integer[] word2 = {3,9,10};
Integer[] word3 = {2,6,15};
SmallestSnippet smallestSnippet = new SmallestSnippet(word1, word2, word3);
Snippet snippet = smallestSnippet.calculate();
System.out.println(snippet);
}
}
Helper classes:
public class Snippet {
private int start;
private int end;
//getters, setters etc
public int getLength()
{
return Math.abs(end - start);
}
}
public class WordIndex
{
private SmallestSnippet.Word word;
private int index;
public WordIndex(SmallestSnippet.Word word, int index) {
this.word = word;
this.index = index;
}
}
The other answers are all right, but if, like me, you're having trouble understanding the question in the first place, they aren't really helpful. Let me rephrase the question:
Given three sets of integers (call them A, B, and C), find the minimum contiguous range that contains one element from each set.
There is some confusion about what the three sets are. The 2nd edition of the book states them as {1, 4, 5}, {4, 9, 10}, and {5, 6, 15}. However, another version that has been stated in a comment above is {1, 4, 5}, {3, 9, 10}, and {2, 6, 15}. If one word is not a suffix/prefix of another, version 1 isn't possible, so let's go with the second one.
Since a picture is worth a thousand words, let's plot the points:

position:   1   2   3   4   5   6   9  10  15
word:       A   C   B   A   A   C   B   B   C

Simply inspecting the above, we can see that there are two answers to this question: [1,3] and [2,4], both of size 3 (three points in each range).
Now, the algorithm. The idea is to start with the smallest valid range, and incrementally try to shrink it by moving the left boundary inwards. We will use zero-based indexing.
MIN-RANGE(A, B, C)
    i = j = k = 0
    minSize = +∞
    while i, j and k are valid indices of their respective arrays, do
        ans = (A[i], B[j], C[k])
        size = max(ans) - min(ans) + 1
        minSize = min(size, minSize)
        x = argmin(ans)
        increment x by 1
    done
    return minSize
where argmin is the index of the smallest element in ans.
+---+---+---+---+--------------------+---------+
| n | i | j | k | (A[i], B[j], C[k]) | minSize |
+---+---+---+---+--------------------+---------+
| 1 | 0 | 0 | 0 | (1, 3, 2) | 3 |
+---+---+---+---+--------------------+---------+
| 2 | 1 | 0 | 0 | (4, 3, 2) | 3 |
+---+---+---+---+--------------------+---------+
| 3 | 1 | 0 | 1 | (4, 3, 6) | 4 |
+---+---+---+---+--------------------+---------+
| 4 | 1 | 1 | 1 | (4, 9, 6) | 6 |
+---+---+---+---+--------------------+---------+
| 5 | 2 | 1 | 1 | (5, 9, 6) | 5 |
+---+---+---+---+--------------------+---------+
| 6 | 3 | 1 | 1 | | |
+---+---+---+---+--------------------+---------+
n = iteration
At each step, one of the three indices is incremented, so the algorithm is guaranteed to eventually terminate. In the worst case, i, j, and k are incremented in that order, and the algorithm runs in O(n^2) (9 in this case) time. For the given example, it terminates after 5 iterations.
O(n)
Pair find(int[][] indices) {
    pair.lBound = max int;
    pair.rBound = 0;
    index = 0;
    for i from 0 to indices.length {
        if (pair.lBound > indices[i][0]) {
            pair.lBound = indices[i][0]
            index = i;
        }
        if (indices[index].length > 0)
            pair.rBound = max(pair.rBound, indices[i][0])
    }
    remove indices[index][0]
    return min(pair, find(indices))
}
