Given a string a = "1234567891234567891234567", what's the best way to run a function f() on each of the prefixes ["1", "12", "123", "1234", etc, "123...7"]? (Note that only substrings starting at index 0 count, so no "23" or "234" in the list.)
I have two proposed methods. The first one seems faster, but I'm not sure it is optimal.
Building up the string and checking along the way:
b=""
for i in range(0,len(a)):
b=b+a[i]
f(b)
Using a list comprehension:
for i in range(1, len(a) + 1):
    b = ''.join([a[j] for j in range(0, i)])
    f(b)
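For reference, the same prefixes can also be produced directly with slicing; a minimal sketch (whether this beats building b incrementally presumably depends on f and the length of a):

for i in range(1, len(a) + 1):
    f(a[:i])  # a[:i] is the prefix of length i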
I'm asking because this is a small part of a bigger problem I'm working on, where you have a long string and, for each position, you check the substrings of length 1, 2, 3, ..., n starting at that position.
I have come across the following problem statement:
You have a sentence written entirely in a single row. You would like to split it into several rows by replacing some of the spaces
with "new row" indicators. Your goal is to minimize the width of the
longest row in the resulting text ("new row" indicators do not count
towards the width of a row). You may replace at most K spaces.
You will be given a sentence and a K. Split the sentence using the
procedure described above and return the width of the longest row.
I am a little lost as to where to start. It seems I need to figure out every possible split of the sentence that satisfies the criteria, using at most K breaks.
I can see a couple of edge cases:
There are at most K + 1 words in the sentence (so every space can be replaced); return the length of the longest word.
The sentence length is 0; return 0.
If neither of those criteria holds, then we have to determine all possible ways of splitting the sentence and return the minimum over all those options. This is the part I don't know how to do (and it is obviously the heart of the problem).
You can solve it by inverting the problem. Say I fix the length of the longest row to L: can you compute the minimum number of breaks needed to satisfy it?
Yes: just break before the first word that would go over L and count the breaks as you go (O(N)).
So now we just have to find the minimum L that requires at most K breaks. You can binary search over the length of the input, for a final complexity of O(N log N).
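For reference, a minimal sketch of this approach in Python (the function and variable names are mine, not from the problem statement):

def breaks_needed(words, L):
    # Greedy: start a new row before the first word that would push the
    # current row's width over L.
    breaks, row = 0, 0
    for w in words:
        if row == 0:
            row = len(w)
        elif row + 1 + len(w) <= L:
            row += 1 + len(w)  # keep the word on this row (plus one space)
        else:
            breaks += 1
            row = len(w)
    return breaks

def min_longest_row(sentence, K):
    words = sentence.split()
    if not words:
        return 0
    # The answer lies between the longest word and the full sentence length.
    lo, hi = max(len(w) for w in words), len(sentence)
    while lo < hi:
        mid = (lo + hi) // 2
        if breaks_needed(words, mid) <= K:
            hi = mid
        else:
            lo = mid + 1
    return lo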
First Answer
What you want to achieve is Minimum Raggedness. If you just want the algorithm, it is available as a PDF. If the research paper's link goes bad, please search for the famous paper Breaking Paragraphs into Lines by Knuth.
However, if you want to get your hands on some implementations, see the question Balanced word wrap (Minimum raggedness) in PHP on SO, where people have given implementations not only in PHP but in C, C++, and bash as well.
Second Answer
Though this is not an exactly correct approach, it is quick and dirty if you are looking for something like that. This method will not return the correct answer for every case; it is for those for whom time to ship their product matters more.
Idea
You already know the length of your input string; call it L.
When putting in K breaks, the best-case scenario is being able to break the string into parts of exactly L / (K + 1) size.
So break your string at the word that leaves the resulting part's length closest to L / (K + 1).
My recursive solution, which can be improved through memoization or dynamic programming (a sketch of the memoized variant follows the code):
def split(self, sentence, K):
    if not sentence:
        return 0
    # No space to break at, or no breaks left: this piece is a single row.
    if ' ' not in sentence or K == 0:
        return len(sentence)
    spaces = [i for i, s in enumerate(sentence) if s == ' ']
    res = float('inf')
    for space in spaces:
        # Break at this space: the prefix has width `space`; recurse on the rest.
        res = min(res, max(space, self.split(sentence[space + 1:], K - 1)))
    return res
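A possible memoized variant, as a sketch (rewritten as a plain function so that functools.lru_cache can cache on the (sentence, K) arguments):

from functools import lru_cache

@lru_cache(maxsize=None)
def split(sentence, K):
    if not sentence:
        return 0
    if ' ' not in sentence or K == 0:
        return len(sentence)
    res = float('inf')
    for space, ch in enumerate(sentence):
        if ch == ' ':
            # Break here: the prefix has width `space`; recurse on the rest.
            res = min(res, max(space, split(sentence[space + 1:], K - 1)))
    return res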
In the Longest Common Subsequence (LCS) problem, why do we match the last characters of the strings? For example:
Consider the input strings “AGGTAB” and “AXTXAYB”. Last characters match for the strings. So length of LCS can be written as:
L(“AGGTAB”, “AXTXAYB”) = 1 + L(“AGGTA”, “AXTXAY”)
Wouldn't the algorithm still produce the optimal result if we matched the first characters of the strings? For example:
Consider the input strings “AGGTAB” and “AXTXAYB”. First characters match for the strings. So length of LCS can be written as:
L(“AGGTAB”, “AXTXAYB”) = 1 + L(“GGTAB”, “XTXAYB”)
LCS problem: Longest Common Subsequence Problem
Yes, this is the same thing.
Computing the LCS of two reversed sequences is the same as reversing the LCS of two sequences before reversal. In other words,
REVERSE(LCS(A,B)) = LCS(REVERSE(A), REVERSE(B))
Assuming that the LCS reduces from the end, the operation on the right would go from the opposite end, but achieve the same result.
That's why you can work with prefixes in the same way that they work with suffixes in the explanation: you would get the same kind of recursive reduction in the process.
Moreover, you can do reductions on both ends if you wish. However, this would complicate the algorithm a lot without giving you any speedup in return.
Well, it turns out that if you perform LCS from the end, you can directly use the length variables (say M, N) provided by the user in the recursion. If you start from the front, you have to create extra index variables. That's the reason the former method is considered standard; otherwise there is no complexity difference and everything is the same.
LCS(M, N)
{
    if (M == 0 || N == 0)
        return 0;
    else if (a[M] != b[N])
        return max(LCS(M, N-1), LCS(M-1, N));
    else
        return 1 + LCS(M-1, N-1);
}
Yes, you could do that; it will not change the time complexity. Starting from the end is just a matter of convention.
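For illustration, a sketch of the front-matching recursion in Python; the index variables i and j are exactly the extra bookkeeping the previous answer refers to:

def lcs_from_front(a, b, i=0, j=0):
    # i and j track the current start positions within a and b
    if i == len(a) or j == len(b):
        return 0
    if a[i] == b[j]:
        return 1 + lcs_from_front(a, b, i + 1, j + 1)
    return max(lcs_from_front(a, b, i + 1, j),
               lcs_from_front(a, b, i, j + 1))

lcs_from_front("AGGTAB", "AXTXAYB") returns 4, the same answer as the suffix-based version.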
Say there is a set of words and I would like to cluster them based on their char bag (multiset). For example,
{tea, eat, abba, aabb, hello}
will be clustered into
{{tea, eat}, {abba, aabb}, {hello}}.
abba and aabb are clustered together because they have the same char bag, i.e. two a's and two b's.
To make it efficient, a naive way I can think of is to convert each word into a char-count string; for example, abba and aabb will both be converted to a2b2, and tea/eat will be converted to a1e1t1. Then I can build a dictionary and group words with the same key.
Two issues here: first, I have to sort the chars to build the key; second, the string key looks awkward, and performance is not as good as with char/int keys.
Is there a more efficient way to solve the problem?
For detecting anagrams you can use a hashing scheme based on the product of prime numbers: A -> 2, B -> 3, C -> 5, etc. This gives "abba" == "aabb" == 36 (though a different letter-to-prime mapping may work better).
See my answer here.
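A minimal sketch of the prime-product scheme (the letter-to-prime mapping shown is illustrative; products can grow large for long words, though Python integers are unbounded):

# the first 26 primes, mapped to 'a'..'z'
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
          43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

def char_bag_key(word):
    key = 1
    for c in word:
        key *= PRIMES[ord(c) - ord('a')]
    return key

# char_bag_key("abba") == char_bag_key("aabb") == 36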
Since you are going to sort words, I assume all character ASCII values are in the range 0-255. Then you can do a counting sort over each word.
The counting sort takes time proportional to the length of the input word, and reconstructing the string from the counts takes O(wordLen). You cannot make this step faster than O(wordLen), because you have to iterate over the string at least once: there is no predefined order, and you cannot make any assumptions about a word without looking at all of its characters. Traditional comparison-based sorting implementations would give you O(n * lg n), but non-comparison sorts give you O(n).
Iterate over all the words of the list and sort each one using this counting sort. Keep a map from each sorted word to the list of known words that map to it. Adding an element to a list takes constant time, so the overall complexity of the algorithm is O(n * avgWordLength).
Here is a sample implementation
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClusterGen {

    static String sortWord(String w) {
        int freq[] = new int[256];
        for (char c : w.toCharArray()) {
            freq[c]++;
        }
        StringBuilder sortedWord = new StringBuilder();
        // It is at most O(n)
        for (int i = 0; i < freq.length; ++i) {
            for (int j = 0; j < freq[i]; ++j) {
                sortedWord.append((char) i);
            }
        }
        return sortedWord.toString();
    }

    static Map<String, List<String>> cluster(List<String> words) {
        Map<String, List<String>> allClusters = new HashMap<String, List<String>>();
        for (String word : words) {
            String sortedWord = sortWord(word);
            List<String> cluster = allClusters.get(sortedWord);
            if (cluster == null) {
                cluster = new ArrayList<String>();
            }
            cluster.add(word);
            allClusters.put(sortedWord, cluster);
        }
        return allClusters;
    }

    public static void main(String[] args) {
        System.out.println(cluster(Arrays.asList("tea", "eat", "abba", "aabb", "hello")));
        System.out.println(cluster(Arrays.asList("moon", "bat", "meal", "tab", "male")));
    }
}
Returns
{aabb=[abba, aabb], ehllo=[hello], aet=[tea, eat]}
{abt=[bat, tab], aelm=[meal, male], mnoo=[moon]}
Using an alphabet of x characters and a maximum word length of y, you can create hashes of (x + y) bits such that every anagram has a unique hash. A 1 bit means there is another occurrence of the current letter; a 0 bit means to move on to the next letter. Here's an example showing how this works:
Let's say we have a 7-letter alphabet (abcdefg) and a maximum word length of 4. Every word hash will be 11 bits. Let's hash the word "fade": 10001010100
The first bit is 1, indicating there is an a present. The second bit indicates that there are no more a's. The third bit indicates that there are no more b's, and so on. Another way to think about this: the number of ones in a run represents the count of that letter, and the total number of zeroes before that run tells you which letter it is.
Here is the hash for "dada": 11000110000
It's worth noting that because there is a one-to-one correspondence between possible hashes and possible anagrams, this is the smallest possible hash guaranteed to give unique hashes for any input, which eliminates the need to check everything in your buckets when you are done hashing.
I'm well aware that using large alphabets and long words will result in a large hash size. This solution is geared towards guaranteeing unique hashes in order to avoid comparing strings. If you can design an algorithm to compute this hash in constant time(given you know the values of x and y) then you'll be able to solve the entire grouping problem in O(n).
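A sketch of this hash for lowercase a-z (it returns the bit string itself as the key; a word of length y then always hashes to a (26 + y)-bit string):

def anagram_hash(word):
    counts = [0] * 26
    for ch in word:
        counts[ord(ch) - ord('a')] += 1
    # one '1' per occurrence of each letter, with '0' as the separator
    return ''.join('1' * c + '0' for c in counts)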
I would do this in two steps: first sort all your words according to their length and work on each subset separately (this is to avoid lots of overlaps later).
The next step is harder, and there are many ways to do it. One of the simplest would be to assign every letter a number (a = 1, b = 2, etc.) and add up the values for each word, thereby assigning each word an integer. Then you can sort the words according to this integer value, which drastically cuts the number you have to compare.
Depending on your data set you may still have a lot of overlaps ("bad" and "cac" would generate the same integer hash), so you may want to set a threshold: if you have too many words in one bucket, repeat the previous step with another hash (just assign different numbers to the letters). Unless someone has looked at your code and designed a word list to mess you up, this should cut the overlaps to almost none.
Keep in mind that this approach will be efficient only when you expect small numbers of words to be in the same char bag. If your data is mostly long words that fall into just a couple of char bags, the number of comparisons you would do in the final step would be astronomical, and in that case you would be better off using an approach like the one you described: one with no possible overlaps.
One thing I've done that's similar to this, but allows for collisions, is to sort the letters, then get rid of duplicates. So in your example, you'd have buckets for "aet", "ab", and "ehlo".
Now, as I say, this allows for collisions. So "rod" and "door" both end up in the same bucket, which may not be what you want. However, the collisions will be a small set that is easily and quickly searched.
So once you have the string for a bucket, you'll notice you can convert it into a 32-bit integer (at least for ASCII): each letter in the string becomes a bit, so "a" is the first bit, "b" is the second bit, etc. All (English) words map to a bucket with a 26-bit identifier. You can then do very fast integer compares to find the bucket a new word goes into, or to find the bucket an existing word is in.
Count the frequency of characters in each of the strings, then build a hash table keyed on the frequency table. For example, for the strings aczda and aacdz we get 20110000000000000000000001. Using the hash table we can partition all these strings into buckets in O(N).
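A sketch of this idea in Python, using the tuple of per-letter counts directly as the dictionary key (lowercase a-z assumed; names are illustrative):

from collections import defaultdict

def cluster_by_char_bag(words):
    buckets = defaultdict(list)
    for w in words:
        sig = [0] * 26
        for ch in w:
            sig[ord(ch) - ord('a')] += 1
        buckets[tuple(sig)].append(w)  # tuples are hashable; lists are not
    return list(buckets.values())

# cluster_by_char_bag(["tea", "eat", "abba", "aabb", "hello"])
# -> [['tea', 'eat'], ['abba', 'aabb'], ['hello']]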
26-bit integer as a hash function
If your alphabet isn't too large (for instance, just lowercase English letters), you can define this particular hash function for each word: a 26-bit integer where each bit represents whether that English letter exists in the word. Note that two words with the same char set will have the same hash.
Then just add them to a hash table. It will automatically be clustered by hash collisions.
It will take O(max length of a word) to calculate a hash, and insertion into a hash table is constant time, so the overall complexity is O(max length of a word * number of words).
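A sketch of the 26-bit hash (lowercase a-z assumed; as noted, words sharing the same character set, like "rod" and "door" in the earlier answer, land in the same bucket):

def char_set_hash(word):
    h = 0
    for ch in word:
        h |= 1 << (ord(ch) - ord('a'))  # one bit per distinct letter
    return h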
A rotated palindrome is a string like "1234321" or "3432112".
The naive method would be to cut the string at each position, concatenate the two pieces back in the other order, and check whether the result is a palindrome.
That would take O(n^2), since there are n cuts and for each cut we need O(n) to check whether the string is a palindrome.
I'm wondering if there's a better solution than this.
I guess so; please advise.
Thanks!
According to the Wikipedia article below, it is possible, for each string S of length n, to compute in O(n) time an array A of the same size, such that:
A[i]==1 iff the prefix of S of length i is a palindrome.
http://en.wikipedia.org/wiki/Longest_palindromic_substring
The algorithm can be found in:
Manacher, Glenn (1975), "A new linear-time "on-line" algorithm for
finding the smallest initial palindrome of a string"
In other words we can check which prefixes of the string are palindromes in linear time. We will use this result to solve the proposed problem.
Each (non-rotated) palindrome S has the following form: S = psxs^Rp^R,
where "x" is the center of the palindrome (either the empty string or a one-letter string),
"p" and "s" are (possibly empty) strings, and "s^R" means the string "s" reversed.
Each rotating palindrome created from this string has one of the two following forms (for some p):
sxs^Rp^Rp
p^Rpsxs^R
This is true because you can choose to cut some substring either before or after the middle of the palindrome and then paste it onto the other end.
As one can see, the substrings "p^Rp" and "sxs^R" are both palindromes; "p^Rp" always has even length, and "sxs^R" has odd length iff S has odd length.
We can use the algorithm mentioned in the Wikipedia link to create two arrays A and B: A by checking which prefixes are palindromes, and B by checking which suffixes are. Then we search for an index i such that A[i] == B[i] == 1 and either the prefix or the suffix has even length. Such an index exists iff the proposed string is a rotated palindrome; the even part is the "p^Rp" substring, so we can easily recover the original palindrome by moving half of this part to the other end of the string.
One remark on the solution by rks: it doesn't work as stated, since for the string S = 1121 it creates the string 11211121, which contains a palindrome of length greater than or equal to len(S), yet 1121 is not a rotated palindrome. If we change the solution to check whether there exists a palindrome of length exactly len(S), it works; but I don't see a direct way to modify an algorithm that searches for the longest palindromic substring so that it searches for a palindrome of a fixed length (len(S)).
(I didn't write this as a comment under the solution, as I'm new to Stack Overflow and don't have enough reputation to do so.)
Second remark: I'm sorry not to include Manacher's algorithm itself; if someone has a link to the idea of the algorithm or to an implementation, please include it in the comments.
Concatenate the string to itself, then do the classical palindrome search in the new string. If you find a palindrome whose length is greater than or equal to the length of your original string, you know your string is a rotated palindrome.
For your example, you would search in 34321123432112, finding 21123432112, which is longer than your initial string, so it's a rotated palindrome.
EDIT: as Richard Stefanec noted, my algorithm fails on 1121; he proposed changing the >= test on the length to ==.
EDIT2: it should be noted that finding a palindrome of a given size isn't obviously easy. Read the discussion under Richard Stefanec's post for more information.
# Given a string, check if it is a rotation of a palindrome.
# For example your function should return True for "aab", as it is a rotation of "aba".
def check_palindrome(string1):
    # Try every rotation and test whether it reads the same reversed.
    for i in range(len(string1)):
        rotated = string1[i:] + string1[:i]
        if rotated == rotated[::-1]:
            return True
    return False

string1 = input("Enter the first string")
if check_palindrome(string1):
    print("Some rotation of the string is a palindrome")
else:
    print("No rotation of the string is a palindrome")
I would like to propose a simple solution using only conventional algorithms. It will not solve any harder problem, but it should be sufficient for this task. It is somewhat similar to the other two proposed solutions, but neither of them seems concise enough for me to read carefully.
First step: concatenate the string to itself (abvc -> abvcabvc), as in the other proposed solutions.
Second step: do the Rabin-Karp precalculation (which uses a rolling hash) on the newly obtained string and on its reverse.
Third step: let the string have length n. For each index i in 0...n-1, check in constant time whether the substring [i, i + n - 1] of the doubled string is a palindrome, using the Rabin-Karp precalculations (basically, the hash values obtained for the substring in the forward and reversed directions should be equal).
Conclusion: if the third step found any palindrome, then the string is a rotated palindrome. If not, then it is not.
PS: Rabin-Karp uses hashes, and collisions are possible even for non-identical strings. Thus it is a good idea to verify with a brute-force equality check whenever the hash check matches. If the hash functions used in Rabin-Karp are good, the amortized speed of the solution remains O(n).
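A sketch of the whole procedure in Python (the hash base and modulus are arbitrary choices of mine):

def is_rotated_palindrome(s):
    n = len(s)
    if n == 0:
        return True
    t = s + s  # first step: the doubled string
    BASE, MOD = 256, (1 << 61) - 1

    def prefix_hashes(u):
        # h[i] is the polynomial hash of u[:i]
        h = [0] * (len(u) + 1)
        for i, c in enumerate(u):
            h[i + 1] = (h[i] * BASE + ord(c)) % MOD
        return h

    fwd, rev = prefix_hashes(t), prefix_hashes(t[::-1])  # second step
    pw = [1] * (2 * n + 1)
    for i in range(1, 2 * n + 1):
        pw[i] = pw[i - 1] * BASE % MOD

    def window_hash(h, i, j):
        # hash of the slice [i, j) of the string h was built from
        return (h[j] - h[i] * pw[j - i]) % MOD

    m = 2 * n
    for i in range(n):  # third step
        # the reverse of t[i:i+n] is t[::-1][m-i-n : m-i]
        if window_hash(fwd, i, i + n) == window_hash(rev, m - i - n, m - i):
            w = t[i:i + n]
            if w == w[::-1]:  # brute-force verification, as in the PS
                return True
    return False

# is_rotated_palindrome("3432112") -> True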
You can append the same pattern to the end of the original pattern. For example, if the pattern is 1234321, you can append it to obtain 12343211234321. After doing this, you can use KMP or another substring-matching algorithm to find the string you want; if it matches, return true.
Given a dictionary, find out if a given word can be made from two words in the dictionary. For example, given "newspaper" you have to find whether it can be made from two words (news and paper in this case). The only thing I can think of is starting from the beginning and checking whether the current prefix is a word: check n, ne, new, news, ..., and whenever the prefix is a valid word, check whether the remaining part is also a valid word.
Also, how do you generalize it to k (i.e., if a word is made up of k words)? Any thoughts?
Starting your split at the center may yield results faster. For example, for newspaper you would first try splitting at 'news paper' or 'newsp aper'. As you can see, for this example you would find your result on the first or second try. If you do not find a result, just search outwards, as in the example for 'crossbow' below (a sketch of this search order follows the example):
cros sbow
cro ssbow
cross bow
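A sketch of this search order in Python (the helper name is mine; the dictionary is assumed to be a set for O(1) lookups):

def split_two(word, dictionary):
    n = len(word)
    # try the split points ordered by their distance from the center
    for i in sorted(range(1, n), key=lambda i: abs(i - n // 2)):
        if word[:i] in dictionary and word[i:] in dictionary:
            return word[:i], word[i:]
    return None

# split_two("newspaper", {"news", "paper", "new"}) -> ('news', 'paper')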
For the case with two words, the problem can be solved by considering all possible ways of splitting the word into two pieces, then checking each half to see whether it's a valid word. If the input string has length n, there are only O(n) different ways of splitting it, and if you store the strings in a structure supporting fast lookup (say, a trie or a hash table), each of those checks is fast, so the whole procedure is efficient.
The more interesting case is when you have k > 2 words to split the word into. For this, we can use a really elegant recursive formulation:
A word can be split into k words if it can be split into a word followed by a word splittable into k - 1 words.
The recursive base case would be that a word can be split into zero words only if it's the empty string, which is trivially true.
To use this recursive insight, we'll modify the original algorithm by considering all possible splits of the word into two parts. Once we have that split, we can check if the first part of the split is a word and if the second part of the split can be broken apart into k - 1 words. As an optimization, we don't recurse on all possible splits, but rather just on those where we know the first word is valid. Here's some sample code written in Java:
public static boolean isSplittable(String word, int k, Set<String> dictionary) {
    /* Base case: the empty string is splittable into zero words, and a
     * nonempty string is not splittable into zero words.
     */
    if (word.isEmpty() || k == 0)
        return word.isEmpty() && k == 0;

    /* Generate all possible non-empty splits of the word into two parts,
     * recursing on subproblems where the first word is known to be valid.
     *
     * This loop is structured so that we always pull off at least one letter
     * from the input string, so that we never split the word into k pieces
     * of which some are empty.
     */
    for (int i = 1; i <= word.length(); ++i) {
        String first = word.substring(0, i), last = word.substring(i);
        if (dictionary.contains(first) &&
            isSplittable(last, k - 1, dictionary))
            return true;
    }

    /* If we're here, then no possible split works and we should signal
     * that no solution exists.
     */
    return false;
}
This code, in the worst case, runs in time O(n^k), because it can end up trying all possible partitions of the string into k different pieces. Of course, it's unlikely to hit this worst-case behavior, because most possible splits won't end up forming any words.
I'd first loop through the dictionary using a strpos(-like) function to check whether each word occurs in the input at all, then try to find a combination of the matches.
So it would do something like this:
Loop through the dictionary, strpos-ing every word against the input and saving the hits into an array; let's say this gives the results 'new', 'paper', and 'news'.
Check if new+paper == newspaper, check if new+news == newspaper, etc., until you get to news+paper == newspaper, which matches.
Not sure if it is a good method, but it seems more efficient than checking letter by letter (which needs more iterations), and you didn't explain how you'd detect where the second word starts.
I don't know what you mean by "how do you generalize it for k".