Bad word table for random alpha string? - algorithm

I'm writing an algorithm to generate a random 6-character string (e.g. customer code XDEJQW). I want to ensure it contains no bad or offensive words or strings. I guess I have no choice but to have a database table of those bad words, right? It just seems icky that I'll have to have an add/edit page for someone to go to that contains some pretty awful words.
Thanks.

No need for a table; you can use either a string array or an enum for this purpose. The advantage is that you do not have to send a request to fetch the records of a bad-word table, which is better for performance. Basically, you can re-randomize the 6-character value until the result does not contain a bad word.
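For example, a minimal sketch in Python (the bad-word entries here are just placeholders for whatever list you actually maintain):

import random
import string

BAD_WORDS = {"BAD", "EVIL"}  # placeholder entries; substitute your real list

def make_code(length=6):
    # Re-randomize until no bad word appears as a substring of the code.
    while True:
        code = "".join(random.choices(string.ascii_uppercase, k=length))
        if not any(bad in code for bad in BAD_WORDS):
            return code

print(make_code())  # e.g. 'XDEJQW'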

Depending on the purpose of the value, you can change the random process to make sure that no valid words are generated; if no valid words are generated, offensive strings won't be either. For example:
use only consonants
use only vowels
use 3 consecutive consonants and 3 consecutive vowels
etc..
The point is that words in most languages are made of syllables, and a syllable, to be pronounceable, needs a vowel, usually paired with one or two (maybe more) consonants before, after, or around it that act as a "modifier" of the sound: bi, ca, do, et, if, or, or get, for, etc. If you can avoid these patterns,
the probability of generating a word is low.
On the other hand, if you want to generate pronounceable passwords, you do exactly the opposite, alternating between consonants and vowels to produce syllables, e.g. cidofe; but in that case you do have to validate against a list of "bad words".
But in either case, remember that if you are going to validate, don't just validate against full words; also try to filter out partial words, misspellings, and abbreviations to avoid things like SUKMYDIK.
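As a rough illustration of the consonants-only idea (a sketch, not a vetted scheme; keeping a substring check against a bad-word list as a safety net is still sensible):

import random

# No vowels (and no Y), so pronounceable words are very unlikely to appear.
CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ"

def make_consonant_code(length=6):
    return "".join(random.choice(CONSONANTS) for _ in range(length))

print(make_consonant_code())  # e.g. 'KTRWZB'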

Related

Algorithm to re-form a sentence from one whose spaces are removed and whose words' letters are reordered?

I was looking at some puzzles online to improve my knowledge of algorithms...
I came upon the question below:
"You have a sentence with several words, with the spaces removed and the characters of each word shuffled. You have a dictionary. Write an algorithm to produce the sentence back, with spaces and with the words' characters in their normal order."
I do not know a good way to solve this.
I am new to algorithms, but just looking at the problem I think I would make the program do what a human mind would do.
Here is something I can think of:
- First, manually find common short English words from the dictionary, like "is", "the", "if", etc., and put them in dataset 1.
- Then find the permutations of the words in dataset 1 (e.g. "si", "eht", "eth", "fi") and put them in dataset 2.
- Then find which character sequences in the input sentence match the words of dataset 2, put those in dataset 3, and insert spaces into the input sentence where they were found.
- For the rest of the words I would compute permutations to find matching dictionary words.
I am a newbie to algorithms... is it a bad solution?
This seems like a perfectly fine solution.
In general there are 2 parameters for judging an algorithm:
correctness - does the algorithm provide the correct answer?
resources - the time or storage needed to provide an answer.
Usually there is a tradeoff between these two parameters.
So, for example, the size of your dictionary dictates which scrambled sentences you can reconstruct: a bigger dictionary gives you a correct answer for more inputs,
but the whole search process will take longer and will require more storage.
The hard part of the problem you presented is the fact that you need to compute permutations, and there are a LOT of them.
Checking them all is expensive, so a good approach would be to do what you suggested: create a small subset of commonly used words and check them first, so that the average case is better.
Note: just saying that you check the permutations/do the search is fine, but in the end you will need to specify the exact way of doing that.
Currently what you wrote is an idea for an algorithm, but it would not allow you to take a given input and mechanically work out the output.
Actually, it might be wise to start by partitioning the dictionary by word length.
Then try to find the largest words that can be made using the letters available, instead of the smallest ones. Short words are more common and thus will be harder to narrow down, i.e. is it really "if" or "fig"?
Then, for each word length w, you can proceed w characters at a time.
There are still a lot of possible combinations, though; just because you found a valid word doesn't mean it's the right word. Going through all the substrings should be something like O(c^4*d), where d is the number of words in the dictionary and c is the number of characters in the sentence. Practically speaking, if the dictionary is sorted by word length, it will be a fair bit less than that. Then you have to take the valid words and figure out an ordering that works, so that all characters are used. There might be multiple solutions.
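A sketch of that lookup idea in Python: sorting each word's letters gives a signature that a scrambled substring can be looked up by directly (the tiny dictionary and sentence here are just for illustration):

from collections import defaultdict

def build_index(dictionary_words):
    # Map the sorted letters of each word to the words sharing that signature.
    index = defaultdict(list)
    for word in dictionary_words:
        index["".join(sorted(word))].append(word)
    return index

def candidate_words(sentence, index, length):
    # Slide a window of the given length over the sentence and look up
    # each scrambled chunk by its sorted-letter signature.
    hits = []
    for i in range(len(sentence) - length + 1):
        signature = "".join(sorted(sentence[i:i + length]))
        if signature in index:
            hits.append((i, index[signature]))
    return hits

index = build_index(["the", "cat", "sat"])
print(candidate_words("hetactsat", index, 3))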

String comparison algorithm, relevancy, how much "alike" 2 strings are

I have 2 sources of information for the same data (companies), which I can join together via a unique ID (contract number). The second, different source exists because the 2 sources are updated manually and independently. So what I have is an ID and a company name in 2 tables.
I need to come up with an algorithm that compares the names in the 2 tables for the same ID, and orders all the companies by a variable which indicates how different the strings are (to highlight the most different ones, to be placed at the top of the list).
I looked at the simple Levenshtein distance algorithm, but it works at the letter level, so I am still looking for something better.
The reason why Levenshtein doesn't really do the job is this: companies have a name, prefixed or postfixed by the organizational form (LTD, JSC, Co., etc.). So we may have a lot of cases like JSC "Foo" versus Foo JSC., which differ a lot as strings, while what I am really looking for in the database are pairs of genuinely different names like SomeLongCompanyName JSC and JSC OtherName.
Are there any good ways to do this? (I don't really like the idea of using a regex to separate the words in each string and then finding matches for every word in the other string using the Levenshtein distance, so I am looking for other ideas.)
How about:
1. Replace all punctuation by whitespace.
2. Break the string up into whitespace-delimited words.
3. Move all words of <= 4 characters to the end, sorted alphabetically.
4. Levenshtein.
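A sketch of those four steps in Python (the Levenshtein function is a plain textbook implementation; a library version would do just as well):

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, computed row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def normalize(name):
    # Steps 1-3: punctuation to whitespace, split into words, move words
    # of <= 4 characters (the LTD/JSC-style suffixes) to the end, sorted.
    cleaned = "".join(c if c.isalnum() else " " for c in name.lower())
    words = cleaned.split()
    return " ".join([w for w in words if len(w) > 4] +
                    sorted(w for w in words if len(w) <= 4))

print(levenshtein(normalize('JSC "Foo"'), normalize("Foo JSC.")))  # 0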
Could you filter out (remove) those "common words" (similar to removing stop words for full-text indexing) and then search on that? If not, could you sort the words alphabetically before comparing?
As an alternative or in addition to the Levenshtein distance, you could use Soundex. It's not terribly good, but it can be used to index the data (which is not possible when using Levenshtein).
Thank you both for ideas.
I used 4 indices, each a Levenshtein distance divided by the sum of the lengths of both strings (a relative distance), computed on the following:
Just the 2 strings.
The strings after splitting them into word sequences, eliminating the non-word characters, ordering the words ascending, and joining them with a space as separator.
The string contained between quotes (if no such string is present, the original string is taken).
The string composed of the alphabetically ordered first characters of each word.
Each of these in turn is an integer value between 1 and 1000. The resulting value is the product:
X1^E1 * X2^E2 * X3^E3 * X4^E4
where X1..X4 are the indices, and E1..E4 are user-provided preferences for how significant each index is. To keep the result inside the reasonable range of 1..1000, the vector (E1..E4) is normalized.
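If it helps anyone, a minimal sketch of that combination in Python (reading "normalized" as making the exponents sum to 1, i.e. a weighted geometric mean, which keeps the result in the same 1..1000 range as the inputs):

def combine(indices, prefs):
    # Weighted geometric mean of the four indices; prefs are the user-provided
    # significance weights E1..E4, normalized so they sum to 1.
    total = sum(prefs)
    score = 1.0
    for x, e in zip(indices, prefs):
        score *= x ** (e / total)
    return score

print(combine([300, 120, 80, 500], [2, 1, 1, 1]))  # hypothetical index values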
The results are impressive. The whole thing works much faster than I expected (I built it as a CLR assembly in C# for Microsoft SQL Server 2008). After picking E1..E4 correctly, the largest index (biggest difference) on non-null values in the whole database is 765. Down to about 300 there is virtually no matching company name. Around 200 there are companies with somewhat similar names, and some are the same names written in very different ways, with abbreviations, additional words, etc. At 100 and below, practically all the records contain names that are the same but written with slight differences, and by 30, only the order or the punctuation may differ.
It totally works; the result is better than I expected.
I wrote a post on my blog, to share this library in case someone else needs it.

How to recognize words in text with non-word tokens?

I am currently parsing a bunch of mails and want to get words and other interesting tokens out of them (even with spelling errors or combinations of letters and digits, like "zebra21" or "customer242"). But how can I know that "0013lCnUieIquYjSuIA" and "anr5Brru2lLngOiEAVk1BTjN" are not words and not relevant? How do I extract words and discard tokens that are encoding errors, parts of PGP signatures, or whatever else we get in mails and know we will never be interested in?
You need to decide on a good enough criterion for a word and write a regular expression or a manual check to enforce it.
A few rules that can be extrapolated from your examples:
Words can start with a capital letter or be all capital letters, but if a word has more than, say, 2 uppercase letters and more than 2 lowercase letters inside it, it's not a word.
If there are numbers inside the word, it's not a word.
If it's longer than, say, 20 characters, it's not a word.
There's no magic trick; you need to decide what you want the rules to be and make them happen.
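For instance, a sketch of such rules in Python (this uses a stricter casing rule than the "more than 2 uppercase and 2 lowercase" one, and the thresholds are of course tunable):

import re

# Letters only: either an optional initial capital followed by lowercase,
# or all capitals.
WORD_RE = re.compile(r"^(?:[A-Z]?[a-z]+|[A-Z]+)$")

def looks_like_word(token):
    if len(token) > 20:                         # too long
        return False
    if any(ch.isdigit() for ch in token):       # contains numbers
        return False
    return WORD_RE.match(token) is not None

print(looks_like_word("customer"))             # True
print(looks_like_word("0013lCnUieIquYjSuIA"))  # False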
An alternative way is to train some kind of hidden Markov model system to recognize things that look like words, but I think that is overkill for what you want to do.
http://en.wikipedia.org/wiki/English_words_with_uncommon_properties
You can make rules that reject anything with these 'uncommon properties' to build a system that accepts most actual words.
Although I generally agree with shoosh's answer, his approach makes it easy to achieve high recall but also low precision, i.e. you would get almost all real words but also a lot of non-words. If your definition of a word is too restrictive, it's the other way around, but that's also not what you want, since then you would miss cases like 'zebra123'. So here are a few ideas about how to improve precision:
It may be worthwhile to think about whether you can determine which parts of an email belong to the main text and which are footers like PGP signatures. I'm sure it's possible to find some simple heuristics that match most cases, e.g. cut off everything below a line which consists only of '-' characters.
Depending on your performance criteria, you may want to check whether a token is a real word, or contains a real word, by matching against a simple word list. It's easy to find quite exhaustive lists of English words on the web, and you could also compile one yourself by extracting words from a large and clean text corpus.
Using a lexical analyser, you could filter out every token which is marked as unknown.
Some simple statistics may tell you how likely it is that something is a word. Tokens which occur with high frequency most probably are words. Tokens which appear only once, or whose count is below a certain threshold, very probably are not words. Common spelling errors should appear more than once, and uncommon ones may be ignored.
Some of these suggestions clearly don't work for cases like 'zebra123'. Again, simply cutting off, or splitting on, in-word numbers may do the trick.
My general approach would be to first identify tokens which certainly are words (using the suggestions above), then identify tokens which certainly are not words (using a regular expression), and then look (with your eyes) at the few hundred or thousand remaining tokens to find common characteristics to handle these separately.

Algorithm wanted: Find all words of a dictionary that are similar to words in a free text

We have a list of about 150,000 words, and when the user enters free text, the system should present a list of words from the dictionary that are very close to words in the free text.
For instance, the user enters: "I would like to buy legoe toys in Walmart". If the dictionary contains "Lego", "Car" and "Walmart", the system should present "Lego" and "Walmart" in the list. "Walmart" is obvious because it is identical to a word in the sentence, but "Lego" is similar enough to "Legoe" to be mentioned, too. However, nothing is similar to "Car", so that word is not shown.
Showing the list should happen in real time, meaning that as soon as the user has entered the sentence, the list of words must be present on the screen. Does anybody know a good algorithm for this?
The dictionary actually contains concepts which may include a space. For instance, "Lego spaceship". The perfect solution recognizes these multi-word concepts, too.
Any suggestions are appreciated.
Take a look at http://norvig.com/spell-correct.html for a simple algorithm. The article uses Python, but there are links to implementations in other languages at the end.
You will be doing quite a few lookups of words against a fixed dictionary. Therefore you need to prepare your dictionary. Logically, you can quickly eliminate candidates that are "just too different".
For instance, the words car and dissimilar may share a suffix, but they're obviously not misspellings of each other. Now why is that so obvious to us humans? For starters, the length is entirely different. That's an immediate disqualification (but with one exception, below). So your dictionary should be sorted by word length. Match your input word against words of similar length: for short words that means +/- 1 character; longer words should get a higher margin (exactly how well can your demographic spell?).
Once you've restricted yourself to candidate words of similar length, you'll want to strip out words that are entirely dissimilar. By this I mean that they use entirely different letters. This is easiest to compare if you sort the letters of a word alphabetically, e.g. car becomes "acr"; rack becomes "ackr". You'll do this in preprocessing for your dictionary, and for each input word. The reason is that it's cheap to determine the (size of the) difference of two sorted sets. (Add a comment if you need an explanation.) car and rack have a difference of size 1; car and hat have a difference of size 2. This narrows down your set of candidates even further. Note that for longer words, you can bail out early when you've found too many differences. E.g. dissimilar and biography have a total difference of 13, but considering the lengths (10/9) you can probably bail out once you've found 5 differences.
This leaves you with a set of candidate words that use almost the same letters and are also almost the same length. At this point you can start using more refined algorithms; you don't need to run 150,000 comparisons per input word anymore.
Now, for the length exception mentioned before: the problem is with "words" like greencar. It doesn't really match a word of length 8, and yet to humans it's quite obvious what was meant. In this case, you can't really break the input word at any random boundary and run an additional N-1 inexact matches against both halves. However, it is feasible to check for just a missing space. Just do a lookup for all possible prefixes. This is efficient because you'll be using the same part of the dictionary over and over, e.g. g, gr, gre, gree, etc. For every prefix that you find, check whether the remaining suffix is also in the dictionary, e.g. reencar, eencar. If both halves of the input word are in the dictionary, but the word itself isn't, you can assume a missing space.
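A sketch of that pre-filtering in Python (length buckets plus a cheap letter multiset difference; the thresholds and the tiny dictionary are just for illustration):

from collections import Counter, defaultdict

def build_buckets(dictionary_words):
    # Group dictionary words by length so we only compare similar lengths.
    buckets = defaultdict(list)
    for word in dictionary_words:
        buckets[len(word)].append(word)
    return buckets

def letter_difference(a, b):
    # Size of the multiset difference between the letters of a and b.
    ca, cb = Counter(a), Counter(b)
    return sum(((ca - cb) + (cb - ca)).values())

def candidates(token, buckets, length_slack=1, max_diff=2):
    token = token.lower()
    out = []
    for length in range(len(token) - length_slack, len(token) + length_slack + 1):
        for word in buckets.get(length, []):
            if letter_difference(token, word) <= max_diff:
                out.append(word)
    return out

buckets = build_buckets(["lego", "car", "walmart"])
print(candidates("legoe", buckets))  # ['lego']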
You would likely want to use an algorithm which calculates the Levenshtein distance.
However, since your data set is quite large, and you'll be comparing lots of words against it, a direct implementation of typical algorithms that do this won't be practical.
In order to find words in a reasonable amount of time, you will have to index your set of words in some way that facilitates fuzzy string matching.
One of these indexing methods would be to use a suffix tree. Another approach would be to use n-grams.
I would lean towards using a suffix tree since I find it easier to wrap my head around it and I find it more suited to the problem.
It might be of interest to look at some algorithms such as the Levenshtein distance, which can calculate the amount of difference between 2 strings.
I'm not sure what language you are thinking of using but PHP has a function called levenshtein that performs this calculation and returns the distance. There's also a function called similar_text that does a similar thing. There's a code example here for the levenshtein function that checks a word against a dictionary of possible words and returns the closest words.
I hope this gives you a bit of insight into how a solution could work!

Algorithm that Generates Unique Serial Number for Each English Word

For an application I need to generate unique serial numbers for each English word.
What would be the best approach?
One constraint is that the serial number generation algorithm should be very efficient on an ordinary desktop computer.
Thanks
Do you have a list of all possible words? If yes, start from 0 at the first word and increment the serial by 1 for each word.
If not then a simple way to guarantee they are unique is to use the word itself as the serial. For example, ABC = 0x41 0x42 0x43 = 4276803.
As suggested in the comments there are other ways (that however require more work), such as compressing the words first with, for example, Huffman.
This of course gets awkward with long words: The serial of Pneumonoultramicroscopicsilicovolcanoconiosis would require around 100 digits, for example.
Otherwise you can use a hash, but there is no guarantee it will be unique for all English words.
You appear to be asking about a perfect hashing function. If so, take a look at this Wikipedia article, and at the gperf utility.
Here is an algorithm (in Python) that allows you to encode and decode any combination of lowercase letters:
def encode(s):
    # Treat the word as a base-26 number; starting at 1 keeps the length
    # information (so e.g. "a" and "aa" get different serials).
    r = 1
    for ch in s:
        r = r * 26 + (ord(ch) - ord('a'))
    return r
Using 64 bits you can encode words of up to 12 letters. You can use the remaining unused serials as an index into a table containing low-frequency, very long words.
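The matching decode step might look something like this (a sketch that follows the same base-26-with-leading-1 convention as encode above):

def decode(r):
    # Peel off base-26 digits until only the leading 1 remains.
    letters = []
    while r > 1:
        r, d = divmod(r, 26)
        letters.append(chr(d + ord('a')))
    return "".join(reversed(letters))

print(decode(encode("word")))  # 'word', using encode() from above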
Just use a 64-bit hash function, like Fowler-Noll-Vo. You're not likely to get collisions using a 64-bit integer, as this gives you 2^64 possible values, and there are certainly far fewer words than that in the English language. You'd need to normalize each word, of course (convert to lower-case, etc.).
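For illustration, a 64-bit FNV-1a sketch in Python (the offset basis and prime are the standard published constants; normalization here is just lower-casing):

def fnv1a_64(word):
    # 64-bit FNV-1a: XOR each byte in, then multiply by the FNV prime.
    h = 0xcbf29ce484222325                                  # offset basis
    for byte in word.lower().encode("utf-8"):
        h ^= byte
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF        # prime, mod 2^64
    return h

print(hex(fnv1a_64("Walmart")))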
Do you really need it to be 'serial'? If not, did you try the various hash algorithms? Several of them are built into .NET (MD5 and SHA1, if I remember correctly). I am not sure which one will be good enough, especially with short strings.
Are you looking for every word, or every word in the English dictionary? Are you using standard words - i.e. from the Oxford English Dictionary or are slang words included too? I guess what I'm getting at is: "How big is your dictionary"? You could use an MD5 hash which has a theoretical possibility of collisions - albeit 1 in billions of hashes that may collide - although, I can't say I'd understand the purpose of using a hash over using the actual word. Unless perhaps you're wanting to calculate the serial client side so that it's referencing a correct dictionary item on the server side without having to parse the dictionary looking for its serial. Of course - the word obviously has to be sufficiently unique in order for us to understand it as humans, and we're way more efficient at parsing the meaning of words than a computer is at doing the same.
Are you looking to separate words that look the same but are pronounced differently? Words that look and sound the same but have different meanings? If so, then you're going to come unstuck with a hash, as the same spelling with a different semantic will produce the same hash, so it won't work for this scenario. In this case you'd need some kind of incremental system. If you add words after the fact to the dictionary, will they be added at the end and just given the next serial number in sequence? What if that word is spelled the same as another word but sounds different or sounds the same but has a different semantic? What then?
I guess it depends on the purpose of the serialization as to what would be the most suitable output for your serial number and hence what would be the most efficient algorithm.
The most efficient algorithm would probably be to split your dictionary into the same number of chunks as you have processors, have a thread on each processor serialize the words in its chunk, and recombine the output from each thread at the end. This (in theory) would run at a speed slightly slower than O(n/number of processors) in real-world performance; however, I think for mathematical correctness that's still O(n), because you still have to parse the whole dictionary once to serialize each word.
I think the safest way to go is:
Worry about what you've got now
Order them in the most logical sequence (alphabetically?)
Number them in sequence
Add new words (whether spelled the same or not and having different semantics) at the end; give them the next number in the sequence, regardless of their rightful place in the dictionary alphabetically.
This way you don't have to worry about leaving spaces in the serial numbers to account for insertions between words, you don't have to worry about reindexing any dependent data to account for changes in indexes when words are inserted, you just carry on as normal. You don't have to worry about collisions, and you still get the most efficient indexing mechanism for storage purposes meaning you're not storing MD5 hashes that are potentially longer than the original word - which makes no sense for real world use.
If you need to access the dictionary alphabetically, just sort by the word, otherwise, don't.
I still think I'm at a loss as to the necessity of serializing the word - except for storage purposes where you can store your dictionary and link tables by the word's key.
I wonder if an answer is even possible.
Are color and colour the same word? Do they get one serial number or two?
Are polish and Polish the same word?
Are watch (noun) and watch (verb) the same word?
Are multiply (verb) and multiply (adverb) the same word?
Analysis (singular noun) and analyses (plural noun) are not the same word. Are analyse (plural verb) and analyze (plural verb) the same word? Are analyses (singular verb) and analyzes (singular verb) the same word? Are analyses (singular verb) and analyses (plural noun) the same word?
Are wont and won't the same word?
Are Beijing and Peking the same word? Or maybe they aren't English, since Londres and Frankreich aren't English, but then what is the English word for the capital of the Middle Country?
What about the MD5 hash algorithm? Do something like this:
serialNumber = MD5( ToLower ( english word ) )
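In Python, for example, that would be something along these lines (hashlib is in the standard library):

import hashlib

def serial_number(word):
    # Hex MD5 digest of the lower-cased word, used as the serial.
    return hashlib.md5(word.lower().encode("utf-8")).hexdigest()

print(serial_number("Example"))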
