I'm trying to find an elegant way to convert something like this:
ALL CAPS TEXT. "WHY ANYONE WOULD USE IT?" THIS IS RIDICULOUS! HELP.
...to regular-case. I could more or less find all sentence-starting characters with:
(?<=^|(\. \"?)|(! ))[A-Z] # this regex surely needs to be more complex
but (standard) Ruby neither allows lookbehinds, nor is it possible to apply .capitalize to, say, gsub replacements. I wish I could do this:
"mytext".gsub(/my(regex)/, '\1'.capitalize)
but the current working solution would be to
"mytext".split(/\. /).each {|x| p x.capitalize } #but this solution sucks
First of all, notice that what you are trying to do will only be an approximation.
You cannot correctly tell where the sentence boundaries are. You can approximate them as "the beginning of the entire string, or right after a period, question mark, or exclamation mark followed by spaces", but then you will incorrectly capitalize "economy" in "U.S. economy".
You cannot correctly tell which words should be capitalized. For example, "John" will become "john".
You may want to do some natural language processing to give you a close-to-correct result in many cases, but these methods are only probabilistically correct. You will never get a perfect result.
Understanding these limitations, you might want to do:
mytext.gsub(/.*?(?:[.?!]\s+|\z)/, &:capitalize)
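Run against the sample input, that one-liner behaves like this (note, per the caveats above, that the quoted sentence comes out wrong):

```ruby
text = 'ALL CAPS TEXT. "WHY ANYONE WOULD USE IT?" THIS IS RIDICULOUS! HELP.'

# gsub with a block replaces each match with the block's return value --
# exactly the "apply capitalize to gsub replacements" the question wished
# for; &:capitalize is shorthand for { |s| s.capitalize }.
result = text.gsub(/.*?(?:[.?!]\s+|\z)/, &:capitalize)
result # => "All caps text. \"why anyone would use it?\" this is ridiculous! Help."
```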
I am trying to figure out regex/scripting logic to parse out something like this:
RAW DATA
{CLNDSDB=MedGen:OMIM:SNOMED_CT;CLNDSDBID=C0432243:271640:254100000}
Here, the values are:
MedGen = C0432243
OMIM = 271640
SNOMED_CT = 254100000
Result: 271640
I am envisaging a convoluted if-else loop to get the result. I just wanted to know if there is any simple way of getting the same result. I'd much appreciate your answers.
Perhaps something like this (assuming there are always three fields):
(?<=[=:])(?<key>[^:;]+)(?=[:=;](?:[^:;=]+[=;:]){3}(?<val>[^:]+))
The idea is to capture the field values inside a lookahead assertion so as not to interfere with overlapping matches.
However, there is probably a cleaner way that uses successive split.
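A split-based version might look like this (a sketch, assuming the braces are always present and the key and value lists always line up):

```ruby
str = '{CLNDSDB=MedGen:OMIM:SNOMED_CT;CLNDSDBID=C0432243:271640:254100000}'

# Drop the braces, split the two KEY=VALUE fields apart at ';',
# then split each list on ':' and pair the names with the values.
keys_part, vals_part = str.delete('{}').split(';')
keys = keys_part.split('=').last.split(':')
vals = vals_part.split('=').last.split(':')

mapping = Hash[keys.zip(vals)]
mapping['OMIM'] # => "271640"
```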
It's difficult to tell from the question whether the input string is two lines or one:
str = 'RAW DATA
{CLNDSDB=MedGen:OMIM:SNOMED_CT;CLNDSDBID=C0432243:271640:254100000}
'
or
str = '{CLNDSDB=MedGen:OMIM:SNOMED_CT;CLNDSDBID=C0432243:271640:254100000}'
but in either case I'd use a simple pattern:
str = '{CLNDSDB=MedGen:OMIM:SNOMED_CT;CLNDSDBID=C0432243:271640:254100000}'
medgen, omim, snomed_ct = str.match(/(\w+):(\w+):(\w+)}/).captures
medgen # => "C0432243"
omim # => "271640"
snomed_ct # => "254100000"
Here's the pattern at Rubular.
I am envisaging a convoluted if-else loop to get the result.
Well, don't do that. Most programming solutions are surprisingly simple, so start simple. As you learn, your programming toolbox will grow as you become familiar with new ways of doing things, and you'll find certain tools are more useful for certain tasks. Still, always start from "simple", get the basics working, then carefully add to handle the corner cases.
When using a regular expression, it's important to look for landmarks in the string that you can use to locate your target text. Here, the trailing '}' is usable, so I wrote three simple captures to find \w strings separated by :.
I'm solving http://www.rubeque.com/problems/a-man-comma--a-plan-comma--a-canal--panama-excl-/solutions but I'm a bit confused about how #{} is treated inside a regexp.
My code looks like this now:
def longest_palindrome(txt)
txt[/#{txt.reverse}/]
end
I tried txt[/"#{txt.reverse}"/] and txt[#{txt.reverse}], but nothing works as I wish. How should I interpolate a variable into a regexp?
This is not something you can do with a regex.
While you could use variable interpolation in the construction of a regex (see the other answers/comments), that wouldn't help you here. You could only use that to reverse a literal string, not a regex match result. Even if you could, you still wouldn't have solved the "find the longest palindrome" part, at least not with acceptable runtime performance.
Use a different approach to the problem.
It is hard to tell what you want to happen without examples, but I suppose you are after:
txt[/#{Regexp.escape(txt.reverse)}/]
See the Regexp.escape method.
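A quick sketch of why the escaping matters: reversing a string can surface regex metacharacters, and an unescaped ? at the start of a pattern is a syntax error.

```ruby
txt = "level"
txt[/#{Regexp.escape(txt.reverse)}/] # => "level" -- a palindrome matches its own reverse

txt = "stats?"
txt.reverse                # => "?stats"
Regexp.escape(txt.reverse) # => "\\?stats"

# Without Regexp.escape, /#{txt.reverse}/ would be /?stats/ and raise
# RegexpError ("target of repeat operator is not specified").
txt[/#{Regexp.escape(txt.reverse)}/] # => nil -- "stats?" contains no literal "?stats"
```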
I want to be able to match all of the cases below using Ruby 1.8.7.
/pages/multiedit/16801,16809,16817,16825,16833
/pages/multiedit/16801,16809,16817
/pages/multiedit/16801
/pages/multiedit/1,3,5,7,8,9,10,46
I currently have:
\/pages\/multiedit\/\d*
This matches up to the first set of numbers. For example:
"/pages/multiedit/16801,16809,16817,16825,16833"[/\/pages\/multiedit\/\d*/]
# => "/pages/multiedit/16801"
See http://rubular.com/r/ruFPx5yIAF for example.
Thanks for the help, regex gods.
\/pages\/multiedit\/\d+(?:,\d+)*
Example: http://rubular.com/r/0nhpgki6Gy
Edit: Updated to not capture anything, although the performance hit would be negligible. (Thanks, Tin Man.)
The currently accepted answer of
\/pages\/multiedit\/[\d,]+
may not be a good idea, because it will also match strings such as:
.../pages/multiedit/,,,
.../pages/multiedit/,1,
My answer requires there be at least one digit before the first comma, and at least one digit between commas, and it must end with a digit.
I'd use:
/\/pages\/multiedit\/[\d,]+/
Here's a demonstration of the pattern at http://rubular.com/r/h7VLZS1W1q
[\d,]+ means "find one or more digits or commas".
The reason \d* doesn't work is that it means "find zero or more digits": as soon as the pattern search runs into a comma, it stops. You have to tell the engine that it's OK to find digits and commas.
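The practical difference between the two answers only shows up on malformed input (a quick comparison sketch):

```ruby
loose  = /\/pages\/multiedit\/[\d,]+/        # accepted answer
strict = /\/pages\/multiedit\/\d+(?:,\d+)*/  # stricter alternative

good = "/pages/multiedit/16801,16809,16817"
good[loose]  # => "/pages/multiedit/16801,16809,16817"
good[strict] # => "/pages/multiedit/16801,16809,16817"

bad = "/pages/multiedit/,1,"
bad[loose]   # => "/pages/multiedit/,1," -- commas alone satisfy [\d,]+
bad[strict]  # => nil -- \d+ demands a digit right after the final slash
```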
Say I have an incoming string that I want scan to see if it contains any of the words I have chosen to be "bad." :)
Is it faster to split the string into an array, as well as keep the bad words in an array, and then iterate through each bad word as well as each incoming word and see if there's a match, kind of like:
badwords.each do |badword|
incoming.each do |word|
trigger = true if badword == word
end
end
OR is it faster to do this:
incoming.each do |word|
trigger = true if badwords.include? word
end
OR is it faster to leave the string as it is and run a .match() with a regex that looks something like:
/\bbadword1\b|\bbadword2\b|\bbadword3\b/
Or is the performance difference almost completely negligible? Been wondering this for a while.
You're giving the regex an advantage by not stopping your loop when it finds a match. Try:
incoming.find{|word| badwords.include? word}
My money is still on the regex though which should be simplified to:
/\b(badword1|badword2|badword3)\b/
or to make it a fair fight:
/\A(badword1|badword2|badword3)\z/
Once it is compiled, the regex is the fastest in real life (i.e. a really long incoming string, many similar bad words, etc.), since it can run on incoming in situ and will handle overlapping parts of your "bad words" really well.
The answer probably depends on the number of bad words to check: if there is only one bad word it probably doesn't make a huge difference; if there are 50, checking an array would probably get slow. On the other hand, with tens or hundreds of thousands of words the regexp probably won't be too fast either.
If you need to handle large numbers of bad words, you might want to consider splitting the string into individual words and then using a Bloom filter to test whether each word is likely to be bad or not.
This does not exactly answer your question, but it will definitely help you solve it.
Take some examples of what you are trying to achieve and put them into benchmarks.
You can find out how to do benchmarking in Ruby here:
Just put the various forms inside report blocks, get the benchmarks, and decide for yourself what suits you best.
http://ruby.about.com/od/tasks/f/benchmark.htm
http://ruby-doc.org/stdlib-1.9.3/libdoc/benchmark/rdoc/Benchmark.html
For better results, test with real data.
Benchmarks are always better than discussions :)
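A minimal sketch using Ruby's Benchmark module (the word list and iteration count here are invented for illustration):

```ruby
require 'benchmark'

badwords = %w[darn drat heck]               # hypothetical black-list
bad_re   = /\b(?:#{badwords.join('|')})\b/
incoming = 'well heck that is a fine mess'
words    = incoming.split

n = 50_000
Benchmark.bm(10) do |bm|
  # Array lookup: split the incoming string and test each word.
  bm.report('include?') { n.times { words.any? { |w| badwords.include?(w) } } }
  # Regex: run one compiled pattern over the whole string.
  bm.report('regex')    { n.times { bad_re.match(incoming) } }
end
```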
If you want to scan a string for occurrences of words, use scan to find them.
Use Regexp.union to build a pattern that will find the strings in your black-list. You will want to wrap the result with \b to force matching word-boundaries, and use a case-insensitive search.
To give you an idea of how Regexp.union can help:
words = %w[foo bar]
Regexp.union(words)
=> /foo|bar/
'Daniel Foo killed him a bar'.scan(/\b(?:#{Regexp.union(words).source})\b/i)
=> ["Foo", "bar"]
(source is needed here: interpolating the Regexp object itself embeds its options as (?-mix:foo|bar), which would switch the outer /i off.)
You could also build the pattern using Regexp.new or /.../ if you want a bit more control:
Regexp.new('\b(?:' + words.join('|') + ')\b', Regexp::IGNORECASE)
=> /\b(?:foo|bar)\b/i
/\b(?:#{words.join('|')})\b/i
=> /\b(?:foo|bar)\b/i
'Daniel Foo killed him a bar'.scan(/\b(?:#{words.join('|')})\b/i)
=> ["Foo", "bar"]
As a word of advice, a black-list of words you find offensive is easily tricked by a user, and often gives wrong results, because many "offensive" words are only offensive in a certain context. A user can deliberately misspell them or use "l33t" speak, and has an almost inexhaustible supply of alternate spellings that will make you constantly update your list. It's a source of enjoyment to some people to fool such a system.
I was once given a similar task and wrote a translator to supply alternate spellings for "offensive" words. I started with a list of words and terms I'd gleaned from the Internet and set my code running. After several million alternates were added to the database, I pulled the plug and showed management it was a fool's errand, because it was trivial to fool it.
I'm building an application that returns results based on a movie input from a user. If the user messes up and forgets to space out the title of the movie is there a way I can still take the input and return the correct data? For example "outofsight" will still be interpreted as "out of sight".
There is no regex that can do this in a good and reliable way. You could try a search server like Solr.
Alternatively, you could offer auto-complete in the GUI (if you have one) on the user's input, and this way mitigate some of the common errors users make.
Example:
User wants to search for "outofsight"
Starts typing "out"
Sees "out of sight" as suggestion
Selects "out of sight" from suggestions
????
PROFIT!!!
There's no regex that can tell you where the word breaks were supposed to be. For example, if the input is "offlight", is it supposed to return "Off Light" or "Of Flight"?
This is impossible without a dictionary and some kind of fuzzy-search algorithm. For the latter see How can I do fuzzy substring matching in Ruby?.
You could take a string and put \s* in between each character.
So outofsight would be converted to:
o\s*u\s*t\s*o\s*f\s*s\s*i\s*g\s*h\s*t
... and match out of sight.
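Building that pattern programmatically might look like this (a sketch; Regexp.escape guards against metacharacters in the query):

```ruby
query   = 'outofsight'
# Join the escaped characters with \s* so any amount of whitespace
# may appear between them; ignore case while we're at it.
pattern = Regexp.new(query.chars.map { |c| Regexp.escape(c) }.join('\s*'),
                     Regexp::IGNORECASE)
pattern # => /o\s*u\s*t\s*o\s*f\s*s\s*i\s*g\s*h\s*t/i

'Out of Sight' =~ pattern # matches despite the spaces and different case
```

As the other answers note, though, this cannot resolve genuine ambiguities such as "offlight".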
You can't do this with regular expressions, unless you want to store one or more patterns to match for each movie record. That would be silly.
A better approach for catching minor misspellings would be to calculate Levenshtein distances between what the user is typing and your movie titles. However, when your list of movies is large, this will become a rather slow operation, so you're better off using a dedicated search engine like Lucene/Solr that excels at this sort of thing.
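To give an idea of what Levenshtein distance measures, here is a minimal pure-Ruby implementation (illustrative only; in production you would reach for a C-backed gem or the search engine suggested above):

```ruby
# Classic dynamic-programming edit distance: the minimum number of
# single-character insertions, deletions, and substitutions needed
# to turn string a into string b.
def levenshtein(a, b)
  prev = (0..b.length).to_a
  (1..a.length).each do |i|
    curr = [i]
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      curr << [curr[j - 1] + 1,        # insertion
               prev[j] + 1,            # deletion
               prev[j - 1] + cost].min # substitution
    end
    prev = curr
  end
  prev.last
end

levenshtein('outofsight', 'out of sight') # => 2 (two missing spaces)
levenshtein('kitten', 'sitting')          # => 3
```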