A word processor program features a search and replace function. However, it also replaces partial words (character sequences found inside longer words). To fix this, I plan to remove extra spaces and use the Split function to turn the string into an array of words, using " " as the delimiter.
However, once I search through the array, replace the appropriate words, and join the array back into a string separated by single spaces, the user's original formatting is lost. For example, if the original string was "This  is  a  sentence." (with extra spaces) and the user wanted "a" replaced with "the", the output would be "This is the sentence.", with the extra spaces gone.
So, my question is whether there is any way to search and replace entire words only while still preserving the formatting (extra spaces) of the user in Visual Basic.
What about using a regex?
In a regex, \b is a word boundary, so the pattern \ba\b matches "a" only when it stands alone as a whole word.
So, for example, your code could be:
' In VB6/VBA this uses the RegExp object from "Microsoft VBScript Regular Expressions 5.5"
Dim strPattern As String: strPattern = "\ba\b"
Dim regex As New RegExp
Dim result As String
regex.Global = True
regex.Pattern = strPattern
result = regex.Replace("This is a sentence.", "the")   ' => "This is the sentence."
If you use the Split function without removing the extra spaces first, your array will contain empty items wherever there were consecutive spaces, so the extra spaces are not lost and you can reconstruct your document with the original formatting intact.
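To see why the empty items preserve the spacing, here is a minimal sketch of the split-replace-join round trip. It is written in Ruby (the language used by the other questions on this page) purely for illustration; the same idea carries over to VB's Split and Join.
original = "This  is a  sentence."                  # note the double spaces
words = original.split(/ /, -1)                     # => ["This", "", "is", "a", "", "sentence."]
replaced = words.map { |w| w == "a" ? "the" : w }   # whole-word comparison, so letters inside other words are untouched
puts replaced.join(" ")                             # => "This  is the  sentence."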
Why is your formatting lost? If you split the text on spaces, just attach a space after each element when composing it back from the array. But you will also have to take into account words that end not with a space but with punctuation.
In "This is a simple sentence, eh?", "eh" will be stored as "eh?" because you split on spaces. So you will either have to write a fairly involved punctuation-aware routine or simply use a regex. Be prepared: regex is... tricky.
I am using gsub in Ruby to make a word within text bold. I am using a word boundary so as to not make letters within other words bold, but am finding that this ignores words that have a quote after them. For example:
text.gsub(/#{word}\b/i, "<b>#{word}</b>")
text = "I said, 'look out below'"
word = "below"
In this case the word below is not made bold. Is there any way to ignore certain characters along with a word boundary?
All that escaping in the Regexp.new approach (shown in another answer below) looks quite ugly. You could greatly simplify it by using a Regexp literal:
word = 'below'
text = "I said, 'look out below'"
reg = /\b#{word}\b/i
text.gsub!(reg, '<b>\0</b>')
Also, you can use the in-place modifying form gsub! directly, unless that string is aliased somewhere else in code you are not showing us. Lastly, if you use a single-quoted string literal inside your gsub call, you don't need to escape the backslash.
Be very careful with your \b boundaries; they are defined in terms of word characters, which can surprise you.
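To illustrate the kind of surprise meant here (this example is mine, not the original poster's): \b is anchored to \w characters, so a pattern built from a word that starts or ends with a non-word character will not match where you expect.
"I know C++ well" =~ /\bC\+\+\b/      # => nil (no word boundary between "+" and the space)
"I know C++ well" =~ /\bC\+\+(?!\w)/  # => 7   (a lookahead can stand in for the trailing \b)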
Another option is to build the pattern with Regexp.new:
word = "below"
text = "I said, 'look out below'"
reg = Regexp.new("\\b#{word}\\b", true)
text = text.gsub(reg, "<b>\\0</b>")
Note that when building the pattern from a string you need to write \b as \\b, or it will be interpreted as a backspace character. If word may contain special regex characters, escape it using Regexp.escape.
Also, by replacing with the literal string "<b>#{word}</b>" you may change the casing of the match: "BeloW" would be replaced with "below". Using \0 fixes this by inserting the text that was actually matched. In addition, I added \\b at the beginning; you don't want to look for "day" and end up matching inside "sunday".
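A small sketch of that escaping advice (Regexp::IGNORECASE here plays the same role as the true flag above):
safe = Regexp.escape(word)                              # neutralizes characters like . * + ?
reg  = Regexp.new("\\b#{safe}\\b", Regexp::IGNORECASE)
text.gsub(reg, "<b>\\0</b>")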
Using Ruby 1.8.7
I need to grab everything up to a certain word - and I would like to match against words in an array. Example:
match_words = ['title','author','pages']
item = "Title: Jurassic Park\n"
item += "Author: Michael Crichton\n"
if item =~ /title: (.*)#{match any word in match_words array}/i
#do something here
end
So, this would ideally return "Jurassic Park\n". I am currently matching on newlines but have found that the data I will be matching against might have newlines in strange places, like the middle of the sentence. So, I think matching to the next match_word would be a good idea.
Is this possible, or maybe can be done another way?
Try this on for size
item.scan(/(title|author|pages):\s*(.+)/i)
What this says is: find every place that starts (case-insensitively) with title, author, or pages, followed by a colon, optional whitespace, and then the rest of the line; capture the label and the text after the whitespace. The scan method will match as many times as it can.
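For the sample item above, that call returns the label/value pairs directly (shown here as a quick check):
item.scan(/(title|author|pages):\s*(.+)/i)
# => [["Title", "Jurassic Park"], ["Author", "Michael Crichton"]]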
Just iterate over the match words and do the regex compare as you normally would.
match_words.each do |word|
  if item =~ /#{word}/ # Plus case sensitivity, start/end of item, etc.
    # etc.
  end
end
But if you know that the things you care about are at the beginning of the lines, then split the input string on \n and just use start_with? instead of bothering with the regex; which is better depends partly on what the real data looks like.
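A rough sketch of that line-based approach, assuming the labels really do start their lines:
item.split("\n").select do |line|
  match_words.any? { |w| line.downcase.start_with?(w) }
end
# => ["Title: Jurassic Park", "Author: Michael Crichton"]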
First, create a | separated list of keywords from match_words.
Then, use String#scan to pull the string apart, giving you an array of arrays with your results.
Here's my best shot:
keywords = match_words.join('|')
results = item.scan(/(#{keywords}):\s*(.+?)\s*(?=(?:#{keywords}):|\z)/im)
Results: [["Title", "Jurassic Park"], ["Author", "Michael Crichton"]]
Don't forget to use the /m switch to indicate that you want . to match newlines.
To explain the pattern: we look for a keyword, then use a lookahead (?=...) to find the next keyword (or the end of the string) without including it in the match. We capture everything in between using a lazy .+?, so that we don't swallow other keywords.
I'm helpless on regular expressions so please help me on this problem.
Basically I am downloading web pages and RSS feeds and want to strip everything except plain words: no periods, commas, ifs, ands, or buts. I also have a list of the most common English words that I want to strip out, but I think I know how to do that and don't need a regular expression for it, because that expression would be far too long.
How do I strip everything from a chunk of text except words that are delimited by spaces? Everything else goes in the trash.
This works quite well, thanks to Pavel: .split(/[^[:alpha:]]/).uniq!
I think what fits you best here is splitting the string into words. In this case, String#split is the better option. It accepts a regexp matching the substrings at which the source string should be split into array elements.
In your case, that regexp should be "one or more non-alphabetic characters". The alphabetic character class is written [[:alpha:]]. So here's an example of what you need:
irb(main):001:0> "asd, < er >w , we., wZr,fq.".split(/[^[:alpha:]]+/)
=> ["asd", "er", "w", "we", "wZr", "fq"]
You may further filter the result by intersecting the resultant array with array that contains only English words:
irb(main):001:0> ["asd", "er", "w", "we", "wZr", "fq"] & ["we","you","me"]
=> ["we"]
Try \b\w+\b to match whole words (\w+ rather than \w* so you don't also pick up empty matches at word boundaries).
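As a quick check against the split example above, scanning with that pattern pulls the same words out of the same sample string (note that \w also includes digits and underscores, unlike [[:alpha:]]):
"asd, < er >w , we., wZr,fq.".scan(/\b\w+\b/)
# => ["asd", "er", "w", "we", "wZr", "fq"]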
I'm not sure how to use regular expressions in a function so that I could grab all the words in a sentence starting with a particular letter. I know that I can do:
word =~ /^#{letter}/
to check if the word starts with the letter, but how do I go from word to word. Do I need to convert the string to an array and then iterate through each word or is there a faster way using regex? I'm using ruby so that would look like:
matching_words = Array.new
sentence.split(" ").each do |word|
  matching_words.push(word) if word =~ /^#{letter}/
end
Scan may be a good tool for this:
#!/usr/bin/ruby1.8
s = "I think Paris in the spring is a beautiful place"
p s.scan(/\b[it][[:alpha:]]*/i)
# => ["I", "think", "in", "the", "is"]
\b means "word boundary".
[[:alpha:]] is the POSIX character class for letters, upper or lower case.
You can use \b. It matches word boundaries--the invisible spot just before and after a word. (You can't see them, but oh they're there!) Here's the regex:
/\b(a\w*)\b/
The \w matches a word character, like letters and digits and stuff like that.
You can see me testing it here: http://rubular.com/regexes/13347
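A quick usage sketch, assuming the letter lives in a variable as in the question:
letter = "a"
sentence = "A bad apple spoils the barrel"
sentence.scan(/\b(#{letter}\w*)\b/i).flatten
# => ["A", "apple"]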
Similar to Anon.'s answer:
/\b(a\w*)/g
and then inspect the results with (usually) $n, where n is the n-th capture group. Many libraries return global-search results as arrays per set of parentheses, so in this case $1 would give you an array of all the matching words. You'll want to double-check how whatever library you're using returns matches like this; there's a lot of variation in global-search returns, sadly.
As to the \w vs [a-zA-Z], you can sometimes get faster execution by using the built-in definitions of things like that, as it can easily have an optimized path for the preset character classes.
The /g at the end makes it a "global" search, so it'll find more than one. It's still restricted by line in some languages / libraries, though, so if you wish to check an entire file you'll sometimes need /gm, to make it multi-line
If you want to remove results, like your title (but not question) suggests, try:
/\ba\w*//g
which does a search-and-replace in most languages (/<search>/<replacement>/). Sometimes you need a "s" at the front. Depends on the language / library. In Ruby's case, use:
string.gsub(/(\b)a\w*(\b)/, "\\1\\2")
to keep the surrounding text intact (the captured \b boundaries are zero-width, so \1 and \2 are empty strings), and optionally put any replacement text between \1 and \2. gsub replaces every match; sub replaces only the first.
/\ba[a-z]*\b/i
will match any word starting with 'a'.
The \b indicates a word boundary - we want to only match starting from the beginning of a word, after all.
Then there's the character we want our word to start with.
Then we have as many as possible letter characters, followed by another word boundary.
To match all words starting with t, use:
\bt\w+
That will match test but not footest; \b means "word boundary".
Personally, I think a regex is overkill for this application; simply running a select is more than capable of solving this particular problem.
"this is a test".split(' ').select{ |word| word[0,1] == 't' }
result => ["this", "test"]
Or, if you are determined to use a regex, then go with grep:
"this is a test".split(' ').grep(/^t/)
result => ["this", "test"]
Hope this helps.