An anagram group is a group of words such that any one can be converted into any other just by rearranging the letters. For example, "rats", "tars" and "star" are an anagram group.
Now I have an array of words and I want to find the anagrams in it.
To do this I have written the following code.
It works for some words, like scar and cars, but it doesn't work for [scar, carts].
temp = []
words.each do |e|
  temp = e.split(//) # make an array of letters
  words.each do |z|
    if z.match(/#{temp}/) # match to find scar and cars
      puts "exp is True"
    else
      puts "exp is false"
    end
  end
end
I just thought that, since [abc] means a or b or c, I could split my word into letters and then look for the other cases in the array.
Your algorithm is incorrect and inefficient (quadratic time complexity). Why regex?
Here's another idea. Define the signature of a word as its letters, sorted. For example, the signature of hello is ehllo.
By this definition, anagrams are words that have the same signature; for example, rats, tars and star all have the signature arst. The code to implement this idea is straightforward.
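For instance, a minimal sketch of that idea (the signature helper name below is just illustrative):

def signature(word)
  word.chars.sort.join # sorted letters as a string, e.g. "rats" -> "arst"
end

%w[rats tars star hello].group_by { |w| signature(w) }
#=> {"arst"=>["rats", "tars", "star"], "ehllo"=>["hello"]}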
Two words are anagrams if they contain the same letters. There are several ways to figure out whether they do; the most obvious one is sorting the letters alphabetically. Then you want to separate the words into groups. Here's an idea:
words = %w[cats scat rats tars star scar cars carts]
words.group_by {|word| word.each_char.sort }.values
# => [['cats', 'scat'], ['rats', 'tars', 'star'], ['scar', 'cars'], ['carts']]
The problem is that /#{e.split(//)}/ here is pretty much nonsensical.
To illustrate this, let's see what happens:
word = 'wtf'
letters = word.split(//) # => ["w", "t", "f"]
regex = /#{letters}/ # => /["w", "t", "f"]/
'"'.match(regex) # => 0
','.match(regex) # => 0
' '.match(regex) # => 0
't'.match(regex) # => 0
What happens is that interpolating something into a regex replaces it with the result of its to_s method. And since a character set matches any single character listed inside it, you end up with a regex that matches a double quote, or a comma, or a space, or any of the letters in the original word.
Therefore, I will unfortunately call your solution unsalvageable.
A very easy way to check if two words are anagrams is to sort their characters and see if the result is the same.
The faster way would be:
def is_anagram? w1, w2
  w1.chars.sort == w2.chars.sort
end
You could also do something like this I suppose:
def is_anagram? w1, w2
  w2 = w2.chars
  w1.chars.permutation.to_a.include?(w2)
end
then run it like this:
is_anagram? "rats", "star"
=> true
Note:
This post has been edited as per Cary Swoveland's advice.
words = ['demo', 'none', 'tied', 'evil', 'dome', 'mode', 'live',
         'fowl', 'veil', 'wolf', 'diet', 'vile', 'edit', 'tide',
         'flow', 'neon']
groups = words.group_by { |word| word.split('').sort }
groups.each { |x, y| p y }
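For the word list above, that should print each anagram group, along the lines of:

["demo", "dome", "mode"]
["none", "neon"]
["tied", "diet", "edit", "tide"]
["evil", "live", "veil", "vile"]
["fowl", "wolf", "flow"]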
Related
Given a sentence, I want to count the occurrences of each word.
It is an exercise from Exercism.io (Word Count).
For example, for the input "olly olly in come free":
olly: 2
in: 1
come: 1
free: 1
I have this test, for example:
def test_with_quotations
  phrase = Phrase.new("Joe can't tell between 'large' and large.")
  counts = {"joe"=>1, "can't"=>1, "tell"=>1, "between"=>1, "large"=>2, "and"=>1}
  assert_equal counts, phrase.word_count
end
This is my method:
def word_count
  phrase = @phrase.downcase.split(/\W+/)
  counts = phrase.group_by { |word| word }.map { |k, v| [k, v.count] }
  Hash[*counts.flatten]
end
For the test above I have this failure when I run it in the terminal:
2) Failure:
PhraseTest#test_with_apostrophes [word_count_test.rb:69]:
--- expected
+++ actual
@@ -1 +1 @@
-{"first"=>1, "don't"=>2, "laugh"=>1, "then"=>1, "cry"=>1}
+{"first"=>1, "don"=>2, "t"=>2, "laugh"=>1, "then"=>1, "cry"=>1}
My problem is removing all characters except apostrophes.
The regex in the method almost works:
phrase = @phrase.downcase.split(/\W+/)
but it removes the apostrophes too.
I don't want to keep the single quotes around a word ('Hello' => Hello),
but Don't be cruel => Don't be cruel.
Maybe something like:
string.scan(/\b[\w']+\b/i).each_with_object(Hash.new(0)) { |word, counts| counts[word] += 1 }
The regex employs word boundaries (\b).
The scan outputs an array of the found words, and each word in the array is added to the hash, which has a default value of zero for each key and is incremented on every occurrence.
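For instance, assuming string holds the test phrase from the question:

string = "Joe can't tell between 'large' and large."
string.scan(/\b[\w']+\b/i).each_with_object(Hash.new(0)) { |word, counts| counts[word] += 1 }
#=> {"Joe"=>1, "can't"=>1, "tell"=>1, "between"=>1, "large"=>2, "and"=>1}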
It turns out that my solution, whilst finding all the items and ignoring case when matching, will still leave the items in the case they were found in originally.
That is now a decision for Nelly: either accept it as is, or downcase the original string (or each array item as it is added to the hash).
I'll leave that decision up to you :)
Given:
irb(main):015:0> phrase
=> "First: don't laugh. Then: don't cry."
Try:
irb(main):011:0> Hash[phrase.downcase.scan(/[a-z']+/)
.group_by{|word| word.downcase}
.map{|word, words|[word, words.size]}
]
=> {"first"=>1, "don't"=>2, "laugh"=>1, "then"=>1, "cry"=>1}
With your update, if you want to remove single quotes, do that first:
irb(main):038:0> p2
=> "Joe can't tell between 'large' and large."
irb(main):039:0> p2.gsub(/(?<!\w)'|'(?!\w)/,'')
=> "Joe can't tell between large and large."
Then use the same method.
But you say: gsub(/(?<!\w)'|'(?!\w)/,'') will remove the apostrophe in 'Twas the night before. To which I reply: you will eventually need to build a parser that can tell an apostrophe from a single quote if /(?<!\w)'|'(?!\w)/ is not sufficient.
You can also use word boundaries:
irb(main):041:0> Hash[p2.downcase.scan(/\b[a-z']+\b/)
.group_by{|word| word.downcase}
.map{|word, words|[word, words.size]}
]
=> {"joe"=>1, "can't"=>1, "tell"=>1, "between"=>1, "large"=>2, "and"=>1}
But that does not solve 'Tis the night either.
Another way:
str = "First: don't 'laugh'. Then: 'don't cry'."
reg = /
       [a-z]   # single letter
       [a-z']+ # one or more letters or apostrophes
       [a-z]   # single letter
       '?      # optional trailing apostrophe
      /ix      # case-insensitive and free-spacing regex
str.scan(reg).group_by(&:itself).transform_values(&:count)
#=> {"First"=>1, "don't"=>2, "laugh"=>1, "Then"=>1, "cry'"=>1}
If I have a string "blueberrymuffinsareinsanelydelicious", what is the most efficient way to parse it such that I am left with ["blueberry", "muffins", "are", "insanely", "delicious"]?
I already have my wordlist (mac's /usr/share/dict/words), but how do I ensure that the full word is stored in my array, aka: blueberry, instead of two separate words, blue and berry.
Although there are cases where multiple interpretations are possible and picking the best one can be trouble, you can always approach it with a fairly naïve algorithm like this:
WORDS = %w[
  blueberry
  blue
  berry
  fin
  fins
  muffin
  muffins
  are
  insane
  insanely
  in
  delicious
  deli
  us
].sort_by do |word|
  [ -word.length, word ]
end

WORD_REGEXP = Regexp.union(*WORDS)

def best_fit(string)
  string.scan(WORD_REGEXP)
end
This will parse your example:
best_fit("blueberrymuffinsareinsanelydelicious")
# => ["blueberry", "muffins", "are", "insanely", "delicious"]
Note that this skips any non-matching components.
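As a rough illustration of that caveat, using the same word list:

best_fit("blueberryxyzmuffins")
# => ["blueberry", "muffins"]   # the unmatched "xyz" is silently dropped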
Here's a recursive method which finds the correct sentence in 0.4s on my slowish laptop.
It first imports almost 100K English words and sorts them by decreasing size.
For every word, it checks if the text starts with it.
If it does, it removes the word from the text, keeps the word in an array, and recursively calls itself.
If the text is empty, it means a sentence has been found.
It uses a lazy enumerator to stop at the first sentence found.
text = "blueberrymuffinsareinsanelydeliciousbecausethey'rereallymoistandcolorful"
dictionary = File.readlines('/usr/share/dict/american-english')
                 .map(&:chomp)
                 .sort_by { |w| -w.size }

def find_words(text, possible_words, sentence = [])
  return sentence if text.empty?
  possible_words.lazy.select { |word|
    text.start_with?(word)
  }.map { |word|
    find_words(text[word.size..-1], possible_words, sentence + [word])
  }.find(&:itself)
end
p find_words(text, dictionary)
#=> ["blueberry", "muffins", "are", "insanely", "delicious", "because", "they're", "really", "moist", "and", "colorful"]
p find_words('someword', %w(no way to find a combination))
#=> nil
p find_words('culdesac', %w(culd no way to find a combination cul de sac))
#=> ["cul", "de", "sac"]
p find_words("carrotate", dictionary)
#=> ["carrot", "ate"]
For a faster lookup, it could be a good idea to use a Trie.
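If you want to experiment with that, here is a very small hand-rolled trie sketch (the Trie class and method names below are mine, not a library API). The idea is that prefixes_of returns every dictionary word the remaining text starts with, which could replace testing every word in the sorted list with start_with? on each call:

class Trie
  def initialize
    @root = {}
  end

  # Add one word, one nested hash level per character.
  def insert(word)
    node = word.each_char.reduce(@root) { |n, ch| n[ch] ||= {} }
    node[:end] = true
  end

  # Every inserted word that the given text starts with.
  def prefixes_of(text)
    found = []
    node = @root
    text.each_char.with_index do |ch, i|
      node = node[ch]
      break unless node
      found << text[0..i] if node[:end]
    end
    found
  end
end

trie = Trie.new
%w[in insane insanely].each { |w| trie.insert(w) }
trie.prefixes_of("insanelydelicious")
#=> ["in", "insane", "insanely"]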
Say that we want to count the number of words in a document. I know we can do the following:
text.each_line(){ |line| totalWords = totalWords + line.split.size }
Say that I just want to add some exceptions, such that I don't want to count the following as words:
(1) numbers
(2) standalone letters
(3) email addresses
How can we do that?
Thanks.
You can wrap this up pretty neatly:
text.each_line do |line|
  total_words += line.split.reject do |word|
    word.match(/\A(\d+|\w|\S*@\S+\.\S+)\z/)
  end.length
end
Roughly speaking that defines an approximate email address.
Remember Ruby strongly encourages the use of variables with names like total_words and not totalWords.
Assuming you can represent all the exceptions in a single regular expression regex_variable, you could do:
text.each_line() { |line| totalWords = totalWords + line.split.count { |wrd| wrd !~ regex_variable } }
Your regular expression could look something like:
regex_variable = /\d.|^[a-z]{1}$|\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i
I don't claim to be a regex expert, so you may want to double check that, particularly the email validation part
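As a quick sanity check of that pattern (a sketch with made-up sample tokens; it does not exercise every edge case, e.g. a single digit on its own):

regex_variable = /\d.|^[a-z]{1}$|\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i
%w[42 a user@example.com hello].reject { |word| word =~ regex_variable }
#=> ["hello"]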
In addition to the other answers, a little gem hunting came up with this:
WordsCounted Gem
Get the following data from any string or readable file:
Word count
Unique word count
Word density
Character count
Average characters per word
A hash map of words and the number of times they occur
A hash map of words and their lengths
The longest word(s) and its length
The most occurring word(s) and its number of occurrences.
Count individual strings for occurrences.
A flexible way to exclude words (or anything) from the count. You can pass a string, a regexp, an array, or a lambda.
Customisable criteria. Pass your own regexp rules to split strings if you prefer. The default regexp has two features:
Filters special characters but respects hyphens and apostrophes.
Plays nicely with diacritics (UTF and unicode characters): "São Paulo" is treated as ["São", "Paulo"] and not ["S", "", "o", "Paulo"].
Opens and reads files. Pass in a file path or a url instead of a string.
Have you ever started answering a question and found yourself wandering, exploring interesting, but tangential issues, or concepts you didn't fully understand? That's what happened to me here. Perhaps some of the ideas might prove useful in other settings, if not for the problem at hand.
For readability, we might define some helpers in the class String, but to avoid contamination, I'll use Refinements.
Code
module StringHelpers
  refine String do
    def count_words
      remove_punctuation.split.count { |w|
        !(w.is_number? || w.size == 1 || w.is_email_address?) }
    end

    def remove_punctuation
      gsub(/[.!?,;:)](?:\s|$)|(?:^|\s)\(|\-|\n/, ' ')
    end

    def is_number?
      self =~ /\A-?\d+(?:\.\d+)?\z/
    end

    def is_email_address?
      include?('@') # for testing only
    end
  end
end

module CountWords
  using StringHelpers

  def self.count_words_in_file(fname)
    IO.foreach(fname).reduce(0) { |t, l| t + l.count_words }
  end
end
Note that using must be in a module (possibly a class). It does not work in main, presumably because that would make the methods available in the class self.class #=> Object, which would defeat the purpose of Refinements. (Readers: please correct me if I'm wrong about the reason using must be in a module.)
Example
Let's first informally check that the helpers are working correctly:
module CheckHelpers
  using StringHelpers

  s = "You can reach my dog, a 10-year-old golden, at fido@dogs.org."
  p s = s.remove_punctuation
  #=> "You can reach my dog a 10 year old golden at fido@dogs.org."
  p words = s.split
  #=> ["You", "can", "reach", "my", "dog", "a", "10",
  #    "year", "old", "golden", "at", "fido@dogs.org."]
  p '123'.is_number?  #=> 0
  p '-123'.is_number? #=> 0
  p '1.23'.is_number? #=> 0
  p '123.'.is_number? #=> nil
  p "fido@dogs.org".is_email_address?    #=> true
  p "fido(at)dogs.org".is_email_address? #=> false
  p s.count_words #=> 9 ('a', '10' and "fido@dogs.org" excluded)

  s = "My cat, who has 4 lives remaining, is at abbie(at)felines.org."
  p s = s.remove_punctuation
  p s.count_words
end
All looks OK. Next, I'll put some text in a file:
FName = "pets"
text =<<_
My cat, who has 4 lives remaining, is at abbie(at)felines.org.
You can reach my dog, a 10-year-old golden, at fido@dogs.org.
_
File.write(FName, text)
#=> 125
and confirm the file contents:
File.read(FName)
#=> "My cat, who has 4 lives remaining, is at abbie(at)felines.org.\n
# You can reach my dog, a 10-year-old golden, at fido#dogs.org.\n"
Now, count the words:
CountWords.count_words_in_file(FName)
#=> 18 (9 in each line)
Note that there is at least one problem with the removal of punctuation. It has to do with the hyphen. Any idea what that might be?
Something like...?
def is_countable(word)
  return false if word.size < 2
  return false if word =~ /^[0-9]+$/
  return false if is_an_email_address(word) # you need a gem for this...
  true
end

wordCount = text.split().inject(0) { |count, word| is_countable(word) ? count + 1 : count }
Or, since I am jumping to the conclusion that you can just split your entire text into an array with split(), you might need:
wordCount = 0
text.each_line do |line|
  line.split.each { |word| wordCount += 1 if is_countable(word) }
end
I'm looking to split a random numeric string like "12345567" into the array ["12","345","567"] as simply as possible, basically changing a number into a human-readable number array with splits at thousands, millions, billions, etc.
My previous solution cuts it from the front rather than the back:
"12345567".to_s.scan(/.{1,#{3}}/)
#=> ["123", "455", "67"]
If you are on Rails, you can use the number_with_delimiter helper. In plain Ruby, you can include it.
require 'action_view'
require 'action_view/helpers'
include ActionView::Helpers::NumberHelper
number_with_delimiter("12345567", :delimiter => ',')
# => "12,345,567"
You can do a split on the comma, to get an Array
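For example, chaining the two steps from above:

number_with_delimiter("12345567", :delimiter => ',').split(',')
#=> ["12", "345", "567"]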
You could try the below.
> "12345567".scan(/\d+?(?=(?:\d{3})*$)/)
=> ["12", "345", "567"]
\d+? will do a non-greedy match of one or more digits, which must be followed by groups of exactly three digits (zero or more of them) and then the end of the line. Breaking that down:
\d+? does a non-greedy match of one or more digits.
(?=..) is a positive lookahead assertion, which asserts that the match must be followed by
(?:\d{3})* exactly three digits, zero or more times, so it matches an empty string, or 111, or 111111, i.e. any multiple of 3 digits.
$ is the end-of-line anchor, which matches the boundary at the very end.
OR
> "12345567".scan(/.{1,3}(?=(?:.{3})*$)/)
=> ["12", "345", "567"]
Here's one non-regex solution:
s = "12345567"
sz = s.size
n_first = sz % 3
((n_first>0) ? [s[0,n_first]] : []) + (n_first...sz).step(3).map { |i| s[i,3] }
#=> ["12", "345", "567"]
Another:
s.reverse.chars.each_slice(3).map { |a| a.join.reverse }.reverse
#=> ["12", "345", "567"]
A recursive approach:
def split(str)
  str.size <= 3 ? [str] : (split(str[0..-4]) + [str[-3..-1]])
end
Hardly readable, though. Perhaps a more explicit code layout:
def split(str)
  if str.size <= 3 then
    [str] # Too short, keep it all.
  else
    split(str[0..-4]) + [str[-3..-1]] # Append the last 3, and recurse on the head.
  end
end
Disclaimer: No test whatsoever on performance (or attempt to go for a clear tail recursion)! Just an alternative to explore.
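A quick check against the example input (note that an empty string would come back as [""]):

split("12345567")
#=> ["12", "345", "567"]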
It's hard to tell what you want, but maybe:
"12345567".scan(/^..|.{1,3}/)
=> ["12", "345", "567"]
I want to search for and count the frequency of the words "candy" and "gram", but also of the combinations "candy gram" and "gram candy", in a given text (whole_file).
I am currently using the following code to display the occurrences of "candy" and "gram", but when I add the combinations to the %w list, only the words "candy" and "gram" and their frequencies are displayed. Should I try a different way? Thanks so much.
myArray = whole_file.split
stop_words = %w{ candy gram 'candy gram' 'gram candy' }
nonstop_words = myArray - stop_words
key_words = myArray - nonstop_words
frequency = Hash.new(0)
key_words.each { |word| frequency[word] += 1 }
key_words = frequency.sort_by { |x, y| x }
key_words.each { |word, frequency| puts word + ' ' + frequency.to_s }
It sounds like you're after n-grams. You could break the text into combinations of consecutive words in the first place, and then count the occurrences in the resulting array of word groupings. Here's an example:
whole_file = "The big fat man loves a candy gram but each gram of candy isn't necessarily gram candy"
[["candy"], ["gram"], ["candy", "gram"], ["gram", "candy"]].each do |term|
terms = whole_file.split(/\s+/).each_cons(term.length).to_a
puts "#{term.join(" ")} #{terms.count(term)}"
end
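With the whole_file above, that loop should print something like:

candy 3
gram 3
candy gram 1
gram candy 1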
EDIT: As was pointed out in the comments below, I wasn't paying close enough attention and was splitting the file on every pass through the loop, which is obviously not a good idea, especially if it's large. I also hadn't accounted for the fact that the original question may have needed to sort by the count, although that wasn't explicitly asked.
whole_file = "The big fat man loves a candy gram but each gram of candy isn't necessarily gram candy"
# This is simplistic. You would need to address punctuation and other characters before
# or at this step.
split_file = whole_file.split(/\s+/)
terms_to_count = [["candy"], ["gram"], ["candy", "gram"], ["gram", "candy"]]
counts = []
terms_to_count.each do |term|
terms = split_file.each_cons(term.length).to_a
counts << [term.join(" "), terms.count(term)]
end
# Seemed like you may need to do sorting too, so here that is:
sorted = counts.sort { |a, b| b[1] <=> a[1] }
sorted.each do |count|
puts "#{count[0]} #{count[1]}"
end
Strip punctuation and convert to lower-case
The first thing you probably want to do is remove all punctuation from the string holding the contents of the file and then convert what's left to lower case, the latter so you don't have to worry about 'Cat' and 'cat' being counted as different words. Those two operations can be done in either order.
Changing upper-case letters to lower-case is easy:
text = whole_file.downcase
To remove the punctuation it is probably easier to decide what to keep rather than what to discard. If we only want to keep lower-case letters and whitespace, you can do this:
text = whole_file.downcase.gsub(/[^a-z\s]/, '')
That is, substitute an empty string for all characters other than (^) lowercase letters and whitespace.1
Determine frequency of individual words
If you want to count the number of times text contains the word 'candy', you can use the method String#scan on the string text and then determine the size of the array that is returned:
text.scan(/\bcandy\b/).size
scan returns an array with every occurrence of the string 'candy'; .size returns the size of that array. Here \b ensures 'candy' has a word "boundary" at each end, which could be whitespace or the beginning or end of a line or the file. That's to prevent 'candycane' from being counted.
A second way is to convert the string text to an array of words, as you have done2:
myArray = text.split
If you don't mind, I'd like to call this:
words = text.split
as I find that more expressive.3
The most direct way to determine the number of times 'candy' appears is to use the method Enumerable#count, like this:
words.count('candy')
You can also use the array difference method, Array#-, as you noted:
words.size - (words - ['candy']).size
If you wish to know the number of times either 'candy' or 'gram' appears, you could of course do the above for each and sum the two counts. Some other ways are:
words.size - (words - ['candy', 'gram']).size
words.count { |word| word == 'candy' || word == 'gram' }
words.count { |word| ['candy', 'gram'].include?(word) }
Determine the frequency of all words that appear in the text
Your use of a hash with a default value of zero was a good choice:
def frequency_of_all_words(words)
  frequency = Hash.new(0)
  words.each { |word| frequency[word] += 1 }
  frequency
end
I wrote this as a method to emphasize that words.each... does not return frequency. Often you would see this written more compactly using the method Enumerable#each_with_object, which returns the hash ("object"):
def frequency_of_all_words(words)
  words.each_with_object(Hash.new(0)) { |word, h| h[word] += 1 }
end
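For example:

frequency_of_all_words(%w[candy gram candy])
#=> {"candy"=>2, "gram"=>1}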
Once you have the hash frequency you can sort it as you did:
frequency.sort_by {|word, freq| freq }
or
frequency.sort_by(&:last)
which you could write:
frequency.sort_by {|_, freq| freq }
since you aren't using the first block variable. If you wanted the most frequent words first:
frequency.sort_by(&:last).reverse
or
frequency.sort_by {|_, freq| -freq }
All of these will give you an array. If you want to convert it back to a hash (with the largest values first, say):
Hash[frequency.sort_by(&:last).reverse]
or in Ruby 2.0+,
frequency.sort_by(&:last).reverse.to_h
Count the number of times a substring appears
Now let's count the number of times the string 'candy gram' appears. You might think we could use String#scan on the string holding the entire file, as we did earlier4:
text.scan(/\bcandy gram\b/).size
The first problem is that this won't catch 'candy\ngram'; i.e., when the words are separated by a newline character. We could fix that by changing the regex to /\bcandy\sgram\b/. A second problem is that 'candy gram' might have been 'candy. Gram' in the file, in which case you might not want to count it.
A better way is to use the method Enumerable#each_cons on the array words. The easiest way to show you how that works is by example:
words = %w{ check for candy gram here candy gram again }
#=> ["check", "for", "candy", "gram", "here", "candy", "gram", "again"]
enum = words.each_cons(2)
#=> #<Enumerator: ["check", "for", "candy", "gram", "here", "candy",
# "gram", "again"]:each_cons(2)>
enum.to_a
#=> [["check", "for"], ["for", "candy"], ["candy", "gram"],
# ["gram", "here"], ["here", "candy"], ["candy", "gram"],
# ["gram", "again"]]
each_cons(2) returns an enumerator; I've converted it to an array to display its contents.
So we can write
words.each_cons(2).map { |word_pair| word_pair.join(' ') }
#=> ["check for", "for candy", "candy gram", "gram here",
# "here candy", "candy gram", "gram again"]
and lastly:
words.each_cons(2).map { |word_pair| word_pair.join(' ') }.count { |s| s == 'candy gram' }
#=> 2
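As a side note, since each_cons yields arrays, you could also skip the join and count the pairs directly; this should be equivalent here:

words.each_cons(2).count(['candy', 'gram'])
#=> 2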
1 If you also wanted to keep dashes, for hyphenated words, change the regex to /[^-a-z\s]/ or /[^a-z\s-]/.
2 Note from String#split that .split is the same as both .split(' ') and .split(/\s+/).
3 Also, Ruby's naming convention is to use lower-case letters and underscores ("snake-case") for variables and methods, such as my_array.