I am trying to write a method that returns the highest product of adjacent values within an array. Below is my attempt; however, it fails to return the highest product in some instances, in patterns I am unclear on, and I cannot see what the problem with this code is:
def adjacentElementsProduct(inputArray)
  inputArray.each_with_index do |value, index|
    if inputArray[index + 1]
      products = [] << value * inputArray[index + 1]
      return products.max
    end
  end
end
a) Can anyone help me understand what is wrong with the above implementation?
b) Can anyone suggest a less verbose and simpler way of achieving the desired result?
The failing and passing test cases are from codefights.com (first question of the second chapter, 'Edge of the Ocean').
First of all, you're returning from the method after calculating only the first product. So all your answers are simply the product of the first two numbers. To fix it, initialize your products variable before the loop and put your return after the loop.
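For example, a fixed version of your original approach with those two changes applied might look like this (a minimal sketch, not tested against the codefights cases):

def adjacentElementsProduct(inputArray)
  products = []                      # initialize before the loop
  inputArray.each_with_index do |value, index|
    if inputArray[index + 1]
      products << value * inputArray[index + 1]
    end
  end
  products.max                       # return after the loop
end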
As for a cleaner implementation, take a look at Enumerable's each_cons, which returns consecutive members of an array (for example, each_cons(2) returns consecutive pairs). Then you can multiply each pair in one fell swoop via map and return the maximum.
def adjacentElementsProduct(inputArray)
  inputArray.each_cons(2).map { |a, b| a * b }.max
end
I presume that, for an array arr, you want the largest product
arr[i] * arr[i+1] * arr[i+2] *...* arr[j-1] * arr[j]
where 0 <= i <= j <= arr.size-1. We can do that in a Ruby-like way using Enumerable#each_cons, as @Mark suggested.
def max_adjacent_product(arr)
  (1..arr.size).each_with_object({prod: -Float::INFINITY, adj: []}) do |len, best|
    arr.each_cons(len) do |a|
      pr = a.reduce(:*)
      best.replace({prod: pr, adj: a}) if pr > best[:prod]
    end
  end
end
max_adjacent_product [2, -4, 3, -5, -10]
#=> {:prod=>150, :adj=>[3, -5, -10]}
I loop through the number of adjacent elements to consider. In the example, that would be from 1 to 5. For each number of adjacent elements, len, I then loop through each subarray a of adjacent elements of arr for which a.size equals len. In the example, for len = 3, that would be [2, -4, 3], [-4, 3, -5] and [3, -5, -10]. For each of those subarrays I then compute the product of its elements (-24, 60 and 150 for the example just given). If a product is greater than the best known so far, I record that subarray and its product as the new best. The final value of best is returned by the method.
I am trying to analyze some documents and find similarities in them. After analysis, I have an array whose elements are arrays of data from documents considered similar. But sometimes I have two nearly identical elements, and naturally I want to keep only the larger of them. For simplification:
data = [[1,2,3,4,5,6], [7,8,9,10], [1,2,3,5,6]...]
How can I efficiently process the data so that I get:
data = [[1,2,3,4,5,6], [7,8,9,10]...]
I suppose I could intersect every pair of arrays, and if the intersection matches one of the original arrays, ignore that one. Here is some quick code I wrote:
data = [[1,2,3,4,5,6], [7,8,9,10], [1,2,3,5,6], [7,9,10]]
cleaned = []
data.each_index do |i|
  similar = false
  data.each_index do |j|
    if i == j
      next
    elsif data[i] & data[j] == data[i]
      similar = true
      break
    end
  end
  unless similar
    cleaned << data[i]
  end
end
puts cleaned.inspect
Is this an efficient way to go? Also, the current behaviour only allows dropping arrays that are a few elements short; I might also want to merge similar arrays when they occur:
[[1,2,3,4,5], [1,3,4,5,6]] => [[1,2,3,4,5,6]]
You can delete any element in the list if it is fully contained in another element:
data.delete_if do |arr|
  data.any? { |a2| !a2.equal?(arr) && arr - a2 == [] }
end
# => [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10]]
This is a bit more efficient than your suggestion since once you decide that an element should be removed, you don't check against it in the next iterations.
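The merging case from the question isn't handled above, but here is a rough sketch of one way it might be approached, assuming two arrays count as "similar" when their intersection covers at least some fraction of the smaller one (the merge_similar name and the 0.8 threshold are purely illustrative choices):

def merge_similar(data, threshold = 0.8)
  result = []
  data.each do |arr|
    match = result.find do |existing|
      common = existing & arr
      common.size >= [existing.size, arr.size].min * threshold
    end
    if match
      match.concat(arr - match)   # union, keeping the order of first appearance
    else
      result << arr.dup
    end
  end
  result
end

merge_similar([[1, 2, 3, 4, 5], [1, 3, 4, 5, 6], [7, 8, 9, 10]])
#=> [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10]]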
This is what I have so far:
myArray.map!{ rand(max) }
Obviously, however, sometimes the numbers in the list are not unique. How can I make sure my list only contains unique numbers without having to create a bigger list from which I then just pick the n unique numbers?
Edit:
I'd really like to see this done w/o loop - if at all possible.
(0..50).to_a.sort{ rand() - 0.5 }[0..x]
(0..50).to_a can be replaced with any array.
0 is the min value, 50 is the max value.
x is how many values I want out.
Of course, it's impossible for x to be greater than max - min :)
To expand on how this works:
(0..5).to_a ==> [0,1,2,3,4,5]
[0,1,2,3,4,5].sort{ -1 } ==> [0, 1, 2, 4, 3, 5] # constant
[0,1,2,3,4,5].sort{ 1 } ==> [5, 3, 0, 4, 2, 1] # constant
[0,1,2,3,4,5].sort{ rand() - 0.5 } ==> [1, 5, 0, 3, 4, 2 ] # random
[1, 5, 0, 3, 4, 2 ][ 0..2 ] ==> [1, 5, 0 ]
Footnotes:
It is worth mentioning that at the time this question was originally answered (September 2008), Array#shuffle was either not available or not known to me, hence the approximation using Array#sort.
And there's a barrage of suggested edits to this as a result.
So:
.sort{ rand() - 0.5 }
Can be expressed better, and more briefly, on modern Ruby implementations using
.shuffle
Additionally,
[0..x]
Can be more obviously written with Array#take as:
.take(x)
Thus, the easiest way to produce a sequence of unique random numbers on a modern Ruby is:
(0..50).to_a.shuffle.take(x)
This uses Set:
require 'set'

def rand_n(n, max)
  randoms = Set.new
  loop do
    randoms << rand(max)
    return randoms.to_a if randoms.size >= n
  end
end
Ruby 1.9 offers the Array#sample method, which returns one or more elements randomly selected from an array. The results of #sample won't include the same array element twice.
(1..999).to_a.sample 5 # => [389, 30, 326, 946, 746]
When compared to the to_a.sort_by approach, the sample method appears to be significantly faster. In a simple scenario I compared sort_by to sample, and got the following results.
require 'benchmark'

range    = 0...1000000
how_many = 5

Benchmark.realtime do
  range.to_a.sample(how_many)
end
=> 0.081083

Benchmark.realtime do
  range.sort_by { rand }[0...how_many]
end
=> 2.907445
Just to give you an idea about speed, I ran four versions of this:
Using Sets, like Ryan's suggestion.
Using an Array slightly larger than necessary, then doing uniq! at the end.
Using a Hash, like Kyle suggested.
Creating an Array of the required size, then sorting it randomly, like Kent's suggestion (but without the extraneous "- 0.5", which does nothing).
They're all fast at small scales, so I had them each create a list of 1,000,000 numbers. Here are the times, in seconds:
Sets: 628
Array + uniq: 629
Hash: 645
fixed Array + sort: 8
And no, that last one is not a typo. So if you care about speed, and it's OK for the numbers to be integers from 0 to whatever, then my exact code was:
a = (0...1000000).sort_by{rand}
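For anyone who wants to reproduce the comparison, here is a rough sketch of how the four variants might be benchmarked. The helper names, the MAX value, and the 1.1 oversizing factor are my own illustrative choices, not the code actually used for the timings above:

require 'benchmark'
require 'set'

N   = 1_000_000
MAX = 10_000_000

def with_set(n, max)            # 1. Set-based
  s = Set.new
  s << rand(max) until s.size == n
  s.to_a
end

def with_array_uniq(n, max)     # 2. oversized Array, then uniq!
  a = Array.new((n * 1.1).to_i) { rand(max) }
  a.uniq!
  a.first(n)                    # may fall slightly short; retry in practice
end

def with_hash(n, max)           # 3. Hash-based tracking
  seen = {}
  seen[rand(max)] = true until seen.size == n
  seen.keys
end

def with_fixed_array(n)         # 4. fixed Array 0...n, randomly ordered
  (0...n).sort_by { rand }
end

Benchmark.bm(12) do |x|
  x.report('set')   { with_set(N, MAX) }
  x.report('uniq')  { with_array_uniq(N, MAX) }
  x.report('hash')  { with_hash(N, MAX) }
  x.report('fixed') { with_fixed_array(N) }
end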
Yes, it's possible to do this without a loop and without keeping track of which numbers have been chosen. It's called a Linear Feedback Shift Register: Create Random Number Sequence with No Repeats
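The linked article has the details; as a rough illustration of the idea (my own minimal sketch, not the article's code): a 16-bit Galois LFSR with a maximal-length tap mask steps through every value from 1 to 65535 exactly once before the sequence repeats, so any prefix of its output is duplicate-free, though it is deterministic for a given seed and not statistically random.

def lfsr_sequence(count, seed = 0xACE1)   # seed must be non-zero
  state = seed
  Array.new(count) do
    lsb = state & 1
    state >>= 1
    state ^= 0xB400 if lsb == 1           # maximal-length taps for 16 bits
    state
  end
end

lfsr_sequence(5) #=> five distinct 16-bit values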
[*1..99].sample(4) #=> [64, 99, 29, 49]
According to Array#sample docs,
The elements are chosen by using random and unique indices
If you need SecureRandom (which draws on the operating system's entropy source instead of Ruby's default pseudorandom generator):
require 'securerandom'
[*1..99].sample(4, random: SecureRandom) #=> [2, 75, 95, 37]
How about a play on this? Unique random numbers without needing to use Set or Hash.
x = 0
(1..100).map { x += 1 + rand(100) }.shuffle  # step by at least 1 so the values stay unique
You could use a hash to track the random numbers you've used so far:
seen = {}
max  = 100

(1..10).map {
  x = rand(max)
  while seen[x]
    x = rand(max)
  end
  seen[x] = true   # mark it as used, otherwise duplicates slip through
  x
}
Rather than add the items to a list/array, add them to a Set.
If you have a finite list of possible random numbers (e.g. 1 to 100), then Kent's solution is good.
Otherwise there is no other good way to do it without looping. The problem is you MUST loop if you get a duplicate. My solution should be efficient, and the looping should not be much more than the size of your array (e.g. if you want 20 unique random numbers, it might take 25 iterations on average), though the number of iterations gets worse the more numbers you need and the smaller max is. Here is my above code modified to show how many iterations are needed for the given input:
require 'set'

def rand_n(n, max)
  randoms = Set.new
  i = 0
  loop do
    randoms << rand(max)
    break if randoms.size >= n
    i += 1
  end
  puts "Took #{i} iterations for #{n} random numbers to a max of #{max}"
  return randoms.to_a
end
I could write this code to LOOK more like Array.map if you want :)
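For what it's worth, one hypothetical way that map-like version might look (map_unique is just an illustrative name, not code from the answer above):

require 'set'

def map_unique(n)
  results = Set.new
  results << yield until results.size == n   # keep drawing until we have n unique values
  results.to_a
end

map_unique(5) { rand(100) } #=> e.g. [12, 87, 3, 44, 91]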
Based on Kent Fredric's solution above, this is what I ended up using:
def n_unique_rand(number_to_generate, rand_upper_limit)
  (0..rand_upper_limit - 1).sort_by { rand }[0..number_to_generate - 1]
end
Thanks Kent.
No loops with this method
Array.new(size) { rand(max) }
require 'benchmark'

max  = 1000000
size = 5

Benchmark.realtime do
  Array.new(size) { rand(max) }
end
=> 1.9114e-05
Here is one solution:
Suppose you want these random numbers to be between r_min and r_max. For each element in your list, generate a random number r and set list[i] = list[i-1] + r. This gives you random numbers that are monotonically increasing, guaranteeing uniqueness provided that
r + list[i-1] does not overflow
r > 0
For the first element, you would use r_min instead of list[i-1]. Once you are done, you can shuffle the list so the elements are not so obviously in order.
The only problem with this method is when you go over r_max and still have more elements to generate. In this case, you can reset r_min and r_max to two adjacent elements you have already computed and simply repeat the process. This effectively runs the same algorithm over an interval where no numbers have been used yet. You can keep doing this until the list is populated.
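Here is a minimal sketch of that idea in Ruby (my own illustration, not the poster's code). Instead of the reset-and-repeat refinement described above, it simply caps each random step so the running total can never pass r_max:

def monotonic_unique_rand(n, r_min, r_max)
  raise ArgumentError, 'range too small' if r_max - r_min + 1 < n
  values  = []
  current = r_min + rand(r_max - r_min - n + 2)   # leave room for n - 1 more steps
  n.times do |i|
    values << current
    remaining = n - 1 - i
    break if remaining.zero?
    max_step = r_max - current - remaining + 1    # headroom for the values still needed
    current += 1 + rand(max_step)                 # step by at least 1, so values stay unique
  end
  values.shuffle                                  # hide the increasing order
end

monotonic_unique_rand(5, 1, 20) #=> e.g. [9, 3, 17, 1, 12]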
As long as you know the maximum value in advance, you can do it this way:
class NoLoopRand
  def initialize(max)
    @deck = (0..max).to_a
  end

  def getrnd
    # rand(@deck.length) covers every remaining index;
    # length - 1 would never pick the last element
    @deck.delete_at(rand(@deck.length))
  end
end
and you can obtain random data in this way:
aRndNum = NoLoopRand.new(10)
puts aRndNum.getrnd
You'll get nil once all the values have been exhausted from the deck.
Method 1
Using Kent's approach, it is possible to generate an array of arbitrary length keeping all values in a limited range:
# Generates a random array of length n.
#
# @param n     length of the desired array
# @param lower minimum number in the array
# @param upper maximum number in the array
def ary_rand(n, lower, upper)
  values_set = (lower..upper).to_a
  repetition = n / (upper - lower + 1) + 1
  (values_set * repetition).sample(n)
end
Method 2
Another, possibly more efficient, method adapted from another of Kent's answers:
def ary_rand2(n, lower, upper)
  v = (lower..upper).to_a
  (0...n).map { v[rand(v.length)] }
end
Output
puts (ary_rand 5, 0, 9).to_s # [0, 8, 2, 5, 6] expected
puts (ary_rand 5, 0, 9).to_s # [7, 8, 2, 4, 3] different result for same params
puts (ary_rand 5, 0, 1).to_s # [0, 0, 1, 0, 1] repeated values from limited range
puts (ary_rand 5, 9, 0).to_s # [] no such range :)
This is a problem I have run into in my assignment.
Array A has two elements: array B and array C.
Array B has two elements: array D and array E.
At some point, array X just contains two elements: string a and string b.
I don't know how to determine how deep array A is. For example:
arrA = [
  [
    [1,2]
  ]
]
I have tested with A[0][0][0] == nil, which returns false. Moreover, A[0][0]..[0] == nil always returns false. So I cannot determine how deep array A is this way.
If this is not what you're looking for, it should be a good starting point:
def depth(a)
  return 0 unless a.is_a?(Array)
  1 + depth(a[0])
end
> depth(arrA)
=> 3
Please note that this only measures the depth of the first branch.
My solution below finds the maximum depth of any branch of the array:
Example: for arr = [ [[1],[2,3]], [[[ 3,4 ]]] ], the maximum depth of arr is 4, reached at [3,4].
Approach: flatten by one level at a time and compare.
b, depth = arr.dup, 1
until b == arr.flatten
  depth += 1
  b = b.flatten(1)
end
puts "Array depth: #{depth}" #=> 4
Hope it answers your question.
A simple pure functional recursive solution:
def depth(xs, n = 0)
  case
  when xs.class != Array
    n
  when xs == []
    n + 1
  else
    xs.collect { |x| depth(x, n + 1) }.max
  end
end
Examples:
depth([]) == 1
depth([['a']]) == 2
depth([1, 2, 3, 4, [1, 2, 3, [[2, 2],[]], 4, 5, 6, 7], 5, 5, [[[[[3, 4]]]]], [[[[[[[[[1, 2]]]]]]]]]]) == 10
Here's a one-liner similar to kiddorails' solution extracted into a method:
def depth(array)
  array.to_a == array.flatten(1) ? 1 : depth(array.flatten(1)) + 1
end
It will flatten the array one dimension at a time until it can't be flattened any further, counting the dimensions as it goes.
Why is this better than other solutions out there?
doesn't require modification to native classes (avoid that if possible)
doesn't use metaprogramming (is_a?, send, respond_to?, etc.)
fairly easy to read
works on hashes as well (notice array.to_a)
actually works (unlike only checking the first branch, and other silly stuff)
Also, a one-liner if you want to use it:
def depth(a)
  a.to_s.count("[")
end