Create ranges from numbers - ruby

I have an array of hashes that have numbers:
[
{
y: 316.28,
label: "Kaimur",
color: "Light_Green"
},
{
y: 323.63,
label: "Banka",
color: "Light_Green"
},
{
y: 327.85,
label: "Gaya",
color: "Light_Green"
},
{
y: 346.11,
label: "EastChamparan",
color: "Light_Green"
},
{
y: 358.38,
label: "Nalanda",
color: "Light_Green"
},
{
y: 363.13,
label: "Madhubani",
color: "Light_Green"
}
]
Here my first number is 316.28 and my last number is 363.13. I want to create ranges from this array, like 300 to 400. This is an example using the first and the last element of the array.
I want it to be like 300 to 400, or 100 to 200, or 10 to 20.
If my number is 316.28, I want to return the value 300, and if my value is 363.13, then it should return 400.
How can I do that?
I want to round my values whether the numbers have two, three, or four digits, such as 12.5, 123.45, or 3900.56. Any of my arrays can contain these kinds of numbers. If I have to round each number after finding its length, that becomes a nightmare. I need a function which can do the trick.

Use Float#round with a negative argument:
316.28.round(-2)
#⇒ 300
363.13.round(-2)
#⇒ 400
input = _
#⇒ [{:y=>316.28, :label=>"Kaimur", :color=>"Light_Green"},
# {:y=>323.63, :label=>"Banka", :color=>"Light_Green"},
# {:y=>327.85, :label=>"Gaya", :color=>"Light_Green"},
# {:y=>346.11, :label=>"EastChamparan", :color=>"Light_Green"},
# {:y=>358.38, :label=>"Nalanda", :color=>"Light_Green"},
# {:y=>363.13, :label=>"Madhubani", :color=>"Light_Green"}]
ys = input.map { |e| e[:y] }
#⇒ [316.28, 323.63, 327.85, 346.11, 358.38, 363.13]
Range.new(*[ys.min, ys.max].map { |e| e.round(-e.round.to_s.length + 1) })
#⇒ 300..400
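Since the question asks for a function, here is a minimal sketch that wraps the one-liner above into a reusable helper. The method name rounded_range is my own, not from the original answer; it simply rounds each bound to its own order of magnitude, exactly as the one-liner does, and assumes positive values.
def rounded_range(hashes)
  ys = hashes.map { |e| e[:y] }
  # round each bound to its own order of magnitude (316.28 -> 300, 363.13 -> 400)
  lo, hi = [ys.min, ys.max].map { |e| e.round(-e.round.to_s.length + 1) }
  lo..hi
end

rounded_range(input)
#⇒ 300..400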

Related

Ruby: return an array of random products whose aggregate price is equal to or less than max_price

Note: the closer the sum of the prices is to max_price, the better.
Initial data:
max_price = 11
[
{
id: 1,
price: 5
},
{
id: 2,
price: 6
},
{
id: 3,
price: 6
},
{
id: 4,
price: 1
},
{
id: 5,
price: 3
},
]
For instance, the first time we should return:
[
{
id: 1,
price: 5
},
{
id: 2,
price: 6
}
]
because the sum of prices of these 2 elements is equal to or less than max_price.
But the next time, we should return other random elements whose price sum is equal to or less than max_price:
[
{
id: 3,
price: 6
},
{
id: 4,
price: 1
},
{
id: 5,
price: 3
}
]
Every time, we should return an array of random elements whose sum is equal to or less than max_price.
How can we do that in Ruby?
As @Spickerman stated in his comment, this looks like the knapsack problem, and it isn't language-specific at all.
For a Ruby version, I played around a bit to see how to get the pseudocode working, and I've come up with this as a possible solution for you:
Initialisation of your records:
@prices =
  [
    { id: 1, price: 3 },
    { id: 2, price: 6 },
    { id: 3, price: 6 },
    { id: 4, price: 1 },
    { id: 5, price: 5 }
  ]
# Define value[n, W]
@max_price = 11
@max_items = @prices.size
Defining the Ruby subprocedures, based on that Wiki page: one procedure to create the possibilities, and one procedure to read the possibilities and return an index:
# Define function m so that it represents the maximum value we can get
# under the condition: use first i items, total weight limit is j
def put_recurse(i, j)
  if i.negative? || j.negative?
    @value[[i, j]] = 0
    return
  end

  put_recurse(i - 1, j) if @value[[i - 1, j]] == -1 # m[i-1, j] has not been calculated, we have to call function m
  return unless @prices.count > i

  if @prices[i][:price] > j # item cannot fit in the bag
    @value[[i, j]] = @value[[i - 1, j]]
  else
    put_recurse(i - 1, j - @prices[i][:price]) if @value[[i - 1, j - @prices[i][:price]]] == -1 # m[i-1, j-w[i]] has not been calculated, we have to call function m
    @value[[i, j]] = [@value[[i - 1, j]], @value[[i - 1, j - @prices[i][:price]]] + @prices[i][:price]].max
  end
end
def get_recurse(i, j)
  return if i.negative?

  if @value[[i, j]] > @value[[i - 1, j]]
    @ret << i
    get_recurse(i - 1, j - @prices[i][:price])
  else
    get_recurse(i - 1, j)
  end
end
A procedure to run the previously defined procedures in a nice, orderly fashion:
def knapsack(items, weights)
  # Initialize all value[i, j] = -1
  @value = {}
  @value.default = -1
  @ret = []
  # recurse through results
  put_recurse(items, weights)
  get_recurse(items, weights)
  @prices.values_at(*@ret).sort_by { |x| x[:id] }
end
Running the code to get your results:
knapsack(@max_items, @max_price)
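The question also asks for a different random selection on each call, which the dynamic-programming code above does not address by itself. A minimal, non-optimal sketch of that part (my own addition, not from the original answer): shuffle the items and greedily take them while the running total stays within the budget.
# Returns a random subset whose total price is <= max_price.
# Unlike the knapsack code above, it does not try to get as close
# to max_price as possible; it just returns a random valid subset.
def random_subset(items, max_price)
  total = 0
  items.shuffle.take_while { |item| (total += item[:price]) <= max_price }
end

random_subset(@prices, @max_price)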

How can the later references override the earlier ones in a YAML merge?

merge:
  - &LEFT { x: 1, y: 1, r: 1 }
  - &BIG { x: 2, y: 2, r: 2 }
  - &SMALL { x: 3, y: 3, r: 3 }
  - # Override
    << : [ *BIG, *LEFT, *SMALL ]
    x: 1
    label: big/left/small
I get the output:
{
merge:
[
{ x: 1, y: 1, r: 1 },
{ x: 2, y: 2, r: 2 },
{ x: 3, y: 3, r: 3 },
{ x: 1, y: 2, r: 2, label: 'big/left/small' }
]
}
But the result does not meet my expectation; I hoped the last item in the merge list would be
{ x: 1, y: 3, r: 3, label: 'big/left/small' }.
How can I do this with YAML syntax?
You cannot do this with YAML syntax, and your expectations are unfounded on multiple levels.
An anchored element (whether a sequence element or not) doesn't magically disappear when it is used in a merge alias or any other alias, nor on the basis of it being an anchor.
A toplevel mapping key (merge) doesn't magically disappear because its value is a sequence that contains an element with a merge indicator.
The Merge Key Language-Independent Type documentation doesn't indicate such a deletion, and neither does the YAML specification. Anchors (and aliases) are not normally preserved in the representation in the language you use for loading your YAML, as per the YAML specs. Therefore it is normally not possible to find the anchored elements and delete them after loading.
A generic solution would be to have another toplevel key (here default) that "defines" the anchors, and to work only with the value associated with the merge key:
import ruamel.yaml
yaml_str = """\
default:
    - &LEFT { x: 1, y: 1, r: 1 }
    - &BIG { x: 2, y: 2, r: 2 }
    - &SMALL { x: 3, y: 3, r: 3 }
merge:
    # Override
    << : [ *BIG, *LEFT, *SMALL ]
    x: 1
    label: big/left/small
"""
data = ruamel.yaml.load(yaml_str)['merge']
print(data)
gives:
{'x': 1, 'r': 2, 'y': 2, 'label': 'big/left/small'}
(the order of the keys in your output is of course random)
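Since the rest of this page is Ruby, here is a rough Ruby equivalent of the Python snippet above (my own sketch, not part of the original answer). Ruby's Psych resolves merge keys too; whether aliases need to be enabled explicitly depends on the Psych version.
require 'yaml'

yaml_str = <<~YAML
  default:
    - &LEFT  { x: 1, y: 1, r: 1 }
    - &BIG   { x: 2, y: 2, r: 2 }
    - &SMALL { x: 3, y: 3, r: 3 }
  merge:
    <<: [ *BIG, *LEFT, *SMALL ]
    x: 1
    label: big/left/small
YAML

# aliases: true is required on newer Psych versions, which reject
# aliases in safe_load by default.
data = YAML.safe_load(yaml_str, aliases: true)['merge']
p data
#=> {"x"=>1, "y"=>2, "r"=>2, "label"=>"big/left/small"} (key order may vary)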

Frequency of pairs in an array ruby

I have an array of pairs like this:
arr = [
{lat: 44.456, lng: 33.222},
{lat: 42.456, lng: 31.222},
{lat: 44.456, lng: 33.222},
{lat: 44.456, lng: 33.222},
{lat: 42.456, lng: 31.222}
]
These are geographical coordinates of some places. I want to get an array of these coordinates grouped and sorted by frequency. The result should look like this:
[
{h: {lat: 44.456, lng: 33.222}, fr: 3},
{h: {lat: 42.456, lng: 31.222}, fr: 2},
]
How can I do this?
The standard ways of approaching this problem are to use Enumerable#group_by or a counting hash. As others have posted answers using the former, I'll go with the latter.
arr.each_with_object(Hash.new(0)) { |f,g| g[f] += 1 }.map { |k,v| { h: k, fr: v } }
#=> [{:h=>{:lat=>44.456, :lng=>33.222}, :fr=>3},
# {:h=>{:lat=>42.456, :lng=>31.222}, :fr=>2}]
First, count instances of the hashes:
counts = arr.each_with_object(Hash.new(0)) { |f,g| g[f] += 1 }
#=> {{:lat=>44.456, :lng=>33.222}=>3,
# {:lat=>42.456, :lng=>31.222}=>2}
Then construct the array of hashes:
counts.map { |k,v| { h: k, fr: v } }
#=> [{:h=>{:lat=>44.456, :lng=>33.222}, :fr=>3},
# {:h=>{:lat=>42.456, :lng=>31.222}, :fr=>2}]
g = Hash.new(0) creates an empty hash with a default value of zero. That means that if g does not have a key k, g[k] returns zero. (The hash is not altered.) g[k] += 1 is first expanded to g[k] = g[k] + 1. If g does not have a key k, g[k] on the right side returns zero, so the expression becomes:
g[k] = 1.
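A quick illustration of that default value (a throwaway irb-style example, not from the answer):
g = Hash.new(0)
g[:missing]  #=> 0   (a mere lookup does not add the key)
g            #=> {}
g[:a] += 1   # expands to g[:a] = g[:a] + 1, i.e. g[:a] = 0 + 1
g            #=> {:a=>1}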
Alternatively, you could write:
counts = arr.each_with_object({}) { |f,g| g[f] = (g[f] ||= 0) + 1 }
If you want the elements (hashes) of the array returned to be in decreasing order of the value of :fr (here it's coincidental), tack on Enumerable#sort_by:
arr.each_with_object(Hash.new(0)) { |f,g| g[f] += 1 }.
map { |k,v| { h: k, fr: v } }.
sort_by { |h| -h[:fr] }
arr.group_by(&:itself).map{|k, v| {h: k, fr: v.length}}.sort_by{|h| h[:fr]}.reverse
# =>
# [
# {:h=>{:lat=>44.456, :lng=>33.222}, :fr=>3},
# {:h=>{:lat=>42.456, :lng=>31.222}, :fr=>2}
# ]
arr.group_by { |i| i.hash }.map { |k, v| { h: v[0], fr: v.size } }
#=> [{:h=>{:lat=>44.456, :lng=>33.222}, :fr=>3}, {:h=>{:lat=>42.456, :lng=>31.222}, :fr=>2}]
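On Ruby 2.7 or later, Enumerable#tally does the counting step in one call. A sketch of the same result (my addition, not one of the original answers):
# tally builds the counting hash directly: { coordinate_hash => count }
arr.tally
   .map { |k, v| { h: k, fr: v } }
   .sort_by { |h| -h[:fr] }
#=> [{:h=>{:lat=>44.456, :lng=>33.222}, :fr=>3},
#    {:h=>{:lat=>42.456, :lng=>31.222}, :fr=>2}]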

Find set of objects in array that have same attributes

Given that I have an array with two attributes: 'n_parents' and 'class', which looks like this:
my_arr = [{n_parents: 10, class: 'right'}, {n_parents: 10, class: 'right'}, {n_parents: 5, class: 'left'}, {n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}]
I would like to get an array of the objects that form the largest group sharing both of those attributes. So in the previous example:
result = [{n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}]
Because there are three objects that share n_parents = 2, and class = 'center'.
So far, I know how I can group by those two attributes, but after that I am not sure how to get the set that has the most elements.
Right now I have:
my_arr.group_by { |x| [x[:n_parents], x[:class]] }
This should work for you. It groups the hashes by the hash itself and then gets the largest group by the array count:
my_arr = [{n_parents: 10, class: 'right'}, {n_parents: 10, class: 'right'}, {n_parents: 5, class: 'left'}, {n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}]
my_arr.group_by { |h| h }.max_by { |h,v| v.count }.last
#=>[{:n_parents=>2, :class=>"center"}, {:n_parents=>2, :class=>"center"}, {:n_parents=>2, :class=>"center"}]
Something like below:
my_arr.group_by(&:values).max_by { |_,v| v.size }.last
# => [{:n_parents=>2, :class=>"center"},
# {:n_parents=>2, :class=>"center"},
# {:n_parents=>2, :class=>"center"}]
I am using the code used by the OP and extending it to get the result they want:
my_arr.group_by { |x| [x[:n_parents], x[:class]] }.max_by{|k,v| v.size}.last
Output
#=> [{:n_parents=>2, :class=>"center"}, {:n_parents=>2, :class=>"center"}, {:n_parents=>2, :class=>"center"}]
This is the fourth answer to be posted. The three earlier answers all employed group_by/max_by/last. Sure, that may be the best approach, but is it the most interesting, the most fun? Here are a couple other ways to generate the desired result. When
my_arr = [{n_parents: 10, class: 'right' }, {n_parents: 10, class: 'right' },
{n_parents: 5, class: 'left' }, {n_parents: 2, class: 'center'},
{n_parents: 2, class: 'center'}, {n_parents: 2, class: 'center'}]
the desired result is:
#=> [{:n_parents=>2, :class=>"center"},
# {:n_parents=>2, :class=>"center"},
# {:n_parents=>2, :class=>"center"}]
#1
# Create a hash `g` whose keys are the elements of `my_arr` (hashes)
# and whose values are counts for the elements of `my_arr`.
# `max_by` the values (counts) and construct the array.
el, nbr = my_arr.each_with_object({}) { |h,g| g[h] = (g[h] ||= 0) + 1 }
.max_by { |_,v| v }
arr = [el]*nbr
#2
# Sequentially delete the elements equal to the first element of `arr`,
# each time calculating the number of elements deleted, by determining
# `arr.size` before and after the deletion. Compare that number with the
# largest number deleted so far to find the element with the maximum
# number of instances in `arr`, then construct the array.
arr = my_arr.map(&:dup)
most_plentiful = { nbr_copies: 0, element: [] }
until arr.empty? do
  sz = arr.size
  element = arr.delete(arr.first)
  if sz - arr.size > most_plentiful[:nbr_copies]
    most_plentiful = { nbr_copies: sz - arr.size, element: element }
  end
end
arr = [most_plentiful[:element]] * most_plentiful[:nbr_copies]
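#3 (my addition, not from the original answer)
On Ruby 2.7 or later, Enumerable#tally makes the counting-hash variant shorter. A sketch, assuming the duplicates are exactly equal hashes:
# tally counts each distinct hash, max_by picks the most frequent pair
el, nbr = my_arr.tally.max_by { |_, count| count }
arr = [el] * nbr
#=> [{:n_parents=>2, :class=>"center"},
#    {:n_parents=>2, :class=>"center"},
#    {:n_parents=>2, :class=>"center"}]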

Group array by nested array

I have an array in this format:
[
  { day: 1,
    intervals: [
      { from: 900, to: 1200 }
    ]
  },
  { day: 2,
    intervals: [
      { from: 900, to: 1200 }
    ]
  },
  { day: 3,
    intervals: [
      { from: 900, to: 1200 }
    ]
  },
  { day: 4,
    intervals: [
      { from: 900, to: 1200 }
    ]
  },
  { day: 5,
    intervals: [
      { from: 900, to: 1200 }
    ]
  },
  { day: 6,
    intervals: [
      { from: 900, to: 1200 },
      { from: 1300, to: 2200 }
    ]
  },
  { day: 7,
    intervals: [
      { from: 900, to: 1200 },
      { from: 1300, to: 2200 }
    ]
  }
]
I want to group them like this:
[
  { day: "1-5",
    intervals: [
      { from: 900, to: 1200 }
    ]
  },
  { day: "6-7",
    intervals: [
      { from: 900, to: 1200 },
      { from: 1300, to: 2200 }
    ]
  }
]
Criteria:
Only group if intervals are the same.
Only group if matches are in chronological order, i.e. 1-5 or 1-3, not 1-2-5.
How can this be achieved?
Here's a variation of @joelparkerhenderson's solution that tries to be a bit closer to your requirements regarding the formatting of the output etc.
output = []

grouped = input.group_by do |x|
  x[:intervals]
end

grouped.each_pair do |k, v|
  days = v.map { |day| day[:day] }
  if days.each_cons(2).all? { |d1, d2| d1.next == d2 }
    output << {
      :days => days.values_at(0, -1).join('-'),
      :intervals => k
    }
  end
end
puts output
This produces the required output.
by_interval = data.inject({}) do | a, e |
  i = e[:intervals]
  a[i] ||= []
  a[i] << e[:day].to_i
  a
end

result = by_interval.map do | interval, days |
  slices = days.sort.inject([]) do | a, e |
    a << [] if a == [] || a.last.last != e - 1
    a.last << e
    a
  end
  slices.map do | slice |
    { :day => "#{slice.first}-#{slice.last}", :intervals => interval }
  end
end
result.flatten!
I'm sure there are better approaches :-)
You need to look into the map method for arrays. You need to remap the array and iterate over it to extract the data you want using your "grouping" logic above.
Extending @Michael Kohl's answer:
output = []

grouped = schedules.as_json.group_by do |x|
  x['intervals']
end

grouped.each_pair do |k, v|
  days = v.map { |day| day['day'] }

  grouped_days = days.inject([[]]) do |grouped, num|
    if grouped.last.count == 0 or grouped.last.last == num - 1
      grouped.last << num
    else
      grouped << [num]
    end
    grouped
  end

  grouped_days.each do |d|
    output << {
      heading: d.values_at(0, -1).uniq.join('-'),
      interval: k
    }
  end
end

output
You should probably split that up into separate methods, but you get the idea.
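On newer Rubies, Enumerable#slice_when (2.2+) expresses the "split into consecutive runs" step directly. A sketch against the input array from the question (my own variation, not one of the posted answers):
# group days that share identical intervals, then split each group's
# (sorted) day list wherever the numbers stop being consecutive
input
  .group_by { |h| h[:intervals] }
  .flat_map do |intervals, entries|
    entries.map { |e| e[:day] }
           .sort
           .slice_when { |a, b| b != a + 1 }
           .map { |days| { day: "#{days.first}-#{days.last}", intervals: intervals } }
  end
#=> [{:day=>"1-5", :intervals=>[{:from=>900, :to=>1200}]},
#    {:day=>"6-7", :intervals=>[{:from=>900, :to=>1200}, {:from=>1300, :to=>2200}]}]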
