I have a list of dates like this:
['2020-02-01', '2020-02-05', '2020-02-08']
I want the inverse of this range like this (the missing dates in that range):
['2020-02-02', '2020-02-03', '2020-02-04', '2020-02-06', '2020-02-07']
I am sure I can build some sort of loop that starts at the first date and iterates through to build that second array. Any chance there is a Ruby method / trick to do this faster?
You can use Array#difference or Array#- on a range of Date objects. For example, with Ruby 2.7.1:
require "date"
dates = ['2020-02-01', '2020-02-05', '2020-02-08']
# sort the strings, then convert them to Date objects
dates = dates.sort.map { Date.strptime(_1, "%Y-%m-%d") }
# use first and last date to build an array of dates
date_range = (dates.first .. dates.last).to_a
# remove your known dates from the range
(date_range - dates).map &:to_s
#=> ["2020-02-02", "2020-02-03", "2020-02-04", "2020-02-06", "2020-02-07"]
For compactness, and assuming the dates in dates are already sorted Date objects, you could also use a train wreck like this:
((dates[0]..dates[-1]).to_a - dates).map &:to_s
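Putting the pieces above together, here is a sketch wrapped in a small helper method (the method name missing_dates is my own, not from the original answer):

```ruby
require 'date'

# Returns the ISO-8601 date strings missing from the range spanned
# by the given list (which need not be sorted).
def missing_dates(strings)
  dates = strings.map { |s| Date.strptime(s, '%Y-%m-%d') }.sort
  ((dates.first..dates.last).to_a - dates).map(&:to_s)
end

missing_dates(['2020-02-01', '2020-02-05', '2020-02-08'])
#=> ["2020-02-02", "2020-02-03", "2020-02-04", "2020-02-06", "2020-02-07"]
```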
require 'date'
arr = ['2020-02-26', '2020-03-02', '2020-03-04']
first, last = arr.map { |s| Date.strptime(s, '%Y-%m-%d') }.minmax
#=> [#<Date: 2020-02-26 ((2458906j,0s,0n),+0s,2299161j)>,
# #<Date: 2020-03-04 ((2458913j,0s,0n),+0s,2299161j)>]
(first..last).map { |d| d.strftime('%Y-%m-%d') } - arr
#=> ["2020-02-27", "2020-02-28", "2020-02-29", "2020-03-01",
# "2020-03-03"]
See Date::strptime and Date#strftime.
You can use Array#min, Array#max and Array#- to easily solve this:
dates = ['2020-02-01', '2020-02-05', '2020-02-08']
missing_dates = (dates.min..dates.max).to_a - dates
#=> ["2020-02-02", "2020-02-03", "2020-02-04", "2020-02-06", "2020-02-07"]
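A caveat worth adding (my note, not part of the original answer): the range here is over strings, so it's enumerated with String#succ. That happens to work in this example because only the last digit changes, but across a month boundary it produces strings that aren't valid dates. Converting to Date objects first avoids the problem:

```ruby
require 'date'

# A Range of strings steps with String#succ, so invalid "dates" appear:
("2020-02-28".."2020-03-02").to_a.include?("2020-02-31") #=> true

# Converting to Date objects first is safe across month boundaries:
strs = ['2020-02-28', '2020-03-02']
first, last = strs.minmax.map { |s| Date.parse(s) }
(first..last).to_a.map(&:to_s) - strs
#=> ["2020-02-29", "2020-03-01"]
```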
I'm parsing XML files and want to omit duplicate values from being added to my array. As it stands, the XML looks like this:
<vulnerable-software-list>
<product>cpe:/a:octopus:octopus_deploy:3.0.0</product>
<product>cpe:/a:octopus:octopus_deploy:3.0.1</product>
<product>cpe:/a:octopus:octopus_deploy:3.0.2</product>
<product>cpe:/a:octopus:octopus_deploy:3.0.3</product>
<product>cpe:/a:octopus:octopus_deploy:3.0.4</product>
<product>cpe:/a:octopus:octopus_deploy:3.0.5</product>
<product>cpe:/a:octopus:octopus_deploy:3.0.6</product>
</vulnerable-software-list>
document.xpath("//entry[
number(substring(translate(last-modified-datetime,'-.T:',''), 1, 12)) > #{last_imported_at} and
cvss/base_metrics/access-vector = 'NETWORK'
]").each do |entry|
product = entry.xpath('vulnerable-software-list/product').map { |product| product.content.split(':')[-2] }
effected_versions = entry.xpath('vulnerable-software-list/product').map { |product| product.content.split(':').last }
puts product
end
However, because of the XML input, that picks up quite a few duplicates, so I end up with an array like ['Redhat','Redhat','Redhat','Fedora']
I already have the effected_versions taken care of, since those values don't duplicate.
Is there a method of .map to only add unique values?
If you need an array of unique values, just call the uniq method on the result:
product =
entry.xpath('vulnerable-software-list/product').map do |product|
product.content.split(':')[-2]
end.uniq
There are many ways to do this:
input = ['Redhat','Redhat','Redhat','Fedora']
# approach 1
# self explanatory
result = input.uniq
# approach 2
# iterate through vals, and build a hash with the vals as keys
# since hashes cannot have duplicate keys, it provides a 'unique' check
result = input.each_with_object({}) { |val, memo| memo[val] = true }.keys
# approach 3
# Similar to the previous: iterate through vals and add them to a Set.
# Adding a duplicate value to a set has no effect, and the set converts back to an array.
require 'set' # needed on older Rubies; Set is autoloaded from Ruby 3.2
result = input.each_with_object(Set.new) { |val, memo| memo.add(val) }.to_a
If you're not familiar with each_with_object, it's very similar to reduce.
Regarding performance, you can find some info if you search for it, for example What is the fastest way to make a uniq array?
From a quick test, I see these performing in increasing time: uniq is about 5 times faster than each_with_object, which is about 25% slower than the Set.new approach. Probably because uniq is implemented in C. I only tested with one arbitrary input, though, so it might not hold in all cases.
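If you want to check on your own data, here is a rough sketch using the stdlib benchmark module (the input below is made up, and absolute numbers will vary by input and Ruby version):

```ruby
require 'benchmark'
require 'set'

# Arbitrary test input with plenty of duplicates
input = Array.new(10_000) { %w[Redhat Fedora Debian Ubuntu].sample }

Benchmark.bm(18) do |x|
  x.report('uniq')             { input.uniq }
  x.report('each_with_object') { input.each_with_object({}) { |v, m| m[v] = true }.keys }
  x.report('Set')              { input.each_with_object(Set.new) { |v, m| m.add(v) }.to_a }
end
```

All three produce the same set of unique values; only the timing differs.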
I want to sort a hash that has dates as the key in ascending order. My hash is:
date_hash = {"2018-02-09"=>{"12"=>0},
"2018-02-08"=>{"12"=>0},
"2018-01-09"=>{"12"=>0}}
I tried:
Hash[date_hash.sort_by{|k, _| k.to_date}]
but no luck. It gives the output:
{"2018-01-09"=>{"12"=>0},
"2018-02-09"=>{"12"=>0},
"2018-02-08"=>{"12"=>0}}
A strange thing I noticed is that date_hash comes back sorted just after it is defined! Why isn't the hash in the order I defined it?
In irb:
>> date_hash = {"2018-02-09"=>{"12"=>0},"2018-02-08"=>{"12"=>0},"2018-01-09"=>{"12"=>0}}
=> {"2018-01-09"=>{"12"=>0}, "2018-02-08"=>{"12"=>0}, "2018-02-09"=>{"12"=>0}}
The rule of thumb: don't reach for Rails extensions when good old plain Ruby methods exist.
date_hash.sort_by { |k, _| Date.parse k }.to_h
#⇒ {"2018-01-09"=>{"12"=>0},
# "2018-02-08"=>{"12"=>0},
# "2018-02-09"=>{"12"=>0}}
Or, even without dates:
date_hash.sort_by { |k, _| k.split('-').map(&:to_i) }.to_h
Or, since dates in this format already sort correctly as plain strings:
date_hash.sort_by(&:first).to_h
Proposed by @StefanPochmann, shorter and maybe even cleaner:
date_hash.sort.to_h
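A side note on why the plain sort works (my addition): ISO-8601 date strings (YYYY-MM-DD, zero-padded) sort lexicographically in the same order as the dates they represent, so no parsing is needed:

```ruby
date_hash = { "2018-02-09" => { "12" => 0 },
              "2018-02-08" => { "12" => 0 },
              "2018-01-09" => { "12" => 0 } }

# Hash#sort compares [key, value] pairs, so string keys drive the order
date_hash.sort.to_h.keys
#=> ["2018-01-09", "2018-02-08", "2018-02-09"]
```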
I have a CSV file like:
123,hat,19.99
321,cap,13.99
I have this code:
products = {}
products_file = File.open('text.txt')
while ! products_file.eof?
  line = products_file.gets.chomp.split(',')
  puts line.inspect
  products[line[0].to_i] = [line[1], line[2].to_f]
end
products_file.close
which reads the file: while it's not at the end of the file, it reads each line. I don't need the line.inspect in there, but the loop stores each line as an array inside my products hash.
Now I want to pull the min and max value from the hash.
My code so far is:
read_file = File.open('text.txt', "r+").read
read_file.(?) |line|
products[ products.length] = gets.chomp.to_f
products.min_by { |x| x.size }
smallest = products
puts "Your highest priced product is #{smallest}"
Right now I don't have anything after read_file.(?) |line| so I get an error. I tried using min or max but neither worked.
Without using CSV
If I understand your question correctly, you don't have to use CSV class methods: just read the file (minus any header) into an array and determine the min and max as follows:
arr = ["123,hat,19.99", "321,cap,13.99",
"222,shoes,33.41", "255,shirt,19.95"]
arr.map { |s| s.split(',').last.to_f }.minmax
#=> [13.99, 33.41]
or
arr.map { |s| s[/\d+\.\d+$/].to_f }.minmax
#=> [13.99, 33.41]
If you want the associated records:
arr.minmax_by { |s| s.split(',').last.to_f }
#=> ["321,cap,13.99", "222,shoes,33.41"]
With CSV
If you wish to use CSV to read the file into an array:
arr = [["123", "hat", "19.99"],
["321", "cap", "13.99"],
["222", "shoes", "33.41"],
["255", "shirt", "19.95"]]
then
arr.map(&:last).minmax
# => ["13.99", "33.41"]
or
arr.minmax_by(&:last)
#=> [["321", "cap", "13.99"],
# ["222", "shoes", "33.41"]]
if you want the records. Note that in the CSV examples I didn't convert the last field to a float; comparing the strings works here only because all the prices have the same number of digits before the decimal point and a fixed two after it.
You should use the built-in CSV class as such:
require 'csv'
data = CSV.read("text.txt")
data.sort!{ |row1, row2| row1[2].to_f <=> row2[2].to_f }
least_expensive = data.first
most_expensive = data.last
The Array#sort! method modifies data in place, so it stays sorted by the block's condition for later use. As you can see, the block sorts based on the values in each row at index 2 - in your case, the prices. One caveat: comparing the prices as strings only matches a numeric sort when every price has the same number of digits before the decimal point (lexicographically, "9.99" sorts after "19.99"). Also note that to_f silently returns 0.0 for values with leading non-digit characters (e.g. $9.99), so clean those up before converting.
Then you can grab the most and least expensive, or the 5 most expensive, or whatever, at your leisure.
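To illustrate the string-vs-float comparison point (my example, not from the original answer): lexicographic order diverges from numeric order as soon as prices have different digit counts before the decimal point:

```ruby
prices = ["9.99", "19.99", "5.00"]

# String comparison goes character by character, so "1" < "5" < "9"
prices.sort
#=> ["19.99", "5.00", "9.99"]

# Converting to floats restores the numeric order
prices.sort_by(&:to_f)
#=> ["5.00", "9.99", "19.99"]
```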
Date.today.jd returns a rounded number. Is there a way to get more precision in Ruby?
I want to return a Julian date for the current time in UTC.
The Date#amjd method does what you're asking for, but it returns a Rational; converting to a Float gives you something easier to work with:
require 'date'
DateTime.now.amjd.to_f # => 56759.82092321331
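For context, here is a sketch of how the related methods line up (as I understand them: jd is the integer Julian day number, ajd the fractional astronomical Julian date, and amjd the modified variant, offset by a fixed 2400000.5 days):

```ruby
require 'date'

now = DateTime.now
now.jd         # integer Julian day number (whole days)
now.ajd.to_f   # astronomical Julian date, with fractional day
now.amjd.to_f  # modified Julian date: ajd - 2400000.5
```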
require "date"
p jdate = DateTime.now.julian #=> #<DateTime: 2014-03-30T21:28:30+02:00 (...)
p jdate.julian? # => true
Note, however, that Date#julian converts the date to the Julian calendar, which is not the same thing as the Julian day number the question asks about; for that, use jd, ajd or amjd.
I have an array @dates of UTC dates in increasing order. I want to flip the array so that the dates are in descending order. I am familiar with JS and Java, but I don't know how to use a pointer/index counter in Ruby.
@dates = [] # dates are in here already
@reverseDates = []
@dates.each do |d|
  @reverseDates << @dates.last
end
@dates = @reverseDates
Part of the issue is that I think it duplicates the last element of @dates instead of moving it into the other array when it pushes.
So I got it working by prepending to the array, but how do you use index counters in Ruby to accomplish this?
@reverseDates = []
@dates.each do |d|
  @reverseDates.unshift(d)
end
@dates = @reverseDates
Ruby has array reversal built in:
@dates.reverse!
From http://ruby-doc.org/core-1.8.7/Array.html#method-i-reverse-21
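For completeness, reverse returns a new array while reverse! mutates the receiver, so no index counters are needed either way:

```ruby
dates = ["2020-01-01", "2020-01-02", "2020-01-03"]

reversed = dates.reverse  # returns a new, reversed array; dates is untouched
dates.reverse!            # reverses dates in place

reversed #=> ["2020-01-03", "2020-01-02", "2020-01-01"]
```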