Get unique array of Classes - ruby

I have an array of objects:
@searches
It could return something like:
--- !ruby/object:Profile
attributes:
  id: 2
  name: Basti Stolzi
  username: paintdat
  website: ''
  biography: ''
  created_at: 2013-06-10 19:51:29.000000000 Z
  updated_at: 2013-06-15 10:10:17.000000000 Z
  user_id: 2
--- !ruby/object:Essential
attributes:
  id: 4
  description: ! '#paintdat'
  user_id: 1
  created_at: 2013-06-16 08:19:47.000000000 Z
  updated_at: 2013-06-16 08:19:47.000000000 Z
  photo_file_name: Unknown-1.jpeg
  photo_content_type: image/jpeg
  photo_file_size: 101221
  photo_updated_at: 2013-06-16 08:19:46.000000000 Z
--- !ruby/object:Essential
attributes:
  id: 3
  description: ! '#user_mention_2 well done! #paintdat'
  user_id: 1
  created_at: 2013-06-16 07:56:55.000000000 Z
  updated_at: 2013-06-16 08:00:24.000000000 Z
  photo_file_name: Unknown.jpeg
  photo_content_type: image/jpeg
  photo_file_size: 135822
  photo_updated_at: 2013-06-16 07:56:55.000000000 Z
Now I'd like to get a unique array of the classes within that array, like:
--- !ruby/class 'Profile'
--- !ruby/class 'Essential'
It would be nice to do this without 2 loops. Hope somebody could help me out! <3

I'd recommend combining the map and uniq enumerable methods to do this. Your code would be something like (depending on what is required to select the class from an individual element):
@searches.map{ |search| search.class }.uniq
For more info, check the documentation on Array and Enumerable
Edit:
Note the above can be written more succinctly using the & operator (which converts a symbol into a proc):
@searches.map(&:class).uniq
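For illustration, here is a minimal self-contained sketch of the same idea (the Profile and Essential constants below are bare stand-ins for the real models, not their actual definitions):
# Stub classes standing in for the real ActiveRecord models.
Profile   = Class.new
Essential = Class.new
searches = [Profile.new, Essential.new, Essential.new]
# Map each object to its class, then drop duplicate classes.
p searches.map(&:class).uniq
# => [Profile, Essential]
Strictly speaking map and uniq are two linear passes, but there are no nested loops and no explicit loop in your code.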

Related

How do I add a map to an array of maps in ytt?

I'm trying to add a map to an array of maps in ytt to modify a YAML doc.
I tried the below, but it errors out saying it expects a map but is getting an array.
https://gist.github.com/amalagaura/c8b5c7c92402120ed76dec95dfafb276
---
id: 1
type: book
awards:
  books:
  - id: 1
    title: International Botev
    reviewers:
    - id: 2
      name: PersonB
  - id: 2
    title: Dayton Literary Peace Prize
    reviewers:
    - id: 3
      name: PersonC
#! How do I add a map to an array of maps?
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.all
---
awards:
  books:
    #@overlay/match by=overlay.all, expects="1+"
    #@overlay/match missing_ok=True
    reviewers:
    #@overlay/append
    - id: 1
      name: PersonA
#@ load("@ytt:overlay", "overlay")
#! Add a map to an array of maps:
#@overlay/match by=overlay.all
---
awards:
  books:
  #@overlay/match by=overlay.all, expects="1+"
  - reviewers:
    #@overlay/append
    - id: 1
      name: Person A
You were really close in your solution; all you really needed was to make reviewers an array item. If you want to be able to add reviewers to a book that does not have that key yet, you will have to add a matcher on both the array item and the map item; the gist linked below shows this overlay behavior in action.
If you have more than one #@overlay/match annotation on the same item, the last one wins. There are plans to improve this behavior: https://github.com/k14s/ytt/issues/114.
https://get-ytt.io/#gist:https://gist.github.com/gcheadle-vmware/a6243ee73fa5cc139dba870690eb15c5

Algorithm to remove duplicate records and records with a repetitive pattern

I have some records in a database tracking the price development of some items. These records often contain duplicates and repetitive sequences of price changes. I need to clean those up. Consider the following:
require 'date'

Record = Struct.new(:id, :created_at, :price)

records = [
  Record.new(1,  Date.parse('2017-01-01'), 150_000),
  Record.new(2,  Date.parse('2017-01-02'), 150_000),
  Record.new(3,  Date.parse('2017-01-03'), 130_000),
  Record.new(4,  Date.parse('2017-01-04'), 140_000),
  Record.new(5,  Date.parse('2017-01-05'), 140_000),
  Record.new(6,  Date.parse('2017-01-06'), 137_000),
  Record.new(7,  Date.parse('2017-01-07'), 140_000),
  Record.new(8,  Date.parse('2017-01-08'), 140_000),
  Record.new(9,  Date.parse('2017-01-09'), 137_000),
  Record.new(10, Date.parse('2017-01-10'), 140_000),
  Record.new(11, Date.parse('2017-01-11'), 137_000),
  Record.new(12, Date.parse('2017-01-12'), 140_000),
  Record.new(13, Date.parse('2017-01-13'), 132_000),
  Record.new(14, Date.parse('2017-01-14'), 130_000),
  Record.new(14, Date.parse('2017-01-15'), 132_000)
]
In plain words, the policy should be:
Remove any duplicates of exactly the same price immediately following each other.
Remove any records of a sequence of records with the same two prices jumping up and down for 2 times or more (e.g. [120, 110, 120, 110] but not [120, 110, 120]), so that only the initial price change is preserved.
In the above example the output that I would expect should be:
[
  Record#<id: 1,  created_at: Date#<'2017-01-01'>, price: 150_000>,
  Record#<id: 3,  created_at: Date#<'2017-01-03'>, price: 130_000>,
  Record#<id: 4,  created_at: Date#<'2017-01-04'>, price: 140_000>,
  Record#<id: 6,  created_at: Date#<'2017-01-06'>, price: 137_000>,
  Record#<id: 13, created_at: Date#<'2017-01-13'>, price: 132_000>,
  Record#<id: 14, created_at: Date#<'2017-01-14'>, price: 130_000>,
  Record#<id: 14, created_at: Date#<'2017-01-14'>, price: 132_000>
]
Note: This is the most complicated example I can think of for the time being, if I find more, I'll update the question.
I have no problem helping you with your challenge, dear sir; here you go:
records_to_delete = []

# Clean up duplicates: drop any record whose price equals the
# immediately preceding record's price.
records.each_with_index do |record, i|
  if i != 0 && record.price == records[i - 1].price
    records_to_delete << record.id
  end
end
records = records.delete_if { |record| records_to_delete.include?(record.id) }

# Remove repetitions: when the prices at i and i+2 match and the prices
# at i+1 and i+3 match, the pattern is repeating, so drop i+2 and i+3.
records_to_delete = []
records.each_with_index do |record, i|
  if record.price == records[i + 2]&.price && records[i + 1]&.price == records[i + 3]&.price
    records_to_delete << records[i + 2].id
    records_to_delete << records[i + 3].id
  end
end
records = records.delete_if { |record| records_to_delete.uniq.include?(record.id) }
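As an aside, the first step (collapsing runs of identical consecutive prices) can also be written without index bookkeeping; a small sketch using Enumerable#slice_when:
# Split between adjacent records whose prices differ, producing runs of
# equal prices, then keep only the first record of each run.
deduped = records.slice_when { |a, b| a.price != b.price }.map(&:first)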

ActiveRecord: Unique by attribute

I am trying to filter ActiveRecord_AssociationRelations to be unique by parent id.
So, I'd like a list like this:
[#<Message id: 25, posted_by_id: 3, posted_at: "2014-10-30 06:02:47", parent_id: 20, content: "This is a comment", created_at: "2014-10-30 06:02:47", updated_at: "2014-10-30 06:02:47">,
 #<Message id: 23, posted_by_id: 3, posted_at: "2014-10-28 16:11:02", parent_id: 20, content: "This is another comment", created_at: "2014-10-28 16:11:02", updated_at: "2014-10-28 16:11:02">]
to return this:
[#<Message id: 25, posted_by_id: 3, posted_at: "2014-10-30 06:02:47", parent_id: 20, content: "This is a comment", created_at: "2014-10-30 06:02:47", updated_at: "2014-10-30 06:02:47">]
I've tried various techniques including:
@messages.uniq(&:parent_id) # returns the same list (with duplicate parent_ids)
@messages.select(:parent_id).distinct # returns [#<Message id: nil, parent_id: 20>]
and uniq_by has been removed from Rails 4.1.
Have you tried
group(:parent_id)
It sounds to me like that is what you are after. This returns the first entry with the given parent_id. If you want the last entry, you will have to reorder the result in a subquery and then apply the group.
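For context, a sketch of how the suggestion reads against the question's Message model (MySQL semantics, where grouping returns the first row of each group; on PostgreSQL this can raise a grouping error, as the next answer notes):
# Hypothetical usage: one Message per distinct parent_id.
@messages = Message.group(:parent_id)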
For me, on Rails 3.2 and PostgreSQL, Foo.group(:bar) works on simple queries but raises an error if I have any where clauses on there. For instance:
irb> Message.where(receiver_id: 434).group(:sender_id)
=> PG::GroupingError: ERROR: column "messages.id" must appear in the
GROUP BY clause or be used in an aggregate function
I ended up specifying an SQL 'DISTINCT ON' clause to select. In a Message class I have the following scope:
scope :latest_from_each_sender, -> { order("sender_id ASC, created_at DESC").select('DISTINCT ON ("sender_id") *') }
Usage:
irb> Message.where(receiver_id: 434).latest_from_each_sender
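Note that PostgreSQL requires the ORDER BY list to start with the DISTINCT ON expressions, which is why the scope orders by sender_id first. If you want the final list in a different order, one option is to re-sort the loaded records in Ruby; a small sketch:
# Re-sort the distinct rows by date, newest first.
Message.where(receiver_id: 434)
  .latest_from_each_sender
  .sort_by(&:created_at)
  .reverse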

Get Unique contents from Ruby Hash

I have a Hash @estate:
[#<Estate id: 1, Name: "Thane ", Address: "Thane St.", created_at: "2013-06-21 16:40:50", updated_at: "2013-06-21 16:40:50", user_id: 2, asset_file_name: "DSC02358.JPG", asset_content_type: "image/jpeg", asset_file_size: 5520613, asset_updated_at: "2013-06-21 16:40:49", Mgmt: "abc">,
#<Estate id: 2, Name: "Mumbai", Address: "Mumbai St.", created_at: "2013-06-21 19:13:59", updated_at: "2013-06-21 19:14:28", user_id: 2, asset_file_name: "DSC02359.JPG", asset_content_type: "image/jpeg", asset_file_size: 5085580, asset_updated_at: "2013-06-21 19:13:57", Mgmt: "abc">]
Is it possible to make a new Hash with unique values according to user_id? Currently 2 elements have the same user_id, i.e. 2, and I just want it once in the hash. What should I do?
It seems to be something like a has_many relation between the User model and the Estate model, right? If I understood you correctly, then you in fact need to group your Estates by user_id:
PostgreSQL:
Estate.select('DISTINCT ON (user_id) *').all
MySQL:
Estate.group(:user_id).all
P.S. I would not recommend selecting all records from the database and then processing them with Ruby, as databases handle operations on data much more efficiently.
Here is a simple example to get you off to a good start:
h = [ { a: 2, b: 3}, { a: 2, c: 3 } ]
h.uniq { |i| i[:a] }
# => [{:a=>2, :b=>3}]
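Applied to the records in the question (a sketch, assuming @estate holds the array of Estate objects shown above), that becomes:
# Keep only the first Estate per user_id.
@estate.uniq { |estate| estate.user_id }
# or, more compactly:
@estate.uniq(&:user_id)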

Problem with join

It looks like this:
nearbys(20, :units => :km)
  .joins(:interests)
  .where(["users.id NOT IN (?)", blocked_ids])
  .where("interests.language_id IN (?)",
         interests.collect { |interest| interest.language_id })
This produces the following SQL:
SELECT
  *,
  (111.19492664455873 * ABS(latitude - 47.4984056) * 0.7071067811865475) +
  (96.29763124613503 * ABS(longitude - 19.0407578) * 0.7071067811865475)
    AS distance,
  CASE
    WHEN (latitude >= 47.4984056 AND longitude >= 19.0407578) THEN 45.0
    WHEN (latitude <  47.4984056 AND longitude >= 19.0407578) THEN 135.0
    WHEN (latitude <  47.4984056 AND longitude <  19.0407578) THEN 225.0
    WHEN (latitude >= 47.4984056 AND longitude <  19.0407578) THEN 315.0
  END AS bearing
FROM "users"
INNER JOIN "interests" ON "interests"."user_id" = "users"."id"
WHERE
  (latitude BETWEEN 47.38664309234778 AND 47.610168107652214
   AND longitude BETWEEN 18.875333386667762 AND 19.20618221333224
   AND users.id != 3)
  AND (users.id NOT IN (3))
  AND (interests.language_id IN (1,1))
GROUP BY
  users.id, users.name, users.created_at, users.updated_at, users.location,
  users.details, users.hash_id, users.facebook_id, users.blocked, users.locale,
  users.latitude, users.longitude
ORDER BY
  (111.19492664455873 * ABS(latitude - 47.4984056) * 0.7071067811865475) +
  (96.29763124613503 * ABS(longitude - 19.0407578) * 0.7071067811865475)
The result it returns is correct, except it replaces the id of the user with the id of the interest. What am I missing here?
Thanks for the help!
Edit:
I narrowed the problem down to the geocoder gem.
This works perfectly:
User.where(["users.id NOT IN (?)", blocked_ids])
    .joins(:interests)
    .where("interests.language_id IN (?)",
           interests.collect { |interest| interest.language_id })
and returns:
[#<User id: 8,
name: "George Supertramp",
created_at: "2011-08-13 15:51:46",
updated_at: "2011-08-21 16:11:05",
location: "Budapest",
details: "{\"image\":\"http://graph.facebook.com/...",
hash_id: 1908133256,
facebook_id: nil,
blocked: nil,
locale: "de",
latitude: 47.4984056,
longitude: 19.0407578>]
but when I add .near([latitude, longitude], 20, :units => :km) it returns
[#<User id: 5,
name: "George Supertramp",
created_at: "2011-08-13 15:52:53",
updated_at: "2011-08-13 15:52:53",
location: "Budapest",
details: "{\"image\":\"http://graph.facebook.com/...",
hash_id: 1908133256,
facebook_id: nil,
blocked: nil,
locale: "de",
latitude: 47.4984056,
longitude: 19.0407578>]
because it somehow merges with the interest result:
[#<Interest id: 5,
user_id: 8,
language_id: 1,
classification: 1,
created_at: "2011-08-13 15:52:53",
updated_at: "2011-08-13 15:52:53">]
It seems the problem is with the grouping. How can I circumvent it without forking the gem?
I've temporarily solved the problem by using includes instead of joins. It is a crude solution, and it only works on small sets of data while aggressively cached.
Here is the code:
User.where(["users.id NOT IN (?)", blocked_ids])
    .includes(:interests)
    .near([latitude, longitude], 20, :units => :km)
    .select do |user|
      ([user.interests.find_by_classification(1).language_id,
        user.interests.find_by_classification(2).language_id] -
       [self.interests.find_by_classification(1).language_id,
        self.interests.find_by_classification(2).language_id]).size < 2
    end
I think your join table has an id field, which is causing the issue.
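If that is the case, one possible workaround (a sketch, not from the original answer) is to constrain the SELECT list to the users table so the joined interests columns never shadow users.id:
# Select only users.* so interests.id cannot overwrite users.id.
User.select("users.*")
    .joins(:interests)
    .where(["users.id NOT IN (?)", blocked_ids])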