Searching a Hash for entries that do not contain a value - Ruby

I am not quite sure how to title this question, and I am having a hard time getting it to work, so here goes.
I have a hash of users that varies in size; it can hold anywhere from 2 to 40 entries. I also have a hash of tickets, which I want to search through to find any entries that do not contain the user_id of an entry in my users hash. I am not quite sure how to accomplish this. On my last attempt I used this:
@not_found = []
users.each do |u|
  @not_found += @tickets.select { |t| t["user_id"] != u.user_id }
end
I know this is not the right result, as it compares against only one user_id at a time. What I need is to run through all of the tickets and pull out any that contain a user_id that is not in the users hash.
I hope I am explaining this properly and appreciate any help!

Try this:
known_user_ids = users.map(&:user_id)
tickets_with_unknown_users = @tickets.reject { |t| known_user_ids.include?(t["user_id"]) }
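Here is a minimal, self-contained sketch of that approach with made-up data (the User struct and sample tickets are stand-ins for your real objects):
require 'set'

User = Struct.new(:user_id)
users   = [User.new(1), User.new(2)]
tickets = [
  { "user_id" => 1 },
  { "user_id" => 3 },  # no matching user
  { "user_id" => 2 }
]

known_user_ids = users.map(&:user_id).to_set  # a Set makes include? O(1)
tickets_with_unknown_users = tickets.reject { |t| known_user_ids.include?(t["user_id"]) }
# => [{"user_id"=>3}]
With at most 40 users a plain array is fine; the Set just keeps lookups cheap if either collection grows.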

Related

Rails order active records with assign_attributes

I have a User model, which has a Scoring model, which has a score value.
In my Rails view I want to order my users by score.
User.joins(:scoring).order(:score)
So far, so good.
It gets complicated when I want to dynamically change the score of some users, based on certain attributes such as geolocation, without modifying them in the database.
I tried assign_attributes, but the order does not change, because .order reads the score field from the database.
Use case: I do a user search by geolocation, and users near the geolocation appear in my search with their scores. I would like to weight the scores of nearby users, since they are not at the exact geolocation.
My code:
# Get scoring in other geolocation
@fiches_proxi = Fiche.joins(:user).merge(User.joins(:scoring)).near([@geo.lat_long_DMS.to_f, @geo.lat_long_grd.to_f], proxi_calcule(@geo.population_2012.to_i), units: :km, :order => 'scorings.score DESC').order('scorings.score DESC').where.not(geo: @geo.id).limit(10)
# Get scoring in real geolocation
@fiche_order_algo_all = Fiche.joins(:user).merge(User.joins(:scoring)).where(geo_id: @geo)
# Find all scores
@fiches_all = Fiche.where(id: @fiche_order_algo_all.pluck(:id) + @fiches_proxi.pluck(:id))
@pagy, @fiche_order_algo = pagy(@fiches_all.joins(:user).merge(User.joins(:scoring).order('scorings.score DESC')), items: 12)
@fiche_order_algo.each do |f|
  if f.geo.id != @geo.id
    f.user.scoring.assign_attributes(score: (f.user.scoring.score - 10.0))
  else
    f.user.scoring.score
  end
end
My scores are updated, but the order stays the same!
When you call .each on your relation it returns an array, so you can then use Array#sort_by!:
@fiche_order_algo.each do |f|
  if f.geo.id != @geo.id
    f.user.scoring.assign_attributes(score: (f.user.scoring.score - 10.0))
  else
    f.user.scoring.score
  end
end
@fiche_order_algo.sort_by! { |f| -f.user.scoring.score }  # negated to keep the descending order
If you're working with large data sets, this might not be optimized, but won't be any less efficient than what you already have.
But you can also do it in one go with:
@fiche_order_algo.sort_by! do |f|
  if f.geo.id != @geo.id
    f.user.scoring.assign_attributes(score: (f.user.scoring.score - 10.0))
  end
  -f.user.scoring.score
end
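If it helps to see why this works, here is a standalone plain-Ruby sketch of the same idea, using Structs as stand-ins for the Fiche/User/Scoring models (the names and the -10.0 penalty are taken from the snippet above; nothing here touches a database):
Scoring = Struct.new(:score)
User    = Struct.new(:scoring)
Fiche   = Struct.new(:geo_id, :user)

current_geo_id = 1
fiches = [
  Fiche.new(2, User.new(Scoring.new(50.0))),  # outside the current geo
  Fiche.new(1, User.new(Scoring.new(45.0)))   # at the current geo
]

fiches.sort_by! do |f|
  # apply the penalty in memory only, just as assign_attributes does
  f.user.scoring.score -= 10.0 if f.geo_id != current_geo_id
  -f.user.scoring.score  # negate so higher scores sort first
end

fiches.map { |f| f.user.scoring.score }  # => [45.0, 40.0]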

Ruby on Rails: ActiveRecord sort and put a specific row at the top

I have an ActiveRecord one-to-many association and I need to get all child rows, starting with a specific row.
Something like:
parent.children.startwith(some_child_row_id)
Is there a one-liner?
Edit:
For more clarity, let's say we have an array:
a = ["a", "b", "cg", "d", "e"]
I want "cg" to be the first element. I'll do something like:
element = a.delete("cg") # array is now ["a", "b", "d", "e"]
a.unshift(element)       # array is now ["cg", "a", "b", "d", "e"]
See! The element is moved to index 0.
I want the same for ActiveRecord rows, preferably a one-liner.
I suppose conditional ordering will do the trick.
parent.children.order("CASE WHEN (id = #{some_child_row_id}) THEN 0 ELSE 1 END ASC, id")
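One caveat: interpolating some_child_row_id straight into the SQL string is an injection risk if the value can come from user input. On recent Rails versions (5.2+), a slightly safer variation, assuming the id is an integer, is:
parent.children.order(Arel.sql("CASE WHEN id = #{some_child_row_id.to_i} THEN 0 ELSE 1 END, id"))
Arel.sql marks the fragment as deliberately raw SQL, and .to_i guarantees only a number is interpolated.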
It's pretty simple actually.
All you need is to apply a where clause to the children of your parent.
Try something like this.
parent.children.where('id > ?', start_row)
I believe this is what you were looking for. Hope this helps.

LINQ Query Where Contains

I'm attempting to make a LINQ Where ... Contains query quicker.
The data set contains 256,999 clients. The Ids list is just a simple list of GUIDs and might contain only 3 records.
The query below can take up to a minute to return the 3 records. This is because the logic goes through all 256,999 records to see if any of them are within the list of 3.
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(sql).Where(x => ids.Contains(x.ClientId)).ToList();
I would like the query instead to check whether the three records are within the pool of 256,999; that way it should be much quicker.
I don't want to do a loop, as the 3 records could be far more (thousands), and the more loops, the more hits to the DB.
I don't want to grab all the DB records (256,999) and then do the query, as it would take nearly the same amount of time.
If I grab just the Ids for all 256,999 from the DB, it takes a second. This is where the Ids come from (a filtered, small and simple list).
Any ideas?
Thanks
You've said "I don't want to grab all the db records (256,999) and then do the query as it would take nearly the same amount of time," but also "If I grab just the Ids for all the 256,999 from the DB it would take a second." So does this really take "just as long"?
returnItems = context.ExecuteQuery<DataClass.SelectClientsGridView>(sql).Select(x => x.ClientId).ToList().Where(x => ids.Contains(x)).ToList();
Unfortunately, even if this is fast, it's not an answer, as you'll still need what is effectively the original query to extract the full records for the matched Ids :-(
So, adding an index is likely your best option.
The reason the Id query is quicker is that only one field is returned, and it's only a single-table query. The main query contains subqueries (below). So I get the Ids from a quick and easy query, then use them to get the more detailed information.
SELECT Clients.Id as ClientId, Clients.ClientRef as ClientRef, Clients.Title + ' ' + Clients.Forename + ' ' + Clients.Surname as FullName,
[Address1] ,[Address2],[Address3],[Town],[County],[Postcode],
Clients.Consent AS Consent,
CONVERT(nvarchar(10), Clients.Dob, 103) as FormatedDOB,
CASE WHEN Clients.IsMale = 1 THEN 'Male' WHEN Clients.IsMale = 0 THEN 'Female' END As Gender,
Convert(nvarchar(10), Max(Assessments.TestDate), 103) as LastVisit,
CASE WHEN Max(Convert(integer,Assessments.Submitted)) = 1 Then 'true' ELSE 'false' END AS Submitted,
CASE WHEN Max(Convert(integer,Assessments.GPSubmit)) = 1 Then 'true' ELSE 'false' END AS GPSubmit,
CASE WHEN Max(Convert(integer,Assessments.QualForPay)) = 1 Then 'true' ELSE 'false' END AS QualForPay,
Clients.UserIds AS LinkedUsers
FROM Clients
Left JOIN Assessments ON Clients.Id = Assessments.ClientId
Left JOIN Layouts ON Layouts.Id = Assessments.LayoutId
GROUP BY Clients.Id, Clients.ClientRef, Clients.Title, Clients.Forename, Clients.Surname, [Address1], [Address2], [Address3], [Town], [County], [Postcode], Clients.Consent, Clients.Dob, Clients.IsMale, Clients.UserIds
ORDER BY ClientRef
I was hoping there was an easier way to do the Contains part, as the pool of Ids is smaller than the main pool.
The way I've sped it up for now: I did a String.Join on the list of Ids and added them in a WHERE clause within the main SQL. This has reduced the time to about a second.

How can I group, sum and count with Sequel ORM and PostgreSQL?

This is too tough for me, guys. It's for Jeremy!
I have two tables (although I can also envision needing to join a third) and I want to sum one field and count rows in the same table, while joining with another table, and return the result in JSON format.
First of all, the field that needs to be summed has data type numeric(10,2), and the data is inserted as params['amount'].to_f.
The tables are expense_projects, which has the name of the project and the company id, and expense_items, which has the company_id, item and amount (to mention just the critical columns) - the company_id columns are disambiguated.
So, the following code:
expense_items = DB[:expense_projects].left_join(:expense_items, :expense_project_id => :project_id).where(:project_company_id => company_id).to_a.to_json
works fine but when I add
expense_total = expense_items.sum(:amount).to_f.to_json
I get an error message which says
TypeError - no implicit conversion of Symbol into Integer:
So the first question is: why, and how can this be fixed?
Then I want to join the two tables, get all the project names from the left (first) table, and sum amounts and count items in the second table. I have tried
DB[:expense_projects].left_join(:expense_items, :expense_items_company_id => expense_projects_company_id).count(:item).sum(:amount).to_json
and variations of this, all of which fail.
I would like a result which gets all the project names (even if there are no expense entries) and returns something like:
project   item_count   item_amount
pr 1      7            34.87
pr 2      0            0
and so on. How can this be achieved with one query returning the result in json format?
Many thanks, guys.
Figured it out; I hope this helps somebody else. (The TypeError above, by the way, comes from calling sum(:amount) on a String: after .to_a.to_json, expense_items is a JSON string, and String#sum expects an integer bit width, not a column symbol - the aggregation has to happen on the dataset before converting to JSON.)
DB[:expense_projects___p].where(:project_company_id => user_company_id).
  left_join(:expense_items___i, :expense_project_id => :project_id).
  select_group(:p__project_name).
  select_more { count(:i__item_id) }.
  select_more { sum(:i__amount) }.to_a.to_json
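For what it's worth, the triple-underscore (table___alias) and double-underscore (alias__column) shortcuts are deprecated in newer Sequel releases. Here is a sketch of the same query using the Sequel[] qualification syntax, with COALESCE so projects without items report 0 instead of NULL (column names assumed to match the snippet above):
DB[Sequel[:expense_projects].as(:p)].
  where(:project_company_id => user_company_id).
  left_join(Sequel[:expense_items].as(:i), :expense_project_id => :project_id).
  group(Sequel[:p][:project_name]).
  select(
    Sequel[:p][:project_name],
    Sequel.function(:count, Sequel[:i][:item_id]).as(:item_count),                           # COUNT skips NULLs, so empty projects get 0
    Sequel.function(:coalesce, Sequel.function(:sum, Sequel[:i][:amount]), 0).as(:item_amount)
  ).all.to_json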

DataMapper - count objects uploaded on a specific date

I am building a simple app and I want to show some simple statistics to admins. I want to know whether it is possible, using DataMapper, to get an array of counts of objects from the database that were created on the same date, or do I have to manually go through the records and count them?
Objects have a created_at attribute.
So I managed to solve it. I don't know if it is the right way, but it works:
days = []
count = []
photos_per_day = Photo.aggregate(:all.count, :upload_date)
photos_per_day.each do |ppd|
  count.push(ppd[0])
  days.push(ppd[1].day.to_s + " " + Date::MONTHNAMES[ppd[1].month])
end
{ :days => days, :count => count }.to_json
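A slightly more compact variant of the same aggregate, assuming upload_date is a Date (returning one hash per row is just a stylistic alternative to the two parallel arrays):
stats = Photo.aggregate(:all.count, :upload_date).map do |count, date|
  # each aggregate row comes back as [count, upload_date]
  { :date => date.day.to_s + " " + Date::MONTHNAMES[date.month], :count => count }
end
stats.to_json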
Try this out. Suppose you want to count users grouped by the date they were created:
User.group('date(created_at)').count
=> {"2013-05-20"=>66,
"2013-05-07"=>46,
"2013-05-17"=>9,
"2013-05-13"=>28,
"2013-05-22"=>22,
"2013-05-15"=>43,
"2013-05-08"=>32,
"2013-06-12"=>2,
"2013-05-28"=>22,
"2013-05-16"=>35,
"2013-05-09"=>33,
"2013-05-10"=>132,
"2013-05-21"=>5,
"2013-05-14"=>38,
"2013-05-11"=>4}
