Save API response data daily without overwriting - ruby

UPDATED
I have set up a model Graph with a single attribute, :data.
I request, for example, a Facebook page and store the likes in :data.
How can I schedule that API call every day, such that two things are achievable?
1. The value stored in :data is a hash keyed by updated_at, with a new entry added every day and no risk of overwriting earlier entries.
2. All the :data stored each day can be recalled on a page and presented as, say, a monthly graph, or used in a calculation, for example to show average likes per day.
Controller
def index
  @fb = Graph.fb_page("facebook")
  @data = Graph.new(:fb => [@fb])
  @data.save
end
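One possible shape for the daily append (a minimal sketch, not confirmed by the original post: the serialized Hash column, the use of the asker's fb_page helper, and the rake task name are all assumptions):
class Graph < ActiveRecord::Base
  serialize :data, Hash  # stores { "YYYY-MM-DD" => likes } pairs

  # Append today's likes without touching earlier entries.
  def self.record_daily_likes!
    graph = first || create!(data: {})
    graph.data[Date.current.to_s] = fb_page("facebook")  # assumed API helper from the question
    graph.save!
  end
end

# lib/tasks/graph.rake -- run daily via cron or the whenever gem
namespace :graph do
  task record: :environment do
    Graph.record_daily_likes!
  end
end
Keying the hash by date means re-running the task on the same day only overwrites that day's entry, never the history, and the stored hash makes the monthly graph or a calculation trivial, e.g. graph.data.values.sum / graph.data.size for average likes per day.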

Related

Rails order active records with assign_attributes

I have a User model with an associated Scoring model that holds a score value.
In my Rails view I want to order my users by score:
User.joins(:scoring).order(:score)
So far, so good.
It gets complicated when I want to dynamically change the score of some users, without modifying them in the database, according to certain attributes such as geolocation.
I tried assign_attributes, but the order does not change, because .order reads the score field from the database.
Use case: I do a user search by geolocation, and the users near that location appear in my search with their scores. I would like to weight the scores of nearby users, since they are not at the exact geolocation.
My code:
# Get scoring in other geolocations
@fiches_proxi = Fiche.joins(:user).merge(User.joins(:scoring)).near([@geo.lat_long_DMS.to_f, @geo.lat_long_grd.to_f], proxi_calcule(@geo.population_2012.to_i), units: :km, :order => 'scorings.score DESC').order('scorings.score DESC').where.not(geo: @geo.id).limit(10)
# Get scoring in the real geolocation
@fiche_order_algo_all = Fiche.joins(:user).merge(User.joins(:scoring)).where(geo_id: @geo)
# Find all scores
@fiches_all = Fiche.where(id: @fiche_order_algo_all.pluck(:id) + @fiches_proxi.pluck(:id))
@pagy, @fiche_order_algo = pagy(@fiches_all.joins(:user).merge(User.joins(:scoring).order('scorings.score DESC')), items: 12)
@fiche_order_algo.each do |f|
  if f.geo.id != @geo.id
    f.user.scoring.assign_attributes(score: (f.user.scoring.score - 10.0))
  else
    f.user.scoring.score
  end
end
My score is updated, but my order stays the same!
When you call .each on your relation, the records are loaded into an array, so you can use Array#sort_by:
@fiche_order_algo.each do |f|
  if f.geo.id != @geo.id
    f.user.scoring.assign_attributes(score: (f.user.scoring.score - 10.0))
  else
    f.user.scoring.score
  end
end
@fiche_order_algo.sort_by! { |f| f.user.scoring.score }
If you're working with large data sets this might not be optimal, but it won't be any less efficient than what you already have.
But you can also do it in one go with:
@fiche_order_algo.sort_by! do |f|
  if f.geo.id != @geo.id
    f.user.scoring.assign_attributes(score: (f.user.scoring.score - 10.0))
  end
  f.user.scoring.score
end
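One caveat worth adding (my note, not part of the original answer): sort_by sorts ascending, while the original query ordered scorings.score DESC. To keep the descending order, negate the key:
@fiche_order_algo.sort_by! { |f| -f.user.scoring.score }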

Xeroizer::ApiException : QueryParseException: No property or field 'inv_id' exists

I am trying to get all the invoices in a single API hit, because every user has hundreds of invoices, and fetching them one by one would exceed the API limit (minute limit: 60 calls in a rolling 60-second window).
I am trying to store all the invoice IDs in a single array, get the details of the user from that, and then loop over the records locally and display them. Is that the right way?
invoice_ids = user.estimates.select("invoice_id") || []
xero = Xeroizer::PrivateApplication.new(XERO_CONFIG["key"], XERO_CONFIG["secret"], XERO_CONFIG["path"], :rate_limit_sleep => 5)
invoices = ['795f789b-5958-xxxx-yyyy-48436dbe7757', '987g389b-5958-xxxx-yyyy-68636dbe5589']
inv_id = invoice_ids.pluck(:invoice_id)
invoices = xero.Invoice.all(:where => 'InvoiceID==inv_id')
Also, I am getting the following error:
Xeroizer::ApiException (QueryParseException: No property or field 'inv_id' exists in type 'Invoice')
It looks like the problem is that you're not interpolating inv_id correctly. You probably need to do something like this:
invoices = xero.Invoice.all(:where => "InvoiceID==\"#{inv_id}\"")
You may have to perform some additional formatting on the inv_id variable to make it a valid Xero filter string; see https://github.com/waynerobinson/xeroizer#retrieving-data.
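For a list of IDs, one possible shape (untested; it assumes Xero's filter syntax accepts Guid(...) comparisons joined with OR, as described in the Xero API docs):
ids_clause = inv_id.map { |id| %(InvoiceID==Guid("#{id}")) }.join(" OR ")
invoices = xero.Invoice.all(:where => ids_clause)
With very many IDs the where string can get long, so batching the IDs into a few calls may still be needed to stay under the rate limit.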

Fetch first charge of a customer in stripe

I am reading the Stripe documentation and I want to fetch the first charge of a customer. Currently I am doing:
charge_list = Stripe::Charge.list(
  { customer: "cus_xxx" },
  "sk_test_xxxxxx"
)
first_charge = charge_list.data.last
This works because the Stripe API returns the charge list sorted with the most recent charges first, but I don't think it is a good approach. Can anyone help me fetch the first charge by a customer, or sort the list in ascending order of created date, so that I could take the first object from the array?
It seems there is no reverse-order sorting feature in the Stripe API.
Also remember that the first charge may not be in the first page of the result set, so you have to iterate using #auto_paging_each.
A quick possible solution:
charge_list = Stripe::Charge.list(
  { customer: "cus_xxx", limit: 100 },  # limit 100 to reduce the number of requests
  "sk_test_xxxxxx"
)
first_charge = nil
charge_list.auto_paging_each { |c| first_charge = c }
You may want to persist the result somewhere, since this is a heavy operation.
But the cleanest solution IMO would be to store all charge records in your own DB and run subsequent queries against that.
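A sketch of that caching idea (the Customer model and first_charge_id column are assumptions, not from the original post):
customer = Customer.find_by!(stripe_id: "cus_xxx")
if customer.first_charge_id
  # Cached: one cheap retrieve instead of paging the whole charge history.
  first_charge = Stripe::Charge.retrieve(customer.first_charge_id, "sk_test_xxxxxx")
else
  charge_list = Stripe::Charge.list({ customer: customer.stripe_id, limit: 100 }, "sk_test_xxxxxx")
  first_charge = nil
  charge_list.auto_paging_each { |c| first_charge = c }
  customer.update!(first_charge_id: first_charge.id) if first_charge
end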

Data mapper - count objects uploaded on specific date

I am building a simple app and I want to show some simple statistics to admins. Is it possible, using DataMapper, to get an array of counts of objects in the database that were created on the same date, or do I have to go through the records manually and count them?
The objects have a created_at attribute.
So I managed to solve it. I don't know if it is the right way, but it works:
days = Array.new
count = Array.new
photos_per_day = Photo.aggregate(:all.count, :upload_date)
photos_per_day.each do |ppd|
  count.push(ppd[0])
  days.push(ppd[1].day.to_s + " " + Date::MONTHNAMES[ppd[1].month])
end
{:days => days, :count => count}.to_json
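A slightly tidier version of the same aggregation (same assumptions: a Photo model with an upload_date property, and aggregate returning [count, date] pairs in field order):
photos_per_day = Photo.aggregate(:all.count, :upload_date)
days, count = photos_per_day.map do |c, date|
  ["#{date.day} #{Date::MONTHNAMES[date.month]}", c]  # e.g. "20 May"
end.transpose  # assumes at least one photo exists
{ :days => days, :count => count }.to_json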
Try this out (ActiveRecord syntax). Suppose you want to count users created on specific dates:
User.group('date(created_at)').count
=> {"2013-05-20"=>66,
"2013-05-07"=>46,
"2013-05-17"=>9,
"2013-05-13"=>28,
"2013-05-22"=>22,
"2013-05-15"=>43,
"2013-05-08"=>32,
"2013-06-12"=>2,
"2013-05-28"=>22,
"2013-05-16"=>35,
"2013-05-09"=>33,
"2013-05-10"=>132,
"2013-05-21"=>5,
"2013-05-14"=>38,
"2013-05-11"=>4}

Concurrency On Association In ActiveRecord

I have an app where people sign up for items. Each item has a limited number of slots. How can I handle concurrency? I've tried this in the Item class:
def sign_up(signup)
  ActiveRecord::Base.transaction do
    return 'Sorry, that item is full.' if full?
    signups << signup
    sheet.save!
    nil
  end
end

def full?
  locked_signups = signups.lock(true).all
  locked_signups.size >= max_signups
end
Is what I am trying to do even possible through AR? Do I need to implement my own locking via a column? Any suggestions are welcome.
UPDATE: I got this working per tadman's answer. Here's the code that works:
rows_updated = ActiveRecord::Base.transaction do
  Item.connection.update "update items set signup_count=signup_count+1 where id=#{ActiveRecord::Base.sanitize(self.id)} and signup_count<quantity"
end
return 'Sorry, that item is full. Refresh the page to see what\'s still open.' if rows_updated < 1
I can think of two approaches to this sort of problem that are reliable.
Counter Column
You'll create a "remaining stock" column and update it atomically:
UPDATE sheet SET signups_remaining=signups_remaining-:count WHERE id=:id AND signups_remaining>=:count
You'll have to bind the :count and :id values accordingly. If this query updates a row, it means there was a sufficient number of signups left.
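In ActiveRecord terms, a minimal sketch of the same counter-column update (model, column, and error handling are assumptions):
rows = Item.where("id = ? AND signups_remaining >= ?", item.id, count)
           .update_all(["signups_remaining = signups_remaining - ?", count])
return 'Sorry, that item is full.' if rows < 1
Because the WHERE check and the decrement happen in a single UPDATE, two concurrent signups can never both succeed on the last slot.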
Reserved Signups
Create the signup records in advance and allocate them:
UPDATE signups SET allocation_id=:allocation_id WHERE allocation_id IS NULL LIMIT :count
This will update zero or more signup records, so you'll have to check that you reserved the correct count before committing your transaction.
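A hedged ActiveRecord sketch of the reservation approach (the Signup model, the allocation record, and the column names are assumptions); the re-check on allocation_id in the second query keeps the update itself atomic even if another process grabbed some rows in between:
ActiveRecord::Base.transaction do
  ids = Signup.where(item_id: item.id, allocation_id: nil).limit(count).pluck(:id)
  updated = Signup.where(id: ids, allocation_id: nil).update_all(allocation_id: allocation.id)
  # Fewer rows than requested means another process won the race; undo everything.
  raise ActiveRecord::Rollback unless updated == count
end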
