I am working with an API and trying to build simple pagination from scratch in Sinatra.
This is my Sinatra route. It performs the API call, returns 25 matters via the limit: 25 parameter, and stores the resulting array in @matter:
get '/matters' do
  @matter = client.matters.list(limit: 25)
  slim :matters
end
My plan for fetching additional matters is:
get '/next_matters' do
  @matter = client.matters.next_page
  slim :matters
end
which supposedly returns the next 25 matters.
This is my Slim code at the bottom of the list of matters:
a href="/next_matters" Next
This does not work; it returns nothing. I am sure this is because when I call /next_matters, it does not remember the state from client.matters.list(limit: 25).
Do I need a helper method for this?
When I try:
get '/matters' do
  @matter = client.matters.list(limit: 25)
  @matter2 = client.matters.next_page
  slim :matters
end
I can list 50 matters. Somehow I need to pass along the fact that I have already called list.
How do I do this?
You would do this by passing in an offset to the database as part of the query.
i.e.:
SELECT * FROM matters LIMIT 25;
Then on requesting the next page, you would do:
SELECT * FROM matters LIMIT 25 OFFSET 25;
Therefore, the next_page method needs to accept at least one parameter: an offset.
Pseudo code:
def next_page(offset)
  # first page offset would be 0
  # second page offset would be 25
  # ...
  limit(25).offset(offset)
end
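Because HTTP is stateless, the cleanest way in Sinatra is to carry that offset between requests in the query string. A minimal sketch, assuming client.matters.list accepts an :offset option alongside :limit (the option name is an assumption; check your client library's documentation):
get '/matters' do
  offset = params.fetch('offset', 0).to_i
  # Assumption: the client supports an offset option for paging
  @matter = client.matters.list(limit: 25, offset: offset)
  @next_offset = offset + 25
  slim :matters
end
The Next link in the Slim view then points back at the same route with the larger offset:
a href="/matters?offset=#{@next_offset}" Next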
Related
I need to gather elements by calling an API over HTTP.
I am able to get the result, but the API can't return more than 100 results per call.
For this example, here is my step, where you can see that I could have 3837 elements (total_count) but only 100 elements are returned (limit). The offset can be used to start at a different element of the 3837.
Notice that I can set the limit, but I can't use a value bigger than 100.
Finally, I would like a loop that makes the HTTP call x times (39 times for this example), increasing the offset each time, and consolidates the 39 results into one table.
Do you have any tips to help with this problem? Can we do some kind of loop?
Thanks in advance.
Here's a snippet I used with the Jira API. You should be able to get the gist from it.
let
    // First call: fetch one page to learn the total number of issues
    Source = Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100", startAt="0"]])),
    totalIssuesCount = Source[total],
    // Build a list of startAt values, starting at 0, incrementing 100 per item
    startAtList = List.Generate(() => 0, each _ < totalIssuesCount, each _ + 100),
    // Make one request per startAt value
    urlList = List.Transform(startAtList, each Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100", startAt=Text.From(_)]])))
in
    urlList
This came from Nick Cerneaz (and Tiago Machado) from their posts on this thread.
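If you are working from Ruby rather than Power Query, the same pattern is a plain loop over offsets. A hedged sketch, assuming a JSON API that accepts limit and offset query parameters and returns a total_count field and an items array (the URL and field names are placeholders):
require 'net/http'
require 'json'

base = 'https://api.example.com/elements'  # placeholder URL
limit = 100
all_items = []

# First call to learn the total count
first = JSON.parse(Net::HTTP.get(URI("#{base}?limit=#{limit}&offset=0")))
total = first['total_count']
all_items.concat(first['items'])

# Remaining calls: offsets 100, 200, ... (38 more calls for 3837 elements)
(limit...total).step(limit) do |offset|
  page = JSON.parse(Net::HTTP.get(URI("#{base}?limit=#{limit}&offset=#{offset}")))
  all_items.concat(page['items'])
end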
I have an application implementing a RESTful API. I have two methods, create_order and order_status. The first method creates an order and persists it with the current time in the order.time field:
order.time = Time.now
The second method responds with a hardcoded value:
:eta => 20.minutes.from_now.to_i
Instead of returning the hardcoded 20 minutes, how can I return a relative value that decreases as time elapses (depending on when the status request was made)?
At the beginning of the order they are the same (20.minutes.from_now.to_i), but if the request is made after 5 minutes, it should be 15.minutes.from_now.to_i.
I would save another attribute along with order.time.
For example: order.eta = Time.now + 1200
Or else: order.processing_time = 1200, and then order.eta could be calculated.
I like the second solution better, as it enables different processing times for different orders.
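A minimal sketch of the second option, assuming an ActiveRecord-style Order model with a processing_time column in seconds (the model and column names are illustrative):
# create_order: persist the order time and a per-order processing time
order.time = Time.now
order.processing_time = 1200  # 20 minutes, in seconds

# order_status: the absolute ETA is fixed at creation, so the value
# returned shrinks on its own as the clock advances toward it
{ :eta => (order.time + order.processing_time).to_i }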
So, what I'm trying to do is make calls to a reporting API, filtering by all possible breakdowns (break the reports down by site, advertiser, ad type, campaign, etc.). One issue is that the breakdowns can be unique to each login.
Example:
user1: alice123's reporting breakdowns are ["site","advertiser","ad_type","campaign","line_items"]
user2: bob789's reporting breakdowns are ["campaign","position","line_items"]
When I first built the code for this reporting API, I only had one login to test with, so I hard-coded the loops for the dimensions (["site","advertiser","ad_type","campaign","line_items"]). I pinged the API for a report by sites; then for each site I pinged for advertisers, for each advertiser I pinged for the next dimension, and so on, leaving me with a nested loop roughly six layers deep.
Basically, what I'm doing is:
sites = mechanize.get "#{base_url}/report?dim=sites"
sites = Yajl::Parser.parse(sites.body) # JSON parser
sites.each do |site|
  advertisers = mechanize.get "#{base_url}/report?site=#{site.fetch("id")}&dim=advertiser"
  advertisers = Yajl::Parser.parse(advertisers.body) # JSON parser
  advertisers.each do |advertiser|
    ad_types = mechanize.get "#{base_url}/report?site=#{site.fetch("id")}&advertiser=#{advertiser.fetch("id")}&dim=ad_type"
    ad_types = Yajl::Parser.parse(ad_types.body) # JSON parser
    ad_types.each do |ad_type|
      # ...and so on...
    end
  end
end
GET <api_url>/?dim=<dimension to breakdown>&site=<filter by site id>&advertiser=<filter by advertiser id>...etc...
At the end of the nested loop, I'm left with a report broken down with as much granularity as possible.
This worked while I thought there was only one path of breaking down, but apparently each account can have different dimension breakdowns.
So what I'm asking is: given an array of breakdowns, how can I set up a nested loop that traverses the dimensions dynamically, down to full granularity?
Thanks.
I'm not sure what your JSON/GET returns exactly, but for a problem like this you would need recursion.
Something like this, perhaps? It's not very elegant and can definitely be optimised further, but it should hopefully give you an idea.
some_hash = {:id=>"site-id", :body=>{:id=>"advertiser-id", :body=>{:id=>"ad_type-id", :body=>{:id=>"something-id"}}}}
@breakdowns = ["site", "advertiser", "ad_type", "something"]

def recursive(some_hash, str = nil, i = 0)
  if @breakdowns[i+1].nil?
    str += "#{@breakdowns[i]}=#{some_hash[:id]}"
  else
    str += "#{@breakdowns[i]}=#{some_hash[:id]}&dim=#{@breakdowns[i + 1]}"
  end
  p str
  some_hash[:body].is_a?(Hash) ? recursive(some_hash[:body], str.gsub(/dim.*/, ''), i + 1) : return
end

recursive(some_hash, 'base-url/report?')
=> "base-url/report?site=site-id&dim=advertiser"
=> "base-url/report?site=site-id&advertiser=advertiser-id&dim=ad_type"
=> "base-url/report?site=site-id&advertiser=advertiser-id&ad_type=ad_type-id&dim=something"
=> "base-url/report?site=site-id&advertiser=advertiser-id&ad_type=ad_type-id&something=something-id"
If you are just looking to map your data, you can recursively map it to a hash, as another user pointed out. If you actually want to do something with the data inside the loop and dynamically recreate the loop structure you listed in your question (though I would advise finding a different solution), you can use metaprogramming as follows:
require 'active_support/inflector'

# Assume we are given an input of breakdowns.
# I put 'testarr' in place of the operations you perform on each local
# variable, for brevity and so you can see that the code works.
# You will have to modify it to suit your needs.
result = []
testarr = [1,2,3]
b = binding
breakdowns.each do |breakdown|
  snippet = <<-END
    eval("#{breakdown.pluralize} = testarr", b)
    eval("#{breakdown.pluralize}", b).each do |#{breakdown}|
  END
  result << snippet
end
result << "end\n"*breakdowns.length
eval(result.join)
Note: this approach is probably frowned upon, and as I've said, I'm sure there are other ways of accomplishing what you are trying to do.
I am receiving some data which is parsed in a Ruby script; a sample of the parsed data looks like this:
{"address":"00","data":"FF"}
{"address":"01","data":"00"}
That data relates to the status (on/off) of plant items (fans, coolers, heaters, etc.). The address is a hex number that tells you which set of bits the data refers to, and both values are received as hex, as in this example. So for the example above, the lookup table would be:
            Bit1  Bit2  Bit3  Bit4  Bit5  Bit6  Bit7  Bit8
Address 00: Fan1  Fan2  Fan3  Fan4  Cool1 Cool2 Cool3 Heat1
Address 01: Hum1  Hum2  Fan5  Fan6  Heat2 Heat3 Cool4 Cool5
There are 16 addresses per block (this example covers 00-0F).
Data: FF tells me that all items in address 00 are set on (high/1). I then need to output the result of the lookup for each individual bit, e.g.:
{"element":"FAN1","data":{"type":"STAT","state":"1"}}
{"element":"FAN2","data":{"type":"STAT","state":"1"}}
{"element":"FAN3","data":{"type":"STAT","state":"1"}}
{"element":"FAN4","data":{"type":"STAT","state":"1"}}
{"element":"COOL1","data":{"type":"STAT","state":"1"}}
{"element":"COOL2","data":{"type":"STAT","state":"1"}}
{"element":"COOL3","data":{"type":"STAT","state":"1"}}
{"element":"HEAT1","data":{"type":"STAT","state":"1"}}
A lookup table could be anything up to 2048 bits (though I don't have anything that size in use at the moment; that is the maximum I'd need to scale to).
The data field is the status of all 8 bits per address; some may be on, some may be off, and this updates every time my source pushes new data at me.
I'm looking for a way to do this in code, ideally explained for the lay person, as I'm still very new to Ruby. There was a code example here, but it was not used in the end and has been removed from the question so as not to confuse.
Based on the example below, I've used the following code to make some progress. (Note: this integrates with an existing script, not all of which is shown here. Nor is the lookup table shown, as it's quite big now.)
data = [feeder]
data.each do |str|
  hash = JSON.parse(str)
  address = hash["address"]
  number = hash["data"].to_i(16)
  binary_str = sprintf("%0.8b", number)
  binary_str.reverse.each_char.with_index do |char, i|
    break if i+1 > max_binary_digits
    mouse = {"element"=>table[address][i], "data"=>{"type"=>'STAT', "state"=>char}}
    mousetrap = JSON.generate(mouse)
    puts mousetrap
  end
end
This gives me an output of {"element":"COOL1","data":{"type":"STAT","state":"0"}} etc., which in turn gives the correct output via my node.js script.
Having got this to work and captured a whole bunch of data from last night and this morning, I have a new problem/query. Now that I've built my lookup table, I need some of the results to be modified based on the result of the lookup. I have other sensors which need to generate a different output to feed my SVG, for example:
FAN objects need to output {"element":"FAN1","data":{"type":"STAT","state":"1"}}
DOOR objects need to output {"element":"DOOR1","data":{"type":"LAT","state":"1"}}
SWIPE objects need to output {"element":"SWIPE6","data":{"type":"ROUTE","state":"1"}}
ALARM objects need to output {"element":"PIR1","data":{"type":"PIR","state":"0"}}
This is due to the way the SVG deals with updating; I'm not in a position to modify the DOM stuff, so I need to fix this in my Ruby script.
So to address this, I ended up making an exact copy of my existing lookup table, and rather than listing the devices, I listed the type of output, like so:
Address 00: STAT STAT STAT ROUTE ROUTE LAT LAT PIR
Address 01: PIR PIR STAT ROUTE ROUTE LAT LAT PIR
This might be very dirty, and it means I have to duplicate my lookup table, but it might actually be better for my specific needs, as devices within the dataset could have any name (I have no control over the received data). Having built the new lookup table, I modified the code I had been provided with below (and had already used for the original lookup), but I had to remove these two lines. Without removing them, I was getting the result of the lookup output 8 times!
binary_str.reverse.each_char.with_index do |char, i|
break if i+1 > max_binary_digits
The final array was built using the following:
mouse = {"element"=>+table[address][i], "data"=>{"type"=>typetable[address][i], "state"=>char}}
mousetrap = JSON.generate(mouse)
puts mousetrap
This gave me exactly the output I require, and I was able to integrate it with the existing script, node.js websocket, and MongoDB 'state' database (which is read on initial load).
There is one last thing I'd like to try with this code: when certain element states are set to 1, I'd like to look something else up (and then use that result). I'm thinking this may be best done with a find query to my MongoDB, then just using the result. Doing that would hit the db for every query, but there would only ever be a handful of results, so most things would return null, which is fine. Am I thinking along the right lines?
require 'json'

table = {
  "00" => ["Fan1", "Fan2", "Fan3"],
  "01" => ["Hum1", "Hum2", "Fan5"],
}
max_binary_digits = table.first[1].size

data = [
  %Q[{"address": "00","data":"FF"}],
  %Q[{"address": "01","data":"00"}],
  %Q[{"address": "01","data":"03"}],
]

data.each do |str|
  hash = JSON.parse(str)
  address = hash["address"]
  number = hash["data"].to_i(16)          # hex string => integer
  binary_str = sprintf("%0.8b", number)   # integer => 8-digit binary string
  p binary_str
  binary_str.reverse.each_char.with_index do |char, i|
    break if i+1 > max_binary_digits
    puts %Q[{"element":"#{table[address][i]}","data":{"type":"STAT","state":"#{char}"}}]
  end
  puts "-" * 20
end
--output:--
"11111111"
{"element":"Fan1","data":{"type":"STAT","state":"1"}}
{"element":"Fan2","data":{"type":"STAT","state":"1"}}
{"element":"Fan3","data":{"type":"STAT","state":"1"}}
--------------------
"00000000"
{"element":"Hum1","data":{"type":"STAT","state":"0"}}
{"element":"Hum2","data":{"type":"STAT","state":"0"}}
{"element":"Fan5","data":{"type":"STAT","state":"0"}}
--------------------
"00000011"
{"element":"Hum1","data":{"type":"STAT","state":"1"}}
{"element":"Hum2","data":{"type":"STAT","state":"1"}}
{"element":"Fan5","data":{"type":"STAT","state":"0"}}
--------------------
My answer assumes Bit1 in your table is the least significant bit; if that is not the case, remove .reverse from the code.
You can ask me anything you want about the code.
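On your follow-up about consulting MongoDB when a state is 1: that sounds like a reasonable approach for a handful of lookups. A hedged sketch using the mongo gem (the connection string, collection, and field names are assumptions, not from your setup):
require 'mongo'
client = Mongo::Client.new('mongodb://localhost:27017/plant')  # placeholder connection

# Inside the bit loop, once char and the element name are known:
if char == "1"
  extra = client[:lookups].find("element" => table[address][i]).first
  puts extra["action"] if extra  # most elements will have no entry, which is fine
end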
I'm building a Sinatra-based app for deployment on Heroku. You can imagine it like a standard URL shortener, but where old shortcodes expire and become available for new URLs (I realise this is a silly concept, but it's easier to explain this way). I'm representing the shortcode in my database as an integer and redefining its reader to turn the integer into a nice short, unique string.
As some rows will be deleted, I've written code that goes through all the shortcode integers and picks the first free one to use, just before_save. Unfortunately, I can make my code create two rows with identical shortcode integers if I run two instances very quickly one after another, which is obviously no good! How should I implement a locking system so that I can quickly save my record with a unique shortcode integer?
Here's what I have so far:
Chars = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a
CharLength = Chars.length

class Shorts < ActiveRecord::Base
  before_save :gen_shortcode
  after_save :done_shortcode

  def shortcode
    i = read_attribute(:shortcode).to_i
    return '0' if i == 0
    s = ''
    while i > 0
      s << Chars[i.modulo(CharLength)]
      i /= CharLength
    end
    s
  end

  private

  def gen_shortcode
    shortcode = 0
    self.class.find(:all, :order => "shortcode ASC").each do |s|
      if s.read_attribute(:shortcode).to_i != shortcode
        # Begin locking?
        break
      end
      shortcode += 1
    end
    write_attribute(:shortcode, shortcode)
  end

  def done_shortcode
    # End Locking?
  end
end
This line:
self.class.find(:all,:order=>"shortcode ASC").each
will do a sequential search over your entire record collection. You'd have to lock the entire table so that, when one of your processes is scanning for the next integer, the others will wait for the first one to finish. This will be a performance killer. My suggestion, if possible, is for the process to be as follows:
Add a column that indicates when a record has expired (do you expire them by time of creation? last use?). Index this column.
When you need to find the next lowest usable number, do something like:
Shorts.find(:first, :conditions => {:expired => true}, :order => 'shortcode')
This will have the database do the hard work of finding the lowest expired shortcode. Recall that find(:first, ...) returns only the first matching record, rather than the whole collection.
Now, in order to prevent race conditions between processes, you can wrap this in a transaction and lock while doing the search:
Shorts.transaction do
  short = Shorts.find(:first, :conditions => {:expired => true}, :order => 'shortcode', :lock => true)
  # Do your thing here. Be quick about it; the row is locked while you work.
end # on ending the transaction the lock is released
Now when a second process starts looking for a free shortcode, it will not read the one that's locked (so presumably it will find the next one). This is because the :lock => true parameter gets an exclusive lock (both read/write).
Check this guide for more on locking with ActiveRecord.
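As a belt-and-braces measure (my suggestion, not part of the answer above), you could also add a unique index on the shortcode column so the database itself rejects a duplicate if two processes ever slip past the lock:
# Hypothetical migration line; table and column names match the question's model
add_index :shorts, :shortcode, :unique => true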