Updating the same hash in each loop iteration in Ruby

Update: added the getEntitlementList() method.
I am new to Ruby and am trying to update a hash with data that I fetch in steps.
Short story: my earlier question, split and loop in Ruby, has already been solved; this is a follow-up.
There are two problems:
1. Even though I step the offset by 250 (offset=0, offset=250, offset=500, and so on up to offset=3000), every call returns the same response instead of a different page.
2. On every new offset, I want to update the same hash so that at the end I have a hash covering all items.
The API doc says:
1. limit >> Integer that specifies the maximum number of records to return in a single API call. If not specified, a default limit will be used. Maximum of 250 records per page.
2. offset >> Integer that specifies the offset of the first result from the beginning of the collection. offset is record based, not page based, and the index starts at 0. For example, offset=0 and limit=20 will return records 0-19, while offset=1 and limit=20 will return records 1-20. Between 0 and the last record index.
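So, as I understand it, stepping the offset by the limit should walk the collection page by page. For illustration only (3000 is just my current record count):

limit = 250
total = 3000
(0...total).step(limit).to_a
# => [0, 250, 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500, 2750]
# note: with exactly 3000 records, offset=3000 would already point past the last record

Here is the getEntitlementList() method that makes each call: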
def getEntitlementList(url, header, source, limit, offset)
  # This method pulls the list of existing entitlements for a given source.
  # The list is used to create a hash map of displayableName vs Id.
  # Returns a hash map.
  uri = URI.parse("#{url}/cc/api/entitlement/list?limit=#{limit}&offset=#{offset}&CISApplicationId=#{source}")
  puts uri
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  request = Net::HTTP::Get.new(uri.request_uri, header)
  # puts "request : #{request}"
  response = http.request(request)
  # puts "entitlements hash map : #{response}"
  # puts "#{JSON.parse(response.body)}"
  case response
  when Net::HTTPSuccess
    responseBody = JSON.parse(response.body)
    # puts responseBody
  when Net::HTTPUnauthorized
    puts "#{source} Entitlements Hash Map creation is a Failure!"
    puts "Error: Unauthorized."
  when Net::HTTPServerError
    puts "#{source} Entitlements Hash Map creation is a Failure!"
    puts "Error: Server Error."
  else
    puts "#{source} Entitlements Hash Map creation is a Failure!"
    puts "Error: #{response.code}"
  end
  return responseBody
end
What I want to achieve: I have the code below:
limit = 250
entitlementHash = Hash.new
(0..3000).step(250) do |offset|
  responseBody = getEntitlementList(url, header, source, limit, offset)
  count = responseBody['count']
  entitlementList = responseBody['items']
  entitlementList.each { |a| entitlementHash["#{a['displayableName'].upcase()}"] = a['id'] }
  puts "updated entitlementHash size: #{entitlementHash.size}"
end
return entitlementHash
My API has 3000 data items, but each call pulls only 250.
So every time it pulls data, I would like to add it to entitlementHash,
so that at the end of the last run entitlementHash holds all 3000 items.
Currently it only ends up with the last 250 items.
I tried entitlementHash.update, entitlementHash.store, and entitlementHash.merge, but they didn't work.
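For reference, this is the kind of per-page merging I expected to work (a sketch only; it assumes each offset actually returns a different page of items):

limit = 250
entitlementHash = {}
(0...3000).step(limit) do |offset|
  responseBody = getEntitlementList(url, header, source, limit, offset)
  # build a hash for just this page, then fold it into the running hash
  pageHash = responseBody['items'].each_with_object({}) do |item, h|
    h[item['displayableName'].upcase] = item['id']
  end
  entitlementHash.merge!(pageHash)   # same effect as entitlementHash.update(pageHash)
  puts "updated entitlementHash size: #{entitlementHash.size}"
end

I suspect the hash only grows if the API really returns different records for each offset; identical pages would just overwrite the same keys, which matches what I am seeing.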
Any help would be greatly appreciated.

Related

twitter returns nothing when :max_id and/or :until are specified

I am using the twitter gem to retrieve tweets for a hashtag. My goal is to retrieve tweets for the last year, so I am trying to use the :max_id option.
So I do (twitter is a properly configured client):
loop.inject(nil) do |memo|
  results = twitter.search "#ruby -rt", (memo ? {max_id: memo - 1} : {})
  last = results.inject(nil) do |_, res|
    # handle res here, unrelated
    res
  end
  break memo if last.nil? || last.created_at < (Date.today - 365).to_time
  last.id
end
The problem is that I receive an empty result set on the subsequent request: the first request returns ≈2K tweets, while the second always returns none.
How do I retrieve statuses from Twitter in chunks, using the twitter gem (or anything else)?
Well, it turns out that the max_id parameter in the call to search is [likely] expected to be a valid tweet id.
By changing
# ⇓⇓⇓⇓ HERE
twitter.search "#ruby -rt", (memo ? {max_id: memo - 1} : {})
to
twitter.search "#ruby -rt", (memo ? {max_id: memo} : {})
I was finally able to retrieve past feeds in chunks.
NB: Twitter responds with a rate-limit error after every ≈1.5K statuses returned.
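Putting it together, roughly, with basic rate-limit handling (a sketch only; it assumes the twitter gem's Twitter::Error::TooManyRequests and its rate_limit.reset_in, plus a configured twitter client as above):

require 'twitter'
require 'date'

cutoff = (Date.today - 365).to_time

loop.inject(nil) do |memo|
  begin
    results = twitter.search "#ruby -rt", (memo ? {max_id: memo} : {})
  rescue Twitter::Error::TooManyRequests => e
    sleep e.rate_limit.reset_in + 1   # wait out the current rate-limit window
    retry
  end
  last = results.inject(nil) { |_, res| res }   # handle each res here, as before
  break memo if last.nil? || last.created_at < cutoff
  last.id
end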

no implicit conversion of String into Integer (TypeError)

I am trying to parse a line of JSON using Ruby and am running into this error:
no implicit conversion of String into Integer (TypeError)
uri = URI.parse('xxx')
http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Get.new(uri.request_uri)
response = http.request(request)
if response.code == "200"
  result = JSON.parse(response.body)
  result.each do |doc|
    #if doc.is_a?(Hash)
    dns_name = doc['dns_name'] # reference properties like this <-- Line 25 (see below)
    host = ['host'] # this is the result in object form
    #end
  end
else
  puts "ERROR!!!"
end
puts host
puts dns_name
I have looked at several similar questions, but they didn't seem to help. As discussed in them, I have tried changing
result.each do |doc|
to
result.each.first do |doc|
My Ruby is passable at best, so I would take a link to some docs as well; I have tried the official docs without much luck so far. Here is what is returned:
[{"name":"server","power_state":"On","host":"host","cluster":"cluster","ip_address":"10.0.0.0","dns_name":"server.com","vcenter":"vcenter","description":" ","datastore":"datastore","num_vcpu":"2","mem_size_gb":8,"vmid":"1","application":"Misc","business_unit":"","category":"","support_contact":"joe#example.com"},200]
I have also tried .is_a?(Hash) and .is_a?(Array). Looking at the JSON, I am fairly certain it is an array of hashes and that the problem lies in the 200 response code I am getting back at the end of the line. Why that is a problem I have no idea. I would like to work around it, but the JSON is generated by a known source, so I may be able to have them modify it if I can show that it is faulty.
Thanks
UPDATE
As asked, the full output from the error:
./status.rb:25:in `[]'
./status.rb:25:in `block in '
./status.rb:23:in `each'
./status.rb:23:in `'
In your case there doesn't really seem to be a reason for the loop; you could just write:
dns_name = result.first['dns_name']
host = result.first['host']
Since result is an array with two elements, index 0 being the Hash and index 1 being an Integer, that should work.
If you pretty-print the JSON, it will look like this:
[
  {
    "name":"server",
    "power_state":"On",
    "host":"host",
    "cluster":"cluster",
    "ip_address":"10.0.0.0",
    "dns_name":"server.com",
    "vcenter":"vcenter",
    "description":" ",
    "datastore":"datastore",
    "num_vcpu":"2",
    "mem_size_gb":8,
    "vmid":"1",
    "application":"Misc",
    "business_unit":"",
    "category":"",
    "support_contact":"joe#example.com"
  },
  200
]
You want to access the hash; it's the first element in the array, so:
if response.code == "200"
  result = JSON.parse(response.body)
  dns_name = result.first['dns_name']
  host = result.first['host']
else
  puts "ERROR!!!"
end
No need for an each.
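If you would rather not depend on the hash always being the first element, here is a small alternative sketch (nothing in your API requires this, it is just more defensive):

# Pick out the Hash element explicitly, wherever it sits in the array
doc = result.find { |element| element.is_a?(Hash) }
dns_name = doc && doc['dns_name']
host = doc && doc['host']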

Ruby - Problems concatenating array elements within loop

I'm very new to Ruby and am having some problems concatenating strings within a loop.
Here is what I have so far:
# search request
search = ["testOne", "testTwo"]
# Create base url for requests in loop
base_url = "http://example.com/"
# create an empty response array for loop below
response = []
search.each do |element|
  response = "#{base_url}#{element}"
end
I'd like response[0] to hold "http://example.com/testOne". However, after the loop executes, response[0] only holds the first letter (h) of my base variable; response holds "http://example.com/testTwo".
I'm thinking this is a simple mistake, but can't find any helpful resources.
Use the Array#<< method:
# search request
search = ["testOne", "testTwo"]
# Create base url for requests in loop
base_url = "http://example.com/"
# create an empty response array for loop below
response = []
search.each do |element|
  response << "#{base_url}#{element}"
end
response # => ["http://example.com/testOne", "http://example.com/testTwo"]
response = "#{base_url}#{element}" means you are assigning a new string object to the local variable response in each iteration. In the last iteration, response holds the string object "http://example.com/testTwo". response[0] therefore means you are calling the method String#[]. At index 0 of the string "http://example.com/testTwo" the character is h, so response[0] returns 'h', which is exactly what your code asks for.
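To see that in isolation (just an illustrative snippet):

response = "http://example.com/testTwo"   # a String, not an Array
response[0]   # => "h"  (String#[] returns the character at that index)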
The same code can be written in a nicer way:
# search request
search = ["testOne", "testTwo"]
# Create base url for requests in loop
base_url = "http://example.com/"
response = search.map { |element| base_url + element }
response # => ["http://example.com/testOne", "http://example.com/testTwo"]
or
response = search.map(&base_url.method(:+))
response # => ["http://example.com/testOne", "http://example.com/testTwo"]
or, as Michael Kohl pointed out:
response = search.map { |s| "#{base_url}#{s}" }
response # => ["http://example.com/testOne", "http://example.com/testTwo"]

using HTTParty to test api response times

We have some performance enhancements coming to one of our API services. I was asked to grab some random data and log the response times for the new and old versions so we can compare them side by side. I wrote a quick script that pulls random data from the DB and iterates through that result set, using the same data in both calls. I make both calls with HTTParty in the same loop, but the first call seems way slower than it should be. If I switch the two around, the old call is now faster, and it shouldn't be. Below is what I'm doing.
If I switch the old and new calls around, I would expect the timings to swap accordingly, but they don't; the new call (now made first) becomes really slow.
Can anybody shed some light on what I'm doing wrong? Thanks ahead of time. (Let me know if my problem is unclear.)
class ProductCompare < Test::Unit::TestCase
  def test_compare
    def time_diff(start, finish)
      ((finish - start) * 1000.0).to_i
    end
    begin
      api_conn = Mysql.new()
      random_skus = api_conn.query("Select random data")
      random_skus.each do |row|
        puts row.join("\s")
        products1 = "http://api-call-old/#{row.join("\s")}"
        start_time1 = Time.now
        response1 = HTTParty.get(products1)
        end_time1 = Time.now
        products1_elapsed_tm = time_diff(start_time1, end_time1)
        puts "The response time for response1 is: #{products1_elapsed_tm} ms"
        products2 = "http://api-call-new/#{row.join("\s")}"
        start_time2 = Time.now
        response2 = HTTParty.get(products2)
        end_time2 = Time.now
        products2_elapsed_tm = time_diff(start_time2, end_time2)
        puts "The response time for response2 is: #{products2_elapsed_tm} ms"
        assert_equal(response1.body, response2.body, 'The products response did not match')
      end
      api_conn.close
    rescue Mysql::Error => e
      puts e.error
    ensure
      api_conn.close if api_conn
    end
  end
end
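One thing I plan to rule out first (just a sketch of what I might try, assuming the slowness is one-time setup cost such as DNS resolution or lazy library loading rather than the API itself): make an untimed warm-up call to each endpoint, then time with Benchmark.realtime. The URLs below are placeholders standing in for the old and new endpoints.

require 'httparty'
require 'benchmark'

urls = ["http://api-call-old/some-sku", "http://api-call-new/some-sku"]

# Untimed warm-up so one-time setup cost doesn't land on whichever call runs first
urls.each { |u| HTTParty.get(u) rescue nil }

urls.each do |u|
  elapsed_ms = (Benchmark.realtime { HTTParty.get(u) } * 1000.0).to_i
  puts "#{u} took #{elapsed_ms} ms"
end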

Retrieving full request string using Ruby curl

I intend to send a request like the following:
c = Curl::Easy.http_post("https://example.com", json_string) do |curl|
  curl.headers['Accept'] = 'application/json'
  curl.headers['Content-Type'] = 'application/json'
  curl.headers['Api-Version'] = '2.2'
end
I want to log the exact HTTP request that is being made. Is there a way to get the actual request that was made (base path, query parameters, headers, and body)?
The on_debug handler has helped me before. In your example you could try:
curl.on_debug do |type, data|
  puts type, data
end
You can reach the solution in different ways:
Inside your block you can put:
curl.verbose = true # that prints a detailed output of the connection
Or outside the block:
c.url # return the url with queries
c.total_time # retrieve the total time for the prev transfer (name resolving, TCP,...)
c.header_str # return the response header
c.headers # return your call header
c.body_str # return the body of the response
Remember to call c.perform (if not yet performed) before calling these methods.
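For example, a quick sketch putting those together (json_string here is just a placeholder body; Curl::Easy.http_post performs the request itself, so no extra c.perform is needed):

require 'curb'

json_string = '{"hello":"world"}'   # placeholder payload

c = Curl::Easy.http_post("https://example.com", json_string) do |curl|
  curl.headers['Content-Type'] = 'application/json'
  curl.verbose = true        # libcurl prints request/response details to stderr
end

puts c.url          # the url that was requested, including query parameters
p c.headers         # the request headers you set
puts c.header_str   # the raw response headers
puts c.body_str     # the response body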
Many more options can be found here: http://curb.rubyforge.org/classes/Curl/Easy.html#M000001
