Using HTTParty to test API response times - Ruby

We have some performance enhancements coming to one of our API services. I was asked to pull some random data and log the response times for the new and old endpoints so we can compare them side by side. I wrote a quick script that grabs random data from the DB and iterates through that result set, using the same data in both calls. I make both calls with HTTParty in the same loop, but the first call always seems much slower than it should be. If I swap the two around, the old call is now the faster one, which it shouldn't be: whichever call runs first in the loop is the slow one. Below is what I'm doing.
Can anybody shed some light on what I'm doing wrong? Thanks ahead of time. (Let me know if my problem is unclear.)
require 'test/unit'
require 'httparty'
require 'mysql'

class ProductCompare < Test::Unit::TestCase
  def test_compare
    def time_diff(start, finish)
      ((finish - start) * 1000.0).to_i
    end

    begin
      api_conn = Mysql.new()
      random_skus = api_conn.query("Select random data")
      random_skus.each do |row|
        puts row.join("\s")

        products1 = "http://api-call-old/#{row.join("\s")}"
        start_time1 = Time.now
        response1 = HTTParty.get(products1)
        end_time1 = Time.now
        products1_elapsed_tm = time_diff(start_time1, end_time1)
        puts "The response time for response1 is: #{products1_elapsed_tm} ms"

        products2 = "http://api-call-new/#{row.join("\s")}"
        start_time2 = Time.now
        response2 = HTTParty.get(products2)
        end_time2 = Time.now
        products2_elapsed_tm = time_diff(start_time2, end_time2)
        puts "The response time for response2 is: #{products2_elapsed_tm} ms"

        assert_equal(response1.body, response2.body, 'The products response did not match')
      end
      api_conn.close
    rescue Mysql::Error => e
      puts e.error
    ensure
      api_conn.close if api_conn
    end
  end
end
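One thing that may be worth ruling out is one-time warm-up cost: the very first HTTP request a Ruby process makes can pay extra for things like lazy class loading and DNS caching, which makes whichever call runs first look slower regardless of which API it hits. A rough, untested sketch of how to check that (the /warmup and /some-sku paths are placeholders, not real endpoints):

require 'httparty'
require 'benchmark'

# Untimed warm-up requests so one-time costs (first-use class loading, DNS caching)
# are paid before any measurement. The paths are placeholders.
HTTParty.get("http://api-call-old/warmup") rescue nil
HTTParty.get("http://api-call-new/warmup") rescue nil

# Benchmark.realtime returns elapsed seconds as a Float.
old_ms = Benchmark.realtime { HTTParty.get("http://api-call-old/some-sku") } * 1000
new_ms = Benchmark.realtime { HTTParty.get("http://api-call-new/some-sku") } * 1000
puts "old: #{old_ms.round} ms, new: #{new_ms.round} ms"

If the gap disappears after the warm-up requests, the slowdown is per-process setup rather than the API itself.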

Related

How to write a while loop properly

I'm trying to scrape a website; however, I cannot seem to get my while loop to break out once it hits a page with no more information:
def scrape_verse_items(keyword)
  pg = 1
  while pg < 1000
    puts "page #{pg}"
    url = "https://www.bible.com/search/bible?page=#{pg}&q=#{keyword}&version_id=1"
    doc = Nokogiri::HTML(open(url))
    items = doc.css("ul.search-result li.reference")
    error = doc.css('div#noresults')
    until error.any? do
      if keyword != ''
        item_hash = {}
        items.each do |item|
          title = item.css("h3").text.strip
          content = item.css("p").text.strip
          item_hash[title] = content
        end
      else
        puts "Please enter a valid search"
      end
      if error.any?
        break
      end
    end
    pg += 1
  end
  item_hash
end

puts scrape_verse_items('joy')
I know this doesn't exactly answer your question, but perhaps you might consider using a different approach altogether.
Using while and until loops can get a bit confusing, and usually isn't the most performant way of doing things.
Maybe you would consider using recursion instead.
I've written a small script that seems to work:
require 'nokogiri'
require 'open-uri'

class MyScrapper
  def initialize; end

  def call(keyword)
    unless keyword
      puts "Please enter a valid search"
      return
    end
    scrape({}, keyword, 1)
  end

  private

  def scrape(results, keyword, page)
    doc = load_page(keyword, page)
    return results if doc.css('div#noresults').any?
    build_new_items(doc).merge(scrape(results, keyword, page + 1))
  end

  def load_page(keyword, page)
    url = "https://www.bible.com/search/bible?page=#{page}&q=#{keyword}&version_id=1"
    Nokogiri::HTML(open(url))
  end

  def build_new_items(doc)
    items = doc.css("ul.search-result li.reference")
    items.reduce({}) do |list, item|
      title = item.css("h3").text.strip
      content = item.css("p").text.strip
      list[title] = content
      list
    end
  end
end
You call it by doing MyScrapper.new.call("Keyword"). (It might make more sense to have this as a module you include, or even to have these as class methods, so you don't need to instantiate the class.)
What this does is call a method named scrape, giving it the starting results, the keyword, and the page number. It loads the page; if there are no results, it returns the results it has found so far.
Otherwise it builds a hash from the page it loaded, then the method calls itself and merges the existing results with the new hash it just built. It does this until there are no more results.
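In other words, each level contributes one page's hash and Hash#merge folds them together on the way back up; with made-up data:

page_1_items = { "Genesis 1:1" => "In the beginning..." }   # built from page 1
page_2_items = { "John 3:16"   => "For God so loved..." }   # built from page 2; page 3 has no results, so it returns {}

page_2_items.merge({})            # => { "John 3:16" => "For God so loved..." }
page_1_items.merge(page_2_items)  # => { "Genesis 1:1" => "...", "John 3:16" => "..." }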
If you want to limit the page results you can just change this line:
return results if doc.css('div#noresults').any?
to this:
return results if doc.css('div#noresults').any? || page > 999
Note: You might want to double-check that the results being returned are correct. I think they should be, but I wrote this quite quickly, so there could be a small bug hiding somewhere.

How should I use a recursive method in Ruby

I wrote a simple web crawler using Mechanize, and now I'm stuck on how to get the next page recursively. Below is the code.
def self.generate_page # generate a Mechanize page object, the first page
  agent = Mechanize.new
  url = "http://www.baidu.com/s?wd=intitle:#{URI.encode(WORD)}%20site:sina.com.cn&rn=50&gpc=stf#{URI.encode(TIME)}"
  page = agent.get(url)
  page
end

def self.next_page(n_page) # get the next page recursively by clicking the "next" link shown on each page
  puts n_page
  # If I don't use puts, I get nothing; when using puts, I get
  #<Mechanize::Page:0x007fd341c70fd0>
  #<Mechanize::Page:0x007fd342f2ce08>
  #<Mechanize::Page:0x007fd341d0cf70>
  #<Mechanize::Page:0x007fd3424ff5c0>
  #<Mechanize::Page:0x007fd341e1f660>
  #<Mechanize::Page:0x007fd3425ec618>
  #<Mechanize::Page:0x007fd3433f3e28>
  #<Mechanize::Page:0x007fd3433a2410>
  #<Mechanize::Page:0x007fd342446ca0>
  #<Mechanize::Page:0x007fd343462490>
  #<Mechanize::Page:0x007fd341c2fe18>
  #<Mechanize::Page:0x007fd342d18040>
  #<Mechanize::Page:0x007fd3432c76a8>
  # which are the results I want
  np = Mechanize.new.click(n_page.link_with(:text => /next/)) unless n_page.link_with(:text => /next/).nil?
  result = next_page(np) unless np.nil?
  result # here the value is empty, I don't know what is wrong
end

def self.get_page # trying to pass the result of the next_page() method
  puts next_page(generate_page)
  # it seems the result is never passed here
end
I followed these two links, What is recursion and how does it work? and Ruby recursive function, but I still can't figure out what's wrong. I hope someone can help me out. Thanks.
There are a few issues with your code:
You shouldn't be calling Mechanize.new more than once.
From a stylistic perspective, you are doing too many nil checks.
Unless you have a preference for recursion, it'll probably be easier to do it iteratively.
To have your next_page method return an array containing every link page in the chain, you could write this:
# you should store the mechanize agent as a global variable
Agent = Mechanize.new
# a helper method to DRY up the code
def click_to_next_page(page)
Agent.click(n_page.link_with(:text=>/next/))
end
# repeatedly visits next page until none exists
# returns all seen pages as an array
def get_all_next_pages(n_page)
results = []
np = click_to_next_page(n_page)
results.push(np)
until !np
np = click_to_next_page(np)
np && results.push(np)
end
results
end
# testing it out (i'm not actually running this)
base_url = "http://www.baidu.com/s?wd=intitle:#{URI.encode(WORD)}%20site:sina.com.cn&rn=50&gpc=stf#{URI.encode(TIME)}"
root_page = Agent.get(base_url)
next_pages = get_all_next_pages(root_page)
puts next_pages

Increasing Ruby Resolv Speed

I'm trying to build a subdomain brute-forcer for use with my clients - I work in security/pen testing.
Currently I can get Resolv to look up around 70 hosts in 10 seconds, give or take, and I wanted to know if there is a way to get it to do more. I have seen alternative scripts out there, mainly Python-based, that achieve far greater speeds than this. I don't know how to increase the number of requests Resolv makes in parallel, or whether I should split the list up. Please note I have put Google's DNS servers in the sample code, but I will be using internal ones for live usage.
My rough code for debugging this issue is:
require 'resolv'

def subdomains
  puts "Subdomain enumeration beginning at #{Time.now.strftime("%H:%M:%S")}"
  subs = []
  domains = File.open("domains.txt", "r") # list of domain names, one per line
  Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])
  File.open("tiny.txt", "r").each_line do |subdomain|
    subdomain.chomp!
    domains.each do |d|
      puts "Checking #{subdomain}.#{d}"
      ip = Resolv.new.getaddress "#{subdomain}.#{d}" rescue ""
      if ip != nil
        subs << subdomain+"."+d << ip
      end
    end
  end
  test = subs.each_slice(4).to_a
  test.each do |z|
    if !z[1].nil? and !z[3].nil?
      puts z[0] + "\t" + z[1] + "\t\t" + z[2] + "\t" + z[3]
    end
  end
  puts "Finished at #{Time.now.strftime("%H:%M:%S")}"
end

subdomains
domains.txt is my list of client domain names (for example google.com, bbc.co.uk, apple.com), and tiny.txt is a list of potential subdomain names (for example ftp, www, dev, files, upload). Resolv will then look up, say, files.bbc.co.uk and let me know whether it exists.
One thing is that you are creating a new Resolv instance with the Google nameservers but never using it; you then create a brand-new Resolv instance to do the getaddress call, so that instance is probably using some default nameservers and not the Google ones. You could change the code to something like this:
resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])
# ...
ip = resolv.getaddress "#{subdomain}.#{d}" rescue ""
In addition, I suggest using the File.readlines method to simplify your code:
domains = File.readlines("domains.txt").map(&:chomp)
subdomains = File.readlines("tiny.txt").map(&:chomp)
Also, you're rescuing a failed lookup into the empty string, but on the next line you only test for not-nil, so every result passes the check, and I don't think that's what you want.
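For example, a minimal sketch of the difference, using a made-up hostname that will not resolve:

require 'resolv'

resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])

ip = resolv.getaddress("no-such-host.invalid") rescue ""
puts "found #{ip}" if ip != nil   # prints, because "" is not nil

ip = resolv.getaddress("no-such-host.invalid") rescue nil
puts "found #{ip}" if ip          # does not print; failed lookups are skipped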
I've refactored your code, but not tested it. Here is what I came up with, which may be clearer:
def subdomains
  puts "Subdomain enumeration beginning at #{Time.now.strftime("%H:%M:%S")}"

  domains = File.readlines("domains.txt").map(&:chomp)
  subdomains = File.readlines("tiny.txt").map(&:chomp)
  resolv = Resolv.new(:nameserver => ['8.8.8.8', '8.8.4.4'])

  valid_subdomains = subdomains.each_with_object([]) do |subdomain, list|
    domains.each do |domain|
      combined_name = "#{subdomain}.#{domain}"
      puts "Checking #{combined_name}"
      ip = resolv.getaddress(combined_name) rescue nil
      list << combined_name << ip if ip
    end
  end

  valid_subdomains.each_slice(4) do |z|
    if z[1] && z[3]
      puts "#{z[0]}\t#{z[1]}\t\t#{z[2]}\t#{z[3]}"
    end
  end

  puts "Finished at #{Time.now.strftime("%H:%M:%S")}"
end
Also, you might want to check out the dnsruby gem (https://github.com/alexdalitz/dnsruby). It might do what you want to do better than Resolv.
[Note: I've rewritten the code so that it fetches the IP addresses in chunks. Please see https://gist.github.com/keithrbennett/3cf0be2a1100a46314f662aea9b368ed. You can modify the RESOLVE_CHUNK_SIZE constant to balance performance with resource load.]
I've rewritten this code using the dnsruby gem (written mainly by Alex Dalitz in the UK, and contributed to by myself and others). This version uses asynchronous message processing so that all requests are being processed pretty much simultaneously. I've posted a gist at https://gist.github.com/keithrbennett/3cf0be2a1100a46314f662aea9b368ed but will also post the code here.
Note that since you are new to Ruby, there are lots of things in the code that might be instructive to you, such as method organization, use of Enumerable methods (e.g. the amazing 'partition' method), the Struct class, rescuing a specific Exception class, %w, and Benchmark.
NOTE: LOOKS LIKE STACK OVERFLOW ENFORCES A MAXIMUM MESSAGE SIZE, SO THIS CODE IS TRUNCATED. GO TO THE GIST IN THE LINK ABOVE FOR THE COMPLETE CODE.
#!/usr/bin/env ruby

# Takes a list of subdomain prefixes (e.g. %w(ftp xyz)) and a list of domains (e.g. %w(nytimes.com afp.com)),
# creates the subdomains combining them, fetches their IP addresses (or nil if not found).

require 'dnsruby'
require 'awesome_print'

RESOLVER = Dnsruby::Resolver.new(:nameserver => %w(8.8.8.8 8.8.4.4))

# Experiment with this to get fast throughput but not overload the dnsruby async mechanism:
RESOLVE_CHUNK_SIZE = 50

IpEntry = Struct.new(:name, :ip) do
  def to_s
    "#{name}: #{ip ? ip : '(nil)'}"
  end
end

def assemble_subdomains(subdomain_prefixes, domains)
  domains.each_with_object([]) do |domain, subdomains|
    subdomain_prefixes.each do |prefix|
      subdomains << "#{prefix}.#{domain}"
    end
  end
end

def create_query_message(name)
  Dnsruby::Message.new(name, 'A')
end

def parse_response_for_address(response)
  begin
    a_answer = response.answer.detect { |a| a.type == 'A' }
    a_answer ? a_answer.rdata.to_s : nil
  rescue Dnsruby::NXDomain
    return nil
  end
end

def get_ip_entries(names)
  queue = Queue.new
  names.each do |name|
    query_message = create_query_message(name)
    RESOLVER.send_async(query_message, queue, name)
  end

  # Note: although map is used here, the record in the output array will not necessarily correspond
  # to the record in the input array, since the order of the messages returned is not guaranteed.
  # This is indicated by the lack of block variable specified (normally w/map you would use the element).
  # That should not matter to us though.
  names.map do
    _id, result, error = queue.pop
    name = _id
    case error
    when Dnsruby::NXDomain
      IpEntry.new(name, nil)
    when NilClass
      ip = parse_response_for_address(result)
      IpEntry.new(name, ip)
    else
      raise error
    end
  end
end

def main
  # domains = File.readlines("domains.txt").map(&:chomp)
  domains = %w(nytimes.com afp.com cnn.com bbc.com)

  # subdomain_prefixes = File.readlines("subdomain_prefixes.txt").map(&:chomp)
  subdomain_prefixes = %w(www xyz)

  subdomains = assemble_subdomains(subdomain_prefixes, domains)

  start_time = Time.now
  ip_entries = subdomains.each_slice(RESOLVE_CHUNK_SIZE).each_with_object([]) do |ip_entries_chunk, results|
    results.concat get_ip_entries(ip_entries_chunk)
  end
  duration = Time.now - start_time

  found, not_found = ip_entries.partition { |entry| entry.ip }

  puts "\nFound:\n\n"; puts found.map(&:to_s); puts "\n\n"
  puts "Not Found:\n\n"; puts not_found.map(&:to_s); puts "\n\n"

  stats = {
    duration: duration,
    domain_count: ip_entries.size,
    found_count: found.size,
    not_found_count: not_found.size,
  }
  ap stats
end

main

Ruby: Decorator pattern slows simple program by a lot

I recently wrote a program to return a bunch of stocks from the stock market that are unhealthy. The basic algorithm is this:
1. Look up the quotes of every stock in an exchange (either NYSE or NASDAQ).
2. From step 1, find the ones trading under 5 dollars.
3. From step 2, find the ones that are down 3 days in a row and have large volume (expensive, because I have to make a request for each stock, which is currently ~700 for NASDAQ).
4. Scan the news for the stocks returned by step 3.
I had this all in one file:
Original implementation (https://github.com/EdmundMai/minion/blob/aa14bc3234a4953e7273ec502276c6f0073b459d/lib/minion.rb):
require 'bundler/setup'
require "minion/version"
require "yahoo-finance"
require "business_time"
require 'nokogiri'
require 'open-uri'

module Minion
  class << self
    def query(exchange)
      client = YahooFinance::Client.new
      all_companies = CSV.read("#{exchange}.csv")

      small_caps = []
      ticker_symbols = all_companies.map { |row| row[0] }
      ticker_symbols.each_slice(200) do |batch|
        data = client.quotes(batch, [:symbol, :last_trade_price, :average_daily_volume])
        small_caps << data.select { |stock| stock.last_trade_price.to_f < 5.0 }
      end

      attractive = []
      small_caps.flatten!.each_with_index do |small_cap, index|
        begin
          data = client.historical_quotes(small_cap.symbol, { start_date: 2.business_days.ago, end_date: Time.now })
          closing_prices = data.map(&:close).map(&:to_f)
          volumes = data.map(&:volume).map(&:to_i)
          negative_3_days_in_a_row = closing_prices == closing_prices.sort
          larger_than_average_volume = volumes.reduce(:+) / volumes.count > small_cap.average_daily_volume.to_i
          if negative_3_days_in_a_row && larger_than_average_volume
            attractive << small_cap.symbol
            puts "Qualified: #{small_cap.symbol}, finished with #{index} out of #{small_caps.count}"
          else
            puts "Not qualified: #{small_cap.symbol}, finished with #{index} out of #{small_caps.count}"
          end
        rescue => e
          puts e.inspect
        end
      end

      final_results = []
      attractive.each do |symbol|
        rss_feed = Nokogiri::HTML(open("http://feeds.finance.yahoo.com/rss/2.0/headline?s=#{symbol}&region=US&lang=en-US"))
        html_body = rss_feed.css('body')[0].text
        diluting = false
        ['warrant', 'cashless exercise'].each do |keyword|
          diluting = true if html_body.match(/#{keyword}/i)
        end
        final_results << symbol if diluting
      end

      final_results
    end
  end
end
This was really fast and would finish processing like ~700 stocks in a minute or less.
Then I tried refactoring and splitting the algorithm into different classes and files, without changing the algorithm at all. I decided on the decorator pattern since it seemed to fit. However, when I run the program now, each request is made really slowly (the whole run takes 15+ minutes). I know this because my puts statements get printed out really slowly.
New and slower implementation (https://github.com/EdmundMai/minion/blob/master/lib/minion.rb)
require 'bundler/setup'
require "minion/version"
require "yahoo-finance"
require "minion/dilution_finder"
require "minion/negative_finder"
require "minion/small_cap_finder"
require "minion/market_fetcher"

module Minion
  class << self
    def query(exchange)
      all_companies = CSV.read("#{exchange}.csv")
      all_tickers = all_companies.map { |row| row[0] }
      short_finder = DilutionFinder.new(NegativeFinder.new(SmallCapFinder.new(MarketFetcher.new(all_tickers))))
      short_finder.results
    end
  end
end
The part where it lags, according to my puts statements:
require "yahoo-finance"
require "business_time"
require_relative "stock_finder"
class NegativeFinder < StockFinder
def results
client = YahooFinance::Client.new
results = []
finder.results.each_with_index do |stock, index|
begin
data = client.historical_quotes(stock.symbol, { start_date: 2.business_days.ago, end_date: Time.now })
closing_prices = data.map(&:close).map(&:to_f)
volumes = data.map(&:volume).map(&:to_i)
negative_3_days_in_a_row = closing_prices == closing_prices.sort
larger_than_average_volume = volumes.reduce(:+) / volumes.count > stock.average_daily_volume.to_i
if negative_3_days_in_a_row && larger_than_average_volume
results << stock
puts "Qualified: #{stock.symbol}, finished with #{index} out of #{finder.results.count}"
else
puts "Not qualified: #{stock.symbol}, finished with #{index} out of #{finder.results.count}"
end
rescue => e
puts e.inspect
end
end
results
end
end
It's lagging on step 3 (making one request for each stock). I'm not sure what's going on, so any advice would be appreciated. If you want to clone the program and run it, just uncomment the last line in lib/minion.rb and run ruby lib/minion.rb.
After debugging it I figured it out. It was because I was calling finder.results (results being the decorated method) inside of the loop as shown below:
require "yahoo-finance"
require "business_time"
require_relative "stock_finder"
class NegativeFinder < StockFinder
def results
client = YahooFinance::Client.new
results = []
finder.results.each_with_index do |stock, index|
begin
data = client.historical_quotes(stock.symbol, { start_date: 2.business_days.ago, end_date: Time.now })
closing_prices = data.map(&:close).map(&:to_f)
volumes = data.map(&:volume).map(&:to_i)
negative_3_days_in_a_row = closing_prices == closing_prices.sort
larger_than_average_volume = volumes.reduce(:+) / volumes.count > stock.average_daily_volume.to_i
if negative_3_days_in_a_row && larger_than_average_volume
results << stock
// HERE!!!!!!!!!!!!!!!!!!!!!!!!!
puts "Qualified: #{stock.symbol}, finished with #{index} out of #{finder.results.count}" <------------------------------------
else
// AND HERE!!!!!!!!!!!!!!!!!!!!!!!!!
puts "Not qualified: #{stock.symbol}, finished with #{index} out of #{finder.results.count}" <-----------------------------------------------------------
end
rescue => e
puts e.inspect
end
end
results
end
end
This caused a cascade of requests every time I went through the loop in NegativeFinder. Removing that call fixed it. Lesson: when using the decorator pattern, only call the decorated method once, especially when each call does something expensive, or else hold the returned value in a local or instance variable so you don't have to recompute it each time.
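A rough sketch of what that looks like in NegativeFinder (untested; negative_and_high_volume? is a made-up helper standing in for the existing checks):

class NegativeFinder < StockFinder
  def results
    upstream = finder.results   # call the decorated chain exactly once and keep the array
    total    = upstream.count   # reuse it for the progress count instead of finder.results.count
    upstream.each_with_index.with_object([]) do |(stock, index), qualified|
      qualified << stock if negative_and_high_volume?(stock)   # hypothetical helper wrapping the old checks
      puts "Finished #{index} out of #{total}"
    end
  end

  # Alternatively, memoize inside each decorator so repeated calls are cheap:
  # def results
  #   @results ||= compute_results   # compute_results standing in for the original method body
  # end
end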
Also as a side note, I've decided not to go with the decorator pattern because I don't think it applies well here. Something like SmallCapFinder.new(SmallCapFinder.new(MarketFetcher.new(all_tickers))) doesn't add functionality at all (the primary function of using the decorator pattern), so chaining decorators doesn't do anything. Therefore, I'm just going to make them methods instead of adding unnecessary complexity.
There are some things missing from the code you gave us (the StockFinder base class, MarketFetcher), but I think you are now instantiating more than one YahooFinance::Client. Input/output to other systems is very often the cause of speed problems.
I suggest that you first encapsulate the finance client and access to the financial data. That makes it easier when you want to switch your financial data provider, or add another one. Instead of the decorator pattern, I would just use plain old methods for finding small caps, finding negative stocks, and so on.
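A rough sketch of that plain-methods idea (untested; the class and method names are made up, and only YahooFinance::Client comes from the question):

require "yahoo-finance"

class StockScreener
  def initialize(client = YahooFinance::Client.new)
    @client = client   # a single client, created once and reused for every request
  end

  def unhealthy_stocks(tickers)
    small_caps = quotes_for(tickers).select { |q| q.last_trade_price.to_f < 5.0 }
    small_caps.select { |stock| negative_with_high_volume?(stock) }
  end

  private

  def quotes_for(tickers)
    tickers.each_slice(200).flat_map do |batch|
      @client.quotes(batch, [:symbol, :last_trade_price, :average_daily_volume])
    end
  end

  def negative_with_high_volume?(stock)
    # placeholder for the same historical-quotes check as in the question, reusing @client
    false
  end
end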

Best way to check for a new tweet

Here's a piece of code from my IRC bot. What it does is check for new tweets from a specific account, then post them in the channel.
Now, is there a better way to check for new tweets? That for i in 1..999999999 feels a bit suboptimal, and DDoS-y.
tweetzsr = {}
xzsr = {}
zsr = {}

on :message, ".start zsr" do |m|
  if zsr[m.channel] == true
    m.reply "already doing it.."
  else
    m.reply "ok."
    zsr[m.channel] = true
    for i in 1..99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
      sleep 60
      puts "#{xzsr[m.channel]} = x, tweetzsr = #{tweetzsr[m.channel]}"
      tweetzsr[m.channel] = Twitter.user_timeline("twiitterracount").first.text
      if xzsr[m.channel] == tweetzsr[m.channel]
        nil
      else
        m.reply "#{tweetzsr[m.channel]} - via twitter feed"
        xzsr[m.channel] = tweetzsr[m.channel]
      end
    end
  end
end
First of all, for an infinite loop, use the loop method.
The best approach for this kind of thing is to use Twitter's streaming API: your client opens a single request to Twitter, which then pushes any new data to it. For that there is a gem called TweetStream.
Example usage:
TweetStream::Client.new.userstream do |status|
  m.reply "#{status.text} - via twitter"
end
Is that your infinite loop?
Change it to this to improve sanity:
loop do
  sleep 60
  newtweet[m.channel] = Twitter.user_timeline("twitteracount").first.text
  next if oldtweet[m.channel] == newtweet[m.channel]
  m.reply "#{newtweet[m.channel]} - via twitter"
  oldtweet[m.channel] = newtweet[m.channel]
end
You're missing the two end keywords.
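For context, here is roughly how that loop sits inside the original handler, with the two missing end keywords in place (a sketch reconstructed from the question's code, untested, using oldtweet/newtweet in place of xzsr/tweetzsr):

on :message, ".start zsr" do |m|
  if zsr[m.channel]
    m.reply "already doing it.."
  else
    m.reply "ok."
    zsr[m.channel] = true
    loop do
      sleep 60
      newtweet[m.channel] = Twitter.user_timeline("twitteracount").first.text
      next if oldtweet[m.channel] == newtweet[m.channel]
      m.reply "#{newtweet[m.channel]} - via twitter"
      oldtweet[m.channel] = newtweet[m.channel]
    end
  end   # one of the two end keywords
end     # and the other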
