Ruby: HTTP.get_response from a website that does not exist

I have this code to check if the site is up or down:
require 'net/http'
require 'uri'
res = Net::HTTP.get_response(URI.parse('http://www.saddsdsds.com/'))
case res
when Net::HTTPSuccess, Net::HTTPRedirection
# OK
puts "ok"
else
puts "ok-error"
end
res2 = Net::HTTP.get_response(URI.parse('http://www.sads.com/'))
case res2
when Net::HTTPSuccess, Net::HTTPRedirection
# OK
puts "ok"
else
puts "ok-error"
end
However, the code works fine when the site is known, but when the site is unknown (like the res example), the code breaks with this error:
`initialize': getaddrinfo: nodename nor servname provided, or not known (SocketError)
How can I fix this so when the address does not exist, the code does not break?
Also, is this the best way to check if a site is online?
Thanks

Surround all of that code with a begin..rescue block:
begin
  ...
rescue SocketError => e
  puts "SocketError occurred: #{e.message}"
end
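For example, here is a minimal sketch of the same idea applied to your check; the helper name site_up? and the extra rescues for refused connections and timeouts are my additions, not part of the original answer:
require 'net/http'
require 'uri'
require 'timeout'

def site_up?(url)
  res = Net::HTTP.get_response(URI.parse(url))
  res.is_a?(Net::HTTPSuccess) || res.is_a?(Net::HTTPRedirection)
rescue SocketError, Errno::ECONNREFUSED, Timeout::Error
  # unknown host, refused connection, or timeout all count as "down"
  false
end

puts(site_up?('http://www.sads.com/') ? "ok" : "ok-error")
puts(site_up?('http://www.saddsdsds.com/') ? "ok" : "ok-error") # DNS lookup fails, rescued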

Related

In RoR, how do I catch an exception if I get no response from a server?

I’m using Rails 4.2.3 and Nokogiri to get data from a web site. I want to perform an action when I don’t get any response from the server, so I have:
begin
  content = open(url).read
  if content.lstrip[0] == '<'
    doc = Nokogiri::HTML(content)
  else
    begin
      json = JSON.parse(content)
    rescue JSON::ParserError => e
      content
    end
  end
rescue Net::OpenTimeout => e
  attempts = attempts + 1
  if attempts <= max_attempts
    sleep(3)
    retry
  end
end
Note that this is different than getting a 500 from the server. I only want to retry when I get no response at all, either because I get no TCP connection or because the server fails to respond (or some other reason that causes me not to get any response). Is there a more generic way to take account of this situation other than how I have it? I feel like there are a lot of other exception types I’m not thinking of.
This is a generic sample of how you can define timeout durations for the HTTP connection and perform several retries in case of an error while fetching the content:
require 'open-uri'
require 'nokogiri'

url = "http://localhost:3000/r503"

openuri_params = {
  # set timeout durations for the HTTP connection
  # the default value for both open_timeout and read_timeout is 60 seconds
  :open_timeout => 1,
  :read_timeout => 1,
}

attempt_count = 0
max_attempts = 3

begin
  attempt_count += 1
  puts "attempt ##{attempt_count}"
  content = open(url, openuri_params).read
rescue OpenURI::HTTPError => e
  # it's a 404, etc. (do nothing)
rescue SocketError, Net::ReadTimeout => e
  # the server can't be reached or doesn't send any response
  puts "error: #{e}"
  sleep 3
  retry if attempt_count < max_attempts
else
  # the connection was successful and the content was fetched,
  # so here we can parse the content with Nokogiri,
  # or call a helper method, etc.
  doc = Nokogiri::HTML(content)
  p doc
end
When it comes to rescuing exceptions, you should aim to have a clear understanding of:
Which lines in your system can raise exceptions
What is going on under the hood when those lines of code run
What specific exceptions could be raised by the underlying code
In your code, the line that's fetching the content is also the one that could see network errors:
content = open(url).read
If you go to the documentation for the OpenURI module you'll see that it uses Net::HTTP & friends to get the content of arbitrary URIs.
Figuring out what Net::HTTP can raise is actually very complicated but, thankfully, others have already done this work for you. Thoughtbot's suspenders project has lists of common network errors that you can use. Notice that some of those errors have to do with different network conditions than what you had in mind, like the connection being reset. I think it's worth rescuing those as well, but feel free to trim the list down to your specific needs.
So here's what your code should look like (skipping the Nokogiri and JSON parts to simplify things a bit):
require 'net/http'
require 'open-uri'

HTTP_ERRORS = [
  EOFError,
  Errno::ECONNRESET,
  Errno::EINVAL,
  Net::HTTPBadResponse,
  Net::HTTPHeaderSyntaxError,
  Net::ProtocolError,
  Timeout::Error,
]

MAX_RETRIES = 3
attempts = 0

begin
  content = open(url).read
rescue *HTTP_ERRORS => e
  if attempts < MAX_RETRIES
    attempts += 1
    sleep(2)
    retry
  else
    raise e
  end
end
I would think about using a Timeout that raises an exception after a short period:
require 'timeout' # needed for Timeout.timeout unless something else already loads it

MAX_RESPONSE_TIME = 2 # seconds

begin
  content = nil # needs to be defined before the following block
  Timeout.timeout(MAX_RESPONSE_TIME) do
    content = open(url).read
  end
  # parsing `content`
rescue Timeout::Error => e
  attempts += 1 # attempts and max_attempts as in your original code
  if attempts <= max_attempts
    sleep(3)
    retry
  end
end

/settings/ads/ Keeps popping up while scraping Google

I have a program that scrapes Google. It's an open-source vulnerability scraper that uses Mechanize to search Google, and it picks a random search query from a text file to decide what to search for.
I'll post the main file and a link to the GitHub repository, since the program is fairly large.
Anyway, while the program is scraping for sites, every now and then it comes across a 'URL' (I say that lightly) that looks like this:
[17:05:02 INFO]I'll run in default mode!
[17:05:02 INFO]I'm searching for possible SQL vulnerable sites, using search query inurl:/main.php?f1=
[17:05:04 SUCCESS]Site found: http://forix.autosport.com/main.php?l=0&c=1
[17:05:05 SUCCESS]Site found: https://zweeler.com/formula1/FantasyFormula12016/main.php?ref=103
[17:05:06 SUCCESS]Site found: https://en.zweeler.com/formula1/FantasyFormula1YearGame2015/main.php
[17:05:07 SUCCESS]Site found: http://modelcargo.com/main.php?mod=sambachoose&dep=samba
[17:05:08 SUCCESS]Site found: http://www.ukdirt.co.uk/main.php?P=rules&f=8
[17:05:09 SUCCESS]Site found: http://www.ukdirt.co.uk/main.php?P=tracks&g=2&d=2&m=0
[17:05:11 SUCCESS]Site found: http://zoohoo.sk/redir.php?q=v%FDsledok&url=http%3A%2F%2Flivescore.sk%2Fmain.php%3Flang%3Dsk
[17:05:12 SUCCESS]Site found: http://www.chemical-plus.com/main.php?f1=pearl_pigment.htm
[17:05:13 SUCCESS]Site found: http://www.fantasyf1.co/main.php
[17:05:14 SUCCESS]Site found: http://www.escritores.cl/base.php?f1=escritores/main.php
[17:05:15 SUCCESS]Site found: /settings/ads/preferences?hl=en #<= Right here
When this shows up, it completely crashes the program. I've tried doing the following:
next if urls == '/settings/ads/preferences?hl=en'
next if urls =~ /preferences?hl=en/
next if urls.split('/')[2] == 'ads/preferences?hl=en'
However, it keeps popping up. I should also mention that the last five characters depend on your location; so far I've seen:
hl=en
hl=ru
hl=ia
Does anybody have any idea what this is? I've done some research and can't find anything on it. Any help with this would be fantastic.
Main source:
#!/usr/local/env ruby
require 'rubygems'
require 'bundler/setup'
require 'mechanize'
require 'nokogiri'
require 'rest-client'
require 'timeout'
require 'uri'
require 'fileutils'
require 'colored'
require 'yaml'
require 'date'
require 'optparse'
require 'tempfile'
require 'socket'
require 'net/http'
require_relative 'lib/modules/format.rb'
require_relative 'lib/modules/credits.rb'
require_relative 'lib/modules/legal.rb'
require_relative 'lib/modules/spider.rb'
require_relative 'lib/modules/copy.rb'
require_relative 'lib/modules/site_info.rb'
include Format
include Credits
include Legal
include Whitewidow
include Copy
include SiteInfo
PATH = Dir.pwd
VERSION = Whitewidow.version
SEARCH = File.readlines("#{PATH}/lib/search_query.txt").sample
info = YAML.load_file("#{PATH}/lib/rand-agents.yaml")
@user_agent = info['user_agents'][info.keys.sample]
OPTIONS = {}
def usage_page
Format.usage("You can run me with the following flags: #{File.basename(__FILE__)} -[d|e|h] -[f] <path/to/file/if/any>")
exit
end
def examples_page
Format.usage('This is my examples page, I\'ll show you a few examples of how to get me to do what you want.')
Format.usage('Running me with a file: whitewidow.rb -f <path/to/file> keep the file inside of one of my directories.')
Format.usage('Running me default, if you don\'t want to use a file, because you don\'t think I can handle it, or for whatever reason, you can run me default by passing the Default flag: whitewidow.rb -d this will allow me to scrape Google for some SQL vuln sites, no guarentees though!')
Format.usage('Running me with my Help flag will show you all options an explanation of what they do and how to use them')
Format.usage('Running me without a flag will show you the usage page. Not descriptive at all but gets the point across')
end
OptionParser.new do |opt|
opt.on('-f FILE', '--file FILE', 'Pass a file name to me, remember to drop the first slash. /tmp/txt.txt <= INCORRECT tmp/text.txt <= CORRECT') { |o| OPTIONS[:file] = o }
opt.on('-d', '--default', 'Run me in default mode, this will allow me to scrape Google using my built in search queries.') { |o| OPTIONS[:default] = o }
opt.on('-e', '--example', 'Shows my example page, gives you some pointers on how this works.') { |o| OPTIONS[:example] = o }
end.parse!
def page(site)
Nokogiri::HTML(RestClient.get(site))
end
def parse(site, tag, i)
parsing = page(site)
parsing.css(tag)[i].to_s
end
def format_file
Format.info('Writing to temporary file..')
if File.exists?(OPTIONS[:file])
file = Tempfile.new('file')
IO.read(OPTIONS[:file]).each_line do |s|
File.open(file, 'a+') { |format| format.puts(s) unless s.chomp.empty? }
end
IO.read(file).each_line do |file|
File.open("#{PATH}/tmp/#sites.txt", 'a+') { |line| line.puts(file) }
end
file.unlink
Format.info("File: #{OPTIONS[:file]}, has been formatted and saved as #sites.txt in the tmp directory.")
else
puts <<-_END_
Hey now my friend, I know you're eager, I am also, but that file #{OPTIONS[:file]}
either doesn't exist, or it's not in the directory you say it's in..
I'm gonna need you to go find that file, move it to the correct directory and then
run me again.
Don't worry I'll wait!
_END_
.yellow.bold
end
end
def get_urls
Format.info("I'll run in default mode!")
Format.info("I'm searching for possible SQL vulnerable sites, using search query #{SEARCH}")
agent = Mechanize.new
agent.user_agent = @user_agent
page = agent.get('http://www.google.com/')
google_form = page.form('f')
google_form.q = "#{SEARCH}"
url = agent.submit(google_form, google_form.buttons.first)
url.links.each do |link|
if link.href.to_s =~ /url.q/
str = link.href.to_s
str_list = str.split(%r{=|&})
urls = str_list[1]
next if urls.split('/')[2].start_with? 'stackoverflow.com', 'github.com', 'www.sa-k.net', 'yoursearch.me', 'search1.speedbit.com', 'duckfm.net', 'search.clearch.org', 'webcache.googleusercontent.com'
next if urls == '/settings/ads/preferences?hl=en' #<= ADD HERE REMEMBER A COMMA =>
urls_to_log = URI.decode(urls)
Format.success("Site found: #{urls_to_log}")
sleep(1)
sql_syntax = ["'", "`", "--", ";"].each do |sql|
File.open("#{PATH}/tmp/SQL_sites_to_check.txt", 'a+') { |s| s.puts("#{urls_to_log}#{sql}") }
end
end
end
Format.info("I've dumped possible vulnerable sites into #{PATH}/tmp/SQL_sites_to_check.txt")
end
def vulnerability_check
case
when OPTIONS[:default]
file_to_read = "tmp/SQL_sites_to_check.txt"
when OPTIONS[:file]
Format.info("Let's check out this file real quick like..")
file_to_read = "tmp/#sites.txt"
end
Format.info('Forcing encoding to UTF-8') unless OPTIONS[:file]
IO.read("#{PATH}/#{file_to_read}").each_line do |vuln|
begin
Format.info("Parsing page for SQL syntax error: #{vuln.chomp}")
Timeout::timeout(10) do
vulns = vuln.encode(Encoding.find('UTF-8'), {invalid: :replace, undef: :replace, replace: ''})
begin
if parse("#{vulns.chomp}'", 'html', 0)[/You have an error in your SQL syntax/]
Format.site_found(vulns.chomp)
File.open("#{PATH}/tmp/SQL_VULN.txt", "a+") { |s| s.puts(vulns) }
sleep(1)
else
Format.warning("URL: #{vulns.chomp} is not vulnerable, dumped to non_exploitable.txt")
File.open("#{PATH}/log/non_exploitable.txt", "a+") { |s| s.puts(vulns) }
sleep(1)
end
rescue Timeout::Error, OpenSSL::SSL::SSLError
Format.warning("URL: #{vulns.chomp} failed to load dumped to non_exploitable.txt")
File.open("#{PATH}/log/non_exploitable.txt", "a+") { |s| s.puts(vulns) }
next
sleep(1)
end
end
rescue RestClient::ResourceNotFound, RestClient::InternalServerError, RestClient::RequestTimeout, RestClient::Gone, RestClient::SSLCertificateNotVerified, RestClient::Forbidden, OpenSSL::SSL::SSLError, Errno::ECONNREFUSED, URI::InvalidURIError, Errno::ECONNRESET, Timeout::Error, OpenSSL::SSL::SSLError, Zlib::GzipFile::Error, RestClient::MultipleChoices, RestClient::Unauthorized, SocketError, RestClient::BadRequest, RestClient::ServerBrokeConnection, RestClient::MaxRedirectsReached => e
Format.err("URL: #{vuln.chomp} failed due to an error while connecting, URL dumped to non_exploitable.txt")
File.open("#{PATH}/log/non_exploitable.txt", "a+") { |s| s.puts(vuln) }
next
end
end
end
case
when OPTIONS[:default]
begin
Whitewidow.spider
sleep(1)
Credits.credits
sleep(1)
Legal.legal
get_urls
vulnerability_check unless File.size("#{PATH}/tmp/SQL_sites_to_check.txt") == 0
Format.warn("No sites found for search query: #{SEARCH}. Logging into error_log.LOG. Create a issue regarding this.") if File.size("#{PATH}/tmp/SQL_sites_to_check.txt") == 0
File.open("#{PATH}/log/error_log.LOG", 'a+') { |s| s.puts("No sites found with search query #{SEARCH}") } if File.size("#{PATH}/tmp/SQL_sites_to_check.txt") == 0
File.truncate("#{PATH}/tmp/SQL_sites_to_check.txt", 0)
Format.info("I'm truncating SQL_sites_to_check file back to #{File.size("#{PATH}/tmp/SQL_sites_to_check.txt")}")
Copy.file("#{PATH}/tmp/SQL_VULN.txt", "#{PATH}/log/SQL_VULN.LOG")
File.truncate("#{PATH}/tmp/SQL_VULN.txt", 0)
Format.info("I've run all my tests and queries, and logged all important information into #{PATH}/log/SQL_VULN.LOG")
rescue Mechanize::ResponseCodeError, RestClient::ServiceUnavailable, OpenSSL::SSL::SSLError, RestClient::BadGateway => e
d = DateTime.now
Format.fatal("Well this is pretty crappy.. I seem to have encountered a #{e} error. I'm gonna take the safe road and quit scanning before I break something. You can either try again, or manually delete the URL that caused the error.")
File.open("#{PATH}/log/error_log.LOG", 'a+'){ |error| error.puts("[#{d.month}-#{d.day}-#{d.year} :: #{Time.now.strftime("%T")}]#{e}") }
Format.info("I'll log the error inside of #{PATH}/log/error_log.LOG for further analysis.")
end
when OPTIONS[:file]
begin
Whitewidow.spider
sleep(1)
Credits.credits
sleep(1)
Legal.legal
Format.info('Formatting file')
format_file
vulnerability_check
File.truncate("#{PATH}/tmp/SQL_sites_to_check.txt", 0)
Format.info("I'm truncating SQL_sites_to_check file back to #{File.size("#{PATH}/tmp/SQL_sites_to_check.txt")}")
Copy.file("#{PATH}/tmp/SQL_VULN.txt", "#{PATH}/log/SQL_VULN.LOG")
File.truncate("#{PATH}/tmp/SQL_VULN.txt", 0)
Format.info("I've run all my tests and queries, and logged all important information into #{PATH}/log/SQL_VULN.LOG") unless File.size("#{PATH}/log/SQL_VULN.LOG") == 0
rescue Mechanize::ResponseCodeError, RestClient::ServiceUnavailable, OpenSSL::SSL::SSLError, RestClient::BadGateway => e
d = DateTime.now
Format.fatal("Well this is pretty crappy.. I seem to have encountered a #{e} error. I'm gonna take the safe road and quit scanning before I break something. You can either try again, or manually delete the URL that caused the error.")
File.open("#{PATH}/log/error_log.LOG", 'a+'){ |error| error.puts("[#{d.month}-#{d.day}-#{d.year} :: #{Time.now.strftime("%T")}]#{e}") }
Format.info("I'll log the error inside of #{PATH}/log/error_log.LOG for further analysis.")
end
when OPTIONS[:example]
examples_page
else
Format.warning('You failed to pass me a flag!')
usage_page
end
Is there anything within this code that would cause this to randomly pop up? It only happens with random search queries.
Link to GitHub
UPDATE:
I've discovered that Google's advertisement services link has the same extension in its URL as the one giving me problems. However, this doesn't explain why I'm getting this link, or why I can't seem to skip over it.
urls = "settings/ads/preferences?hl=ru"
if urls =~ /settings\/ads\/preferences\?hl=[a-z]{2}/
  p "I'm skipped"
end
=> "I'm skipped"
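For what it's worth, here is a hedged sketch of how that same pattern could be used as a guard inside get_urls, decoding the candidate URL before matching so that URL encoding can't defeat a literal comparison (this is a guess on my part, not a confirmed diagnosis):
# inside the url.links.each loop, before the existing next guards (sketch only)
candidate = URI.decode(urls.to_s)
next if candidate =~ %r{settings/ads/preferences\?hl=[a-z]{2}}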

Webcrawler skipping URLS

I'm writing a program that scans for vulnerable websites. I happen to know that a couple of sites have vulnerabilities and return a SQL syntax error; however, when I run the program, it skips over these sites and doesn't report that they were found or that they were saved into a file. This program is being used for pentesting, and all owners of the sites have been made aware of the vulnerability.
Source:
def get_urls
  info("Searching for possible SQL vulnerable sites.")
  @agent = Mechanize.new
  page = @agent.get('http://www.google.com/')
  google_form = page.form('f')
  google_form.q = "#{SEARCH}"
  url = @agent.submit(google_form, google_form.buttons.first)
  url.links.each do |link|
    if link.href.to_s =~ /url.q/
      str = link.href.to_s
      str_list = str.split(%r{=|&})
      urls = str_list[1]
      next if str_list[1].split('/')[2] == "webcache.googleusercontent.com"
      urls_to_log = urls.gsub("%3F", '?').gsub("%3D", '=')
      success("Site found: #{urls_to_log}")
      File.open("#{PATH}/temp/SQL_sites_to_check.txt", "a+") { |s| s.puts("#{urls_to_log}'") }
    end
  end
  info("Possible vulnerable sites dumped into #{PATH}/temp/SQL_sites_to_check.txt")
end
def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |parse|
    begin
      Timeout::timeout(5) do
        parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}"))
      end
    rescue Timeout::Error, RestClient::ResourceNotFound, RestClient::SSLCertificateNotVerified, Errno::ECONNABORTED, Mechanize::ResponseCodeError, RestClient::InternalServerError => e
      if e
        warn("URL: #{parse.chomp} failed with error: [#{e}] dumped to non_exploitable.txt")
        File.open("#{PATH}/lib/non_exploitable.txt", "a+") { |s| s.puts(parse) }
      else
        success("SQL syntax error discovered in URL: #{parse.chomp} dumped to SQL_VULN.txt")
        File.open("#{PATH}/lib/SQL_VULN.txt", "a+") { |vuln| vuln.puts(parse) }
      end
    end
  end
end
Example of usage:
[22:49:29 INFO]Checking if sites are vulnerable.
[22:49:53 WARNING]URL: http://www.police.bd/content.php?id=275' failed with error: [execution expired] dumped to non_exploitable.txt
File containing the URLs:
http://www.bible.com/subcat.php?id=2'
http://www.cidko.com/pro_con.php?id=3'
http://www.slavsandtat.com/about.php?id=25'
http://www.police.bd/content.php?id=275'
http://www.icdcprage.org/index.php?id=10'
http://huawei.com/en/plugin.php?id=hwdownload'
https://huawei.com/en/plugin.php?id=unlock'
https://facebook.com/profile.php?id'
http://www.footballclub.com.au/index.php?id=43'
http://www.mesrs.qc.ca/index.php?id=1525'
As you can see, the program skips over the first three URLs and goes straight to the fourth one. Why?
Am I doing something wrong to where this will happen?
I'm not sure that rescue block is where it should be. You are not doing anything with the content you fetch in parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}")), and for the first three URLs the request may simply succeed, hence no exception and no error output. Add some output after that line to see them being fetched.
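For example, here is a rough sketch of that idea, keeping the question's helpers (info, success, warn, PATH) and file layout but moving the success path out of the rescue clause; the SQL-error string match is an assumption borrowed from the asker's other scraper, not something prescribed by this answer:
def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |parse|
    begin
      body = nil
      Timeout::timeout(5) do
        body = RestClient.get(parse.chomp)
      end
      # the success path now runs outside the rescue clause
      if body =~ /You have an error in your SQL syntax/
        success("SQL syntax error discovered in URL: #{parse.chomp} dumped to SQL_VULN.txt")
        File.open("#{PATH}/lib/SQL_VULN.txt", "a+") { |vuln| vuln.puts(parse) }
      else
        info("No SQL error in the response from #{parse.chomp}")
      end
    rescue Timeout::Error, RestClient::ResourceNotFound, RestClient::SSLCertificateNotVerified,
           Errno::ECONNABORTED, Mechanize::ResponseCodeError, RestClient::InternalServerError => e
      warn("URL: #{parse.chomp} failed with error: [#{e}] dumped to non_exploitable.txt")
      File.open("#{PATH}/lib/non_exploitable.txt", "a+") { |s| s.puts(parse) }
    end
  end
end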

Ruby Checking to see if website exists

OK, I am checking to see if my server is running. It works as long as the port is correct, but if I change the port to one I know is not accepted, it completely skips my if routine. The example below works fine, but change the port number to, say, 99 and it completely skips the if. I would think it should fall into the else section.
url = URI.parse("http://www.google.com/")
url.port = 80
req = Net::HTTP.new(url.host, url.port)
res = req.request_head(url.path)
if res.code == "200"
  #do something
else
  #do something else
end
You should provide a timeout and rescue SocketError and Timeout::Error:
require "net/http"

def check_server(server, port)
  begin
    http = Net::HTTP.start(server, port, {open_timeout: 5, read_timeout: 5})
    begin
      response = http.head("/")
      if response.code == "200"
        # everything fine
      else
        # unexpected status code
      end
    rescue Timeout::Error
      # timeout reading from server
    end
  rescue Timeout::Error
    # timeout connecting to server
  rescue SocketError
    # unknown server
  end
end
If you just want to check if your server is up, this can be simplified:
require "net/http"

def up?(server, port)
  http = Net::HTTP.start(server, port, {open_timeout: 5, read_timeout: 5})
  response = http.head("/")
  response.code == "200"
rescue Timeout::Error, SocketError
  false
end
It returns true if / returns a 200 status code and false otherwise, i.e. for other status codes, timeouts and typical error conditions.
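For example, a quick usage sketch (hypothetical hosts; the results obviously depend on what is actually reachable from your machine):
puts up?("www.google.com", 80)    # => true if "/" answers with a 200
puts up?("www.saddsdsds.com", 80) # => false (DNS lookup fails, SocketError is rescued)
puts up?("www.google.com", 99)    # => false if the connection attempt times out
Note that a connection that is actively refused raises Errno::ECONNREFUSED, which this version does not rescue, so you may want to add it to the rescue list.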

Ruby - Handling null objects properly

In Ruby, I'm not quite sure how to handle whether objects are nil or not.
For example I have the following:
begin
  sp = SerialPort.new(@serial_device, @serial_bps, @serial_par, @serial_bits, SerialPort::NONE)
  tcp = TCPSocket.new(@host, @port)
  if (sp)
    sp.print(command)
    sp.close
  elsif
    tcp.print(command)
    tcp.close
  end
  say siri_output
rescue
  pp $!
  puts "Sorry, I encountered an error: #{$!}"
ensure
  request_completed
end
The problem is that the first object returns an error relating to:
No route to host - connect(2)
Which is correct, because TCP isn't connected, duh. So I'd like it to use the next object instead.
Is there a way to do this without relying on specific exceptions? I was wondering if there's a better way of doing what I'm after anyhow.
The problem is not the checks; you are doing that part right. Anything that's not nil or false is truthy in Ruby. The issue is that when an exception is raised on the line that starts with "sp = ..", execution jumps to the rescue block. You should restructure the code like this (I've removed the ensure clause because I don't know what it does). It's also good practice to rescue each specific type of exception on its own line, by class name, e.g. NoConnectivityException => e (or whatever the class of the exception would be).
begin
  sp = SerialPort.new(@serial_device, @serial_bps, @serial_par, @serial_bits, SerialPort::NONE)
  sp.print(command)
  sp.close
  say siri_output
rescue Exception => e
  puts "Sorry, I encountered an error: #{e.inspect}"
  puts "trying TCP"
  begin
    tcp = TCPSocket.new(@host, @port)
    tcp.print(command)
    tcp.close
    say siri_output
  rescue Exception => e
    puts "Sorry, I encountered an error: #{e.inspect}"
  end
end
For quick and sloppy programming you can do another thing, but it's not recommended and is generally a pain to debug, because any error results in nil and is silenced:
sp = SerialPort.new(@serial_device, @serial_bps, @serial_par, @serial_bits, SerialPort::NONE) rescue nil
tcp = TCPSocket.new(@host, @port) rescue nil
This way you'd end up with either a SerialPort object or nil in the sp variable, and likewise either a TCPSocket or nil in tcp.
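If you go that route, here is a minimal sketch of what the nil handling could then look like, assuming sp and tcp come from the two rescue nil lines above and that command and siri_output are defined as in your code:
if sp
  sp.print(command)
  sp.close
elsif tcp
  tcp.print(command)
  tcp.close
else
  puts "Sorry, neither the serial port nor the TCP socket is available"
end
say(siri_output) if sp || tcp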
