I have working code that fetches data over HTTPS (below). It actually runs some tests through PHP. I use a standard timeout, which works fine. But while "waiting" for the server response I need to implement my own timer, because in some cases the test never finishes - the PHP code doesn't return - and only the Ruby timeout fires. So I need to abort the request early to capture the error within the existing HTTPS session.
How can I implement my own timeout for an HTTPS request on top of the existing one?
The existing timeout will always be greater than the custom timeout, e.g. the existing timeout is 10 minutes and the custom one is 5 minutes.
uri = URI.parse(url)
start = Time.new
http_read_timeout = 60 * 10

connection = Net::HTTP.new(uri.host, 443)
connection.use_ssl = true
connection.open_timeout = 50                 # must be set before the connection opens
connection.read_timeout = http_read_timeout

begin
  response = connection.start do |http|
    http.request_get(uri.request_uri)
    # here I need to place code that is triggered
    # when the custom timeout is reached
  end
rescue Timeout::Error
  # connection failed
  time_out_message = "security time out - after #{http_read_timeout} sec"
  return time_out_message
end
puts "finished"
I don't get it. What does your custom timeout do? You're making an HTTP request: it either returns or times out.
You're already setting the timeout value. Your code can't reach into the future and tell you what the external code would eventually have returned, so what do you want it to do, exactly?
But if you really just need an external timeout wrapper, you can use Timeout.timeout. Like this:
require 'timeout'

begin
  Timeout.timeout(your_timeout_period) do
    run_some_code
  end
rescue Timeout::Error => err
  do_something_with err
  # and maybe re-raise:
  raise
end
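A minimal runnable sketch of the layering the question asks for, with a `sleep` standing in for the long-running HTTPS call (a real 5-minute guard would wrap `connection.start` the same way, sitting below the library's 10-minute `read_timeout`):

```ruby
require 'timeout'

# Sketch: a custom outer ceiling over a slower inner operation. The sleep
# is a stand-in for connection.start { ... } from the question.
def guarded_call(custom_timeout)
  Timeout.timeout(custom_timeout) do
    sleep 10            # pretend this is the HTTPS request
    'finished'
  end
rescue Timeout::Error
  "security time out - after #{custom_timeout} sec"
end

puts guarded_call(1)    # the inner work takes 10 s, so the guard fires first
```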
I'm connecting to a TCP server using Ruby's TCPSocket class.
I send some data about an address and I must wait for the server to do some processing to give me the geocoding of said address. Since the process in the server takes some time, I cannot read the response immediately.
When I used socket.readpartial() I got a response of two white spaces.
I temporarily solved this using sleep(5), but I don't like it at all: it's hackish and clumsy, and I risk that even after 5 seconds the response isn't ready and I still get an empty response.
I know that the responses will always be 285 characters long.
Is there a more correct and elegant way of having my TCP socket wait for the full response?
Here's my code:
def matchgeocode(rua, nro, cidade, uf)
  count = 0
  begin
    socket = TCPSocket.new(GEOCODER_URL, GEOCODER_PORT)
    # Needed for authentication
    socket.write("TICKET #{GEOCODER_TICKET}")
    socket.read(2)
    # Here's the message I send to the server
    socket.write("MATCHGEOCODE -Rua:\"#{rua}\" -Nro:#{nro} -Cidade:\"#{cidade}\" -Uf:\"#{uf}\"")
    # My hackish sleep
    sleep(5)
    # Reading the fixed-size response
    response = socket.readpartial(285)
    socket.write('QUIT')
    socket.close
  rescue Exception => e
    count += 1
    puts e.message
    if count <= 5 && response.eql?('')
      retry
    end
  end
  response
end
Since you know the length of the response you should use read, not readpartial.
readpartial returns immediately if ANY data is available, even one byte is enough. That's why you need the sleep call so that the response has time to return to you before readpartial tries to peek at what data is present.
read on the other hand blocks completely until ALL requested data is available. Since you know the length of the result then read is the natural solution here.
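A self-contained sketch of the difference (the chunk sizes, delay, and local loopback server are illustrative, not the real geocoder): the server sends a fixed 285-byte reply in two delayed pieces, and `read` blocks until all of it has arrived, so no `sleep` is needed on the client side.

```ruby
require 'socket'

# A local server that writes a 285-byte response in two delayed chunks.
server = TCPServer.new('127.0.0.1', 0)   # port 0: let the OS pick a free port
port = server.addr[1]

writer = Thread.new do
  conn = server.accept
  conn.write('A' * 100)   # first partial chunk
  sleep 0.2               # simulate server-side processing time
  conn.write('B' * 185)   # the rest of the 285-byte response
  conn.close
end

socket = TCPSocket.new('127.0.0.1', port)
data = socket.read(285)   # blocks until all 285 bytes are available
puts data.length          # => 285
socket.close
writer.join
```

With `readpartial(285)` in place of `read(285)`, the client would typically get back only the first 100-byte chunk.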
I have this snippet of code:
def httpsGet url
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)
  request = Net::HTTP::Get.new(uri.request_uri)
  http.use_ssl = true
  request.initialize_http_header({"someHeader" => "82739840273985734"})
  http.request(request)
end
I've been running a script that uses this just fine for the past week. The script basically calls out to some third-party service with different parameters, many many times over and over again. Suddenly, yesterday and today, this method seems to hang sometimes (I stuck puts statements in several places). It's annoying because the method sometimes hangs after 100 calls, sometimes after 20 calls, sometimes many hours in, etc.
Is that code not the best way to make an HTTPS call with headers in Ruby?
How do I debug this to ensure I'm not doing something wrong?
Is the third-party service down? But even if so, shouldn't the connection in Ruby time out (i.e. shouldn't I get a timeout exception)?
Take a look at the open_timeout and ssl_timeout settings defined for this library:
http = Net::HTTP.new(uri.host, uri.port)
http.open_timeout = 5 # give up opening the TCP connection after 5 seconds
http.ssl_timeout = 5  # give up on the SSL/TLS handshake after 5 seconds
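Net::HTTP also has read_timeout, which bounds each blocking read and is the one that fires when a server accepts the connection but never answers. A quick local sketch (the silent loopback server is a stand-in for the hanging third-party service):

```ruby
require 'net/http'
require 'socket'

# A server that accepts the TCP connection but never sends a response,
# so the client's read_timeout fires.
server = TCPServer.new('127.0.0.1', 0)   # port 0: pick any free port
port = server.addr[1]
Thread.new { server.accept }             # accept, then stay silent

http = Net::HTTP.new('127.0.0.1', port)
http.open_timeout = 1   # seconds to wait for the TCP connect
http.read_timeout = 1   # seconds to wait for each read

result =
  begin
    http.request_get('/')
  rescue Net::OpenTimeout, Net::ReadTimeout => e
    e.class.name
  end
puts result   # the request times out instead of hanging forever
```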
So, I'm trying to simulate some basic HTTP persistent connections using sockets and Ruby - for a college class.
The point is to build a server - able to handle multiple clients - that receives a file path and gives back the file content - just like an HTTP GET.
The current server implementation loops listening for clients, fires a new thread when there's an incoming connection and reads the file paths from this socket. It's very dumb, but it works fine when working with non-presistent connections - one request per connection.
But they should be persistent.
Which means the client shouldn't worry about closing the connection. In the non-persistent version the servers echoes the response and close the connection - goodbye client, farewell.
But being persistent means the server thread should loop and wait for more incoming requests until... well, until there are no more requests. How does the server know that? It doesn't! Some sort of timeout is needed. I tried to do that with Ruby's Timeout, but it didn't work.
Googling for solutions - besides being thoroughly advised to avoid the Timeout module - I've seen a lot of posts about the IO.select method, which should handle this socket-waiting issue much better than threads and stuff (which really sounds cool, considering how Ruby threads (don't) work). I'm trying to understand how IO.select works, but I still haven't been able to make it work in this scenario.
So I'm basically asking two things:
how can I efficiently work this timeout issue on the server-side, either using some thread based solution, low-level socket options or some IO.select magic?
how can the client side know that the server has closed its side of the connection?
Here's the current code for the server:
require 'date'

module Sockettp
  class Server
    def initialize(dir, port = Sockettp::DEFAULT_PORT)
      @dir = dir
      @port = port
    end

    def start
      puts "Starting Sockettp server..."
      puts "Serving #{@dir.yellow} on port #{@port.to_s.green}"

      Socket.tcp_server_loop(@port) do |socket, client_addrinfo|
        handle socket, client_addrinfo
      end
    end

    private

    def handle(socket, addrinfo)
      Thread.new(socket) do |client|
        log "New client connected"
        begin
          loop do
            if client.eof?
              puts "#{'-' * 100} end connection"
              break
            end

            input = client.gets.chomp

            body = content_for(input)

            response = {}
            if body
              response.merge!({
                status: 200,
                body: body
              })
            else
              response.merge!({
                status: 404,
                body: Sockettp::STATUSES[404]
              })
            end

            log "#{addrinfo.ip_address} #{input} -- #{response[:status]} #{Sockettp::STATUSES[response[:status]]}".send(response[:status] == 200 ? :green : :red)
            client.puts(response.to_json)
          end
        ensure
          socket.close
        end
      end
    end

    def content_for(path)
      path = File.join(@dir, path)
      return File.read(path) if File.file?(path)
      return Dir["#{path}/*"] if File.directory?(path)
    end

    def log(msg)
      puts "#{Thread.current} -- #{DateTime.now.to_s} -- #{msg}"
    end
  end
end
Update
I was able to simulate the timeout behaviour using the IO.select method, but the implementation doesn't feel right when combined with a couple of threads for accepting new connections and a couple more for handling requests. The concurrency makes the situation mad and unstable, and I'm probably not sticking with it unless I can figure out a better way of using this solution.
Update 2
Seems like Timeout is still the best way to handle this. I'm sticking with it till I find a better option.
I still don't know how to deal with zombie client connections.
Solution
I ended up using IO.select (I got inspired when looking at the webrick code). You can check the final version here (lib/http/server/client_handler.rb)
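The core of the IO.select approach can be sketched in a few lines (the helper name and the UNIXSocket.pair demo are illustrative, not the final implementation): wait up to a timeout for the socket to become readable, and treat a nil result as an idle connection the server may close.

```ruby
require 'socket'

# Sketch: block on IO.select for at most `timeout` seconds; if nothing
# arrives, return nil so the caller can drop the idle connection.
def read_with_timeout(socket, timeout)
  ready = IO.select([socket], nil, nil, timeout)
  return nil unless ready   # timed out: no data became readable
  socket.gets
end

a, b = UNIXSocket.pair                    # stand-in for a real TCP connection
b.puts "hello"
puts read_with_timeout(a, 1).chomp        # data is queued, returns immediately
puts read_with_timeout(a, 0.2).inspect    # nothing more arrives: nil
```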
You should implement something like heartbeat packets. The client side should send a special packet every few secs/mins to ensure that the server doesn't time out the connection on the client's end. The server just avoids doing anything in response to this call.
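A minimal sketch of a client-side heartbeat, where the "PING" token and the interval are assumptions rather than a real protocol: a background thread writes a special line on a fixed schedule so the server's idle timer never fires.

```ruby
require 'socket'

# Sketch: a background thread sends a hypothetical "PING" line every
# `interval` seconds until the socket is closed.
def start_heartbeat(socket, interval)
  Thread.new do
    loop do
      sleep interval
      begin
        socket.puts 'PING'
      rescue IOError, Errno::EPIPE
        break   # connection closed; stop the heartbeat
      end
    end
  end
end

client, server = UNIXSocket.pair   # stand-in for a real TCP connection
hb = start_heartbeat(client, 0.1)
puts server.gets.chomp             # the server side sees a heartbeat
hb.kill
```

The server then only needs to read and discard these lines; any real request resets its idle timer the same way.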
I'm trying to read URLs from a Redis store and simply fetch the HTTP status of each URL, all within EventMachine. I don't know what's wrong with my code, but it's not asynchronous as expected.
All requests are fired from the first one to the last one, and curiously I only get the first response (the HTTP header I want to check) after the last request. Does anyone have a hint about what's going wrong here?
require 'eventmachine'
require 'em-hiredis'
require 'em-http'

EM.run do
  @redis = EM::Hiredis.connect
  @redis.errback do |code|
    puts "Error code: #{code}"
  end
  @redis.keys("domain:*") do |domains|
    domains.each do |domain|
      if domain
        http = EM::HttpRequest.new("http://www.#{domain}", :connect_timeout => 1).get
        http.callback do
          puts http.response_header.http_status
        end
      else
        EM.stop
      end
    end
  end
end
I'm running this script for a few thousand domains so I would expect to get the first responses before sending the last request.
While EventMachine is async, the reactor itself is single threaded. So, while your loop is running and firing off those thousands of requests, none of them are being executed until the loop exits. Then, if you call EM.stop, you'll stop the reactor before they execute.
You can use something like EM::Iterator to break the processing of domains into chunks, letting the reactor run between them. Then, if you really want to call EM.stop, you'll need some bookkeeping: count the dispatched requests and the received responses, and stop the reactor only once they match.
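A hypothetical sketch of that shape (it assumes the em-http-request gem, and the hard-coded domain list stands in for the Redis results): EM::Iterator runs at most 10 requests at a time, each callback advances the iterator, and the "after" proc stops the reactor once every domain has completed.

```ruby
require 'eventmachine'
require 'em-http'

EM.run do
  domains = %w[example.com example.org]   # placeholder for the Redis keys

  # Process at most 10 domains concurrently; iter.next hands control back
  # to the iterator, so responses are handled as they arrive.
  EM::Iterator.new(domains, 10).each(
    proc { |domain, iter|
      http = EM::HttpRequest.new("http://www.#{domain}", :connect_timeout => 1).get
      http.callback { puts http.response_header.http_status; iter.next }
      http.errback  { iter.next }   # count failures too, or we never finish
    },
    proc { EM.stop }                # every iteration has called iter.next
  )
end
```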
I have a very basic TCP server implemented in Ruby. In general it does what it's supposed to, but every once in a while I get "The connection to the server was reset while the page was loading" error. I have a feeling that it has something to do with close terminating the connection too soon. If so, how do I wait for all the data to be sent? Or is it something else?
require 'socket'

server = TCPServer.new('', 80)
loop do
  session = server.accept
  begin
    session.print Time.now
  ensure
    session.close
  end
end
I'm not an expert in this area, but here is what I believe is happening....
The browser sends a GET request with the header field "Connection: keep-alive". So the browser is expecting to keep the connection alive at least until it receives a complete chunk of the response. Under this protocol, the server response must include a header specifying the length of the response, so that the browser knows when it has received the complete response. After this point, the connection can be closed without the browser caring.
The original example closes the connection too quickly, before the browser can validate that a complete response was received. Curiously, if I run that example and refresh my browser several times, it will load about every 1 in 10 tries. Maybe this erratic behavior is due to the browser occasionally executing fast enough to beat my server closing the connection.
Below is a code example that executes consistently in my browser:
require 'socket'

response = %{HTTP/1.1 200 OK
Content-Type: text/plain;charset=utf-8
Content-Length: 12

Hello World!
}

server = TCPServer.open(80)
loop do
  client = server.accept
  client.puts response
  sleep 1
  client.close
end
I suspect it's because the browser is expecting an HTTP response with headers &c. Curiously, you can make the "reset" error happen every time if you put before the "ensure" a sleep of, say, one second.
How to fix it depends upon what you are after. If this is not to be an HTTP server, then don't use the browser to test it. Instead, use telnet or write a little program. If it is to be an HTTP server, then take a look at webrick, which is built into Ruby MRI >= 1.8. Here's how:
#!/usr/bin/ruby1.8

require 'webrick'

# This class handles time requests
class TimeServer < WEBrick::HTTPServlet::AbstractServlet
  def do_GET(request, response)
    response.status = 200
    response['Content-Type'] = 'text/plain'
    response.body = Time.now.to_s
  end
end

# Create the server. There are many other options, if you need them.
server = WEBrick::HTTPServer.new(:Port => 8080)

# Whenever a request comes in for the root page, use TimeServer to handle it
server.mount('/', TimeServer)

# Finally, start the server. Does not normally return.
server.start
Also, I should note that including Connection: close in the response header doesn't seem to help at all with this connection-reset error in my browser (Firefox 3.6). I have to include both the Content-Length header field and the sleep call to delay closing the connection in order to get a consistent response in my browser.
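A way to take the browser out of the equation when testing: serve one response with an accurate Content-Length and Connection: close, and read it back with a plain TCP client, which simply reads until EOF. The loopback address and ephemeral port here are for the demo; a real server would bind a fixed port.

```ruby
require 'socket'

# One-shot HTTP response with an accurate Content-Length.
body = "Hello World!"
response = "HTTP/1.1 200 OK\r\n" \
           "Content-Type: text/plain\r\n" \
           "Content-Length: #{body.bytesize}\r\n" \
           "Connection: close\r\n" \
           "\r\n" + body

server = TCPServer.new('127.0.0.1', 0)   # port 0: let the OS pick a free port
Thread.new do
  session = server.accept
  session.write(response)
  session.close                          # close only after the full write
end

client = TCPSocket.new('127.0.0.1', server.addr[1])
raw = client.read            # a plain client reads until the server's EOF
puts raw.lines.first         # status line of the response
```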