How do you find a random open port in Ruby?

If you call DRb.start_service(nil, some_obj) and then DRb.uri, you get back the local URI, including a port number, that another process can use to make calls.
I'm looking to just have some code find a random available port and return that port number, instead of starting up a full-fledged DRb service. Is there a simple way to do this in Ruby?

Haven't tried it, but this may work.
From http://wiki.tcl.tk/2230
The process can let the system automatically assign a port. For both the Internet domain and the XNS domain, specifying a port number of 0 before calling bind() requests the system to do this.
Also see http://www.ruby-doc.org/stdlib/libdoc/socket/rdoc/classes/Socket.html#M003723
require 'socket'

# Bind to port 0 so the OS assigns a free ephemeral port
socket = Socket.new(:INET, :STREAM, 0)
socket.bind(Addrinfo.tcp("127.0.0.1", 0))
p socket.local_address         #=> #<Addrinfo: 127.0.0.1:2222 TCP>
p socket.local_address.ip_port #=> 2222
Note the use of port 0 in the socket.bind call. The expected behavior is that local_address will contain the randomly assigned open port.
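If you only need the number, a similar trick (a small untested sketch) is to let TCPServer do the bind for you: port 0 again asks the OS for a free port, and addr exposes what was assigned. Keep in mind there is a small race window after the socket is closed, since another process could grab the port before you reuse it.
require 'socket'

# Ask the OS for a free ephemeral port, read it back, then release it.
server = TCPServer.new('127.0.0.1', 0)
port   = server.addr[1]
server.close

puts port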

You can try random-port, a simple Ruby gem (I'm the author):
require 'random-port'
port = RandomPort::Pool.new.acquire
The best way, though, is to use the block:
RandomPort::Pool.new.acquire do |port|
  # Use the port; it will be returned
  # to the pool afterward.
end
The pool is thread-safe and guarantees that the port won't be used by another thread or anywhere else in the app, until it's released.

Related

Forwarding Raw Encrypted HTTPS Data in Ruby for MITM

I'm investigating man-in-the-middle attacks and trying to pipe raw HTTPS data (that is, before decryption) to and from a pair of sockets. For now, I just want to listen to the encrypted traffic, so I want any data going out to go from my web browser, through my script, and out to the intended recipient, and any data coming in to do the reverse. Ideally I'd just like to connect the incoming and outgoing sockets together and have them transfer data between each other automatically, but I haven't seen a way to do it in Ruby, so I have been using the following, which I took from How can I create a two-way SSL socket in Ruby.
Here is my code:
require 'socket'

def socketLoop(incoming, outgoing)
  loop do
    puts "selecting"
    ready = IO.select([outgoing, incoming])
    if ready[0].include?(incoming)
      data_to_send = incoming.read_nonblock(32768)
      outgoing.write(data_to_send)
      puts "sent out"
      puts data_to_send
    end
    if ready[0].include?(outgoing)
      data_received = outgoing.read_nonblock(32768)
      incoming.write(data_received)
      puts "read in"
      puts data_received
      break if outgoing.nil? || outgoing.closed? || outgoing.eof?
    end
  end
end

server = TCPServer.open(LISTENING_PORT)
loop {
  Thread.start(server.accept) { |incoming|
    outgoing = TCPSocket.new(TARGET_IP, TARGET_PORT)
    socketLoop(incoming, outgoing)
    outgoing.close # Disconnect from target
    incoming.close # Disconnect from the client
  }
}
It works beautifully for HTTP but for HTTPS, my browser keeps spinning, and the output seems to indicate that at least part of a certificate has been sent over, but not much more. I presume I was being naïve to think that it would work for SSL, but as far as I know it uses TCP as the transport layer so I'm not sure why it doesn't work. Is it possible to get the raw data in this way? Is it an issue with my Ruby or have I made some wrong assumptions? I'd prefer not to use a system-wide packet sniffer if possible. If it would not be easy in Ruby, I'd be very grateful for any pointers in another language too.
Thanks a lot for your help!
EDIT: It seems that I can do this easily with netcat -
sudo nc -l 443 0<backpipe | nc $TARGET_IP 443 >backpipe
so I am rather embarrassed that I didn't think of something so simple in the first place; however, I would still be interested to see what I was not doing right in Ruby.
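For completeness, a relay along the same lines can be written in Ruby with one thread per direction and IO.copy_stream, which shuttles the raw bytes (including the TLS handshake) without interpreting them. This is an untested sketch assuming the same LISTENING_PORT, TARGET_IP and TARGET_PORT constants as in the code above:
require 'socket'

# Relay raw bytes in both directions; this works for TLS because the
# ciphertext is passed through untouched.
server = TCPServer.open(LISTENING_PORT)
loop do
  Thread.start(server.accept) do |incoming|
    outgoing = TCPSocket.new(TARGET_IP, TARGET_PORT)
    t1 = Thread.new { IO.copy_stream(incoming, outgoing) rescue nil }
    t2 = Thread.new { IO.copy_stream(outgoing, incoming) rescue nil }
    [t1, t2].each(&:join)
    [incoming, outgoing].each { |s| s.close rescue nil }
  end
end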

Boost::asio UDP Broadcast with ephemeral port

I'm having trouble with UDP broadcast transactions under boost::asio, related to the following code snippet. Since I'm trying to broadcast in this instance, deviceIP = "255.255.255.255". devicePort is a specified management port for my device. I want to use an ephemeral local port, so I would prefer, if at all possible, not to have to socket.bind() after the connection; the code supports this for unicast by setting localPort = 0.
boost::asio::ip::address_v4 targetIP = boost::asio::ip::address_v4::from_string(deviceIP);
m_targetEndPoint = boost::asio::ip::udp::endpoint(targetIP, devicePort);
m_ioServicePtr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
m_socketPtr = boost::shared_ptr<boost::asio::ip::udp::socket>(new boost::asio::ip::udp::socket(*m_ioServicePtr));
m_socketPtr->open(m_targetEndPoint.protocol());
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));
// If no local port is specified, default parameter is 0
// If local port is specified, bind to that port.
if(localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}
if(m_forceConnect)
m_socketPtr->connect(m_targetEndPoint);
this->AsyncReceive(); // Register async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this))); // Start thread running io_service process
No matter what I do in terms of the following settings, the transmit is working fine, and I can use Wireshark to see the response packets coming back from the device as expected. These response packets are also broadcasts, as the device may be on a different subnet to the PC searching for it.
The issues are extremely strange to my mind, but are as follows:
If I specify the local port and set m_forceConnect = false, everything works fine, and my receive callback fires appropriately.
If I set m_forceConnect = true in the constructor, but pass in a local port of 0, the transmit works fine, but my receive callback never fires. I would assume this is because the 'target' (m_targetEndpoint) is 255.255.255.255, and since the device has a real IP, the response packet gets filtered out.
(what I actually want) If m_forceConnect = false (and data is transmitted using a send_to call), and local port = 0, therefore taking an ephemeral port, my RX callback immediately fires with an error code 10022, which I believe is an "Invalid Argument" socket error.
Can anyone suggest why I can't use the connection in this manner (not explicitly bound and not explicitly connected)? I obviously don't want to use socket.connect() in this case, as I want to respond to anything I receive. I also don't want to use a predefined port, as I want the user to be able to construct multiple copies of this object without port conflicts.
As some people may have noticed, the overall aim of this is to use the same network-interface base-class to handle both the unicast and broadcast cases. Obviously for the unicast version, I can perfectly happily call m_socket->connect() as I know the device's IP, and I receive the responses since they're from the connected IP address; therefore I set m_forceConnect = true, and it all just works.
As all my transmits use send_to, I have also tried socket.connect(endpoint(ip::address_v4::any(), devicePort)), but I get a 'The requested address is not valid in its context' exception when I try it.
I've tried a pretty serious hack:
boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), m_socketPtr->local_endpoint().port());
m_socketPtr->bind(localEndpoint);
where I extract the initial ephemeral port number and attempt to bind to it, but funnily enough that throws an Invalid Argument exception when I try and bind.
OK, I found a solution to this issue. Under Linux it's not necessary, but under Windows I discovered that if you are neither binding nor connecting, you must have transmitted something before you make the call to async_receive_from(), the call to which is included within my this->AsyncReceive() method.
My solution: make a dummy transmission of an empty string immediately before making the async receive call under Windows, so the modified code becomes:
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));
// If no local port is specified, default parameter is 0
// If local port is specified, bind to that port.
if(localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}
if(m_forceConnect)
m_socketPtr->connect(m_targetEndPoint);
// A dummy TX is required for the socket to acquire the local port properly under Windows
// Transmitting an empty string works fine for this, but the TX must take place BEFORE the first call to async_receive_from(...)
#ifdef WIN32
m_socketPtr->send_to(boost::asio::buffer("", 0), m_targetEndPoint);
#endif
this->AsyncReceive(); // Register async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
It's a bit of a hack in my book, but it is a lot better than implementing all the requirements to defer the call to the async receive until after the first transmission.

Set socket timeout in Ruby via SO_RCVTIMEO socket option

I'm trying to make sockets time out in Ruby via the SO_RCVTIMEO socket option; however, it seems to have no effect on any recent *nix operating system.
Using Ruby's Timeout module is not an option as it requires spawning and joining threads for each timeout which can become expensive. In applications that require low socket timeouts and which have a high number of threads it essentially kills performance. This has been noted in many places including Stack Overflow.
I've read Mike Perham's excellent post on the subject here and in an effort to reduce the problem to one file of runnable code created a simple example of a TCP server that will receive a request, wait the amount of time sent in the request and then close the connection.
The client creates a socket, sets the receive timeout to be 1 second, and then connects to the server. The client tells the server to close the session after 5 seconds then waits for data.
The client should timeout after one second but instead successfully closes the connection after 5.
#!/usr/bin/env ruby
require 'socket'

def timeout
  sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)

  # Timeout set to 1 second
  timeval = [1, 0].pack("l_2")
  sock.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, timeval

  # Connect and tell the server to wait 5 seconds
  sock.connect(Socket.pack_sockaddr_in(1234, '127.0.0.1'))
  sock.write("5\n")

  # Wait for data to be sent back
  begin
    result = sock.recvfrom(1024)
    puts "session closed"
  rescue Errno::EAGAIN
    puts "timed out!"
  end
end

Thread.new do
  server = TCPServer.new(nil, 1234)
  while (session = server.accept)
    request = session.gets
    sleep request.to_i
    session.close
  end
end

timeout
I've tried doing the same thing with a TCPSocket as well (which connects automatically) and have seen similar code in redis and other projects.
Additionally, I can verify that the option has been set by calling getsockopt like this:
sock.getsockopt(Socket::SOL_SOCKET, Socket::SO_RCVTIMEO).inspect
Does setting this socket option actually work for anyone?
You can do this efficiently using select from Ruby's IO class.
IO::select takes 4 parameters. The first three are arrays of sockets to monitor and the last one is a timeout (specified in seconds).
The way select works is that it blocks until at least one of the monitored IO objects is ready to be read from, written to, or has a pending error, and then returns lists of the ready objects.
The first three arguments therefore correspond to the different states to monitor:
Ready for reading
Ready for writing
Has pending exception
The fourth is the timeout you want to set (if any). We are going to take advantage of this parameter.
Select returns an array that contains arrays of IO objects (sockets in this case) which are deemed ready by the operating system for the particular action being monitored.
So the return value of select will look like this:
[
  [sockets ready for reading],
  [sockets ready for writing],
  [sockets raising errors]
]
However, select returns nil if the optional timeout value is given and no IO object is ready within timeout seconds.
Therefore, if you want to do performant IO timeouts in Ruby and avoid having to use the Timeout module, you can do the following:
Let's build an example where we wait timeout seconds for a read on socket:
ready = IO.select([socket], nil, nil, timeout)
if ready
  # do the read
else
  # raise something that indicates a timeout
end
This has the benefit of not spinning up a new thread for each timeout (as in the Timeout module) and will make multi-threaded applications with many timeouts much faster in Ruby.
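Putting that together, a minimal sketch of a read-with-timeout helper could look like this (read_with_timeout and ReadTimeoutError are illustrative names, not part of any library):
require 'socket'

ReadTimeoutError = Class.new(StandardError)

# Wait up to `timeout` seconds for `socket` to become readable,
# then read what is available; raise ReadTimeoutError otherwise.
def read_with_timeout(socket, timeout, max_bytes = 1024)
  ready = IO.select([socket], nil, nil, timeout)
  raise ReadTimeoutError, "no data within #{timeout}s" unless ready

  socket.readpartial(max_bytes)
end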
I think you're basically out of luck. When I run your example with strace (only using an external server to keep the output clean), it's easy to check that setsockopt is indeed getting called:
$ strace -f ruby foo.rb 2>&1 | grep setsockopt
[pid 5833] setsockopt(5, SOL_SOCKET, SO_RCVTIMEO, "\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
strace also shows what's blocking the program. This is the line I see on the screen before the server times out:
[pid 5958] ppoll([{fd=5, events=POLLIN}], 1, NULL, NULL, 8
That means that the program is blocking on this call to ppoll, not on a call to recvfrom. The man page that lists socket options (socket(7)) states that:
Timeouts have no effect for select(2), poll(2), epoll_wait(2), etc.
So the timeout is being set but has no effect. I hope I'm wrong here, but it seems there's no way to change this behavior in Ruby. I took a quick look at the implementation and didn't find an obvious way out. Again, I hope I'm wrong -- this seems to be something basic, how come it's not there?
One (very ugly) workaround is to use dl to call read or recvfrom directly. Those calls are affected by the timeout you set. For example:
require 'socket'
require 'dl'
require 'dl/import'
module LibC
  extend DL::Importer
  dlload 'libc.so.6'
  extern 'long read(int, void *, long)'
end

sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
timeval = [3, 0].pack("l_l_")
sock.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, timeval
sock.connect(Socket.pack_sockaddr_in(1234, '127.0.0.1'))
buf = "\0" * 1024
count = LibC.read(sock.fileno, buf, 1024)
if count == -1
  puts 'Timeout'
end
This code works here. Of course, it's an ugly solution that won't work on many platforms, but it may be a way out.
Also please notice that this is the first time I've done something like this in Ruby, so I'm not aware of all the pitfalls I may be overlooking -- in particular, I'm suspicious of the types I specified in 'long read(int, void *, long)' and of the way I'm passing a buffer to read.
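For what it's worth, on Rubies where the dl library has been removed, Fiddle (its stdlib successor) exposes a very similar importer. This is an untested sketch of the same workaround, assuming libc.so.6 and the same C types as above:
require 'socket'
require 'fiddle'
require 'fiddle/import'

module LibC
  extend Fiddle::Importer
  dlload 'libc.so.6'
  extern 'long read(int, void *, long)'
end

sock = Socket.new(Socket::AF_INET, Socket::SOCK_STREAM, 0)
sock.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, [3, 0].pack("l_l_")
sock.connect(Socket.pack_sockaddr_in(1234, '127.0.0.1'))

buf = "\0" * 1024
count = LibC.read(sock.fileno, buf, 1024) # returns -1 when the SO_RCVTIMEO timeout fires
puts 'Timeout' if count == -1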
Based on my testing, and Jesse Storimer's excellent ebook on "Working with TCP Sockets" (in Ruby), the timeout socket options do not work in Ruby 1.9 (and, I presume 2.0 and 2.1). Jesse says:
"Your operating system also offers native socket timeouts that can be set via the SNDTIMEO and RCVTIMEO socket options. But, as of Ruby 1.9, this feature is no longer functional."
Wow. I think the moral of the story is to forget about these options and use IO.select or Tony Arcieri's NIO library.

Controlling Tor client with Ruby

I am writing a Ruby script which automatically crawls websites for data analysis, and now I have a requirement which is fairly complicated: I have to be able to simulate access from a variety of countries, about 20 different ones. The website will contain different information depending on the IP location, so the only way to get it done is to request it from a server which is actually in that country.
Since I don't want to buy servers in each of those 20 countries, I chose to give Tor a try - as many of you will know, by editing the torrc configuration file it is possible to specify the exit node and hence the country from which the actual request will originate.
When I do this manually, e.g. by editing the torrc file to use an Argentinian server, then disconnecting Tor using Vidalia, reconnecting Vidalia, and then rerunning the request, it works fine. However, I want to automate this process entirely, and do it as efficiently as possible. Tor is written in C, and I'd like to avoid taking apart its entire source code for this. Any idea of the easiest way to automate the whole process using only Ruby?
Also, if I'm missing something and there's a simpler alternative to this whole ordeal, let me know.
Thanks!
Please take a look at the Tor control protocol. You can control circuits over a telnet connection.
http://thesprawl.org/memdump/?entry=8
To switch to a new circuit, which switches to a new endpoint:
require 'net/telnet'

def switch_endpoint
  localhost = Net::Telnet::new("Host" => "localhost", "Port" => "9051",
                               "Timeout" => 10, "Prompt" => /250 OK\n/)
  localhost.cmd('AUTHENTICATE ""') { |c| print c; raise "Cannot authenticate to Tor" if c != "250 OK\n" }
  localhost.cmd('signal NEWNYM')   { |c| print c; raise "Cannot switch Tor to new route" if c != "250 OK\n" }
  localhost.close
end
Be aware of the delay in building a new circuit; it may take a couple of seconds, so you'd better add a delay in the code, or check whether your exit address has changed by calling some remote IP detection site.
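If you also need to pin the exit to a particular country rather than just request a new circuit, the control protocol accepts SETCONF, so the same Net::Telnet approach should work. This is an untested sketch; the exact quoting rules and the use_exit_country helper name are assumptions on my part:
require 'net/telnet'

# Hypothetical helper: ask the running Tor instance to prefer exits in a
# given country (torrc ExitNodes syntax, e.g. {ar}) and build a new circuit.
def use_exit_country(country_code)
  tor = Net::Telnet::new("Host" => "localhost", "Port" => "9051",
                         "Timeout" => 10, "Prompt" => /250 OK\n/)
  tor.cmd('AUTHENTICATE ""')
  tor.cmd("SETCONF ExitNodes={#{country_code}} StrictNodes=1")
  tor.cmd('SIGNAL NEWNYM')
  tor.close
end

use_exit_country('ar')
sleep 10 # give Tor a few seconds to build the new circuit before crawling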

Ruby TCPSocket write doesn't work, but puts does?

I'm working on a Ruby TCP client/server app using GServer and TCPSocket. I've run into a problem that I don't understand. My TCPSocket client successfully connects to my GServer, but I can only send data using puts. Calls to TCPSocket.send or TCPSocket.write do nothing. Is there some magic that I'm missing?
tcp_client = TCPSocket.new( ipaddr, port )
tcp_client.puts( 'Z' ) # -> GServer receives "Z\n"
But if I use write or send...
tcp_client = TCPSocket.new( ipaddr, port )
tcp_client.write( 'Z' ) # -> nothing is received
tcp_client.send( 'Z' ) # -> nothing is received
Thanks for the help
Additional information:
The behavior is the same on Linux & Windows.
Flushing the socket after write doesn't change the behavior.
Are you sure the problem isn't on the server side? Are you using some method to read that expects a string or something ending in "\n"?
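For example, a GServer handler written along these lines (a hypothetical sketch, not the asker's actual server) will block in gets until a newline arrives, which is exactly why a bare write('Z') appears to send nothing:
require 'gserver'

# Minimal line-oriented server: gets blocks until it sees "\n" (or EOF),
# so a client that writes 'Z' without a newline never completes a line.
class LineServer < GServer
  def serve(io)
    line = io.gets
    io.puts "got: #{line}"
  end
end

server = LineServer.new(10001)
server.start
server.join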
With buffering taken care of in the previous posts, the next question is whether the data is actually being sent; consider capturing the traffic on the wire using something like Wireshark. If the data you are sending is visible on the wire, then the server side isn't reading it.
Otherwise, if the data isn't going onto the line, TCP may hold onto data to avoid sending a single segment with only a few bytes in it (see Nagle's Algorithm). Depending on your OS or TCP vendor you may have different behaviour, but most TCP stacks support the TCP_NODELAY option which may help get the data out in a more timely manner.
tcp_client.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
This can help debugging, but typically shouldn't be left in production code if throughput is higher priority than responsiveness.
Try explicitly flushing:
tcp_client = TCPSocket.new( ipaddr, port )
tcp_client.write( 'Z' )
tcp_client.send( 'Z' )
tcp_client.flush
This way, the output is buffered at most only until the point at which you decide it should be sent out.
The reason is likely that puts automatically appends a line ending (LF) to your string.
If you want to use send or write, you need to add the line ending yourself, so for instance that would be:
tcp_client.send("Z\r\n", 0)
I had the same problem, so after reading the socket, I had to explicitly delete the last instance of "\n" by doing the following:
client_socket.gets.gsub(/\n$/, '')
