Ruby application does not receive UDP packet from different host - ruby

Sending a UDP packet from a Ruby client to a Ruby server using the server address 192.168.1.30 works as expected, but only if the client and server are on the same host. If the client runs on a different machine, the UDP packet finds its way to the server machine, but my server process never notices it.
Server:
require 'socket'
sock = UDPSocket.new()
sock.bind('', 8999)
p sock
while true do
  p sock.recvfrom(2000)
end
sock.close
Client:
require 'socket'
sock = UDPSocket.new
p sock.send("foo", 0, "192.168.1.30", 8999)
sock.close
After starting the server, netstat -n --udp --listen confirms that the socket is open:
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 0.0.0.0:8999 0.0.0.0:*
After running the client twice (once on 192.168.1.30 and once on 192.168.1.23), the server output shows only one incoming packet; the one from 192.168.1.23 is missing:
#<UDPSocket:fd 7, AF_INET, 0.0.0.0, 8999>
["foo", ["AF_INET", 52187, "192.168.1.30", "192.168.1.30"]]
while Wireshark shows that both packets arrived:
No Time Source Destination Proto Length Info
1 0.000000000 192.168.1.30 192.168.1.30 UDP 47 52187 → 8999 Len=3
2 2.804243569 192.168.1.23 192.168.1.30 UDP 62 39800 → 8999 Len=3
What presumably obvious detail am I missing?

Check if you have any firewall rules active:
sudo iptables -L
sudo ufw status
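If either of those shows a rule blocking the port, allowing UDP 8999 through the firewall should let the remote packets reach the Ruby process. For example, assuming ufw is in use:
sudo ufw allow 8999/udp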

Related

Pinging local host doesn't function

elasticsearch==7.10.0
I want to ping local host '5601' to check whether Kibana is running, but apparently I am unable to ping it.
Note: I am aware that Elasticsearch has a built-in ping function, but I still want to do the check from the command line for a specific reason in my project.
C:\User>ping 5601
Pinging f00:b00:f00:b00 with 32 bytes of data:
PING: transmit failed. General failure.
PING: transmit failed. General failure.
PING: transmit failed. General failure.
PING: transmit failed. General failure.
Ping statistics for f00:b00:f00:b00:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)
C:\User>ping http://localhost:5601
Ping request could not find host http://localhost:5601. Please check the name and try again.
Could someone help me?
You can use netstat to check whether the port exposed by the Kibana UI, 5601, is in LISTEN mode:
$ netstat -tlpn | grep 5601
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::5601 :::* LISTEN -
Or if you want to establish a connection to destination port 5601 you can use nc
$ nc -vz localhost 5601
Connection to localhost 5601 port [tcp/*] succeeded!
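Since these questions are Ruby-centric, here is a minimal Ruby sketch of the same check; it assumes a plain TCP connect is good enough for your purpose (ping cannot do this, since ICMP works at the IP layer and has no notion of ports):
require 'socket'

# Minimal sketch: the connect succeeds only if something (presumably Kibana)
# is listening on localhost:5601. This is not an official Kibana health check.
begin
  Socket.tcp('localhost', 5601, connect_timeout: 2) { puts 'port 5601 is open' }
rescue SystemCallError => e
  puts "could not connect: #{e.message}"
end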

How does Ruby manage localhost communications via TCP under Windows 7?

I am trying to implement a simple local TCP communication between a Ruby server and a C++ library interfaced to Unreal Engine 4.
I have used some sample code for the Ruby server, which uses the socket library and instantiates a TCPServer:
require 'socket'

puts "Starting up server..."

# establish the server
## Server established to listen for connections on port 15300
server = TCPServer.new(15300)

# set up to listen and accept connections
while (session = server.accept)
  # start new thread conversation
  ## Here we will establish a new thread for a connection client
  Thread.start do
    ## I want to be sure to output something on the server side
    ## to show that there has been a connection
    puts "log: Connection from #{session.peeraddr[2]} at
#{session.peeraddr[3]}"
    puts "log: got input from client"
    ## lets see what the client has to say by grabbing the input
    ## then display it. Please note that session.gets will look
    ## for an end of line character "\n" before moving forward.
    input = session.gets
    puts input
    ## Lets respond with a nice warm welcome message
    session.puts "Server: Welcome #{session.peeraddr[2]}\n"
    # reply with goodbye
    ## now lets end the session since all we wanted to do is
    ## acknowledge the client
    puts "log: sending goodbye"
    session.puts "Server: Goodbye\n"
  end # end thread conversation
end # end loop
This application, if tested with a Ruby client, works perfectly fine.
This is the client:
require 'socket'

# establish connection
## We need to tell the client where to connect
## Conveniently it is on localhost at port 15300!
clientSession = TCPSocket.new("localhost", 15300)
puts "log: starting connection"

# send a quick message
## Note that this has a carriage return. Remember our server
## uses the method gets() to get input back from the server.
puts "log: saying hello"
clientSession.puts "Client: Hello Server World!\n"

# wait for messages from the server
## You've sent your message, now we need to make sure
## the session isn't closed, spit out any messages the server
## has to say, and check to see if any of those messages
## contain 'Goodbye'. If they do we can close the connection
while !clientSession.closed? &&
      (serverMessage = clientSession.gets)
  ## lets output our server messages
  puts serverMessage

  # if one of the messages contains 'Goodbye' we'll disconnect
  ## we disconnect by 'closing' the session.
  if serverMessage.include?("Goodbye")
    puts "log: closing connection"
    clientSession.close
  end
end # end loop
Server output:
Starting up server...
log: Connection from ::1 at
::1
log: got input from client
Client: Hello Server World!
log: sending goodbye
But when I try to connect to the server using C++11 and the TCP functions in Unreal Engine 4, I do not get any kind of response from the server implemented in Ruby (not even "Connection from...").
To understand the problem I ran some network analysis, starting from the simplest tool (Windows' Resource Monitor) up to the most complex (Zenmap). In no case was there a single service running with the selected port (15300) open. I have double-checked every possible cause (e.g. firewall, other security software), but nothing was blocking the Ruby interpreter.
In order to understand why such a simple application was not working I started using Wireshark. It was then that I noticed that there is no local loopback interface, which required me to use RawCap (which manages to capture the local communications and dump them to a file). Using it I managed to dump the local traffic on my machine during both a run of the Ruby client/server pair and a run of Cygwin socat with Unreal Engine 4. What I found was pretty baffling: there were NO open local TCP sockets on the port used by the Ruby pair (port 15300), but the TCP ports opened by Unreal Engine 4 were there (along with port 15301, the one I used for the tests).
Update 01
I have changed the code in my Unreal Engine 4 application, as the first version used the included FTcpSocketBuilder, which binds by default instead of connecting. Now the code is:
bool UNetworkBlueprintLibrary::NetworkSetup(int32 ServerPort) {
    bool res = false;
    // Creating a Socket pointer, which will temporarily hold the socket we create
    FSocket* Socket = nullptr;
    ISocketSubsystem* SocketSubsystem = ISocketSubsystem::Get(PLATFORM_SOCKETSUBSYSTEM);
    // Instead of that useless FTcpSocketBuilder I will try to create the socket by hand and then debug it... >:(
    if (SocketSubsystem != nullptr) {
        // Try to create a stream socket using the system-independent interface
        Socket = SocketSubsystem->CreateSocket(NAME_Stream, TEXT("Server Socket"), false);
        if (Socket != nullptr) {
            FIPv4Address ServerAddress = FIPv4Address(127, 0, 0, 1);
            TSharedRef<FInternetAddr> LocalAddress = ISocketSubsystem::Get(PLATFORM_SOCKETSUBSYSTEM)->CreateInternetAddr();
            uint16 castServerPort = (uint16)ServerPort;
            // Attach the received port and IP address to the Internet Address pointer
            LocalAddress->SetIp(ServerAddress.GetValue());
            LocalAddress->SetPort(castServerPort);
            bool SocketCreationError = !Socket->Connect(*LocalAddress);
            if (SocketCreationError)
            {
                GLog->Logf(TEXT("Failed to create Server Socket as configured!"));
                SocketSubsystem->DestroySocket(Socket);
                Socket = nullptr;
            }
        }
    }
    if (Socket != nullptr) {
        UNetworkBlueprintLibrary::ServerSocket = Socket;
        res = true;
    }
    else {
        res = false;
    }
    return res;
}
In order to test whether the code in Unreal Engine 4 works, I used socat under Cygwin:
socat tcp-listen:15300 readline
Once my Unreal Engine 4 application starts I can send data over the socket. I have confirmed that this works, which rules out the possibility that my Unreal Engine 4 application was not communicating through the TCP socket.
Given this positive result I tried again with the server code reported at the top of this question, removing the lines that wait for a message from the client, but the result is the same: the server does not even output the debug line it should print when there is a connection:
puts "log: Connection from #{session.peeraddr[2]} at #{session.peeraddr[3]}"
Update 02
Back once again!
I have managed to make Ruby and Unreal Engine 4 communicate through local TCP sockets. What I needed to make the Ruby server work was explicitly providing the local address, like so:
server = TCPServer.new("127.0.0.1", 15300)
However, even though this issue is solved, I am still interested in some of the questions I asked originally.
Update 03
Just one consideration I made in the last hour.
When I did not provide an explicit server address to the server, this command
puts "log: Connection from #{session.peeraddr[2]} at
#{session.peeraddr[3]}"
returned
log: Connection from ::1 at
::1
which is indeed correct! ::1 is the IPv6 equivalent of 127.0.0.1, so it looks like Ruby decided to create an IPv6 socket instead of an IPv4 one. Unreal Engine 4 does not provide an IPv6 interface, so this might have been the reason why the application did not work.
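A quick way to see which address family Ruby actually bound to is to inspect the listening socket; a minimal sketch (the exact output depends on the platform and its name resolution):
require 'socket'

server = TCPServer.new(15300)                 # no host given: the wildcard may resolve to IPv6
p server.addr                                 # e.g. ["AF_INET6", 15300, "::", "::"]

server4 = TCPServer.new("127.0.0.1", 15301)   # explicit IPv4 loopback (different port, just for the demo)
p server4.addr                                # e.g. ["AF_INET", 15301, "127.0.0.1", "127.0.0.1"]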
Now communications over localhost when dumped, show the following communications over port 15300:
No. Time Source Destination Protocol Length Info
105 7.573433 127.0.0.1 127.0.0.1 TCP 52 9994→15300 [SYN] Seq=0 Win=8192 Len=0 MSS=65495 WS=256 SACK_PERM=1
106 7.573433 127.0.0.1 127.0.0.1 TCP 52 15300→9994 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=65495 WS=256 SACK_PERM=1
107 7.573433 127.0.0.1 127.0.0.1 TCP 40 9994→15300 [ACK] Seq=1 Ack=1 Win=8192 Len=0
108 7.583434 127.0.0.1 127.0.0.1 TCP 66 15300→9994 [PSH, ACK] Seq=1 Ack=1 Win=8192 Len=26
109 7.583434 127.0.0.1 127.0.0.1 TCP 40 9994→15300 [ACK] Seq=1 Ack=27 Win=7936 Len=0
110 7.583434 127.0.0.1 127.0.0.1 TCP 56 15300→9994 [PSH, ACK] Seq=27 Ack=1 Win=8192 Len=16
111 7.583434 127.0.0.1 127.0.0.1 TCP 40 9994→15300 [ACK] Seq=1 Ack=43 Win=7936 Len=0
208 16.450941 127.0.0.1 127.0.0.1 TCP 52 9995→15300 [SYN] Seq=0 Win=8192 Len=0 MSS=65495 WS=256 SACK_PERM=1
209 16.451941 127.0.0.1 127.0.0.1 TCP 52 15300→9995 [SYN, ACK] Seq=0 Ack=1 Win=8192 Len=0 MSS=65495 WS=256 SACK_PERM=1
210 16.451941 127.0.0.1 127.0.0.1 TCP 40 9995→15300 [ACK] Seq=1 Ack=1 Win=8192 Len=0
211 16.456941 127.0.0.1 127.0.0.1 TCP 66 15300→9995 [PSH, ACK] Seq=1 Ack=1 Win=8192 Len=26
212 16.456941 127.0.0.1 127.0.0.1 TCP 40 9995→15300 [ACK] Seq=1 Ack=27 Win=7936 Len=0
213 16.456941 127.0.0.1 127.0.0.1 TCP 56 15300→9995 [PSH, ACK] Seq=27 Ack=1 Win=8192 Len=16
214 16.456941 127.0.0.1 127.0.0.1 TCP 40 9995→15300 [ACK] Seq=1 Ack=43 Win=7936 Len=0
Packets with a Len of 26 or 16 are the server sending either
Server: Welcome 127.0.0.1
or
Server: Goodbye
I just wanted to share this information; it is not really useful for the answer, but it shows how IPv6 communications do not pass through the standard localhost virtual adapter.
The questions
What I would like to understand is:
As there is no local loopback interface, how does Ruby communicate through local TCP sockets?

Ruby socket binding on system w/ multiple interfaces

I have a computer with 2 interfaces, eth0 (192.168.251.10) and eth1 (192.168.251.11), and I'm trying to have two instances of a Ruby application listen on them, one per interface. They should both listen on the same port, just on different interfaces. The interface to bind to is specified via command-line arguments at runtime.
Regardless of the order I start the applications in, the one listening on eth0 always succeeds, but the one listening on eth1 never receives anything. I checked using Wireshark, which shows the packets are being received, but the Ruby application isn't getting anything.
I've tried with a very simple textbook-case piece of code, so I'm very puzzled as to why it doesn't work:
require 'socket'

BasicSocket.do_not_reverse_lookup = true

socket = UDPSocket.new
ip_addr = ARGV[0]
port = 8722
socket.bind(ip_addr, port)
puts "Listener started on #{ip_addr}:#{port}"

while true
  msg, sender_sockaddr = socket.recvfrom(1024)
end
I'm running Ruby 1.8.7 on Ubuntu 12.04 LTS. Something else I noticed is that if I bring both interfaces down and then up again, it works on the first interface that was brought up, but not on the second.
Output from netstat seems correct, showing listening on two different addresses.
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp 0 0 192.168.251.10:8722 0.0.0.0:*
udp 0 0 192.168.251.11:8722 0.0.0.0:*
udp 0 0 0.0.0.0:5353 0.0.0.0:*
udp 0 0 0.0.0.0:54506 0.0.0.0:*
udp6 0 0 :::38033 :::*
udp6 0 0 :::5353 :::*
This is not a standard way to set up an app. Hard-coding network assignments is not recommended; you'd be better off using a load balancer and letting it handle the interface assignments.
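If the goal is simply to avoid hard-coding one address per process, a common alternative (my assumption, not necessarily what is meant by a load balancer here) is a single listener bound to the wildcard address, which receives datagrams arriving on either interface:
require 'socket'

# Minimal sketch: one process bound to all interfaces on the shared port.
sock = UDPSocket.new
sock.bind('0.0.0.0', 8722)
loop do
  msg, sender = sock.recvfrom(1024)
  puts "#{msg.bytesize} bytes from #{sender[3]}:#{sender[1]}"
end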

LFTP active mode with servers that do not recognize the PORT command

I am using LFTP to transfer files from a server which, unfortunately, does not recognize the PORT command. I do not have control over the server (I do not know in detail what server it is) and I have to use active mode.
This is the command line:
lftp -e 'debug 10;set ftp:passive-mode off; set ftp:auto-passive-mode no; ls; bye;' -u user,password ftp://ftp.site.com
This is the debug output:
<--- 200 Using default language en_US
---> OPTS UTF8 ON
<--- 200 UTF8 set to on
---> OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode;UNIX.owner;
<--- 200 OPTS MLST modify;perm;size;type;UNIX.group;UNIX.mode;UNIX.owner;
---> USER xxxxx
<--- 331 Password required for xxxxx
---> PASS xxxxxx
<--- 230 User xxxxx logged in
---> PBSZ 0
<--- 200 PBSZ 0 successful
---> PROT P
<--- 200 Protection set to Private
---> PORT 172,16,133,11,146,168
<--- 500 Illegal PORT command
---> LIST
---> ABOR
---- Closing aborted data socket
---- Closing control socket
It seems that LFTP gives up on the data connection because the remote server rejects the PORT command. Is there a way to convince LFTP to still connect on port 20? According to the FTP specification this should obviously be no problem.
The issue, I think, is not that the FTP server doesn't support the PORT command (it does), but rather, it doesn't like the IP address/port that your FTP client is sending in the PORT command.
PORT 172,16,133,11,146,168
...tells the server to connect to address 172.16.133.11, port 37544*. The interesting part here is the IP address: it's an RFC 1918 address (i.e. it's a private network address). That, in turn, suggests that your FTP client is in a LAN somewhere, and is connecting to an FTP server using a public IP address.
That remote FTP server cannot connect to a private network address; by definition, RFC 1918 addresses are not publicly routable.
Thus it may well be that the FTP server tries to make a connection to the address/port given in your PORT command and fails, and that is why it rejects the command with:
500 Illegal PORT command
To make a PORT command work with that FTP server, you would need to discover the public IP address that that server can connect to, to reach your client machine. Let's say that this address is 1.2.3.4. Then you would need to tell lftp to use that address in its PORT command, using the ftp:port-ipv4 option.
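For example, assuming 1.2.3.4 really is that public address, something along these lines (ftp:port-ipv4 is the lftp setting that controls the address advertised in PORT):
lftp -e 'set ftp:passive-mode off; set ftp:port-ipv4 1.2.3.4; ls; bye;' -u user,password ftp://ftp.site.com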
Chances are, though, that public IP address is the address of a NAT/router/firewall, and that that NAT/router/firewall will not allow connections, from the outside world to a high numbered port (e.g. 37544), to be routed to a machine within the LAN. This is one of the issues with active FTP data transfers, i.e. FTP data transfers which use the PORT (or EPRT) commands: they are not considered "firewall-friendly".
Hope this helps!
* Why does 146,168 translate to port 37544?
According to FTP's RFC 959, those parameters are:
(...) 16-bit TCP port address. This address information is broken into 8-bit fields and the value of each field is transmitted as a decimal number (in character string representation).
So 146 is the high byte and 168 is the low byte:
146 dec = 10010010 bin
168 dec = 10101000 bin
10010010 10101000 bin = 37544 dec
Equivalently, port = 146 × 256 + 168 = 37544.

Problem connecting a linux server and windows client with sockets

Hello all
I am trying to learn more about sockets and how to use them and I have been stuck on an issue for a while now.
I started with Beej's guide to network programming and I made the talker and listener from this page on the Linux (Fedora 14) machine I am working on. It works and I can get them to talk to each other.
Then I went on to Windows (7), worked with this tutorial, and got that up and running; I can send messages to myself on the Windows machine. The only change I really made is that I am using gethostbyaddr instead of gethostbyname.
It is when I glue the two together that I start to get issues, or rather one issue for now. I am running the listener from Beej on the Linux machine and I try to have the client from Johnnie send it a message. I get a Winsock error 10061 on the Windows machine and nothing ever shows up on the Linux side (not surprisingly). I have tested this with the firewall on the Linux machine and I still get that error.
What can I do to fix this?
Thank you
Edited to add some more info:
When I run tcpdump I get this
[root@localhost ~]# tcpdump tcp port 4950
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
12:08:56.246753 IP TLARGE.WIFI.schoolname.EDU.62394 > hmd46.cs.schoolname.edu.sybasesrvmon: Flags [S], seq 150153995, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
12:08:56.246794 IP hmd46.cs.schoolname.edu.sybasesrvmon > TLARGE.WIFI.schoolname.EDU.62394: Flags [R.], seq 0, ack 150153996, win 0, length 0
12:08:56.746170 IP TLARGE.WIFI.schoolname.EDU.62394 > hmd46.cs.schoolname.edu.sybasesrvmon: Flags [S], seq 150153995, win 8192, options [mss 1460,nop,wscale 2,nop,nop,sackOK], length 0
12:08:56.746221 IP hmd46.cs.schoolname.edu.sybasesrvmon > TLARGE.WIFI.schoolname.EDU.62394: Flags [R.], seq 0, ack 1, win 0, length 0
12:08:57.246138 IP TLARGE.WIFI.schoolname.EDU.62394 > hmd46.cs.schoolname.edu.sybasesrvmon: Flags [S], seq 150153995, win 8192, options [mss 1460,nop,nop,sackOK], length 0
12:08:57.246185 IP hmd46.cs.schoolname.edu.sybasesrvmon > TONJELARGE.WIFI.schoolname.EDU.62394: Flags [R.], seq 0, ack 1, win 0, length 0
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel
Running netstat gives me this:
[root@localhost ~]# netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:58661 0.0.0.0:* LISTEN 1083/rpc.statd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1013/rpcbind
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1265/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1148/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1554/sendmail: acce
tcp 0 0 :::56315 :::* LISTEN 1083/rpc.statd
tcp 0 0 :::111 :::* LISTEN 1013/rpcbind
tcp 0 0 :::22 :::* LISTEN 1265/sshd
tcp 0 0 ::1:631 :::* LISTEN 1148/cupsd
Both of these were from the linux machine
Error 10061 means WSAECONNREFUSED. In the link you posted I see the client is using port 80. Are you sure you modified it to 4950?
Something is definitely getting through to the server, otherwise it wouldn't send you the "I don't like you" RST (that's what connection refused means: not only does it refuse your connection, to add insult to injury it's telling you).
EDIT 1
From the netstat output it seems nobody is listening on 4950.
EDIT 2
I finally brought myself to read that Beej stuff (to be honest I always considered his tutorials the worst). Didn't this set off any alarm? You're creating a UDP socket; that's why nobody is listening on TCP 4950, and that's why you can't connect.
hints.ai_socktype = SOCK_DGRAM;
You have two options:
Use a UDP socket on the Windows side.
Change the code on the server side to use TCP.
The server code as it stands isn't suitable for TCP (recvfrom and all that stuff) but should be easy to adapt.
