Create a UDP ping packet to fetch Mumble (Murmur) status - Ruby

I'd like to send a UDP packet to a Mumble server using Ruby to get status information about how many users are currently connected.
The documentation states there is a way using a UDP packet: http://mumble.sourceforge.net/Protocol#UDP_Ping_packet
However, I don't know how to build that packet in Ruby, and so I get no reply from the server.
require 'socket'
sock = UDPSocket.new
sock.connect("99.99.99.99", 66666)
sock.send("00", 0)
p sock.recvfrom(1) # this does not return
sock.close
How do I format the data of the UDP packet?

This should work to generate your ping packet:
def ping(identifier)
  v = identifier
  a = []
  while v >= 256 # extract bytes from the identifier (>= so the highest byte isn't lost)
    a << v % 256
    v = v / 256
  end
  a << v % 256
  prefix = [0] * (8 - a.length)          # pad the identifier to 8 bytes
  ([0, 0, 0, 0] + prefix + a).pack("C*") # pack the message as bytes
end
usage:
# random 8 byte number as a message identifier - compare this to any packet
# received to ensure you're receiving the correct response.
identifier = rand(256**8)
sock.send ping(identifier), 0
# you should get a response here if the mumble server is
# accessible and responding to pings.
sock.recvfrom(64) # the reply is a 24-byte packet that echoes your identifier
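For a complete round trip, here is a minimal sketch of the same exchange in Python, for illustration only (the host name is a placeholder, 64738 is Murmur's default port, and the 24-byte reply layout of version, identifier echo, connected users, maximum users and allowed bandwidth follows the protocol page linked above):

import random
import socket
import struct

def mumble_ping(host, port=64738, timeout=2.0):
    # 4 zero bytes (request type) followed by an 8-byte identifier
    ident = random.getrandbits(64)
    request = struct.pack(">IQ", 0, ident)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(request, (host, port))
    data, _ = sock.recvfrom(64)

    # reply: 4-byte version, 8-byte identifier echo, then three 32-bit counters
    version, echoed, users, max_users, bandwidth = struct.unpack(">IQIII", data[:24])
    if echoed != ident:
        raise ValueError("reply does not match our identifier")
    return users, max_users, bandwidth

# users, max_users, bandwidth = mumble_ping("mumble.example.com")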

Related

Check and cast error in OMNeT++ TSN. Unable to transmit UDP packets

I am trying to send a UDP packet from an OMNeT++ TSN device to a standard host through a TSN switch that is connected to a router.
However, I get the following check_and_cast error:
check_and_cast(): Cannot cast(inet::physicallayer::signal*)app[0]-0 to type 'inet::physicallayer::EthernetSignalBase *' in module (inet::EthernetMac) of router.eth[0].mac
My omnetpp.ini UDP app setup is as follows:
extends = omnetpptsnnetworksample
#Source application
*.tsnDevice1.numApps = 1
*.tsnDevice1.app[0].typename = "UdpSourceApp"
*.tsnDevice1.app[0].source.packetLength = 10B
*.tsnDevice1.app[0].source.productionInterval = 1ms
*.tsnDevice1.app[0].io.destAddress = "ue[0]"
*.tsnDevice1.app[0].io.destPort = 1000
*.tsnDevice1.app[0].source.clockModule = "^.^.clock"
#Sink application
*.standardHost[*].numApps = 1
*.standardHost[*].app[*].typename = "UdpSinkApp"
*.standardHost[*].app[*].io.localPort = 1000
Where did I go wrong?
TsnDevice and TsnSwitch have LayeredEthernetInterface by default, but StandardHost has EthernetInterface. The two interfaces are not compatible (not sure if they should be or not). So by setting standardHost's ethernet interface type to LayeredEthernetInterface, it should work:
*.standardHost[*].eth[*].typename = "LayeredEthernetInterface"

Implementation of a TeamSpeak-like voice server

I'm implementing a voice chat server which will be used in my Virtual Class e-learning application for Windows, which makes use of the Remote Desktop API.
So far I've been compressing the voice with Opus, and I've tested various options:
To pass the voice through the RDP Virtual Channel. This works, but it creates lots of lag despite creating the channel with CHANNEL_PRIORITY_HI.
To use my own TCP (or UDP) voice server. For this option I have been wondering what the best implementation method would be.
Currently I'm relaying each received UDP datagram to all other clients (later on I will do server-side mixing).
The problem with my current UDP voice server is that it has lag even within the same PC: one server and four clients connected, two of them with open mics, for example.
I get audible lag with this setup:
void VoiceServer(int port)
{
    XSOCKET Y = make_shared<XSOCKET>(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (!Y->Bind(port))
        return;
    auto VoiceServer2 = [&]()
    {
        OPUSBUFF o;
        char d[200] = { 0 };
        map<int, vector<char>> udps;
        for (;;)
        {
            // get datagram
            int sle = sizeof(sockaddr_in6);
            int r = recvfrom(*Y, o.d, 4000, 0, (sockaddr*)d, &sle);
            if (r <= 0)
                break;
            // a MESSAGE is a header and opus data follows
            MESSAGE* m = (MESSAGE*)o.d;
            // have we received data from this client already?
            // m->arb holds the RDP ID of the user
            if (udps.find(m->arb) == udps.end())
            {
                vector<char>& uu = udps[m->arb];
                uu.resize(sle);
                memcpy(uu.data(), d, sle);
            }
            for (auto& att2 : aatts) // attendee list
            {
                long lxid = 0;
                att2->get_Id(&lxid);
#ifndef _DEBUG
                if (lxid == m->arb) // if same
                    continue;
#endif
                const vector<char>& uud = udps[lxid];
                sendto(*Y, o.d + sizeof(MESSAGE), r - sizeof(MESSAGE), 0, (sockaddr*)uud.data(), uud.size());
            }
        }
    };
    // 10 threads receiving
    for (int i = 0; i < 9; i++)
    {
        std::thread t(VoiceServer2);
        t.detach();
    }
    VoiceServer2();
}
Each client runs a VoiceServer thread:
void VoiceServer()
{
    char b[4000] = { 0 };
    vector<char> d2;
    for (;;)
    {
        int r = recvfrom(Socket, b, 4000, 0, 0, 0);
        if (r <= 0)
            break;
        d2.resize(r);
        memcpy(d2.data(), b, r);
        if (audioin && wout)
            audioin->push(d2); // this pushes the buffer to a waveOut writing class
        SetEvent(hPlayEvent);
    }
}
Is this because I am testing on the same machine? But with a TeamSpeak client I had set up in the past there is no lag whatsoever.
Thanks for your opinion.
SendTo():
For message-oriented sockets, care must be taken not to exceed the
maximum packet size of the underlying subnets, which can be obtained
by using getsockopt to retrieve the value of socket option
SO_MAX_MSG_SIZE. If the data is too long to pass atomically through
the underlying protocol, the error WSAEMSGSIZE is returned and no data
is transmitted.
A typical IPv4 header is 20 bytes and the UDP header is 8 bytes. The theoretical limit (on Windows) for the maximum size of a UDP packet is 65507 bytes (0xffff - 20 - 8 = 65507). But is it actually a good idea to send such a large packet? If the packet size is too large, the lower layers of the network stack will split it into fragments at the IP layer. This takes up a lot of network bandwidth and causes delay.
The MTU (maximum transmission unit) is actually a property of the link-layer protocol. An Ethernet II frame (DMAC + SMAC + Type + Data + CRC) has a minimum size of 64 bytes due to the electrical limitations of Ethernet transmission, and its maximum size cannot exceed 1518 bytes; frames outside this range are treated as errors. Since the largest Ethernet II frame is 1518 bytes, subtracting the 14-byte frame header and the 4-byte CRC trailer leaves 1500 bytes for the data field. That is the MTU.
With an MTU of 1500 bytes, the maximum UDP payload is 1500 bytes - IP header (20 bytes) - UDP header (8 bytes) = 1472 bytes if you want the IP layer not to fragment packets. However, since the conventional safe MTU value on the Internet is 576 bytes, it is recommended to keep the UDP payload within 576 - 20 - 8 = 548 bytes per sendto/recvfrom when programming UDP over the Internet.
You need to reduce the number of bytes per send/receive and control the number of sends accordingly.
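As an illustration of that guideline only (not a fix for the server code above), here is a minimal Python sketch that splits a buffer into datagrams no larger than the 548-byte figure derived above; the address and payload are placeholders:

import socket

MAX_UDP_PAYLOAD = 576 - 20 - 8  # 548 bytes: conservative Internet-safe UDP payload size

def send_in_chunks(sock, data, addr):
    # send the buffer in slices so no single datagram exceeds the safe payload size
    for offset in range(0, len(data), MAX_UDP_PAYLOAD):
        sock.sendto(data[offset:offset + MAX_UDP_PAYLOAD], addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_in_chunks(sock, b"\x00" * 4000, ("127.0.0.1", 5005))  # placeholder address and payload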

Strange behavior of the ZeroMQ PUB/SUB pattern with TCP as transport layer

In order to design our API/messages, I've made some preliminary tests with our data:
Protobuf V3 Message:
message TcpGraphes {
    uint32 flowId = 1;
    repeated uint64 curTcpWinSizeUl = 2; // max 3600 elements
    repeated uint64 curTcpWinSizeDl = 3; // max 3600 elements
    repeated uint64 retransUl = 4; // max 3600 elements
    repeated uint64 retransDl = 5; // max 3600 elements
    repeated uint32 rtt = 6; // max 3600 elements
}
The message is built as a multipart message in order to add filter functionality for the clients.
Tested with 10 Python clients: 5 running on the same PC (localhost), 5 running on an external PC.
The protocol used was TCP. About 200 messages were sent every second.
Results:
Local clients are working: they get every message.
Remote clients are missing some messages (throughput seems to be limited by the server to 1 Mbit/s per client).
Server code (C++):
// zeroMQ init
zmq_ctx = zmq_ctx_new();
zmq_pub_sock = zmq_socket(zmq_ctx, ZMQ_PUB);
zmq_bind(zmq_pub_sock, "tcp://*:5559");
Every second, about 200 messages are sent in a loop:
std::string serStrg;
tcpG.SerializeToString(&serStrg);
// first part identifier: [flowId]tcpAnalysis.TcpGraphes
std::stringstream id;
id << It->second->first << tcpG.GetTypeName();
zmq_send(zmq_pub_sock, id.str().c_str(), id.str().length(), ZMQ_SNDMORE);
zmq_send(zmq_pub_sock, serStrg.c_str(), serStrg.length(), 0);
Client code (python):
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, '')
sub.connect('tcp://x.x.x.x:5559')
print ("Waiting for data...")
while True:
    message = sub.recv() # first part (filter part, eg: "134tcpAnalysis.TcpGraphes")
    print ("Got some data:", message)
    message = sub.recv() # second part (protobuf bin)
We have looked at the PCAP and the server doesn't use the full bandwidth available; I can add new subscribers or remove existing ones, and every remote subscriber still gets "only" 1 Mbit/s.
I've tested an iperf3 TCP connection between the two PCs and I reach 60 Mbit/s.
The PC that runs the Python clients has about 30% CPU load.
I've minimized the console windows where the clients are running in order to avoid the printout overhead, but it has no effect.
Is this normal behavior for the TCP transport layer (PUB/SUB pattern)? Does it mean I should use the EPGM protocol?
Config:
Windows XP for the server
Windows 7 for the Python remote clients
ZeroMQ version 4.0.4 used
A performance motivated interest ?
Ok, let's first use the resources a bit more adequately :
// //////////////////////////////////////////////////////
// zeroMQ init
// //////////////////////////////////////////////////////
zmq_ctx = zmq_ctx_new();
int aRetCODE = zmq_ctx_set( zmq_ctx, ZMQ_IO_THREADS, 10 );
assert( 0 == aRetCODE );
zmq_pub_sock = zmq_socket( zmq_ctx, ZMQ_PUB );
aRetCODE = zmq_setsockopt( zmq_pub_sock, ZMQ_AFFINITY, 1023 );
// ^^^^
// ||||
// (:::::::::::)-------++++
// >>> print ( "[{0: >16b}]".format( 2**10 - 1 ) ).replace( " ", "." )
// [......1111111111]
// ||||||||||
// |||||||||+---- IO-thread 0
// ||||||||+----- IO-thread 1
// |......+------ IO-thread 2
// :: : :
// |+------------ IO-thread 8
// +------------- IO-thread 9
//
// API-defined AFFINITY-mapping
Non-Windows platforms with a more recent API can also touch scheduler details and tweak O/S-side priorities even further.
Networking ?
Ok, let's first use the resources a bit more adequately :
aRetCODE = zmq_setsockopt( zmq_pub_sock, ZMQ_TOS, <_a_HIGH_PRIORITY_ToS#_> );
Converting the whole infrastructure into epgm:// ?
Well, if one wishes to experiment and gets warranted resources for doing that E2E.
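On the subscriber side, the same "give the messaging layer more resources" idea can be sketched in pyzmq, under the assumption that parts are being dropped on the receive path (the address is the placeholder from the question): more client I/O threads, no receive high-water mark, and reading both parts of the multipart message in one call.

import zmq

# give the client-side Context more than one I/O thread, mirroring the
# ZMQ_IO_THREADS tuning shown above for the publisher
ctx = zmq.Context(io_threads=2)

sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.RCVHWM, 0)        # 0 = no receive high-water mark, queue instead of dropping
sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything (bytes, not str)
sub.connect("tcp://x.x.x.x:5559")

while True:
    topic, payload = sub.recv_multipart()  # both parts of the multipart message at once
    # ... deserialize the protobuf payload here ...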

How to copy a message inside an ICMP header

Here is the code which I don't quite understand.
struct icmphdr *icmp;
icmp = (struct icmphdr *)(sb->data + sb->nh.iph->ihl * 4);
....
char *cp_data = (char *)((char *)icmp + sizeof(struct icmphdr));
memcpy(cp_data, buffer, 4);
dev_queue_xmit(sb);
Basically what it does is copy the buffer to cp_data, which points somewhere in the icmphdr structure, but where exactly does cp_data point to? What is (char *)((char *)icmp + sizeof(struct icmphdr))?
cp_data, which points to somewhere in icmphdr structure
No, it does not, it points to the memory after the icmphdr.
That's where the ICMP payload is. Look at this image:
The ICMP payload (message) starts after the header which has a size of sizeof(struct icmphdr) so it is located at icmp + sizeof(struct icmphdr).
memcpy(cp_data, buffer, 4); therefore copies four bytes from buffer to the ICMP packet message.
sb->data + sb->nh.iph->ihl * 4 actually skips the IP packet header and points to the ICMP header (look at the above image again). The IP header is at sb->data, the ICMP header at sb->data + sb->nh.iph->ihl * 4 and the ICMP message at sb->data + sb->nh.iph->ihl * 4 + sizeof(struct icmphdr).
For example, ping (echo request / reply) uses the ICMP message field to send the data forth and back. It can also be used for ICMP tunneling.
Update:
if I want to get the size of data section, just do "size = 1500(MTU) - (sizeof(iphdr) + sizeof(icmphdr) + sizeof(ethhdr)). Is that correct?
No, not at all, for the following reasons:
MTU is just the maximal packet size that can be transmitted without fragmentation. ICMP packets should actually be smaller.
The MTU does not include the ethernet header, it defines the maximal packet length in layer 3 (IP, not ethernet!).
sizeof(iphdr) is incorrect because the header size can vary based on IP options. Use iphdr.ihl to get the size of the IP header in 32 bit words.
The correct way is to determine the total IP packet length and subtract IP header length and ICMP header length:
tot_len = sb->nh.iph->tot_len
iphdr_len = sb->nh.iph->ihl * 4
icmphdr_len = sizeof(icmphdr)
size = tot_len - iphdr_len - icmphdr_len
Note: You should always use ntohs to convert network byte order to host byte order.
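To make the offsets concrete, here is a small Python sketch (independent of the kernel code above) that applies the same arithmetic to a raw IPv4+ICMP packet held in a bytes buffer: skip ihl * 4 bytes of IP header, then the 8-byte ICMP header, and what remains up to tot_len is the payload.

import struct

def icmp_payload(packet):
    ihl = (packet[0] & 0x0F) * 4                   # IP header length in bytes (IHL * 4)
    tot_len = struct.unpack("!H", packet[2:4])[0]  # total length, network byte order (like ntohs)
    icmp_header_len = 8                            # type, code, checksum, rest-of-header
    return packet[ihl + icmp_header_len:tot_len]

# Example: a minimal 32-byte echo request carrying the payload b"ping"
# (checksum left at zero purely for illustration)
pkt = bytes([0x45, 0, 0, 32]) + bytes(16) + bytes([8, 0, 0, 0, 0, 1, 0, 1]) + b"ping"
print(icmp_payload(pkt))  # b'ping'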
In the code you pasted:
1. struct icmphdr *icmp; is a pointer to the beginning of the ICMP header in a packet, and by this line
char *cp_data = (char *)((char *)icmp + sizeof(struct icmphdr));
you make cp_data point to the beginning of the payload of the ICMP packet.
2. In ((char *)icmp + sizeof(struct icmphdr)), icmp is cast to (char *) so that the addition (char *)icmp + sizeof(struct icmphdr) returns the start address of the ICMP header plus sizeof(struct icmphdr) bytes.
Here is an example: suppose you have an integer pointer and you increment it by one; it then points to the next integer (that is, it automatically advances 4 bytes), while a character pointer advances by a single byte, since a char is 1 byte.
And so, since memcpy is used to copy bytes from a buffer to where cp_data now points (the payload of the ICMP packet), (char *)icmp + sizeof(struct icmphdr) is again cast to (char *).
Hope this helps!

Ruby TFTP server

I have the following code that I put together for a simple Ruby TFTP server. It works fine in that it listens on port 69, my TFTP client connects to it, and I am able to write the packets to test.txt; but instead of just dumping packets, I want to be able to TFTP a file from my client into the /temp directory.
Thanks in advance for your help!
require 'socket'
class TFTPServer
  def initialize(port)
    @port = port
  end
  def start
    @socket = UDPSocket.new
    @socket.bind('', @port)
    while true
      packet = @socket.recvfrom(1024)
      puts packet
      File.open('/temp/test.txt', 'w') do |p|
        p.puts packet
      end
    end
  end
end
server = TFTPServer.new(69)
server.start
Instead of writing to /temp/test.txt you can use Ruby's Tempfile class.
So in your example:
require 'socket'
require 'tempfile'
class TFTPServer
  def initialize(port)
    @port = port
  end
  def start
    @socket = UDPSocket.new
    @socket.bind('', @port)
    while true
      packet = @socket.recvfrom(1024)
      puts packet
      Tempfile.open('tftpserver') do |p|
        p.puts process_packet(packet)
      end
    end
  end
end
server = TFTPServer.new(69)
server.start
This will create a guaranteed unique temporary file in your /tmp directory with a name based off of 'tftpserver'.
EDIT: I noticed you wanted to write to /temp (not /tmp); to do this you can use Tempfile.new('tftpserver', '/temp') to specify a particular temporary directory.
Edit 2: For anyone interested, there is a library that will do this: https://github.com/spiceworks/net-tftp
You'll not get this so easily: the TFTP protocol is relatively simple, but put/get is not stateless, at least not when the file does not fit in a single packet (a block is something like 512 bytes, though some extensions allow a bigger packet).
The file on the wire is split up, and you'll get a sequence of packets.
Each packet has a sequence number, so the other end can report an error for a specific packet.
You should take a look at the Wikipedia page:
http://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol
Here is some sample code I wrote in 2005; it does the opposite (it sends a file).
It's Python, but reasonably similar to Ruby :D
def send_file(self, addr, filesend, filesize, blocksize):
    print '[send_file] Sending %s (size: %d - blksize: %d) to %s:%d' % (filesend, filesize, blocksize, addr[0], addr[1])
    fd = open(filesend, 'rb')
    for c in range(1, (filesize / blocksize) + 2):
        hdr = pack('!H', DATA) + pack('!H', c)
        indata = fd.read(blocksize)
        if debug > 5: print '[send_file] [%s] Sending block %d size %d' % (filesend, c, len(indata))
        self.s.sendto(hdr + indata, addr)
        data, addr = self.s.recvfrom(1024)
        res = unpack('!H', data[:2])[0]
        data = data[2:]
        if res != ACK:
            print '[send_file] Transfer aborted: %s' % errors[res]
            break
        if debug > 5:
            print '[send_file] [%s] Received ack for block %d' % (filesend, unpack('>H', data[:2])[0] + 1)
    fd.close()
    ## End Transfer
    pkt = pack('!H', DATA) + pack('>H', c) + NULL
    self.s.sendto(pkt, addr)
    if debug: print '[send_file] File send Done (%d)' % c
You can find the constants in arpa/tftp.h (you need a Unix system, or search online).
The sequence number is a big-endian (network order) short: the !H format for struct's pack.
Ruby has something similar to Python's struct module in Array#pack and String#unpack.
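For the original question (receiving a file into /temp), a minimal Python sketch of the receive side could look like the following; the opcodes (WRQ=2, DATA=3, ACK=4) come from RFC 1350 / the Wikipedia page above, a data block shorter than 512 bytes ends the transfer, and error handling, timeouts and retransmissions are left out:

import os
import socket
import struct

WRQ, DATA, ACK = 2, 3, 4  # TFTP opcodes (RFC 1350)

def serve_once(bind_addr=("", 69), target_dir="/temp"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)

    # wait for a write request: | 2-byte opcode | filename | 0 | mode | 0 |
    pkt, client = sock.recvfrom(1024)
    if struct.unpack("!H", pkt[:2])[0] != WRQ:
        return
    filename = pkt[2:].split(b"\x00")[0].decode()
    path = os.path.join(target_dir, os.path.basename(filename))

    sock.sendto(struct.pack("!HH", ACK, 0), client)  # ACK block 0 accepts the transfer

    with open(path, "wb") as f:
        while True:
            pkt, client = sock.recvfrom(4 + 512)
            opcode, block = struct.unpack("!HH", pkt[:4])
            if opcode != DATA:
                break
            f.write(pkt[4:])
            sock.sendto(struct.pack("!HH", ACK, block), client)  # ACK every data block
            if len(pkt) - 4 < 512:  # a short block signals the end of the file
                break

# serve_once()  # binding port 69 normally requires root/administrator privileges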
