Raw Socket recvfrom not working for TCP on macOS

I have created a raw socket as below:
rawSockfd = socket(AF_INET, SOCK_RAW, IPPROTO_IP);
I then set a 5 second SO_RCVTIMEO and enabled IP_HDRINCL (set to 1) via setsockopt().
I am sending the IP packet as below:
struct sockaddr_in connection = getSockAddr(dstIPAddress);
long bytes = sendto(rawSockfd, (uint8_t *)packet, size, 0, (struct sockaddr *)&connection, sizeof(struct sockaddr));
I am trying to receive as below:
long rsize = recvfrom(rawSock, buffer, size, 0, (struct sockaddr *)&connection, (socklen_t *)&addrlen);
This works fine for ICMP and UDP: recvfrom is able to read the packet back.
We are facing an issue with TCP: recvfrom returns the error "Resource temporarily unavailable" after the 5 second timeout. If we remove the SO_RCVTIMEO timeout, it blocks forever.
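For completeness, here is a minimal consolidated sketch of the socket setup described above (packet construction, getSockAddr() and the send/receive loop are assumed to exist as in the snippets; error handling on setsockopt() is omitted):

#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>

static int openRawSocket(void)
{
    int rawSockfd = socket(AF_INET, SOCK_RAW, IPPROTO_IP);
    if (rawSockfd < 0)
        return -1;

    /* 5 second receive timeout */
    struct timeval tv;
    tv.tv_sec = 5;
    tv.tv_usec = 0;
    setsockopt(rawSockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    /* we build and include the IP header ourselves */
    int hdrincl = 1;
    setsockopt(rawSockfd, IPPROTO_IP, IP_HDRINCL, &hdrincl, sizeof(hdrincl));

    return rawSockfd;
}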
tcpdump shows the following on the destination. Instead of a SYN-ACK, a Reset is coming back:
09:21:03.972632 IP 10.215.179.1.54745 > 10.207.134.154.8181: Flags [SEW], seq 358899317, win 65535, options [mss 1380,nop,wscale 6,nop,nop,TS val 426499980 ecr 0,sackOK,eol], length 0
09:21:03.972755 IP 10.207.134.154.8181 > 10.215.179.1.54745: Flags [R.], seq 0, ack 358899318, win 0, length 0
Is this a case of macOS not delivering the TCP response to the raw socket, or is something wrong in my code? On Linux it works fine.

Related

Raw socket for directing IPv6 datagrams to the kernel

I’m looking to inject IPv6 datagrams available in the user space (and received through a scheme that first requires some unwrapping that's performed in the user space) to a suitable raw socket for further processing by the Linux kernel. This is fairly simple to do with IPv4 using the following code:
int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
struct sockaddr_ll sa;
memset(&sa, 0, sizeof(sa));
// iph points to the IPv4 datagram unwrapped in the user space and ready to be
// sent to the kernel; iplen is its length
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
    // Error processing.
}
The above injects full IPv4 packets (including the IPv4 headers), and the IPv4 payload gets processed appropriately by the Linux stack. How should the above be modified for use with IPv6 packets? The following adjustments I tried did not work:
int fd = socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));
sa.sll_family   = AF_PACKET;
sa.sll_protocol = htons(ETH_P_IPV6);
sa.sll_halen    = ETH_ALEN;
sa.sll_ifindex  = 2; // <index of eth0>
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
    // Error processing.
}
Any thoughts on why the above doesn't work with raw IPv6 datagrams? 'tcpdump ip6' does show the IPv6 packets I'm inserting, which suggests the kernel sees them! It just happens to be ignoring them as well.
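One detail worth noting: for sends on an AF_PACKET/SOCK_DGRAM socket, packet(7) lists sll_addr/sll_halen among the fields consulted, i.e. the link-layer destination has to be supplied in the address structure. A hedged sketch with those fields filled in (the interface index and destination MAC are placeholders for illustration; this only covers the address structure, not whether the injected datagram reaches the local stack):

/* Hedged sketch: AF_PACKET/SOCK_DGRAM send with the link-layer destination
 * filled in; ifindex and dst_mac are placeholders. */
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>
#include <string.h>

static ssize_t send_ipv6_datagram(int fd, const void *iph, size_t iplen,
                                  int ifindex,
                                  const unsigned char dst_mac[ETH_ALEN])
{
    struct sockaddr_ll sa;
    memset(&sa, 0, sizeof(sa));
    sa.sll_family   = AF_PACKET;
    sa.sll_protocol = htons(ETH_P_IPV6);
    sa.sll_ifindex  = ifindex;            /* e.g. obtained via if_nametoindex("eth0") */
    sa.sll_halen    = ETH_ALEN;
    memcpy(sa.sll_addr, dst_mac, ETH_ALEN);

    return sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa));
}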

Implementation of a teamspeak like voice server

I'm implementing a voice chat server which will be used in my Virtual Class e-learning application for Windows, which makes use of the Remote Desktop API.
So far I've been compressing the voice with Opus and I've tested various options:
To pass the voice through the RDP Virtual Channel. This works, but it creates lots of lag despite creating the channel with CHANNEL_PRIORITY_HI.
To use my own TCP (or UDP) voice server. For this option I have been wondering what the best implementation method would be.
Currently I'm sending the received UDP datagram to all other clients (later on I will do server-side mixing).
The problem with my current UDP voice server is that it has lag even within the same PC: one server and four clients connected, two of them with open mics, for example.
I get audible lag with this setup:
void VoiceServer(int port)
{
    XSOCKET Y = make_shared<XSOCKET>(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (!Y->Bind(port))
        return;

    auto VoiceServer2 = [&]()
    {
        OPUSBUFF o;
        char d[200] = { 0 };
        map<int, vector<char>> udps;
        for (;;)
        {
            // get datagram
            int sle = sizeof(sockaddr_in6);
            int r = recvfrom(*Y, o.d, 4000, 0, (sockaddr*)d, &sle);
            if (r <= 0)
                break;

            // a MESSAGE is a header and opus data follows
            MESSAGE* m = (MESSAGE*)o.d;

            // have we received data from this client already?
            // m->arb holds the RDP ID of the user
            if (udps.find(m->arb) == udps.end())
            {
                vector<char>& uu = udps[m->arb];
                uu.resize(sle);
                memcpy(uu.data(), d, sle);
            }

            for (auto& att2 : aatts) // attendee list
            {
                long lxid = 0;
                att2->get_Id(&lxid);
#ifndef _DEBUG
                if (lxid == m->arb) // if same
                    continue;
#endif
                const vector<char>& uud = udps[lxid];
                sendto(*Y, o.d + sizeof(MESSAGE), r - sizeof(MESSAGE), 0,
                       (sockaddr*)uud.data(), uud.size());
            }
        }
    };

    // 10 threads receiving
    for (int i = 0; i < 9; i++)
    {
        std::thread t(VoiceServer2);
        t.detach();
    }
    VoiceServer2();
}
Each client runs a VoiceServer thread:
void VoiceServer()
{
    char b[4000] = { 0 };
    vector<char> d2;
    for (;;)
    {
        int r = recvfrom(Socket, b, 4000, 0, 0, 0);
        if (r <= 0)
            break;
        d2.resize(r);
        memcpy(d2.data(), b, r);
        if (audioin && wout)
            audioin->push(d2); // this pushes the buffer to a waveOut writing class
        SetEvent(hPlayEvent);
    }
}
Is this because I test on the same machine? But with a TeamSpeak client I had set up in the past there was no lag whatsoever.
Thanks for your opinion.
From the sendto() documentation:
For message-oriented sockets, care must be taken not to exceed the
maximum packet size of the underlying subnets, which can be obtained
by using getsockopt to retrieve the value of socket option
SO_MAX_MSG_SIZE. If the data is too long to pass atomically through
the underlying protocol, the error WSAEMSGSIZE is returned and no data
is transmitted.
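As a hedged illustration of the getsockopt() call the quoted documentation refers to (udpSocket is a placeholder for an already-created UDP socket):

// Hedged sketch: query SO_MAX_MSG_SIZE, the largest datagram the provider
// will accept in a single sendto(); `udpSocket` is a placeholder.
DWORD maxMsgSize = 0;
int optLen = sizeof(maxMsgSize);
if (getsockopt(udpSocket, SOL_SOCKET, SO_MAX_MSG_SIZE,
               (char*)&maxMsgSize, &optLen) == 0)
{
    // maxMsgSize now holds the provider's per-datagram limit.
}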
A typical IPv4 header is 20 bytes, and the UDP header is 8 bytes. The theoretical limit (on Windows) for the maximum size of a UDP packet is 65507 bytes (0xffff - 20 - 8 = 65507). Is it actually the best way to send such a large packet? If you make the packet too large, the network stack will split it into fragments at the IP layer. That takes up a lot of network bandwidth and causes delay.
The MTU (maximum transmission unit) is actually a property of the link-layer protocol. An Ethernet II frame has the structure DMAC + SMAC + Type + Data + CRC; due to the electrical limitations of Ethernet transmission, each frame must be at least 64 bytes and no more than 1518 bytes, and frames outside those limits are treated as errors. Since the largest Ethernet II frame is 1518 bytes, after subtracting the 14-byte frame header and the 4-byte CRC trailer, only 1500 bytes remain for the data field. That is the MTU.
With an MTU of 1500 bytes, the maximum UDP payload that avoids IP-layer fragmentation is 1500 bytes - IP header (20 bytes) - UDP header (8 bytes) = 1472 bytes. However, since the standard MTU value on the Internet is 576 bytes, it is recommended to keep UDP data within 576 - 8 - 20 = 548 bytes per sendto/recvfrom when programming UDP across the Internet.
You need to reduce the bytes per send/receive and control the number of sends accordingly.
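As a hedged illustration of that advice, using the 1472-byte figure from above (socket and destination setup are assumed to be done elsewhere; a real voice protocol would also need per-chunk framing/sequence numbers, which this sketch omits):

// Hedged sketch: send a large buffer as a series of datagrams no larger than
// 1472 bytes so the IP layer does not have to fragment them.
#include <winsock2.h>
#include <vector>

bool SendChunked(SOCKET sock, const sockaddr_in& dest,
                 const std::vector<char>& data)
{
    const size_t kMaxPayload = 1472; // 1500 MTU - 20 IP header - 8 UDP header
    size_t offset = 0;
    while (offset < data.size())
    {
        size_t chunk = data.size() - offset;
        if (chunk > kMaxPayload)
            chunk = kMaxPayload;
        int sent = sendto(sock, data.data() + offset, (int)chunk, 0,
                          (const sockaddr*)&dest, sizeof(dest));
        if (sent == SOCKET_ERROR)
            return false;
        offset += (size_t)sent;
    }
    return true;
}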

Close network kernel socket

I'm developing a network kernel extension and trying to intercept packets; in the DataOut callback I return EJUSTRETURN to swallow the desired packets. Now I want to pass the same data out on a different socket. To achieve this I used:
errno_t errorRet = 0;
socket_t newSocket;
errorRet = sock_socket(AF_INET, SOCK_STREAM, IPPROTO_TCP, sockectUpCallBack, cookie, &newSocket);
errorRet = sock_bind(newSocket, (struct sockaddr *)&localAddress);
errorRet = sock_connect(newSocket, (struct sockaddr *)&remoteAddress, MSG_DONTWAIT);
This works, and sock_connect returns EINPROGRESS (36, "Operation now in progress"). Now my question is: is it possible to close the socket that the packet was previously sent through?
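For what it's worth, the same KPI that creates the socket also provides sock_close() for tearing one down. A hedged sketch of the sequence above with each return value checked and the new socket released if anything other than EINPROGRESS comes back (sockectUpCallBack, cookie, localAddress and remoteAddress are assumed to be defined as in the snippet):

/* Hedged sketch only: same sock_socket/sock_bind/sock_connect sequence as
   above, but checking each KPI return value and closing the socket on
   unexpected failure. */
#include <sys/errno.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/kpi_socket.h>

static errno_t openUpstreamSocket(socket_t *outSocket)
{
    socket_t newSocket = NULL;
    errno_t errorRet = sock_socket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                                   sockectUpCallBack, cookie, &newSocket);
    if (errorRet != 0)
        return errorRet;

    errorRet = sock_bind(newSocket, (struct sockaddr *)&localAddress);
    if (errorRet == 0)
        errorRet = sock_connect(newSocket, (struct sockaddr *)&remoteAddress,
                                MSG_DONTWAIT);

    if (errorRet != 0 && errorRet != EINPROGRESS) { /* EINPROGRESS is expected here */
        sock_close(newSocket);
        return errorRet;
    }

    *outSocket = newSocket;
    return 0;
}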

Inconsistent behavior transmitting bursts of UDP packets on Windows 7

I've got two systems, both running Windows 7. The source is 192.168.0.87 and the target is 192.168.0.22; they are both connected to a small switch on my desk.
The source transmits a burst of 100 UDP packets to the target with this program:
#include <iostream>
#include <vector>
#include <cstring>
#include <cstdlib>
using namespace std;

#include <winsock2.h>

int main()
{
    // It's windows, we need this.
    WSAData wsaData;
    int wres = WSAStartup(MAKEWORD(2,2), &wsaData);
    if (wres != 0) { exit(1); }

    SOCKET s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(0);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { exit(3); }

    int max = 100;

    // build all the packets to send
    typedef vector<unsigned char> ByteArray;
    vector<ByteArray> v;
    v.reserve(max);
    for (int i = 0; i < max; i++) {
        ByteArray bytes(150 + (i % 25), 'a' + (i % 26));
        v.push_back(bytes);
    }

    // send all the packets out, one right after the other.
    addr.sin_addr.s_addr = htonl(0xC0A80016); // 192.168.0.22
    addr.sin_port = htons(24105);
    for (int i = 0; i < max; ++i) {
        if (sendto(s, (const char *)v[i].data(), v[i].size(), 0,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            cout << "i: " << i << " error: " << errno;
        }
    }
    closesocket(s);
    cout << "Complete!" << endl;
}
Now, on the first run I get massive losses of UDP packets (often only one will get through!).
On subsequent runs, all 100 make it through.
If I wait for 2 minutes or so, and run again, I'm back to losing most of the packets.
Reception on the target system is done using Wireshark.
I also ran Wireshark at the same time on the source system, and found exactly the same trace as on the target in all cases.
That means that the packets are getting lost on the source machine, rather than being lost in the switch or on the wire.
I also tried running sysinternals process monitor, and found that indeed, all 100 sendto calls do result in appropriate winsock calls, but not necessarily in packets on the wire.
As near as I can tell (using arp -a), in all cases the target's IP is in the source's arp cache.
Can anyone tell me why Windows is so inconsistent in how it treats these packets? I get that in my actual application I've just got to rate limit my sends a bit, but I'd like to understand why it works sometimes and not others.
Oh yes, and I also tried swapping the systems for send and receive, with no change in behavior.
Most probably the client is overrunning the UDP send buffer, perhaps while the ARP protocol is still resolving the target MAC address. You say that you lose datagrams on the first run and again if you wait 2 minutes or more. Why don't you check with Wireshark what happens on that first run (whether ARP frames are sent/received)?
If that is the problem, you could apply one of these two alternatives:
1. Before running, make sure the ARP entry is already there.
2. Send the first datagram, wait one second or less, then send the burst (a sketch of this follows).
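A hedged sketch of the second alternative, reusing the socket, destination address and packet list from the question (the 100 ms pause is an arbitrary example value):

// Hedged sketch of alternative 2: send one priming datagram so ARP resolution
// can complete, pause briefly, then send the rest of the burst back to back.
#include <winsock2.h>
#include <windows.h>   // Sleep()
#include <vector>

void SendBurstWithArpPriming(SOCKET s, const sockaddr_in& addr,
                             const std::vector<std::vector<unsigned char>>& v)
{
    if (v.empty())
        return;

    // Prime the ARP cache with the first datagram.
    sendto(s, (const char *)v[0].data(), (int)v[0].size(), 0,
           (const sockaddr *)&addr, sizeof(addr));
    Sleep(100); // give ARP time to resolve (example value)

    // Now send the remaining datagrams, one right after the other.
    for (size_t i = 1; i < v.size(); ++i) {
        sendto(s, (const char *)v[i].data(), (int)v[i].size(), 0,
               (const sockaddr *)&addr, sizeof(addr));
    }
}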

Winsock returns 10061 on connect only to localhost

I don't understand what's happening. If I create a socket to anywhere other than localhost, it works fine; connecting to the local machine (whether as "localhost", "127.0.0.1", or the machine's external IP) is what fails.
If I create a socket to an address with nothing listening on that port, I would get a 10060 (timeout) but not a 10061, which makes sense. Why is it that I am getting connection refused when going to localhost?
I tried disabling the firewall just in case it was messing things up, but that is not it.
I am doing all the WSA initialization stuff before this.
_socketToServer = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (_socketToServer == -1) {
    return false;
}

p_int = (int*)malloc(sizeof(int));
*p_int = 1;
if ((setsockopt(_socketToServer, SOL_SOCKET, SO_REUSEADDR,
                (char*)p_int, sizeof(int)) == -1) ||
    (setsockopt(_socketToServer, SOL_SOCKET, SO_KEEPALIVE,
                (char*)p_int, sizeof(int)) == -1)) {
    free(p_int);
    return false;
}
free(p_int);

struct sockaddr_in my_addr;
my_addr.sin_family = AF_INET;
my_addr.sin_port = htons(_serverPort);
memset(&(my_addr.sin_zero), 0, 8);
my_addr.sin_addr.s_addr = inet_addr(_serverIP);

if (connect(_socketToServer, (struct sockaddr*)&my_addr, sizeof(my_addr))
        == SOCKET_ERROR) {
    DWORD error = GetLastError(); // here is where I get the 10061
    return false;
}
Any ideas?
You are not guaranteed to get a WSAETIMEDOUT error when connecting to a non-listening port on another machine. Any number of different errors can occur. However, a WSAETIMEDOUT error typically only occurs if the socket cannot reach the target machine on the network before connect() times out. If it can reach the target machine, a WSAECONNREFUSED error means the target machine acknowledged the connect() request and replied that the requested port cannot accept the connection at that time, either because nothing is listening on it or because its backlog is full (there is no way to differentiate the two).
So when you connect to localhost, you will pretty much always get WSAECONNREFUSED for a non-listening port, because you are connecting to the same machine and there is no delay in determining the port's listening status. It has nothing to do with firewalls or anti-malware. This is just normal behavior.
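As a hedged illustration of telling the two cases apart after a failed connect(), using the socket and address variables from the question (Winsock reports the error via WSAGetLastError()):

// Hedged sketch: distinguish "port reachable but refused" from "host not
// reachable in time" after a failed blocking connect().
if (connect(_socketToServer, (struct sockaddr*)&my_addr, sizeof(my_addr)) == SOCKET_ERROR) {
    int err = WSAGetLastError();
    if (err == WSAECONNREFUSED) {
        // The target replied: nothing is accepting connections on that port (10061).
    } else if (err == WSAETIMEDOUT) {
        // No reply from the target before connect() gave up (10060).
    } else {
        // Some other failure (WSAENETUNREACH, WSAEHOSTUNREACH, ...).
    }
    return false;
}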
