how to copy message inside icmp header - linux-kernel

Here is the code which I don't quite understand.
struct icmphdr *icmp;
icmp = (struct icmphdr *)(sb->data + sb->nh.iph->ihl * 4);
....
char *cp_data = (char *)((char *)icmp + sizeof(struct icmphdr));
memcpy(cp_data, buffer, 4);
dev_queue_xmit(sb);
Basically what it does is copy the buffer to cp_data, which points somewhere in the icmphdr structure, but where exactly does cp_data point? What is (char *)((char *)icmp + sizeof(struct icmphdr))?

cp_data, which points to somewhere in icmphdr structure
No, it does not; it points to the memory just after the icmphdr.
That's where the ICMP payload is.
The ICMP payload (message) starts after the header, which has a size of sizeof(struct icmphdr), so it is located at icmp + sizeof(struct icmphdr).
memcpy(cp_data, buffer, 4); therefore copies four bytes from buffer to the ICMP packet message.
sb->data + sb->nh.iph->ihl * 4 actually skips the IP packet header and points to the ICMP header. The IP header is at sb->data, the ICMP header at sb->data + sb->nh.iph->ihl * 4, and the ICMP message at sb->data + sb->nh.iph->ihl * 4 + sizeof(struct icmphdr).
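Put together as code, a sketch against the 2.x-era struct sk_buff fields used in the question (sb is the sk_buff):
struct iphdr *iph = sb->nh.iph;                                     /* IP header, at sb->data */
struct icmphdr *icmp = (struct icmphdr *)(sb->data + iph->ihl * 4); /* right after the IP header */
char *payload = (char *)icmp + sizeof(struct icmphdr);              /* the ICMP message */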
For example, ping (echo request / reply) uses the ICMP message field to send its data back and forth. It can also be used for ICMP tunneling.
Update:
if I want to get the size of the data section, can I just do size = 1500 (MTU) - (sizeof(iphdr) + sizeof(icmphdr) + sizeof(ethhdr))? Is that correct?
No, not at all, for the following reasons:
The MTU is just the maximum packet size that can be transmitted without fragmentation; ICMP packets should actually be smaller.
The MTU does not include the Ethernet header; it defines the maximum packet length at layer 3 (IP, not Ethernet!).
sizeof(iphdr) is incorrect because the header size can vary with IP options. Use iphdr.ihl to get the size of the IP header in 32-bit words.
The correct way is to determine the total IP packet length and subtract IP header length and ICMP header length:
tot_len = sb->nh.iph->tot_len
iphdr_len = sb->nh.iph->ihl * 4
icmphdr_len = sizeof(icmphdr)
size = tot_len - iphdr_len - icmphdr_len
Note: tot_len is in network byte order; you should always use ntohs to convert network byte order to host byte order.
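For example, a minimal sketch of that calculation, using the same 2.x-era sk_buff fields as the question (note the ntohs):
struct iphdr *iph = sb->nh.iph;
unsigned int tot_len = ntohs(iph->tot_len);                   /* total IP packet length, converted to host order */
unsigned int iphdr_len = iph->ihl * 4;                        /* IHL counts 32-bit words */
unsigned int size = tot_len - iphdr_len - sizeof(struct icmphdr); /* ICMP payload size */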

In the code you pasted,
struct icmphdr *icmp; // a pointer to the beginning of the ICMP header in a packet
and by this line:
char *cp_data = (char *)((char *)icmp + sizeof(struct icmphdr));
you make cp_data point to the beginning of the payload of the ICMP packet.
In ((char *)icmp + sizeof(struct icmphdr)), icmp is cast to (char *) so that the addition returns the address (the ICMP header's start address) plus the header size in bytes.
Here is an example:
Suppose you have an integer pointer and you increment it by one: it then points to the next integer, automatically advancing sizeof(int) bytes (typically 4), while a character pointer advances by a single byte, since a char is 1 byte.
And so, since memcpy is used to copy bytes from a buffer to where cp_data now points (the payload of the ICMP packet), (char *)icmp + sizeof(struct icmphdr) is again cast to (char *).
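A small standalone demo of that pointer arithmetic:
#include <stdio.h>

int main(void)
{
    int nums[2] = { 1, 2 };
    int *ip = nums;
    char *cp = (char *)nums;

    /* ip + 1 advances by sizeof(int) bytes (typically 4);
       cp + 1 advances by exactly one byte. */
    printf("int step:  %d bytes\n", (int)((char *)(ip + 1) - (char *)ip));
    printf("char step: %d bytes\n", (int)((cp + 1) - cp));
    return 0;
}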
Hope this helps!

Related

Raw socket for directing IPv6 datagrams to the kernel

I’m looking to inject IPv6 datagrams available in user space (received through a scheme that first requires some unwrapping, performed in user space) into a suitable raw socket for further processing by the Linux kernel. This is fairly simple to do with IPv4, using the following code:
int fd=socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
struct sockaddr_ll sa;
memset(&sa, 0, sizeof(sa));
// iph is the IPv4 datagram unwrapped in the user space and ready to be
// sent to the kernel
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
// Error processing.
}
The above injects full IPv4 packets (including the IPv4 headers), and the IPv4 payload gets processed appropriately by the Linux stack. How should the above be modified for use with IPv6 packets? The following adjustments I tried did not work:
int fd=socket(AF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL));
sa.sll_family=AF_PACKET;
sa.sll_protocol=htons(ETH_P_IPV6);
sa.sll_halen=ETH_ALEN;
sa.sll_ifindex=2; // <index of eth0>
if (sendto(fd, iph, iplen, 0, (struct sockaddr *)&sa, sizeof(sa)) != iplen) {
// Error processing.
}
Any thoughts on why the above doesn't work with raw IPv6 datagrams? 'tcpdump ip6' does show the IPv6 packets I'm inserting, which suggests the kernel sees them! It just happens to be ignoring them as well.

Implementation of a teamspeak like voice server

I'm implementing a voice chat server which will be used in my Virtual Class e-learning application for Windows, which makes use of the Remote Desktop API.
So far I've been compressing the voice with Opus, and I've tested various options:
To pass the voice through the RDP Virtual Channel. This works but it creates lots of lag despite the channel creation with CHANNEL_PRIORITY_HI.
To use my own TCP (or UDP) voice server. For this option I have been wondering what would be the best method to implement.
Currently I'm sending each received UDP datagram to all other clients (later on I will do server-side mixing).
The problem with my current UDP voice server is that it has lag even within the same PC: one server and four clients connected, two of them with open mics, for example.
I get audible lag with this setup:
void VoiceServer(int port)
{
XSOCKET Y = make_shared<XSOCKET>(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
if (!Y->Bind(port))
return;
auto VoiceServer2 = [&]()
{
OPUSBUFF o;
char d[200] = { 0 };
map<int, vector<char>> udps;
for (;;)
{
// get datagram
int sle = sizeof(sockaddr_in6);
int r = recvfrom(*Y, o.d, 4000, 0, (sockaddr*)d, &sle);
if (r <= 0)
break;
// a MESSAGE is a header and opus data follows
MESSAGE* m = (MESSAGE*)o.d;
// have we received data from this client already?
// m->arb holds the RDP ID of the user
if (udps.find(m->arb) == udps.end())
{
vector<char>& uu = udps[m->arb];
uu.resize(sle);
memcpy(uu.data(), d, sle);
}
for (auto& att2 : aatts) // attendee list
{
long lxid = 0;
att2->get_Id(&lxid);
#ifndef _DEBUG
if (lxid == m->arb) // if same
continue;
#endif
const vector<char>& uud = udps[lxid];
sendto(*Y, o.d + sizeof(MESSAGE), r - sizeof(MESSAGE), 0, (sockaddr*)uud.data(), uud.size());
}
}
};
// 10 threads receiving
for (int i = 0; i < 9; i++)
{
std::thread t(VoiceServer2);
t.detach();
}
VoiceServer2();
}
Each client runs a VoiceServer thread:
void VoiceServer()
{
char b[4000] = { 0 };
vector<char> d2;
for (;;)
{
int r = recvfrom(Socket, b, 4000, 0, 0,0);
if (r <= 0)
break;
d2.resize(r);
memcpy(d2.data(), b, r);
if (audioin && wout)
audioin->push(d2); // this pushes the buffer to a waveOut writing class
SetEvent(hPlayEvent);
}
}
Is this because I'm testing on the same machine? With a TeamSpeak client I had set up in the past there was no lag whatsoever.
Thanks for your opinion.
From the SendTo() documentation:
For message-oriented sockets, care must be taken not to exceed the
maximum packet size of the underlying subnets, which can be obtained
by using getsockopt to retrieve the value of socket option
SO_MAX_MSG_SIZE. If the data is too long to pass atomically through
the underlying protocol, the error WSAEMSGSIZE is returned and no data
is transmitted.
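A minimal sketch of that getsockopt query on Windows (Winsock; link with ws2_32.lib, error handling mostly elided):
#include <winsock2.h>
#include <stdio.h>

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    DWORD maxmsg = 0;
    int len = sizeof(maxmsg);
    if (getsockopt(s, SOL_SOCKET, SO_MAX_MSG_SIZE, (char *)&maxmsg, &len) == 0)
        printf("SO_MAX_MSG_SIZE: %lu bytes\n", (unsigned long)maxmsg);

    closesocket(s);
    WSACleanup();
    return 0;
}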
A typical IPv4 header is 20 bytes and the UDP header is 8 bytes, so the theoretical limit (on Windows) for the maximum size of a UDP packet is 0xffff - 20 - 8 = 65507 bytes. Is it actually the best way to send such a large packet? No: if the packet size is too large, the network stack will split it into fragments at the IP layer. This takes up extra network bandwidth and causes delay.
The MTU (maximum transmission unit) is a property of the link-layer protocol. An Ethernet II frame (DMAC+SMAC+Type+Data+CRC) has a minimum size of 64 bytes, due to the electrical limitations of Ethernet transmission, and a maximum size of 1518 bytes; frames smaller or larger than these limits are treated as errors. Taking the largest Ethernet II frame of 1518 bytes and subtracting the 14-byte frame header and the 4-byte CRC trailer leaves 1500 bytes for the data field. That is the MTU.
With an MTU of 1500 bytes, the maximum UDP packet that avoids splitting at the IP layer is 1500 bytes - IP header (20 bytes) - UDP header (8 bytes) = 1472 bytes. However, since the conventional minimum MTU on the Internet is 576 bytes, it is recommended to keep the UDP data length within 576 - 8 - 20 = 548 bytes per sendto/recvfrom when programming UDP for the Internet.
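In code, a sender could cap its datagram size accordingly (constants taken from the arithmetic above):
enum {
    IP_HDR_LEN  = 20,
    UDP_HDR_LEN = 8,
    MAX_PAYLOAD_LAN      = 1500 - IP_HDR_LEN - UDP_HDR_LEN, /* 1472: avoids fragmentation on Ethernet */
    MAX_PAYLOAD_INTERNET = 576  - IP_HDR_LEN - UDP_HDR_LEN  /* 548: conservative Internet-wide limit */
};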
Either way, you need to reduce the number of bytes per send/receive and increase the number of sends instead.

Read/Write from ATtiny1616 EEPROM

Using the ATtiny1616 with avr-gcc, I am trying to read from and write to the EEPROM.
The ATtiny1616 uses the NVMCTRL (Nonvolatile Memory Controller) for byte-level reads/writes. I am using NVMCTRL to read/write blocks from the EEPROM, but it is not working correctly.
Here is an example to demonstrate what I am trying to do.
Let's say that I want to save two different values to the EEPROM and then read back each one's value.
uint16_t eeprom_address1 = 0x01;//!< Address one for first saved value
uint16_t eeprom_address2 = 0x32;//!< Address two for second saved value
char save_one[] = "12345"; //!< Test value to save, one
char save_two[] = "testing"; //!< Test value to save, two
FLASH_0_write_eeprom_block(eeprom_address1, (uint8_t *)save_one, 7); //!< Save first value to address 1
FLASH_0_write_eeprom_block(eeprom_address2, (uint8_t *)save_two, 7); //!< Save second value to address 2
char test_data[7] = {0}; //!< Just some empty array to put chars into
FLASH_0_read_eeprom_block(eeprom_address1, (uint8_t *)test_data, 7); //!< Read eeprom from address to address + 7, and store back into test_data
Here are the read/write functions:
#define EEPROM_START (0x1400) //!< located in a header file
/**
* \brief Read a block from eeprom
*
* \param[in] eeprom_adr The byte-address in eeprom to read from
* \param[in] data Buffer to place read data into
*
* \return Nothing
*/
void FLASH_0_read_eeprom_block(eeprom_adr_t eeprom_adr, uint8_t *data, size_t size)
{
// Read operation will be stalled by hardware if any write is in progress
memcpy(data, (uint8_t *)(EEPROM_START + eeprom_adr), size);
}
/**
* \brief Write a block to eeprom
*
* \param[in] eeprom_adr The byte-address in eeprom to write to
* \param[in] data The buffer to write
*
* \return Status of write operation
*/
nvmctrl_status_t FLASH_0_write_eeprom_block(eeprom_adr_t eeprom_adr, uint8_t *data, size_t size)
{
uint8_t *write = (uint8_t *)(EEPROM_START + eeprom_adr);
/* Wait for completion of previous write */
while (NVMCTRL.STATUS & NVMCTRL_EEBUSY_bm)
;
/* Clear page buffer */
ccp_write_spm((void *)&NVMCTRL.CTRLA, NVMCTRL_CMD_PAGEBUFCLR_gc);
do {
/* Write byte to page buffer */
*write++ = *data++;
size--;
// If we have filled an entire page or written last byte to a partially filled page
if ((((uintptr_t)write % EEPROM_PAGE_SIZE) == 0) || (size == 0)) {
/* Erase written part of page and program with desired value(s) */
ccp_write_spm((void *)&NVMCTRL.CTRLA, NVMCTRL_CMD_PAGEERASEWRITE_gc);
}
} while (size != 0);
return NVM_OK;
}
The value that is returned when test_data is printed is "testing".
When looking at the memory in debug mode, I can see that the value is always written to the first memory location in the data EEPROM (0x1400).
In this case, the value "testing" starts at memory 0x1400.
There seems to be something fundamental that I have failed to understand about reading from and writing to the EEPROM. Any guidance would be greatly appreciated.

Inconsistent behavior transmitting bursts of UDP packets on Windows 7

I've got two systems, both running Windows 7. The source is 192.168.0.87, the target is 192.168.0.22, they are both connected to a small switch on my desk.
The source is transmitting a burst of 100 UDP packets to the target with this program -
#include <iostream>
#include <vector>
using namespace std;
#include <winsock2.h>
int main()
{
// It's windows, we need this.
WSAData wsaData;
int wres = WSAStartup(MAKEWORD(2,2), &wsaData);
if (wres != 0) { exit(1); }
SOCKET s = socket(AF_INET, SOCK_DGRAM, 0);
if (s == INVALID_SOCKET) { exit(1); } // SOCKET is unsigned, so a < 0 check would never trigger
struct sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl(INADDR_ANY);
addr.sin_port = htons(0);
if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { exit(3); }
int max = 100;
// build all the packets to send
typedef vector<unsigned char> ByteArray;
vector<ByteArray> v;
v.reserve(max);
for(int i=0;i<max;i++) {
ByteArray bytes(150+(i%25), 'a'+(i%26));
v.push_back(bytes);
}
// send all the packets out, one right after the other.
addr.sin_addr.s_addr = htonl(0xC0A80016);// 192.168.0.22
addr.sin_port = htons(24105);
for(int i=0;i<max;++i) {
if (sendto(s, (const char *)v[i].data(), v[i].size(), 0,
(struct sockaddr *)&addr, sizeof(addr)) < 0) {
cout << "i: " << i << " error: " << WSAGetLastError(); // Winsock errors are not reported via errno
}
}
closesocket(s);
cout << "Complete!" << endl;
}
Now, on first run I get massive losses of UDP packets (often only 1 will get through!).
On subsequent runs, all 100 make it through.
If I wait for 2 minutes or so, and run again, I'm back to losing most of the packets.
Reception on the target system is done using Wireshark.
I also ran Wireshark at the same time on the source system, and found exactly the same trace as on the target in all cases.
That means that the packets are getting lost on the source machine, rather than being lost in the switch or on the wire.
I also tried running sysinternals process monitor, and found that indeed, all 100 sendto calls do result in appropriate winsock calls, but not necessarily in packets on the wire.
As near as I can tell (using arp -a), in all cases the target's IP is in the source's arp cache.
Can anyone tell me why Windows is so inconsistent in how it treats these packets? I get that in my actual application I've just got to rate limit my sends a bit, but I'd like to understand why it works sometimes and not others.
Oh yes, and I also tried swapping the systems for send and receive, with no change in behavior.
Most probably the client is overrunning the UDP send buffer, perhaps while the ARP protocol is resolving the target MAC address. You say that you lose datagrams on the first run and again after waiting 2 minutes or more, which matches an ARP cache entry expiring. Why don't you check with Wireshark what happens in that first run (whether ARP frames are sent/received)?
If that is the problem, you could apply one of these two alternatives:
1. Before running, make sure the ARP entry is there.
2. Send the first datagram, wait 1 second or less, then send the burst (see the sketch below).
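A minimal sketch of alternative 2, reusing the s, v, addr, and max variables from the question's program (the 100 ms pause is an arbitrary guess, not a measured ARP resolution time):
// Prime the ARP cache with the first datagram, give resolution a moment,
// then send the rest of the burst back to back.
sendto(s, (const char *)v[0].data(), (int)v[0].size(), 0,
       (struct sockaddr *)&addr, sizeof(addr));
Sleep(100); // from <windows.h>; crude, but typically enough for ARP on a LAN
for (int i = 1; i < max; ++i) {
    sendto(s, (const char *)v[i].data(), (int)v[i].size(), 0,
           (struct sockaddr *)&addr, sizeof(addr));
}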

Core MIDI: when I send a MIDIPacketList using MIDISend() only the first packet is being sent

I am trying to send a MIDIPacketList containing two packets that describe controller position change message relating to a x-y style controller.
The function I'm trying to implement takes an x and y position, then creates the packets and sends them to the selected target device as follows:
- (void)matrixCtrlSetPosX:(int)posX PosY:(int)posY {
MIDIPacketList packetList;
packetList.numPackets = 2;
packetList.packet[0].length = 3;
packetList.packet[0].data[0] = 0xB0; // status: controller change
packetList.packet[0].data[1] = 0x32; // controller number 50
packetList.packet[0].data[2] = (Byte)posX; // value (x position)
packetList.packet[0].timeStamp = 0;
packetList.packet[1].length = 3;
packetList.packet[1].data[0] = 0xB0; // status: controller change
packetList.packet[1].data[1] = 0x33; // controller number 51
packetList.packet[1].data[2] = (Byte)posY; // value (y position)
packetList.packet[1].timeStamp = 0;
CheckError(MIDISend(_outputPort, _destinationEndpoint, &packetList), "Couldn't send MIDI packet list");
}
The problem I am having is that the program only appears to be sending out the first packet.
I have tried splitting the output into two separate MIDIPacketLists and making two calls to MIDISend(), which does work, but I am sure that there must be something trivial I am missing in building the MIDI packet list so that the two messages can be sent in one call to MIDISend(). I just cannot seem to figure out what the problem is here! Has anyone here had experience doing this, or am I going about this the wrong way entirely?
Just declaring the MIDIPacketList doesn't allocate memory or set up the structure. There's a process to adding packets to the list. Here's a quick and dirty example:
- (void)matrixCtrlSetPosX:(int)posX PosY:(int)posY {
MIDITimeStamp timestamp = 0;
const ByteCount MESSAGELENGTH = 6;
Byte buffer[1024]; // storage space for MIDI Packets
MIDIPacketList *packetlist = (MIDIPacketList*)buffer;
MIDIPacket *currentpacket = MIDIPacketListInit(packetlist);
Byte msgs[MESSAGELENGTH] = {0xB0, 0x32, (Byte)posX, 0xB0, 0x33, (Byte)posY};
currentpacket = MIDIPacketListAdd(packetlist, sizeof(buffer),
currentpacket, timestamp, MESSAGELENGTH, msgs);
CheckError(MIDISend(_outputPort, _destinationEndpoint, packetlist), "Couldn't send MIDI packet list");
}
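One thing worth noting about this approach: MIDIPacketListAdd returns NULL when the packet won't fit in the supplied buffer, so checking currentpacket before calling MIDISend() is a good idea.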
I adapted this code from testout.c found here
