IPsec anti-replay service: will a packet whose sequence number is below the lower edge of the window be dropped?

I have one more query about the IPsec anti-replay window service; consider this example. The window size is 64, so the window initially spans sequence numbers 1 to 64. The receiver has received every sequence number except seq no 3, and then receives seq no 68, so the window slides 4 positions to the right: Top = 68, Bottom = 5. Now, in this case,
the first question is:
Will the window shift by 4? I think yes, but I need input on this.
If yes, what happens to the slot for seq no 3, which was never received (never marked)? If seq no 3 arrives later, then seq no < Bottom, so the packet should be dropped, right? Could someone please share their input on this?
NOTE: I am using odp-dpdk as the data-plane engine here; the Linux stack is not involved.

I didn't quite understand your first question, but yes, the bottom limit becomes 5 now. If you receive a packet with sequence number 3 after that, the packet will be dropped.
There's no retransmission mechanism in IPsec; the upper-layer protocols need to take care of missing packets. For example, TCP will retransmit a packet that hasn't been acknowledged within a time frame. At the IPsec layer, that packet simply gets encrypted and transmitted again; IPsec won't even know it's a retransmission.
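To make the window mechanics concrete, here is a minimal sketch of a 64-entry anti-replay check in C, in the style of RFC 4303 Appendix B; the names are illustrative, not odp-dpdk APIs. Running your example through it: sequence 68 shifts the bitmap left by 4 and sets Top = 68, and a late sequence number 3 then sits 65 positions behind Top, outside the 64-entry window, so it is dropped:

    #include <stdint.h>
    #include <stdbool.h>

    #define WINDOW_SIZE 64

    struct replay_window {
        uint32_t top;     /* highest sequence number accepted so far */
        uint64_t bitmap;  /* bit 0 = top, bit 1 = top-1, ..., bit 63 = top-63 */
    };

    /* Returns true if the packet passes the replay check, and records it. */
    bool replay_check_and_update(struct replay_window *w, uint32_t seq)
    {
        if (seq == 0)
            return false;                  /* sequence number 0 is never valid */

        if (seq > w->top) {                /* new highest: slide the window */
            uint32_t shift = seq - w->top;
            w->bitmap = (shift >= WINDOW_SIZE) ? 0 : (w->bitmap << shift);
            w->bitmap |= 1;                /* mark seq itself as received */
            w->top = seq;
            return true;
        }

        uint32_t offset = w->top - seq;
        if (offset >= WINDOW_SIZE)
            return false;                  /* below the bottom edge: drop */

        if (w->bitmap & (1ULL << offset))
            return false;                  /* already seen (replay): drop */

        w->bitmap |= (1ULL << offset);     /* in window, first time: accept */
        return true;
    }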

Related

Start bit and stop bits in serial data transmission confusion

I am a bit confused about how the start and stop bits are differentiated from the actual data bits. For example, say "data", whose binary is 01100100 01100001 01110100 01100001, is being sent from System A to System B as a single packet (because it's less than 64 kibibytes), bit by bit. Please let me know how the start and stop bits are added to these data bits. There were two related threads on Stack Overflow with only one answer; it was not accepted and is very confusing. Can someone explain it in simple terms, please? Thank you
When you want to send data over a serial line, you need to synchronize the transmitter and receiver. The start bit simply marks the beginning of the data chunk (typically one byte, with or without a parity bit), and the stop bit marks its end.
In the beginning, there's no data being transmitted; the line idles at '1' for some time. The receiver waits for the start bit, which is a '0' (the stop bit, by contrast, is a '1', the same level as idle). When the start bit arrives, the receiver starts an internal timer and on every tick reads the value from the line, until all data and parity bits are read. Then it waits for the stop bit, after which it begins waiting for a new start bit.
Without the start bit, the receiver would not know when to start reading data bits. Imagine sending a 0xFF byte without parity: with no start bit's falling edge, the line would just stay in the '1' state the whole time, indistinguishable from idle.
The stop bit is not strictly necessary; it's there to enhance reliability by guaranteeing the line returns to the idle level before the next start bit (both receiver and transmitter must also use the same frequency).
So, the start and stop bits don't need to be distinguished from data bits. Quite the opposite: they allow the receiver to properly identify the data bits.
When sending your data, you would take it byte by byte, and for each byte you would send the start bit ('0') first, then the individual data bits, then maybe a parity bit, and then a '1', the stop bit, everything at a given frequency. Then you would wait at least one timer tick.
Usually you don't need to do any of this yourself, because there are specialized UART chips on the board for it. You just provide your data in a buffer and wait until it has been sent, or you wait for data to be received.
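As an illustration, here is a minimal bit-banged transmit sketch in C for 8N1 framing (idle high, one '0' start bit, eight data bits LSB first, one '1' stop bit). set_line() and bit_delay() are hypothetical board-specific helpers, not a real API:

    /* Hypothetical board-specific helpers, not a real API. */
    void set_line(int level);   /* drive the TX line high (1) or low (0) */
    void bit_delay(void);       /* wait one bit period, e.g. 104 us at 9600 baud */

    /* Transmit one byte with 8N1 framing: one '0' start bit, eight data bits
     * least-significant first, one '1' stop bit; the line idles at '1'. */
    void uart_send_byte(unsigned char byte)
    {
        set_line(0);                    /* start bit: the falling edge marks the frame */
        bit_delay();

        for (int i = 0; i < 8; i++) {   /* data bits, LSB first */
            set_line((byte >> i) & 1);
            bit_delay();
        }

        set_line(1);                    /* stop bit: back to the idle level */
        bit_delay();
    }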

PIC SPI configuration questions

I have some questions about the SPIxCON registers of the SPI module. I am using a PIC18F26K83.
1) There is the SPIxTCNTH: SPI TRANSFER COUNTER MSB REGISTER. I can set its first 3 bits, which count the bits to be transmitted, and according to the datasheet they are writable. If the register counts the bits that will be transmitted, why is it writable? Do I need to write it according to the bits I will send, or is it only there to inform the user?
2) There is the SPIxTWIDTH: SPI TRANSFER WIDTH REGISTER. In case of BMODE=1, it holds the
Size (in bits) of each transfer counted by the transfer counter
I will be sending values such as 1.1 or 2.3 to a DAC. In this case, what should I set it to? Is there a standard value for this register?
3) I couldn't work out what the FIFO registers are for; according to the datasheet we cannot control them in software. Isn't a FIFO like a buffer? So if I write to the transmit register faster than the transmission speed, the transmit data will be put into the FIFO and transmitted one by one. Am I correct? Do I need to do anything other than write to the transmit buffer?
4) I read about the polarity bits in SPIxCON1 but could not understand them. Is it okay if I do not touch these bits in the control register? I do not want to mess things up.
5) How can I select slaves? There is an SSET (slave select enable) bit in the SPIxCON2 register. I can set it to 1, but then how do I select the slave?
Thank you for your answers. I am a newbie, so sorry for the simple and maybe nonsensical questions. Alternatively, I could simply show my configuration code, but I believe it would be harder to analyse.
1) The transfer counter (when in use) is written with the number of bytes, or partial bytes, to send or receive (depending on the mode). So, if you are using it (BMODE=0 or TXR=0), you'd set it to the number of bytes that you expect to send or receive.
2) You'd need to look at the binary representation of those numbers to see how many bits you'd be sending in each case. The standard value is a full byte.
3) The FIFOs are hidden elements; writing to the SPIxTXB register or reading from the SPIxRXB register accesses the respective FIFO. The FIFOs are only two bytes deep, so you'd still need to check for overrun if you are sending fast (the TXWE bit, IIRC). If you have lots of data to transfer quickly, I'd recommend using DMA for the transfer: you'd just set it up, let it go, and can do other things until it is finished.
4) I think the polarity bits just set the line level during the idle state to either high or low. They should be set the same for everyone (masters and slaves).
5) If you only have one slave, you can tie the slave-select line to the slave's enable line. If you have more than one slave, you'll need to set up a GPIO line for each, OR each one together with the slave-select signal, and attach the OR output to that slave's enable (if it is active low, which it usually is). Make sure only one slave is active at a time. A daisy chain of slaves can be done as well, but I haven't worked with that kind of setup. See the sketch below for the simple per-slave GPIO approach.
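For point 5, here is a rough sketch of manual slave selection with per-slave GPIO chip-select lines. gpio_write() and spi_transfer_byte() are hypothetical helpers standing in for the PIC's LATx register writes and SPIxTXB/SPIxRXB accesses; chip select is assumed active low:

    /* Hypothetical helpers: gpio_write() stands in for a LATx register write,
     * spi_transfer_byte() for a write to SPIxTXB followed by a read of SPIxRXB. */
    void gpio_write(int pin, int level);
    unsigned char spi_transfer_byte(unsigned char out);

    #define CS_DAC     0   /* GPIO pin wired to the DAC's chip-select   */
    #define CS_SENSOR  1   /* GPIO pin wired to another slave's select  */

    /* Exchange one byte with the slave behind cs_pin (chip select active low). */
    unsigned char spi_exchange_with(int cs_pin, unsigned char out)
    {
        gpio_write(cs_pin, 0);              /* assert: only this slave listens */
        unsigned char in = spi_transfer_byte(out);
        gpio_write(cs_pin, 1);              /* deassert: release the slave */
        return in;
    }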

An algorithm to determine if a number belongs to a group or not

I'm not even sure if this is possible but I think it's worth asking anyway.
Say we have 100 devices in a network. Each device has a unique ID.
I want to tell a group of these devices to do something by broadcasting only one packet (A packet that all the devices receive).
For example, if I wanted to tell devices 2, 5, 75, 116 and 530 to do something, I would have to broadcast this: 2-5-75-116-530
But this packet can get pretty long if I wanted, for example, 95 of the devices to do something!
So I need a method to reduce the length of this packet.
After thinking for a while, I came up with an idea:
What if I used only prime numbers as device IDs? Then I could send the product of the device IDs of the group I need as the packet, and every device would check whether the remainder of the received number divided by its device ID is 0.
For example, if I wanted devices 2, 3, 5 and 7 to do something, I would broadcast 2*3*5*7 = 210, and then each device would calculate "210 mod own ID"; only the devices with IDs 2, 3, 5 and 7 would get 0, so they would know that they should do something.
But this method is not efficient, because the 100th prime number is 541, so the broadcast number may get really big and the "mod" calculation may get really hard (the devices have 8-bit processors).
So I just need a method for the devices to determine if they should do something or ignore the received packet. And I need the packet to be as short as possible.
I tried my best to explain the question. If it's still vague, please tell me and I'll explain more.
You can just use a bit string in which every bit represents a device. Then, you just need a bitwise AND to tell if a given machine should react.
You'd need one bit per device, which would be, for example, 32 bytes for 256 devices. Admittedly, that's a little wasteful if you only need one machine to react, but it's pretty compact if you need, say, 95 devices to respond.
You mentioned that you need the device ID to be <= 4 bytes, but that's no problem: 4 bytes = 32 bits = enough space to store 2^32 device IDs. For example, the device ID for the 101st machine (if you start at 0) could just be 100 (0b01100100) = 1 byte. You would just use that to figure out which byte of the packet to check (100 / 8 = 12, i.e. the 13th byte) and bitwise AND that byte against a mask with bit 100 % 8 = 4 set, i.e. 1 << 4 = 0b00010000.
As cobarzan said, you can also use a hybrid scheme allowing for individual addressing: use the first bit as a flag indicating multiple- or single-machine addressing. That requires more processing, and it means the first byte can only store 7 machine signals rather than 8.
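Here is a minimal sketch (in C) of the check each device would run on the broadcast payload; MY_ID and the names are illustrative, and IDs are assumed to start at 0:

    #include <stdint.h>
    #include <stdbool.h>

    #define MY_ID 100   /* this device's label, starting from 0 */

    /* Returns true if this device's bit is set in the broadcast bit string. */
    bool addressed_to_me(const uint8_t *packet, unsigned len)
    {
        unsigned byte_index = MY_ID / 8;               /* 100 / 8 = 12, the 13th byte */
        uint8_t  mask = (uint8_t)(1u << (MY_ID % 8));  /* 1 << 4 = 0b00010000 */

        if (byte_index >= len)
            return false;          /* packet too short, so not addressed to us */

        return (packet[byte_index] & mask) != 0;
    }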
Like Ed Cottrell suggested, a bit string would do the job. If the machines are labeled {1,...,n}, there are 2^n - 1 possible subsets (assuming you do not send requests with no intended target). So you need a data structure able to hold every possible signature of such a subset, whatever you decide the signature to be, and n bits (one for each machine) is the best one can do regarding the size of such a data structure. The evaluation performed on the machines takes constant time (the machine with label l just looks at the l-th bit).
But one could go for a hybrid scheme. Say you have a task for one device only; then it would be a pity to send n bits (all 0s, except one). So you can add one extra bit T which indicates the type of the packet: T = 0 means you are sending a bit string of length n as described above, T = 1 means you are using a more compact scheme. In the case of just one machine that needs to perform the task, you could send the label of the machine directly (which is O(log n) bits long). This approach reduces the size of the packet whenever fewer than O(n/log n) machines need to perform the task. Evaluation on the machines is more expensive, though.
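A decoding routine for the hybrid scheme might look like the following sketch; the exact wire layout (T in the top bit of byte 0, a 15-bit label when T = 1, the bit string starting at byte 1 when T = 0) is my assumption, since the answer leaves it open:

    #include <stdint.h>
    #include <stdbool.h>

    #define MY_ID 100

    bool hybrid_addressed_to_me(const uint8_t *packet, unsigned len)
    {
        if (len < 2)
            return false;

        if (packet[0] & 0x80) {    /* T = 1: single machine, label sent directly */
            uint16_t label = (uint16_t)(((packet[0] & 0x7F) << 8) | packet[1]);
            return label == MY_ID;
        }

        /* T = 0: full bit string; assume it starts at byte 1 for simplicity */
        unsigned byte_index = 1 + MY_ID / 8;
        if (byte_index >= len)
            return false;
        return (packet[byte_index] & (1u << (MY_ID % 8))) != 0;
    }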

Go-Back-N window size

Why, in TCP's Go-Back-N algorithm, does the window size (N) have to be smaller than the sequence number space (S), i.e. S > N? I tried figuring it out myself but don't quite get it.
Assume the sequence space is four (sequence numbers 0, 1, 2, 3) and the window size is also 4. The sender sends 4 packets with sequence numbers 0, 1, 2, 3. The receiver receives all four packets, so it sends 4 acknowledgements (0, 1, 2, 3). Now assume all the acknowledgements are lost. The sender will resend all four packets, but the receiver will assume they are new ones. To avoid the confusion arising from lost acknowledgements, we keep N < S.
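As a rough illustration, here is a tiny C program (names and layout are mine, not from the answer) that prints the sequence numbers seen on the wire in this scenario; with N == S, the retransmitted window and the next window of genuinely new packets are identical, so the receiver cannot tell them apart:

    #include <stdio.h>

    /* With sequence space S and window size N == S, a retransmitted window
     * carries exactly the same sequence numbers as the next new window. */
    int main(void)
    {
        const int S = 4, N = 4;

        printf("first window:    ");
        for (int i = 0; i < N; i++)
            printf("%d ", i % S);

        printf("\nretransmission:  ");            /* all ACKs were lost */
        for (int i = 0; i < N; i++)
            printf("%d ", i % S);

        printf("\nnext new window: ");            /* genuinely new data */
        for (int i = N; i < 2 * N; i++)
            printf("%d ", i % S);

        printf("\n");                             /* all three rows: 0 1 2 3 */
        return 0;
    }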

one two-directed tcp socket OR two one-directed? (linux, high volume, low latency)

I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2 Gbit+). The OS is Linux. Is it faster to use 1 TCP socket (for both send and recv) or 2 unidirectional TCP sockets?
The test for this task is very much like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style).
The potential benefit of 2 unidirectional connections comes from Linux:
http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847
    /*
     * TCP receive function for the ESTABLISHED state.
     *
     * It is split into a fast path and a slow path. The fast path is
     * disabled when:
     * ...
     * - Data is sent in both directions. Fast path only supports pure senders
     *   or pure receivers (this means either the sequence number or the ack
     *   value must stay constant)
     * ...
     *
     * When these conditions are not satisfied it drops into a standard
     * receive procedure patterned after RFC793 to handle all cases.
     * The first three cases are guaranteed by proper pred_flags setting,
     * the rest is checked inline. Fast processing is turned on in
     * tcp_data_queue when everything is OK.
     */
All the other conditions for disabling the fast path are false in my case; only a socket that is not unidirectional stops the kernel from taking the fast path on receive.
There are too many variables for a single answer to always hold here. Unless you have a very very fast network link - probably > 1 GBit/sec on modern hardware - the fastpath/slowpath stuff you linked to probably doesn't matter.
Just in case, you can write your program to work either way: store a read socket and a write socket, and at connect() time either assign them the same socket or two different sockets. Then you can just try it both ways and see which is faster.
It's highly likely you won't notice any difference between the two.
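If it helps, here is a sketch of the "either way" structure from the previous answer, in C. connect_to_peer() is a hypothetical helper that returns a connected TCP socket; the point is only that the rest of the code always talks to readfd/writefd and never cares whether they are one socket or two:

    /* Hypothetical helper: opens a TCP connection and returns the socket fd. */
    int connect_to_peer(const char *host, int port);

    struct channel {
        int readfd;   /* used only for recv() */
        int writefd;  /* used only for send() */
    };

    /* One bidirectional connection: both descriptors refer to the same socket. */
    void channel_open_single(struct channel *ch, const char *host, int port)
    {
        int fd = connect_to_peer(host, port);
        ch->readfd = fd;
        ch->writefd = fd;
    }

    /* Two unidirectional connections: one socket per direction, so each one
     * stays a "pure sender" or "pure receiver" from the kernel's viewpoint. */
    void channel_open_dual(struct channel *ch, const char *host, int port)
    {
        ch->readfd = connect_to_peer(host, port);
        ch->writefd = connect_to_peer(host, port);
    }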
I know this doesn't directly answer your question, but I would suggest taking a look at something like ZeroMQ. There is an introductory article about it on LWN: 0MQ: A new approach to messaging.
I haven't gotten around to trying it out yet, but I've read up on it a bit and it looks like it might be what you're looking for. Why reinvent the wheel?
