Why are there 65520 DNP3 source addresses instead of 65536? - scada

DNP3 link-layer source and destination addresses are 16 bits each, which allows 2^16 = 65536 distinct values. According to the official DNP3 docs, there are 65536 destination addresses, which I understand. But there are only 65520 source addresses. Why is that? What are the remaining 16 addresses for?
For background on the above, any DNP3 documentation will do, or this link also works: https://www.ixiacom.com/company/blog/scada-distributed-network-protocol-dnp3

I'm not familiar with DNP3, but I found a specification for a DNP3 link layer protocol implementation at https://library.e.abb.com/public/06e4e2267fd04c3884515a0360210068/1MRK511380-UUS_-_en_Point_list_manual__DNP_650_series_2.1.pdf. See page 36:
1.4.1 Data Link Address: Indicates if the link address is configurable over the entire valid range of 0 to 65,519. Data link addresses 0xFFF0 through
0xFFFF are reserved for broadcast or other special purposes.
While the source doesn't indicate what these 16 addresses are reserved for (possibly as a precaution for future needs), it does indicate that they are reserved.
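Just to make the arithmetic concrete, here is a tiny helper of my own (not taken from the DNP3 spec text) reflecting the split the quote describes:

#include <stdbool.h>
#include <stdint.h>

/* 0x0000..0xFFEF = 65,520 assignable station addresses;
 * 0xFFF0..0xFFFF = the 16 reserved/broadcast values from the quote above. */
#define DNP3_RESERVED_FIRST 0xFFF0u

static bool dnp3_is_assignable_source(uint16_t addr) {
    return addr < DNP3_RESERVED_FIRST;
}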

Related

Why does the GDT have 3 entries for TLS?

I am reading the "Understanding the Linux Kernel" book, and would like to understand the TLS entries in the GDT.
From the book, I take it that there are 3 entries in each processor's GDT that indicate the processor's current TLS segments.
What I am trying to understand is:
Why are there 3 TLS entries in the GDT table?
I have read that the segment selector %fs is commonly used to refer to a thread-local storage address, in the form (%fs:address). When does the fs register get populated with the TLS entry from the GDT?
Does the processor have a non-programmable register similar to the tr register but for the TLS entry?
I would expect a TLS entry to be modified in a processor's GDT at a process context switch, followed by storing the index of the TLS entry in the %fs register. But based on this description, a processor would only require 1 TLS entry in the GDT.
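For concreteness, here is how I understand user space asks the kernel to fill one of those GDT TLS slots: a minimal sketch, assuming 32-bit x86 Linux and the set_thread_area syscall; tls_var and the printed message are just illustrative, and the selector-loading detail in the comment is my assumption about what a threading library does, not something from the book.

#include <asm/ldt.h>       /* struct user_desc */
#include <sys/syscall.h>   /* SYS_set_thread_area */
#include <unistd.h>        /* syscall() */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    static int tls_var;    /* illustrative variable to expose through a TLS segment */

    struct user_desc desc = {
        .entry_number   = (unsigned int)-1,             /* ask the kernel to pick a free TLS slot */
        .base_addr      = (unsigned int)(uintptr_t)&tls_var,
        .limit          = 0xfffff,
        .seg_32bit      = 1,
        .limit_in_pages = 1,
        .useable        = 1,
    };

    if (syscall(SYS_set_thread_area, &desc) == -1) {
        perror("set_thread_area");
        return 1;
    }

    /* desc.entry_number now holds the GDT index the kernel assigned; a
     * threading library would load the matching selector into a segment
     * register (%gs on x86-32). */
    printf("TLS slot in GDT: %u\n", desc.entry_number);
    return 0;
}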

How does the Linux kernel find the right offset to parse IP packets?

I've found the code that parses IPv4 packets in the kernel source tree. This function, ip_rcv, can detect with a high degree of certainty whether a packet is correct, as outlined in one of its comments:
Length at least the size of an ip header
Version of 4
Checksums correctly. [Speed optimisation for later, skip loopback checksums]
Doesn't have a bogus length
Malformed packets are simply dropped. This function seems to get a bunch of bytes that should resemble an IP packet, but what if some malicious actor were to sneak an extra byte onto the line? If not handled correctly, every chunk of bytes that ip_rcv receives from then on would start 1 byte off, and no correct IP packet could be reconstructed anymore. I assume the kernel does something smarter than trying every byte offset at which an IP packet might start, but I'm unable to find what exactly. Could anyone shed some light on this?
I haven't taken the time to look at the kernel code, but most protocol stacks work by parsing data immediately following the previous layer's header, not by searching the data for it.
In the case of Ethernet, an Ethernet frame header is typically 14 bytes in size. It can vary, but the header itself indicates this in the etherType field when necessary. In this example, the NIC (Network Interface Card) will receive an Ethernet frame. If the frame is destined for this NIC, then the data passed from the NIC driver to the IP stack will be an Ethernet frame containing this 14-byte header followed immediately by the IP header (the first nibble will be 4 if it is a version 4 IP header, for instance).
Again, I didn't look at the network stack code but there are two common cases here:
1) The IP stack is told this is an Ethernet frame and only needs to parse the Ethernet frame header for its length; the very next byte must then be the start of an IP header, or the data is deemed not an IP frame.
2) The IP stack is given a pointer to the beginning of the data immediately following the Ethernet frame header and the IP stack then starts parsing at that location.
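To make case 2 concrete, here is a rough sketch (my own illustrative helper, not kernel code) of parsing by fixed offset rather than by searching, assuming an untagged Ethernet II frame:

#include <arpa/inet.h>   /* htons */
#include <stddef.h>
#include <stdint.h>

struct eth_hdr {
    uint8_t  dst[6];
    uint8_t  src[6];
    uint16_t ethertype;                       /* network byte order */
} __attribute__((packed));

static const uint8_t *find_ipv4_header(const uint8_t *frame, size_t len) {
    if (len < sizeof(struct eth_hdr) + 20)
        return NULL;                          /* too short for Ethernet + minimal IP header */

    const struct eth_hdr *eth = (const struct eth_hdr *)frame;
    if (eth->ethertype != htons(0x0800))      /* not IPv4: hand off or drop, don't scan */
        return NULL;

    const uint8_t *ip = frame + sizeof(struct eth_hdr);
    if ((ip[0] >> 4) != 4)                    /* version nibble must be 4 */
        return NULL;                          /* malformed: drop the whole frame */
    return ip;
}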

Mac changes IP total length field

I'm programming with sockets in Mac 10.6.8. Whenever I receive a packet, it starts with the IP header. I've been using Wireshark to analyze incoming packets, and I noticed that my machine's socket implementation will consistently change the "total length" field in the IP header. Specifically, it will subtract the IP header length and reverse the bytes (from network to host order).
For example, here's the beginning of an IP header as reported by Wireshark:
45 c0 00 38 ...
That breaks down as follows:
4 bits (0x4): IP version: 4
4 bits (0x5): IP header length: 5 words (20 bytes)
8 bits (0xc0): differentiated services flags
16 bits (0x0038): total length: 56 bytes
However, when I print the contents of the buffer filled by recvfrom for the same packet, I get a different lede:
ssize_t recvbytes = recvfrom(sock->fd, buffer, size, /*flags=*/0,
                             (struct sockaddr*)src, &src_len);
and the buffer begins with:
45 c0 24 00 ...
4 bits (0x4): IP version: 4
4 bits (0x5): IP header length: 5 words (20 bytes)
8 bits (0xc0): differentiated services flags
16 bits (0x2400): total length: 9216 bytes (?!)
I figured out that before I get access to the buffer, the socket implementation is reading the total length, subtracting the IP header length, and then writing it back in host order (little endian on my machine) rather than network order (big endian). In this example, that means:
read the total length: 0x0038 = 56
subtract the header length: 56 - 20 = 36
write back in host order: 36 = 0x0024 (big endian) = 0x2400 (little endian = host order on my machine)
The problem gets worse. It won't just change the total length of the outermost IP header. It will also change the total length fields of internal IP headers, e.g., the one buried in an ICMP "time exceeded" message (which must include the original IP header of the dropped packet). Funnier still, it won't subtract the IP header length from the internal headers; it just reverses the byte order.
Is this happening to anyone else? Is it part of a standard I'm unaware of? Is there a way to fix my machine's socket implementation to stop tampering with packets? How is Wireshark able to get around this problem?
Thanks in advance for your consideration.
EDIT: My code and Makefile are available on GitHub. I wrote a fixip_osx function to allow verifying IP checksums:
https://github.com/thejohnfreeman/netutils/blob/master/lib/ip.c
#include <netinet/ip.h>   /* struct ip */
#include <arpa/inet.h>    /* htons */
#include <stdint.h>

typedef uint16_t u16_t;

void fixip_osx(struct ip* ip) {
    /* Something on my Mac subtracts the header length from `ip_len` and stores
     * it in host order (little endian). */
    u16_t ip_hdrlen = ip->ip_hl << 2;
    u16_t ip_totlen = ip->ip_len + ip_hdrlen;
    ip->ip_len = htons(ip_totlen);
}
However, it's still a problem for verifying ICMP checksums when the payload contains another IP header.
The problem exists whether I compile with Clang 3.2 (built from trunk) or GCC 4.7 (MacPorts port), so I assume the problem lies in either the sockets implementation (packaged with Mac OS) or in Mac OS X itself.
The BSD suite of platforms (excluding OpenBSD) present the IP offset and length in host byte order. All other platforms present in the received network byte order. This is a "feature", and is referenced in the man page for IP(4) - Internet Protocol (FreeBSD, OS X).
The ip_len and ip_off fields must be provided in host byte order.
All other fields must be provided in network byte order.
On FreeBSD/NetBSD, the IP length can also equal the packet length minus the IP header length.
Reference: Stevens/Fenner/Rudolph, Unix Network Programming Vol.1, p.739
I have to deal with these anomalies with a user space implementation of the PGM network protocol, specific code:
https://code.google.com/p/openpgm/source/browse/trunk/openpgm/pgm/packet_parse.c#76
It's actually quite annoying to detect with Autoconf; I think all packages have this hard-coded on a per-platform basis. I've seen a bug report (header byte order config options detected incorrectly) raised this week on this very issue.
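For what it's worth, the per-platform fix-up usually ends up looking something like the sketch below. This is my own rough guess at the conditional, mirroring the man page quote and the poster's observation on OS X; it is not a verified feature test for every BSD:

#include <netinet/ip.h>   /* struct ip */
#include <arpa/inet.h>    /* htons */

static void normalize_ip_header(struct ip *ip) {
#if defined(__APPLE__)
    /* OS X hands back ip_len in host order with the header length already
     * subtracted, and ip_off in host order: undo both. */
    ip->ip_len = htons(ip->ip_len + (ip->ip_hl << 2));
    ip->ip_off = htons(ip->ip_off);
#else
    (void)ip;   /* Linux and OpenBSD already deliver network byte order */
#endif
}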
It is very unlikely that Mac itself is doing that. That would fundamentally break the IP protocol if it were. More likely, whatever is capturing the packets and delivering them to recvfrom() (presumably you are doing a promiscuous network capture, right?) is transforming the data after the Mac is done processing it. Wireshark operates on a lower level and has access to the actual network data.

How to add persistent IPv6 address in Vista/Windows7?

I want to add a persistent IPv6 address using just API calls or Registry edits. I have implemented code that uses the CreateUnicastIpAddressEntry API to add the IPv6 address successfully, but the address is destroyed when the adapter is reset or the machine is rebooted (as mentioned in the MSDN docs).
With IPv4, it was easy to do: just use the AddIPAddress API combined with registry entries to get the desired result.
I have tried to find an entry in the Windows Registry that is used to save the IPv6 address, without any success. The MSDN docs suggest using netsh.exe for the task, but I am quite sure netsh.exe is making some API call or Registry entry to achieve this (which is not documented by Microsoft anywhere).
How can this be achieved?
Well, after some reverse engineering of netsh.exe and detailed analysis, I think there is sufficient info to create a persistent IPv6 address.
The IPv6 (UNICAST) address is stored in the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nsi\{eb004a01-9b1a-11d4-9123-0050047759bc}\10
For every IPv6 address to be added, create a REG_BINARY value whose name is the adapter's NET_LUID concatenated with the IPv6 address in full. For example, if the IPv6 address is 2001::1, the name of the value will be 000000090000060020010000000000000000000000000001, where the first 16 characters are the NET_LUID of the network adapter and the remaining characters are the IPv6 address in full.
The registry value data is a 48-byte structure, given below:
typedef struct _UNKNOWN {
    ULONG            ValidLifetime;
    ULONG            PreferredLifetime;
    NL_PREFIX_ORIGIN PrefixOrigin;
    NL_SUFFIX_ORIGIN SuffixOrigin;
    UINT8            OnLinkPrefixLength;
    BOOLEAN          SkipAsSource;
    UCHAR            Unknown[28];
} UNKNOWN;
The last 28 bytes of this structure are unknown and must be initialized to 0xFF.
Refer to the MIB_UNICASTIPADDRESS_ROW structure info on MSDN for more info on the UNKNOWN structure members.
While doing this, I also figured out that IPv6 ANYCAST addresses are stored similarly, in the registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nsi\{eb004a01-9b1a-11d4-9123-0050047759bc}\8
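As a rough illustration of putting the above together: the key path, value-name layout, and the UNKNOWN struct come from the reverse engineering described above; the helper name write_persistent_v6, the hex case of the value name, and the assumption that the struct pads out to 48 bytes are mine, and none of this is a documented Microsoft interface.

#include <winsock2.h>
#include <ws2ipdef.h>   /* NL_PREFIX_ORIGIN / NL_SUFFIX_ORIGIN for the UNKNOWN struct above */
#include <windows.h>
#include <stdio.h>

/* luid_hex is the adapter's NET_LUID as 16 hex characters; addr is the
 * expanded 16-byte IPv6 address; row is the UNKNOWN struct defined above. */
static LONG write_persistent_v6(const char *luid_hex,
                                const unsigned char addr[16],
                                const UNKNOWN *row) {
    char name[64];
    int n = snprintf(name, sizeof(name), "%s", luid_hex);
    for (int i = 0; i < 16; i++)
        n += snprintf(name + n, sizeof(name) - n, "%02X", addr[i]);

    HKEY key;
    LONG rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Nsi\\"
        "{eb004a01-9b1a-11d4-9123-0050047759bc}\\10",
        0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS)
        return rc;

    rc = RegSetValueExA(key, name, 0, REG_BINARY,
                        (const BYTE *)row, sizeof(*row));   /* 48 bytes incl. padding */
    RegCloseKey(key);
    return rc;
}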

Is this a valid IPv6 address, 74DC::02BA?

Is 74DC::02BA a valid IPv6 address?
I am trying to break it down, but the various shortcuts are confusing me.
Valid address, yes. See this question. Also, this validator breaks it down nicely.
Correct address, probably not. See RFC 4291, section 2.4, where this address is defined as a Global Unicast address (the first bits are 0111 0100, which falls under "everything else" in the table). Then see the IPv6 address assignments. You'll notice this address range has not been assigned for use.
Normally you wouldn't see an address written like this, since it contains extra information (the leading 0 in the second group of digits). So you would probably see it written as 74dc::2ba. (The IETF makes recommendations about how to print IPv6 addresses in RFC 5952.)
If you want to know the rules for IPv6 address shortening, they are specified in RFC 4291, section 2.2.
Here's what I believe to be the best online IPv6 validator (and not just because I wrote it). It will show you the various address forms and show you how the different representations relate to each other (try hovering over each address group).
The "::" means there's all 0s in between the colons. The address expands to 74dc:0000:0000:0000:0000:0000:0000:02ba
IPv6 Address Validator
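If you'd rather check it programmatically than with a web validator, a quick sketch like this works: inet_pton accepts the shortened form, and inet_ntop prints the recommended (RFC 5952 style) representation.

#include <arpa/inet.h>
#include <stdio.h>

int main(void) {
    struct in6_addr addr;
    char canonical[INET6_ADDRSTRLEN];

    if (inet_pton(AF_INET6, "74DC::02BA", &addr) != 1) {
        puts("not a valid IPv6 address");
        return 1;
    }
    inet_ntop(AF_INET6, &addr, canonical, sizeof(canonical));
    printf("valid, canonical form: %s\n", canonical);   /* prints 74dc::2ba */
    return 0;
}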
