Is it possible to create a DoS attack in IPv6 by using ICMPv6 Packet Too Big messages?
For instance, say you want to deny access to some destination by spoofing an ICMPv6 Packet Too Big message and setting the reported MTU to 68 octets (the minimum for IPv4), to throttle any traffic that a particular node receives. Would this kind of attack be possible?
RFC 1981 says:
A node MUST NOT reduce its estimate of the Path MTU below the IPv6 minimum link MTU. Note: A node may receive a Packet Too Big message reporting a next-hop MTU that is less than the IPv6 minimum link MTU. In that case, the node is not required to reduce the size of subsequent packets sent on the path to less than the IPv6 minimum link MTU, but rather must include a Fragment header in those packets [IPv6-SPEC].
This case would normally only arise when there is IPv6-to-IPv4 translation on the path and an IPv4 link has an MTU smaller than 1280. If my understanding is correct, though, then with such a translator (or tunnel) along the path we could significantly slow down the traffic, since the translating node would have to fragment it.
However, this didn't quite make sense to me, since IPv6 routers don't fragment packets in transit; only the sending node can fragment, using the Fragment header.
Yes, that's possible. But you'll find that people have thought about it before, so it's not as easy as you might think. You'll generally need to impersonate a router close to the victim, and an ingress filter will prevent you from doing that. There are a few more hurdles.
But you can still attack other hosts on your own Ethernet segment.
How can I go about programmatically pinging all addresses on an IPv6 network through Windows?
My address is fe80::1881:1fc2:a153:71f0%3(Preferred).
I have done this via IPv4 with no issue, but I'm having a hard time understanding how to do this to build my ARP table for IPv6.
How can I go about programmatically pinging all addresses on an IPv6 network through Windows?
If you try to ping every one of the possible 18,446,744,073,709,551,616 addresses on a standard /64 IPv6 network, at 1,000,000 addresses per second, it will take you over 584,542 years. You simply cannot try to ping every host on an IPv6 network.
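For what it's worth, that figure is easy to sanity-check with a quick back-of-the-envelope Python calculation (assuming a constant one million probes per second):

    addresses = 2 ** 64                     # hosts in a /64: 18,446,744,073,709,551,616
    rate = 1_000_000                        # probes per second
    seconds = addresses / rate
    years = seconds / (365.25 * 24 * 3600)  # Julian years
    print(f"{years:,.0f} years")            # prints roughly 584,542 years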
...having a hard time understanding how to do this to build my ARP table for IPv6.
IPv6 doesn't use ARP; it uses ND (Neighbor Discovery). IPv6 ND maintains a few tables, among them the Neighbor Cache and the Destination Cache.
RFC 4861, Neighbor Discovery for IP version 6 (IPv6), explains the host data structures for IPv6 ND.
5.1. Conceptual Data Structures

Hosts will need to maintain the following pieces of information for each interface:

Neighbor Cache
A set of entries about individual neighbors to which traffic has been sent recently. Entries are keyed on the neighbor's on-link unicast IP address and contain such information as its link-layer address, a flag indicating whether the neighbor is a router or a host (called IsRouter in this document), a pointer to any queued packets waiting for address resolution to complete, etc. A Neighbor Cache entry also contains information used by the Neighbor Unreachability Detection algorithm, including the reachability state, the number of unanswered probes, and the time the next Neighbor Unreachability Detection event is scheduled to take place.

Destination Cache
A set of entries about destinations to which traffic has been sent recently. The Destination Cache includes both on-link and off-link destinations and provides a level of indirection into the Neighbor Cache; the Destination Cache maps a destination IP address to the IP address of the next-hop neighbor. This cache is updated with information learned from Redirect messages. Implementations may find it convenient to store additional information not directly related to Neighbor Discovery in Destination Cache entries, such as the Path MTU (PMTU) and round-trip timers maintained by transport protocols.

Prefix List
A list of the prefixes that define a set of addresses that are on-link. Prefix List entries are created from information received in Router Advertisements. Each entry has an associated invalidation timer value (extracted from the advertisement) used to expire prefixes when they become invalid. A special "infinity" timer value specifies that a prefix remains valid forever, unless a new (finite) value is received in a subsequent advertisement. The link-local prefix is considered to be on the prefix list with an infinite invalidation timer regardless of whether routers are advertising a prefix for it. Received Router Advertisements SHOULD NOT modify the invalidation timer for the link-local prefix.

Default Router List
A list of routers to which packets may be sent. Router list entries point to entries in the Neighbor Cache; the algorithm for selecting a default router favors routers known to be reachable over those whose reachability is suspect. Each entry also has an associated invalidation timer value (extracted from Router Advertisements) used to delete entries that are no longer advertised.
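To make those conceptual data structures a little more concrete, here is a rough Python sketch of what a Neighbor Cache entry and a Destination Cache entry hold. The field names loosely follow RFC 4861, but this is purely illustrative and not how any particular stack stores them:

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Optional

    class ReachState(Enum):
        # Neighbor Unreachability Detection states from RFC 4861
        INCOMPLETE = 1
        REACHABLE = 2
        STALE = 3
        DELAY = 4
        PROBE = 5

    @dataclass
    class NeighborCacheEntry:
        ip_address: str                              # on-link unicast IPv6 address (the key)
        link_layer_address: Optional[bytes] = None   # e.g. the MAC address, once resolved
        is_router: bool = False                      # the IsRouter flag
        state: ReachState = ReachState.INCOMPLETE
        unanswered_probes: int = 0                   # used by Neighbor Unreachability Detection
        next_nud_event: float = 0.0                  # when the next NUD event is scheduled
        queued_packets: list = field(default_factory=list)  # waiting for address resolution

    @dataclass
    class DestinationCacheEntry:
        destination: str                             # on-link or off-link destination address
        next_hop: str                                # key into the Neighbor Cache
        path_mtu: int = 1280                         # extra per-destination info, e.g. PMTU

On Windows, the closest equivalent of "arp -a" for this table is "netsh interface ipv6 show neighbors", which dumps the Neighbor Cache.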
Given that UDP receivers don't send ACKs, how does a program like iperf measure one-way performance? That is, how can it confirm that the packets actually reached the destination:
within a time frame
intact, and uncorrupted
By contrast, it intuitively seems to me that TCP, where ACKs are sent back to the sender, allows the movement of packets across a network to be benchmarked rigorously and very reliably from the client.
1/ "how can it confirm that the packets actually reached [...] intact, and uncorrupted"
UDP is an unfairly despised protocol, but come on, this is going way too far here! :-)
UDP has a checksum, just like TCP:
https://en.wikipedia.org/wiki/User_Datagram_Protocol#Checksum_computation
2/ "how can it confirm that the packets actually reached [...] within a time frame"
It does not, because that is not what UDP is about (nor TCP, by the way).[*]
As can be seen from its source code here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L55
...what it does, though, is check for out-of-order packets. A "pcount" is set on the sending side and checked on the receiving side here:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L99
...and it also computes a somewhat bogus jitter estimate:
https://github.com/esnet/iperf/blob/master/src/iperf_udp.c#L110
(real life is more complicated than this: you not only have jitter, but also clock drift)
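In other words, the receiver-side bookkeeping amounts to something like the following. This is a simplified Python sketch of the idea, not iperf's actual code: each datagram is assumed to carry a sequence number and a send timestamp, and the receiver tracks loss, reordering, and an RFC 3550-style smoothed jitter estimate.

    import struct
    import time

    jitter = 0.0
    prev_transit = None
    expected_seq = 0
    lost = out_of_order = 0

    def on_datagram(payload: bytes):
        """Assumes the payload starts with: uint32 sequence number, double send time."""
        global jitter, prev_transit, expected_seq, lost, out_of_order
        seq, send_time = struct.unpack("!Id", payload[:12])
        now = time.time()                  # only meaningful if the two clocks are comparable

        if seq < expected_seq:
            out_of_order += 1              # arrived after a higher sequence number
        else:
            lost += seq - expected_seq     # a gap in sequence numbers counts as loss
            expected_seq = seq + 1

        transit = now - send_time
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0  # RFC 3550-style smoothed jitter estimator
        prev_transit = transit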
[*]:
For semi-guaranteed, soft "within a time frame" / real-time protocols at layer 3 and above, look at RTP, RTSP and the like. But neither TCP nor UDP inherently has this.
For a real, serious hard real-time guarantee, you've got to go down to layer 2 protocols such as Ethernet AVB:
https://en.wikipedia.org/wiki/Audio_Video_Bridging
...which were designed precisely because IP and above simply cannot provide hard, guaranteed real-time delivery. Period.
EDIT:
This is another debate, but...
The first thing you need for "within a time frame" is a shared wall clock on the sending and receiving systems (otherwise, how could you tell that a received packet is out of date?).
At layer 3 (IP) and above, NTP's precision target is about 1 ms. It can do better than that on a LAN (but across arbitrary IP networks, you are just taking your chances and hoping for the best).
At layer 2, i.e. on the LAN, PTP (the Precision Time Protocol, IEEE 1588) targets the sub-microsecond range. That's roughly 1000 times more accurate. The same goes for the derived IEEE 802.1AS, "Timing and Synchronization for Time-Sensitive Applications" (gPTP), used in Ethernet AVB.
Conclusion on this sub-topic:
TCP/IP, though very handy and powerful, is not designed to "guarantee delivery within a time frame", whether you use TCP or UDP. Get this idea out of your head.
The obvious way would be to connect to a server that participates in the testing.
The client starts by (for example) connecting to an NTP server to get an accurate time base.
Then the UDP client sends a series of packets to the server. In its payload, each packet contains:
a serial number
a timestamp when it was sent
a CRC
The server then looks these over and notes whether any serial numbers are missing (after some reasonable timeout), and compares the time it received each packet to the time the client sent it. After some period of time, the server sends a reply indicating how many packets it received, the mean and standard deviation of the transmission times, and how many appeared to be corrupted based on the CRCs they contained.
Depending on taste, you might also want to set up a simultaneous TCP connection from the client to the server to coordinate testing of the UDP channel and (possibly) return results.
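A minimal Python sketch of the packet layout and the server-side bookkeeping described above; the field sizes, the payload padding, and the choice of CRC-32 are my own assumptions, purely for illustration:

    import socket
    import statistics
    import struct
    import time
    import zlib

    HEADER = struct.Struct("!Id")            # uint32 serial number, double send timestamp

    def make_packet(serial: int) -> bytes:
        body = HEADER.pack(serial, time.time()) + b"x" * 100   # some payload padding
        return body + struct.pack("!I", zlib.crc32(body))      # trailing CRC-32

    def serve(port: int = 5005, expected: int = 1000, timeout: float = 5.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        sock.settimeout(timeout)             # the "reasonable timeout" after the last packet
        seen, transit, corrupted = set(), [], 0
        try:
            while True:
                data, _ = sock.recvfrom(2048)
                body, crc = data[:-4], struct.unpack("!I", data[-4:])[0]
                if zlib.crc32(body) != crc:
                    corrupted += 1
                    continue
                serial, sent = HEADER.unpack(body[:HEADER.size])
                seen.add(serial)
                transit.append(time.time() - sent)   # needs NTP-synchronized clocks
        except socket.timeout:
            pass
        missing = expected - len(seen)
        print(f"received={len(seen)} missing={missing} corrupted={corrupted}")
        if len(transit) > 1:
            print(f"mean={statistics.mean(transit):.6f}s stdev={statistics.stdev(transit):.6f}s")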
I'm trying to evaluate the performance and efficiency of different protocols. I'm aware that a TCP implementation considers a packet "lost" when either the ACK timeout expires or three duplicate ACKs are received, but that cannot tell me whether the packet was actually lost in the network.
Now I'm capturing packets on both sides of the connection with tcpdump, which gives me two pcap files. Can I determine exactly which packets were lost by comparing the two pcap files? And is this worth trying?
A direct attempt would be to diff the IP packets' IDs and find the unmatched ones, which was also my first try. The problem is that the network adapter splits and combines some packets, so libpcap does not capture the actual IP packets that travel on the wire.
If I don't turn off features such as generic receive offload (GRO) on the NIC, which are on by default, what I capture on the sender side and on the receiver side will not match exactly, so simply diffing the IDs will lead to wrong conclusions. But turning off the features that split and combine packets may hurt transport performance badly, and thus skew my conclusions the other way. So I'm stuck in a dilemma.
A related post about how the GRO and LRO features behave on the NIC:
different tcp packets captured on sender and receiver
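For what it's worth, the "diff the two captures" approach can be prototyped with scapy. Here is a rough sketch that collects the IP IDs of one flow on each side and reports the IDs seen at the sender but never at the receiver; the file names and addresses are placeholders, and the result is only meaningful if GRO/GSO/TSO are disabled, for exactly the reasons above:

    from scapy.all import rdpcap, IP, TCP   # scapy: pip install scapy

    def ip_ids(pcap_file, src, dst):
        """Collect the IP ID of every packet of the given flow in a capture."""
        ids = set()
        for pkt in rdpcap(pcap_file):
            if IP in pkt and TCP in pkt and pkt[IP].src == src and pkt[IP].dst == dst:
                ids.add(pkt[IP].id)
        return ids

    sent = ip_ids("sender.pcap", "10.0.0.1", "10.0.0.2")       # hypothetical files/addresses
    received = ip_ids("receiver.pcap", "10.0.0.1", "10.0.0.2")
    print("IDs captured at the sender but never at the receiver:")
    print(sorted(sent - received))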
I'm using two computers with an application that sends and receives UDP datagrams. There is no flow control, and ICMP is disabled. Frequently, when I send a file as UDP datagrams via the application, two packets arrive with their order swapped and are therefore treated as lost.
I've disabled any kind of firewall, and there is no hardware switch between the computers (they are directly wired).
Is there a way to make sure Winsock and send() deliver the packets in the same order they were sent?
Or is the OS doing that?
Or is some network device configuration needed?
UDP is a lightweight protocol that by design doesn't handle things like packet sequencing. TCP is a better choice if you want robust packet delivery and sequencing.
UDP is generally designed for applications where packet loss is acceptable or preferable to the delay that TCP incurs when it has to retransmit packets. UDP is therefore commonly used for media streaming.
If you're limited to using UDP you would have to develop a method of identifying the out of sequence packets and resequencing them.
UDP does not guarantee that your packets will arrive in order. (It does not even guarantee that your packets will arrive at all.) If you need that level of robustness you are better off with TCP. Alternatively you could add sequence markers to your datagrams and rearrange them at the other end, but why reinvent the wheel?
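If you do go that route, the idea is just to prepend a sequence number on the sending side and hold back early datagrams on the receiving side until the gap is filled. A minimal Python sketch of the logic (the same approach applies with Winsock in C++):

    import struct

    # Sender side: prefix each datagram with a 32-bit sequence number.
    def wrap(seq: int, payload: bytes) -> bytes:
        return struct.pack("!I", seq) + payload

    # Receiver side: release datagrams strictly in order, buffering gaps.
    pending = {}          # seq -> payload, for datagrams that arrived early
    next_seq = 0

    def on_receive(datagram: bytes):
        global next_seq
        seq = struct.unpack("!I", datagram[:4])[0]
        pending[seq] = datagram[4:]
        while next_seq in pending:          # deliver everything that is now contiguous
            deliver(pending.pop(next_seq))
            next_seq += 1

    def deliver(payload: bytes):
        print("in-order payload:", payload)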
Is there a way to make sure Winsock and send() deliver the packets in the same order they were sent?
It's called TCP.
Alternatively, try a reliable UDP protocol such as UDT. I'm guessing you might be on a small embedded platform, so you may want a more compact protocol like Bell Labs' RUDP.
there is no flow control (ICMP disabled)
You can implement your own flow control using UDP:
Send one or more UDP packets
Wait for an acknowledgement (sent as another UDP packet from the receiver to the sender)
Repeat as above
See Sliding window protocol for further details.
[This would be in addition to having a sequence number in the packets which you send.]
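To illustrate the idea, here is a rough Python sketch of the sender side of a stop-and-wait scheme, i.e. a window of one packet; a real sliding-window implementation keeps several packets in flight at once. The packet format and timeouts are my own assumptions:

    import socket
    import struct

    def send_reliably(data_chunks, addr, timeout=0.5, retries=10):
        """Stop-and-wait over UDP: send one numbered chunk, wait for its ACK, repeat."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for seq, chunk in enumerate(data_chunks):
            packet = struct.pack("!I", seq) + chunk
            for _ in range(retries):
                sock.sendto(packet, addr)
                try:
                    ack, _ = sock.recvfrom(16)
                    if struct.unpack("!I", ack[:4])[0] == seq:
                        break                      # receiver acknowledged this chunk
                except socket.timeout:
                    continue                       # no ACK in time: retransmit
            else:
                raise TimeoutError(f"no ACK for chunk {seq}")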
There is no point in trying to create your own TCP-like wrapper. We love the speed of UDP, and that is just going to slow things down. Your problem can be overcome if you design your protocol so that every UDP datagram is independent of the others. Our packets can arrive in any order so long as the header packet arrives first. The header says how many packets are supposed to arrive. Also, UDP has become a lot more reliable since this post was created over a decade ago. Don't try to rebuild TCP on top of it.
This question is 12 years old, and it seems almost a waste to answer it now. Even as the suggestions that I have have already been posed. I dealt with this issue back in 2002, in a program that was using UDP broadcasts to communicate with other running instances on the network. If a packet got lost, it wasn't a big deal. But if I had to send a large packet, greater than 1020 bytes, I broke it up into multiple packets. Each packet contained a header that described what packet number it was, along with a header that told me it was part of a larger overall packet. So, the structure was created, and the payload was simply dropped in the (correct) place in the buffer, and the bytes were subtracted from the overall total that was needed. I knew all the packets had arrived once the needed byte total reached zero. Once all of the packets arrived, that packet got processed. If another advertisement packet came in, then everything that had been building up was thrown away. That told me that one of the fragments didn't make it. But again, this wasn't critical data; the code could live without it. But, I did implement an AdvReplyType in every packet, so that if it was a critical packet, I could reply to the sender with an ADVERTISE_INCOMPLETE_REQUEST_RETRY packet type, and the whole process could start over again.
This whole system was designed for LAN operation, and in all of my debugging/beta testing, I rarely ever lost a packet, but on larger networks I would often get them out of order...but I did get them. Being that it's now 12 years later, and UDP broadcasting seems to be frowned upon by a lot of IT Admins, UDP doesn't seem like a good, solid system any longer. ChrisW mentioned a Sliding Window Protocol; this is sort of what I built...without the sliding part! More of a "Fixed Window Protocol". I just wasted a few more bytes in the header of each of the payload packets to tell how many total bytes are in this Overlapped Packet, which packet this was, and the unique MsgID it belonged to so that I didn't have to get the initial packet telling me how many packets to expect. Naturally, I didn't go as far as implementing RFC 1982, as that seemed like overkill for this. As long as I got one packet, I'd know the total length, unique Message Id, and which packet number this one was, making it pretty easy to malloc() a buffer large enough to hold the entire Message. Then, a little math could tell me where exactly in the Message this packet fits into. Once the Message buffer was filled in...I knew I got the whole message. If a packet arrived that didn't belong to this unique Message ID, then we knew this was a bust, and we likely weren't going to ever get the remainder of the old message.
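The kind of header and reassembly bookkeeping described above might look roughly like this. This is a Python sketch with made-up field widths and a fixed fragment size, purely to illustrate the "a little math tells me where it fits" part:

    import struct

    FRAG_SIZE = 1000                       # assumed fixed payload bytes per fragment
    HEADER = struct.Struct("!IIH")         # msg_id, total_bytes, fragment_index

    def make_fragment(msg_id, total, index, chunk):
        return HEADER.pack(msg_id, total, index) + chunk

    messages = {}   # msg_id -> [bytearray buffer, bytes still missing]

    def on_fragment(datagram):
        msg_id, total, index = HEADER.unpack(datagram[:HEADER.size])
        chunk = datagram[HEADER.size:]
        buf, remaining = messages.setdefault(msg_id, [bytearray(total), total])
        offset = index * FRAG_SIZE                   # where this fragment fits in the buffer
        buf[offset:offset + len(chunk)] = chunk
        messages[msg_id][1] = remaining - len(chunk) # duplicates aren't handled in this sketch
        if messages[msg_id][1] <= 0:                 # the whole message has arrived
            complete = bytes(buf)
            del messages[msg_id]
            return complete
        return None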
The only real reason I mention this today is that I believe there still is a time and a place to use a protocol like this: where TCP actually involves too much overhead, such as on slow or spotty networks, which are also exactly where you have the greatest likelihood of (and fear of) packet loss. So, again, I'd also say that "reliability" cannot be a requirement, or you're just right back to TCP. If I had to write this code today, I probably would have just implemented a multicast system, and the whole process probably would have been a lot easier on me. Maybe. It has been 12 years, and I've probably forgotten a huge portion of the implementation details.
Sorry if I woke a sleeping giant here; that wasn't my intention. The original question intrigued me and reminded me of this turn-of-the-century Windows C++ code I had written. So, please try to keep the negative comments to a minimum, if at all possible! (Positive comments, of course, are always welcome!) J/K, of course, folks.
When receiving a raw Ethernet packet over a wireless connection, where does the Ethernet checksum get calculated, and where are errors handled?
Does the wireless stack handle this, or is it handled in the upper layers?
Checksum handling may be carried out in various places. Recent Ethernet cards offload checksum computation from the network stack. I have had to disable hardware checksumming to make network forensics easier; otherwise the hardware silently drops the bad frames before you ever get to see them.
Usually, the Ethernet level FCS (Frame Check Sequence) is handled in the hardware MAC (Media Access Controller). Note that we are talking about a CRC here and not just a checksum (there isn't a "checksum" at the Ethernet frame level).
If an FCS mismatch is detected, the frame will most probably be discarded at the hardware MAC level, and a statistics counter will be updated.
In other words, it is no use "bothering" the software stack with an unusable frame.
As the other posters have said, the FCS is normally checked by the NIC itself or by the driver. However, in the case where you read raw Ethernet frames, I think it depends completely on the driver. For instance, with WiFi NICs that can be put into "monitor" or "promiscuous" mode, you usually don't want them to discard frames with a bad FCS, since that may signify exactly the error you are looking for.
One data point: the Intel 4965AGN Linux driver sets the FCS field of all captured packets to 0 in monitor mode. If you run Wireshark, you can see that it calculates the expected FCS and complains that the zeroed field is invalid. Whether this means the driver discards frames with a bad FCS in the MAC, or whether those are also passed up, is unfortunately unclear.
So if the original question is "Do I have to check the FCS myself when capturing raw packets?", the answer in the 4965AGN case is "you can't", and it may be "yes" if you do get the real FCS from the NIC.
Most network hardware will allow you to set an option to "store bad packets." This lets you see frames for which the Ethernet CRC failed. If you pass a bad Ethernet frame up to the stack, it will most likely be rejected anyway due to a bad upper-layer checksum. The stack does not check Ethernet CRCs; this is left to the NIC, since CRC computation in software is time-consuming.
Keep in mind that layered network protocols usually calculate checksums at several points in the stack: a typical TCP/IP-over-Ethernet stack has a CRC (the FCS) at the link layer, an IP header checksum at the IP layer, and a TCP checksum at the TCP layer. The application may also verify the integrity of the data itself.
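To make the distinction concrete: the Ethernet FCS is a CRC-32, while IP, TCP and UDP use the 16-bit ones'-complement "Internet checksum" of RFC 1071. A small Python sketch of both (in real systems the FCS is computed by the NIC, as discussed above):

    import zlib

    def ethernet_fcs(frame: bytes) -> int:
        """Ethernet FCS is a CRC-32 over the frame (computed by the NIC in practice)."""
        return zlib.crc32(frame) & 0xFFFFFFFF

    def internet_checksum(data: bytes) -> int:
        """RFC 1071 16-bit ones'-complement checksum used by IP, TCP and UDP."""
        if len(data) % 2:
            data += b"\x00"                       # pad to a whole number of 16-bit words
        total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
        while total > 0xFFFF:
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back into the low 16 bits
        return ~total & 0xFFFF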