Does dnsmasq support overlapping address space? For example, two (or more) subnets with the same address space. If yes, how does that work?
I was reading about Cache-only memory access on Wikipedia. There is a line saying that "In NUMA, each address in the global address space is typically assigned a fixed home node". I don't know what a global address space is. Can someone explain what the global address space is?
In a machine with multiple nodes, each node has a set of addressable memory locations--its local address space. The global address space of the machine is the union of these sets.
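To illustrate the idea, here is a toy sketch of a two-node machine (the address ranges are made up for illustration):

```python
# Toy NUMA machine: two nodes, each owning a local range of physical
# addresses (ranges are invented for this example).
nodes = {
    0: range(0x0000, 0x4000),  # node 0's local memory
    1: range(0x4000, 0x8000),  # node 1's local memory
}

# The global address space is the union of the local ranges, and every
# global address has a fixed home node: the node whose range contains it.
def home_node(addr):
    for node, local_range in nodes.items():
        if addr in local_range:
            return node
    raise ValueError("address not in the global address space")

print(home_node(0x5000))  # 1 (this address's home is node 1)
```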
I have a PCI device which has some memory address inside BAR0. I suppose this memory address is just an OS virtual address which points to some physical memory of the device. The question is: where does it point? Reading the documentation of the device as well as the firmware source code, I noticed that this device has a register responsible for setting so-called memory windows. I was hoping that BAR0 would point exactly to them; however, this is not the case, and it looks like this:
BAR0 address -> Some unknown memory -> + 0x80000 My memory window
So why is my memory window offset by 0x80000 from where BAR0 points? Where is this BAR0 pointing us to, how is it set, and by whom?
Thanks
No. The address in a BAR is the physical address of the beginning of the BAR. That is how the device knows when and how to respond to a memory read or write request. For example, let's say BAR0 is 128K long and has a base address of 0xb840 0000; then the device will respond to a memory read or write to any of these addresses:
0xb840 0000
0xb840 0080
0xb840 1184
0xb841 fffc
but NOT to any of these addresses:
0x5844 0000 (Below BAR)
0xb83f 0000 (Below)
0xb83f fffc (Below)
0xb842 0000 (Above BAR)
0xe022 0000 (Above)
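The device's decode logic is just a range check. A minimal sketch, using the example base and length above:

```python
BAR0_BASE = 0xb8400000
BAR0_SIZE = 128 * 1024  # 128K

def device_claims(addr):
    # The device responds iff the address falls inside [base, base + size)
    return BAR0_BASE <= addr < BAR0_BASE + BAR0_SIZE

print(device_claims(0xb8401184))  # True  (inside the BAR)
print(device_claims(0xb8420000))  # False (one past the end)
```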
This was more significant in the original PCI, where the bus was actually a shared medium and devices might see requests for addresses belonging to other devices. With PCI Express's point-to-point architecture, only PCI "bridges" will ever see requests for memory addresses they do not own. But it still functions in exactly the same way. And the low bits of the address space still allow the device to assign different functions / operations to different parts of the space (as in your device, creating the separate memory window you're attempting to access).
Now, how you as a programmer access the BAR memory space is a different question. For virtually all modern systems, all memory accesses made by programs are to virtual addresses. So, in order for your memory access to reach a device, there must be a mapping from the virtual address to the physical address. That is generally done through page tables (though some architectures, like MIPS, have a dedicated area of virtual address space that is permanently mapped to part of the physical address space).
The exact mechanism for allocating virtual address space, and setting up the page tables to map from that space to the BAR physical address space is processor- and OS-dependent. So you will need to allocate some virtual address space, then create page tables mapping from the start of the allocated space to (BAR0) + 0x80000 in order to work with your window. (I'm describing this as two steps, but your OS probably provides a single function call to allocate virtual address space and map it to a physical range in one fell swoop.)
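As a concrete example, on Linux the kernel exposes an already-mapped view of each BAR through sysfs, so user space can get a virtual mapping without managing page tables itself. A minimal sketch (the device path is hypothetical, and the BAR size is taken from the 128K example above):

```python
import mmap
import os

WINDOW_OFFSET = 0x80000  # offset of the memory window within BAR0

def window_phys_addr(bar0_base):
    # Physical address of the window: BAR0 base plus the fixed offset
    return bar0_base + WINDOW_OFFSET

# On Linux, the BAR is assigned at boot; user space can mmap it through
# sysfs. The device address below is a made-up example.
path = "/sys/bus/pci/devices/0000:01:00.0/resource0"
if os.path.exists(path):
    fd = os.open(path, os.O_RDWR | os.O_SYNC)
    bar0 = mmap.mmap(fd, 128 * 1024)                 # map the whole BAR
    first_word = bar0[WINDOW_OFFSET:WINDOW_OFFSET + 4]  # start of the window
```

The `mmap` call is the "one fell swoop" mentioned above: the kernel allocates virtual address space and builds the page tables to the BAR's physical range in a single operation.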
Now, the process of assigning physical address space to the device (that is, actually sticking an address into the BAR) is generally done very early in system initialization by the system BIOS or an analogous early-boot mechanism while it's enumerating all the PCI devices installed in the system. The desired address space size is determined by querying the device, then the base address of a large enough physical address region is written into the BAR.
The final question: why your memory window is at an offset of 0x80000 within the device's address space is completely device-specific and cannot be answered more generally.
How can I go about programmatically pinging all addresses through Windows for an IPv6 network?
My address is fe80::1881:1fc2:a153:71f0%3(Preferred).
I have done this via IPv4 with no issue, but I am having a hard time understanding how to do this to build my ARP table for IPv6.
How can I go about programmatically [sic] pinging all address through
Windows for an IPv6 network. [sic]
If you try to ping every one of the possible 18,446,744,073,709,551,616 addresses on a standard /64 IPv6 network, at 1,000,000 addresses per second, it will take you over 584,542 years. You simply cannot try to ping every host on an IPv6 network.
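The arithmetic is easy to check: at one address per microsecond, sweeping a /64 takes about 2^64 microseconds.

```python
addresses = 2 ** 64          # hosts on a standard /64 IPv6 network
rate = 1_000_000             # pings per second
seconds = addresses / rate
years = seconds / (365.25 * 24 * 3600)
print(round(years))          # roughly 584,542 years
```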
...having a hard time understanding how to do this to build my ARP table
for IPv6.
IPv6 doesn't use ARP. IPv6 uses ND. IPv6 ND maintains a few tables, among them are the Neighbor Cache and the Destination Cache.
RFC 4861, Neighbor Discovery for IP version 6 (IPv6), explains the host data structures for IPv6 ND.
5.1. Conceptual Data Structures

Hosts will need to maintain the following pieces of information for each interface:

Neighbor Cache
A set of entries about individual neighbors to which traffic has been sent recently. Entries are keyed on the neighbor's on-link unicast IP address and contain such information as its link-layer address, a flag indicating whether the neighbor is a router or a host (called IsRouter in this document), a pointer to any queued packets waiting for address resolution to complete, etc. A Neighbor Cache entry also contains information used by the Neighbor Unreachability Detection algorithm, including the reachability state, the number of unanswered probes, and the time the next Neighbor Unreachability Detection event is scheduled to take place.

Destination Cache
A set of entries about destinations to which traffic has been sent recently. The Destination Cache includes both on-link and off-link destinations and provides a level of indirection into the Neighbor Cache; the Destination Cache maps a destination IP address to the IP address of the next-hop neighbor. This cache is updated with information learned from Redirect messages. Implementations may find it convenient to store additional information not directly related to Neighbor Discovery in Destination Cache entries, such as the Path MTU (PMTU) and round-trip timers maintained by transport protocols.

Prefix List
A list of the prefixes that define a set of addresses that are on-link. Prefix List entries are created from information received in Router Advertisements. Each entry has an associated invalidation timer value (extracted from the advertisement) used to expire prefixes when they become invalid. A special "infinity" timer value specifies that a prefix remains valid forever, unless a new (finite) value is received in a subsequent advertisement. The link-local prefix is considered to be on the prefix list with an infinite invalidation timer regardless of whether routers are advertising a prefix for it. Received Router Advertisements SHOULD NOT modify the invalidation timer for the link-local prefix.

Default Router List
A list of routers to which packets may be sent. Router list entries point to entries in the Neighbor Cache; the algorithm for selecting a default router favors routers known to be reachable over those whose reachability is suspect. Each entry also has an associated invalidation timer value (extracted from Router Advertisements) used to delete entries that are no longer advertised.
Is it possible to create a DoS attack in IPv6 by using ICMP packet too big messages?
So, for instance, say you want to deny access to somewhere by somehow spoofing an ICMP Packet Too Big message and setting the reported size to 68 octets (the minimum for IPv4), to throttle any traffic that a particular node receives. Would this kind of attack be possible?
In RFC 1981 it says
A node MUST NOT reduce its estimate of the Path MTU below the IPv6 minimum link MTU. Note: A node may receive a Packet Too Big message reporting a next-hop MTU that is less than the IPv6 minimum link MTU. In that case, the node is not required to reduce the size of subsequent packets sent on the path to less than the IPv6 minimum link MTU, but rather must include a Fragment header in those packets [IPv6-SPEC].
So this case in RFC 1981 would normally only occur when there is IPv6-to-IPv4 translation, where an IPv4 node would have an MTU smaller than 1280. If my understanding is correct, however, if there's an IPv6-to-IPv4 tunnel along the path, we can significantly slow down the traffic, since the translating node would fragment it.
However, this didn't quite make sense to me, since IPv6 routers don't fragment packets in transit.
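The clamping rule quoted from RFC 1981 above can be sketched as a few lines (the function name is illustrative):

```python
IPV6_MIN_MTU = 1280  # the IPv6 minimum link MTU

def update_pmtu(current_pmtu, reported_mtu):
    # A node accepts a smaller reported MTU, but never lets its Path MTU
    # estimate drop below the IPv6 minimum link MTU. For reports below
    # 1280 it instead includes a Fragment header in subsequent packets.
    return max(min(current_pmtu, reported_mtu), IPV6_MIN_MTU)

print(update_pmtu(1500, 68))    # 1280, not 68: the spoofed value is clamped
print(update_pmtu(1500, 1400))  # 1400: a legitimate reduction is accepted
```

So a spoofed 68-octet report cannot push the victim's PMTU estimate below 1280; at worst it forces packets down to the minimum MTU plus Fragment headers.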
Yes, that's possible. But you'll find that people have thought about it before, so it's not as easy as you might think. You'll generally need to impersonate a router close to the victim, and an ingress filter will prevent you from doing that. There are a few more hurdles.
But you can attack other hosts on your Ethernet segment.
I am trying to come up with an algorithm to allocate IP addresses from a universal address space. I have a backend database that stores the allocated subnets, from which I can calculate the available IP address space. I want to know if there are any existing algorithms to do this.
Edit:
The user can decide whether she wants a contiguous address or not. The algorithm will allocate the addresses based on the user's configuration object and a pre-defined set of rules.
Defragmentation is a different use case. The administrator can defragment the address space whenever he wants. This is not under consideration now.
Thanks!
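As a starting point, a first-fit allocator is the simplest scheme: walk the candidate subnets of the pool in order and hand out the first one that doesn't overlap anything already allocated. A minimal sketch using Python's `ipaddress` module (the function name and pool are illustrative, not from any particular library):

```python
import ipaddress

def allocate(pool, allocated, prefixlen):
    """First-fit: return the first /prefixlen subnet of `pool` that
    doesn't overlap any already-allocated subnet, and record it."""
    for candidate in pool.subnets(new_prefix=prefixlen):
        if not any(candidate.overlaps(existing) for existing in allocated):
            allocated.append(candidate)
            return candidate
    return None  # pool exhausted at this prefix length

pool = ipaddress.ip_network("10.0.0.0/8")
allocated = [ipaddress.ip_network("10.0.0.0/24")]  # loaded from the database
print(allocate(pool, allocated, 24))  # 10.0.1.0/24
```

First-fit keeps allocations packed toward the start of the pool, which reduces (but does not eliminate) fragmentation; a buddy-style allocator is the usual next step if contiguity guarantees matter.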