Do 802.11 probe requests ever contain real BSSIDs?

It seems like 802.11 probe requests never contain a real BSSID, but rather the wildcard BSSID (ff:ff:ff:ff:ff:ff); however, I can't seem to find any documentation stating this. This Meraki documentation says:
"Because the probe request is sent from the mobile station to the
destination layer-2 address and BSSID of ff:ff:ff:ff:ff:ff all AP's
that receive it will respond."
Does this mean that probe requests never contain real BSSIDs, even though they sometimes contain SSIDs?

I've seen many Probe Request frames with a specific BSSID. For example, in a wireless distribution system (WDS), one AP will probe another AP with a specific BSSID, since they share the same SSID:
Frame 2022: 310 bytes on wire (2480 bits), 310 bytes captured (2480 bits)
Radiotap Header v0, Length 25
802.11 radio information
IEEE 802.11 Probe Request, Flags: opmP..FT.
    Type/Subtype: Probe Request (0x0004)
    Frame Control Field: 0x41f3
        .... ..01 = Version: 1
        .... 00.. = Type: Management frame (0)
        0100 .... = Subtype: 4
        Flags: 0xf3
            .... ..11 = DS status: WDS (AP to AP) or Mesh (MP to MP) Frame (To DS: 1 From DS: 1) (0x3)
            .... .0.. = More Fragments: This is the last fragment
            .... 0... = Retry: Frame is not being retransmitted
            ...1 .... = PWR MGT: STA will go to sleep
            ..1. .... = More Data: Data is buffered for STA at AP
            .1.. .... = Protected flag: Data is protected
            1... .... = Order flag: Strictly ordered
    .101 1101 0001 0110 = Duration: 23830 microseconds
    Receiver address: 80:1d:30:a5:81:39 (80:1d:30:a5:81:39)
    Destination address: 80:1d:30:a5:81:39 (80:1d:30:a5:81:39)
    Transmitter address: 4b:3b:67:a4:4d:fe (4b:3b:67:a4:4d:fe)
    Source address: 4b:3b:67:a4:4d:fe (4b:3b:67:a4:4d:fe)
    BSS Id: ef:e1:f9:51:09:e6 (ef:e1:f9:51:09:e6)
    .... .... .... 0010 = Fragment number: 2
    0100 1110 1001 .... = Sequence number: 1257
    Frame check sequence: 0x853d68c9 [incorrect, should be 0x7089dc98]
    [FCS Status: Bad]
    HT Control (+HTC): 0x8ab91f91
WEP parameters
Data (245 bytes)
Assume your PC had joined an open wireless network named Starbucks. When you are at home, if some rogue AP has the same SSID, your PC will connect to it. That's why some clients will actually let you optionally select a BSSID as well. And in an ad-hoc network, there are many probe requests with a specific BSSID.

I cannot find anything that definitively says a probe request will never contain a real BSSID. Yet in all the examples I've found online, it is set to ff:ff:ff:ff:ff:ff. Here is another case from the blog of a wireless network expert:
Below shows the detail of the Probe Request frame sent by the client,
which is a management type with subtype value 4. As you can see, the
client is sending it at 6 Mbps (the lowest rate the client supports).
The address fields are set like below:
Address Field-1 = Receiver Address (= Destination Address) ff:ff:ff:ff:ff:ff
Address Field-2 = Transmitter Address (=Source Address) 84:38:38:58:63:D5
Address Field-3 = BSSID ff:ff:ff:ff:ff:ff
In addition, I did my own testing and never found a real BSSID broadcast. So while I won't say it never happens, it happens rarely enough that you should assume it will never be available.
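If you want to hunt for counterexamples in your own captures, a Wireshark display filter along these lines (standard wlan fields; 0x04 is the probe-request type/subtype code) shows only probe requests whose BSSID is not the wildcard:
wlan.fc.type_subtype == 0x04 && wlan.bssid != ff:ff:ff:ff:ff:ff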

How can a socket connect when bound to a specific ethernet interface that's also being used by VPN / utun interface?

I'm trying to write a function that can connect to a server using a specific network interface so that it's consistently routed through that interface's gateway. This is on a macOS system that has one or more VPN connections.
Here's a proof-of-concept test function I've written:
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <strings.h>
#include <netdb.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>

void connectionTest(const char *hostname, int portNumber, const char *interface) {
    struct hostent *serverHostname = gethostbyname(hostname);
    if (serverHostname == NULL) {
        printf("error: no such host\n");
        return;
    }
    int socketDesc = socket(AF_INET, SOCK_STREAM, 0);
    if (socketDesc < 0) {
        printf("error: could not create socket, errno: %d\n", errno);
        return;
    }
    int interfaceIndex = if_nametoindex(interface);
    if (interfaceIndex == 0) {
        printf("error: no such interface\n");
        close(socketDesc);
        return;
    }
    // Scope the socket to the specified interface (macOS-specific option):
    if (setsockopt(socketDesc, IPPROTO_IP, IP_BOUND_IF,
                   &interfaceIndex, sizeof(interfaceIndex)) < 0) {
        printf("error: setsockopt failed, errno: %d\n", errno);
        close(socketDesc);
        return;
    }
    struct sockaddr_in servAddr;
    bzero((char *)&servAddr, sizeof(servAddr));
    servAddr.sin_family = AF_INET;
    bcopy((char *)serverHostname->h_addr,
          (char *)&servAddr.sin_addr.s_addr, serverHostname->h_length);
    servAddr.sin_port = htons(portNumber);
    if (connect(socketDesc, (struct sockaddr *)&servAddr, sizeof(servAddr)) < 0) {
        printf("connect failed, errno: %d\n", errno);
        close(socketDesc);
        return;
    }
    printf("connection succeeded\n");
    close(socketDesc);
}
This function will successfully connect so long as the interface is one of the utun interfaces created by the VPNs, or a physical interface that is not used by the VPNs. But if I try to use the physical interface that is used by the VPNs, the function fails with errno 51: Network is unreachable.
For a more specific example, consider a system with the following network interfaces:
en0: Ethernet connection
en1: Wi-Fi connection
utun10: VPN connection 1, connected via en0
utun11: VPN connection 2, also connected via en0
If I call my function with something like:
connectionTest("api.ipify.org", 80, "en1");
connectionTest("api.ipify.org", 80, "utun10");
connectionTest("api.ipify.org", 80, "utun11");
... it will succeed. However, this is what produces the "network unreachable" error:
connectionTest("api.ipify.org", 80, "en0");
Is there some way to have the function work in the case of en0? (Preferably without changing the system's routing table just for this one connection?)
Edit:
It looks like the system doesn't know how to route packets through en0 when the VPN is up, unless it has a non-default route for en0.
I tried using the route command to check which route in the table would be used for a specific interface, and I get the following:
$ route get -ifscope en0 1.1.1.1
route: writing to routing socket: not in table
Only -ifscope en0 produces that error. However, the route table indicates there is a default route for en0. Here is the routing table when only ethernet and the VPN are connected (so no Wi-Fi or second VPN):
$ netstat -rn
Routing tables
Internet:
Destination          Gateway            Flags    Refs      Use   Netif Expire
0/1                  10.16.0.1          UGSc      165        0  utun10
default              192.168.20.1       UGSc        0        0  en0
10.16/16             10.16.0.8          UGSc        3        0  utun10
10.16.0.8            10.16.0.8          UH          2        0  utun10
127                  127.0.0.1          UCS         0        0  lo0
127.0.0.1            127.0.0.1          UH          7  7108160  lo0
128.0/1              10.16.0.1          UGSc       40        0  utun10
169.254              link#8             UCS         1        0  en0      !
192.168.20           link#8             UCS         9        0  en0      !
192.168.20.1/32      link#8             UCS         2        0  en0      !
224.0.0/4            link#22            UmCS        0        0  utun10
224.0.0/4            link#8             UmCSI       1        0  en0      !
224.0.0.251          1:0:5e:0:0:fb      UHmLWI      0        0  en0
255.255.255.255/32   link#22            UCS         0        0  utun10
255.255.255.255/32   link#8             UCSI        0        0  en0      !
There's clearly a default route listed for en0 pointing to its gateway, 192.168.20.1. Why isn't the packet being routed? If I create a static route for 1.1.1.1/32 or even 1/8 it will work. But so long as en0 only has a default route, it won't work. It's like the default route has been disabled somehow.
Edit 2:
If I add a new route to the table using:
$ route add -ifscope en0 0/0 192.168.20.1
so that the routing table now includes the following entry:
Destination          Gateway            Flags    Refs      Use   Netif Expire
default              192.168.20.1       UGScI       1        0  en0
alongside all of the entries above (so there are now two default entries), then the connection works. Why is it necessary to have an interface-specific default route for this to work?
Once you added the routing table to your question, your problem became obvious.
The routing table determines the gateway to which a packet is sent: the destination address is compared against the routes in the table, and the most-specific (longest) match wins. A default route is the least-specific (shortest) match, used as the route of last resort when no more-specific route exists.
Based on the routing table you provided, any packet with a destination address from 1.0.0.0 to 126.255.255.255 (0.0.0.0/8 and 127.0.0.0/8 are unusable exceptions) will match the 0/1 entry rather than the default route (0/0), and any packet with a destination address from 128.0.0.0 to 223.255.255.255 (224.0.0.0/4 is multicast, and 240.0.0.0/4 is unusable) will match the 128/1 entry rather than the default route, because a prefix length of 1 is more specific than the default route's length of 0. That means any packet destined for an address in those ranges (combined, essentially every off-link destination) is sent to the gateway referenced by the 0/1 and 128/1 entries: 10.16.0.1, i.e. the tunnel.
To solve your problem, remove the 0/1 and 128/1 routing table entries and replace them with one or more entries restricted to the networks the tunnel can actually reach. With that, traffic not matching the tunnel route(s) or other more-specific entries will fall through to the default route.
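To make the longest-match rule concrete, here is a toy route lookup in C with the three relevant entries from the table above hard-coded (purely illustrative, not how the kernel actually stores routes):
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct route {
    uint32_t prefix;      /* network address in host byte order */
    int      len;         /* prefix length in bits */
    const char *gateway;
};

/* Does addr fall inside the route's prefix? */
static int matches(uint32_t addr, const struct route *r) {
    uint32_t mask = r->len ? 0xFFFFFFFFu << (32 - r->len) : 0;
    return (addr & mask) == (r->prefix & mask);
}

int main(void) {
    struct route table[] = {
        { 0x00000000u, 1, "10.16.0.1 (utun10)"  },   /* 0/1     */
        { 0x00000000u, 0, "192.168.20.1 (en0)"  },   /* default */
        { 0x80000000u, 1, "10.16.0.1 (utun10)"  },   /* 128.0/1 */
    };
    uint32_t dst = 0x01010101u;  /* 1.1.1.1 */
    const struct route *best = NULL;
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (matches(dst, &table[i]) && (!best || table[i].len > best->len))
            best = &table[i];
    /* Prints "1.1.1.1 -> 10.16.0.1 (utun10)": the /1 beats the default. */
    printf("1.1.1.1 -> %s\n", best ? best->gateway : "no route");
    return 0;
}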

Obtaining the number of correct messages in the network layer by the Omnet++ result collection

Suppose we have a wireless network with 25 nodes, where each node sends messages to some other nodes according to a routing protocol such as AODV.
We simulate this network. After the simulation finishes, how do we obtain the number of correct messages at the network layer from the OMNeT++ result collection? Two metrics are defined: sentPacketCount and receivedPacketCount.
By correct messages, I mean messages received by a node whose destination address field is the address of that same node. receivedPacketCount is incremented when a packet is received at its destination node; if retransmission occurs, the packet should still be counted only once on the receiver side. sentPacketCount is incremented every time a packet is sent.
If a node has more than one application, the messages generated by all applications of the same node must be counted.
A part of the omnetpp.ini file for one node:
*.hostA.numApps = 2
*.hostA.app[0].typename = "UdpBasicApp"
*.hostA.app[0].destAddresses = "hostB"
*.hostA.app[0].destPort = 5000
*.hostA.app[0].messageLength = 1000B
*.hostA.app[0].sendInterval = exponential(12ms)
*.hostA.app[0].packetName = "UDPData"
*.hostA.app[0].typename = "TcpBasicApp"
*.hostA.app[0].destAddresses = "hostC"
*.hostA.app[0].destPort = 5001
*.hostA.app[0].messageLength = 1024B
*.hostA.app[0].sendInterval = exponential(45ms)
*.hostA.app[0].packetName = "TCPData"
For TcpBasicApp or any other TCP app, packet counts are meaningless. TCP apps deal in streams, not packets: even if you send out 1000 bytes in one write operation to a TCP socket, the other end may receive them in 3 read operations, or 20. TCP also guarantees delivery, so the number of successfully sent bytes equals the number of successfully received bytes, and the byte-count statistics therefore tell you nothing either.
Counting sent/received packets/bytes makes sense for UDP traffic: UDP has a concept of packets and no guaranteed delivery. Luckily, UdpBasicApp gathers these statistics by default. Take a look at the packetReceived and packetSent statistics; they record both packet count and total byte count.
You may need to turn on scalar recording on all apps:
**.app[*].*.scalar-recording = true
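If you want the result files to contain only those counters, per-statistic recording modes can narrow things down; a sketch, assuming the packetReceived/packetSent statistic names mentioned above:
**.app[*].packetReceived.result-recording-modes = count, sum(packetBytes)
**.app[*].packetSent.result-recording-modes = count, sum(packetBytes)
The per-node total of correct messages is then the sum of packetReceived:count over all apps of that node.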

How to find dma_request_chan() failure reason details?

In an external kernel module using the DMA Engine, a call to dma_request_chan() returns an error pointer with value -19, i.e. -ENODEV ("No such device").
Now, in the active device tree I do find a dma-names entry with the name I'm trying to get a channel for, so my suspicion is that something else deeper in the forest is already not found.
How do I find out what's wrong?
Background:
I have a Zynq MP Ultrascale+ board here, with an FPGA design which uses an AXI VDMA block to provide one channel of data to the Cortex-A's Linux; the data is written to DDR4 by the FPGA and is to be read from Linux.
I found that there is a Xilinx DMA driver included in the kernel (in the Xilinx source repo, anyway), currently at kernel version 5.6.0.
That driver has no user-space interface, so an intermediate kernel driver is needed.
This is depicted, and they have an example, here: Section "4 DMA Proxy Design". I modified the code in the dma-proxy.c of the zip file linked there so that it uses only the RX channel, i.e. it also only tries to request that one.
The code for that is here, so as not to make this post huge:
Modified dma-proxy.c at onlinegdb.com
Line 407 has the function create_channel(), which originally used dma_request_slave_channel(); that wrapper discards the error code of the function it wraps, so to see the error I am calling dma_request_chan() directly instead.
The function create_channel() is called from dma_proxy_probe() at line 470 (the occurrences before that are deactivated by a compile switch).
So by way of this call, dma_request_chan() will be called with the parameters:
create_channel(pdev, &channels[RX_CHANNEL], "dma_proxy_rx", DMA_DEV_TO_MEM);
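For reference, a minimal error path around that call could look like the following (a sketch using the standard IS_ERR()/PTR_ERR() helpers, not the exact dma-proxy code):
struct dma_chan *chan;

chan = dma_request_chan(&pdev->dev, "dma_proxy_rx");
if (IS_ERR(chan)) {
    /* PTR_ERR() recovers the negative errno, here -19 (-ENODEV) */
    dev_err(&pdev->dev, "dma_request_chan(dma_proxy_rx) failed: %ld\n",
            PTR_ERR(chan));
    return PTR_ERR(chan);
}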
The Device Tree for my board has an added node for dma-proxy driver as is shown at the top of the dma-proxy.c
dma_proxy {
    compatible = "xlnx,dma_proxy";
    dmas = <&axi_dma_0 0>;
    dma-names = "dma_proxy_rx";
};
The name "axi_dma_0" matches with the name in the axi DMA device tree node:
axi_dma_0: dma@a0000000 {
    #dma-cells = <0x1>;
    clock-names = "s_axi_lite_aclk", "m_axi_s2mm_aclk";
    clocks = <0x3 0x47 0x3 0x47>;
    compatible = "xlnx,axi-dma-7.1", "xlnx,axi-dma-1.00.a";
    interrupt-names = "s2mm_introut";
    interrupt-parent = <0x1d>;
    interrupts = <0x0 0x2>;
    reg = <0x0 0xa0000000 0x0 0x1000>;
    xlnx,addrwidth = <0x28>;
    xlnx,sg-length-width = <0x1a>;
    phandle = <0x1e>;
    dma-channel@a0000030 {
        compatible = "xlnx,axi-dma-s2mm-channel";
        dma-channels = <0x1>;
        interrupts = <0x0 0x2>;
        xlnx,datawidth = <0x40>;
        xlnx,device-id = <0x0>;
    };
};
If I now look here:
% cat /proc/device-tree/dma_proxy/dma-names
dma_proxy_rx
Looks like the dma_proxy_rx that I'm trying to request the channel for is in there.
Edit:
In the boot log, I see this:
xilinx-vdma a0000000.dma: Please ensure that IP supports buffer length > 23 bits
irq: no irq domain found for interrupt-controller@a0010000 !
xilinx-vdma a0000000.dma: unable to request IRQ 0
xilinx-vdma a0000000.dma: WARN: Device release is not defined so it is not safe to unbind this driver while in use
xilinx-vdma a0000000.dma: Xilinx AXI DMA Engine Driver Probed!!
There are warnings - but in the end, the Xilinx AXI DMA Engine got "probed", meaning the lowest level driver loaded and is ready, right?
So it looks to me like there should be my device, but the kernel disagrees.
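One generic way to find out, assuming the kernel is built with CONFIG_DYNAMIC_DEBUG, is to enable dynamic debug prints in the dmaengine core and the Xilinx driver and then retry the probe; the reason for rejecting a channel request then shows up in dmesg:
# echo 'file dmaengine.c +p' > /sys/kernel/debug/dynamic_debug/control
# echo 'file xilinx_dma.c +p' > /sys/kernel/debug/dynamic_debug/control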
I've got the same problem with a similar configuration. After digging through a lot of kernel source code (especially drivers/dma/xilinx/xilinx_dma.c), I solved this problem by changing the channel number in the dmas property from 0 to 1 in the dma-proxy device tree entry, like this:
dma_proxy {
    compatible = "xlnx,dma_proxy";
    dmas = <&axi_dma_0 1>;
    dma-names = "dma_proxy_rx";
};
It seems that the dma-proxy example is written for an AXI DMA block with both mm2s (channel #0) and s2mm (channel #1) channels. If the mm2s channel is removed from the AXI DMA block, the s2mm channel is still #1.
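To double-check which channels the controller actually registered, the dmaengine sysfs class lists them (illustrative output; names depend on probe order):
# ls /sys/class/dma/
dma0chan0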

How to send an RTS 802.11 packet using Scapy (and get a CTS response)

I'm quite new to Scapy, and I'm trying to craft an RTS packet and send it to an AP, in order to get a CTS response. However, I'm having a really hard time figuring out the proper way to do it (being a beginner in networking and 802.11 packets doesn't help either).
This is the code I have for now:
import struct
from scapy.all import RadioTap, Dot11

raw = struct.pack("<H", 123)           # 123 microseconds
timeval = struct.unpack(">H", raw)[0]  # pack LE, unpack BE: swaps the two bytes
pkt = RadioTap()/Dot11(addr1=target_addr, addr2=my_addr, type=1, subtype=11, ID=timeval)
I know that type must be 1 since it's a Control frame, and that subtype must be 11 because it's an RTS frame. However, when I send the packet with sr(), srp(), or sr1(), I either get no response back (Scapy waits for a response but nothing comes, so it just keeps waiting) or I get back the exact frame I sent.
This question mentions adding a Dot11Elt() layer at the end; however, that changes nothing in my case.
This is the type of response I get back:
And if I open the 0th element of the response tuple with Wireshark, I get:
I've hidden the MAC addresses, but they are the same as those I put in the packet I sent to the AP (target_addr and my_addr). I'm expecting to get back a CTS with my_addr as the destination address.
What am I doing wrong?
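One thing worth ruling out is a capture problem rather than a crafting problem: sniff on a second monitor-mode interface and filter for CTS frames addressed to your station. In Wireshark display-filter syntax (0x1c is the CTS type/subtype code; replace the address below with your my_addr) that would be:
wlan.fc.type_subtype == 0x1c && wlan.ra == aa:bb:cc:dd:ee:ff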

PIC16F877 + 24LC64 via i2c

My task is to copy the first 255 bytes from an external EEPROM (24LC64) to the internal one (PIC16F877) via the I2C bus. I've read AN1488, all the datasheets, and the MikroC guide (oh yes, I'm using MikroC), but I'm hopeless. My code tries to read something, but then, reading my PIC's EEPROM with the programmer (which can't read the 24LC64, so I don't even know what's on it, but there is definitely something, and it is different from what I'm getting), I find the whole EEPROM filled with "A2" or "A3". My guess is that it's the first address byte, with which I'm addressing the 24LC64. Could you please inspect my code (it's quite small =)) and point me at my mistakes?
char i;
unsigned short Data;

void main() {
    PORTB = 0;
    TRISB = 0;
    I2C1_Init(100000);
    PORTB = 0b00000010;
    for (i = 0x00; i < 0xFF; i++) {
        I2C1_Start();
        I2C1_Wr(0xA2);          // being 1010 001 0
        // I'm getting the full internal EE filled with what's in brackets above
        I2C1_Wr(0b00000000);
        I2C1_Wr(i);
        I2C1_Repeated_Start();
        I2C1_Wr(0xA3);          // being 1010 001 1
        Data = I2C1_Rd(0);
        I2C1_Stop();
        EEPROM_write(i, Data);  // How could that 1010 001 0 get into here???
        Delay_100ms();
    }
    PORTB = 0b00000000;
    while (1) {
    }
}
P.S. I've tried this with sequential read, but it "reads" (again that "A2"...) only the 1st byte, so I've posted this one.
P.P.S. I'm working on real hardware, no Proteus involved.
P.P.P.S. I can't test writing, because I have only one 24LC64 with important info on it; that's also why its WP pin is pulled up to Vcc.
This isn't a specific answer but more of a checklist for I2C comms, since it's difficult to help with your problem without looking at a scope and without delving into the API calls that you've used.
Check the address of your EEPROM. I2C uses a 7-bit address with a R/W bit appended at the end, so it's easy to make a mistake here (see the sketch after this list).
Check the command sequence that your EEPROM expects to receive for a "data read".
Check how the I2C_ API that you're using deals with ACKs from the EEPROM. They need to be handled somewhere (usually in an ISR), and it's not obvious from your example where they're dealt with.
Check that you've got the correct pull-ups on SDA and SCL as per the requirements of your design; they're needed for I2C to work.
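As a concrete reference for the first two points, here is a sketch of a single random-address read from a 24LC64, assuming its A2..A0 pins are tied low (so the control byte is 0xA0 for write and 0xA1 for read), using the same MikroC calls as your code; untested, for illustration only:
unsigned short ReadEE(unsigned short addr) {
    unsigned short data;
    I2C1_Start();
    I2C1_Wr(0xA0);           // control byte: 1010 000 + R/W = 0 (set address)
    I2C1_Wr(addr >> 8);      // address high byte - the 24LC64 expects 16 bits
    I2C1_Wr(addr & 0xFF);    // address low byte
    I2C1_Repeated_Start();
    I2C1_Wr(0xA1);           // control byte again, now R/W = 1 (read data)
    data = I2C1_Rd(0);       // read one byte, NACK to end the transfer
    I2C1_Stop();
    return data;
}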
