Limit on number of wireless "sta" stations in OpenWrt

I have OpenWrt running on a TP-Link WR902AC (pocket router).
My /etc/config/wireless file contains 10 sta configurations for connecting to APs, all of them active (option disabled '0').
The intent is that OpenWrt connects to whichever one of the configured APs is available.
However, only the first 4 configured networks are ever attempted; the rest are simply ignored
(if the first 4 are not available, the 5th one is still ignored rather than tried).
I tried to identify the bottleneck.
Only the first 4 wpa_supplicant instances are started, as is evident from these files in /tmp/run:
./tmp/run/wpa_supplicant/wifi3
./tmp/run/wpa_supplicant/wifi1
./tmp/run/wpa_supplicant/wifi2
./tmp/run/wpa_supplicant/wifi0
When I disable the first one, the fifth one gets connected after reconnecting with "wifi".
I tried to follow the source code, but lost track after ubus is called from the wifi script.
I believe this is a similar question to https://forum.openwrt.org/t/limit-on-the-number-of-wifi-ssids/63141
iw list on OpenWrt shows me the limit:
valid interface combinations:
* #{ IBSS } <= 1, #{ managed, AP, mesh point, P2P-client, P2P-GO } <= 4,
total <= 4, #channels <= 1, STA/AP BI must match
I tried to use wpa_supplicant directly instead of depending on the scripts:
wpa_supplicant -c /root/wifi0.conf -i wifi0 -s -B
wpa_supplicant -c /root/wifi1.conf -i wifi1 -s -B
wpa_supplicant -c /root/wifi2.conf -i wifi2 -s -B
wpa_supplicant -c /root/wifi3.conf -i wifi3 -s -B
wpa_supplicant -c /root/wifi4.conf -i wifi4 -s -B
wpa_supplicant -c /root/wifi5.conf -i wifi5 -s -B
This failed with an "interface wifi4 not available" error.
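A quick sanity check here is to see which virtual interfaces actually exist before pointing wpa_supplicant at them, since wpa_supplicant only attaches to an existing netdev. A sketch (phy0 as the built-in radio is an assumption; check the iw dev output for the real name):
# List every virtual interface per phy; wifi4 won't be listed if the
# scripts never created it.
iw dev
# Try creating the 5th vif by hand; whether this (or bringing it up)
# succeeds depends on the driver's advertised combination limits.
iw phy phy0 interface add wifi4 type managed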
Could someone point me to the source where this hard limit is set?
Is there any way around this?
Thanks in advance.
Update:
An mt7601u-based USB WiFi dongle was added to the WR902AC and configured (as radio2).
This time only one interface gets connected; if I have an AP configured, the sta doesn't connect at all.
So the number of slots is limited (an AP counts as one slot and each sta is one slot).
The built-in 2.4 GHz radio has 4 slots and the 5 GHz radio has 8 slots.
The mt7601u-based WiFi has only 1 slot.
Probably there exists a USB dongle that has 8 slots. Could someone point me to the theory behind all this?
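As far as I can tell, the limit is not set in the OpenWrt scripts but advertised by each wireless driver to mac80211 as an interface-combination table, which is exactly what the "valid interface combinations" block in the iw output reports; a driver that advertises no combinations would be limited to a single active interface, which matches the dongle's single slot. A sketch for locating this (the grep assumes a kernel source tree; phy names vary per device):
# In a kernel tree, each driver's combination table mentions this struct:
grep -rn "ieee80211_iface_combination" drivers/net/wireless/ | head
# On the device itself, the same information per radio:
iw phy phy0 info | sed -n '/valid interface combinations/,/^$/p'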

Related

PCIe completion timeout

When a PCIe read request is issued, the FPGA responds with a delay (due to the latency of reading from a custom component in the FPGA). Because of this I am getting all 'ff's most of the time. Is it possible to make the read wait? Does this have anything to do with the PCIe completion timeout value?
I used the command:
sudo setpci -s01:00.0 CAP_EXP+0x28.W=0x0000
I set the CTO value in the driver using:
pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL2, PCI_EXP_DEVCTL2_COMP_TIMEOUT, 0xd);
Output:
$ sudo lspci -vv -s 01:00.0 | grep -A1 DevCtl2
DevCtl2: Completion Timeout: 4s to 13s, TimeoutDis-, LTR-, OBFF Disabled
LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
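For context: the completion timeout bounds how long the requester waits for the FPGA's completion. If the completion arrives after that window, the host typically synthesizes all-ones, which is why extending the range can help; a request the FPGA never completes will read 'ff' at any setting. A quick read-back to confirm the new value took effect (bits 3:0 of Device Control 2 encode the range; 0xd is what lspci decodes as "4s to 13s" above):
# Raw read-back of Device Control 2 (offset 0x28 into the PCIe capability);
# the low nibble should now be 0xd.
sudo setpci -s 01:00.0 CAP_EXP+0x28.W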

Disconnecting and reconnecting nvme

Is there a way within Amazon/CentOS/Linux to swap the ordering of Nitro disks?
I have an AMI which consistently brings devices up in the incorrect order; by this I mean nvme1n1 and nvme2n1 should be switched round. If I run nvme id-ctrl -v /dev/nvme1n1 | grep sn I get a different serial number back following a reboot. I know they're "wrong" because the serial numbers don't reflect their capacities... Hope that makes sense (I appreciate it's a bit confusing). This only ever occurs on servers with two or more disks; upon a reboot the disks are "correct".
My question is: is there a method of forcing the NVMe device to disconnect and reconnect (in the hope that the mapping comes up in the expected order)?
Thanks guys
Amazon Linux version 2017.09.01 and later contains scripts and a udev rule that automatically map NVMe devices to /dev/xvd?. It is very briefly mentioned in the documentation, but there is not much information there.
You can obtain a copy by launching the Amazon Linux AMI, but they have also been posted in other places on the web; for example, I found this gist.
Very simple in the end:
echo 1 > /sys/bus/pci/devices/$(readlink -f /sys/class/nvme/nvme1 | awk -F "/" '{print $5}')/remove
echo 1 > /sys/bus/pci/devices/$(readlink -f /sys/class/nvme/nvme2 | awk -F "/" '{print $5}')/remove
echo 1 > /sys/bus/pci/rescan
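The two remove writes detach each NVMe controller from the PCI bus and the rescan re-enumerates them. If the underlying goal is just stable naming, an alternative sketch is to stop depending on the nvmeXn1 ordering and address disks via the serial-keyed symlinks udev creates:
# Symlinks keyed on model and serial number; these survive reordering.
ls -l /dev/disk/by-id/ | grep nvme
# Cross-check a controller's serial directly:
sudo nvme id-ctrl /dev/nvme1n1 | grep -i sn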

Unable to download data using Aspera

I am trying to download data from the European Nucleotide Archive (ENA) using the Aspera CLI, but my downloads keep stalling. I downloaded several files with the same tool earlier; this has only been happening for the last month. I usually use the following command:
ascp -QT -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .
From a post on Beta Science, I learnt that this might be due to not limiting the download speed, so I tried the -l argument, but it was of no help:
ascp -QT -l 300m -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .
Your command works.
You might be overdriving your local network. How much bandwidth do you have?
Here "-l 300m" sets a target rate of 300 Mbps; if you have less than 30, this can cause exactly these problems.
Try reducing the target rate to what you actually have.
(Are you on wired or WiFi?)
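For example, on a link measured at roughly 30 Mbps, capping the target rate below that would look like this (the 20m figure is illustrative; substitute your measured bandwidth):
ascp -QT -l 20m -P33001 -k 1 -i ~/.aspera/connect/etc/asperaweb_id_dsa.openssh era-fasp@fasp.sra.ebi.ac.uk:/vol1/fastq/ERR192/009/ERR1924229/ERR1924229.fastq.gz .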

Debug BPF code on netlink messages

I am writing a bpf filter to prevent certain netlink messages. I am trying to debug the bpf code. Is there any debug tool that could help me?
I was initially thinking of using nlmon to capture netlink messages:
From https://jvns.ca/blog/2017/09/03/debugging-netlink-requests/
# create the network interface
sudo ip link add nlmon0 type nlmon
sudo ip link set dev nlmon0 up
sudo tcpdump -i nlmon0 -w netlink.pcap # capture your packets
Then use bpf_dbg (https://github.com/cloudflare/bpftools/blob/master/linux_tools/bpf_dbg.c):
1) ./bpf_dbg to enter the shell (shell cmds denoted with '>'):
2) > load bpf 6,40 0 0 12,21 0 3 20... (this is the bpf code I intend to debug)
3) > load pcap netlink.pcap
4) > run /disassemble/dump/quit (self-explanatory)
5) > breakpoint 2 (sets bp at loaded BPF insns 2, do run then;
multiple bps can be set, of course, a call to breakpoint
w/o args shows currently loaded bps, breakpoint reset for
resetting all breakpoints)
6) > select 3 (run etc will start from the 3rd packet in the pcap)
7) > step [-, +] (performs single stepping through the BPF)
Has anyone tried this before?
Also, I was not able to get the nlmon module to load on my kernel (is there a doc for this?). I am running Linux version 4.10.0-40-generic.
The nlmon module seems to be present in the kernel source:
https://elixir.free-electrons.com/linux/v4.10/source/drivers/net/nlmon.c#L41
But when I search /lib/modules/ for nlmon.ko, I don't find anything:
instance-1:/lib/modules$ find . | grep -i nlmon
instance-1:/lib/modules$
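nlmon is only built when the kernel was configured with CONFIG_NLMON, so it is worth checking the running kernel's config before hunting for the module. A sketch:
# nlmon ships only if the distro kernel enabled it at build time.
grep NLMON /boot/config-$(uname -r)
# "# CONFIG_NLMON is not set" means it was never built;
# "CONFIG_NLMON=m" means it should load with: sudo modprobe nlmon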

Nmap - RTTVAR has grown to over 2.3 seconds, decreasing to 2.0

I have a script that I'm using to build a config for icinga2. The network is large, multiple /13's large. When I run the script I keep getting the "RTTVAR has grown to over 2.3 seconds, decreasing to 2.0" warning. I've tried raising my gc_thresh values and breaking up the subnets, and I've dug through the little info Google offers without finding a fix. If anyone has any ideas, I'd really appreciate it. I'm on Ubuntu 16.04.
My script:
# Find devices and create IP list
i=72
while [ $i -lt 255 ]
do
echo "$(date) - Scanning xx.$i.0.0/16" >> files/scan.log
nmap -sn --host-timeout 5 xx.$i.0.0/16 -oG - | awk '/Up$/{print $2}' >> files/ip-list
let i=i+1
done
My /etc/sysctl.conf
# Force gc to clean-up quickly
net.ipv4.neigh.default.gc_interval = 3600
# Set ARP cache entry timeout
net.ipv4.neigh.default.gc_stale_time = 3600
# Setup DNS threshold for arp
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.gc_thresh2 = 4096
net.ipv4.neigh.default.gc_thresh1 = 2048
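Assuming those lines live in /etc/sysctl.conf, they only take effect after a reload:
# Reload sysctl settings without a reboot, then spot-check one value.
sudo sysctl -p
sysctl net.ipv4.neigh.default.gc_thresh3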
Edit: added --host-timeout 5, removed -n.
I can suggest using a ping scan. If you want an "overall sight" of your network you can use:
nmap -sP -n
It decreases the time a little compared to nmap -sn; you can check it with small examples.
As I said in a comment: use --host-timeout and --max-retries, and that will improve your performance.
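Putting that together with the loop above, the scan line might look like this (the 5s timeout and single retry are illustrative; tune them to the network):
nmap -sn -n --host-timeout 5s --max-retries 1 xx.$i.0.0/16 -oG - | awk '/Up$/{print $2}' >> files/ip-list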
