SbSocketResolveTest.IgnoreExtraBits test fails in NPLB - Cobalt

The SbSocketResolveTest.IgnoreExtraBits test case in Cobalt release 11 fails intermittently (it sometimes passes) on the same platform with the same binary. From the test code in socket_resolve.cc [1] you can see that whether the filter is set to 1 << 14 or to 0, both calls reach line 68 (with hints.ai_family = AF_UNSPEC set at line 39). Yet for the same hostname, the first call returns 2 IP addresses (1 IPv4, 1 IPv6) while the second returns 5 (1 IPv4, 4 IPv6), and the test then fails because it expects both resolutions to return the same number of addresses. So it looks like something is wrong with the test case itself; can someone help take a look?
[ RUN ] SbSocketResolveTest.IgnoreExtraBits
[AAAAA]in SbSocketResolve at 53 in ../../third_party/starboard/shared/posix/socket_resolve.cc, filters=16384
[AAAAA]in SbSocketResolve at 67 in ../../third_party/starboard/shared/posix/socket_resolve.cc
getaddrinfo response 0
Flags: 0x20
Family: AF_INET v4
IPv4 addr 203.188.200.67
getaddrinfo response 1
Flags: 0x20
Family: AF_INET v6
IPv6 addr 2406:2000:ec:c00::1001
[AAAAA]in SbSocketResolve at 53 in ../../third_party/starboard/shared/posix/socket_resolve.cc, filters=0
[AAAAA]in SbSocketResolve at 67 in ../../third_party/starboard/shared/posix/socket_resolve.cc
getaddrinfo response 0
Flags: 0x20
Family: AF_INET v4
IPv4 addr 203.188.200.67
getaddrinfo response 1
Flags: 0x20
Family: AF_INET v6
IPv6 addr 2001:4998:c:e33::53
getaddrinfo response 2
Flags: 0x20
Family: AF_INET v6
IPv6 addr 2001:4998:44:204::100d
getaddrinfo response 3
Flags: 0x20
Family: AF_INET v6
IPv6 addr 2001:4998:44:204::a7
getaddrinfo response 4
Flags: 0x20
Family: AF_INET v6
IPv6 addr 2001:4998:c:e33::54
../../starboard/nplb/socket_resolve_test.cc:80: Failure
Value of: resolution1->address_count
Actual: 2
Expected: resolution2->address_count
Which is: 5
[ FAILED ] SbSocketResolveTest.IgnoreExtraBits (5203 ms)
[1]: https://cobalt.googlesource.com/cobalt/+/release_11/src/starboard/shared/posix/socket_resolve.cc
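For context, a varying address count is consistent with normal DNS behaviour: many hosts rotate or load-balance their A/AAAA record sets, so two back-to-back getaddrinfo() calls with AF_UNSPEC can legitimately return different numbers of addresses. A rough way to check whether the resolver behaves like this (a sketch only; youtube.com is just an example hostname, not necessarily the one the test resolves):
for i in 1 2; do dig +short youtube.com A youtube.com AAAA | wc -l; sleep 1; done
If the printed counts differ between iterations, an assertion that resolution1->address_count equals resolution2->address_count will be flaky regardless of the filter bits.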

Related

VPN affects DNS resolution on macOS

I set up minikube on macOS, and as a result a virtual interface is created on the host machine, as follows:
bridge100: flags=8a63<UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
options=3<RXCSUM,TXCSUM>
ether f2:18:98:52:ec:64
inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255
inet6 fe80::f018:98ff:fe52:ec64%bridge100 prefixlen 64 scopeid 0x13
inet6 fdd5:e29:6049:e016:475:5258:18a3:3700 prefixlen 64 autoconf secured
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x0
member: vmenet0 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 18 priority 0 path cost 0
member: vmenet1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 20 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
On the minikube VM, I get an error when trying to pull an image while a VPN is running on the host machine:
$ docker run -it --net=container:$ID --pid=container:$ID --volumes-from=$ID alpine sh
Unable to find image 'alpine:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.19:59651->192.168.64.1:53: i/o timeout.
If I run dig on the host while the VPN is running, I get the following output showing that DNS via 192.168.64.1 fails.
(base) /etc $ dig registry-1.docker.io
; <<>> DiG 9.10.6 <<>> registry-1.docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45428
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 3591 IN A 52.205.127.201
registry-1.docker.io. 3591 IN A 34.237.244.67
registry-1.docker.io. 3591 IN A 52.55.124.246
registry-1.docker.io. 3591 IN A 52.72.252.48
registry-1.docker.io. 3591 IN A 34.203.135.183
registry-1.docker.io. 3591 IN A 52.202.132.224
registry-1.docker.io. 3591 IN A 54.86.228.181
registry-1.docker.io. 3591 IN A 54.197.112.205
;; Query time: 347 msec
;; SERVER: 10.44.0.1#53(10.44.0.1)
;; WHEN: Wed Mar 02 17:25:26 CST 2022
;; MSG SIZE rcvd: 177
(base) /etc $ dig registry-1.docker.io @192.168.64.1
; <<>> DiG 9.10.6 <<>> registry-1.docker.io @192.168.64.1
;; global options: +cmd
;; connection timed out; no servers could be reached
(base) /etc $
If I stop the VPN and run dig on the host, I get the following output showing that DNS via 192.168.64.1 succeeds.
(base) /etc $ dig registry-1.docker.io
; <<>> DiG 9.10.6 <<>> registry-1.docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39523
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 4, ADDITIONAL: 7
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 600 IN A 54.86.228.181
registry-1.docker.io. 600 IN A 52.72.252.48
registry-1.docker.io. 600 IN A 174.129.220.74
registry-1.docker.io. 600 IN A 34.237.244.67
registry-1.docker.io. 600 IN A 52.205.127.201
registry-1.docker.io. 600 IN A 52.202.132.224
registry-1.docker.io. 600 IN A 52.200.37.142
registry-1.docker.io. 600 IN A 52.203.238.92
;; AUTHORITY SECTION:
docker.io. 2920 IN NS ns-1168.awsdns-18.org.
docker.io. 2920 IN NS ns-513.awsdns-00.net.
docker.io. 2920 IN NS ns-1827.awsdns-36.co.uk.
docker.io. 2920 IN NS ns-421.awsdns-52.com.
;; ADDITIONAL SECTION:
ns-1168.awsdns-18.org. 143919 IN A 205.251.196.144
ns-421.awsdns-52.com. 170410 IN A 205.251.193.165
ns-513.awsdns-00.net. 132154 IN A 205.251.194.1
ns-1168.awsdns-18.org. 143919 IN AAAA 2600:9000:5304:9000::1
ns-1827.awsdns-36.co.uk. 171777 IN AAAA 2600:9000:5307:2300::1
ns-421.awsdns-52.com. 172051 IN AAAA 2600:9000:5301:a500::1
ns-513.awsdns-00.net. 132154 IN AAAA 2600:9000:5302:100::1
;; Query time: 6 msec
;; SERVER: 202.96.134.133#53(202.96.134.133)
;; WHEN: Wed Mar 02 17:25:56 CST 2022
;; MSG SIZE rcvd: 466
(base) /etc $ dig registry-1.docker.io @192.168.64.1
; <<>> DiG 9.10.6 <<>> registry-1.docker.io @192.168.64.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21844
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 752 IN A 52.72.252.48
registry-1.docker.io. 752 IN A 174.129.220.74
registry-1.docker.io. 752 IN A 34.237.244.67
registry-1.docker.io. 752 IN A 52.205.127.201
registry-1.docker.io. 752 IN A 52.202.132.224
registry-1.docker.io. 752 IN A 52.200.37.142
registry-1.docker.io. 752 IN A 52.203.238.92
registry-1.docker.io. 752 IN A 54.86.228.181
;; Query time: 3 msec
;; SERVER: 192.168.64.1#53(192.168.64.1)
;; WHEN: Wed Mar 02 17:25:59 CST 2022
;; MSG SIZE rcvd: 177
Why does DNS resolution behave this way with respect to the VPN? How can I make DNS work while the VPN is running?
When you connect to a VPN, all your traffic is routed via the VPN tunnel, and it can't reach 192.168.64.1 because the router at the other end of the VPN doesn't know where this address is:
; <<>> DiG 9.10.6 <<>> registry-1.docker.io @192.168.64.1
;; connection timed out; no servers could be reached
This is expected behavior, so you need to set up a route to 192.168.64.0/24 so that this traffic doesn't end up in the VPN tunnel.
You can read how to do this here and here.
The simplest one will look like route add -host 192.168.64.1 my.local.gateway.ip, which adds a host route to 192.168.64.1 via the specific gateway my.local.gateway.ip.
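On macOS the same idea can also be expressed as an interface route, since 192.168.64.1 sits on the bridge100 interface that minikube created (a sketch only, assuming the interface name from the ifconfig output above; your interface or local gateway may differ):
sudo route -n add -net 192.168.64.0/24 -interface bridge100
Re-adding this route after the VPN comes up keeps DNS queries to 192.168.64.1 on the local bridge instead of sending them into the tunnel.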

Is there support for hmac-md5-96 in setkey (ipsec-tools)?

I want to use the "hmac-md5-96" algorithm to create Security Associations on the client side. I am using setkey from ipsec-tools. While adding the SAD/SPD entries, it reports a syntax error and is unable to recognize hmac-md5-96.
I have also tried keyed-md5, which is not supported either.
setkey -c << EOF
add $pcscf $ue esp $spi_uc -m transport -E aes-cbc $ck -A hmac-md5-96 "1234567890123456" ;
spdadd $pcscf/32[$port_ps] $ue/32[$port_uc] tcp -P in ipsec esp/transport//require ;
spdadd $pcscf/32[$port_ps] $ue/32[$port_uc] udp -P in ipsec esp/transport//require ;
EOF
Use ip xfrm state add instead of setkey, i.e.:
ip xfrm state add src $pcscf dst $ue proto esp spi $spi_uc enc "cbc(aes)" $ck auth-trunc "hmac(md5)" "1234567890123456" 96 mode transport
For some dummy parameters it creates the following SAD entry:
src 11.22.33.44 dst 22.33.44.55
proto esp spi 0x00000457 reqid 0 mode transport
replay-window 0
auth-trunc hmac(md5) 0x31323334353637383930313233343536 96
enc cbc(aes) 0x3131313131313131313131313131313131313131313131313131313131313131
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
sel src 0.0.0.0/0 dst 0.0.0.0/0
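For completeness, the two spdadd lines have rough ip xfrm policy equivalents; this is only a sketch using the same placeholder variables as above and hasn't been tested against the original setup:
ip xfrm policy add src $pcscf/32 dst $ue/32 proto tcp sport $port_ps dport $port_uc dir in tmpl proto esp mode transport
ip xfrm policy add src $pcscf/32 dst $ue/32 proto udp sport $port_ps dport $port_uc dir in tmpl proto esp mode transport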
Good luck!

Error when running an ip6tables command

I am facing an issue when running the command
ip6tables -A DDoS -j DROP
It results in errors like:
root@TimeProvider:~# ip6tables -A DDoS -j DROP
ip6tables: Invalid argument.
Run `dmesg' for more information.
root@TimeProvider:~# dmesg | tail -n 20
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ip6_tables: limit match: invalid size 40 != 48
ADDRCONF(NETDEV_UP): mgmt0: link is not ready
mgmt0: Link is up - 1000/Full
ADDRCONF(NETDEV_CHANGE): mgmt0: link becomes ready
TMAC switching to SyncE slave mode on PHY 0
TMAC switching to SyncE master mode on PHY 1
mgmt0: no IPv6 routers present
TMAC switching to SyncE slave mode on PHY 0
TMAC switching to SyncE master mode on PHY 1
eth0: no IPv6 routers present
eth1: no IPv6 routers present
ip6_tables: limit match: invalid size 40 != 48
Is this a syntax error in the command, or a system error?

DTrace to get write size distribution by process

I'm trying to get write size distribution by process. I ran:
sudo dtrace -n 'sysinfo:::writech { @dist[execname] = quantize(arg0); }'
and got the following error:
dtrace: invalid probe specifier sysinfo:::writech...
This is Mac OS X. Please help.
The error message is telling you that Mac OS X doesn't support the sysinfo::: provider. Perhaps you meant to use one of these?
# dtrace -ln sysinfo::writech:
ID PROVIDER MODULE FUNCTION NAME
dtrace: failed to match sysinfo::writech:: No probe matches description
# dtrace -ln sysinfo:::
ID PROVIDER MODULE FUNCTION NAME
dtrace: failed to match sysinfo:::: No probe matches description
# dtrace -ln 'syscall::write*:'
ID PROVIDER MODULE FUNCTION NAME
147 syscall write entry
148 syscall write return
381 syscall writev entry
382 syscall writev return
933 syscall write_nocancel entry
934 syscall write_nocancel return
963 syscall writev_nocancel entry
964 syscall writev_nocancel return
The following script works for me:
# dtrace -n 'syscall::write:entry { @dist[execname] = quantize(arg0) }'
dtrace: description 'syscall::write:entry ' matched 1 probe
^C
activitymonitor
value ------------- Distribution ------------- count
2 | 0
4 |######################################## 4
8 | 0
Activity Monito
value ------------- Distribution ------------- count
2 | 0
4 |######################################## 6
8 | 0
...
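One caveat, as an aside: in the syscall provider, arg0 of write:entry is the file descriptor, not the number of bytes, so the aggregation above quantizes fd values. If the goal is really the write size, quantizing arg2 (the nbytes argument of write(2)) should give the intended distribution, e.g.:
# dtrace -n 'syscall::write*:entry { @dist[execname] = quantize(arg2); }'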

PureFTPd passive port range doesn't deliver the listening address to the client

I'm trying to configure my PureFTPd server behind a firewall to act as a passive FTP/TLS server.
Machines involved:
Server: 192.168.3.220 (internal network, default route to the router at 192.168.3.1)
Configuration: PureFTPd with PassivePorts 64000 64300 and MasqueradeAddress ww.xx.yy.zz (this address is configured on the router)
Router: internal address 192.168.3.1; DNAT rule (PREROUTING chain) translating ww.xx.yy.zz tcp/21,64000:64300 to 192.168.3.220; FORWARD chain accepting these packets in both directions
Client1: external server with a fixed public IP
Client2: a NATed machine somewhere on a 192.168.5.x network
Scenario 1:
- Client1: connect OK, login OK, command 'ls' is accepted; after PASV:
---> PASV
GNUTLS: REC[0x28ecce0]: Sending Packet[9] Application Data(23) with length: 6
GNUTLS: REC[0x28ecce0]: Sent Packet[10] Application Data(23) with length: 37
GNUTLS: ASSERT: gnutls_buffers.c:322
GNUTLS: ASSERT: gnutls_buffers.c:322
GNUTLS: REC[0x28ecce0]: Expected Packet[9] Application Data(23) with length: 65536
GNUTLS: REC[0x28ecce0]: Received Packet[9] Application Data(23) with length: 64
GNUTLS: REC[0x28ecce0]: Decrypted Packet[9] Application Data(23) with length: 31
<--- 200 Protection set to Private
---> LIST
---> ABOR
An interesting thing: the 227 reply from the server, which I can see in PureFTPd's paranoid log, never arrives at the client - only the "200 Protection set to Private" does.
The client then waits about 30 seconds and reconnects using ACTIVE(!!) mode to run ls.
Scenario 2:
- using Client2 (sorry for the Czech locale):
---> USER xxxxxx
<--- 331 Password required for xxxxxx
---> PASS XXXX
<--- 230 User xxxxxx logged in
---> PWD
<--- 230 Ls oi a:2013-01-03 21:19:00
---> PBSZ 0
<--- 257 "/" is the current directory
---> PROT P
<--- 200 PBSZ 0 successful
---> PASV
<--- 200 Protection set to Private
---> LIST
---> ABOR
---- Přerušený datový socket bude uzavřen (means closing data socket)
---- Řídicí socket bude uzavřen (means closing control socket)
---- Pasivní režim bude vypnut (means Passive will be turned off)
---- dns cache hit
---- Navazuje se spojení na ftp1.xxxxxxxxx.cz (ww.xx.yy.zz) port 21 (means connecting to ftp1.xxxxxxxxx.cz (ww.xx.yy.zz) port 21)
<--- 220 ww.xx.yy.zz FTP server ready
...
---> USER xxxxxx
<--- 331 Password required for xxxxxx
---> PASS XXXX
<--- 230 User xxxxxx logged in
---> PWD
<--- 230 Ls oi a:2013-01-03 21:19:22
---> PBSZ 0
<--- 257 "/" is the current directory
---> PROT P
<--- 200 PBSZ 0 successful
---> PORT 192,168,5,xx,185,136
<--- 200 Protection set to Private
---> LIST
<--- 500 Illegal PORT command
---- Closing data socket
---> QUIT
ls: Nepřekonatelná chyba: 500 Illegal PORT command (means ls: fatal error: 500 Illegal PORT command)
<--- 425 Unable to build data connection: Connection refused
The iptables accounting counters on the NAT machine don't increase for ports 64000:64300, so I expect no passive connection is being made at all.
So... the real problem was the second 230 reply:
---> PWD
<--- 230 Ls oi a:2013-01-03 21:19:22
This is a known issue with PureFTPd 1.3.3a (the default in Debian squeeze).
The solution was to compile PureFTPd from wheezy (1.3.4a-2); now everything works fine.
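One way to get the wheezy build onto a squeeze box looks roughly like this (a sketch; the repository URL is an assumption, since wheezy packages have since moved to archive.debian.org, and the resulting .deb file names depend on the architecture):
echo 'deb-src http://archive.debian.org/debian wheezy main' >> /etc/apt/sources.list
apt-get update
apt-get build-dep pure-ftpd
apt-get source -b pure-ftpd/wheezy
dpkg -i pure-ftpd_1.3.4a-2*.deb pure-ftpd-common_1.3.4a-2*.deb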
Thank you all who tried to figure out what's going on.
