How can I simulate packet loss using tc netem?

I am trying to simulate 5% packet loss using the tc tool on server port 1234. Here are my steps:
sudo tc qdisc del dev eth0 root
sudo tc qdisc add dev eth0 root handle 1: prio
sudo tc filter add dev eth0 parent 1: protocol ip prio 1 u32 flowid 1:1 match ip dport 1234 0xffff
sudo tc qdisc add dev eth0 parent 1:1 handle 1: netem loss 5%
There are no errors during the above commands. But when I send any TCP traffic to that port, there is no packet loss observed.
What am I doing wrong in the above commands?
Any help is appreciated.

See https://serverfault.com/a/841865/342799 for a similar case.
Commands I have in my testing rig to drop 5.5% of packets:
# tc qdisc add dev eth0 root handle 1: prio priomap 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
# tc qdisc add dev eth0 parent 1:1 handle 10: netem loss 5.5% 25%
# DST_IP=1.2.3.4/32
# tc filter add \
dev eth0 \
parent 1: \
protocol ip \
prio 1 \
u32 \
match ip dst $DST_IP \
flowid 1:1
To confirm, run:
# ping -f -c 1000 $DST_IP
before and after this setup.
Note: Almost all hosting providers start throttling your traffic if you do a lot of flood pings.
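Applied to the original question's port match, a minimal sketch along the same lines (untested; interface and port taken from the question) would give the netem qdisc its own handle instead of reusing the root's 1:, and keep unmatched traffic out of band 1:1 via the priomap:
# tc qdisc add dev eth0 root handle 1: prio priomap 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
# tc qdisc add dev eth0 parent 1:1 handle 10: netem loss 5%
# tc filter add \
dev eth0 \
parent 1: \
protocol ip \
prio 1 \
u32 \
match ip dport 1234 0xffff \
flowid 1:1
Only traffic classified into 1:1 (here, destination port 1234) passes through the netem qdisc; everything else stays in an untouched band.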

Related

Adding a multicast route to an interface in OSX

I have a VM running in Fusion that I want to hit by routing a specific endpoint address through the virtual ethernet interface (multicast DNS, in particular). First I was sending packets and inspecting with Wireshark, noticing that nothing was getting through. Then I thought to check the routing table:
$ netstat -rn | grep vmnet8
Destination Gateway Flags Refs Use Netif Expire
172.16.12/24 link#29 UC 2 0 vmnet8 !
172.16.12.255 ff:ff:ff:ff:ff:ff UHLWbI 0 35 vmnet8 !
But unlike other interfaces,
Destination Gateway Flags Refs Use Netif Expire
224.0.0.251 a1:10:5e:50:0:fb UHmLWI 0 732 en0
224.0.0.251 a1:10:5e:50:0:fb UHmLWI 0 0 en8
There was no multicast route. So I added it:
$ sudo route add -host 224.0.0.251 -interface vmnet8
add host 224.0.0.251: gateway vmnet8
And so it was true
$ netstat -rn | grep vmnet8
Destination Gateway Flags Refs Use Netif Expire
172.16.12/24 link#29 UC 2 0 vmnet8 !
172.16.12.255 ff:ff:ff:ff:ff:ff UHLWbI 0 35 vmnet8 !
224.0.0.251 a1:10:5e:50:0:fb UHmLS 0 13 vmnet8
I was also sure to check the interface flags to ensure it had been configured to support multicast
$ ifconfig vmnet8
vmnet8: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
ether 00:70:61:c0:11:08
inet 172.16.12.1 netmask 0xffffff00 broadcast 172.16.12.255
Still, no multicast packets I send are getting through. I noted that the other interfaces' multicast routes have different flags than the default ones given to my added route, namely UHmLWI vs UHmLS. The differences I can see seem insignificant. From man netstat:
I RTF_IFSCOPE Route is associated with an interface scope
S RTF_STATIC Manually added
W RTF_WASCLONED Route was generated as a result of cloning
Then again, I'm not claiming to be a routing expert. Perhaps a multicast route entry must be made somehow differently?
You'll note that the Use column is non-zero, despite no packets showing in a sniffer.
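One way to narrow this down is to capture on the interfaces directly and confirm where the mDNS packets (UDP port 5353) actually go once the route is in place; a quick diagnostic sketch:
$ sudo tcpdump -i vmnet8 -n udp port 5353
$ sudo tcpdump -i en0 -n udp port 5353
If the packets keep showing up on en0 but never on vmnet8, the new route is being bypassed rather than merely flagged differently.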

How to disable and enable internet connection from within Docker container?

I am clearing /etc/resolv.conf to disable the network:
sudo mv /etc/resolv.conf /etc/resolv_backup.conf
sudo touch /etc/resolv.conf
Then to enable network:
sudo mv /etc/resolv_backup.conf /etc/resolv.conf
However, the resource is busy and I cannot execute these commands.
I want to disable the internet from within the container, not by using:
docker network disconnect [OPTIONS] NETWORK CONTAINER
which does this from the server on which the container is deployed.
I am using Alpine.
From inside of a container, you are typically forbidden from changing the state of the network:
$ docker run -it --rm alpine:latest /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
929: eth0@if930: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip link set eth0 down
ip: ioctl 0x8914 failed: Operation not permitted
This is intentional, for security, to prevent applications from escaping the container sandbox. If you do not need that isolation for your containers (which I recommend against giving up), you can run your container with additional network capabilities:
$ docker run -it --rm --cap-add NET_ADMIN alpine:latest /bin/sh
/ # netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
933: eth0@if934: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ip link set eth0 down
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
When you try to bring the network back up, you'll also need to set up the default route again to be able to connect to external networks:
/ # ip link set eth0 up
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
/ # netstat -nr
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
/ # route add default gw 172.17.0.1
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=58 time=12.518 ms
64 bytes from 8.8.8.8: seq=1 ttl=58 time=11.481 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 11.481/11.999/12.518 ms
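If you would rather not touch the link state or the routes at all, an alternative sketch is to block traffic with a firewall rule instead (this still needs --cap-add NET_ADMIN, and on Alpine the iptables package has to be installed first; it also assumes the host kernel has the usual netfilter support):
/ # apk add --no-cache iptables
/ # iptables -A OUTPUT -o eth0 -j DROP
/ # iptables -D OUTPUT -o eth0 -j DROP
The first rule drops everything leaving eth0, which effectively disables the internet for the container; deleting the rule restores connectivity without having to recreate the default route.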
First of all, clearing resolv.conf is not the proper way to disable the network for your container. That only breaks name resolution; you can still use IP connectivity.
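To illustrate: with resolv.conf emptied, a lookup by name fails while a literal IP is still reachable, so only DNS is affected:
/ # ping -c 1 google.com
/ # ping -c 1 8.8.8.8
The first command cannot resolve the name; the second one still goes out over eth0.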
To disable the network you should use the proper command, depending on whether you are using systemd or SysV init. Something similar to this should work (it depends on your distro):
# /etc/init.d/networking stop
# systemctl stop networking
Hope this helps! :-)

using tc control command in bash script gives unexplainable results

I send frames in packets from a client to a server, and on the server I want to shape my traffic.
I use this script to control the traffic: first down to 80 kbit, then after 10 seconds down to 40 kbit. (I know this is ridiculously low; I usually use bigger values.)
#!/bin/bash
datenrate=80
datenrate2=40
echo "setting datarate to ${datenrate}"
touch started.info
sudo tc qdisc del dev ens3 root
sudo tc qdisc add dev ens3 handle 1: root htb default 11
sudo tc class add dev ens3 parent 1: classid 1:1 htb rate ${datenrate}kbit
sudo tc class add dev ens3 parent 1:1 classid 1:11 htb rate ${datenrate}kbit
echo "worked"
MSECONDS=$(($(date +%s%N)/1000000))
STOPTIME=0
while :
do
    STOPTIME=$((($(date +%s%N)/1000000) - $MSECONDS))
    if [ $STOPTIME -ge 10000 ]
    then
        sudo tc qdisc del dev ens3 root
        sudo tc qdisc add dev ens3 handle 1: root htb default 11
        sudo tc class add dev ens3 parent 1: classid 1:1 htb rate ${datenrate2}kbit
        sudo tc class add dev ens3 parent 1:1 classid 1:11 htb rate ${datenrate2}kbit
        touch calledthrottle.info
        break
    fi
done
echo "10 sec over - setting up a datarate drop to ${datenrate2} kbit"
while :
do
    STOPTIME=$((($(date +%s%N)/1000000) - $MSECONDS))
    if [ $STOPTIME -ge 20000 ]
    then
        sudo tc qdisc del dev ens3 root
        echo "set to normal"
        break
    fi
done
touch ended.info
On my client I generate a logfile which I plot with gnuplot, and I calculate the average upload speed on both the server and the client. In this case it is 2740 kbit/s. Am I not using the tc tool correctly?
Image of my results generated with gnuplot (plot of upload speed over time).
tc qdisc show dev ens3
gives me
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
which I'm not able to delete with
sudo tc qdisc del dev ens3 root
It would be kind if someone could point me in the right direction and explain why there is such a high upload rate, i.e. why frames come through with a far higher throughput than the shaped rate. Thank you.
Updated as advised:
OK, an upload rate of 80 kbit in tc gives me around 80*8 = 640 kilobit/s. That still does not explain the fluctuation of the incoming packet rate.
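A sketch of something worth trying (untested against this exact setup): change the rate in place instead of deleting and re-adding the whole root qdisc, and verify after each step that the htb hierarchy is really attached to ens3:
sudo tc class change dev ens3 parent 1: classid 1:1 htb rate 40kbit
sudo tc class change dev ens3 parent 1:1 classid 1:11 htb rate 40kbit
sudo tc -s qdisc show dev ens3
sudo tc -s class show dev ens3
The pfifo_fast 0: root shown above is just the kernel's default qdisc, which reappears once the htb root is deleted, so not being able to delete it is expected. If tc qdisc show still reports pfifo_fast right after the add commands, the htb hierarchy was never installed on ens3, which would explain the unshaped throughput.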

Add route in VPN connection Mac OS X

I have following routing table:
➜ ~ netstat -nr
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 192.168.0.1 UGSc 63 1 en0
default 10.255.254.1 UGScI 1 0 ppp0
10 ppp0 USc 2 4 ppp0
10.255.254.1 10.255.254.2 UHr 1 0 ppp0
92.46.122.12 192.168.0.1 UGHS 0 0 en0
127 127.0.0.1 UCS 0 0 lo0
127.0.0.1 127.0.0.1 UH 2 62144 lo0
169.254 link#4 UCS 0 0 en0
192.168.0 link#4 UCS 8 0 en0
192.168.0.1 c0:4a:0:2d:18:48 UHLWIir 60 370 en0 974
192.168.0.100 a0:f3:c1:22:1d:6e UHLWIi 1 228 en0 1174
How can I add a gateway (10.25.1.252) for a specific IP (10.12.254.9) inside the VPN?
I tried this command but with no luck:
sudo route -n add 10.12.0.0/16 10.25.1.252
But traceroute shows that it uses the default gateway:
~ traceroute 10.12.254.9
traceroute to 10.12.254.9 (10.12.254.9), 64 hops max, 52 byte packets
1 10.255.254.1 (10.255.254.1) 41.104 ms 203.766 ms 203.221 ms
Are you using Cisco AnyConnect? Here's a tidbit from https://supportforums.cisco.com/document/7651/anyconnect-vpn-client-faq
Q. How does the AnyConnect client enforce/monitor the
tunnel/split-tunnel policy?
A. AnyConnect enforces the tunnel policy in 2 ways:
1) Route monitoring and repair (e.g. if you change the route table), AnyConnect will restore it to what was provisioned.
2) Filtering (on platforms that support filter engines). Filtering ensures that even if you could perform some sort of route injection, the filters would block the packets.
Which I interpret as: whenever you change the routing table, the Cisco client resets it to what your VPN administrator configured.
Your best bet is to talk to your VPN administrator and ask them to add your route.
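Before going to the administrator, it may be worth checking which gateway and interface the system actually resolves for that destination right after adding the route (a quick diagnostic, reusing the addresses from the question):
route -n get 10.12.254.9
If the output reverts to the ppp0 default gateway moments after a seemingly successful route add, the VPN client is rewriting the routing table as described above.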

How to add a new qdisc in linux

I am trying to modify the Red Algorithm (http://en.wikipedia.org/wiki/Random_early_detection) for certain experiments.
After modifying the code, I loaded it into the kernel using the insmod command.
I verified that it loaded successfully using lsmod | grep red_new.
However, when I try to use the tc qdisc command it fails, giving the following error:
tc qdisc add dev eth0 root red_new limit 100 min 80 max 90 avpkt 10 burst 10 probability 1 bandwidth 200 ecn
unknown qdisc "red_new" hence option "limit" is unparsable
What could be the possible reason?
After running the ltrace command suggested by ymonad I get the following output:
strlen("red_new") = 7
strlen("red_new") = 7
strlen("red_new") = 7
strncpy(0x7fff6467ad10, "red_new", 15) = 0x7fff6467ad10
dlopen("./tc/q_red_new.so", 1) = 0x1abe030
dlsym(0x1abe030, "red_new_qdisc_util") = 0x7f62bdd240c0
memcpy(0x7fff6467ad48, "red_new\0", 8) = 0x7fff6467ad48
I ran the tc qdisc show to check if it was added but it hasn't.
tc qdisc show
qdisc mq 0: dev eth0 root
qdisc mq 0: dev eth1 root
qdisc mq 0: dev eth2 root
qdisc mq 0: dev eth3 root
According to the result of strace tc qdisc add dev eth0 root red_new, and the source of the tc command, it seems that tc is searching for $TC_LIB_DIR/q_red_new.so.
You have to create this shared object yourself. Here is a short set of instructions.
(1) Download source of iproute2 from following url, extract it, and cd to the folder.
https://wiki.linuxfoundation.org/networking/iproute2
(2) Copy q_red.c to q_red_new.c
$ cp tc/q_red.c tc/q_red_new.c
(3) Edit tc/q_red_new.c
Rename red_parse_opt, red_print_opt, and red_print_xstats to red_new_parse_opt and so on.
Additionally, you have to rename red_qdisc_util to red_new_qdisc_util and change the id and the other members:
struct qdisc_util red_new_qdisc_util = {
.id = "red_new",
.parse_qopt = red_new_parse_opt,
.print_qopt = red_new_print_opt,
.print_xstats = red_new_print_xstats,
};
(4) Configure and build q_red_new.so
$ ./configure
$ make TCSO=q_red_new.so
now you see that ./tc/q_red_new.so is created
(5) Run tc command with TC_LIB_DIR environment.
$ TC_LIB_DIR='./tc' tc qdisc add dev eth0 root red_new
UPDATE: here's how to know whether the tc command loaded q_red_new.so correctly.
If dlopen returns zero, then it failed to load ./tc/q_red_new.so.
If dlsym returns zero, then it failed to find red_new_qdisc_util inside q_red_new.so.
# export TC_LIB_DIR='./tc'
# ltrace ./tc/tc qdisc add dev eth0 root red_new limit 100 min 80 max 90 avpkt 10 burst 10 probability 1 bandwidth 200 ecn 2>&1 | grep red_new
.. OMITTED ..
dlopen("./tc/q_red_new.so", 1) = 0x12c1030
snprintf("red_new_qdisc_util", 256, "%s_qdisc_util", "red_new") = 18
dlsym(0x12c1030, "red_new_qdisc_util") = 0x7f1cf0d6cc40
.. OMITTED ..
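To avoid setting TC_LIB_DIR every time, the plugin can usually also be copied into tc's default plugin directory (commonly /usr/lib/tc, but the exact path depends on how iproute2 was built, so check your installation):
# cp ./tc/q_red_new.so /usr/lib/tc/
# tc qdisc add dev eth0 root red_new limit 100 min 80 max 90 avpkt 10 burst 10 probability 1 bandwidth 200 ecn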
