Removing the tc config on a particular interface

I am new to the tc command.
I am writing a test script that adds delays to an interface. This is being done using Python and the Fabric API.
So the script will do something like (netem delay needs a time value; 100ms here is just an example):
sudo tc qdisc add dev eth1 root netem delay 100ms
And at the end of the script we would do:
sudo tc qdisc del dev eth1 root netem
I also want to make sure, at the very beginning, that there is no existing tc configuration on the system, so I wanted to run the delete command before the whole script starts.
But that gives me an error if no tc config has been done:
abc@abcvmm:~$ sudo tc qdisc del dev eth1 root netem
RTNETLINK answers: Invalid argument
Is there a way to delete the qdisc only if there is an existing tc config, and not otherwise?

Your first step would be: tc qdisc del dev eth1 root
and then: tc qdisc add dev eth1 root handle 1: htb default 100
Check out my code in my git repo: https://github.com/Puneeth-n/tcp-eval/blob/development/topology/build_net.py
I think I have already implemented what you are trying to do (using Fabric), or maybe you can use parts of the code.
The code makes sure that there is no error when you try to delete a non-existent qdisc.
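If you then want the delay back under that new htb root, a minimal sketch (the class id matches the default 100 above; the rate and delay values are just assumptions):
tc class add dev eth1 parent 1: classid 1:100 htb rate 1gbit
tc qdisc add dev eth1 parent 1:100 handle 10: netem delay 100ms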

I think I found a way of doing that.
I can use something like:
netem_exists = run("tc qdisc show dev eth1 | grep netem | awk '{print $2}'")
if netem_exists == "netem":
    print "Delete"
    run("sudo tc qdisc del dev eth1 root netem")
else:
    print "No delete"

Related

Use unshare to start process in existing net namespace?

I want to launch a process with isolated namespaces for PID, UTS, IPC, and NET. However, for the networking to be set up correctly inside the process, the network namespace has to be configured on the host with the veth adapters (so that they appear for the isolated process). So I have the network namespace set up using ip netns add vnet1. I want to use that network namespace for my process as well as give it PID isolation, etc. I know I can use ip netns exec to execute a process in that namespace, but I also want the other namespace isolation. Is there a way to do that with unshare, or do I need to take another approach?
When you run ip netns add vnet1, it creates an object under /run/netns, so in this case /run/netns/vnet1 will be created.
Now, when you unshare your program, you can specify the path to an existing namespace. So, maybe like this:
$ ip netns add vnet1
$ ls /run/netns/
vnet1
$ unshare --net=/run/netns/vnet1 --pid --uts --ipc --fork bash
$ ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
You can see that there are only the lo and sit0 devices and none of the host's other interfaces, meaning that our bash process is now in the vnet1 network namespace.
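Note that the namespace starts out empty, so the process has no connectivity yet. One way to populate vnet1 from the host is a veth pair; a sketch, where the interface names and addresses are assumptions:
ip link add veth0 type veth peer name veth1
ip link set veth1 netns vnet1
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec vnet1 ip addr add 10.0.0.2/24 dev veth1
ip netns exec vnet1 ip link set veth1 up
ip netns exec vnet1 ip link set lo up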

X11 Forwarding from WSL2 fails

I followed instructions on setting up X11 forwarding from my WSL2 to the host on Windows 10 with VcXsrv based on this answer: How to set up working X11 forwarding on WSL2
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
export LIBGL_ALWAYS_INDIRECT=1
I allowed public access while starting up VcXsrv, and also switched off my firewall just to test if it worked.
mustafa@DESKTOP-MGJG0RL:~$ xeyes
Error: Can't open display: 172.25.32.1:0
Is there a step that I'm missing?
I had the same issue. In my case the problem was that I had disabled the Windows firewall for private networks, assuming that the network with the WSL 2 virtual machine would be treated as a private network. It turned out that this network is handled as a public network, so disabling the firewall for private networks did not help. Short answer: set up a proper firewall rule instead of taking the shortcut of disabling the firewall for a quick try.
Instead of disabling the firewall, try adding this rule (admin PowerShell):
New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow
I was able to resolve it:
In the sshd_config file, set:
X11UseLocalhost yes
X11Forwarding yes
Adapted from this answer https://superuser.com/a/1476160/1014728
export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2}'):0
Use VcXsrv and set -ac in the additional parameters field.
Run xhost + if you get a "No protocol specified" error.
Run xeyes to test.
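Before digging into X11 itself, it can help to confirm that VcXsrv is reachable at all: display :0 listens on TCP port 6000, so from inside WSL2 (assuming netcat is installed):
nc -vz $(awk '/nameserver / {print $2; exit}' /etc/resolv.conf) 6000
If that connection is refused, the problem is the firewall or the VcXsrv settings, not your DISPLAY variable.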

Retrieving Container IP address after `lxc start`

I have the following script I'm running in cloud-init on my cloud provider. It grabs a container from another host on my network, starts it, and then attempts to forward a port on the host to the container:
lxc init ...
lxc remote add gateway 10.132.98.1:8099 --accept-certificate --password securpwd
lxc copy gateway:build-slave build-slave
lxc start build-slave
CONTAINER_IP=$(lxc list "build-slave" -c 4 | awk '!/IPV4/{ if ( $2 != "" ) print $2}')
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 2200 -j DNAT --to ${CONTAINER_IP}
The only problem is that there is an arbitrary delay between when lxc start returns and when the IPv4 info is available. My current solution is to add a sleep 5s after the lxc start command, but I'm worried that if my server is under load, it might actually take longer than 5 seconds before the container is initialized.
Is there a better solution that doesn't rely on an arbitrary wait period?
As Lawrence pointed out in the comments, LXD provides a "proxy" device that can be set on the container. This way, I don't have to know the container's IP address in order to set up the correct iptables entry; LXD will instead set up my proxy rule for me when the container I specify starts.
I configured this like so:
DROPLET_PUB_IP=$(ip -f inet addr show ens3 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
lxc config device add build-slave ssh-slave proxy listen=tcp:${DROPLET_PUB_IP}:2200 connect=tcp:localhost:22
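If you do need the container's address itself (for iptables rules like the one above), polling LXD until an IPv4 address shows up avoids the fixed sleep; a sketch with an assumed 30-second limit:
CONTAINER_IP=""
for i in $(seq 1 30); do
    CONTAINER_IP=$(lxc list build-slave -c 4 --format csv | cut -d' ' -f1)
    [ -n "${CONTAINER_IP}" ] && break
    sleep 1
done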

dnsmasq does not resolve directly specified name

I am having trouble with dnsmasq: it does not resolve a directly defined name.
$ sudo dnsmasq -d -A /test/172.17.0.2 --log-queries &
dnsmasq: started, version 2.48 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus no-I18N DHCP TFTP "--bind-interfaces with SO_BINDTODEVICE"
dnsmasq: read /etc/hosts - 2 addresses
$ ping test
ping: unknown host test
What is wrong?
You have only set up a server. Your system's resolver (which is used by ping, your browser, and all other applications on your machine) must first know that this server exists and that it should be used. This can be done by modifying /etc/resolv.conf. First, make sure this line is in that file:
nameserver 127.0.0.1
But beware: modern systems auto-generate this file and potentially overwrite your changes. So watch out for "DO NOT EDIT THIS FILE BY HAND" comments in that file and instead do what's recommended in the file.
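You can check that the dnsmasq instance itself answers, independently of the system resolver, by querying it directly; with the -A /test/172.17.0.2 option above it should return the configured address:
$ dig @127.0.0.1 test +short
172.17.0.2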

Adding /etc/hosts entry to host machine on vagrant up

Is it possible for one to modify files on the host machine during the vagrant up process? For example, adding an entry to the host machine's /etc/hosts file to avoid having to do this manually?
The solution is to use vagrant-hostsupdater
vagrant plugin install vagrant-hostsupdater
This plugin adds an entry to your /etc/hosts file on the host system.
On up and reload commands, it tries to add the information if it's not already present in your hosts file. If it needs to be added, you will be asked for an administrator password, since it uses sudo to edit the file.
On halt, suspend, and destroy, those entries will be removed again.
OK, so now the guy sitting next to you at the coffee shop can most likely ssh to port 2222 (EDIT: changed on newer versions of Vagrant, unless you explicitly enable external access) on your computer, log in as vagrant with the insecure key, and modify your Vagrantfile, since it's mounted read-write and owned by the vagrant user. He can then insert arbitrary Ruby code to run in the host environment, and now it looks like he's got root access on the host environment as well. Brilliant.
I hope people run firewalls on their development machines.
EDIT:
So after writing the above, I bugged the author of Vagrant, the default has been changed so that port 2222 is not open by default on the external interface. Big improvement (though still something to be careful of, since external access is often opened up for various reasons).
So, having put in effort to get the situation fixed since making this comment, I'm now getting downvotes, apparently because the comment is out of date. Damn. It was correct when written.
EDIT:
In response to Steve Buzonas: the point is that if there's any likelihood of the virtual machine being compromised, then giving the vagrant up process elevated permissions represents a serious risk to the security of the host environment, and being able to modify the host's /etc/hosts file is dangerous even without general root access. As I've pointed out, Vagrant's approach to keeping the VM secure is not particularly rigorous.
I don't want to depend on some plugin to Vagrant; it should be a standard feature in Vagrant! Until then, I use a shell script to propagate new VMs in my cluster. The key lines are:
# Obtain the host key based on the IP address and add it to the known_hosts list
ssh-keyscan -t ecdsa ${START}.${OFFSET} >> /home/vagrant/.ssh/known_hosts
# Obtain the hostname (which you might not know yet) via the IP address
EXTERNAL_HOSTNAME=`ssh ${START}'.'${OFFSET} 'hostname'`
# Obtain the key of the other new VM based on its hostname and also add it to known_hosts
ssh-keyscan -t ecdsa ${EXTERNAL_HOSTNAME} >> /home/vagrant/.ssh/known_hosts
# Now you have the IP address and the corresponding hostname;
# add them to /etc/hosts without being asked "yes/no"
echo ${START}'.'${OFFSET}' '${EXTERNAL_HOSTNAME} >> /etc/hosts
Here IPADDRESS is the IP address of the master VM in a cluster whose slave-node VMs have succeeding IP addresses (IPADDRESS = IPADDRESS + 1 until no successful ping):
IPADDRESS=`ip addr show eth1 | grep 'inet ' | cut -d ' ' -f 6 | cut -d '/' -f1`
START=`echo ${IPADDRESS} | cut -d '.' -f1,2,3`
OFFSET=`echo ${IPADDRESS} | cut -d '.' -f4`
Then I loop through the next IP addresses until there are no more successful pings, as sketched below.
I do not want to hardcode anything (IP address or hostname); the script should find everything out itself.
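A sketch of that loop (the stop-after-first-failed-ping behavior is my reading of the description above):
NEXT=$((OFFSET + 1))
# Probe successive addresses; stop at the first one that does not answer a ping
while ping -c 1 -W 1 ${START}.${NEXT} > /dev/null 2>&1; do
    ssh-keyscan -t ecdsa ${START}.${NEXT} >> /home/vagrant/.ssh/known_hosts
    EXTERNAL_HOSTNAME=`ssh ${START}.${NEXT} 'hostname'`
    ssh-keyscan -t ecdsa ${EXTERNAL_HOSTNAME} >> /home/vagrant/.ssh/known_hosts
    echo ${START}.${NEXT}' '${EXTERNAL_HOSTNAME} >> /etc/hosts
    NEXT=$((NEXT + 1))
done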
Resulting /etc/hosts file (after sort /etc/hosts | uniq > /tmp/hosts.uniq && sudo sh -c 'mv /tmp/hosts.uniq /etc/hosts'):
[vagrant@master ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1 master.RHEL70.local master
192.168.1.50 master.RHEL70.local
192.168.1.51 node01.RHEL70.local
192.168.1.52 node02.RHEL70.local
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Previously Vagrant edited my /etc/hosts file without me knowing how, but when I reinstalled Windows and Vagrant, this feature disappeared.
