Use unshare to start process in existing net namespace? - bash

I want to launch a process with isolated namespaces for PID, UTS, IPC, and NET. However, to set up the networking correctly for that process, the network namespace has to be configured on the host with the veth adapters (so that they appear for the isolated process). So, I have the network set up using ip netns add vnet1. I want to use that network namespace for my process as well as give it PID isolation, etc. I know I can use ip netns exec to execute a process in that namespace, but I also want the other namespace isolation. Is there a way to do that with unshare, or do I need to take another approach?

When you run ip netns add vnet1, it creates an object under /run/netns, so in this case /run/netns/vnet1 will be created.
Now, when you unshare your program, you can specify the path to an existing namespace. So, maybe like this:
$ ip netns add vnet1
$ ls /run/netns/
vnet1
$ unshare --net=/run/netns/vnet1 --pid --uts --ipc --fork bash
$ ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
You can see that there is only lo (plus the automatic sit0 tunnel device) and none of the host's interfaces, meaning that our bash process is now in the vnet1 network namespace.
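To double-check from inside the new shell, you can ask iproute2 which named network namespace it is running in; a minimal sketch, assuming an iproute2 recent enough to have ip netns identify:
$ ip netns identify
vnet1
If it prints vnet1, the shell joined the existing namespace rather than receiving a fresh, empty one.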

Related

Retrieving Container IP address after `lxc start`

I have the following script I'm running in cloud-init on my cloud provider. It grabs a container from another host on my network, starts it, and then attempts to forward a port on the host to the container:
lxc init ...
lxc remote add gateway 10.132.98.1:8099 --accept-certificate --password securpwd
lxc copy gateway:build-slave build-slave
lxc start build-slave
CONTAINER_IP=$(lxc list "build-slave" -c 4 | awk '!/IPV4/{ if ( $2 != "" ) print $2}')
iptables -t nat -A PREROUTING -i ens3 -p tcp --dport 2200 -j DNAT --to ${CONTAINER_IP}
The only problem is that there is an arbitrary delay between when lxc start returns and when the IPv4 info is available. My current solution is to add sleep 5s after the lxc start command, but I'm worried that if my server is under load, it might actually take longer than 5 seconds for the container to be initialized.
Is there a better solution that doesn't rely on an arbitrary wait period?
As Lawrence pointed out in the comments, LXD provides a "proxy" device that can be set on the container. This way, I don't have to know the container's IP address in order to set up the correct iptables entry; LXD instead sets up my proxy rule for me when the container I specify starts.
I configured this like so:
DROPLET_PUB_IP=$(ip -f inet addr show ens3 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p')
lxc config device add build-slave ssh-slave proxy listen=tcp:${DROPLET_PUB_IP}:2200 connect=tcp:localhost:22
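To confirm the device was added, you can list it back with lxc config device show (the output shape below is approximate, with the actual droplet IP in place of the placeholder):
$ lxc config device show build-slave
ssh-slave:
  connect: tcp:localhost:22
  listen: tcp:<droplet-public-ip>:2200
  type: proxy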

dnsmasq does not resolve directly specified name

I'm having trouble with dnsmasq: it does not resolve a directly defined name.
$ sudo dnsmasq -d -A /test/172.17.0.2 --log-queries &
dnsmasq: started, version 2.48 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus no-I18N DHCP TFTP "--bind-interfaces with SO_BINDTODEVICE"
dnsmasq: read /etc/hosts - 2 addresses
$ ping test
ping: unknown host test
What is wrong?
You have only set up a server. Your system's resolver (which is used by ping, your browser, and all other applications on your machine) must first know that this server exists and that it should be used. This can be done by modifying /etc/resolv.conf. First, make sure this line is in that file:
nameserver 127.0.0.1
But beware: modern systems auto-generate this file and may overwrite your changes. So watch out for "DO NOT EDIT THIS FILE BY HAND" comments in that file and instead do what's recommended there.
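Independently of /etc/resolv.conf, you can verify that the dnsmasq instance itself is answering by querying it directly (assuming dig is installed):
$ dig @127.0.0.1 test +short
172.17.0.2
If this returns the address but ping still fails, the problem is the resolver configuration, not dnsmasq.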

How to map ip:port to a new ip or a domain in mac

I am using macOS 10.12 and I want to do ip:port mapping,
e.g. 127.0.0.1:32769 to 10.0.0.1,
so that I can then add 10.0.0.1 somedomain.com to my /etc/hosts.
I did some searching and found solutions to this question in this post:
https://serverfault.com/questions/102416/iptables-equivalent-for-mac-os-x/673551#673551
but only the command from the newest answer there works for me. Every time I use this command, the system replies:
$ sudo ifconfig lo0 10.0.0.2 alias
$ echo "rdr pass on lo0 inet proto tcp from any to 10.0.0.2 port 80 -> 127.0.0.1 port 32771" | sudo pfctl -ef -
pfctl: Use of -f option, could result in flushing of rules
present in the main ruleset added by the system at startup.
See /etc/pf.conf for further details.
No ALTQ support in kernel
ALTQ related functions disabled
pfctl: pf already enabled
How can I prevent the rules from being flushed? Or is there an easier way to get this working?
Thanks a lot
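One way to sidestep the flushing (a sketch, not a verified recipe; the file path and second mapping below are assumptions): since pfctl -f replaces whatever an earlier -f loaded, keep every redirect rule in a single file and reload that file whenever you add a mapping, so no rule you care about is lost:
# ~/pf-rdr.conf (assumed path): one line per mapping, all kept together
rdr pass on lo0 inet proto tcp from any to 10.0.0.1 port 80 -> 127.0.0.1 port 32769
rdr pass on lo0 inet proto tcp from any to 10.0.0.2 port 80 -> 127.0.0.1 port 32771

$ sudo pfctl -ef ~/pf-rdr.conf
The startup warning will still appear, but because all of your rules live in one file, each reload reinstalls all of them together.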

Forward host port to guest

I have MySQL running locally on my host machine, and for reasons™ I can't run it inside of my Vagrant machine. I know there's a way to address this with iptables by forwarding all traffic to 3306 on the guest to the host's IP address and port, but that complicates things a lot for me, as I'd have to play around with iptables rules and probably get into TCP masquerading, which would be nice to avoid.
Is there a way in Vagrant (VirtualBox VM) to forward a host TCP port to the guest so that the guest can access 127.0.0.1:3306 and have all traffic forwarded to host:3306 seamlessly? If not, how exactly would I set this up in iptables?
According to this answer, Docker provides a way to do this natively without having to screw around with iptables rules. Do VirtualBox and Vagrant provide a way to mimic this functionality?
I have two solutions: one involving iptables hackery, and a more straightforward one using SSH.
Tunnel a Host Port to the Guest over SSH
When connecting to the guest using vagrant ssh, pass the port along as an argument:
vagrant ssh -- -R 3306:localhost:3306
This forwards port 3306 on the guest back to port 3306 on the host: inside the guest, connections to 127.0.0.1:3306 are tunneled to the host's MySQL.
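You can verify it from inside the guest (assuming the mysql client is installed there):
$ mysql -h 127.0.0.1 -P 3306 -u root -p
Keep in mind that the tunnel only lives as long as the SSH session that created it.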
iptables Hackery
We can use iptables on the guest to forward all traffic to a local port on the guest to a remote port on the host. We need to ensure that the host and guest have more or less static IP addresses in relation to each other to ensure that everything works fine. We'll also need to open a port on the host's firewall to allow the guest to do this.
Give the Guest a Static IP
In your Vagrantfile, set a static IP address for the guest:
config.vm.network "private_network", ip: "10.10.10.10"
Now, when you hit 10.10.10.10, you'll always be hitting your guest.
Configure iptables in the Guest
Found in this awesome answer in Server Fault:
$ remote_ip=10.0.2.2
$ mysql_port=3306
$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $mysql_port \
-j DNAT --to $remote_ip:$mysql_port
$ sudo iptables -N INET-PRIV
$ sudo iptables -A FORWARD -i eth0 -o eth1 -j INET-PRIV
$ sudo iptables -A FORWARD -j DROP
$ sudo iptables -A INET-PRIV -p tcp -d $remote_ip --dport $mysql_port \
-j ACCEPT
$ sudo iptables -A INET-PRIV -j DROP
Then, enable port forwarding:
$ echo "1" | sudo tee /proc/sys/net/ipv4/ip_forward
First, test it out; then, when you're sure it works, dump the rules:
$ sudo iptables-save
Note that iptables-save only prints the current rules to stdout, so redirect its output to wherever your distro restores rules from at boot. Likewise, /proc/sys/net/ipv4/ip_forward does not survive a reboot, so add that setting to a startup script as well.
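On a Debian/Ubuntu guest (the package names below are an assumption for that distro family), one way to persist both settings:
$ echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ipforward.conf
$ sudo apt-get install iptables-persistent
$ sudo netfilter-persistent save
iptables-persistent restores the saved rules at boot, and files under /etc/sysctl.d/ are applied at boot by sysctl.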
Which Should I Use?
SSH is definitely easier to do, but there's a bit of a performance overhead from having to encrypt that port's traffic and forward it back to the host.
iptables feels like black magic, but once you get it working, it's really nice and fairly seamless.
Port forwarding (using the NAT network backend) doesn't seem to fit the use case well.
In your use case, Public Network (Bridged Networking) is a better choice. Create a 2nd network in Vagrantfile and do a vagrant reload.
Vagrant.configure("2") do |config|
config.vm.network "public_network"
end
Basically this will add an extra virtual NIC to the VM, which will get an IP from the same DHCP server on your network. Get its IP by using ifconfig -a or ip addr.
The host and VM will then be able to communicate directly, and the VM should be able to connect to MySQL running on the host via port 3306.
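One caveat worth checking (an assumption about a typical MySQL setup): mysqld often binds only to 127.0.0.1, in which case the guest cannot reach it over the bridged network. On a Linux host you can check with:
$ sudo ss -tlnp | grep 3306
If it shows 127.0.0.1:3306, set bind-address in the MySQL config to the host's LAN address (or 0.0.0.0) and restart MySQL.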
HTH

Removing the tc config on a particular interface

I am new to using the tc command.
I am writing a test script that adds delays to an interface. This is being done using Python and the Fabric API.
So the script will do something like:
sudo tc qdisc add dev eth1 root netem delay 100ms
And at the end of the script we would do:
sudo tc qdisc del dev eth1 root netem
But I also wanted to make sure, at the very beginning, that there was no existing tc configuration on the system, so I wanted to run the delete command before the whole script started.
But that gives an error if no tc configuration exists:
abc#abcvmm:~$ sudo tc qdisc del dev eth1 root netem
RTNETLINK answers: Invalid argument
Is there a way to delete the qdisc only if there is an existing tc configuration, and not otherwise?
Your first step would be: tc qdisc del dev eth1 root
and then: tc qdisc add dev eth1 root handle 1: htb default 100
Check out my code in my git repo: https://github.com/Puneeth-n/tcp-eval/blob/development/topology/build_net.py
I think I have already implemented what you are trying to implement (using Fabric), or maybe you can use parts of the code.
The code makes sure there is no error when you try to delete a non-existent qdisc.
I think I found a way of doing that.
I can use something like:
netem_exists = run("tc qdisc show dev eth1 | grep netem | awk '{print $2}'")
if netem_exists == "netem":
    print "Delete"
    run("sudo tc qdisc del dev eth1 root netem")
else:
    print "No delete"
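Alternatively, you can skip the check entirely and just ignore the error when nothing is configured; a minimal shell-level sketch:
sudo tc qdisc del dev eth1 root 2>/dev/null || true
The || true makes the command succeed either way, which also keeps Fabric's run() from aborting on the RTNETLINK error.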
