kube-proxy in iptables mode is not working (routing/proxy)

What I have:
Kubernetes: v1.1.2
iptables: v1.4.21
Kernel: 3.10.0-327.3.1.el7.x86_64 (CentOS)
Networking is done via flannel (udp backend)
No cloud provider
What I did:
I enabled iptables mode with the --proxy_mode=iptables argument, then checked the iptables rules:
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- SIDR26KUBEAPMORANGE-005/26 anywhere
MASQUERADE all -- 172.17.0.0/16 anywhere
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351
Chain DOCKER (2 references)
target prot opt source destination
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references)
target prot opt source destination
MARK all -- 172.20.10.130 anywhere /* default/nc-service: */ MARK set 0x4d415351
DNAT tcp -- anywhere anywhere /* default/nc-service: */ tcp to:172.20.10.130:9000
Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references)
target prot opt source destination
MARK all -- SIDR26KUBEAPMORANGE-001.serviceengage.com anywhere /* default/kubernetes: */ MARK set 0x4d415351
DNAT tcp -- anywhere anywhere /* default/kubernetes: */ tcp to:10.62.66.254:9443
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-6N4SJQIF3IX3FORG tcp -- anywhere 172.21.0.1 /* default/kubernetes: cluster IP */ tcp dpt:https
KUBE-SVC-362XK5X6TGXLXGID tcp -- anywhere 172.21.145.28 /* default/nc-service: cluster IP */ tcp dpt:commplex-main
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-362XK5X6TGXLXGID (1 references)
target prot opt source destination
KUBE-SEP-3SX6E5663KCZDTLC all -- anywhere anywhere /* default/nc-service: */
Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
target prot opt source destination
KUBE-SEP-Q4LJF4YJE6VUB3Y2 all -- anywhere anywhere /* default/kubernetes: */
When I make an nc request to the service IP from another machine (in my case 10.116.0.2), I get an error like the one below:
nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
hello
Ncat: Connection timed out.
Whereas when I make a request directly to the server at 172.20.10.130:9000, it works fine:
nc -v 172.20.10.130 9000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.20.10.130:9000.
hello
yes
From the dmesg log, I can see
[10153.318195] DBG#OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318282] DBG#OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318374] DBG#POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0
And I found that if I'm on the machine where the Pod is running, I can successfully connect through the service IP:
nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.21.145.28:5000.
hello
yes
I am wondering why this happens and how to fix it.

I met exactly the same issue, on Kubernetes 1.1.7 and 1.2.0. Starting flannel without --ip-masq and adding the --masquerade-all=true parameter to kube-proxy helps.
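A sketch of the two invocations (the etcd endpoint is a placeholder, and other flags your environment needs are omitted):
# start flannel without masquerading its own traffic (no --ip-masq)
flanneld --etcd-endpoints=http://127.0.0.1:2379
# have kube-proxy SNAT all service traffic instead
kube-proxy --proxy-mode=iptables --masquerade-all=true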

According to "kube-proxy in iptables mode is not working", you might have to add a route directing your service IPs to the docker bridge.
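A sketch of such a route, assuming the service CIDR is 172.21.0.0/16 (inferred from the cluster IPs in the dump above) and the bridge is docker0:
# route service IPs into the docker bridge on the node
ip route add 172.21.0.0/16 dev docker0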

Related

Linux firewalld discards local UDP packet although ACCEPT rule is hit

On a RHEL 8 system, I'm receiving UDP packets for destination port 2152 (gtp-user) from an external interface, and they are not reaching the application listening on the UDP socket opened for that port. I see packets reaching the application fine if I stop firewalld. As soon as firewalld is started, packets get discarded.
I added a rule to explicitly accept these packets, and I see my ACCEPT rule is now being hit, with a counter matching exactly the number of packets generated (1987 packets in the dump below):
iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
6755 4273K ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
1 28 ACCEPT icmp -- any any anywhere anywhere
150 10358 ACCEPT all -- lo any anywhere anywhere
0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:ssh
1987 1492K ACCEPT udp -- any any anywhere anywhere udp dpt:gtp-user
11 3849 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
Still, packets are not reaching the application, and with log-denied=all enabled I see a FINAL_REJECT entry in /var/log/messages for each packet sent while firewalld is running:
kernel: FINAL_REJECT: IN=ens161 OUT= MAC=00:50:56:8a:de:55:00:50:56:8a:93:57:08:00 SRC=168.168.31.201 DST=168.168.31.31 LEN=751 TOS=0x18 PREC=0x60 TTL=100 ID=3109 DF PROTO=UDP SPT=2152 DPT=2152
Any ideas why firewalld would reject these UDP packets after they hit the ACCEPT rule?
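For reference, the log-denied behavior mentioned above is controlled through firewall-cmd (a sketch; the question does not show the exact commands used):
sudo firewall-cmd --set-log-denied=all   # log every denied packet
sudo firewall-cmd --get-log-denied       # verify the current setting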

Error in nftables when using ct state: Protocol wrong type for socket

I'm struggling to configure my nftables rules on my distro. I'm using nft 1.0.4 and Linux 4.9.
When I use the ct state instruction, nft throws the following error:
nftables.cfg:25:17-43: Error: Could not process rule: Protocol wrong type for socket
ct state established accept
^^^^^^^^^^^^^^^^^^^^^^^^^^^
My kernel config contains the following parameters:
# enable nftables support
CONFIG_NF_TABLES=y
CONFIG_NF_TABLES_INET=y # inet allows IPv4 and IPv6 config in single rule
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NF_CONNTRACK=y # for NAT support
CONFIG_NF_NAT=y # for NAT support
CONFIG_NF_TABLES_SET=y # to use brackets (sets)
CONFIG_NFT_EXTHDR=y
CONFIG_NFT_META=y
CONFIG_NFT_CT=y
CONFIG_NFT_RBTREE=y
CONFIG_NFT_HASH=y
CONFIG_NFT_COUNTER=y
CONFIG_NFT_LOG=y
CONFIG_NFT_LIMIT=y
CONFIG_NFT_MASQ=y
CONFIG_NFT_REDIR=y
CONFIG_NFT_NAT=y
CONFIG_NFT_QUEUE=y
CONFIG_NFT_REJECT=y
CONFIG_NFT_REJECT_INET=y
CONFIG_NFT_COMPAT=y
CONFIG_NFT_CHAIN_ROUTE_IPV4=y
CONFIG_NFT_REJECT_IPV4=y
CONFIG_NFT_CHAIN_NAT_IPV4=y
CONFIG_NFT_MASQ_IPV4=y
# CONFIG_NFT_REDIR_IPV4 is not set
CONFIG_NFT_CHAIN_ROUTE_IPV6=y
CONFIG_NFT_REJECT_IPV6=y
CONFIG_NFT_CHAIN_NAT_IPV6=y
CONFIG_NFT_MASQ_IPV6=y
# CONFIG_NFT_REDIR_IPV6 is not set
CONFIG_NFT_BRIDGE_META=y
CONFIG_NFT_BRIDGE_REJECT=y
My ruleset is something like:
#!/sbin/nft -f
flush ruleset
table inet myfilter {
    chain myinput {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        tcp dport ssh accept
        tcp dport 53 accept
        udp dport 53 accept
        ip protocol icmp accept
        iif "lo" accept
        tcp dport 2181 accept
        tcp dport 9092 accept
    }
    chain myoutput {
        type filter hook output priority 0; policy drop;
        ct state established accept
        tcp dport ssh accept
        tcp dport 53 accept
        udp dport 53 accept
        udp dport snmp accept
        tcp dport http accept
        tcp dport https accept
        ip protocol icmp accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
Do you have any idea how to fix this?
Actually, the error comes from the kernel version I am using: 4.9.
nftables needs at least v4.10 to fully support ct state, as stated in the Debian documentation here: https://packages.debian.org/stretch/nftables
A Linux kernel >= 3.13 is required. However, >= 4.10 is recommended.
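A quick check before applying the ruleset (a sketch using standard tools):
uname -r        # the running kernel; >= 4.10 is needed for full ct state support
nft --version   # the userspace version alone does not determine support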

Cannot connect to a simple HTTP server (TCP connection) on an Oracle compute instance (OCI); SSH connection works well

I am using Oracle Cloud to create an HTTP server for learning, so I am new to this. Thank you for any help!
Instance information
Image: Canonical-Ubuntu-20.04-2022.02.15-0
Shape: VM.Standard.E2.1.Micro
I have added an ingress rule on the subnet for port 7500:
Source | IP Protocol | Source Port Range | Destination Port Range | Allows
0.0.0.0/0 | TCP | All | 7500 | TCP traffic for ports: 7500
Using Python to create an HTTP server:
python3 -m http.server 7500 &
It was showing:
ubuntu@tcp-server:~$ Serving HTTP on 0.0.0.0 port 7500 (http://0.0.0.0:7500/) ...
Calling lsof -i returns
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 1806 root 3u IPv4 33281 0t0 TCP *:7500 (LISTEN)
Allowed port 7500 on ufw:
ufw Status: active
To Action From
7500 ALLOW Anywhere
7500 (v6) ALLOW Anywhere (v6)
But I cannot visit public_Ip_address:7500.
Using telnet:
sudo telnet 152.69.123.118 7500
Returns:
Trying 152.69.123.118...
and it does not connect.
Thank you in advance!
The reason is the iptables configuration:
sudo nano /etc/iptables/rules.v4
add this line:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 7500 -j ACCEPT
then:
sudo su
iptables-restore < /etc/iptables/rules.v4
Done!
The Ubuntu image from OCI has been modified by Oracle; the default settings limit which ports are accepted.
Therefore, we have to open the port manually.
There are some important attributes you need to be aware of when using a fresh Ubuntu image on OCI. For the sake of this discussion, firewall and iptables are synonymous.
By default:
there are 4 chains: the standard INPUT, FORWARD, and OUTPUT, plus InstanceServices
OUTPUT will have 1 rule:
InstanceServices all -- * * 0.0.0.0/0 169.254.0.0/16
InstanceServices destinations 169.254.XXX.YYY point to OCI services like the boot volume, etc.
FORWARD rejects all.
Your default INPUT chain will look like:
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123
5 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
6 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
This allows only SSH and UDP port 123 (NTP).
Create a rule for port 7500 and place it next to the existing TCP rule for SSH:
sudo iptables -I INPUT 6 -p tcp -m tcp --dport 7500 -j ACCEPT
Now the INPUT chain is:
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123
5 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
6 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7500
7 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
As long as you have the correct VCN route table entries and security list or network security group entries for TCP 7500, you can get through the instance firewall to destination port 7500.
Notes
It's really important not to delete the InstanceServices rule in the OUTPUT chain AND not to delete the InstanceServices chain.
This can happen if you are new to iptables and you do something like:
iptables -F
iptables -X
It's worth it to learn iptables; however, firewalld is easier.
OCI does not recommend ufw.
Your iptables rules will not survive a reboot unless you persist them (see the sketch after these notes).
These issues are well documented here under the subheading "Essential Firewall Rules".
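A minimal sketch of persisting the current rules, reusing the rules.v4 path from the earlier answer (whether the iptables-persistent tooling is preinstalled on your image is an assumption to verify):
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'     # save the IPv4 rules
sudo sh -c 'ip6tables-save > /etc/iptables/rules.v6'    # and the IPv6 rules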

No IPv6 internet connectivity on client side of OpenVPN AWS EC2 server

I have an OpenVPN server set up on an AWS EC2 instance; the instance pulls an IPv6 address and can traceroute6 and ping6 ipv6.google.com. The client can do neither, and does not get an address back from online tests like ipleak or testipv6. The server and client can ping6 and traceroute6 each other.
The client appears to pull the correct address locally, and via ip -6 route. IPv4 has always worked fine without issue. Everything appears good on the AWS side per their instructions here, so the instance does have IPv6 enabled with the proper routing on the AWS/VPC side. Security groups are pretty wide open for IPv6 as well.
I am assuming it's my routing, but I'm not really sure at this point, as I'm no IPv6 or routing expert. Please help!
Relevant config info:
IPv6 address of the AWS instance:
aaaa:bbbb:cccc:dddd::/64
server.conf
local 172.31.44.1
port 443
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
auth SHA512
tls-crypt tc.key
topology subnet
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 1.1.1.1"
push "dhcp-option DNS 1.0.0.1"
keepalive 10 120
cipher AES-256-CBC
user nobody
group nogroup
persist-key
persist-tun
verb 3
crl-verify crl.pem
explicit-exit-notify
server-ipv6 aaaa:bbbb:cccc:dddd:80::/112
push "redirect-gateway-ipv6 def1 bypass-dhcp-ipv6"
push "route-ipv6 aaaa:bbbb:cccc:dddd::/64"
push "route-ipv6 2000::/3"
push "route 172.31.44.1 255.255.255.255 net_gateway"
push "dhcp-option DNS6 2001:4860:4860::8888"
push "dhcp-option DNS6 2001:4860:4860::8844"
/etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
ip6tables:
-A INPUT -p udp --dport 443 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s aaaa:bbbb:cccc:dddd::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d aaaa:bbbb:cccc:dddd::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
Don't use proxy NDP. It's a mess.
What you need is to delegate (i.e., route) a prefix to the EC2 instance, then configure this prefix in the OpenVPN config (the server-ipv6 keyword with the assigned prefix and mask, e.g. 2001:db8:dead:beef:1::/80), and then assign connected users addresses from that prefix.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/work-with-prefixes.html
https://openvpn.net/community-resources/reference-manual-for-openvpn-2-4/
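As a sketch, assuming AWS delegates a prefix to the instance and you carve a /80 out of it for the tunnel (2001:db8::/32 is documentation space, used here purely as an illustration; the keywords mirror the server.conf above):
server-ipv6 2001:db8:dead:beef:1::/80
push "route-ipv6 2000::/3"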

IPTables configuration for Transparent Proxy

I am confused about why my iptables rule does not work on my router. What I'm trying to do is redirect any packets from a source IP destined to ports 80 and 443 to 192.168.1.110:3128. However, when I try this:
iptables -t nat -A PREROUTING -s 192.168.1.5 -p tcp --dport 80:443 -j DNAT --to-destination 192.168.1.110:3128
it does not work. However, when I add this,
iptables -t nat -A POSTROUTING -j MASQUERADE
it works. But the problem with masquerade is that I do not get the real IP, only the IP of the router. I need the source IP so my proxy server can record all the IPs that connect to it. Can someone tell me how to make it work without making POSTROUTING jump to MASQUERADE?
For real transparent proxying you need to use the TPROXY target (in the mangle table, PREROUTING chain). All other iptables mechanisms like NAT, MASQUERADE, and REDIRECT rewrite the IP addresses of the packet, which makes it impossible to find out where the packet was originally destined.
The proxy program has to bind() and listen() on a socket like any other server, but needs some specific socket flags (which require certain Linux capabilities (a type of permission) or root). Once connected, there is a way to get the "intended server" from the OS.
Sorry, I'm a little lazy about the details, but searching for "TPROXY" as a keyword will get you going quickly!
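A minimal sketch of the TPROXY plumbing (the mark value 0x1 and local proxy port 3129 are illustrative assumptions; the proxy software must also support transparent operation):
# divert the client's web traffic to a local transparent proxy port
iptables -t mangle -A PREROUTING -s 192.168.1.5 -p tcp -m multiport --dports 80,443 \
    -j TPROXY --on-port 3129 --tproxy-mark 0x1/0x1
# deliver the marked packets locally so the proxy can accept them
ip rule add fwmark 0x1/0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100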
If I am not wrong, the correct syntax of the rule would be:
iptables -t nat -A PREROUTING -s 192.168.1.5 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 192.168.1.110:3128
--dport 80:443 will match all ports from 80 through 443;
--dports 80,443 will match ports 80 and 443 only.
If you want traffic hitting 192.168.1.5 on ports 80 and 443 to be forwarded to port 3128 on 192.168.1.110, then you should use the rule below:
iptables -t nat -A PREROUTING -d 192.168.1.5 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 192.168.1.110:3128
You should also make sure the gateway on 192.168.1.110 points to your router's IP.
Finally, you can use the masquerade rule as below:
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth1 -j MASQUERADE
eth1 should be your outgoing interface.
I had the same issue, and the solution was to tell the transparent proxy to forward the source IP in the right header fields.
In the case of my nginx proxy, the configuration was close to:
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://name_of_proxy;
    proxy_redirect off;
}
I used: iptables -t nat -A PREROUTING -p tcp -s <foreign IP to your device> --dport 80:443 -j DNAT --to-destination <your application or local IP:port>. I think you applied PREROUTING to packets going out of your device, which never connect to port 80 or 443; those rules are for web clients connecting to the device. 192.168.1.5 looks like a local address, like mine.
And remember to configure: echo 1 > /proc/sys/net/ipv4/ip_forward
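To make forwarding survive a reboot, the setting can also go into /etc/sysctl.conf (a common approach, not part of the answer above):
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p    # apply without rebooting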
I think you are doing NAT in both directions by not specifying an interface. Try adding -o eth0 to your -j MASQUERADE line. (Substitute whatever your "external" interface is, instead of eth0, depending on your setup.)
