Linux firewalld discards local UDP packets although ACCEPT rule is hit - RHEL8

On a RHEL8 system, I'm receiving UDP packets for destination port 2152 (gtp-user) on an external interface, and they are not reaching the application listening on the UDP socket opened for that port. Packets reach the application fine if I stop firewalld; as soon as firewalld is started, they get discarded.
I added a rule to explicitly accept these packets, and the ACCEPT rule is now being hit, with its counter matching exactly the number of packets generated (1987 packets in the dump below):
iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
6755 4273K ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED
1 28 ACCEPT icmp -- any any anywhere anywhere
150 10358 ACCEPT all -- lo any anywhere anywhere
0 0 ACCEPT tcp -- any any anywhere anywhere state NEW tcp dpt:ssh
1987 1492K ACCEPT udp -- any any anywhere anywhere udp dpt:gtp-user
11 3849 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 REJECT all -- any any anywhere anywhere reject-with icmp-host-prohibited
Still, packets are not reaching the application, and with log-denied=all enabled I see a FINAL_REJECT entry in /var/log/messages for each packet sent while firewalld is running:
kernel: FINAL_REJECT: IN=ens161 OUT= MAC=00:50:56:8a:de:55:00:50:56:8a:93:57:08:00 SRC=168.168.31.201 DST=168.168.31.31 LEN=751 TOS=0x18 PREC=0x60 TTL=100 ID=3109 DF PROTO=UDP SPT=2152 DPT=2152
Any idea why firewalld would reject these UDP packets after hitting the ACCEPT rule?
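For reference, a minimal sketch of opening the port through firewalld itself rather than with a direct iptables rule (assuming the zone bound to ens161 is the default zone, here called public). On RHEL8, firewalld defaults to the nftables backend, so its own ruleset is evaluated separately from manually added iptables rules:
firewall-cmd --get-active-zones
firewall-cmd --zone=public --add-port=2152/udp
# make it persistent across reloads/reboots
firewall-cmd --zone=public --add-port=2152/udp --permanent
firewall-cmd --reload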

Related

Error in nftables when using ct state: Protocol wrong type for socket

I'm struggling to configure my nftables rules on my distro. I'm using nft 1.0.4 and Linux 4.9.
When I use the ct state instruction, nft throws the following error:
nftables.cfg:25:17-43: Error: Could not process rule: Protocol wrong type for socket
ct state established accept
^^^^^^^^^^^^^^^^^^^^^^^^^^^
My Kernel config contains the following parameters
# enable nftables support
CONFIG_NF_TABLES=y
CONFIG_NF_TABLES_INET=y # inet allows IPv4 and IPv6 config in single rule
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NF_CONNTRACK=y # for NAT support
CONFIG_NF_NAT=y # for NAT support
CONFIG_NF_TABLES_SET=y # to use brackets (sets)
CONFIG_NFT_EXTHDR=y
CONFIG_NFT_META=y
CONFIG_NFT_CT=y
CONFIG_NFT_RBTREE=y
CONFIG_NFT_HASH=y
CONFIG_NFT_COUNTER=y
CONFIG_NFT_LOG=y
CONFIG_NFT_LIMIT=y
CONFIG_NFT_MASQ=y
CONFIG_NFT_REDIR=y
CONFIG_NFT_NAT=y
CONFIG_NFT_QUEUE=y
CONFIG_NFT_REJECT=y
CONFIG_NFT_REJECT_INET=y
CONFIG_NFT_COMPAT=y
CONFIG_NFT_CHAIN_ROUTE_IPV4=y
CONFIG_NFT_REJECT_IPV4=y
CONFIG_NFT_CHAIN_NAT_IPV4=y
CONFIG_NFT_MASQ_IPV4=y
# CONFIG_NFT_REDIR_IPV4 is not set
CONFIG_NFT_CHAIN_ROUTE_IPV6=y
CONFIG_NFT_REJECT_IPV6=y
CONFIG_NFT_CHAIN_NAT_IPV6=y
CONFIG_NFT_MASQ_IPV6=y
# CONFIG_NFT_REDIR_IPV6 is not set
CONFIG_NFT_BRIDGE_META=y
CONFIG_NFT_BRIDGE_REJECT=y
My ruleset is something like:
#!/sbin/nft -f
flush ruleset
table inet myfilter {
    chain myinput {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        tcp dport ssh accept
        tcp dport 53 accept
        udp dport 53 accept
        ip protocol icmp accept
        iif "lo" accept
        tcp dport 2181 accept
        tcp dport 9092 accept
    }
    chain myoutput {
        type filter hook output priority 0; policy drop;
        ct state established accept
        tcp dport ssh accept
        tcp dport 53 accept
        udp dport 53 accept
        udp dport snmp accept
        tcp dport http accept
        tcp dport https accept
        ip protocol icmp accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
Do you have any idea how to fix this?
Actually the error comes from the kernel version I am using: 4.9.
nftables needs at least kernel 4.10 to fully support ct states, as stated in the Debian documentation here: https://packages.debian.org/stretch/nftables
A Linux kernel >= 3.13 is required. However, >= 4.10 is recommended.
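A quick sanity check of the versions involved (a sketch; reading /proc/config.gz assumes the kernel was built with CONFIG_IKCONFIG_PROC):
uname -r                              # running kernel; full ct state support needs >= 4.10
nft --version                         # userspace nftables version
zgrep CONFIG_NFT_CT /proc/config.gz   # confirm the conntrack expression is built in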

Cannot connect to a simple HTTP server (TCP connection) on an Oracle compute instance (OCI); SSH connection works well

I am using Oracle Cloud to create an HTTP server for learning, so I am new to this. Thank you for any help!
Instance information
Image: Canonical-Ubuntu-20.04-2022.02.15-0
Shape: VM.Standard.E2.1.Micro
I have added an ingress rule on the subnet (port 7500):
Source IP Protocol Source Port Range Destination Port Range Allows
0.0.0.0/0 TCP All 7500 TCP traffic for ports: 7500
Using Python to create an HTTP server:
python3 -m http.server 7500 &
It was showing:
ubuntu@tcp-server:~$ Serving HTTP on 0.0.0.0 port 7500 (http://0.0.0.0:7500/) ...
Calling lsof -i returns
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 1806 root 3u IPv4 33281 0t0 TCP *:7500 (LISTEN)
Allowed port 7500 on ufw:
Status: active
To Action From
7500 ALLOW Anywhere
7500 (v6) ALLOW Anywhere (v6)
But I cannot visit public_Ip_address:7500.
Using telnet:
sudo telnet 152.69.123.118 7500
Returns:
Trying 152.69.123.118...
and does not connect
Thank you in advance!
The reason is the iptables settings:
sudo nano /etc/iptables/rules.v4
add this line:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 7500 -j ACCEPT
then:
sudo su
iptables-restore < /etc/iptables/rules.v4
Done!
The Ubuntu image from OCI has been modified by Oracle; its default settings accept only a limited set of ports.
Therefore we have to open the port manually.
There are some important things to be aware of when using a fresh Ubuntu image on OCI. For the sake of this discussion, firewall and iptables are synonymous.
By default:
there are 4 chains: the standard INPUT, FORWARD, OUTPUT plus InstanceServices
OUTPUT will have 1 rule:
InstanceServices all -- * * 0.0.0.0/0 169.254.0.0/16
InstanceServices destinations in 169.254.XXX.YYY point to OCI services such as the boot volume etc.
FORWARD rejects all
Your default INPUT chain will look like:
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123
5 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
6 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
This allows only SSH and UDP port 123 for NTP.
Create a rule for port 7500 and place it right after the existing TCP rule for SSH:
sudo iptables -I INPUT 6 -p tcp -m tcp --dport 7500 -j ACCEPT
Now the INPUT chain is:
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123
5 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
6 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7500
7 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
As long as you have the correct VCN route table entries and security list or network security group entries for TCP 7500, you can get through the instance firewall to destination port 7500.
Notes
It's really important not to delete the InstanceServices rule in the OUTPUT chain AND not to delete the InstanceServices chain.
This can happen if you are new to iptables and you do something like:
iptables -F
iptables -X
It's worth learning iptables; however, firewalld is easier.
OCI does not recommend ufw.
Your iptables rules will not survive a reboot unless you persist them (see the sketch after these notes).
These issues are well documented under the subheading Essential Firewall Rules.
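A minimal sketch of persisting the rules, assuming the iptables-persistent package is used together with the /etc/iptables/rules.v4 file mentioned in the answer above:
sudo apt-get install -y iptables-persistent
# write the current live ruleset to the file that is restored at boot
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
# or, equivalently, let the persistence service save it
sudo netfilter-persistent save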

Ruby application does not receive UDP packet from different host

Sending a UDP packet from a Ruby client to a Ruby server using the server address 192.168.1.30 works as expected, but only if client and server are on the same host. If the client runs on a different machine, the UDP packet finds its way to the server, but my server process doesn't notice it.
Server:
require 'socket'
sock = UDPSocket.new()
sock.bind('', 8999)
p sock
while true do
  p sock.recvfrom(2000)
end
sock.close
Client:
require 'socket'
sock = UDPSocket.new
p sock.send("foo", 0, "192.168.1.30", 8999)
sock.close
After starting the server, netstat -n --udp --listen confirms that the socket is open:
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 0 0 0.0.0.0:8999 0.0.0.0:*
After running the client twice (on 192.168.1.30 and on .23), server output shows only one incoming packet, missing the one from 192.168.1.23:
#<UDPSocket:fd 7, AF_INET, 0.0.0.0, 8999>
["foo", ["AF_INET", 52187, "192.168.1.30", "192.168.1.30"]]
while Wireshark indicates that two packets were noticed
No Time Source Destination Proto Length Info
1 0.000000000 192.168.1.30 192.168.1.30 UDP 47 52187 → 8999 Len=3
2 2.804243569 192.168.1.23 192.168.1.30 UDP 62 39800 → 8999 Len=3
What probably obvious detail am I missing?
Check if you have any firewall rules active:
sudo iptables -L
sudo ufw status
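If one of those rules turns out to be blocking the traffic, a hedged example of opening UDP port 8999 (assuming ufw, or plain iptables, is the active firewall):
sudo ufw allow 8999/udp
# or, with iptables directly:
sudo iptables -I INPUT -p udp --dport 8999 -j ACCEPT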

bash - Remove non-UDP related data from pcap file

I have a file called test1.pcap which contains ICMP, ARP, and UDP messages. I want to read test1.pcap and write to test2.pcap with only UDP messages.
I tried the following:
tcpdump -r test1.pcap udp -w test2.pcap
but the non-UDP messages (ICMP and ARP) still show up in test2.pcap. I used Wireshark to view the results.
Any suggestions?
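One thing worth trying (a sketch, not a confirmed fix): tcpdump conventionally expects the filter expression after all options, so move -w before the filter and then verify the result:
tcpdump -r test1.pcap -w test2.pcap udp
tcpdump -r test2.pcap   # should now list only UDP packets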

kube-proxy in iptables mode is not working in routing

What I have:
Kubernetes: v1.1.2
iptables v1.4.21
kernel: 3.10.0-327.3.1.el7.x86_64 (CentOS)
Networking is done via flannel udp
no cloud provider
What I do:
I have enabled kube-proxy with the --proxy_mode=iptables argument, and I checked the iptables rules:
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- SIDR26KUBEAPMORANGE-005/26 anywhere
MASQUERADE all -- 172.17.0.0/16 anywhere
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351
Chain DOCKER (2 references)
target prot opt source destination
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references)
target prot opt source destination
MARK all -- 172.20.10.130 anywhere /* default/nc-service: */ MARK set 0x4d415351
DNAT tcp -- anywhere anywhere /* default/nc-service: */ tcp to:172.20.10.130:9000
Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references)
target prot opt source destination
MARK all -- SIDR26KUBEAPMORANGE-001.serviceengage.com anywhere /* default/kubernetes: */ MARK set 0x4d415351
DNAT tcp -- anywhere anywhere /* default/kubernetes: */ tcp to:10.62.66.254:9443
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-6N4SJQIF3IX3FORG tcp -- anywhere 172.21.0.1 /* default/kubernetes: cluster IP */ tcp dpt:https
KUBE-SVC-362XK5X6TGXLXGID tcp -- anywhere 172.21.145.28 /* default/nc-service: cluster IP */ tcp dpt:commplex-main
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-362XK5X6TGXLXGID (1 references)
target prot opt source destination
KUBE-SEP-3SX6E5663KCZDTLC all -- anywhere anywhere /* default/nc-service: */
Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
target prot opt source destination
KUBE-SEP-Q4LJF4YJE6VUB3Y2 all -- anywhere anywhere /* default/kubernetes: */
When I make an nc request to the service IP from another machine (in my case 10.116.0.2), I get an error like below:
nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
hello
Ncat: Connection timed out.
while a request to the 172.20.10.130:9000 server works fine:
nc -v 172.20.10.130 9000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.20.10.130:9000.
hello
yes
From the dmesg log, I can see
[10153.318195] DBG#OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318282] DBG#OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318374] DBG#POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0
And I found that if I'm on the machine where the Pod is running, I can successfully connect through the service IP:
nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.21.145.28:5000.
hello
yes
I am wondering why and how to fix it.
I met exactly the same issue on Kubernetes 1.1.7 and 1.2.0. Starting flannel without --ip-masq and adding the --masquerade-all=true parameter to kube-proxy helped.
According to "kube-proxy in iptables mode is not working", you might have to add a route directing your service IP range to the docker bridge.
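A minimal sketch of the combination described in the first answer (flag names as of Kubernetes 1.1/1.2; the etcd and API server addresses are placeholders):
# flannel started without --ip-masq
flanneld -etcd-endpoints=http://127.0.0.1:2379 &
# kube-proxy in iptables mode, masquerading all service traffic
kube-proxy --master=http://127.0.0.1:8080 --proxy-mode=iptables --masquerade-all=true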

Resources