Unable to NAT IP with Iptables and Strongswan in AWS - amazon-ec2

I've just configured Strongswan and can successfully bring the VPN tunnel up on an AWS EC2 instance, but I'm having issues with the traffic: we need to NAT the private IP address of my EC2 instance so that all traffic going through the VPN comes from a specific IP.
Currently, if I ping the [DESTINATION_IP] address, my traffic still originates from my private IP. I have tried several PREROUTING and POSTROUTING rules in iptables, but nothing seems to work. Can anyone explain what the problem might be?
Current Settings
In AWS, source/destination checks are disabled.
strongswan statusall
Listening IP addresses:
[PRIVATE_IP]
Connections:
vpn: %any...[VPN_FIREWALL_IP] IKEv2, dpddelay=10s
vpn: local: [[ELASTIC_PUBLIC_IP]] uses pre-shared key authentication
vpn: remote: [[VPN_FIREWALL_IP]] uses pre-shared key authentication
vpn: child: 0.0.0.0/0 === [DESTINATION_IP]/32 TUNNEL, dpdaction=restart
Security Associations (1 up, 0 connecting):
vpn[1]: ESTABLISHED 5 seconds ago, [PRIVATE_IP][[ELASTIC_PUBLIC_IP]]...[VPN_FIREWALL_IP][[VPN_FIREWALL_IP]]
vpn[1]: IKEv2 SPIs: 6055db442ef8607c_i* 3d2ec0bb945e9a2c_r, pre-shared key reauthentication in 7 hours
vpn[1]: IKE proposal: AES_CBC_128/HMAC_SHA2_256_128/PRF_HMAC_SHA1/MODP_2048
vpn{1}: INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: ca9d2ca0_i df70a539_o
vpn{1}: AES_CBC_128/HMAC_SHA1_96, 0 bytes_i, 0 bytes_o, rekeying in 46 minutes
vpn{1}: [NAT_SOURCE_IP]/31 === [DESTINATION_IP]/32
ipsec.conf
config setup
charondebug="all"
uniqueids=no
conn %default
ikelifetime=28800s
keyexchange=ikev2
keylife=3600s
keyingtries=%forever
mobike=no
conn vpn
authby=psk
auto=start
dpddelay=10s
dpdtimeout=30s
dpdaction=restart
ike=aes128-sha256-prfsha1-modp2048!
esp=aes128-sha256-modp2048,aes128-sha1-modp2048!
left=%defaultroute
leftid=[ELASTIC_PUBLIC_IP]
leftsubnet=0.0.0.0/0
leftfirewall=yes
rightsubnet=[DESTINATION_IP]/32
right=[VPN_FIREWALL_IP]
rightid=[VPN_FIREWALL_IP]
type=tunnel
mark=100
iptables-save
*nat
:PREROUTING ACCEPT [9728:543855]
:INPUT ACCEPT [7882:388791]
:OUTPUT ACCEPT [20219:1527154]
:POSTROUTING ACCEPT [20725:1569658]
COMMIT
*filter
:INPUT ACCEPT [142:30437]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [188:34735]
-A FORWARD -s [DESTINATION_IP]/32 -d [NAT_SOURCE_IP]/31 -i eth0 -m policy --dir in --pol ipsec --reqid 1 --proto esp -j ACCEPT
-A FORWARD -s [NAT_SOURCE_IP]/31 -d [DESTINATION_IP]/32 -o eth0 -m policy --dir out --pol ipsec --reqid 1 --proto esp -j ACCEPT
COMMIT

If I understand your question correctly, you are asking how to set up source NAT on an EC2 instance with Strongswan. I run the same setup, and in my case the following iptables rules from [1] provide the requested functionality:
iptables -t nat -A POSTROUTING -s <NAT_SOURCE_IP>/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -t nat -A POSTROUTING -s <NAT_SOURCE_IP>/24 -o eth0 -j MASQUERADE
[1] https://wiki.strongswan.org/projects/strongswan/wiki/ForwardingAndSplitTunneling
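For the /31 traffic selector shown in the question, a possible adaptation (a sketch only, not taken from the linked wiki page; it assumes [NAT_SOURCE_IP] lies inside the negotiated local selector and that eth0 is the outgoing interface) is to SNAT tunnel-bound traffic explicitly:
# SNAT packets that will be protected by the IPsec policy, so they enter the
# tunnel with [NAT_SOURCE_IP] as the source instead of the instance's private IP.
iptables -t nat -A POSTROUTING -d [DESTINATION_IP]/32 -o eth0 -m policy --dir out --pol ipsec -j SNAT --to-source [NAT_SOURCE_IP]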

Related

Apache2 - RTSP stream redirection without ffmpeg processing

I have an IP camera on my local network with an address of, let's say, 192.168.5.111:36121 (for the RTSP connection),
so I can view real-time video on the local network, for example with:
ffplay rtsp://admin:xxxx@192.168.5.111:36121/cam/realmonitor?.......
I can also access my camera from anywhere, including from my Linux VPS server, with:
ffplay rtsp://admin:xxxx@xbox2.com:37021/cam/realmonitor?.......
To do this, I have a VPN server on my remote Linux VPS and, locally, a DD-WRT router connected to that server as a client.
So on the local router I have:
iptables -t nat -A PREROUTING -i tun11 -p tcp --dport 36121 -j DNAT --to-destination 192.168.5.111
And on the VPS server:
iptables -t nat -A PREROUTING -d 176.123.123.123/32 -p tcp -m tcp --dport 37021 -j DNAT --to-destination 10.8.0.10:36121
and that allows me to connect to my camera from the public internet with:
ffplay rtsp://admin:xxxx@xbox2.com:37021/cam/realmonitor?.......
or
ffplay rtsp://admin:xxxx@176.123.123.123:37021/cam/realmonitor?.......
Everything works fine, but I would like to avoid using a specific port number directly (and use only :443 for all my cameras). So, for example, instead of using xbox2.com:37021 I would use xbox2.com/cam1 with a redirection from apache2, so that accessing the camera would be: ffplay rtsp://admin:xxxx@xbox2.com/cam1/cam/realmonitor?.......
I was trying to use RedirectMatch, ProxyPass, and ProxyPassReverse in the VirtualHost config but did not succeed.
So:
Question 1: is it at all possible to use apache2 to redirect the RTSP stream as described above?
Question 2: if yes, how?
Please note that I can redirect the stream with node-rtsp-stream through apache2 (over port :443) using:
ProxyPass /wss1 ws://127.0.0.1:3001
ProxyPassReverse /wss1 ws://127.0.0.1:3001
and later play it with jsmpeg on the web page, but the problem is that the ffmpeg processing puts a heavy load on my VPS server's processor. That is why I would like to only redirect the RTSP stream with apache2, without any ffmpeg processing.

No IPV6 internet connectivity on client side of OpenVPN AWS EC2 server

I have an OpenVPN server set up on an AWS EC2 instance that is pulling an IPv6 address and can traceroute6 and ping6 ipv6.google.com. The client can do neither and does not return an address when using online tests like ipleak or testipv6. The server and client can ping6 and traceroute6 each other.
The client appears to pull the correct address locally and via ip -6 route. IPv4 has always worked fine without issue. Everything appears good on the AWS side per their instructions here, so the instance does have IPv6 enabled with the proper routing on the AWS/VPC side. Security groups are pretty wide open for IPv6 as well.
I am assuming it's my routing, but I'm not really sure at this point, as I'm no IPv6 or routing expert. Please help!
Relevant config info:
ipv6 addr of AWS instance:
aaaa:bbbb:cccc:dddd::/64
server.conf
local 172.31.44.1
port 443
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
auth SHA512
tls-crypt tc.key
topology subnet
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
ifconfig-pool-persist ipp.txt
push "dhcp-option DNS 1.1.1.1"
push "dhcp-option DNS 1.0.0.1"
keepalive 10 120
cipher AES-256-CBC
user nobody
group nogroup
persist-key
persist-tun
verb 3
crl-verify crl.pem
explicit-exit-notify
server-ipv6 aaaa:bbbb:cccc:dddd:80::/112
push "redirect-gateway-ipv6 def1 bypass-dhcp-ipv6"
push "route-ipv6 aaaa:bbbb:cccc:dddd::/64"
push "route-ipv6 2000::/3"
push "route 172.31.44.1 255.255.255.255 net_gateway"
push "dhcp-option DNS6 2001:4860:4860::8888"
push "dhcp-option DNS6 2001:4860:4860::8844"
/etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.all.proxy_ndp=1
ip6tables:
-A INPUT -p udp --dport 443 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s aaaa:bbbb:cccc:dddd::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d aaaa:bbbb:cccc:dddd::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
Don't use proxy NDP; it's a mess.
What you need is to delegate (i.e., route) a prefix to the EC2 instance, configure this prefix in the OpenVPN config (the server-ipv6 keyword with the assigned prefix and mask, e.g. 2001:db8:dead:beef:1::/80), and then assign connected users addresses from that prefix.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/work-with-prefixes.html
https://openvpn.net/community-resources/reference-manual-for-openvpn-2-4/
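A minimal sketch of the relevant server.conf lines once a prefix has been delegated to the instance's network interface (the 2001:db8: prefix below is purely illustrative; substitute the prefix AWS actually delegates):
# Hand out client addresses from an /80 carved out of the delegated prefix;
# no NDP proxying is needed because AWS routes the whole prefix to this instance.
server-ipv6 2001:db8:dead:beef:1::/80
push "redirect-gateway-ipv6 def1 bypass-dhcp-ipv6"
push "dhcp-option DNS6 2001:4860:4860::8888"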

Kubernetes - Connection tracking does not mangle packets back to the original destination IP (DNAT)

We have a Kubernetes cluster on AWS EC2 instances, created using KOPS. We are experiencing problems with internal pod communication through Kubernetes services (which load-balance traffic between destination pods). The problem emerges when the source and destination pod are on the same EC2 instance (node). Kubernetes is set up with flannel for inter-node communication using vxlan, and Kubernetes services are managed by kube-proxy using iptables.
In a scenario where:
PodA running on EC2 instance 1 (ip-172-20-121-84, us-east-1c): 100.96.54.240
PodB running on EC2 instance 1 (ip-172-20-121-84, us-east-1c): 100.96.54.247
ServiceB (service where PodB is a possible destination endpoint): 100.67.30.133
If we go inside PodA and execute "curl -v http://ServiceB/", no answer is received and eventually the request times out.
When we inspect the traffic (on the cni0 interface of instance 1), we observe:
PodA sends a SYN packet to the ServiceB IP
The packet is mangled and the destination IP is changed from the ServiceB IP to the PodB IP
Conntrack registers that change:
root@ip-172-20-121-84:/home/admin# conntrack -L|grep 100.67.30.133
tcp 6 118 SYN_SENT src=100.96.54.240 dst=100.67.30.133 sport=53084 dport=80 [UNREPLIED] src=100.96.54.247 dst=100.96.54.240 sport=80 dport=43534 mark=0 use=1
PodB sends a SYN+ACK packet to PodA
The source IP of the SYN+ACK packet is not reverted from the PodB IP back to the ServiceB IP
PodA receives a SYN+ACK packet from PodB, which it did not expect, so it sends back a RESET packet
PodA sends a SYN packet to ServiceB again after a timeout, and the whole process repeats
Here are the annotated tcpdump details:
root@ip-172-20-121-84:/home/admin# tcpdump -vv -i cni0 -n "src host 100.96.54.240 or dst host 100.96.54.240"
TCP SYN:
15:26:01.221833 IP (tos 0x0, ttl 64, id 2160, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.67.30.133.80: Flags [S], cksum 0x1e47 (incorrect -> 0x3e31), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372198 ecr 0,nop,wscale 9], length 0
15:26:01.221866 IP (tos 0x0, ttl 63, id 2160, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.96.54.247.80: Flags [S], cksum 0x36d6 (incorrect -> 0x25a2), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372198 ecr 0,nop,wscale 9], length 0
Level 2:
15:26:01.221898 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 100.96.54.240 tell 100.96.54.247, length 28
15:26:01.222050 ARP, Ethernet (len 6), IPv4 (len 4), Reply 100.96.54.240 is-at 0a:58:64:60:36:f0, length 28
TCP SYN+ACK:
15:26:01.222151 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.247.80 > 100.96.54.240.43534: Flags [S.], cksum 0x36d6 (incorrect -> 0xc318), seq 2871879716, ack 506285655, win 26697, options [mss 8911,sackOK,TS val 153372198 ecr 153372198,nop,wscale 9], length 0
TCP RESET:
15:26:01.222166 IP (tos 0x0, ttl 64, id 32433, offset 0, flags [DF], proto TCP (6), length 40)
100.96.54.240.43534 > 100.96.54.247.80: Flags [R], cksum 0x6256 (correct), seq 506285655, win 0, length 0
TCP SYN (2nd time):
15:26:02.220815 IP (tos 0x0, ttl 64, id 2161, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.67.30.133.80: Flags [S], cksum 0x1e47 (incorrect -> 0x3d37), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372448 ecr 0,nop,wscale 9], length 0
15:26:02.220855 IP (tos 0x0, ttl 63, id 2161, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.96.54.247.80: Flags [S], cksum 0x36d6 (incorrect -> 0x24a8), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372448 ecr 0,nop,wscale 9], length 0
15:26:02.220897 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.247.80 > 100.96.54.240.43534: Flags [S.], cksum 0x36d6 (incorrect -> 0x91f0), seq 2887489130, ack 506285655, win 26697, options [mss 8911,sackOK,TS val 153372448 ecr 153372448,nop,wscale 9], length 0
15:26:02.220915 IP (tos 0x0, ttl 64, id 32492, offset 0, flags [DF], proto TCP (6), length 40)
100.96.54.240.43534 > 100.96.54.247.80: Flags [R], cksum 0x6256 (correct), seq 506285655, win 0, length 0
The relevant iptables rules (automatically managed by kube-proxy) on instance 1 (ip-172-20-121-84, us-east-1c):
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-SERVICES ! -s 100.96.0.0/11 -d 100.67.30.133/32 -p tcp -m comment --comment "prod/export: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 100.67.30.133/32 -p tcp -m comment --comment "prod/export: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3IL52ANAN3BQ2L74
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.10000000009 -j KUBE-SEP-4XYJJELQ3E7C4ILJ
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.11110999994 -j KUBE-SEP-2ARYYMMMNDJELHE4
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.12500000000 -j KUBE-SEP-OAQPXBQCZ2RBB4R7
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.14286000002 -j KUBE-SEP-SCYIBWIJAXIRXS6R
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.16667000018 -j KUBE-SEP-G4DTLZEMDSEVF3G4
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-NXPFCT6ZBXHAOXQN
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-7DUMGWOXA5S7CFHJ
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-LNIY4F5PIJA3CQPM
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SLBETXT7UIBTZCPK
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -j KUBE-SEP-FMCOTKNLEICO2V37
-A KUBE-SEP-OAQPXBQCZ2RBB4R7 -s 100.96.54.247/32 -m comment --comment "prod/export:" -j KUBE-MARK-MASQ
-A KUBE-SEP-OAQPXBQCZ2RBB4R7 -p tcp -m comment --comment "prod/export:" -m tcp -j DNAT --to-destination 100.96.54.247:80
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
This is the service definition:
root@adsvm010:/yamls# kubectl describe service export
Name: export
Namespace: prod
Labels: <none>
Annotations: <none>
Selector: run=export
Type: ClusterIP
IP: 100.67.30.133
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 100.96.5.44:80,100.96.54.235:80,100.96.54.247:80 + 7 more...
Session Affinity: None
Events: <none>
If, instead of the service, we use the PodB IP directly (so there is no need to mangle packets), the connection works.
If we use the service but the randomly selected destination pod is running on a different instance, then the connection tracking mechanism works properly and mangles the packet back, so that PodA sees the SYN+ACK packet as it expected (coming from the ServiceB IP). In this case, traffic goes through the cni0 and flannel.0 interfaces.
This behavior started some weeks ago; before that we were not observing any problems (for over a year), and we do not recall any major change to the cluster setup or to the pods we are running. Does anybody have any idea that would explain why the SYN+ACK packet is not mangled back to the expected src/dst IPs?
I finally found the answer. The cni0 interface is in bridge mode with all the pod virtual interfaces (one veth0 per pod running on that node):
root@ip-172-20-121-84:/home/admin# brctl show
bridge name bridge id STP enabled interfaces
cni0 8000.0a5864603601 no veth05420679
veth078b53a1
veth0a60985d
...
root@ip-172-20-121-84:/home/admin# ip addr
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UP group default qlen 1000
link/ether 0a:58:64:60:36:01 brd ff:ff:ff:ff:ff:ff
inet 100.96.54.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::1c66:76ff:feb6:2122/64 scope link
valid_lft forever preferred_lft forever
Traffic that passes between the bridged interface and some other interface is processed by netfilter/iptables, but traffic that does not leave the bridge (e.g. from one veth0 to another, both belonging to the same bridge) is NOT processed by netfilter/iptables.
In the example I described in the question, PodA (100.96.54.240) sends a SYN packet to ServiceB (100.67.30.133), which is not in the cni0 subnet (100.96.54.1/24), so this packet does not stay on the bridged cni0 interface and iptables processes it. That is why we see that the DNAT happened and got registered in conntrack. But if the selected destination pod is on the same node, for instance PodB (100.96.54.247), then PodB sees the SYN packet and responds with a SYN+ACK whose source is 100.96.54.247 and destination is 100.96.54.240. These are IPs inside the cni0 subnet and do not need to leave it, hence netfilter/iptables does not process the packet and does not mangle it back based on the conntrack information (i.e., the real source 100.96.54.247 is not replaced by the expected source 100.67.30.133).
Fortunately, there is the bridge-netfilter kernel module, which makes netfilter/iptables process traffic that stays on bridged interfaces:
root@ip-172-20-121-84:/home/admin# modprobe br_netfilter
root@ip-172-20-121-84:/home/admin# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
To fix this in a Kubernetes cluster set up with KOPS (credits), edit the cluster manifest with kops edit cluster and, under spec:, include:
hooks:
- name: fix-bridge.service
  roles:
  - Node
  - Master
  before:
  - network-pre.target
  - kubelet.service
  manifest: |
    Type=oneshot
    ExecStart=/sbin/modprobe br_netfilter
    [Unit]
    Wants=network-pre.target
    [Install]
    WantedBy=multi-user.target
This will create a systemd service at /lib/systemd/system/fix-bridge.service on your nodes that runs at startup and makes sure the br_netfilter module is loaded before Kubernetes (i.e., kubelet) starts. If we do not do this, what we experienced with AWS EC2 instances (Debian Jessie images) is that sometimes the module is loaded during startup and sometimes it is not (I do not know why there is such variability), so depending on that the problem may or may not manifest itself.
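On nodes you manage directly, a lighter-weight alternative (a sketch, not part of the original fix) is to persist the module and the sysctl the usual systemd way:
# Load br_netfilter at every boot and keep bridged traffic traversing iptables.
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-bridge-nf.conf
sudo modprobe br_netfilter
sudo sysctl --system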

Simulate Network Latency mac Sierra

I am trying to simulate network latency for all traffic to a certain IP/URL. I tried using a proxy through Charles, but that only handles traffic going through HTTP or SOCKS. I found some information online, but it does not seem to work for me. Can anyone see what is wrong with my commands?
#enable pf
pfctl -E
#add a temporary extra ruleset (called an "anchor") named "deeelay"
(cat /etc/pf.conf && echo "dummynet-anchor \"deeelay\"" && echo "anchor \"deeelay\"") | sudo pfctl -f -
#add a rule to the deeelay set to send any traffic to the endpoint through the new rule
echo "dummynet out proto tcp from any to myurl.com pipe 1" | sudo pfctl -a deeelay -f -
#Add a rule to dummynet pipe 1 to delay every packet by 500ms
sudo dnctl pipe 1 config delay 500
I see this warning when I run the commands:
No ALTQ support in kernel
ALTQ related functions disabled
Is that the issue?
The problem was the proto parameter. The application is not using TCP; it is using another protocol. You can either supply all the protocols you want as a list, like so:
proto { tcp udp icmp ipv6 tlsp smp }
Or you can just remove the proto parameter altogether and the rule will apply to all protocols.
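For reference, the adjusted commands from the question without the proto restriction might look like this (a sketch; myurl.com and the 500 ms delay are taken from the question):
# Delay every protocol destined for the host, not just TCP.
echo "dummynet out from any to myurl.com pipe 1" | sudo pfctl -a deeelay -f -
sudo dnctl pipe 1 config delay 500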

IPTables configuration for Transparent Proxy

I am confused about why my iptables rule does not work on the router. What I'm trying to do is redirect any packets from a source IP destined for ports 80 and 443 to 192.168.1.110:3128. However, when I try this:
iptables -t nat -A PREROUTING -s 192.168.1.5 -p tcp --dport 80:443 -j DNAT --to-destination 192.168.1.110:3128
it does not work. However, when I add this,
iptables -t nat -A POSTROUTING -j MASQUERADE
it works. But the problem with MASQUERADE is that I do not get the real IP, only the IP of the router. I need the source IP so my proxy server can record every IP that connects to it. Can someone tell me how to make this work without making POSTROUTING jump to MASQUERADE?
For real transparent proxying you need to use the TPROXY target (in the mangle table, PREROUTING chain). All other iptables mechanisms, like NAT, MASQUERADE, or REDIRECT, rewrite the IP addresses of the packet, which makes it impossible to find out where the packet was originally intended to go.
The proxy program has to bind() and listen() on a socket like any other server, but needs some specific socket flags (which require certain Linux capabilities (a type of permission) or root). Once connected, there is a way to get the "intended server" from the OS.
Sorry, I'm a little lazy about the details, but searching for "TPROXY" as a keyword will get you going quickly!
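A minimal sketch of what a TPROXY-based interception typically looks like (the chain name DIVERT, the mark value, and the proxy port 3129 are illustrative, not from this answer; the proxy itself must open its listening socket with IP_TRANSPARENT):
# Accept packets that already belong to a local socket, marking them so they
# keep being routed to the local machine.
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
# Hand new connections to ports 80/443 to the proxy listening on port 3129,
# without rewriting the destination address.
iptables -t mangle -A PREROUTING -p tcp -m multiport --dports 80,443 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# Route marked packets to the local machine so the proxy socket receives them.
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100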
If I am not wrong, the correct syntax of the rule would be:
iptables -t nat -A PREROUTING -s 192.168.1.5 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 192.168.1.110:3128
--dport 80:443 matches the whole port range from 80 through 443;
--dports 80,443 matches ports 80 and 443 only.
If you want traffic hitting 192.168.1.5 on ports 80 and 443 to be forwarded to port 3128 on 192.168.1.110, then you should use the rule below:
iptables -t nat -A PREROUTING -d 192.168.1.5 -p tcp -m multiport --dports 80,443 -j DNAT --to-destination 192.168.1.110:3128
You should also make sure the gateway on 192.168.1.110 points to your router's IP.
Finally, you can use a masquerade rule as below.
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth1 -j MASQUERADE
eth1 should be your outgoing interface.
I had the same issue, and the solution was to tell the transparent proxy to forward the source IP in the right header fields.
In the case of my nginx proxy, the rules were close to:
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://name_of_proxy;
    proxy_redirect off;
}
I used: iptables -t nat -A PREROUTING -p tcp -s <foreign IP reaching your device> --dport 80:443 -j DNAT --to-destination <your application or local ip:port>. I think you applied the PREROUTING rule to packets going out of your device, which never connect to port 80 or 443; those rules are for a web server connecting to the device. 192.168.1.5 looks like my local address.
And remember to configure: echo 1 > /proc/sys/net/ipv4/ip_forward
I think you are doing NAT in both directions by not specifying an interface. Try adding -o eth0 to your -j MASQUERADE line. (Substitute whatever your "external" interface is, instead of eth0, depending on your setup.)
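A sketch of that adjustment (eth0 here is an assumed external interface name):
# Masquerade only traffic leaving the external interface, so LAN-side source
# addresses are left intact for the transparent proxy to see.
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE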
