I am using a .NET 6 runtime Windows container. The application hosted inside the container publishes IGMPv3 packets for any listener on the network.
I can see the packets in Wireshark on the NAT interface, but they are not NATed through to the physical Ethernet NIC. Is there some sort of filter in the NAT?
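For context: the default Windows container network is backed by WinNAT, which as far as I know only translates unicast TCP/UDP flows, so multicast/IGMP traffic is dropped rather than forwarded. One workaround (a sketch, untested here; the network name is an assumption) is to attach the container to a transparent network so it sits directly on the physical segment with no NAT in the path:

    docker network inspect nat
    docker network create -d transparent TransparentNet
    docker run --network TransparentNet <image>

With a transparent network the container gets an address on the physical subnet, so the IGMPv3 reports should reach the wire unmodified.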
Related
Suppose I have PC A and PC B.
I set PC A as the default gateway for PC B, so all traffic from PC B now passes through PC A.
I need to do some kind of DNAT, so that TCP/UDP packets sent to 113.245.224.27:80 are instead delivered to 127.0.0.1:80 (PC A).
I know I could add the IP address 113.245.224.27 to my NIC, but I can't do that because it would affect other programs.
Is there a command or API I can call to make this possible, or do I have to manually capture the TCP/UDP packets and rewrite them myself?
This would be fairly easy on Linux using iptables.
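For comparison, the Linux rule alluded to would look roughly like this (eth0 is an assumption; DNAT to a loopback address also needs route_localnet enabled):

    iptables -t nat -A PREROUTING -i eth0 -p tcp -d 113.245.224.27 --dport 80 -j DNAT --to-destination 127.0.0.1:80
    sysctl -w net.ipv4.conf.eth0.route_localnet=1    # required because the DNAT target is 127.0.0.1

Windows has no built-in equivalent for rewriting forwarded packets. netsh portproxy can relay TCP, but only for connections addressed to PC A itself:

    netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=80 connectaddress=127.0.0.1 connectport=80

For transparent DNAT of traffic addressed to a foreign IP you would indeed have to intercept and modify the packets yourself, e.g. with a WFP callout driver or a user-mode capture/reinject library such as WinDivert.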
I am constructing an IP-in-IP tunnel between a remote Linux server and a local Windows server.
The Windows server does not support raw IP-in-IP tunnels, so I developed a simple app that uses Wintun.
Say the local Windows server is bound to ip_local, the remote Linux server is bound to ip_remote, and the tunnel IP is ip_tun.
Currently, the Windows server can receive the incoming packets (outer ip_remote->ip_local, inner ip_client->ip_tun) from the remote Linux server through the tunnel.
For the outgoing packets (ip_tun->ip_client), if I route a packet to the tun NIC, it is sent out.
However, if I route the packet to the real NIC, it is silently dropped.
So there seems to be a mechanism that blocks sending a packet when its source IP is not bound to the corresponding NIC.
(In the case above, because ip_tun is not bound to the real NIC, the outgoing packet is dropped.)
Is there any configuration that disables this mechanism?
Thanks for any help.
Enable IP routing in the registry, and then everything works.
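For anyone hitting the same wall: the blocking mechanism described is Windows' strong host model, which refuses to send a packet out of an interface unless the source address is assigned to that interface. A sketch of the relevant settings (the interface name is a placeholder; the registry change needs a reboot):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f
    netsh interface ipv4 set interface "Ethernet" weakhostsend=enabled
    netsh interface ipv4 set interface "Ethernet" weakhostreceive=enabled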
There are 2 NICs in my Windows-based machine; the IP address of one NIC is 192.168.1.x/24 and the other is 192.168.2.x/24. The machine runs an application that needs to send the multicast packets for 229.255.10.1 out through both NICs. However, the multicast packets are sent out from the NIC 192.168.1.x/24 only. Can I use the Windows route add -p command to send the multicast packets out from the NIC 192.168.2.x/24?
I think you can use:
route -p add 229.255.10.1 mask 255.255.255.255 192.168.2.x metric 1
where 192.168.2.x is your IP address on NIC 2.
One problem remains that I haven't found a solution for yet: if your application sends a packet before you add the route, you need to reboot the PC. This is a problem if the second NIC is, for example, a VPN client that starts after the application.
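To verify what the stack currently thinks (not a fix for the reboot issue), you can inspect the route and the joined groups after adding the rule:

    route print 229.*
    netsh interface ipv4 show joins

Note that an application can also pick the outgoing interface itself via the IP_MULTICAST_IF socket option, which avoids depending on the routing table at all.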
I am a developer of WinPcap, the well-known packet capture and injection library for Windows. I have ported WinPcap to the NDIS 6 Lightweight Filter (LWF) model, but it still doesn't support capturing loopback packets (such as packets sent to and received from 127.0.0.1) because of how Windows works: loopback packets are handled directly in the TCP/IP stack and never travel down to the NDIS layer.
Someone told me that Windows Filtering Platform can see the loopback traffic, so I have done some research about it. I have several questions about this.
1) What exactly are loopback packets, i.e., the packets NDIS can't see? When I ping 127.0.0.1, those ICMP packets are definitely loopback. When I ping an address that a local network adapter is bound to (like 192.168.0.24), I think that is loopback as well. Are these all the cases? If so, I can classify packets as loopback based on whether their local IP is 127.0.0.1 or a local adapter IP (like 192.168.0.24).
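If that classification turns out to be enough, enumerating the local addresses to match against is straightforward (the match itself would of course live in the driver):

    netsh interface ipv4 show addresses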
2) I have learned that WFP has many layers. I think my requirement calls for the IP packet (network) layer. Can the WFP network layer capture all loopback packets, both inbound and outbound?
3) I don't know whether the loopback packets captured by WFP will have an Ethernet header. If not, I think I should manually prepend an Ethernet header before passing the packets up to user mode, since WinPcap is an Ethernet-level capture library and much of the software built on it (like Wireshark) parses packets starting from the Ethernet layer by default. I am inclined to leave the whole Ethernet header all-zero, since no real Ethernet header ever existed.
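For reference on question 3: the WFP network layer operates on bare IP packets, so there is indeed no Ethernet header to capture. A sketch of the 14-byte dummy header, with one caveat: if the EtherType field is left zero, Wireshark will not know to dissect the payload as IP, so filling in the real EtherType is worth considering:

    bytes  0-5    destination MAC   00 00 00 00 00 00   (dummy)
    bytes  6-11   source MAC        00 00 00 00 00 00   (dummy)
    bytes 12-13   EtherType         08 00 for IPv4, 86 DD for IPv6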
Thanks, any help is appreciated!
Is there a problem using VMware on Windows to host a virtual Linux box running iptables? I have a configuration that seems to work on physical hardware but is flaky under VMware.
I'm using VMware to run a virtual Linux 2.6.24 machine on a Windows 2003 Server host. The Linux application is essentially a NATting router that runs iptables. The rules in the nat table include:
Chain foo_pre
target prot opt in out source destination
LOG all -- * * 0.0.0.0/0 0.0.0.0/0 [options here]
LOG all -- * * 0.0.0.0/0 10.10.1.33 [options here]
DNAT all -- * * 0.0.0.0/0 10.10.1.33 tcp dpt:80 to:192.168.0.33:8080
Chain PREROUTING
target prot opt in out source destination
foo_pre all -- * * 0.0.0.0/0 0.0.0.0/0
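For reference, these chains were created with roughly the following commands (LOG options elided):

    iptables -t nat -N foo_pre
    iptables -t nat -A foo_pre -j LOG
    iptables -t nat -A foo_pre -d 10.10.1.33 -j LOG
    iptables -t nat -A foo_pre -p tcp -d 10.10.1.33 --dport 80 -j DNAT --to-destination 192.168.0.33:8080
    iptables -t nat -A PREROUTING -j foo_pre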
I'm seeing the incoming packets to 10.10.1.33:80 using tcpdump, and the first LOG rule generates messages, but neither the DNAT rule nor the second LOG rule registers the packets on its counters, the second LOG generates no messages, and tcpdump doesn't show any packets to 192.168.0.33.
The eth0 adapter is on the 10.10.0.0/16 network with a default gateway of 10.10.1.1; it has a secondary address of 10.10.1.33/32. /proc/sys/net/ipv4/conf/eth0/forwarding is set to 1.
Is VMware the culprit, or am I missing something? Thanks!
Update: we've simplified the test environment. No NAT rules at all, just a Linux VM running under a Win2k3 Server host. Test steps:
1. The VM is bridged to the host NIC. VM and host are on the same subnet, with the same default gateway as above.
2. The VM communicates with devices both on and off its subnet: ICMP, TCP, UDP. Communication is bidirectional: it doesn't matter which end initiates it.
3. An engineer power-cycled the default gateway while poking at the system.
4. The VM now communicates only with devices on its subnet. Any attempt to communicate through the gateway to the same equipment from step 2 fails to put packets on the wire. tcpdump on eth0 in the VM shows outgoing packets with no response; Wireshark on the host shows nothing on the physical NIC.
Stopping and restarting the VM does not change its behavior. Stopping the VM and replacing it with a different VM with appropriate IP address, etc. does not change the behavior.
The Win2k3 host continues to communicate normally, both on and off its subnet.
I can only conclude from this that "something happens" between the VM and the host: in the VMware drivers, or in the host's network stacks. I'm off to scour the web again.... it's hard to imagine we're the first to observe this.
Updates as they come. Thanks for your thoughts and discussion.
Your second LOG rule matches packets sent to 10.10.1.33, but if the DNAT rule is hit first, the destination address has already been rewritten to 192.168.0.33, so that LOG rule will never match. Check the actual traversal order of the rules.
I'm not sure yet why you don't see the outgoing packets in tcpdump. I assume you're running tcpdump on the Linux VM itself. Is the VM sending packets on the same interface it's receiving on, or is there a second virtual Ethernet adapter? Which machines are the various IP addresses assigned to (other than 10.10.1.33)?
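To narrow down where the packets stop, I would check the per-rule counters and watch both addresses at once (assuming tcpdump runs in the VM):

    iptables -t nat -L foo_pre -v -n --line-numbers
    tcpdump -ni eth0 host 10.10.1.33 or host 192.168.0.33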
Regarding the update:
I gather you're not using DHCP (people usually don't bother when using static IP addresses). Also, it sounds like the gateway sees one NIC using two IP addresses. Normally that should be ok, but it's always the details that get you.
Is it possible the gateway will only assign one IP address to the NIC and is ignoring traffic from the VM?
After your edit, I suggest an experiment: on your physical machine, configure your NIC to disable all hardware acceleration.
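On a Windows 2003-era host, task offload is controlled by a registry value (a sketch; a reboot is needed afterwards), and the VM side can be checked with ethtool:

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v DisableTaskOffload /t REG_DWORD /d 1 /f
    ethtool -K eth0 rx off tx off sg off tso off

If the behavior changes with offload disabled, the checksum/segmentation offload path between the VMware bridge and the physical NIC driver is the likely culprit.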