I set up a Linux instance on EC2 and opened TCP port 8001. I have an application listening on it for some custom raw data coming through.
In addition to that, however, I'm seeing:
GET /haproxy-status HTTP/1.0
requests coming through. I cannot figure out what is sending them. They seem to come from Amazon, but I cannot tell what configuration is causing them. I have no load balancers set up.
Any clue how to disable that?
I'd try to find the source IP and block it.
E.g., using iptables:
sudo iptables -I INPUT -s IP.ADDRESS.HERE -j DROP
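To find the offending source address in the first place, a short capture should reveal it. A minimal sketch, assuming tcpdump is installed and the interface is eth0 (both assumptions):
sudo tcpdump -ni eth0 'tcp dst port 8001'   # prints the source IP of whatever is hitting 8001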
I'm attempting to write a script to connect to a DLNA audio renderer.
There are a few articles on the web explaining how to do this using UDP and curl; however, in my particular scenario I'm having some difficulties.
The first step is to send a UDP multicast announcement over the network to discover DLNA devices on the network.
The message sent to discover devices is:
M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MX: 5
Man: "ssdp:discover"
ST: urn:schemas-upnp-org:device:MediaRenderer:1
All lines in this message sent over UDP should have CRLF line endings, and the last line should be followed by an extra CRLF, according to this article.
That all seems fine. And if the message above is in a file devicediscovery.txt, it's supposedly possible to use netcat to send it out:
cat devicediscovery.txt | nc -u -4 239.255.255.250 1900
239.255.255.250:1900 is the multicast address and port over which DLNA devices communicate.
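For what it's worth, one way to generate the file with the required CRLF endings (a sketch using printf; devicediscovery.txt is just the filename from above):
printf '%s\r\n' \
  'M-SEARCH * HTTP/1.1' \
  'HOST: 239.255.255.250:1900' \
  'MX: 5' \
  'Man: "ssdp:discover"' \
  'ST: urn:schemas-upnp-org:device:MediaRenderer:1' \
  '' > devicediscovery.txt
printf repeats the format string for each argument, so every line gets a CRLF and the final empty argument produces the extra trailing CRLF.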
This all seems fine too. However, as the linked article points out, netcat ignores the response from the DLNA media renderer because of a mismatch in IP addresses: the message is sent out to the DLNA multicast address, but the response comes back from the router. The article suggests using tcpdump to capture the response, but I'm on Windows using Bash on Windows (WSL), so tcpdump is not available, and such a technique would be complicated to use in a script that automates the DLNA connection.
Would it be possible to use two separate instances of netcat? One instance sending the message to the DLNA multicast address, and the other listening for the response from the router?
I have tried to get this working, but I'm unsure which port netcat should listen on to hear the incoming response. Is there a standard port that netcat should listen on?
I've tried commands such as nc -luv 192.168.0.1, but I get the error "Servname not supported for ai_socktype". I've tried to remedy this by playing around with /etc/services, but had no luck.
What command can I use, and how must I configure the system, to listen for the response to the DLNA device search? I'd like to parse the response in a script so that the DLNA connection can be automated.
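(One workaround I'm considering, in case it helps: socat, unlike netcat, can use an unconnected UDP socket via its UDP4-DATAGRAM address, so it should not drop the unicast reply. A sketch, assuming socat is installed under WSL:
socat -t 5 - UDP4-DATAGRAM:239.255.255.250:1900 < devicediscovery.txt
The -t 5 keeps socat open for five seconds after sending, matching the MX: 5 in the search message, so any replies get printed to stdout.)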
Although you mention issues with DLNA, it looks like you are really asking how best to troubleshoot this.
Network cards don't give access to incoming traffic unless set to promiscuous mode, so netcat won't be able to do what you need. But you can use Wireshark to read the traffic on the interface. tcpdump and Wireshark have close ties and are almost interchangeable.
https://www.wireshark.org/
I would recommend using it to troubleshoot further. Post the capture (not just a picture of it) and show where it failed.
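If you want to stay on the command line, Wireshark ships with tshark, which can do the same capture. A sketch, assuming the interface is named eth0:
tshark -i eth0 -f "udp port 1900"
This shows both the outgoing M-SEARCH and any replies, regardless of which address they come from.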
I have a setup with a "main" server and a set of "slaves" that do the hard work. All users connect to "main", which then redirects them to a "slave".
Ideally I want the system to work in such a way that a user connects to "main", gets redirected to a "slave", and then all future communication happens directly between the user and the "slave" without any packets passing through "main".
Right now I'm using iptables with:
iptables -t nat -A PREROUTING -i eth0 -p udp --dport $2 -j DNAT --to-destination $1:$2
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $2 -j DNAT --to-destination $1:$2
The service is using both UDP and TCP and sessions are established once and then used for a long time with data going both ways (Minecraft servers).
While the setup with iptables works fine, the problem is that (to my best understanding) all traffic passes through "main", resulting in high data costs. Also, when the slaves look at who has connected to them, they can't even see the user's IP; they only see the IP of "main".
How do I make only the "initialization" happen through main and have the rest of the traffic go directly? Do I need a proxy? Should I pass some other parameters to iptables?
Thanks for helping! I've been trying to search for the answer, but all the terminology in this field is confusing.
/b3
Seems like no one will answer this question, so I'm going to give it a try myself, as I've now learnt more.
With iptables it seems that the traffic will always pass through the main server, and the same appears to be true with proxies.
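For reference, a DNAT relay like mine typically needs something like the following on "main", and the MASQUERADE step is exactly what hides the user's real IP from the slave. A sketch; the interface name, slave address, and the default Minecraft port are assumptions:
sysctl -w net.ipv4.ip_forward=1                                          # main must forward packets
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25565 -j DNAT --to-destination 10.0.0.2:25565
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE                     # rewrites the source to main's IP
Since both directions are rewritten on main, every packet has to transit it; there is no iptables flag that hands the flow off.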
Instead, the solution seems to be to set up dynamic DNS. I have this running now for some 40-45 servers, but it causes problems whenever IPs change: even at a 60-second TTL, changes can take several minutes to propagate.
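A quick way to see how stale a cached record is on a given resolver (a sketch; the hostname is hypothetical):
dig +noall +answer mc-slave1.example.com A   # the number after the name is the remaining TTL in seconds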
I'm trying Wowza and am a bit confused about why port 10000 can't be opened on my EC2/RHEL instance.
I have already added the required rule to the Security Group in use on that instance, opening port 10000, and I even opened it for TCP (screenshot of the rule omitted). Then, on the RHEL/EC2 instance, whether I started or stopped iptables, nothing changed; the port still shows as CLOSED.
What went wrong?
You need to run
service iptables save
to save the rules so they survive a service restart.
See my cheat sheet for other commands: http://www.jamescoyle.net/how-to/375-iptables-cheat-sheet
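For Wowza specifically, the instance-side rule would look something like this. A sketch; rule position relative to any existing REJECT rules matters, which is why -I (insert at the top) is used:
sudo iptables -I INPUT -p tcp --dport 10000 -j ACCEPT   # allow TCP 10000
sudo service iptables save                              # persist across iptables restarts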
This is part programming, part sysadmin, so please excuse me if you feel that this should be over on serverfault.
I have an application that is not SOCKS aware and that we need to use through a firewall. We cannot modify the application to have SOCKS support either.
At the moment, we do this by aliasing the IPs the application talks to onto the loopback adapter on the host, then creating SSH tunnels out to another host. The IPs the application uses are hardcoded. Our SSH connections look like:
ssh -L 1.2.3.4:9999:1.2.3.4:9999 user@somehost
Where 1.2.3.x are aliases on the loopback.
So the application connects to the open port on the loopback, which gets sent out to the SSH host and onto the real 1.2.3.4.
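Concretely, each address is handled roughly like this on OS X (a sketch using the example IP from above):
sudo ifconfig lo0 alias 1.2.3.4 255.255.255.255      # alias the target IP onto loopback
ssh -N -L 1.2.3.4:9999:1.2.3.4:9999 user@somehost    # tunnel that IP:port out through SSH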
It works, but the problem is that this application connects to quite a few IPs (50+), so we end up with 50 SSH connections out from the box.
We've tried several 'proxifying' apps, like tsocks and others, but have had a lot of issues with them (the app runs on OS X, and tsocks doesn't work so well there, even with the patches).
Our idea was to write a daemon that listens on all interfaces on the specified port. It would take the incoming packets from the application, scrape the packet info (destination IP, port, payload), recreate the packet, and proxy it through a single SSH SOCKS connection (ssh -D 1080 user@somehost). That way, we'd have only one SSH connection that all the ports are proxied through.
My question is: is this feasible? Is there something I'm missing here? I've been combing through the pfctl, ipfw, and iptables docs, but I don't see an option to do this with them, and it doesn't seem like it would be the most difficult thing to code. The daemon would recreate each packet based on its original destination IP and port, connect to the local SOCKS proxy, and resend the packet as if it were the original application, but now with SOCKS support.
If I'm missing something that someone knows about that already does this, please let me know. I don't know socket programming or SOCKS too well, but this doesn't seem like too big a project to tackle. Still, I'd like some opinions on whether I'm biting off way more than I should.
Thanks
If your application could be given SOCKS client support, you could simply run ssh -D local_socks_port remote_machine, which opens local_socks_port as a SOCKS server on localhost that can then connect to any host accessible from the remote machine.
Example: imagine you are using an untrusted wifi network without encryption. You can simply launch ssh -D 1080 home, and then configure your web browser to use localhost:1080 as its SOCKS server. Of course, you need a SOCKS-enabled client. All the traffic would appear to come from your gateway, and the connection would be opaque to anyone snooping the wifi.
You can also open a single ssh client with an indefinite number of LocalForward requests, which would be tunneled on top of a single ssh session.
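For example (a sketch; the second address is hypothetical):
ssh -N -L 1.2.3.4:9999:1.2.3.4:9999 -L 1.2.3.5:9999:1.2.3.5:9999 user@somehost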
Moreover, you can multiplex additional ssh sessions over an already-established ssh connection by using the ControlMaster and ControlPath options of ssh.
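With something like this in ~/.ssh/config (a sketch; the Host alias is an assumption):
Host somehost
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
subsequent forwards can then be attached to the running master without opening a new connection:
ssh -O forward -L 1.2.3.6:9999:1.2.3.6:9999 somehost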
Code:
for i in {0..3}; do ping http://www.pythonchallenge.com/pc/def/$i.html; done
A host should be found at www.pythonchallenge.com/pc/def/0.html.
I get this error for all pings:
ping: cannot resolve www.pythonchallenge.com/pc/def/0.html: Unknown host
HTML pages != hosts. If you want to check whether those web pages actually exist, use wget. If you only want to check whether the host is up, ping www.pythonchallenge.com.
You can't ping a URL; you can only ping the host, i.e. www.pythonchallenge.com.
If you're trying to find the pages that actually contain content, you will need to use something like wget combined with grep to check the content.
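For example, wget's --spider mode tests whether a page exists without downloading it (a sketch):
for i in {0..3}; do
    wget -q --spider "http://www.pythonchallenge.com/pc/def/$i.html" \
        && echo "$i.html exists" || echo "$i.html missing"
done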
You're confusing protocols here. HTTP has nothing to do with ICMP pings.
That said, you can ping www.pythonchallenge.com because it resolves to an IP. On the other hand, there's no DNS resolution for www.pythonchallenge.com/pc/def/0.html, simply because that's a URL, not a host. Browsers first resolve www.pythonchallenge.com via DNS, then make an HTTP request for the page itself.
I'm not sure what you're trying to accomplish here. You may want to simply ping www.pythonchallenge.com.
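The two-step distinction in practice (a sketch; curl availability is assumed):
ping -c 4 www.pythonchallenge.com                       # ICMP: is the host reachable?
curl -I http://www.pythonchallenge.com/pc/def/0.html    # HTTP: does the page exist?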
I think you're going about this problem the wrong way. Have you tried http://www.pythonchallenge.com/pc/def/1.html? Have you tried Googling that number?
(Assuming that your URL isn't just an example, of course.)