Memcached servers can be hijacked for DDoS attacks
How does it work?
How can I test my server if it's vulnerable?
How can I prevent it?
I wrote a little post that answers all of your questions. To summarize:
How does it work?
In essence, an attacker spoofs the IP of a victim and sends UDP requests to a memcached server on behalf of the victim. The attacker basically sends a tiny request that returns a large stored value, thus flooding the victim with the amplified response.
Is your server vulnerable?
Basically, if you are running a memcached server older than version 1.5.6 (released on 27 February 2018) and you did not explicitly disable the UDP port, then your memcached server is vulnerable. If a firewall blocks access to UDP port 11211, though, you are still safe.
A simple way to test your server is to send a crafted stats command from a computer that should not have access to your memcached server:
$ echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -q1 -u <SERVER_IP> 11211
If you get a response, you are vulnerable.
How to prevent it?
You need to start memcached without UDP support (unless you actually need it), which means starting it with the -U 0 flag. On a systemd-based system you can add the flag in the service file, which is located at /etc/systemd/system/memcached.service. You need to restart memcached for the change to take effect (sudo systemctl restart memcached).
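For example (the exact ExecStart line varies by distribution, so adapt your existing one rather than copying this verbatim), the edit boils down to appending -U 0:
# /etc/systemd/system/memcached.service (excerpt)
[Service]
ExecStart=/usr/bin/memcached -m 64 -p 11211 -u memcache -l 127.0.0.1 -U 0
Run sudo systemctl daemon-reload before the restart so systemd picks up the edited unit.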
You should also get your firewall in order. A deny all policy with selective ports that you need being open is generally the way to go.
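As a rough sketch with ufw (the allowed ports below are only examples; open only what your host actually serves):
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable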
Related
I am not able to bind and secure the rethinkdb http client: it is either exposed to the whole network or refuses connections behind the proxy. I am thus left with no choice but to restart the rdb daemon with bind-http=all each time I want to access it...
Rdb is started with systemctl under Arch Linux. The three configurations I tried:
# /etc/rethinkdb/instances.d/mydb.conf
bind-http=localhost #(1)
bind-http=127.0.0.1 #(2)
bind-http=1.2.3.4 #(3)
Resulting in:
(1) Fails to parse 'localhost'
(2) Refuses connections behind the proxy
(3) Equivalent to bind-http=all
Firefox 59 uses a SOCKS proxy, which works OK, as the browser's IP address does become 1.2.3.4:
$ ssh -TND 8080 user@1.2.3.4
I am quite convinced that I had secured the http client as expected, and that the problems started after I updated both FF and rdb (FF59 fails to parse 'localhost' as well, for example).
I don't know if this is a bug or a feature or if I am missing something,
any help is most welcome. Many thanks
Beware of the "localhost" string.
Configuring the rethinkdb server with:
#/etc/rethinkdb/instances.d/mydb.conf
bind-http=127.0.0.1
http-port=8084
and binding some local port with SSH:
[client]$ ssh -L 8080:127.0.0.1:8084 server
is enough to access the web interface at 127.0.0.1:8080, as suggested by @jishi.
Configuring the browser to use a SOCKS proxy as per the rdb docs is not at all necessary.
For some reason localhost:8080 is not understood by FF59 (gets invisibly prefixed by www or something).
For those of you running Go backends in production:
What is your stack / configuration for running a Go web application?
I haven't seen much on this topic besides people using the standard library net/http package to keep a server running. I have read about using Nginx to pass requests to a Go server (nginx with Go).
This seems a little fragile to me. For instance, the server would not automatically restart if the machine was restarted (without additional configuration scripts).
Is there a more solid production setup?
An aside about my intent - I'm planning out a Go powered REST backend server for my next project and want to make sure Go is going to be viable for launching the project live before I invest too much into it.
Go programs can listen on port 80 and serve HTTP requests directly. However, you may want to use a reverse proxy in front of your Go program, so that the proxy listens on port 80 and connects to your program on, say, port 4000. There are many reasons for doing the latter: not having to run your Go program as root, serving other websites/services on the same host, SSL termination, load balancing, logging, etc.
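For reference, a minimal Go backend of the kind being proxied to might look like this (port 4000 chosen only to match the example above):
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from the Go backend")
    })
    // Listen on an unprivileged local port; the reverse proxy on :80 forwards here.
    log.Fatal(http.ListenAndServe("127.0.0.1:4000", nil))
}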
I use HAProxy in front. Any reverse proxy could work. Nginx is also a great option (much more popular than HAProxy and capable of doing more).
HAProxy is very easy to configure if you read its documentation (HTML version). My whole haproxy.cfg file for one of my Go projects follows, in case you need a starting point.
global
log 127.0.0.1 local0
maxconn 10000
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
retries 3
timeout connect 5000
timeout client 50000
timeout server 50000
frontend http
bind :80
acl is_stats hdr(host) -i hastats.myapp.com
use_backend stats if is_stats
default_backend myapp
capture request header Host len 20
capture request header Referer len 50
backend myapp
server main 127.0.0.1:4000
backend stats
mode http
stats enable
stats scope http
stats scope myapp
stats realm Haproxy\ Statistics
stats uri /
stats auth username:password
Nginx is even easier.
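A minimal nginx equivalent might look like this (the server name is hypothetical; the upstream port matches the example above):
server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}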
Regarding service control, I run my Go program as a system service. I think everybody does that. My server runs Ubuntu, so it uses Upstart. I have put this at /etc/init/myapp.conf for Upstart to control my program:
start on runlevel [2345]
stop on runlevel [!2345]
chdir /home/myapp/myapp
setgid myapp
setuid myapp
exec ./myapp start 1>>_logs/stdout.log 2>>_logs/stderr.log
Another aspect is deployment. One option is to deploy by just sending the binary file of the program and the necessary assets. This is a pretty great solution IMO. I use the other option: compiling on the server. (I’ll switch to deploying with binary files when I set up a so-called “Continuous Integration/Deployment” system.)
I have a small shell script on the server that pulls code for my project from a remote Git repository, builds it with Go, copies the binaries and other assets to ~/myapp/, and restarts the service.
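That script is not shown here; a rough sketch of the idea (the paths, repository location, and service name are all hypothetical) could be:
#!/bin/sh
set -e
cd ~/src/myapp
git pull
go build -o ~/myapp/myapp .
cp -r assets ~/myapp/
sudo service myapp restart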
Overall, the whole thing is not very different from any other server setup: you have to have a way to run your code and have it serve HTTP requests. In practice, Go has proved to be very stable for this stuff.
nginx for:
Reverse HTTP proxy to my Go application
Static file handling
SSL termination
HTTP headers (Cache-Control, etc.)
Access logs (and therefore leveraging system log rotation)
Rewrites (naked to www, http:// to https://, etc.)
nginx makes this very easy, and although you can serve directly from Go thanks to net/http, there's a lot of "re-inventing the wheel" and stuff like global HTTP headers involves some boilerplate you can probably avoid.
supervisord for managing my Go binary. Ubuntu's Upstart (as mentioned by Mostafa) is also good, but I like supervisord as it's relatively distro-agnostic and is well documented.
Supervisord, for me:
Runs my Go binary as needed
Brings it up after a crash
Holds my environment variables (session auth keys, etc.) as part of a single config.
Runs my DB (to make sure my Go binary isn't running without it)
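A minimal supervisord program entry for the Go binary might look like this (the paths and the environment variable name are hypothetical):
; /etc/supervisor/conf.d/myapp.conf
[program:myapp]
command=/home/myapp/myapp/myapp
directory=/home/myapp/myapp
user=myapp
autostart=true
autorestart=true
environment=SESSION_AUTH_KEY="changeme"
stdout_logfile=/var/log/myapp/stdout.log
stderr_logfile=/var/log/myapp/stderr.log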
For those who want a simple Go app running as a daemon, use systemd (supported by many Linux distros) instead of Upstart.
Create a service file at
touch /etc/systemd/system/my-go-daemon.service
Enter
[Unit]
Description=My Go App
[Service]
Type=simple
WorkingDirectory=/my/go/app/directory
# Adjust the path to your go binary; for production it is more robust to build with go build and point ExecStart at the resulting binary.
ExecStart=/usr/bin/go run main.go
[Install]
WantedBy=multi-user.target
Then enable and start the service
systemctl enable my-go-daemon
systemctl start my-go-daemon
systemctl status my-go-daemon
systemd has a separate journaling system that will let you tail logs for easy trouble-shooting.
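For example:
journalctl -u my-go-daemon -f    # follow the service's logs live
journalctl -u my-go-daemon -b    # everything logged since the last boot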
You can let your binary bind to Internet domain privileged ports (port numbers less than 1024) without running as root by using setcap:
setcap 'cap_net_bind_service=+ep' /path/to/binary
This command needs elevated privileges; use sudo as necessary.
Every new version of your program will result in a new binary that will need to be reauthorized by setcap
setcap documentation
cap_net_bind_service documentation
This is part programming, part sysadmin, so please excuse me if you feel that this should be over on serverfault.
I have an application that is not SOCKS aware and that we need to use through a firewall. We cannot modify the application to have SOCKS support either.
At the moment, we do this by aliasing the IPs the application talks to onto the loopback adapter on the host, then creating SSH tunnels out to another host. The IPs the application uses are hardcoded. Our SSH connections look like:
ssh -L 1.2.3.4:9999:1.2.3.4:9999 user@somehost
Where 1.2.3.x are aliases on the loopback.
So the application connects to the open port on the loopback, which gets sent out to the SSH host and onto the real 1.2.3.4.
It works, but the problem is that this application connects to quite a few IPs (50+), so we end up with 50 ssh connections out from the box.
We've tried several 'proxifying' apps, like tsocks and others, but have had a lot of issues with them (the app is running on OS X and tsocks doesn't work so well, even with the patches).
Our idea was to write a daemon that listens on all interfaces on the specified port; it would then take the incoming packets from the application, scrape the packet info (dst IP, port, payload), recreate the packet, and proxify it through a single SSH SOCKS connection (ssh -D 1080 user@somehost). That way, we only have one SSH connection that all the ports are proxied through.
My question is: is this feasible? Is there something that I'm missing here? I've been combing through the pfctl, ipfw, and iptables docs, but I don't see any option to do it through those, and this doesn't seem like it would be the most difficult thing to code. It would recreate the packet based on the original destination IP and port, connect to the local SOCKS proxy, and resend the packet as if it were the original application, but now with SOCKS support.
If there is something that already does this that I'm missing, please let me know. I don't know socket programming or SOCKS too well, but this doesn't seem like too big of a project to tackle; still, I'd like some opinions on whether I'm biting off way more than I should.
Thanks
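For reference, here is a minimal sketch in Go of the kind of forwarder described above. It is not a packet-level proxy; it simply accepts TCP connections on one of the loopback aliases and relays them to the same address through the single SOCKS tunnel opened by ssh -D 1080 (the addresses and the golang.org/x/net/proxy dependency are illustrative assumptions):
package main

import (
    "io"
    "log"
    "net"

    "golang.org/x/net/proxy"
)

// forward accepts connections on a loopback alias (e.g. 1.2.3.4:9999) and
// relays them to the same destination through a single local SOCKS proxy
// (e.g. the one opened with: ssh -D 1080 user@somehost).
func forward(listenAddr, destAddr, socksAddr string) error {
    dialer, err := proxy.SOCKS5("tcp", socksAddr, nil, proxy.Direct)
    if err != nil {
        return err
    }
    ln, err := net.Listen("tcp", listenAddr)
    if err != nil {
        return err
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            return err
        }
        go func(c net.Conn) {
            defer c.Close()
            remote, err := dialer.Dial("tcp", destAddr)
            if err != nil {
                log.Printf("dial %s via SOCKS failed: %v", destAddr, err)
                return
            }
            defer remote.Close()
            go io.Copy(remote, c) // application -> real destination
            io.Copy(c, remote)    // real destination -> application
        }(conn)
    }
}

func main() {
    // Hypothetical mapping: in practice you would start one forward per hard-coded destination/alias.
    log.Fatal(forward("1.2.3.4:9999", "1.2.3.4:9999", "127.0.0.1:1080"))
}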
If your application could add SOCKS client support, you could simply run ssh -D local_socks_port remote_machine, which opens up local_socks_port as a SOCKS server on localhost; through it a SOCKS client can then connect to any host accessible from the remote machine.
Example: imagine you are using an untrusted wifi network without encryption. You can simply launch ssh -D 1080 home, and then configure your web browser to use localhost:1080 as its SOCKS server. Of course, you need a SOCKS-enabled client. All the traffic will appear to come from your gateway, and the connection will be opaque to anyone snooping on the wifi.
You can also open a single ssh client with an indefinite number of LocalForward requests, which would be tunneled on top of a single ssh session.
Moreover, you can multiplex additional ssh sessions over an already-established connection by using the ControlMaster and ControlPath options of ssh.
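For example, a single ~/.ssh/config entry (the host name and forwards below are hypothetical) can carry many LocalForward lines and let later invocations reuse the established connection:
# ~/.ssh/config
Host tunnelhost
    HostName somehost.example.com
    User user
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    LocalForward 1.2.3.4:9999 1.2.3.4:9999
    LocalForward 1.2.3.5:9999 1.2.3.5:9999
    DynamicForward 1080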
Most proxy servers perform the job of forwarding data to an appropriate "real" server. However, I am in the process of designing a distributed system in which, when the "proxy" receives a TCP/IP socket connection, the remote system actually ends up connected to a real server which the proxy nominates. All subsequent data flows from the remote system to the real server.
So is it possible to "forward" the socket connection request so that the remote system connects with the real server?
(I am assuming for the moment that nothing further can be done with the remote system, i.e. the proxy can't respond to the connection by sending the IP address of the actual server so that the remote system then connects to that.)
This will be under vanilla Windows (not Server), so we can't use cunning stuff like TCPCP.
I assume your "remote system" is the one that initiates connection attempts, i.e. client of the proxy.
If I get this right: when the "remote system" wants to connect somewhere, you want the "proxy server" to decide where the connection will really go ("real server"). When the decision is made, you don't want to involve the proxy server any further - the data of the connection should not pass the proxy, but go directly between the "remote system" and the "real server".
Problem is, if you want the connection to be truly direct, the "remote system" must know the IP address of the "real server", and vice versa.
(I am assuming for the moment that nothing further can be done with the remote system, i.e. the proxy can't respond to the connection by sending the IP address of the actual server so that the remote system then connects to that.)
Like I said, not possible. Why is it a problem to have the "proxy" send back the actual IP address?
Is it security - you want to make sure the connection really goes where the proxy wanted? If that's the case, you don't have an option - you have to compromise. Either the proxy forwards all the data, and it knows where the data is going, or you let the client connect itself, but then you don't have control over where it connects.
Most networking problems can be solved as long as you have complete control over the entire network. Here, for instance, you could involve routers on the path between the "remote system" and the "real server", to make sure the connection is direct and that it goes where the proxy wanted. But this is complex, and probably not an option in practice (since you may not have control over those routers).
A compromise may be to have several "relay servers" distributed around the network that will forward the connections instead of having the actual proxy server forward them. When a proxy makes a decision, it finds the best (closest) relay server, tells it about the connection, then orders the client to connect to the relay server, which makes sure the connection goes where the proxy intended it to go.
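To make the "send back the actual address" compromise concrete, here is a minimal client-side sketch in Go (the one-line host:port handshake and the proxy address are hypothetical): the client asks the proxy where to go, then opens the data connection directly to the real server.
package main

import (
    "bufio"
    "log"
    "net"
    "strings"
)

// connectViaRedirect asks the proxy for the address of the chosen real server,
// then dials the real server directly, bypassing the proxy for the data path.
func connectViaRedirect(proxyAddr string) (net.Conn, error) {
    ctrl, err := net.Dial("tcp", proxyAddr)
    if err != nil {
        return nil, err
    }
    defer ctrl.Close()

    // Assumed wire format: the proxy writes "host:port\n" and closes.
    target, err := bufio.NewReader(ctrl).ReadString('\n')
    if err != nil {
        return nil, err
    }
    return net.Dial("tcp", strings.TrimSpace(target))
}

func main() {
    conn, err := connectViaRedirect("proxy.example.com:7000")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    log.Printf("connected directly to %s", conn.RemoteAddr())
}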
There might be a way of doing this but you need to use a Windows driver to achieve it. I've not tried this when the connection comes from an IP other than localhost, but it might work.
Take a look at NetFilter SDK. There's a trial version which is fully functional up to 100000 TCP and UDP connections. The other possibility is to write a Windows driver yourself, but this is non-trivial.
http://www.netfiltersdk.com
Basically it works as follows:
1) You create a class which inherits from NF_EventHandler. In there you can provide your own implementation of methods like tcpConnectRequest to allow you to redirect TCP connections somewhere else.
2) You initialize the library with a call to nf_init. This provides the link between the driver and your proxy, as you provide an instance of your NF_EventHandler implementation to it.
There are also some example programs for you to see the redirection happening. For example, to redirect a connection on port 80 from process id 214 to 127.0.0.1:8081, you can run:
TcpRedirector.exe -p 80 -pid 214 -r 127.0.0.1:8081
For your proxy, this would be used as follows:
1) Connect from your client application to the proxy.
2) The connection request is intercepted by NetFilterSDK (tcpConnectRequest) and the connection endpoint is modified to connect to the server the proxy chooses. This is the crucial bit because your connection is coming from outside and this is the part that may not work.
Sounds like a routing problem, one layer lower than TCP/IP.
You're actually looking for an ARP-like proxy (proxy ARP).
I'd say you need to manage ARP packets, checking the ARP requests:
CLIENT -> who has SERVER.IP?
PROXY  -> SERVER.IP is at PROXY.MAC
Then a normal TCP/IP socket connection from client to server follows.
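On a Linux proxy host this can be done with the kernel's proxy ARP support, enabled per interface (eth0 here is an assumption):
sudo sysctl -w net.ipv4.conf.eth0.proxy_arp=1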
If I create a C++ server/client application, does the port I use to communicate need to be open on the routers of both the server and the client machines?
Or what other approach could I take? The client computer needs to receive information from the server, but I am not able to have any ports opened because it is on a school network...
[edit]
Hmm, my setup is a PHP page running on a server: when I press hello, the server makes an SSH connection through PHP and sends shell commands to the client machine. The server runs on a school server which I do have SSH access to, and I run all my things from there. The client computer will be my PC running on the school wifi, which is not connected to the server. The server will try to make an SSH connection to the public IP of my computer on the school wifi (no ports open; I can SSH out but not in). Will the methods you mention make this possible, in particular connect.c, since I can't run PuTTY on the server but could call connect.c from the PHP?
The choice of language is irrelevant here.
There don't need to be ports 'open' on any router, unless your traffic must pass through it. On normal peer hosts in the same network (or subnet) there would hardly be any firewall policy, not even in schools.
Technically it is possible for the switch to block peer-to-peer traffic (meaning traffic not destined for the outgoing gateway), but that is not very usual.
Of course, if the school doesn't allow outbound (WAN) traffic on most ports, tough luck, and they're absolutely right :)
You can look at
ssh (with its -L, -D and -R tunnel options, perhaps with -o GatewayPorts=yes)
stunnel
connect.c
http-tunnel
All very readily googled
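For the setup described in the edit (your PC behind the school wifi, a school server you can SSH into), a reverse tunnel is the usual trick; the port numbers and host names below are only examples:
# on your PC, keep a reverse tunnel open to the school server:
ssh -N -R 2222:localhost:22 you@school-server
# the PHP code on the school server can then reach your PC through that tunnel:
ssh -p 2222 pcuser@localhost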
To establish a TCP/IP connection, only the server port needs to be accessible by the client. The connection is full-duplex, therefore data can flow from the client to the server and vice-versa.
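A tiny self-contained Go demo of that point (port 9000 is arbitrary): the server is the only side that listens, yet data flows in both directions over the one client-initiated connection.
package main

import (
    "bufio"
    "fmt"
    "net"
)

func main() {
    // Server side: the only listening socket; this is the port that must be reachable.
    ln, err := net.Listen("tcp", "127.0.0.1:9000")
    if err != nil {
        panic(err)
    }
    go func() {
        conn, _ := ln.Accept()
        defer conn.Close()
        line, _ := bufio.NewReader(conn).ReadString('\n') // client -> server
        fmt.Fprintf(conn, "server got: %s", line)         // server -> client
    }()

    // Client side: a single outbound connection, no open inbound port needed.
    conn, err := net.Dial("tcp", "127.0.0.1:9000")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    fmt.Fprintln(conn, "hello")
    reply, _ := bufio.NewReader(conn).ReadString('\n')
    fmt.Print(reply)
}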
If you are using UDP for your application, which is a connection-less protocol, what happens depends heavily on the firewall or router and whether it performs connection tracking for your service or not.
Unless you provide some additional information on your service and the network setup on both the client and the server side, we cannot provide more concrete information.