Golang production web application configuration

For those of you running Go backends in production:
What is your stack / configuration for running a Go web application?
I haven't seen much on this topic besides people using the standard library net/http package to keep a server running. I have also read about using Nginx to pass requests to a Go server ("nginx with Go").
This seems a little fragile to me. For instance, the server would not automatically restart if the machine was restarted (without additional configuration scripts).
Is there a more solid production setup?
An aside about my intent - I'm planning out a Go powered REST backend server for my next project and want to make sure Go is going to be viable for launching the project live before I invest too much into it.

Go programs can listen on port 80 and serve HTTP requests directly. Instead, you may want to use a reverse proxy in front of your Go program, so that the proxy listens on port 80 and connects to your program on, say, port 4000. There are many reasons for doing the latter: not having to run your Go program as root, serving other websites/services on the same host, SSL termination, load balancing, logging, etc.
I use HAProxy in front. Any reverse proxy could work. Nginx is also a great option (much more popular than HAProxy and capable of doing more).
HAProxy is very easy to configure if you read its documentation (HTML version). My whole haproxy.cfg file for one of my Go projects follows, in case you need a starting point.
global
    log 127.0.0.1 local0
    maxconn 10000
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http
    bind :80
    acl is_stats hdr(host) -i hastats.myapp.com
    use_backend stats if is_stats
    default_backend myapp
    capture request header Host len 20
    capture request header Referer len 50

backend myapp
    server main 127.0.0.1:4000

backend stats
    mode http
    stats enable
    stats scope http
    stats scope myapp
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth username:password
Nginx is even easier.
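For comparison, here is a minimal nginx server block for the same setup; a sketch, assuming the Go app still listens on 127.0.0.1:4000 (the server_name is illustrative):

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        # hand every request to the Go program listening locally
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}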
Regarding service control, I run my Go program as a system service. I think everybody does that. My server runs Ubuntu, so it uses Upstart. I have put this at /etc/init/myapp.conf for Upstart to control my program:
start on runlevel [2345]
stop on runlevel [!2345]
chdir /home/myapp/myapp
setgid myapp
setuid myapp
exec ./myapp start 1>>_logs/stdout.log 2>>_logs/stderr.log
Another aspect is deployment. One option is to deploy by just sending the binary file of the program and the necessary assets. This is a pretty great solution, IMO. I use the other option: compiling on the server. (I'll switch to deploying binary files when I set up a so-called "Continuous Integration/Deployment" system.)
I have a small shell script on the server that pulls code for my project from a remote Git repository, builds it with Go, copies the binaries and other assets to ~/myapp/, and restarts the service.
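A minimal sketch of such a script (the repository location, paths, and service name are all illustrative; the restart command assumes the Upstart job above):

#!/bin/sh
set -e

# fetch the latest code
cd ~/src/myapp
git pull origin master

# rebuild and install the binary and other assets
go build -o myapp
cp myapp ~/myapp/
cp -r assets ~/myapp/

# restart the Upstart job so it picks up the new binary
sudo restart myapp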
Overall, the whole thing is not very different from any other server setup: you have to have a way to run your code and have it serve HTTP requests. In practice, Go has proved to be very stable for this stuff.

nginx for:
Reverse HTTP proxy to my Go application
Static file handling
SSL termination
HTTP headers (Cache-Control, et al.)
Access logs (and therefore leveraging system log rotation)
Rewrites (naked to www, http:// to https://, etc.)
nginx makes this very easy, and although you can serve directly from Go thanks to net/http, there's a lot of "re-inventing the wheel" and stuff like global HTTP headers involves some boilerplate you can probably avoid.
supervisord for managing my Go binary. Ubuntu's Upstart (as mentioned by Mostafa) is also good, but I like supervisord as it's relatively distro-agnostic and is well documented.
Supervisord, for me:
Runs my Go binary as needed
Brings it up after a crash
Holds my environment variables (session auth keys, etc.) as part of a single config.
Runs my DB (to make sure my Go binary isn't running without it)
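A minimal supervisord program block covering those points might look like this (paths, names, and variables are illustrative):

[program:myapp]
command=/home/myapp/myapp/myapp
directory=/home/myapp/myapp
user=myapp
autostart=true
; bring the binary back up after a crash
autorestart=true
; environment variables (session auth keys, etc.) live in this one config
environment=SESSION_AUTH_KEY="changeme"
stdout_logfile=/var/log/myapp.out.log
stderr_logfile=/var/log/myapp.err.log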

For those who want a simple Go app running as a daemon, use systemd (supported by many Linux distros) instead of Upstart.
Create a service file at
touch /etc/systemd/system/my-go-daemon.service
Enter
[Unit]
Description=My Go App
[Service]
Type=simple
WorkingDirectory=/my/go/app/directory
ExecStart=/usr/bin/go run main.go
[Install]
WantedBy=multi-user.target
Then enable and start the service
systemctl enable my-go-daemon
systemctl start my-go-daemon
systemctl status my-go-daemon
systemd has a separate journaling system that lets you tail logs for easy troubleshooting.
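For example, to follow the logs of the unit defined above:

journalctl -u my-go-daemon -f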

You can allow your binary to bind to privileged Internet domain ports (port numbers less than 1024) by granting it the cap_net_bind_service capability with setcap:
setcap 'cap_net_bind_service=+ep' /path/to/binary
This command needs elevated privileges; use sudo as necessary.
Every new version of your program results in a new binary that needs to be re-authorized with setcap; see the example after the links below.
setcap documentation
cap_net_bind_service documentation
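A typical rebuild-and-reauthorize sequence looks like this (the binary name is illustrative):

# each build produces a new binary, so the capability must be re-applied
go build -o myapp
sudo setcap 'cap_net_bind_service=+ep' ./myapp
# the binary can now bind to port 80 without running as root
./myapp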

Related

Running a Go program on Google Cloud Run without listening for incoming HTTP requests

I wrote a Go program which doesn't need to serve any incoming HTTP calls at all by default. I tried to deploy it on Google Cloud Run and received the following error:
The user-provided container failed to start and listen on the port
defined provided by the PORT=8080 environment variable. Logs for this
revision might contain more information.
I understand it happens because my code doesn't provide a port. As this answer states:
container must listen for incoming HTTP requests on the port
that is defined by Cloud Run and provided in the $PORT environment
variable
My question is: what can I do if I don't want to define any ports and just want to run the same code I run locally? Is there an alternative way to deploy my code without one, or must I add a port anyway if I want to run the code on Cloud Run?
For containers that do not require an HTTP listener (HTTP server), use Cloud Run Jobs.
Cloud Run Jobs is in preview.
Your Go program must exit with exit code 0 for success and non-zero for failure.
Your container should not listen on a port or start a web server.
Environment variables are different from Cloud Run.
Container instances run until the container instance exits, until the task timeout is reached, or until the container crashes. Task timeout default is 10 minutes, max is one hour.
Cloud Run - Create jobs
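A minimal sketch of a job-style program under those constraints (no HTTP listener; the exit code signals success or failure; doWork is a hypothetical placeholder):

package main

import (
	"fmt"
	"os"
)

// doWork stands in for the actual batch task.
func doWork() error {
	fmt.Println("running batch task...")
	return nil
}

func main() {
	if err := doWork(); err != nil {
		fmt.Fprintln(os.Stderr, "task failed:", err)
		os.Exit(1) // non-zero exit marks the task as failed
	}
	// returning from main exits with code 0, marking the task as successful
}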

Memcached used for DDoS

Memcached servers can be hijacked for DDoS attacks
How does it work?
How can I test my server if it's vulnerable?
How can I prevent it?
I wrote a little post that answers all of your questions. To summarize:
How does it work?
In essence, an attacker spoofs the IP address of a victim and sends UDP requests to a memcached server on behalf of the victim. The attacker sends a tiny request that elicits a large stored value in response, thus flooding the victim.
Is your server vulnerable?
Basically, if you are running a memcached server older than version 1.5.6 (released on the 27th of February, 2018) and you did not specifically turn off the UDP port, then your memcached server is vulnerable. If you have a firewall that prevents access to UDP port 11211, you are still safe, though.
A simple way to test your server is to send a forged stats command from a computer that should not have access to your memcached server:
$ echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -q1 -u <SERVER_IP> 11211
If you get a response, you are vulnerable.
How to prevent it?
You need to start memcached without UDP support (unless you need it). To do so you need to start memcached with the -U 0 flag. If you use a systemd based system you can add the flag in service file which is located in /etc/systemd/system/memcached.service. You need to restart memcached for the changes to take effect (sudo systemctl restart memcached).
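The relevant line in the service file might look like this (the path and the other flags are illustrative):

[Service]
# -U 0 disables the UDP listener
ExecStart=/usr/bin/memcached -U 0 -p 11211 -m 64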
You should also get your firewall in order. A deny all policy with selective ports that you need being open is generally the way to go.
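With iptables, for instance, dropping unsolicited UDP traffic to memcached's default port looks like this (adapt to your firewall of choice):

# drop inbound UDP packets aimed at memcached
iptables -A INPUT -p udp --dport 11211 -j DROP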

web server running inside a docker container running inside an EC2 instance responds very slowly

I have a web server running inside a docker container in an AWS EC2 Ubuntu instance. When I send requests to the web server, I get the response very slowly (20+ seconds most of the time, although the response time varies). It does not time out, though. The web server is very lightweight; it is only for testing, so it does almost nothing.
docker version 17.03.0-ce
docker-compose version 1.12.0-rc1
How I debugged so far
When sending requests to the web server running in the docker container from within the EC2 instance (url = http://localhost:xxxx/api), it is still very slow, so the problem should not be related to sending requests from outside.
I run another web server inside the EC2 instance directly (not in a docker container), and it is not slow; it responds very quickly.
I run another web server inside another docker container in EC2, and it is also very slow!
When I send the request from inside the docker container to the web server that is running in it (at its localhost), it is also very slow!
I run the containers with the same command on my Mac, and the response is not slow!
Here is one of the containers stats:
CPU %: 0.28%
MEM USAGE / LIMIT: 27.49 MiB / 992.5 MiB
MEM %: 2.77%
NET I/O: 53.7 kB / 30.5 kB
BLOCK I/O: 2.24 MB / 0 B
I understand it might be very hard to pinpoint the issue. My question is about the steps to debug the cause and finally find the solution. I would appreciate it if you could explain your approach in detail.
This sounds like a name resolution problem.
To debug this, you can do different things.
You can first start a simple TCP server with nc -l -p 8000 within the docker container (which is started with -p 8000:8000), then on the host launch nc 127.0.0.1 8000 and type some characters; if the TCP communication works, they should appear within the container.
Next, you can do the same as before but using "localhost" instead of 127.0.0.1
After this, you can do the same HTTP request you did, but using 127.0.0.1 instead of localhost (this will set the request's Host: header to the same value, which the web server might not check, or might resolve more easily).
You can also have a look at the generated /etc/hosts and /etc/resolv.conf within the container. Maybe they do not make sense in the network context of your container.
Also, you can precisely measure the time needed for your requests. If they land near a round number of seconds (e.g. 5.003, 10.200, 20.030 seconds), that once more suggests a DNS timeout: an X-second timeout plus the real time needed to respond.
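To break that latency down, curl's timing variables are handy (the port here is illustrative, standing in for your actual mapping):

# time_namelookup isolates DNS resolution from connect and total time
curl -o /dev/null -s -w 'lookup: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n' http://localhost:8000/api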

Node.js: Running example of Chat?

Trying to set up an example for node.js chat on Windows x64.
Command line:
D:\Websites\dev\chat>node server.js
Server at http://127.0.0.1:8001/
Now, when the server part is running, I try http://dev/chat/index.html.
After submitting Name, it gives me "error connecting to server".
Same error message on http://chat.nodejs.org/
Does the thing actually work? =)
Do I need to set up an Apache's mod_proxy to handle /join to port 8001?
Some of the issues are with using http://dev/chat/index.html and also, I suspect, with:
Do I need to set up an Apache's mod_proxy to handle /join to port 8001?
Node's http module is more for creating the server than it is for integrating with other servers like Apache. (It's possible, e.g. iisnode, but not the default.)
While node server.js is running, you should be able to access index.html via either:
http://localhost:8001/
http://127.0.0.1:8001/
Then, /join, /recv, /send, etc. should be able to route through the same origin.
Otherwise, using http://dev/ has 2 problems:
Requests will route based on the current address. For example, /join will request http://dev/join rather than http://127.0.0.1:8001/join, likely resulting in a 404 response. And, even if you modified the client script to specify the origin...
Same-origin policy. Pages requested from http://dev/ cannot make Ajax requests to http://127.0.0.1:8001 without exceptions, which this demo does not establish.

How to expose a tornado server serving websockets on dotcloud to the www?

I am trying to install the IPython html notebook server
on dotCloud. The IPython server uses tornado with websockets (and other internal communications using zeromq on tcp sockets).
Here's my dotcloud.yml:
www:
  type: custom
  buildscript: builder
  ports:
    nbserver: tcp
I am following the custom port recipes given here and here. As the logs show, I run the tornado server on 127.0.0.1:$DOTCLOUD_WWW_NBSERVER_PORT:
/var/log/supervisor/www.log:
[NotebookApp] The IPython Notebook is running at: 'http://127.0.0.1:35928/'
[NotebookApp] Use Control-C to stop this server and shut down all kernels.
But when I push, the dotCloud CLI tells me:
WARNING: The service crashed at startup or is listening to the wrong port. It failed to respond on port "nbserver" (42801) within 30 seconds. Please check the application logs.
...
Deployment finished. Your application is available at the following URLs
No URL found. That's ok, it means that your application does not include a webservice.
There's nothing on my-app.dotcloud.com or my-app.dotcloud.com:DOTCLOUD_WWW_NBSERVER_PORT
What am I missing here? Thanks for your help.
UPDATE
Issue solved. The usual HTTP port works fine with websockets so the custom port recipes are not required. This is my new dotcloud.yml:
www:
  type: custom
  buildscript: builder
  ports:
    web: http
works with the following in ipython_notebook_config.py:
c.NotebookApp.ip = '*'
This makes it so that the tornado webserver listens to all ip addresses.
WARNING: set up security and authentication first!
See Running a Public Notebook Server for more information.
Glad you got it working!
In the future, and for other readers: you actually want your app to listen on $PORT_NBSERVER and then connect to it on $DOTCLOUD_WWW_NBSERVER_PORT. $PORT_NBSERVER is the local port, while the latter is the port that's exposed to the outside world through our routing/NAT layer.
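For example, the notebook config can read the local port from the environment (a sketch; the fallback port is illustrative):

# ipython_notebook_config.py
import os
c.NotebookApp.ip = '0.0.0.0'
c.NotebookApp.port = int(os.environ.get('PORT_NBSERVER', 8888))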
If you have any other issue, don't hesitate to reach out to us at http://support.dotcloud.com
Source: I'm a dotCloud employee.
