How to run multiple Golang apps on a dedicated server? [closed]

I'm a Go newbie and I'm stuck trying to figure out how to deploy my apps on a dedicated server.
When I worked with PHP I used the standard setup:
But I'm confused as to how I'm supposed to deploy my Go apps.
I know I can run a single app on port :80 but how do I run multiple apps?
Is this the general idea:
Or should I be using something like this:
Can someone clarify the way most Go developers deploy their apps?
Thanks!

I'd highly recommend going with Caddy. You can set up your server with all the apps on different ports (especially higher ports, i.e. 1024 and up, so they don't need to run as root), and then use proxy directives to forward traffic to your apps. As a bonus, you also get free Let's Encrypt certificate support!
See https://caddyserver.com/docs/proxy for more on the proxy directive.
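For illustration, a minimal Caddyfile sketch of that setup (Caddy v1 syntax, matching the proxy docs linked above; the domain, paths, and ports are placeholders):

    # Caddyfile: route two path prefixes to two Go apps on high ports
    example.com {
        proxy /app1 localhost:8081 {
            transparent
        }
        proxy /app2 localhost:8082 {
            transparent
        }
    }

The transparent preset passes the original Host header and client IP through to the backend apps.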

If you need multiple apps to serve HTTP requests, you should definitely consider using Nginx as a reverse proxy. You can forward all requests on a given route, say /api to one service and /ui to a second service, provided they are bound to different ports.
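For example, a minimal nginx sketch of that routing (the server name and upstream ports here are assumptions):

    # nginx: send /api and /ui to two services bound to different local ports
    server {
        listen 80;
        server_name example.com;

        location /api {
            proxy_pass http://127.0.0.1:8081;
        }

        location /ui {
            proxy_pass http://127.0.0.1:8082;
        }
    }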

You might want to look at Traefik (https://traefik.io/), a Go-based web proxy.

Related

Spring boot: How should I deploy my microservices [closed]

At first I built a simple monolith application and deployed it using Docker and Nginx (for reverse proxy only). Now I plan to split it into separate services, because some services require a lot of time and IO to do their jobs. I have researched this and I know some of the components I'll need, like Spring Cloud Eureka for service discovery. I'm a bit confused because I currently only use Docker and Nginx: if I add these components, do I still need Nginx on top? Can you give me an example of a structure that I should apply to my project?
In your first iteration of the refactoring you can do without Service Discovery:
create a Spring Boot app for each microservice
services talk to each other directly (no need for Nginx between them); without Service Discovery this means you hardcode (or store in a property file) the URLs of the endpoints
deploy Nginx in front of the application/service which serves the end users (i.e. a web application)
Once you have validated your new architecture (splitting the responsibilities across the microservices) you can introduce Service Discovery (Eureka) so the endpoints are no longer hardcoded.
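For instance, the "hardcoded endpoints in a property file" step might look like this in the calling service (the property names, hosts, and ports are made up for illustration):

    # application.properties of the calling service; hosts and ports are placeholders
    inventory.service.url=http://localhost:8081
    billing.service.url=http://localhost:8082

Once Eureka is in place, entries like these disappear and services look each other up by logical name instead.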
Nginx is pretty light, so it can also be used for handling internal traffic if you like, but at this point your architecture should start taking into account the volume of traffic and the number of components to decide what works better.

Kestrel server vs HTTP.sys [closed]

In .NET Core, there are two built-in servers: Kestrel and HTTP.sys.
I would like to know the differences between these two servers and when to use each, in terms of performance, reliability, microservice friendliness, etc.
See Kestrel vs. HTTP.sys from the official Microsoft docs.
The main difference is that HTTP.sys is Windows-only, while Kestrel can run on Linux as well. That also means HTTP.sys works with Windows Authentication "out of the box" with only a few settings, whereas Kestrel needs considerably more setup. Performance-wise they are similar, with HTTP.sys being a bit faster since it is optimized for Windows. HTTP.sys is also the base for IIS.
Reliability depends not only on the server but on the infrastructure it runs on; e.g. if you put either in Docker with Kubernetes, they will be reliable and scalable, since the container platform takes care of that part.
I have microservices on both; both are microservice-friendly, and I use them for different purposes and environments depending on the service in question.
For public-facing services I use a reverse proxy anyway, so I am not familiar with how the two behave in that role. That said, Microsoft recommends HTTP.sys for internet-facing services since it is more resilient to attacks out of the box, but since my services sit behind a reverse proxy that handles those requests, I cannot verify that claim myself.
Hope this helps a bit.

Scripting access to a website using different IPs [closed]

I would like to automatically test my website from different locations in order to check the localization of the content's presentation. I think I have to write a bash script that accesses the website with wget, using an IP from a list. Is there an established solution to this kind of problem?
There are many solutions. I can think of these:
IP spoofing. But it's not easy, in particular if you want to orchestrate these tests to automate them.
Another solution is to use a reverse proxy. An example: your application is hosted by Tomcat and you use Apache as the reverse proxy. In this case you can easily configure several endpoints in Apache where you lie about the X-Forwarded-For (XFF) header (see the sketch after this list).
Another solution: you can rent VMs in the cloud. This is a good approach if you want to perform real performance tests from a remote client, or check the behavior of internet caches.
Some companies sell services to check the availability of your web stack from different sites.
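A minimal sketch of the Apache idea, assuming mod_proxy and mod_headers are enabled and the application listens on localhost:8080 (the ports and the spoofed address are placeholders):

    # one such endpoint: requests to :8081 reach the app claiming to come from 203.0.113.5
    Listen 8081
    <VirtualHost *:8081>
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
        RequestHeader set X-Forwarded-For "203.0.113.5"
    </VirtualHost>

This only helps if the application derives the client's location from X-Forwarded-For rather than from the TCP connection itself.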

How to cache every call made for an offline web experience [closed]

I'm going up to the mountains with no internet connection to present something. I'd like to be able to use interactive examples since I'll be presenting on a certain website.
So is there a way I can set up a proxy caching server or something to cache every call made in order to have a fully cached website experience with no internet connection?
I've looked at http://squidman.net/ but I'm not sure how it works or how to use it.
You might want to try something like this. It might be a lot more work than the steps below, but this could be a good starting point.
Create a local proxy server along with memcache or Redis (a rough sketch follows below).
Update the browser proxy settings to use your proxy server's details.
Make the local server look for the URL in the Redis server.
If found, return the data from Redis.
Else, do a web request and store the response in Redis.
You'll have to do this manually for the pages that you want while you still have an internet connection. Once you've got all the data you need, you can work without the internet connection too.
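A rough sketch of such a proxy in Go, using an in-memory map in place of Redis/memcache to keep it self-contained (it handles plain HTTP only, not HTTPS CONNECT tunnelling, and the port is arbitrary):

    package main

    import (
        "io"
        "log"
        "net/http"
        "sync"
    )

    // naive in-memory cache standing in for Redis/memcache;
    // a real setup would also persist it and keep response headers such as Content-Type
    var (
        mu    sync.RWMutex
        cache = map[string][]byte{}
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // with the browser configured to use this proxy, r.URL is the absolute URL
        key := r.URL.String()

        mu.RLock()
        body, ok := cache[key]
        mu.RUnlock()
        if ok {
            w.Write(body) // cache hit: works offline
            return
        }

        // cache miss: fetch from the origin (needs connectivity) and remember the body
        resp, err := http.Get(key)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        body, err = io.ReadAll(resp.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadGateway)
            return
        }

        mu.Lock()
        cache[key] = body
        mu.Unlock()
        w.Write(body)
    }

    func main() {
        // point the browser's HTTP proxy setting at localhost:3128
        log.Fatal(http.ListenAndServe(":3128", http.HandlerFunc(handler)))
    }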
If the pages are essentially static, then you could use something like HTTrack (http://www.httrack.com/) to make an offline copy.
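If you go that route, a mirroring run might look like this (the URL and output directory are placeholders):

    # mirror the site into ./offline-copy for browsing without a connection
    httrack "http://example.com/" -O ./offline-copy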
If there's anything requiring server-side interaction or dynamic page generation, you're most likely going to need to run your own local instance of the server.

process-specific /etc/hosts [closed]

I'm developing a service that talks to other services. To test these interactions, I have a fake HTTP server. I'm using node.js and test via HTTP requests. The tests run externally to the process, so I cannot (and don't want to) mock the request/response.
So far, I have an environment variable that allows me to switch hosts within the service itself. However, I cannot base the fake request/response on the hostname.
I also run a development version of the service that interacts with the real external services. I could programmatically change /etc/hosts during the test run, as I probably won't be "using" the development service while running tests, but I'd rather keep the purity of the test sandbox.
Ideally, I'd like to have a version of /etc/hosts apply only to the process. This would allow the fake http server to also glean the intended host of the request.
I'm also open to different approaches to achieving the test hostname sandbox.
/etc/hosts is consulted, among other things, by gethostbyname(), the library call that actually performs the resolving.
There's no easy way to do it the way you want it.
How about a local DNS server with fake names/addresses?
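For instance, with dnsmasq a single line in dnsmasq.conf maps a name to your fake server (the hostname is a placeholder, and the machine or test environment would then need to use this dnsmasq instance as its resolver):

    # dnsmasq.conf: resolve the external service's name to the local fake server
    address=/api.external-service.example/127.0.0.1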
This will work for a lot of implementations but it may not be feasible for you. You have a UNIX/OSX tag, so:
You have to put your process in what amounts to a chroot jail. chroot changes the root / to be some other location, e.g. /localroot.
You can then create your version of hosts under /localroot/etc/hosts. It is seen by the chrooted process as /etc/hosts.
There is a lot of information on how to set one up on the web. And any user account you create is "locked" in there.
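A rough sketch of the idea, assuming the jail under /localroot already contains the binaries and libraries the process needs (the paths and hostname are made up):

    # place a jail-specific hosts file that the chrooted process will see as /etc/hosts
    sudo mkdir -p /localroot/etc
    echo "127.0.0.1  api.external-service.example" | sudo tee /localroot/etc/hosts

    # run the service inside the jail; it now resolves that name to the fake server
    sudo chroot /localroot /usr/local/bin/node /srv/service.js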
I cannot find basic OS X chroot information; this link is more advanced and is meant primarily for SFTP users:
http://www.macresearch.org/restricted-sftp-mac-os-x-leopard
