Nginx upstreams and Heroku

I would like to use Nginx upstreams to balance two instances of an application, one of them is on an EC2 server and the other on Heroku.
The problem is that when I put app.herokuapp.com in the upstream directive, it resolves to an IP address and the requests are sent to that IP, but Heroku uses the Host header to identify the application, so it doesn't work.
I'm stuck on this. What can I do?
Update: My app uses the Host header too, so I think I'm stuck either way. Since I can't change Heroku, I guess I will have to pass the original request's Host in a custom header for my application to use, and keep Host set to the Heroku hostname so Heroku will find my application.

Add a Host header to the proxy:
proxy_pass http://upstream;
proxy_set_header Host $host;
...
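A sketch of the full setup following the approach in the update above (the EC2 address, Heroku hostname, and custom header name are placeholders): send the Heroku hostname as Host so Heroku's router can find the app, and pass the original host in a separate header for the application itself:

```nginx
upstream backend {
    server 203.0.113.10:80;           # the EC2 instance (placeholder address)
    server myapp.herokuapp.com:80;    # resolved to an IP when nginx loads the config
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend;
        # Heroku identifies the app by Host, so send the Heroku hostname...
        proxy_set_header Host myapp.herokuapp.com;
        # ...and keep the original request host in a custom header for the app
        proxy_set_header X-Original-Host $host;
    }
}
```

Note this sends the Heroku hostname to the EC2 instance as well, which only works if that copy of the app also reads X-Original-Host instead of Host.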

Related

What URI do I need on the browser side to connect to an Amazon Redis server from Heroku's Redis addon

I'm trying to set up a chat application on Heroku with Redis and socket.io, but I can't figure out what URI I am supposed to put on the client side.
All URIs I have tried give me 404, name_not_resolved, or timeout errors.
I have one Heroku app running a Node.js buildpack, and all it does is run the socket.js file.
And I have another PHP Heroku app which has the Laravel back end with Redis broadcasting and a Vue front end.
The broadcasting is set up so that when someone publishes a post or makes a GET request to '/', an event is fired on 'new-post-channell' and 'user-entered-chat-channel' respectively.
I can go into the bash of the socket.js app and run 'node socket.js'. I can see that it connects to Heroku's Redis addon on an Amazon server and picks up the broadcasts.
I can also go into the second app's Heroku redis-cli, enter monitor mode, and see that broadcasts are being picked up as intended.
It all worked in a vagrant homestead virtual server, but doesn't on heroku.
var socket = io('redis://h:oaisuhaosiufhasodiufh#ec2-99-81-167-43.eu-west-1.compute.amazonaws.com:6639');
(And maybe you also know how I can run the 'node socket.js' command on my first app automatically, so that I don't have to go into Heroku's bash and run it manually?)
All in all, I finally got a VPS on Vultr.com and ran into the same problem. So here is the answer:
1. If you have HTTPS set up, you need to pass the domain you are on:
io('https://'+ window.location.hostname, {reconnect: true});
2. Navigate to your nginx configuration files and edit them. I set mine up in the sites-available section:
/etc/nginx/sites-available/yourDomainOrIp.conf
3. The configuration file has "location" sections. Make a new one; I put mine before the others:
location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass "http://localhost:3000/socket.io/";
}
This section means that any request to /socket.io/ is proxied to localhost port 3000. And on localhost port 3000 I have the Node.js app listening:
var server = require('http').Server();
var io = require('socket.io')(server);
var Redis = require('ioredis');
// One Redis connection per subscribed channel
var redisNewMessage = new Redis();
var redisUserEntered = new Redis();
server.listen(3000);
So, I still don't know how to fully answer the question, but basically:
The address you pass to io() must eventually route the "/socket.io/polling%something&something" requests to the host and port the Node.js app is listening on.
The Amazon link stays in the Node.js `new Redis()` call. Socket.io has to connect to the Node.js app, and then it all should work.
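As for the parenthetical question about running 'node socket.js' automatically: on Heroku, long-running processes are usually declared in a Procfile at the repo root (a sketch, assuming the entry point is socket.js; for a web-facing socket server the app must listen on process.env.PORT rather than a fixed port):

```
web: node socket.js
```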

How to serve a Heroku app with Google cloud fixed IP

I have a Heroku app that uses Node.js to serve a static web page https://foda-app.herokuapp.com
Heroku does not provide a fixed IP and I really need one for a personal project, so I'm trying to use Google Cloud VPC's reserved static external IP addresses.
I was able to reserve the IP, but I'm not sure how to link it with my Heroku app, since Google Cloud offers so many options and services. I just want to redirect all traffic from this IP to the Heroku app, and I can't find a simple way to do it.
I need to create a global forwarding rule, but I can't find a way to achieve this without using a lot of other services. Do I need a VM instance? Do I need a load balancer? Should I use VPC routes or Cloud DNS? I'm overwhelmed by all those services.
Can someone please tell me if it's possible, and what is the simplest way to achieve this?
You can achieve this in two ways:
1. Use a third-party addon on Heroku, e.g. https://devcenter.heroku.com/articles/quotaguardstatic
2. Set up a proxy server on the static IP and forward all traffic to the desired Heroku URL.
Details for option 2:
1. Assign a static external IP address to a new VM instance: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
2. Install Nginx/HAProxy on the newly procured VM.
3. Set up a config like below:
upstream heroku-1 {
    server foda-app.herokuapp.com fail_timeout=15s;
}
server {
    listen 80;
    server_name yourdomain.example;   # or the IP address
    location / {
        proxy_pass http://heroku-1;
        # Heroku routes requests by Host, so it must match the app's hostname
        proxy_set_header Host foda-app.herokuapp.com;
        proxy_read_timeout 300;
    }
}
4. Change the DNS mapping for your domain (if any) to point to the static IP.

How to resolve this DNS, rocketchat, webservers routing issue?

So I have a few servers:
Server1(winserver2016): Webapplication1 on IIS port 80 + 443, Webapplication2 Apache port 9000 + 9001
Server2(ubuntu16.4): Rocketchat snap, OpenVPN
2 domain controllers (winserver2016) and a domain purchased from GoDaddy: domain.co.uk.
I cannot for the life of me figure out how to redirect HTTPS requests to the internal servers via port 443 on the router. I tried a reverse proxy in IIS 10 with ARR and URL Rewrite (nothing). I tried using subdomains on GoDaddy, but it just redirects to the IPaddress:port. I tried adding subdomains in DNS; still the same 404 response.
Essentially, if I point ports 80 and 443 at Rocket.Chat, it works and I get SSL via Caddy, but if I try to connect any other services on those ports I get nothing returned. If I connect Rocket.Chat on ports 3000 and 8443, for example, I get no SSL and its HTTPS site doesn't work.
I am ready to try a reverse proxy on another Linux deployment in a minute and see how that goes, but I suspect the result will be the same.
All of these servers run on Hyper-V on 2 win10 boxes.
If you are trying to redirect based on source IP, you might have to use policy routes in the firewall to control the behavior depending on the source of the packet. Check whether your firewall or router has such abilities; cheap routers tend to support only basic static routing.
If that doesn't help, you might also need a separate reverse proxy web server in place. The configuration is a little tricky in Apache. You could put the following inside a virtual host block if you wanted to route based on sub-directory:
<Proxy balancer://myset>
    # xxx.xxx.xxx.xxx is your server that will be behind the proxy
    BalancerMember http://xxx.xxx.xxx.xxx/subdirectoryName/
    ProxySet lbmethod=bytraffic
</Proxy>
ProxyPreserveHost On
ProxyPass "/subdirectoryName/" "balancer://myset/"
ProxyPassReverse "/subdirectoryName/" "balancer://myset/"
Not sure if this is exactly what would work for subdomains, but I'd try something like this.
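For routing by subdomain specifically, the usual Apache approach is name-based virtual hosts, one per hostname, each proxying to the matching internal server (a sketch only; the hostnames and internal addresses are placeholders for your setup):

```apache
<VirtualHost *:80>
    ServerName chat.domain.co.uk
    ProxyPreserveHost On
    ProxyPass        "/" "http://192.168.1.20:3000/"
    ProxyPassReverse "/" "http://192.168.1.20:3000/"
</VirtualHost>

<VirtualHost *:80>
    ServerName app.domain.co.uk
    ProxyPreserveHost On
    ProxyPass        "/" "http://192.168.1.10:9000/"
    ProxyPassReverse "/" "http://192.168.1.10:9000/"
</VirtualHost>
```

Apache picks the block whose ServerName matches the request's Host header, so each subdomain can forward to a different internal server and port.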

Why does HTTPS break my hosts file redirection?

There is a machine (let's call it Machine) with a hostname in my local network. If I go to abc.def.com, my DNS service resolves Machine's external IP and connects me successfully with https://. I've added a hosts file entry so that local.abc.def.com resolves to Machine's local, internal IP.
However, using https://local.abc.def.com breaks everything. I get ERR_CONNECTION_REFUSED in Chrome and This page can't be displayed in Internet Explorer. If I replace https:// with http://, it works again. What's going on?
I assume that for your abc.def.com machine you have HTTPS configured on port 443 on the external side.
Based on the description above, the application/web server on the machine itself is either not listening on port 443, or a firewall is rejecting your connection. The hosts file entry makes you connect straight to the machine's internal IP, bypassing whatever device terminates TLS on the external path, which is why plain http:// still works.

Same session and session ID for different subdomains in Grails project - How can I do that?

I am currently working on a project that supports multiple languages. In order to be SEO friendly, I am trying to redirect users to subdomains corresponding to their locale (or their preferred language).
I.e., my project's URL is mydomain.com and I work with the subdomains en.mydomain.com, es.mydomain.com, de.mydomain.com, fr.mydomain.com ... you get the idea. All subdomains are served by the same Grails app for now.
What happens is that my Grails project maintains a different session (as seen by the session IDs) for every single subdomain, hence information is lost when a user switches between languages. I had not foreseen that. :(
How can I explicitly set the session identifier? I would like it to be based on just mydomain.com.
I got the hint that Apache Tomcat offers something like
<Context sessionCookiePath="/" sessionCookieDomain=".mydomain.com">
but that does not help for the devel environment etc.
Any hints? Have you tried storing session information in the DB? This is sometimes used for load-balancing purposes and might help here as well?!
Help is highly appreciated (as always)! Cheers!
One way of solving it is using nginx as a reverse proxy in front of your Tomcat and translating requests from fr.mydomain.com to localhost/yourapplication/fr/ or something.
It will take care of your cookies. I have appended an example configuration (slightly shortened) which I have used once:
server {
    server_name fr.yourdomain.com;
    location /office {
        proxy_pass http://localhost:8080/yourapplication/fr;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
server {
    server_name es.yourdomain.com;
    location /office {
        proxy_pass http://localhost:8080/yourapplication/es;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
[..]
I don't think you can share sessions between different context roots, let alone sub-domains.
For load balancing, most configs use sticky sessions, where requests belonging to the same session are directed to the same app server. There are also configs that replicate sessions across the cluster under the load balancer, to allow switching servers on a subsequent request.
You have several options here:
1. Ask a question specific to Apache URL rewrite rules, namely whether they can preserve the HTTP session across a URL rewrite.
2. Forgo the subdomain approach and use the browser locale to decide which message bundles to use (I like this approach).
3. Explore putting session info in a cookie that is readable across domains. Cross-domain cookies are not allowed, but subdomains should be OK.
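The sticky-session setup mentioned above can be sketched in nginx terms (an illustration only; the backend addresses are placeholders, not part of this project):

```nginx
upstream grails_cluster {
    ip_hash;                 # requests from the same client IP go to the same backend
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://grails_cluster;
    }
}
```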
