Is running multiple web: processes possible? - heroku

Our PHP application runs on Apache; however, php-pm can be used to make the application persistent (no warm-up, the database connection remains open, caches stay populated, and the application stays initialized).
I'd like to keep this behaviour when deploying to Heroku, that is, have Apache serve static content and php-pm serve the API layer. Locally this is handled with an .htaccess rewrite proxy rule that sends all traffic under /middleware to php-pm listening on e.g. port 8082.
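For reference, such a rule is roughly the following (the /middleware path and port 8082 are the ones from the setup above; mod_proxy must be enabled for the [P] flag to work):

    RewriteEngine On
    # proxy everything under /middleware to php-pm listening on port 8082
    RewriteRule ^middleware/(.*)$ http://127.0.0.1:8082/$1 [P,L]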
On Heroku this would mean running two processes in the web dyno. Is that possible?
If not, are there other alternatives for handling web traffic through different processes, or for making a persistent process listen to web traffic?
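(For illustration only: the naive form of "two processes in one web dyno" would be a Procfile that declares a single web process backed by a wrapper script; the heroku-php-apache2 launcher, the ppm flags and the web/ document root below are assumptions, not a verified recipe.)

    web: bin/web.sh

where bin/web.sh starts both programs:

    #!/bin/bash
    # start php-pm in the background on the internal port used by the rewrite rule
    vendor/bin/ppm start --port=8082 &
    # start Apache bound to $PORT in the foreground (Heroku PHP buildpack launcher)
    exec heroku-php-apache2 web/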

Related

Docker on Windows Server and multiple websites listening on ports 80 and 443

When installing ASP.NET Core apps on a Windows machine, I used to set up the websites within IIS, use its bindings to route requests to the correct web application depending on the URL, and use Let's Encrypt to create the SSL certificates.
Now I want to start shipping my applications using Docker. The samples show how to easily create a dockerized ASP.NET Core project, but that's where most of them end. So in the end I've got an ASP.NET application running in my Docker container, listening on port 5000.
Are there any suggestions or resources showing how to set this up on a production system?
Multiple websites listening on the standard ports 80 and 443 and forwarding to the correct Docker container
SSL certificate handling
Set up nginx as a front end. It is a world-class solution, used by top-traffic sites as a front end for incoming requests.
Among other features, it handles:
Redirecting based on a wide range of rules
SSL management (you can use unencrypted connections behind it)
Load balancing
It is free and available as a Docker image.
So you expose only nginx outside your Docker network, and have it route all your traffic inside.
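A minimal sketch of what such an nginx front end might look like, assuming two hypothetical sites and two containers reachable as app1/app2 on port 5000 on the same Docker network (all names and ports are placeholders):

    server {
        listen 80;
        server_name site1.example.com;

        location / {
            # forward to the first ASP.NET Core container on the internal Docker network
            proxy_pass http://app1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 80;
        server_name site2.example.com;

        location / {
            proxy_pass http://app2:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

The HTTPS side is the same idea with listen 443 ssl plus ssl_certificate/ssl_certificate_key directives pointing at the Let's Encrypt certificates.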
Set up a reverse proxy like nginx; even in IIS you can redirect to the corresponding Docker service on its particular port. You can fan out traffic to the respective ports.
Image: https://blogs.msdn.microsoft.com/friis/2016/08/25/setup-iis-with-url-rewrite-as-a-reverse-proxy-for-real-world-apps/

How do I get Google Cloud load balancer to set a cookie via a socket.io server it's proxying?

I'm running a Google Cloud HTTPS load balancer in front of 2 Compute Engine VM instances that are each running a socket.io server on port 80. They work fine and are reachable from my HTML/JS socket.io client that I'm running locally.
I have set my Google Cloud load balancer to use Session Affinity with "Generated Cookie". According to the docs, this should set a cookie named GCLB on the client. The cookie never gets set in my client. Why?
I think my issue might be that I'm not really serving the client through the load balancer (or on Google at all) but serving it locally; once this is a real client it will sit on a CDN somewhere. Locally I am using a hostname on the same domain as the load balancer. That is, my socket.io HTML client is at http://local.mydomain.com:8000/ and it connects to my socket.io server/load balancer, which lives at https://io.mydomain.com/ (note: SSL on the server side only).
Any ideas as to how I can get the cookie set, or do I need to handle this differently?

Sharing sessions between different servers behind an nginx reverse proxy

I'm wondering if we can share session data between two servers (running different code) behind an nginx reverse proxy.
To be precise, we have a legacy app in PHP running on an Apache server. We are updating some functionality and hosting only that functionality on a separate server (nginx). Both apps update the same DB.
nginx uses load-balancing/reverse-proxy URL-rewriting techniques to decide which server to send the client to, based on the URL path they use.
So, a person can add items to his virtual basket (held in session) on the new server application.
He then decides to edit his personal information, which is on the other (legacy) server.
nginx uses its reverse-proxy/load-balancing magic to decide which server to send the person to, based on where each app is available.
The question is: how can a session created on one app server be available to the other app server as well? Is it possible to set up the reverse proxy to store all session data, and if so, how? Please point me in the right direction; links I can follow up on Google are welcome as well.
Your question has several possible answers. It all depends on how the application is designed.
A possible scenario would be to keep session information in a database shared among the different web heads. In this way the client, once authenticated, will retrieve its "session state" regardless of which server it hits in the backend cluster.
Again, this depends very much on the way the application is/has been designed.
I think there is very little magic you can do on an old legacy application just by configuring the reverse proxy engine.
In the end, sessions are handled by the application server and not the proxy frontend.
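As a concrete illustration of the shared-store idea above, the legacy PHP side could point its session handler at a store both servers can reach, for example Redis via the phpredis extension (the host name and the choice of Redis are assumptions; a database-backed custom session handler works the same way):

    ; php.ini on the legacy Apache/PHP server
    session.save_handler = redis
    session.save_path = "tcp://shared-redis.internal:6379"

The new application would then read and write sessions in that same store under the same session cookie name; that is what actually makes the session portable, while the nginx proxy only has to keep forwarding the cookie.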

Maintenance mode in AWS EC2

On Heroku we could use the command heroku maintenance:on or heroku maintenance:off... How can I do the same for my AWS EC2 web services? I tried just stopping the server using sudo service nginx stop, but I don't like the error page that was displayed; it just says the URL can't be reached. On Heroku, if I use the maintenance command, the error page displays "Under construction", "Maintenance", or something like that.
How can I do this in Amazon Web Services? Thanks.
You have to do it yourself. AWS does not provide that.
Yeah, so Heroku has routing magic. Think of the default WEBrick server your Rails app runs on as running outside of Rails. If you use Passenger you wouldn't really notice this, but at a high level, nginx is proxying your app's port 3000 to port 80 for certain requests. I apologize if you don't know what I mean by that.
So (YOUR APP) ---> (NGINX) -----> (Client)
The advantage here is that nginx keeps running, and during maintenance mode it most likely starts sending you to static content instead. If you, too, run your Rails app behind a proxy rather than via Passenger, then your solution is easy: stop your WEBrick, Mongrel, Unicorn, Thin (whatever app server you use) and set up an error page for bad gateway errors, a.k.a. the 502 route.
If you use Passenger, you could write a location block that overrides all your routes to a maintenance page and switch Passenger off in that server block; now, rather than serving your app, you can serve a static maintenance page.
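A rough nginx sketch of that idea (file paths and the upstream port are placeholders):

    server {
        listen 80;
        server_name example.com;

        # when the app server behind proxy_pass is down, serve the static page instead
        error_page 502 /maintenance.html;

        location = /maintenance.html {
            root /var/www/maintenance;
        }

        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }

If the app is run by Passenger inside nginx instead, the equivalent move is to set passenger_enabled off; in that server block and point it at the static maintenance content.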
Heroku wraps a lot of technology and exposes nice tools, so you'll need to do a bit more groundwork to automate your stack, but this is a place to start.

Deploying Compojure/Sinatra Applications

What is the preferred way of deploying Compojure/Sinatra applications? I have multiple sites running on the host, so I can't run Jetty/Mongrel on port 80. How should I handle multiple sites/applications running on the same host?
Use a web server such as Apache that runs on port 80 and use virtual hosts to direct the traffic to the right app server. Basically, you run each application server (Jetty, Mongrel, etc.) on a different port, and each virtual host has its own configuration that uses something like mod_proxy to forward traffic to its app server. You could also use a different web server such as lighttpd or nginx. For the Sinatra app you could additionally look at Phusion Passenger, a.k.a. mod_rails, a.k.a. mod_rack, which theoretically works with any Rack app, although I've only used it with Rails.
If you look into it some more you'll find that there are various schemes for forwarding traffic from a web server to the app server, but the basic mechanism always boils down to a web server listening on port 80 that uses name-based virtual hosts to forward the traffic to the appropriate app.
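A minimal sketch of one such Apache virtual host, assuming the Compojure app's Jetty instance runs on port 8080 (host name and port are placeholders; mod_proxy and mod_proxy_http must be enabled):

    <VirtualHost *:80>
        ServerName compojure-app.example.com

        # forward all traffic for this host name to the Jetty instance
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>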
I've been doing this kind of thing with various standalone servers (e.g., AllegroServe) for years. I've found the best approach to be:
Run each server on a different, non-privileged port (such as 8080)
Run Pound (or nginx, etc.) on port 80, configured to map requests to each application.
Pound is great, and the configurations end up very simple (unlike Nginx). It will also do SSL fronting and HTTP sanitization for you, which takes the burden off your application.
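For comparison, a Pound configuration along those lines might look roughly like this (host names and ports are placeholders, and the exact directives should be checked against the Pound documentation):

    ListenHTTP
        Address 0.0.0.0
        Port    80

        # route by Host header to the matching backend
        Service
            HeadRequire "Host: app1.example.com"
            BackEnd
                Address 127.0.0.1
                Port    8080
            End
        End

        Service
            HeadRequire "Host: app2.example.com"
            BackEnd
                Address 127.0.0.1
                Port    8081
            End
        End
    End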
Use Passenger! http://modrails.com - it is a plugin for Apache and nginx that lets you (very) easily run a Ruby app as a virtual host.
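For a Sinatra app under Passenger, the virtual host reduces to pointing the DocumentRoot at the app's public directory; Passenger finds the Rack app via the config.ru one level above it (paths and host name are placeholders, and the access-control line assumes Apache 2.4):

    <VirtualHost *:80>
        ServerName sinatra-app.example.com
        # Passenger serves the Rack app whose config.ru sits one level above public/
        DocumentRoot /var/www/sinatra-app/public
        <Directory /var/www/sinatra-app/public>
            Options -MultiViews
            Require all granted
        </Directory>
    </VirtualHost>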

Resources