How to find out the worker/socket number from within app code running in thin web server - ruby

I've recently started using a centralized log server (graylog2) and have been happily adding info to be logged.
I'm also running several Ruby web applications (Rails and Sinatra) under the thin web server, each with a number of workers (1-4). The workers listen to UNIX sockets.
I'd like to log which of the thin workers (i.e. worker #1, worker #2) is serving the request. The idea is to check that all workers get roughly the same load from the load balancer.
There seems to be no HTTP header or ENV variable set up for this in thin.
Anyone know if this information can be made available to the web app running inside the worker?
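One workaround, since thin exposes no header or ENV entry for this: each thin worker is a separate OS process, so a tiny Rack middleware can tag every request with the worker's PID, which you can then map to the numbered sockets (e.g. /tmp/thin.0.sock, /tmp/thin.1.sock) from your process list. A minimal sketch; the middleware name and response header are made up for illustration:

    # worker_tag.rb -- hypothetical middleware, named WorkerTag here
    class WorkerTag
      def initialize(app)
        @app = app
      end

      def call(env)
        status, headers, body = @app.call(env)
        # Each thin worker is its own process, so the PID identifies it;
        # correlate PIDs with thin's numbered sockets via `ps` at boot time.
        headers["X-Worker-Pid"] = Process.pid.to_s
        [status, headers, body]
      end
    end

Enable it with "use WorkerTag" in config.ru, or skip the header entirely and log Process.pid straight into your graylog2 fields.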

Related

How to deploy a flask socket io application on IIS server?

My use case:

- I am trying to build an API that takes images as input, performs some image-processing operations, and returns the output JSON to the client.
- Multiple clients can request the server concurrently, and the server takes 2 to 3 minutes to process each request.
- Initially I thought of a normal Flask application, where the client would poll the server for the response periodically.
- But as Flask-SocketIO can respond to the client event-based, I want to use Flask-SocketIO.
- As the other APIs in my project are hosted on IIS, I wanted to use the same IIS as the hosting server.

My questions:

- Can I use Flask-SocketIO for my use case, where the API takes 2 to 3 minutes to respond?
- If not IIS, how do I deploy a Flask-SocketIO application on a Windows machine? I have gone through the documentation but did not find any deployment strategy for hosting it on Windows.
- What is the best way to achieve concurrency in this case?

Thanks in advance,
Prasad.

What is rack in ruby? What is puma in ruby?

According to the definitions, Puma is a kind of web server, and Rack is an interface between the web server and the application server.
But lots of videos say that Rack is an interface between the web framework and the web server. Can I interpret that as: we use a web framework to build our application, and Rack is the interface between that web framework and the web server?
Another question: if Puma is a kind of web server, can I replace it with Apache or Nginx?
Puma is an application server, more specifically a Rack app server. (There are others besides Puma: Unicorn, Passenger, etc. There are also application servers for other interfaces; for example, Tomcat and JBoss are Java application servers.) An application server accepts an HTTP request, parses it into a structure in the application's language, hands it off to the application, and awaits a response object, which it then returns to the client.
Nginx/Apache are general-purpose web servers. Apache does not know how to serve Rack applications, and Puma doesn't know how to do a bunch of other things Nginx/Apache can do (e.g. CGI scripts, URL rewriting, proxying, load balancing, blacklisting...).
Rack is a Ruby library that accepts a parsed HTTP request from an app server, funnels it through a configurable stack of middleware (for e.g. session handling), passes the request object to a handler, and returns the response object to the app server. This makes web development in Ruby easy. You can execute a Rack app directly (or rather, with the very simple server that is installed with Rack), but this is not recommended outside development, which is where "proper" application servers come in: they know how to keep your app alive, restart it if it dies, guarantee that the predetermined number of threads is running, things like that.
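To make that concrete, here is roughly the smallest possible Rack app (a sketch; save it as config.ru and run it with rackup, the simple development server that ships with Rack):

    # config.ru -- a minimal Rack application.
    # A Rack app is any object that responds to call(env) and returns a
    # three-element array: [status, headers, body-that-responds-to-each].
    run lambda { |env|
      [200, { "content-type" => "text/plain" }, ["Hello from Rack\n"]]
    }

Puma, Unicorn, Passenger, etc. all speak this same call(env) contract, which is why the same app can move between them unchanged.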
Thus, typically, you have your web server accept connections, then use a simple reverse proxy to pass the appropriate requests to your Rack application, which runs inside the Rack app server. This gives you the benefits of all the pieces involved.

How do I find out what port another process is running on besides the web process on Heroku?

I have a webhook URL and a normal web server (running HapiJS).
I'd like to proxy certain requests in HapiJS to the webhook server that's running on a private port, but I need to know what $PORT is on the other, non-web process.
Is there a way to find this port number?
There is no way to find that port number.
Heroku dynos run on different runtimes, so even if you did know the port, you would also need to figure out the IP address of that server, which would change with every deployment and once every 24 hours.
This would also not be very scalable, as the strength of Heroku is to let you boot more dynos easily. If you rely on knowing where the other dyno is, you lose that easy scaling.
You don't necessarily need this to communicate between processes, though. Using a Redis queue, you could enqueue asynchronous jobs to be processed by your worker process. The two processes would communicate without needing to know where the other one is.
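That pattern is language-agnostic; here is a sketch in Ruby with the redis gem (the mechanics are the same from Node): the web process pushes jobs onto a list and the worker blocks waiting for them, so neither process ever needs the other's address, only the shared Redis URL.

    require "json"
    require "redis"  # gem install redis; REDIS_URL is set by Heroku Redis add-ons

    redis = Redis.new(url: ENV.fetch("REDIS_URL"))

    # Web process: enqueue work instead of calling the other dyno directly.
    redis.lpush("jobs", JSON.dump({ "id" => 42, "action" => "notify" }))

    # Worker process: block until a job arrives, then handle it.
    _queue, payload = redis.brpop("jobs")
    job = JSON.parse(payload)
    puts "processing job #{job["id"]}"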

GKE + WebSocket + NodePort 30s dropped connections

I have a golang service that implements a WebSocket client using gorilla that is exposed to a Google Container Engine (GKE)/k8s cluster via a NodePort (30002 in this case).
I've got a manually created load balancer (i.e. NOT a k8s ingress/load balancer) with HTTP/HTTPS frontends (i.e. 80/443) that forwards traffic to the nodes in my GKE/k8s cluster on port 30002.
I can get my JavaScript WebSocket implementation in the browser (Chrome 58.0.3029.110 on OSX) to connect, upgrade and send / receive messages.
I log ping/pongs in the golang WebSocket client and all looks good until 30s in. 30s after connection, my golang WebSocket client gets an EOF / close 1006 (abnormal closure) and my JavaScript code gets a close event. As far as I can tell, neither my Golang nor my JavaScript code is initiating the WebSocket closure.
I don't particularly care about session affinity in this case AFAIK, but I have tried both IP and cookie based affinity in the load balancer with long lived cookies.
Additionally, this exact same set of k8s deployment/pod/service specs and golang service code works great on my KOPS based k8s cluster on AWS through AWS' ELBs.
Any ideas where the 30s forced closures might be coming from? Could that be a k8s default cluster setting specific to GKE or something on the GCE load balancer?
Thanks for reading!
-- UPDATE --
There is a backend configuration timeout setting on the load balancer, described as "How long to wait for the backend service to respond before considering it a failed request".
The WebSocket is not unresponsive. It is sending ping/pong and other messages right up until it gets killed, which I can verify via console.logs in the browser and logs in the golang service.
That said, if I bump the load balancer's backend timeout setting to 30000 seconds, things "work".
That doesn't feel like a real fix, though, because the load balancer would then keep feeding traffic to genuinely unresponsive services, never mind the case where the WebSocket itself does become unresponsive.
I've isolated the high timeout setting to a specific backend using a path map, but I'm hoping to come up with a real fix to the problem.
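For reference, the setting involved is the backend service's timeout; raising it with the gcloud CLI looks roughly like this (the backend service name here is hypothetical):

    # Raise the backend service timeout (in seconds) for the WebSocket backend only.
    gcloud compute backend-services update my-websocket-backend \
        --global --timeout=86400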
I think this may be Working as Intended. Google just updated the documentation today (about an hour ago).
LB Proxy Support docs
Backend Service Components docs
Cheers,
Matt
Check out the following example: https://github.com/kubernetes/ingress-gce/tree/master/examples/websocket

Amazon Load Balancers dropping Web Socket connections to TorqueBox

I'm running TorqueBox on Amazon AWS. I've created a load balancer that does TCP pass-through for WebSocket connections on port 8675. When I first load the page, this seems to work quite nicely; however, if I leave the page open for a while, the connection just stops working. I don't get an error message; it just silently ignores any further messages sent over the connection. If I reload the page at this point, everything works fine again.
I've tried connecting to individual nodes in the cluster directly, and the connection does not get dropped in that case, so my suspicion is that it has something to do with the load balancers.
Any ideas what might be causing this?
More information about your specific architecture might be useful, but my first guess is that you should enable session stickiness so that requests from the same host get directed to the same machine on AWS (if a request gets directed to another machine, the protocol would have to be renegotiated).
