How do I find out what port another process is running on besides the web process on Heroku?

I have a webhook URL and a normal web server (running HapiJS).
I'd like to proxy certain requests in HapiJS to the webhook server, which is running on a private port, but I need to know what the $PORT is on the other, non-web process.
Is there a way to find this port number?

There is no way to find that port number.
Heroku dynos run on separate runtimes, so even if you did know the port, you would also need to figure out the IP address of that server, which changes with every deployment and at least once every 24 hours, since dynos are cycled daily.
This would also not be very scalable: the strength of Heroku is that it lets you boot more dynos easily, and if you rely on knowing where another dyno is, you lose that easy scaling.
You don't necessarily need this to communicate between processes, though. Using a Redis queue, you could enqueue asynchronous jobs to be processed by your worker process. The two processes would communicate without needing to know where the other one is.
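As a minimal sketch of that pattern, assuming the ioredis client and a plain Redis list as the queue (the queue name "jobs" and the job shape are invented for illustration):

// shared: connect to the Redis add-on (REDIS_URL is set by e.g. Heroku Redis)
const Redis = require("ioredis");
const redis = new Redis(process.env.REDIS_URL);

// web process: enqueue a job instead of calling the other dyno directly
async function enqueueWebhookJob(payload) {
  await redis.lpush("jobs", JSON.stringify(payload));
}

// worker process: block until a job arrives, then process it
async function workerLoop() {
  while (true) {
    const [, raw] = await redis.brpop("jobs", 0); // 0 = block forever
    const job = JSON.parse(raw);
    console.log("processing", job);
  }
}

Neither process knows the other's address; both only need the Redis URL from the config vars.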

Related

Send request to all dynos

I run 4 dynos with a Node.js app on them under the same subdomain (example: app.website.com).
I need to send a request to all my dynos from another dyno app (on a different domain, admin.website.com), also a Node.js app. All dynos should remove some cached data in memory on command from the admin server.
If I send an HTTP request to app.website.com, only one of the dynos gets it (because of the load balancer).
In Heroku's Common Runtime it is neither possible to connect to individual dynos from the outside nor to send HTTP requests from one dyno to another (see https://devcenter.heroku.com/articles/dynos#common-runtime-networking). In Heroku's Private Spaces runtime, on the other hand, the latter is possible, because all dynos form a VPN (see https://devcenter.heroku.com/articles/dynos#private-spaces-runtime-networking).
Instead of trying to send a cache purge event to all dynos over HTTP, it would probably be a better idea to change the setup. You could use a central component, e.g. a message queue, to which both your admin dynos and your app dynos connect. The admin app could then dispatch cache purge events, which your app consumes. If you don't want to expand your infrastructure unnecessarily and you already have some other central storage (e.g. a database) connected, you could also use that.
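As an illustration, here is a minimal sketch of that idea using Redis pub/sub via ioredis; the channel name "cache:purge" and the cache object are invented for the example. Because every app dyno subscribes, every dyno sees the purge event, which is exactly what the load balancer prevents over HTTP:

// app dyno: subscribe and clear the in-memory cache on a purge event
const Redis = require("ioredis");
const sub = new Redis(process.env.REDIS_URL);
const cache = new Map(); // stand-in for your in-memory cache

sub.subscribe("cache:purge");
sub.on("message", (channel, key) => {
  if (channel === "cache:purge") cache.delete(key);
});

// admin dyno: publish a purge event that reaches all subscribed dynos
const pub = new Redis(process.env.REDIS_URL);
pub.publish("cache:purge", "some-cache-key");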

On Windows platform, how does nginx master process share listen sockets to worker?

I have looked through the code of the Windows version of nginx, but I don't understand how the master process shares listening sockets with the workers.
It is straightforward on Linux: when the master calls fork(), the workers inherit the file descriptors from the master.
But in the Windows version, the CreateProcess() call specifies the bInheritHandles argument as 0, meaning the child does not inherit the handles!
So how do the workers share listening sockets with the master?
I have been reading the code for two days just to find the answer to this question, but I still cannot understand it.
Thanks!
This question looks very similar to another one, How does nginx worker process share the 'listen socket', but it is not the same, because that one is about the Linux platform.
It doesn't. Instead, on Windows each nginx worker process creates its own listening sockets and uses them to accept connections.
Creating listening sockets on the same ports is possible because nginx uses setsockopt(SO_REUSEADDR) on listening sockets, and on Windows this allows completely duplicate listening sockets.
Only one of these duplicate listening sockets actually works, though, and this is the first limitation of nginx for Windows as listed in the documentation:
Although several workers can be started, only one of them actually does any work.
Note well that nginx for Windows "is considered to be a beta version".

Enabling sockets.io in sails to run on multiple ports

I have to set up a Sails app where I can have socket.io connections on multiple ports, for example authentication on port 3999 and data synchronization on port 4999.
Is there any way to do so?
I asked a similar question yesterday, and it seems that yours is similar to mine; here's what I'm going to implement.
Given that you will have multiple instances working on different ports, they won't be able to talk to each other directly, and that breaks WebSocket functionality.
It seems that there are multiple solutions to this (sticky sessions vs. using the pub/sub functionality of Redis); I chose Redis. There's a module for that called socket.io-redis, and you also need the socket.io-emitter module.
If you choose that route, then no matter how many servers (multiple servers with multiple instances) or how many instances on a single server you run your app on, it will function without a problem, thanks to Redis.
At least that's what I know for now; I've been searching for a few days and haven't tried it yet.
Not to mention, you can use Nginx for load balancing, as shown below (copied from the socket.io docs):
upstream io_nodes {
    ip_hash;
    server 127.0.0.1:6001;
    server 127.0.0.1:6002;
    server 127.0.0.1:6003;
    server 127.0.0.1:6004;
}
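For reference, wiring up the Redis adapter is only a few lines. This is a sketch assuming the socket.io 2.x-era socket.io-redis and socket.io-emitter packages (newer socket.io versions renamed these), with the ports and event name taken from the question or invented:

// each instance (e.g. the one on port 3999): attach the Redis adapter
// so events are relayed between instances through Redis
const io = require("socket.io")(3999);
const redisAdapter = require("socket.io-redis");
io.adapter(redisAdapter({ host: "127.0.0.1", port: 6379 }));

// any other process (e.g. the instance on port 4999, or a plain worker)
// can emit to all connected clients via the emitter
const emitter = require("socket.io-emitter")({ host: "127.0.0.1", port: 6379 });
emitter.emit("sync", { ok: true });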

How can I detect another instance of the same Win32 application running on another workstation?

I have a small application which is free for personal use but requires a paid license for corporate use.
It is most likely that in a corporate environment my application will run on multiple workstations. If it is the freeware version, I want to show an unobtrusive message (and continue).
It doesn't have to be bulletproof; if it is not possible (e.g. because of a firewall), the application should just continue. And I don't want to make the user set up some kind of central service to track the instances. I don't want to annoy my users (especially not the paying ones *g*).
Is there any way to achieve this kind of functionality?
I remember that an older version of Dreamweaver had this kind of feature: you couldn't run it more than once on the same network.
One way: listen for UDP broadcasts on a specific port, and have each instance send a broadcast UDP packet on that port to the local network. If the application receives such a packet and recognizes its structure, it knows that another instance is running.
You can include license details to avoid showing the message when two validly licensed copies are in use.
Broadcasts usually aren't routed, so this works on the local network only. (And the user can disable it completely via a firewall too... but if you use some standard port like 53 (DNS), it is unlikely to be blocked.)
The other way is to use a custom server which is informed about all running instances around the world ;-)
There are two primary ways to achieve this:
First, you can set up a small server application on each workstation that communicates with other workstations on the network (personally I would use Bonjour for discovery, but there are other options). The drawback here is that you're going to write quite a bit more code to make this work than with option #2.
Second (probably simpler) would be to use WMI to enumerate processes on other workstations (again, probably using a Bonjour-like system for discovery) and find your process running on other machines. The drawback to this is that your enumeration code will require privileges on all machines to conduct the search.
When the application starts, it sends out a UDP broadcast on a specific port. This will be restricted to the local subnet and might not make it through firewalls. This is the "is anyone else running, or can I start?" query.
If there are no responses, the application starts as normal and itself listens for this UDP broadcast. If it sees one, it responds with an "I'm already running; you can't start" packet.
The application that has just started receives this response packet and then refuses to start or (if you don't want to be that strict) displays a warning to the user.
You'd want to include the product ID and license key (or a hash of it) in the initial request, so that you can have more than one license on the same network. The response probably wants the machine name in it, so that the second user can go and find the first user and ask if they really need to use the application.
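The question is about a Win32 application, but the protocol itself is language-agnostic; here is a rough sketch of it in Node.js using the dgram module (the port 41234 and the message format are arbitrary choices for illustration):

const dgram = require("dgram");
const crypto = require("crypto");
const os = require("os");

const PORT = 41234;                               // arbitrary discovery port
const ID = crypto.randomBytes(8).toString("hex"); // lets us ignore our own broadcast
const sock = dgram.createSocket({ type: "udp4", reuseAddr: true });

sock.on("message", (msg, rinfo) => {
  const [kind, payload] = msg.toString().split("|");
  if (kind === "query" && payload !== ID) {
    // another instance is asking whether it may start: answer it
    sock.send(`running|${os.hostname()}`, rinfo.port, rinfo.address);
  } else if (kind === "running") {
    console.log("another instance is already running on", payload);
  }
});

sock.bind(PORT, () => {
  sock.setBroadcast(true);
  sock.send(`query|${ID}`, PORT, "255.255.255.255"); // ask the local subnet
});

A real implementation would also put the license hash in the query, as suggested above, and fall back to starting normally if the socket can't be opened.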
Evil corporation solution:
Have the application call home every time it starts. If more than one copy per license wakes up, tell it not to run. If there is no internet connection, don't start at all.

Detecting dead applications while server is alive in NLB

Windows NLB works great and removes a computer from the cluster when that computer is dead.
But what happens if the application dies while the server still works fine? How have you solved this issue?
Thanks
By not using NLB.
Hardware load balancers often have configurable "probe" functions to determine if a server is responding to requests. This can be done by accessing the real application port/URL, or some specific "healthcheck" URL that returns successfully only if the application is healthy.
Other options look at the queue length or the time taken to respond to requests.
Cisco put it like this:
The Cisco CSM continually monitors server and application availability using a variety of probes, in-band health monitoring, return code checking, and the Dynamic Feedback Protocol (DFP). When a real server or gateway failure occurs, the Cisco CSM redirects traffic to a different location. Servers are added and removed without disrupting service—systems easily are scaled up or down.
(from here: http://www.cisco.com/en/US/products/hw/modules/ps2706/products_data_sheet09186a00800887f3.html#wp1002630)
Presumably with Windows NLB there is some way to programmatically set the weight of nodes? The nodes should self-monitor and, if there is some problem (e.g. a particular node is low on disk space), set their weight to zero so they receive no further traffic.
However, this needs to be carefully engineered, with further human monitoring, to ensure that you don't end up in a situation where one fault causes the entire cluster to announce itself down.
You can't really hope to deal with a "Byzantine generals" situation in network load balancing; an appropriately broken node may think it's fine and appear fine while being completely unable to do any actual work. The trick is to try to minimise the possibility of these situations happening in production.
There are multiple levels of health check for a network application:
1. Is the server machine up?
2. Is the application (service) running?
3. Is the service accepting network connections?
4. Does the service respond appropriately to an "are you OK?" request?
5. Does the service perform real work? (This will also check the back-end systems behind the service you are probing.)
My experience with NLB may be incomplete, but I'll describe what I know. NLB can do 1 and 2. With custom coding you can add the other levels with varying difficulty; with some network architectures this can be very difficult.
Most hardware load balancers from vendors like Cisco or F5 can easily be configured to do 3 or 4. Level 5 testing still requires custom coding; a sketch of what levels 4 and 5 look like follows below.
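As an illustration, a level 4/5 healthcheck endpoint usually looks something like this sketch in Node.js (the /healthz path, the port, and the checkDatabase helper are invented for the example):

const http = require("http");

// hypothetical level-5 check: touch a back-end dependency,
// e.g. run "SELECT 1" against your real database here
async function checkDatabase() {
  return true;
}

http.createServer(async (req, res) => {
  if (req.url === "/healthz") {
    try {
      const ok = await checkDatabase();
      res.writeHead(ok ? 200 : 503);
      res.end(ok ? "OK" : "backend unavailable");
    } catch (e) {
      res.writeHead(503);
      res.end("unhealthy");
    }
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080); // the load balancer probes this port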
We start in the situation where all nodes are part of the cluster but inactive.
We run a custom service monitor which makes a request to the service locally via the external interface. If the response is successful, we start the node (allow it to start handling NLB traffic); if the response fails, we stop the node from receiving traffic.
All the intermediate steps described by Darron are irrelevant; whether it worked or not is the only thing we care about. If the machine is inaccessible, the rest of the NLB cluster will treat it as failed.
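A minimal sketch of such a monitor in Node.js: it probes the local service through the external interface and starts or drains the local NLB node via the wlbs command-line tool. The probe URL and the ten-second interval are arbitrary, and you should verify the wlbs/nlb.exe syntax for your Windows Server version:

const http = require("http");
const { exec } = require("child_process");

const PROBE_URL = "http://192.0.2.10/healthz"; // this node's external interface (example address)

function setNodeActive(active) {
  // assumption: "wlbs start" rejoins the cluster, "wlbs drainstop" drains and leaves it
  exec(active ? "wlbs start" : "wlbs drainstop", (err) => {
    if (err) console.error("wlbs failed:", err.message);
  });
}

setInterval(() => {
  http.get(PROBE_URL, (res) => {
    setNodeActive(res.statusCode === 200);
  }).on("error", () => setNodeActive(false));
}, 10000); // probe every 10 seconds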
