Can Heroku load-balance WebSocket connections? - heroku

I use socket.io to establish a WebSocket connection between clients and a Heroku dyno (ephemeral Linux node).
Heroku load-balances HTTP connections to enable horizontal scaling (addition of dynos).
Will this load-balancing "just work" with WebSockets?

Yes; having read the documentation, I believe it can.
From the documentation on the Heroku router that sits in front of user dynos:
WebSocket functionality is supported for all applications.

Related

Make a request to a specific instance in heroku cloud

Some cloud providers allow making a request to a specific instance. Is this possible on Heroku? Looking at the docs, I can't see any header to achieve this.
If by "make a request" you mean an HTTP request, no, this is not possible. Heroku's routers distribute HTTP traffic between dynos randomly:
Routers use a random selection algorithm for balancing HTTP requests across web dynos. In cases where there are a large number of dynos, the algorithm may optionally bias its selection towards dynos resident in the same AWS availability zone as the router making the selection.
However, if you mean "connect via SSH", you can use heroku ps:exec:
Heroku Exec is a feature for creating secure TCP and SSH tunnels into a dyno. It allows for SSH sessions, port forwarding, remote debugging, and inspection with popular Java diagnostic tools.
By default, Heroku Exec connects to your web.1 dyno, but you can optionally specify a dyno:
heroku ps:exec --dyno=web.2
Establishing credentials... done
Connecting to web.2 on ⬢ your-app...
~ $

Is it possible to set up the NATS protocol on Heroku?

I've seen a lot of documentation and tutorials on how to set up HTTPS and WebSockets on Heroku, but is it possible to set up another protocol, like raw TLS or NATS?
If it's possible, how can I do that?
Unfortunately, no.
Inbound requests are received by a load balancer that offers SSL
termination. From here they are passed directly to a set of routers.
The routers are responsible for determining the location of your
application’s web dynos and forwarding the HTTP request to one of
these dynos.
https://devcenter.heroku.com/articles/http-routing#routing
Not supported
TCP Routing
https://devcenter.heroku.com/articles/http-routing#not-supported
Heroku offers only HTTP/HTTPS routing for applications hosted on it.

ActiveMQ - Stomp over websockets - Same Origin Policy

I have a process that runs in California that wants to talk to a process in New York, using Stomp over Websockets.
Also note that my process is not a web app; I implemented a STOMP-over-WebSocket client in C++ in order to connect things up to my backend. Maybe this was or wasn't a good idea. So, I want my client to talk to the server and subscribe to a destination where the other client pushes messages.
I was implementing my own server when I saw that ActiveMQ supported STOMP over WebSockets, so I started reading the docs.
The last line under 'Configuration' at http://activemq.apache.org/websockets says:
One thing worth noting is that web sockets (just as Ajax) implements the same origin policy, so you can access only brokers running on the same host as the web application running the client.
The same claim appears in several related pages, such as http://sensatic.net/activemq/activemq-54-stomp-over-web-sockets.html
Is this a limitation of the server or the web client?
With that limitation, if I understand right, the server is not going to accept websocket connections from a client, of any kind, that is not on the same machine?
I am not sure I see the point of that...
If that is indeed its meaning, then how do I get around it in order to implement my scenario?
I've not found the bit of documentation you are referring to, but from what I know of the STOMP implementation on the broker, this seems incorrect. There shouldn't be any limit on the transport connector accepting connect requests from an outside host by default, and I don't think the browser treats WebSocket requests the same way it treats Ajax requests with respect to the same-origin policy.
This is probably a case best checked by actually trying it. I've connected just fine from outside the same host using AMQP over WebSockets on ActiveMQ, so I'd guess the STOMP stack should also work fine.
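For reference, the broker side is just a WebSocket transport connector in `activemq.xml`; a minimal sketch (61614 is the port used in the ActiveMQ WebSocket docs, adjust as needed):

```xml
<transportConnectors>
  <!-- Binding to 0.0.0.0 accepts connections from any host,
       not just the machine the broker runs on -->
  <transportConnector name="ws" uri="ws://0.0.0.0:61614"/>
</transportConnectors>
```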

Does Azure Redis work over http?

Does Azure Redis support transport over HTTP? I am aware of the setting that allows me to choose whether to enable SSL, but it seems to me the connection to Azure Redis happens over TCP.
"Every Redis Cluster node requires two TCP connections open. The normal Redis TCP port used to serve clients, for example 6379, plus the port obtained by adding 10000 to the data port, so 16379 in the example."
I have also posted this question on the Microsoft forum. It can be found here.
No, Redis (Azure's included) does not use HTTP, but rather a text-based protocol called RESP, carried over TCP. There are third-party servers that expose Redis over HTTP, such as Lark, Webdis and tinywebdis.
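To make "text-based protocol over TCP" concrete, here is a sketch of RESP encoding for a command; these are the bytes a client writes to port 6379 (a real client library handles this, plus reply parsing):

```javascript
// RESP encodes a command as an array of bulk strings:
// *<argc>\r\n followed by $<len>\r\n<arg>\r\n for each argument.
function encodeRESP(args) {
  let out = `*${args.length}\r\n`;
  for (const arg of args) {
    out += `$${Buffer.byteLength(arg)}\r\n${arg}\r\n`;
  }
  return out;
}

// encodeRESP(['SET', 'key', 'value'])
// → '*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n'
```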

Node.js TCP Socket Server on the Cloud [Heroku/AppFog]

Is it possible to run a Node.js TCP-socket-oriented application in the cloud, more specifically on Heroku or AppFog?
It's not going to be a web application, but a server for access with a client program. The basic idea is to use the capabilities of the Cloud - scaling and an easy to use platform.
I know that such application could easily run on IaaS like Amazon AWS, but I would really like to take advantage of the PaaS features of Heroku or AppFog.
I am reasonably sure that doesn't answer the question at hand: "Is is possible to run a Node.js TCP Socket oriented application". All PaaS companies (including Nodejitsu) support HTTP[S]-only reverse proxies for incoming connections.
Generally with node.js + any PaaS with a socket oriented connection you want to use WebSockets, but:
Heroku does not support WebSockets and will only hold your connection open for 55 seconds (see: https://devcenter.heroku.com/articles/http-routing#timeouts)
AppFog does not support WebSockets, but I'm not sure how they handle long-held HTTP connections.
Nodejitsu supports WebSockets and will hold your connections open until closed or reset. Our node.js powered reverse-proxies make this very cheap for us.
We have plans to support front-facing TCP load-balancing with custom ports in the future. Stay tuned!
AppFog and Heroku give your app a single arbitrary port to listen on, which is mapped from port 80; you don't get to pick your port. If you need to keep a connection open for extended periods of time, see my edit below. If your client does not need to maintain an open connection, you should consider creating a RESTful API that emits JSON for your client app to consume. Port 80 is more than adequate for this, and Node.js and Express make a superb combo for creating APIs on PaaS.
AppFog
https://docs.appfog.com/languages/node#node-walkthrough
var port = process.env.VCAP_APP_PORT || 5000;
Heroku
https://devcenter.heroku.com/articles/nodejs
var port = process.env.PORT || 5000;
EDIT: As mentioned by indexzero, AppFog and Heroku support HTTP[S] only and close long-held connections. AppFog will keep the connection open as long as there is activity. This can be worked around by using Socket.io or a third-party solution like Pusher
// Socket.io server (socket.io 0.9-era API)
var io = require('socket.io').listen(port);
...
io.configure(function () {
  // force long polling so each HTTP request completes quickly
  io.set("transports", ["xhr-polling"]);
  // close each poll after 12 seconds, well under the router's timeout
  io.set("polling duration", 12);
});
tl;dr - with the current state of the world, it's simply not possible; you must purchase a virtual machine with its own public IP address.
All PaaS providers I've found have an HTTP router in front of all of their applications. This allows them to house hundreds of thousands of applications under a single IP address, vastly improving scalability and making it possible to offer application hosting for free. In the HTTP case, the Host header is used to uniquely identify applications.
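The routing idea can be sketched in a few lines; the table and hostnames here are hypothetical, and a real router would consult a dynamic registry rather than a literal object:

```javascript
// Hypothetical routing table: Host header -> list of backend dynos.
const backends = {
  'app-a.example.com': ['10.0.0.1:4001', '10.0.0.2:4002'],
  'app-b.example.com': ['10.0.1.1:4003'],
};

// Pick a backend for an incoming request using its Host header,
// with random selection as Heroku's docs describe for HTTP balancing.
function pickBackend(hostHeader) {
  const dynos = backends[hostHeader.toLowerCase()];
  if (!dynos) return null; // unknown app: the router answers 404 itself
  return dynos[Math.floor(Math.random() * dynos.length)];
}
```

A raw TCP connection carries no such application identifier, which is exactly the problem described next.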
In the TCP case, however, an IP address must be used to identify an application. For this to work, PaaS providers would be forced to allocate you an address from their IPv4 range. This would not scale, for two main reasons: the IPv4 address space has been completely exhausted, and "legacy" (standard, non-SDN) networks make it hard to physically move VMs while keeping their addresses.
The solutions to these two problems are IPv6 and SDN, though I foresee ubiquitous SDN arriving before IPv6 does, which could then be used to solve the various IPv4 problems. Amazon already uses SDN in its datacenters, though there is still a long way to go. In the meantime, just purchase a virtual machine or Linux container instance with a public IP address and run your TCP servers there.