Port vs socket for redis - performance

I am setting up Redis on a server. What is the difference between having it listen on a port versus a Unix socket? I guess a socket may be more secure, but are there any performance benefits as well?

Unix domain sockets can achieve around 50% more throughput than the TCP loopback (as stated in the official Redis documentation), but this also depends on the platform. However, the difference tends to shrink if you make good use of pipelining.
So if the server and the client are on the same machine, you can get some extra speed by using a Unix domain socket instead.
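For concreteness, here is a minimal sketch of both sides, assuming node-redis v4 on the client and a socket path of /run/redis/redis.sock (the path, permissions and client library are assumptions; adjust them to your setup):

// redis.conf (server side) needs:
//   unixsocket /run/redis/redis.sock
//   unixsocketperm 700
//   port 0          (optional: disables TCP listening entirely)

// Client side, using the node-redis v4 API:
const { createClient } = require('redis');

const client = createClient({
  socket: { path: '/run/redis/redis.sock' }   // unix socket instead of host/port
});

async function main() {
  await client.connect();
  await client.set('greeting', 'hello');
  console.log(await client.get('greeting'));  // -> "hello"
  await client.quit();
}

main();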

Related

Websocket alternatives in mobile apps?

Typical system design diagrams for back-end services like Uber involve a proxy and a WebSocket server connection to the client.
I'm curious why only WebSockets (and long polling) are considered for these modern designs. If the requirement is a location-update service where a mobile app constantly pushes location updates to the server, why don't people open a custom TCP or UDP connection between the iOS client and the server, for example?
TCP is really what WebSockets use under the hood, but with a raw TCP connection you have far more mature tooling to leverage (Netty, kernel bypass, FPGAs).
UDP seems even better since it is connectionless and recovers easily from disconnections. If it's a one-way stream of location updates, it seems to serve the purpose just fine.
Thoughts?
The main point of using WebSockets is that they play well with existing firewalls, proxies and other restrictions. It is not uncommon for devices to be used in restricted networks which only allow access to web and mail. It is also nice that WebSockets provide message semantics (TCP is only a byte stream) and that TLS support is nicely integrated too. While "raw" TCP might have less overhead, the actual overhead of WebSockets is fairly small. And often the overhead of the non-binary payloads (i.e. JSON, XML) is much higher, which makes the small additional overhead of WebSockets irrelevant.
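For illustration, a minimal sketch of such a client in Node.js using the ws package (the package choice and the endpoint URL are assumptions), showing the firewall-friendly wss:// transport and the built-in message framing:

const WebSocket = require('ws');

// wss:// on port 443 traverses most proxies and firewalls like ordinary HTTPS.
const socket = new WebSocket('wss://example.com/locations');

socket.on('open', () => {
  setInterval(() => {
    // Each send() is delivered as one discrete message (framing is built in),
    // unlike a raw TCP stream where you must delimit records yourself.
    socket.send(JSON.stringify({ lat: 52.52, lon: 13.405, ts: Date.now() }));
  }, 5000);
});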

Golang grpc: how scalable is it?

How scalable is gRPC on Go?
Can I run a gRPC server for every IoT device connected to my server app, i.e. have 10-20k gRPC servers per process?
Do you mean a new gRPC listening TCP port for each service? Go can't fix the scalability of that; huge numbers of TCP listeners have scalability issues at the level of the operating system.
If you instead mean a TCP listener doing a reverse proxy back to thousands of other devices, then Go is a pretty good fit for that. What Go does well is cheap "threads", because they don't have to allocate full thread stacks. In Go, spawning a goroutine costs about 4 KB, rather than the 1 MB minimum penalty of a real thread; at those numbers, 20k goroutines need on the order of 80 MB of stack space, versus roughly 20 GB for 20k full thread stacks.
gRPC is designed to transport over HTTP/2, reusing the socket efficiently and packing data efficiently.
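To illustrate that last point, here is a Node.js sketch (the endpoint and path are placeholders, and real gRPC adds its own framing and content type on top) of the HTTP/2 property gRPC relies on: many concurrent streams multiplexed over a single TCP connection, so you do not need a listener or socket per call:

const http2 = require('http2');

// One TCP connection to the (hypothetical) endpoint...
const session = http2.connect('https://grpc.example.com');

// ...carrying many independent streams, the HTTP/2 analogue of separate calls.
for (let i = 0; i < 1000; i++) {
  const stream = session.request({ ':method': 'POST', ':path': '/device.Telemetry/Report' });
  stream.end(JSON.stringify({ device: i }));
  stream.on('response', () => stream.resume());  // drain and discard the reply
}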

Scalability of websockify?

I am new to websockify. So here is my situation.
Our company has servers written in C# that handle about 1000 to 2000 concurrent raw TCP socket connections from Flash and mobile clients playing an online game. We are considering porting the Flash client to HTML5 and using Websockify, keeping our native TCP-based protocol on the client side while still speaking native TCP on the server side (so mobile clients keep working).
So I guess the WebSocket client and Websockify talk the WebSocket protocol, and Websockify and our server talk plain TCP.
If that is right, can Websockify handle that number of connections, and how does it perform?
There are implementations of websockify in several different languages. The Python implementation is the default and has the most additional functionality (auth, logging, etc.). However, the basic function of websockify is just to bridge transports (WebSockets to TCP sockets), so it's actually not that difficult to implement. There is a C version that you might look at to get maximum efficiency, although it is quite dated and probably buggy.
That being said, the Python version of websockify is fairly scalable. Each new connection to websockify starts a new child process, so it should scale linearly with the CPU/memory of your host (separate processes means no GIL contention). Also, websockify is horizontally scalable if a single host can't handle the load of all connections. In other words, you could just put a load balancer (one that supports WebSockets) in front of multiple websockify servers.
Also, websockify (the Python version) is easy to configure to support multiple targets per instance of websockify. I've added a wiki page describing how to do that.
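To make the "just a transport bridge" point concrete, here is a minimal Node.js sketch of the same idea using the ws package and a hypothetical game server on 127.0.0.1:9000 (websockify itself is Python/C; none of these names come from it):

const WebSocket = require('ws');
const net = require('net');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  // One plain TCP connection to the backend per browser client.
  const tcp = net.connect({ host: '127.0.0.1', port: 9000 });

  // WebSocket -> TCP: unwrap each message and forward the raw bytes.
  ws.on('message', (data) => tcp.write(data));

  // TCP -> WebSocket: wrap each chunk of the byte stream in a message.
  tcp.on('data', (chunk) => ws.send(chunk));

  // Tear both sides down together.
  ws.on('close', () => tcp.end());
  tcp.on('close', () => ws.close());
  tcp.on('error', () => ws.close());
});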

Shall I use WebSocket on ports other than 80?

Shall I use WebSocket on non-80 ports? Does it ruin the whole purpose of using existing web/HTTP infrastructures? And I think it no longer fits the name WebSocket on non-80 ports.
If I use WebSocket over other ports, why not just use TCP directly? Or are there any special benefits in the WebSocket protocol itself?
And since the current WebSocket handshake is in the form of an HTTP Upgrade request, does it mean I have to enable the HTTP protocol on the port so that the WebSocket handshake can be accomplished?
Shall I use WebSocket on non-80 ports? Does it ruin the whole purpose of using existing web/HTTP infrastructures? And I think it no longer fits the name WebSocket on non-80 ports.
You can run a webSocket server on any port that your host OS allows and that your client will be allowed to connect to.
However, there are a number of advantages to running it on port 80 (or 443).
Networking infrastructure is generally already deployed and open on port 80 for outbound connections from the places that clients live (like desktop computers, mobile devices, etc...) to the places that servers live (like data centers). So, new holes in the firewall or router configurations, etc... are usually not required in order to deploy a webSocket app on port 80. Configuration changes may be required to run on different ports. For example, many large corporate networks are very picky about what ports outbound connections can be made on and are configured only for certain standard and expected behaviors. Picking a non-standard port for a webSocket connection may not be allowed from some corporate networks. This is the BIG reason to use port 80 (maximum interoperability from private networks that have locked down configurations).
Many webSocket apps running from the browser wish to leverage existing security/login/auth infrastructure already being used on port 80 for the host web page. Using that exact same infrastructure to check authentication of a webSocket connection may be simpler if everything is on the same port.
Some server infrastructures for webSockets (such as socket.io in node.js) use a combined server infrastructure (single process, one listener) to support both HTTP requests and webSockets. This is simpler if both are on the same port.
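A minimal Node.js sketch of that combined setup, using the ws package (an assumption; socket.io and others work similarly): one listener, one port, both plain HTTP and webSocket traffic:

const http = require('http');
const WebSocket = require('ws');

const server = http.createServer((req, res) => {
  res.end('ordinary HTTP response');            // normal page/API traffic
});

const wss = new WebSocket.Server({ server });   // webSocket upgrades share the same listener

wss.on('connection', (ws) => {
  ws.send('hello over the upgraded connection');
});

server.listen(80);                              // or 443 behind TLS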
If I use WebSocket over other ports, why not just use TCP directly? Or are there any special benefits in the WebSocket protocol itself?
The webSocket protocol was originally defined to work from a browser to a server. There is no generic TCP access from a browser so if you want a persistent socket without custom browser add-ons, then a webSocket is what is offered. As compared to a plain TCP connection, the webSocket protocol offers the ability to leverage HTTP authentication and cookies, a standard way of doing app-level and end-to-end keep-alive ping/pong (TCP offers hop-level keep-alive, but not end-to-end), a built in framing protocol (you'd have to design your own packet formats in TCP) and a lot of libraries that support these higher level features. Basically, webSocket works at a higher level than TCP (using TCP under the covers) and offers more built-in features that most people find useful. For example, if using TCP, one of the first things you have to do is get or design a protocol (a means of expressing your data). This is already built-in with webSocket.
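For example, the end-to-end keep-alive mentioned above is already part of the protocol; a small sketch with the ws package (the package and URL are assumptions):

const WebSocket = require('ws');
const ws = new WebSocket('wss://example.com/feed');

ws.on('open', () => {
  // Protocol-level ping frames are answered by the peer application itself,
  // not just by the next TCP hop, so a pong means the far end is truly alive.
  setInterval(() => ws.ping(), 30000);
});

ws.on('pong', () => {
  // peer responded; reset any "connection looks dead" timer here
});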
And since the current WebSocket handshake is in the form of an HTTP Upgrade request, does it mean I have to enable the HTTP protocol on the port so that the WebSocket handshake can be accomplished?
You MUST have an HTTP server running on the port that you wish to use webSocket on, because all webSocket requests start with an HTTP request. It doesn't have to be a heavily featured HTTP server, but it does have to handle the initial HTTP request.
Yes - use 443 (i.e., the HTTPS port) instead.
There's little reason these days to use port 80 (HTTP) for anything other than a redirect to port 443 (HTTPS), as certificates (via services like Let's Encrypt) are easy and free to set up.
The only possible exceptions to this rule are local development, and non-internet facing services.
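A minimal sketch of that setup in Node.js, assuming the ws package and placeholder certificate paths:

const https = require('https');
const fs = require('fs');
const WebSocket = require('ws');

const server = https.createServer({
  cert: fs.readFileSync('/etc/letsencrypt/live/example.com/fullchain.pem'),
  key: fs.readFileSync('/etc/letsencrypt/live/example.com/privkey.pem')
});

new WebSocket.Server({ server });   // clients connect with wss://example.com
server.listen(443);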
Should I use a non-standard port?
I suspect this is the intent of your question. To this, I'd argue that doing so adds an unnecessary layer of complication with no obvious benefits. It doesn't add security, and it doesn't make anything easier.
But it does mean that specific firewall exceptions need to be made to host and connect to your websocket server. This means that people accessing your services from a corporate/school/locked down environment are probably not going to be able to use it, unless they can somehow convince management that it is mandatory. I doubt there are many good reasons to exclude your userbase in this way.
But there's nothing stopping you from doing it either...
In my opinion, yes, you can. 80 is the default port, but you can change it to any port you like.

Node.js TCP Socket Server on the Cloud [Heroku/AppFog]

Is it possible to run a Node.js TCP socket oriented application in the cloud, more specifically on Heroku or AppFog?
It's not going to be a web application, but a server for access with a client program. The basic idea is to use the capabilities of the Cloud - scaling and an easy to use platform.
I know that such application could easily run on IaaS like Amazon AWS, but I would really like to take advantage of the PaaS features of Heroku or AppFog.
I am reasonably sure that doesn't answer the question at hand: "Is it possible to run a Node.js TCP socket oriented application?" All PaaS companies (including Nodejitsu) support HTTP[S]-only reverse proxies for incoming connections.
Generally with node.js + any PaaS with a socket oriented connection you want to use WebSockets, but:
Heroku does not support WebSockets and will only hold your connection open for 55 seconds (see: https://devcenter.heroku.com/articles/http-routing#timeouts)
AppFog does not support WebSockets, but I'm not sure how they handle long-held HTTP connections.
Nodejitsu supports WebSockets and will hold your connections open until closed or reset. Our node.js powered reverse-proxies make this very cheap for us.
We have plans to support front-facing TCP load-balancing with custom ports in the future. Stay tuned!
AppFog and Heroku give your app a single arbitrary port to listen on, which is mapped from port 80. You don't get to pick your port. If you need to keep a connection open for extended periods of time, see my edit below. If your client does not need to maintain an open connection, you should consider creating a RESTful API that emits JSON for your client app to consume. Port 80 is more than adequate for this, and Node.js and Express make a superb combo for creating APIs on PaaS.
AppFog
https://docs.appfog.com/languages/node#node-walkthrough
var port = process.env.VCAP_APP_PORT || 5000;
Heroku
https://devcenter.heroku.com/articles/nodejs
var port = process.env.PORT || 5000;
EDIT: As mentioned by indexzero, AppFog and Heroku support HTTP[S] only and close long-held connections. AppFog will keep the connection open as long as there is activity. This can be worked around by using Socket.io or a third-party solution like Pusher.
// Socket.io server
var io = require('socket.io').listen(port);
...
// Fall back to long polling so the connection looks like ordinary short-lived
// HTTP to the platform's router and is re-established before its idle timeout.
io.configure(function () {
  io.set("transports", ["xhr-polling"]);
  io.set("polling duration", 12);
});
tl;dr - with the current state of the world, it's simply not possible; you must purchase a virtual machine with its own public IP address.
All PaaS providers I've found have an HTTP router in front of all of their applications. This allows them to house hundreds of thousands of applications under a single IP address, vastly improving scalability and hence how they can offer application hosting for free. So in the HTTP case, the Host header is used to uniquely identify applications.
In the TCP case, however, an IP address must be used to identify an application. Therefore, in order for this to work, PaaS providers would be forced to allocate you one from their IPv4 range. This would not scale, for two main reasons: the IPv4 address space has been all but exhausted, and the slow pace of "legacy" networks would make it hard to physically move VMs. ("Legacy" networks refer to standard/non-SDN networks.)
The solutions to these two problems are IPv6 and SDN, though I foresee ubiquitous SDN arriving before IPv6 does – which could then be used to solve the various IPv4 problems. Amazon already uses SDN in its datacenters, though there is still a long way to go. In the meantime, just purchase a virtual machine/Linux container instance with a public IP address and run your TCP servers there.
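To illustrate why the Host header is what makes this sharing possible, here is a minimal Node.js sketch of such a router (the hostnames and backend addresses are made up):

const http = require('http');

// Hypothetical mapping of Host header -> internal backend.
const backends = {
  'app-one.example.com': { host: '10.0.0.11', port: 5000 },
  'app-two.example.com': { host: '10.0.0.12', port: 5000 }
};

http.createServer((req, res) => {
  const target = backends[req.headers.host];
  if (!target) {
    res.statusCode = 404;
    return res.end('unknown application');
  }
  // Forward the request to whichever app the Host header names.
  const upstream = http.request(
    { host: target.host, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(80);

// A raw TCP connection carries no such field, which is why each TCP app
// would need its own public IP address (or at least its own port).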
