Why do we need products like Pusher and Socket.io to establish a websocket connection? - laravel

I've been reading about websockets and SaaS like Pusher and Socket.io recently, while working on my Laravel chat practice application. What I don't understand is, why do we need external software to establish a websocket connection? Can't the server code like Laravel just directly establish the connection with the front-end like Vue.js? Why does it have to go through the middleman like Pusher and Socket.io? Sorry for the noob question.

It doesn't have to.
Those pieces of software just happen to make it trivial to work with the Websocket protocol.
Remember, Laravel is an opinionated framework. This means that it will pick and choose its own libraries to abstract away these kinds of concepts for you so that you don't have to worry so much about what's going on under the hood.
Basically, there are two components that you need in order to be able to work with Websockets:
A Websocket Server
A Websocket Client
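For reference, the client half of the raw protocol is tiny in the browser; a minimal sketch (the URL is a hypothetical placeholder):

const socket = new WebSocket('ws://example.com:6001');

socket.addEventListener('open', () => {
    socket.send('hello'); // push a message up to the server
});

socket.addEventListener('message', (event) => {
    console.log(event.data); // whatever the server pushed down
});

The hard part is the other half: a server that keeps thousands of these connections open, which is what the rest of this answer is about.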
The reason Laravel doesn't communicate directly with the front-end using Websockets is because Laravel itself isn't a Websocket server. At least, not really. And while PHP does have support for working with the Websocket protocol - and even some libraries to leverage it a little more nicely - it just isn't used to handle long-lived processes as often as other languages.
Instead, Laravel uses the Pub/Sub functionality that Redis provides, via the Predis library, to listen for events that occur through Redis. It does this because Laravel is better suited as a middleman between the websocket server and all connected clients.
In this way, Laravel can both pass information up to the Websocket server using Broadcasting Events and receive event information from the Websocket server, determining whether users are authorized to receive it.
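For illustration, here is a minimal client-side sketch using Laravel Echo with the Socket.io connector (the channel name "chat.1" and event name "MessageSent" are hypothetical placeholders):

import Echo from 'laravel-echo';
import io from 'socket.io-client';

window.io = io; // Echo's socket.io connector expects a global io

const echo = new Echo({
    broadcaster: 'socket.io',
    host: window.location.hostname + ':6001' // laravel-echo-server's default port
});

// Subscribing to a private channel triggers an auth request back to Laravel,
// which is where the framework decides whether this user may receive the event.
echo.private('chat.1')
    .listen('MessageSent', (event) => {
        console.log(event.message);
    });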
If you don't want to use Pusher, there is a library that will allow you to run your own Websocket Server specifically for Laravel called Laravel Echo Server.
Under the hood, this library still uses Socket.io and Redis so that all the moving parts can communicate with each other seamlessly in a Laravel web application. The benefit here is that, unlike with a hosted service such as Pusher, you won't need to worry about limits on the number of messages being sent by the server.
The downside is that you now have to know how to manage and maintain this process on your server so that the Websocket Server comes back up every time you restart your server, or whenever a failure happens, etc.
Check out PM2 to learn more about running and maintaining server daemons.
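For example, a minimal PM2 setup for laravel-echo-server might look like the following sketch (the script path and app name are assumptions; point them at your own install):

// ecosystem.config.js
module.exports = {
    apps: [{
        name: 'echo-server',
        // assumption: laravel-echo-server is installed globally here
        script: '/usr/local/bin/laravel-echo-server',
        args: 'start',
        autorestart: true // restart the process if it dies
    }]
};

// then:
// pm2 start ecosystem.config.js
// pm2 save && pm2 startup    (so it comes back after a server reboot)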
If you don't agree with Laravel's opinions on how to handle Websockets, then you could theoretically use any other server-side language to handle the websocket protocol. It will just require a greater working knowledge of the protocol itself; and if Laravel needs to work with it, you'll have to know how to write the appropriate Service and Provider classes to be able to handle it.

Short answer? You don't have to use them. Knock yourself out writing your own server and client side websocket implementation.
Longer answer.
Why use Laravel? I can do all that with straight up PHP.
Why use Vue? I can do all that with vanilla javascript.
We use libraries and frameworks because they abstract away the details of implementation and make it easier to build products. They handle edge cases you don't think of or things you don't even know that you don't know about because they are used by thousands or millions of developers and all the knowledge and bugs they have encountered and fixed are baked into the implementation.
This is one of the hallmarks of software engineering, code reuse. Don't repeat yourself and don't write any software you don't have to. It allows you to focus on building a solution for your particular requirements, and not focus on building a bunch of infrastructure before you can even build your solution.
I've never used Pusher, but it looks like, yes, it is a SaaS product. But Socket.io is open source.

Related

What are the advantages of websocket APIs to middleware?

Some pieces of middleware support websockets natively e.g. HiveMQ: http://www.hivemq.com/mqtt-over-websockets-with-hivemq/. What advantages are conferred to a developer using the websockets API as a first class client to the middleware, rather than routing requests through an intermediary server that supports language specific APIs e.g.
Client -> Middleware
vs
Client -> Server -> Middleware
For example, we could argue that skipping an intermediary server reduces bandwidth costs, spares the developer from writing an extra layer, and offers native SSL websocket support.
What other advantages might be provided to not just a developer, but any party through providing websockets support for middleware?
The main advantages you get are simplicity and, in the case of HiveMQ, scalability.
Let me explain these advantages:
Simplicity
In the case of HiveMQ, you just start the server and you are good to go. All web applications which use an MQTT library over websockets can connect to the server without even knowing that websockets is used as the transport. For HiveMQ itself, it's just another MQTT client, so it doesn't matter whether the clients are connected via websockets or via a classic TCP connection. I think you already mentioned the other arguments in your question. And of course, last but not least, the operations folks will thank you if they have one system (in your case the "Server") fewer to maintain.
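To make the simplicity point concrete, here is a minimal client-side sketch using the MQTT.js library (the broker URL and topic are hypothetical placeholders):

const mqtt = require('mqtt');

// Only the URL scheme reveals that websockets is the transport; the MQTT
// code is identical to what you'd write for a classic TCP connection.
const client = mqtt.connect('ws://broker.example.com:8000/mqtt');

client.on('connect', () => {
    client.subscribe('sensors/temperature');
});

client.on('message', (topic, payload) => {
    console.log(topic + ': ' + payload.toString());
});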
Scalability
Software like HiveMQ is very scalable and can handle hundreds of thousands of concurrently connected clients. Chances are high that the additional layer ("Server" in your case) would introduce a bottleneck. Also, things like load balancing with a hardware or software load balancer get a lot easier if you can throw out unneeded layers. In general, the architecture of your system gets much simpler if you don't need these additional layers (which, unlike microservices, are not services that can be reused for other applications).
Last but not least it's worth noting, that HiveMQ itself is often integrated with classic middleware / ESBs. That means, people write custom plugins for integrating HiveMQ to their existing middleware. JMS or webservice calls (REST, SOAP) are often used for doing that.
Take that answer with a grain of salt, since I'm involved developing HiveMQ :-)

Phalcon php vs node.js

We are going to develop a REST server for our application (all the logic lives in client-side javascript).
So we thought of using Phalcon PHP, but we also need to create a realtime chat system, which is much easier to do using node.js. This made us think about using node.js instead of Phalcon.
Unfortunately, we are not very experienced in node.js, and we love Phalcon for its performance and internal beauty.
The question is, has anybody compared Phalcon and node.js performance? Maybe it's better to use node.js only for the long-polling chat requests, but I don't like it when a project is glued together from such different tools.
You are trying to compare two different things IMO.
node.js has a lot of power and flexibility but so does Phalcon. If you want to create a chat application with Phalcon, then you will need to implement some sort of polling mechanism in your browser that would refresh the chat window every X seconds. Getting/Inserting the data from the database will be Phalcon's job. Javascript will be used to do the polling i.e. refresh the chat page every X seconds.
The problem with this approach is that you might be hitting your web server every X seconds from every client that has the chat application open - and thus refreshing the chat contents, even when there are no messages. This can become very intensive very quickly.
node.js has the ability to send messages to the subscribed clients instantly. Web sockets can do the same thing I believe.
Check this video out, which will give you an idea of how this can be achieved easily:
https://www.youtube.com/watch?v=lW1vsKMUaKg
The idea is to use technologies that will not burden your hardware, rather collaborate with it. Having a "subscription" notification system (such as sockets or node.js) reduces the load on your application since only the subscribed clients receive the new messages and no full refresh is needed from the chat clients.
Phalcon is great for the back end with its speed and it can be used to construct the message which in turn will be passed to the transport layer and sent to the client. Depending on how you want to implement this, there are plenty of options around and you can easily mix and match technologies :)
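To make the contrast with polling concrete, here is a minimal sketch of the push model with Socket.IO on node (the event name is a hypothetical placeholder):

const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
    // No client ever polls: each message is pushed to subscribers the
    // moment it arrives, and idle clients cost almost nothing.
    socket.on('chat:message', (msg) => {
        io.emit('chat:message', msg); // broadcast to every connected client
    });
});

httpServer.listen(3000);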
As @Nikolaos Dimopoulos said, you're trying to compare two different things.
But here is my advice: since you're experienced with the Phalcon PHP framework and want to benefit from Phalcon's speed and performance, you can implement the web app in Phalcon and the chat system in Node.JS as a separate service.
If your web application (the Phalcon app) needs to push messages from the backend, you can use the http://elephant.io/ library for that. I have done this before with the Yii framework and Node, and it works perfectly.
My advice is to use what you already know, experimenting with nodejs just for the chat application.
Mainly because you said you don't have experience with it, and because a chat app is something a lot of people have built, you'll find plenty of examples.
By doing so you will learn a lot from node and might even think about migrating from Phalcon if it suits your needs, using the features offered by expressjs for example.
I would not choose one over the other based on performance.

Faye & Ruby or Node.js for scalability

I'm looking to prototype a web app that will use sockets to push a gentle stream of messages to mobile web app clients. I want to pick an architecture that will work for a large number of clients if/when it moves to production (so I don't have to change it later).
I'd like to start with Rails because it's familiar and has a strong structure from the get-go, meaning easier prototyping. I think Faye will provide what I need in terms of a pub/sub layer, but am I going to create a bottleneck by using Ruby with a high number of socket connections, or will Faye isolate/protect the Ruby server from that load, if you follow?
At the outset the load will not be significant, so it won't matter; I just don't want to be hobbled later on when there are a lot of socket connections and wish I had used node.js! Server-side JS would be fairly new to me, but I guess there are benefits in that the JS app can include the client side too.
Advice appreciated.
You can take a look at https://github.com/faye/faye-redis-node.
This plugin provides a Redis-based backend for the Faye messaging server. It allows a single Faye service to be distributed across many front-end web servers by storing state and routing messages through a Redis database server
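Based on the faye-redis-node README, wiring it up is a small configuration change; a sketch (the host and port are placeholders):

const http = require('http');
const faye = require('faye');
const fayeRedis = require('faye-redis');

const server = http.createServer();

const bayeux = new faye.NodeAdapter({
    mount: '/faye',
    timeout: 45,
    engine: {
        type: fayeRedis,    // state and routing live in Redis, so several
        host: 'localhost',  // front-end servers can share one Faye service
        port: 6379
    }
});

bayeux.attach(server);
server.listen(8000);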

nodejs: Ajax vs Socket.IO, pros and cons

I thought about getting rid of all client-side Ajax calls (jQuery) and instead use a permanent socket connection (Socket.IO).
Therefore I would use event listeners/emitters client-side and server-side.
E.g., a click event is triggered by the user in the browser, and a client-side emitter pushes the event through the socket connection to the server. A server-side listener reacts to the incoming event and pushes a "done" event back to the client. The client's listener reacts to the incoming event by fading in a DIV element.
Does that make sense at all?
Pros & cons?
There is a lot of misinformation in this thread.
TL;DR:
WebSocket replaces HTTP for applications! It was standardized by the IETF as RFC 6455, with engineers from Google, Microsoft, and many other leading companies involved. All modern browsers support it. There are no cons.
SocketIO is built on top of the WebSocket protocol (RFC 6455). It was designed to replace AJAX entirely. It does not have scalability issues whatsoever. It works faster than AJAX while consuming an order of magnitude fewer resources.
AJAX is over 10 years old and is built on top of a single JavaScript function, XMLHttpRequest, that was added to allow callbacks to servers without reloading the entire page.
In other words, AJAX is a document protocol (HTTP) with a single JavaScript function.
In contrast, WebSocket is an application protocol that was designed to replace HTTP entirely. When you upgrade an HTTP connection (by requesting the WebSocket protocol), you enable two-way, full-duplex communication with the server, and after that initial upgrade no further protocol handshaking is involved whatsoever. With AJAX, you must either enable keep-alive (which is the same idea as SocketIO, just an older protocol) or force a new HTTP handshake, which bogs down the server, every time you make an AJAX request.
A SocketIO server running on top of Node can handle 100,000 concurrent connections in keep-alive mode using only 4 GB of RAM and a single CPU, and this limit is caused by the V8 garbage-collection engine, not the protocol. You will never, ever achieve this with AJAX, even in your wildest dreams.
Why SocketIO is so much faster and consumes far fewer resources
The main reason for this is, again, that WebSocket was designed for applications, while AJAX is a work-around to enable applications on top of a document protocol.
If you dive into the HTTP protocol, and use MVC frameworks, you'll see a single AJAX request will actually transmit 700-900 bytes of protocol load just to AJAX to a URL (without any of your own payload). In striking contrast, WebSocket uses about 10 bytes, or about 70x less data to talk with the server.
Since SocketIO maintains an open connection, there's no handshake, and server response time is limited to round-trip or ping time to the server itself.
There is misinformation that a socket connection is a port connection; it is not. A socket connection is just an entry in a table. Very few resources are consumed, and a single server can provide 1,000,000+ WebSocket connections. An AWS XXL server can and does host 1,000,000+ SocketIO connections.
An AJAX connection will gzip/deflate the entire HTTP headers, decode the headers, encode the headers, and spin up an HTTP server thread to process the request, again because this is a document protocol; the server was designed to spit out documents a single time.
In contrast, WebSocket simply stores an entry in a table for a connection, approximately 40-80 bytes. That's literally it. No polling occurs, at all.
WebSocket was designed to scale.
As far as SocketIO being messy... this is not the case at all. AJAX is messy: you need a promise/response layer.
With SocketIO, you simply have emitters and receivers; they don't even need to know about each other, and no promise system is needed:
To request a list of users you simply send the server a message...
socket.emit("giveMeTheUsers");
When the server is ready, it will send you back another message. Tada, you're done. So, to process a list of users you simply say what to do when you get a response you're looking for...
socket.on("HereAreTheUsers", showUsers(data) );
That's it. Where is the mess? Well, there is none :) Separation of concerns? Done for you. Locking the client so they know they have to wait? They don't have to wait :) You could get a new list of users whenever... The server could even play back any UI command this way... Clients can connect to each other without even using a server with WebRTC...
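For completeness, the matching server side is just as small; a sketch (getUsers() is a hypothetical stand-in for your real data access, and the event names mirror the ones above):

io.on('connection', (socket) => {
    socket.on('giveMeTheUsers', async () => {
        const users = await getUsers(); // hypothetical data source
        socket.emit('HereAreTheUsers', users);
    });
});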
Chat system in SocketIO? 10 lines of code. Real-time video conferencing? 80 lines of code. Yes... Luke... join me. Use the right protocol for the job... If you're writing an app, use an app protocol.
I think the problem and confusion here is coming from people that are used to using AJAX and thinking they need all the extra promise protocol on the client and a REST API on the back end... Well you don't. :) It's not needed anymore :)
Yes, you read that right... a REST API is not needed anymore when you decide to switch to WebSocket. REST is actually outdated. If you write a desktop app, do you communicate with its dialogs via REST? No :) That's pretty dumb.
SocketIO, utilizing WebSocket, does the same thing for you... you can start to think of the client side as simply the dialog for your app. You no longer need REST, at all.
In fact, if you try to use REST while using WebSocket, it's just as silly as using REST as the communication protocol for a desktop dialog... there is absolutely no point, at all.
What's that you say, Timmy? What about other apps that want to use your app, shouldn't you give them access via REST? Timmy... WebSocket has been out for 4 years... Just have them connect to your app using WebSocket, and let them request the messages using that protocol... it will consume 50x fewer resources, be much faster, and be 10x easier to develop... Why support the past when you're creating the future?
Sure, there are use cases for REST, but they are all for older and outdated systems... Most people just don't know it yet.
UPDATE:
A LOT of people have been asking me recently how they can start writing an app in 2018 (and now soon 2019) using WebSockets; the barrier seems really high, and once they play with Socket.IO they don't know where else to turn or what to learn.
Fortunately the last 3 years have been very kind to WebSockets...
There are now 3 major frameworks that support BOTH REST and WebSocket, and even IoT protocols or other minimal/speedy protocols like ZeroMQ, and you don't have to worry about any of it; you just get support for it out of the box.
Note: Although Meteor is by far the most popular, I am leaving it out because although they are a very, very well-funded WebSocket framework, anyone who has coded with Meteor for a few years will tell you, it's an internal mess and a nightmare to scale. Sort of like WordPress is to PHP, it is there, it is popular, but it is not very well made. It's not well-thought out, and it will soon die. Sorry Meteor folks, but check out these 3 other projects compared to Meteor, and you will throw Meteor away the same day :)
With all of the below frameworks, you write your service once, and you get both REST and WebSocket support. What's more, it's a single line of config code to swap between almost any backend database.
Feathers Easiest to use, works the same on the front and backend, and supports most features, Feathers is a collection of light-weight wrappers for existing tools like express. Using awesome tools like feathers-vuex, you can create immutable services that are fully mockable, support REST, WebSocket and other protocols (using Primus), and get free full CRUD operations, including search and pagination, without a single line of code (just some config). Also works really great with generated data like json-schema-faker so you can not only fully mock things, you can mock it with random yet valid data. You can wire up an app to support type-ahead search, create, delete and edit, with no code (just config). As some of you may know, proper code-through-config is the biggest barrier to self-modifying code. Feathers does it right, and will push you towards the front of the pack in the future of app design.
Moleculer Unfortunately for Feathers, Moleculer is an order of magnitude better at the backend. While Feathers works and lets you scale to infinity, it simply doesn't even begin to think about things like production clustering, live server consoles, fault tolerance, piping logs out of the box, or API gateways (I've built a production API gateway out of Feathers, but Moleculer does it way, way better). Moleculer is also growing faster, in both popularity and new features, than any other WebSocket framework.
The winning strike with Moleculer is you can use a Feathers or ActionHero front-end with a Moleculer backend, and although you lose some generators, you gain a lot of production quality.
Because of this I recommend learning Feathers on the front and backend, and once you make your first app, try switching your backend to Moleculer. Moleculer is harder to get started with, but only because it solves all the scaling problems for you, and this information can confuse newer users.
ActionHero Listed here as a viable alternative, but Feathers and Moleculer are better implementations. If anything about ActionHero doesn't jibe with you, don't use it; there are two better options above that give you more, faster.
NOTE: API Gateways are the future, and all 3 of the above support them, but Moleculer literally gives you it out of the box. An API gateway lets you massage your client interaction, allowing caching, memoization, client-to-client messaging, blacklisting, registration, fault tolerance and all other scaling issues to be handled by a single platform component. Coupling your API Gateway with Kubernetes will let you scale to infinity with the least amount of problems possible. It is the best design method available for scalable apps.
Update for 2021:
The industry has evolved so much that you don't even need to pay attention to the protocol. GraphQL now uses WebSockets by default! Just look up how to use subscriptions, and you're done. The fastest way to handle it will occur for you.
If you use Vue, React or Angular, you're in luck, because there is a native GraphQL implementation for you! Just call your data from the server using a GraphQL subscription, and that data object will stay up to date and reactive on its own.
GraphQL will even fall-back to REST for you when you need to use legacy systems, and subscriptions will still update using sockets. Everything is solved when you move to GraphQL.
Yes, if you thought "WTH?!?" when you heard you can simply subscribe to a server object, like with Firebase, and it will update itself for you: that's now true. Just use a GraphQL subscription. It will use WebSockets.
Chat system? 1 line of code.
Real time video system? 1 line of code.
Video game with 10 MB of open-world data shared across 1M real-time users? 1 line of code. The code is just your GQL query now.
As long as you build or use the right back-end, all this realtime stuff is now done for you with GQL subscriptions. Make the switch as soon as you can and stop worrying about protocols.
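For instance, with Apollo Client in a React app, a live-updating subscription is little more than declaring the query; a sketch (the schema and field names are hypothetical):

import { gql, useSubscription } from '@apollo/client';

const MESSAGE_ADDED = gql`
  subscription OnMessageAdded {
    messageAdded {
      id
      body
    }
  }
`;

function LatestMessage() {
  // Apollo carries the subscription over a websocket link; the component
  // re-renders whenever the server pushes a new message.
  const { data, loading } = useSubscription(MESSAGE_ADDED);
  return loading ? 'Waiting...' : data.messageAdded.body;
}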
Socket.IO uses a persistent connection between client and server, so you will reach a maximum limit of concurrent connections depending on the resources you have on the server side, while more Ajax async requests can be served with the same resources.
Socket.IO is mainly designed for realtime and bi-directional connections between client and server, and some applications have no need for permanent connections. On the other hand, Ajax async connections must go through the HTTP connection-setup phase and send the header data and all cookies with every request.
Socket.IO was designed as a single-process server and may have scalability issues depending on the server resources you are bound to.
Socket.IO is not well suited for applications where you would do better to cache the results of client requests.
Socket.IO applications face difficulties with SEO optimization and search-engine indexing.
Socket.IO is not a standard and is not equivalent to the W3C WebSocket API; it uses the native WebSocket API where the browser supports it. Socket.io was created by a single developer to solve cross-browser compatibility for real-time apps, and it is young, about 1 year old. Its learning curve, its smaller developer community and fewer resources compared with ajax/jquery, long-term maintenance, and less need (or better options) in the future may all be important factors in a team's decision about whether to base their code on socket.io.
Sending one way messages and invoking callbacks to them can get very messy.
$.get('/api', sendData, returnFunction); is cleaner than
socket.emit('sendApi', sendData); socket.on('receiveApi', returnFunction);
Which is why dnode and nowjs were built on top of socket.io to make things manageable. Still event driven but without giving up callbacks.
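Worth noting: Socket.IO also supports acknowledgement callbacks, which restore the call-and-response shape without an extra library; a sketch (the server must invoke the ack for this to work):

// The third argument is an ack the server can call with its response:
socket.emit('sendApi', sendData, returnFunction);

// Or wrapped in a promise:
function call(event, payload) {
    return new Promise((resolve) => socket.emit(event, payload, resolve));
}
call('sendApi', sendData).then(returnFunction);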

Faye vs. Socket.IO (and Juggernaut)

Socket.IO seems to be the most popular and active WebSocket emulation library. Juggernaut uses it to create a complete pub/sub system.
Faye is also popular and active, and has its own javascript library, making its complete functionality comparable to Juggernaut. Juggernaut uses node for its server, and Faye can use either node or rack. Juggernaut uses Redis for persistence (correction: it uses Redis for pub/sub), and Faye only keeps state in memory.
Is everything above accurate?
Faye says it implements Bayeux -- I think Juggernaut does not do this -- is that because Juggernaut is lower level (i.e., I could implement Bayeux using Juggernaut)?
Could Faye switch to using the Socket.IO browser javascript library if it wanted to? Or do their javascript libraries do fundamentally different things?
Are there any other architectural/design/philosophy differences between the projects?
Disclosure: I am the author of Faye.
Regarding Faye, everything you've said is true.
Faye implements most of Bayeux, the only thing missing right now is service channels, which I've yet to be convinced of the usefulness of. In particular Faye is designed to be compatible with the CometD reference implementation of Bayeux, which has a large bearing on the following.
Conceptually, yes: Faye could use Socket.IO. In practise, there are some barriers to this:
I've no idea what kind of server-side support Socket.IO requires, and the requirement that the Faye client (there are server-side clients in Node and Ruby, remember) be able to talk to any Bayeux server (and the Faye server to any Bayeux client) may be a deal-breaker.
Bayeux has specific requirements that servers and clients support certain transport types, and says how to negotiate which one to use. It also specifies how they are used, for example how the Content-Type of an XHR request affects how its content is interpreted.
For some types of error handling I need direct access to the transport, for example resending messages when a client reconnects after a Node WebSocket dies.
Please correct me if I've got any of this wrong - this is based on a cursory scan of the Socket.IO documentation.
Faye is just pub/sub, it's just based on a slightly more complex protocol and has a lot of niceties built in:
Server- and client-side extensions
Wildcard pattern-matching on channel routes
Automatic reconnection, e.g. when WebSockets die or the server goes offline
The client works in all browsers, on phones, and server-side on Node and Ruby
Faye probably looks a lot more complex compared to Juggernaut because Juggernaut delegates more, e.g. it delegates transport negotiation to Socket.IO and message routing to Redis. These are both fine decisions, but my decision to use Bayeux means I have to do more work myself.
As for design philosophy, Faye's overriding goal is that it should work everywhere the Web is available and should be absolutely trivial to get going with. It's really simple to get started with, but its extensibility means it can be customized in quite powerful ways, for example you can turn it into a server-to-client push service (i.e. stop arbitrary clients pushing to it) by adding authentication extensions.
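As an example of that extensibility, here is a sketch of a server-side extension, adapted from the pattern in Faye's documentation (the password check is a hypothetical stand-in for real authentication), that stops arbitrary clients from publishing:

const faye = require('faye');

const bayeux = new faye.NodeAdapter({ mount: '/faye', timeout: 45 });

bayeux.addExtension({
    incoming: function(message, callback) {
        // Let /meta/ traffic (handshake, subscribe) through; gate publishes.
        if (!/^\/meta\//.test(message.channel)) {
            const password = message.ext && message.ext.password;
            if (password !== 'SERVER_SECRET') {
                message.error = 'Forbidden';
            }
        }
        callback(message);
    }
});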
There is also work underway to make it more flexible on the server side. I'm looking at adding clustering support, and making the core pub-sub engine pluggable so you could use Faye as a stateless web frontend for another pub-sub system like Redis or AMQP.
I hope this has been helpful.
AFAIK, yes, apart from the fact Juggernaut only uses Redis for Pubsub, not persistence. Also means client libraries in most languages have already been written (since it just needs a Redis adapter).
Juggernaut doesn't implement Bayeux, but rather has a very simple custom JSON protocol
I don't know, but probably
Juggernaut is very simple, and designed to be that way. Although I haven't used Faye, from the docs it looks like it has a lot more features than just pub/sub. Being built on top of Socket.IO has its advantages too; Juggernaut is supported in practically every browser, both desktop and mobile.
I'll be really interested in what Faye's author has to say. As I say, I haven't used it and it would be great to know how it compares to Juggernaut. It's probably the case of using the best tool for the job. If it's pubsub you need, Juggernaut does that very well.
Faye certainly could.
Another example of a similar project on top of Socket.IO:
https://github.com/aaronblohowiak/Push-It
